Achieving sustainable revenue growth requires tight controls on quality of service across a range of services - and carriers must choose the right platform

Telecommunications service providers are seeking ways to increase Average Revenue Per User (ARPU) through value-added services such as IPTV, Video on Demand (VoD), video collaboration, and interactive applications such as on-line gaming. These services must be delivered in conjunction with VoIP and Internet access, over a single subscriber connection. With the cost of customer acquisition at roughly $300 and annual per-subscriber multi-play service revenues at over $1000, subscriber loyalty is critical. Poor Quality of Service (QoS) drives subscribers to switch to competitive offerings (cable, satellite, etc.), so adequate service quality must be maintained. Service offerings must, of course, also be price-competitive, which makes tight control over costs (CapEx and OpEx) equally important.

Early approach to QoS
The basic challenge of delivering multi-play services arises because the services have different characteristics. Voice has stringent latency and jitter requirements, although bandwidth per channel is low. High-quality video also has low jitter tolerance; however, its bandwidth per channel is much higher. Latency tolerance varies with the video service - VoD tolerates higher latency than video gaming. Data traffic (e.g., email, browsing) is largely agnostic to jitter and latency.

Prior to the era of ubiquitous rich content, social networking and file sharing, it was assumed that Internet traffic would remain a tiny component of overall traffic. Hence it was believed that multi-play QoS could be guaranteed with minor over-provisioning of bandwidth.

Bandwidth is no QoS guarantee
With initial multi-play service deployments, it became apparent that although Internet use generates far less traffic than voice or video, it is bursty. With increasing rich content, the spikes get bigger, and the recent widespread use of bandwidth-hogging applications (YouTube, eDonkey, BitTorrent, etc.) has worsened the situation. Several users accessing such applications simultaneously place a significant load on the network. In networks where traffic is not intelligently managed, excess traffic is dropped when provisioned bandwidth is exceeded. Indiscriminate traffic drops cause information loss for all types of traffic, and potentially for all subscribers, even those not using those applications. From the perspective of the end-user, loss in Internet data traffic is hardly perceptible, because re-transmission mechanisms recover losses before presentation. Conversely, even small traffic losses cause unacceptable quality degradation of voice and video.

Bandwidth over-provisioning at very high levels may guarantee QoS; however, it is infeasible in many access networks and certainly not affordable.

Looking ahead, there is no crystal ball to predict what new services will be deployed, what new applications will be invented, and how usage patterns will evolve. What is certain is that for the foreseeable future, Internet traffic and its temporal unpredictability will increase. According to studies published in the Cisco Visual Networking Index, Internet video (excluding P2P video file sharing) already constitutes a third of all consumer Internet traffic, and is expected to grow at an annual rate of 58% over the next 4 years. New security threats will also emerge.

Networks must be engineered to continually adapt to changing conditions. Wholesale equipment replacement is unaffordable. Software upgradeability is an imperative.

Attributes of NGNs
Networks must be built with platforms that offer adaptability, intelligence, security, control, and scalability.

Adaptable platforms enable the creation of "learning" network elements, in which better traffic management, different protocols and new services can be supported without hardware upgrades; improvements are instead delivered as software upgrades. For example, new traffic management algorithms can be deployed in software to adapt to new traffic patterns, and new protocols can be handled the same way. An imminent example is the migration of access networks to IPv6, driven by the exhaustion of IPv4 addresses in existing access nodes.

Intelligence encompasses service and subscriber isolation, traffic management (buffer management, policing, traffic shaping and scheduling), and the ability to dynamically configure algorithms for different network conditions. Service and subscriber isolation involves identifying traffic based on service type, origin, subscriber, etc., and separating it into distinct queues. Traffic management algorithms make discard decisions and regulate traffic flow in the various queues to meet service-specific requirements modulated by subscriber-specific SLAs. Many algorithms have been devised for each function, e.g. Weighted Round Robin (WRR) for traffic scheduling. Software-based implementations of these algorithms have the advantage that they can be refined and dynamically tuned for specific network characteristics. Ideally, service providers should be able to choose from a menu of algorithms, selecting the appropriate algorithm and tuning its parameters for each node in their network.
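As an illustration of the scheduling function, a minimal software WRR scheduler can be sketched as follows (class and method names are hypothetical; production implementations run in hardware or optimized datapath code, and real schedulers typically weight by bytes rather than packet counts):

```python
from collections import deque

class WrrScheduler:
    """Illustrative weighted round-robin: in each cycle, every queue is
    served up to `weight` packets, giving services relative priority."""

    def __init__(self, weights):
        self.weights = weights                      # queue name -> weight
        self.queues = {q: deque() for q in weights}

    def enqueue(self, queue, packet):
        self.queues[queue].append(packet)

    def cycle(self):
        """Dequeue one full WRR cycle and return the packets in order."""
        served = []
        for q, w in self.weights.items():
            for _ in range(w):
                if self.queues[q]:
                    served.append(self.queues[q].popleft())
        return served
```

For example, with weights of 3, 2 and 1 for voice, video and data, one cycle drains three voice packets for every one data packet, approximating the relative service priorities described above.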

In broadband access, security often refers to stopping Denial of Service (DoS) attacks. Security is implemented through a variety of mechanisms - Access Control Lists, rate control of host-directed traffic, etc. An adaptable platform is essential to accommodate new threats, protocols and services.

Controls typically limit network misuse with respect to SLAs or regulatory frameworks. Policing and shaping algorithms ensure users do not exceed the bandwidth allotments of their SLAs. Service providers also want to better manage traffic from bandwidth-hogging applications; in these cases, application recognition is used to identify the specific traffic to be filtered or de-prioritized.
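The article does not name a specific policing algorithm, but the token bucket is a common choice for enforcing an SLA rate; a minimal sketch might look like this:

```python
class TokenBucket:
    """Simple token-bucket policer: packets conforming to the SLA rate
    consume tokens; packets arriving when the bucket is empty are dropped
    (or, in a real device, marked for de-prioritization)."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0       # token fill rate in bytes/second
        self.capacity = burst_bytes      # maximum burst size in bytes
        self.tokens = burst_bytes        # bucket starts full
        self.last = 0.0                  # timestamp of last update

    def allow(self, now, packet_bytes):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True                  # conforming: forward
        return False                     # non-conforming: drop/de-prioritize
```

A shaper uses the same bucket arithmetic but delays non-conforming packets in a queue instead of dropping them, which is why the two functions are usually listed together.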

Scalable platforms address the development of cost-effective portfolios for a wide range of performance and functionality, deployable across a broad spectrum of service providers.

A simple case study demonstrates the importance of these attributes. With WRR, queue-specific weights specify the relative priorities of services, but experience has shown that WRR is inadequate for triple-play QoS. To guarantee triple-play QoS, LSI recently implemented a sophisticated multi-level hierarchical scheduling algorithm that can be deployed as a software upgrade on existing nodes built on the LSI APP communications processor.

Choosing the right platform
Fixed-function and Ethernet switch devices have significant drawbacks. Both include hardwired traffic management, and most do not meet even today's requirements. Adaptability and functional scalability are non-existent or highly restricted. Hence they are not suitable for subscriber-facing linecards in next-generation network elements. Note that Ethernet switches are suitable, and often used, in network elements for other purposes such as internal interconnects.

Programmable platforms offer these desirable attributes and are recommended for building learning network elements. However, they differ greatly in their degree of support, hence a deeper assessment is recommended. All the attributes should be affordable from a total cost of ownership (TCO) perspective (TCO includes the cost of development, maintenance and upgrade of software through the product lifecycle). The architecture should support predictable performance in a variety of scenarios. Programmable, hardware-based scheduling with multi-level hierarchy support is critical. For known standards and worldwide service provider requirements (DSL Forum TR-101, IPv6 enablement, a menu of traffic management algorithms, etc.), pre-packaged, platform-optimized software must be available. For adaptability, it is equally important that the platform vendor is committed to investing in a software roadmap. For differentiation, the architecture must be easy to program, and source code with modification rights, complemented by robust tools, must be available. For long-term requirements, the programmable platform's evolution roadmap must not only consider new hardware functions but also incorporate a simple software migration strategy.

An example of a platform with all the desirable attributes is the LSI Broadband Access Platform Solution including APP communications processors and Broadband Access Software. The LSI Tarari Content Processors represent a good example of a platform to implement application-specific controls.

It is indeed possible to build cost-effective, future-proofed, next-generation networks that meet the requirements of multi-play services.

About the Author: Sindhu Xirasagar is Product Line Manager, Networking Components Division, LSI

Clearwire's CTO says it forms the highest cost of a network deployment, so what solutions provide the best answers for the future of mobile backhaul, asks Alan Solheim.

The evolution from voice-only cellular systems to 3G+ (HSPA, WiMAX, LTE...) has revolutionized the way we interact, share information, work and play. In order to deliver on this promise, the entire network - from the handheld device to the core - has to change at every level. One area that has not only changed, but has been turned inside out, is backhaul. Long considered merely a cost of doing business, backhaul is now seen by many carriers as an enabler for new service delivery, and in some instances, like Clearwire in North America, as a competitive advantage over traditional players. Clearwire has built the largest greenfield WiMAX network to date, and its CTO, John Saw, says: "It's what I call the elephant in the room that nobody talks about. Backhaul is probably the highest cost in deploying a network. Anyone who wants to roll out a real wireless broadband network nationwide needs a cheaper solution than current models." The same is true for anyone planning to build a 3G+ network: without radical change to the backhaul, applications will be starved for bandwidth, the user experience will be unacceptable, and the network economics will not be favorable.

So what are the required changes? The most obvious one is bandwidth. A 2G voice network needs a single E1 connection to the base station to provide the required capacity. The advent of GPRS and EDGE to provide data services resulted in an increase of up to four E1s per base station, but leased circuits or low-capacity TDM microwave radios were able to provide the increased capacity. The introduction of High Speed Packet Access (HSPA) and HSPA+ has driven the capacity requirement per base station up by a factor of 10, straining the throughput capability of these microwave systems and making leased E1 circuits cost-prohibitive.

As 3G base stations began to support native Ethernet interfaces, enabling the use of packet microwave or leased Ethernet backhaul, a variety of approaches were adopted in order to support legacy E1 interfaces. One method has been to leave the legacy E1 transport for the 2G base stations intact while adding an overlay to support new HSPA/HSPA+ base stations. This has been more prevalent among operators who have used leased E1 circuits for the 2G backhaul. Alternative deployments have included hybrid TDM/Ethernet microwave, or packet microwave with pseudowire. Finally, fibre has generally been used when available at the cell site; however, fibre penetration is very low - even in developed countries.

With the advent of 4G technologies (WiMAX and LTE), the network is IP end to end, and the backhaul load per base station has again gone up by almost another order of magnitude. Furthermore, in order to deliver the desired user experience the base station density has to increase: between 1.5X and 2.5X depending upon the amount of radio access spectrum available to the operator. The net result is a requirement to deploy a new backhaul technology that can deliver the necessary capacity, is packet based, and can easily add the new base stations as needed. Again, if fibre is present, it is the preferred technology. If fibre is not already present at the base station, however, the relative economics of fibre vs microwave must be taken into account.

Business Case for Fibre
The cost to deploy fibre is dominated by the installation expense and is thus distance-sensitive: the longer the fibre lateral that must be constructed, the higher the cost of the backhaul. Microwave, on the other hand, does not significantly change in cost as distance increases; however, there is an ongoing annual charge for tower space rental (if the towers are not owned by the operator) and the backhaul spectrum lease. The break-even distance for fibre construction varies with local conditions, but is typically less than 1000 meters. Given the large number of new sites and the low fibre penetration, the majority of base stations will be served by packet microwave, so it makes sense to look at packet microwave systems in more detail.
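Under the simple cost model described above - a one-time, distance-proportional fibre construction cost versus a microwave capital cost plus recurring lease charges - the break-even distance can be estimated as follows (all input figures are hypothetical placeholders, not vendor data):

```python
def breakeven_distance_m(fibre_cost_per_m, mw_capex, mw_annual_lease, years=10):
    """Distance at which constructing a fibre lateral costs the same as
    deploying microwave over the analysis period.

    Below this distance fibre is cheaper; above it, microwave wins."""
    microwave_total = mw_capex + mw_annual_lease * years
    return microwave_total / fibre_cost_per_m
```

With, say, $100 per metre of fibre construction, $30,000 of microwave capital cost and $5,000 per year of lease charges over 10 years, the break-even works out to 800 metres, consistent with the "typically less than 1000 meters" figure above.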

Business Case for Packet Microwave
The 10-year cost of ownership for packet microwave is only influenced to a minor extent by the capital cost of the radios (even though this tends to be the focus of the purchasing process). As shown in the graphic above, the majority of costs are driven by lease costs for space and for spectrum. These costs are very dependent upon local regulatory conditions, and on whether or not the operator owns the tower and site locations. A 10-year TCO analysis should therefore be done for every network that is considered. Current-generation packet microwave systems have a number of features that can be used to mitigate these costs.

First of all, packet microwave systems are not limited to the SDH hierarchy of bit rates and can deliver throughput of up to 50 Mbps per 7 MHz of spectrum with average-sized packets (note that throughput increases with smaller packets, and some manufacturers quote these artificially high rates; in practice, throughput at the average packet size is a much better measure of real-world system capability). Channel sizes are software-defined and can be up to 56 MHz, if allowed by the regulator, for a throughput of 400 Mbps. Polarization multiplexing can be used to double this capacity, but at a cost per bit that is more than double. A feature known as adaptive modulation (the ability to adjust the modulation and/or coding to optimize throughput under varying propagation conditions) allows these systems to deliver maximum capacity under normal conditions and maintain the high-priority traffic under poor conditions. Both of these translate to higher throughput, reduced antenna sizes and higher spectral utilization, resulting in a lower cost per bit.
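The quoted figures imply a roughly linear scaling of average-packet throughput with channel size (50 Mbps per 7 MHz, i.e. around 7 bps/Hz); a simple sketch of that scaling, ignoring modulation changes and protocol overhead, is:

```python
def throughput_mbps(channel_mhz, mbps_per_7mhz=50.0):
    """Scale the article's quoted average-packet throughput (50 Mbps per
    7 MHz) linearly with channel size. Real systems deviate from this
    with packet size, modulation and coding."""
    return channel_mhz / 7.0 * mbps_per_7mhz
```

Plugging in the maximum software-defined channel of 56 MHz reproduces the 400 Mbps figure cited above.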

The Importance of Ring and Mesh Network Topologies
Ring and mesh network topologies can further reduce the network cost per bit by decreasing the required redundancy costs and minimizing the average antenna size. Traffic engineering, which allows the use of statistical multiplexing, makes use of all the available paths in a ring/mesh network, and leverages packet-based prioritization to maintain priority traffic in the event of a failure condition. This can increase the effective network capacity by at least a factor of 4, further reducing the network's average cost per bit. Ease of installation and the reduction in site lease costs can be addressed by an all-outdoor system design, where the RF and baseband electronics are integrated into a single outdoor unit, eliminating the need for co-location space in the cabinet. The net result of these factors is at least a 10-fold reduction in the network cost per bit.

The Future of Backhaul
Looking into the future is always subject to error; however, we should expect a continuation of the trend towards smaller cell sizes in order to deliver higher capacity per user and make better use of the radio access spectrum. This will require ongoing innovation in the backhaul in terms of cost and integration levels. Capacity in excess of 1 Gbps per link is required in order to allow packet microwave to be used for the aggregation layer of the network and not just the final link to the end station. Traffic patterns that link base stations directly to one another, rather than hubbing all the traffic back to a central site (as proposed in the LTE standards), will further drive the need for ring/mesh network topologies rather than conventional hub-and-spoke designs. Finally, spectral efficiency improvements at all channel sizes are required in order to deliver higher levels of network capacity without exhausting the available spectrum. We are by no means at the end of the road when it comes to innovation and evolution, if not outright revolution, of the backhaul network.

About the author: Alan Solheim is VP Product Management, DragonWave.

Olivier Suard argues that policy control is shifting away from a network-centric and defensive approach to more flexible applications, such as bandwidth management and roaming cost control, with a tight link to charging. He asserts this is how communications service providers can realize the real potential of policy control.

In the past year interest in policy control has grown dramatically, to the point where it has become one of the hottest subjects for communications service providers (CSPs). Driving this is the fact that many CSPs are experiencing a growth in data (IP) traffic that is far outstripping the growth in their revenue, and affecting the performance of their network. This is a trend that is clearly unsustainable, so CSPs are looking for ways to redress the situation. They not only want to make better use of available resources, but also offer more personalized services to their customers, with the hope of generating more revenue.

Policy control is seen as the means to achieve these goals. It has a multitude of definitions, objectives and solutions, but broadly speaking, policy control is about managing the access to services by subscribers.

Historically, policy control can be thought of as having two very separate origins. In the fixed environment, policy control was about network resource allocation. An early example of policy control in action is the use of Class of Service (CoS) in MPLS networks to differentiate the delivery of enterprise services. In the mobile space, policy control was about charging - for example, taking action when a pre-paid customer's credit runs out.

Now, with the advent of the broadband era, policy control is stepping out of those silos and coming of age. Most importantly, policy control is shifting from a network-centric and defensive approach to one that puts the customer experience first.

To achieve this, policy control has become far more dynamic, taking a multitude of factors into account - in real time. These factors include not only the type of service but also the current network conditions, the user's profile (e.g. business or consumer, gold or standard, high or low spender), the type of device being used to access the service and even the location of the user.

A good illustration of a flexible application that ensures a high customer experience is bandwidth management. The initial problem can still be seen as a classic network-centric one: CSPs want to ensure that the bandwidth available to users does not become squeezed as a result of excessive use by a minority of subscribers who do not contribute proportionately to revenues, such as heavy peer-to-peer (P2P) download users. When that situation occurs, the majority of users experience a reduced quality of experience (QoE), which may lead to churn. This problem is most acute in mobile networks, where bandwidth is clearly limited.

State-of-the-art bandwidth management solutions allow CSPs to monitor usage in real time and, when congestion occurs, to dynamically adjust the access for specific services and specific users (at a cell level for mobile operators) to free up capacity. Such a solution is not just about defending the network - it's about providing the optimum broadband experience for the majority of users.
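A highly simplified sketch of that congestion-triggered decision logic might look like the following (the session representation and the 0.9 utilization threshold are assumptions for illustration; real deployments act on PCRF policy rules and network-reported load, not an in-memory list):

```python
def throttle_candidates(sessions, cell_capacity_mbps, threshold=0.9):
    """Pick sessions to de-prioritize when a cell approaches congestion.

    `sessions` is a list of dicts with 'user', 'service' and 'mbps' keys
    (a hypothetical shape chosen for this sketch)."""
    load = sum(s["mbps"] for s in sessions)
    if load < threshold * cell_capacity_mbps:
        return []                        # no congestion: leave everyone alone
    # De-prioritize low-priority heavy flows first (e.g. P2P), largest first,
    # so the majority of users keep their quality of experience.
    low_prio = [s for s in sessions if s["service"] == "p2p"]
    return sorted(low_prio, key=lambda s: s["mbps"], reverse=True)
```

The key point the code illustrates is that the decision is conditional and targeted: nothing is throttled while the cell has headroom, and only the heavy, low-priority flows are touched when it does not.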

Policy control solutions can also be used to help subscribers manage their spending, by informing them when certain pre-set, personal credit limits are reached. At first glance, it may seem counterproductive for CSPs to help their subscribers control their spending, but in actual fact, the consequences of a customer receiving an unexpectedly large bill (commonly referred to as "bill-shock") are likely to be far more damaging to the CSP in terms of churn, bad publicity and liability for interconnect charges (regardless of any settlement reached with the subscriber). Furthermore, this "cost control" feature can be offered as a service, enhancing the personalization of the relationship between the CSP and the customer.

This last example hints at an important aspect of policy control: its tight link with charging. This is most clearly illustrated by the differentiated price plans now being offered to broadband customers. For example, one APAC operator offers a monthly flat fee of $30 that includes 10 Gbytes of usage, with additional usage charged at one cent per 10 Mbytes, capped at $40. Enforcing such a price plan means that policy control needs to be aware of both the price plan and the usage for individual customers.
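Enforcing the cited price plan reduces to a small calculation; the sketch below assumes the $40 cap applies to the total monthly bill and that 1 GB = 1000 MB (both readings of the plan are assumptions):

```python
def monthly_charge_usd(usage_gb, flat_fee=30.0, included_gb=10.0,
                       overage_per_10mb=0.01, cap=40.0):
    """Charge for the APAC plan cited in the text: $30 flat for 10 GB,
    then one cent per 10 MB, with the total bill capped at $40."""
    if usage_gb <= included_gb:
        return flat_fee
    extra_mb = (usage_gb - included_gb) * 1000.0   # assuming 1 GB = 1000 MB
    return min(cap, flat_fee + (extra_mb / 10.0) * overage_per_10mb)
```

Even this toy version shows why policy and charging must be linked: computing the bill requires per-customer usage counters (a charging function) while deciding whether to keep serving traffic past the cap is a policy decision.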

This connection with charging is recognised by the 3GPP standard for policy control. Originally drawn up in the context of IMS (IP Multimedia Subsystem), this standard defines a number of key components, including the Subscriber Profile Repository (SPR), the Policy and Charging Rules Function (PCRF-the part of a policy control solution that makes the decisions), the Policy Control Enforcement Function (PCEF-the part that implements the decisions), as well as the Offline Charging System (OFCS) and Online Charging System (OCS) to handle post and pre-paid charging, respectively.

Many CSPs currently investigating policy control solutions are demanding compliance to the 3GPP and other related standards. However, policy control deployments typically take place in existing environments, and some compromises need to be made. For example, a bandwidth management solution could be deployed using existing network capabilities to throttle usage rather than introducing a standards-compliant PCEF. 

It should also be noted that the 3GPP standard is about the logical capability (or functionality), not the physical architecture. So a 3GPP-compliant implementation need not have separate physical "boxes" for each of the components. As mentioned earlier, for example, there is a tight link between the PCRF on the one hand, and the OCS and OFCS on the other. So implementing these together in one solution makes a great deal of sense.

Ultimately, CSPs must not forget that policy control should be about the customer experience and driven by marketing needs, rather than about network issues. Therein lies the real potential of policy control.

Olivier Suard is marketing director, OSS, Comptel.

With the recent announcement of the proposed T-Mobile and Orange merger in the UK, a number of analysts have commented on the possibility of a consolidation of the mobile sector, not only in the UK but in a number of countries. At the same time, competition authorities are increasingly wary of the emergence of large firms with significant market power that have both the means and the incentives to act in an anticompetitive manner.

So what would the proposed deal entail and should we be worried about large telecoms firms?

A "synergistic" deal?
Two of the five UK mobile network operators are proposing to merge their operations to become the largest operator in the UK, with a 37% share of the market and 28.4 million customers, ahead of O2, Vodafone and 3UK. The underlying rationale for the deal is based on synergies that have been valued by the merging parties at about £4bn. These benefits would arise from network sharing, operating under one brand, and the centralisation of some operations. According to the merging parties, the deal would result in better coverage and more efficient operations, leading to a better service for customers. The current rivalry in the sector has significantly eroded margins, as mobile operators often struggle to amortise their increasingly high subscriber acquisition costs over a long enough period before their customers switch.

Why would such mergers be of concern?
Economic theory suggests that without Government intervention some firms would become so dominant that they would be able to set prices at anticompetitive levels, drive competitors out of the market (and/or acquire them) and benefit from a quasi-monopoly position. While this would provide the shareholders of the dominant entity with great wealth (think Rockefeller), it would result in artificially high prices that would penalise the whole economy and therefore would not be in the interest of its citizens. That's why competition authorities have been tasked with keeping an eye on abuse-of-dominance cases and have such far-reaching powers to enforce their decisions. The antitrust case against Rockefeller's Standard Oil, decided in 1911, resulted in a forced break-up ... into 34 companies.

Mergers often result in increased market concentration and competition authorities through their merger control powers have to ensure that, overall, consumers will not be harmed.

Striking the right balance
The UK was, until now, the only country in Europe with five infrastructure-based operators. The merger would therefore result in a market with four operators, a configuration found in a number of other European countries. Orange and T-Mobile will no doubt try to persuade competition authorities that the merger is pro-competitive and would not lessen the level of rivalry within the sector. While it may sound surprising at first, Vodafone and O2 are unlikely to put up a big fight to prevent the merger from going through, as fewer players in the market means less competition overall, which is good for them. So it will be up to the competition authorities to clear the deal in one form or another. Most probably, a number of conditions aimed at ensuring that the market remains competitive will be imposed on the merged entity. Selling distribution outlets, auctioning portions of a customer base, redistributing spectrum or mandating wholesale access are some of the measures that might be proposed to alleviate the concerns of competition officials. Operators such as 3UK are also insisting on more efficient number portability processes to ensure that switching from one operator to another is easier. Given that 3UK's 3G network is shared with T-Mobile and that its 2G roaming partner is Orange, it is clear that it will be monitoring the merger process carefully. 3UK's exit options, however (to be acquired by another UK mobile operator), have been seriously curtailed.

Towards a more concentrated sector?
Over the next six months or so, competition authorities will have to analyse the likely impact of the proposed merger on competition and decide under which conditions it should be cleared. These conditions, if any, will have to be designed to ensure that prices are kept at a competitive level and that innovation is not harmed. No doubt Yoigo in Spain and Wind in Italy will be following these developments closely... as they may be next in line for merger talks.

Benoit Reillier is a Director and co-head of the European telecoms practice at global economics advisory firm LECG. The views expressed in this column are his own.

Turn on the evening news or look at any Internet news site and you'll probably see at least one story that optimistically says the recession is over. Just recently we marked one year since the collapse of US-based financial services firm Lehman Brothers, which many point to as the cataclysmic event that brought the eyes of the world on the looming threat of a total global financial meltdown.

Sure, we haven't sunk into a depression like many analysts predicted, but we're still a long way from climbing out of the hole we've gotten ourselves into. And this applies to the communications industry as well as to automotive, banking or any other vertical market.

Relatively speaking, communications has weathered the storm fairly well all things considered. We didn't see a mass exodus of people running away from their service provider contracts. In fact it looks like consumers are jumping on the smartphone bandwagon as eagerly as ever as we saw with recent launches of the iPhone 3GS, the Palm Pre and other new devices. Also, Apple's iTunes App Store was recently able to claim 1 billion downloads after just 9 months of service. But, that doesn't mean communication service providers (CSPs) can just sit back and wait for the money to keep rolling in.

There's a reason the tagline for our upcoming Management World Americas conference in Orlando this December is "Surviving to Thriving: New Business Models, New Services, New Profits." We're saying it's not enough to simply make it through the financial downturn in one piece. Just making it out alive doesn't constitute success, and in fact if that's all you're planning to do, you may as well write your epitaph now.

In this brave new world of global 4G mobile Internet access with blazing speeds capable of supporting applications and services we can only dream about today, it's far from a given that providers will continue to be the money-making operations they've been in the past.

With new networks and capabilities, CSPs also have to face the very real specter of market saturation. They may have a share of the worldwide $1.4 trillion communications market, but that number is growing very slowly. And the ones currently gaining the most ground in the market are providers of what we call "over-the-top" services - that is, the video, music, games and other services that ride along an incumbent carrier's infrastructure. While there is no reason that incumbents have to be relegated to bit carriers in this scenario, in many cases the revenue sharing just isn't there, and they are being cut out of potentially lucrative deals.

Adapt or Die
As the traditional CSPs see the success of the iTunes App Store and similar services where small companies or people in their basements come up with useful (and sometimes ridiculous) applications that people are willing to pay 99 cents for, where does that leave the providers who own the means of delivery to the end customer?

If they are smart, they will take a page from Charles Darwin and keep evolving and changing to avoid extinction. This means starting with reducing operating costs, which all providers should have begun doing well over a year ago. The next step is to reduce operational complexity within their organizations. This includes streamlining the OSS/BSS infrastructure and processes - no small task, I'll admit, but absolutely vital to staying afloat and stemming the hemorrhage of funds.

Last, but certainly not least, is creating new opportunities to bring in revenue. This goes well beyond focusing on existing end-user services, and beyond treating over-the-top services as something to be viewed with fear and loathing.

Quite the contrary, if incumbent operators take that attitude, they will surely fall by the wayside. But if they break the habits of past business models and embrace new areas like cloud computing, personalized services, mobile advertising and more, they will quickly move from being dreaded bit pipes to actually being enablers of innovative new services and opportunities. It won't be easy, but if providers can make this transition, they will be able to survive this and future economic slowdowns and thrive no matter what comes their way.

Keith Gibson says that presence is on a journey from the green and red icon to providing the basis for the future of communication

At the turn of the century presence was seen as the coming saviour of communication. It started off as a simple available/not available green and red icon, and the industry was quick to recognise its massive potential. The vision of having presence integrated everywhere was compelling - presence to update your selected community on exactly where you are, what device you are using, what mood you are in, what you are doing; even to check the status of your mail order from last Tuesday. The next logical evolution was to have presence rapidly embedded in the operator's network as a key integration point for information.
However, the reality of this "all seeing, all knowing" utopia was much too hard to achieve. The main drawback was that presence would have to be integrated into every service on the network to achieve this holistic approach - a feat too great for the industry to bear. Building a presence server to meet all these demands was very complex and costly, and no single service requiring presence could justify the cost of such a server, so the business case for presence fell over.

Technical hype also added to the slow growth of presence. Much of the industry was focused on the functionality presence could bring, and a great deal of time, effort and resource was devoted to developing these functions. Unfortunately, most of these projects failed to place the user at the centre of the business model. If they had, they could have successfully created value-added services that initiated rapid consumer adoption and demand for presence.

As a result of these drawbacks, many companies simply opted for siloed presence solutions, or deployed IP applications without presence. The few exceptions were some corporate-scale applications and stand-alone uses of presence, such as internet and mobile instant messaging.

The vision of integrating presence at the heart of the network to show status information across services appeared to die, or at least be set aside. The market dipped and many vendors shelved their presence server development programs.

But in the past twelve to eighteen months, the tides have turned and the benefits of presence are beginning to become a reality. Presence has been rolling out onto networks slowly, particularly where it can add value to an end-to-end service. For example, some operators such as Mobilkom are using the technology to deliver innovative services such as intelligent call routing across IP and traditional networks in order to deliver calls to the device that the user is most likely to be using at the time. 

Presence has also become a major piece of the Rich Communications Suite (RCS) initiative being driven by the GSMA, now with over 80 operator and vendor members. The GSMA RCS Project is a collaborative effort to speed up and facilitate the introduction of next generation rich communication services over mobile networks. Presence is key to RCS because RCS organises communication around the address book, so that each individual can see how their contacts can be reached, along with their social presence information. This is important to operators as it promotes communication.

Some key aspects of RCS include:
- Enhanced address book: Allows the user to share key aspects of their social presence with their address book. These include an indication of hyper-availability (like a ‘shout’, it indicates the user wants to talk), a portrait icon, status text (what am I feeling like today), and a favourite link (personal website link etc). The presence server manages all of this information and allows the user to decide who sees it. The Enhanced Address Book also displays the services available for each user; the presence server tracks the capabilities of each user so only available services are displayed against the contact being viewed.
- Content sharing: Users can exchange video, still images, or files whilst on a voice call or outside of one. Again the presence server tracks the capabilities of each user to receive these types of calls, ensuring the caller is never disappointed when they try to communicate.
- Enhanced messaging: presents a conversational threaded view of SMS and MMS within the phone client and also adds chat services using instant messaging. Again presence allows the user to see who is available for a chat session enhancing the possibility of group discussion.
- Fixed mobile convergence: RCS works across mobile and ‘fixed’ networks, allowing the user to have one address book that is visible whether they are using their mobile phone, netbook or PC.
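As an illustration, the capability-tracking behaviour described in the enhanced address book bullet could be sketched as follows. This is a hypothetical sketch, not RCS-specified behaviour; the class, service names and contact identifiers are all invented:

```python
# Hypothetical sketch: a presence server records which RCS-style services
# each contact's client supports, so the address book can display only
# the services that are actually available for that contact.

RCS_SERVICES = {"voice", "chat", "video_share", "image_share", "file_transfer"}

class PresenceServer:
    def __init__(self):
        self._capabilities = {}   # contact id -> set of supported services
        self._status = {}         # contact id -> social presence fields

    def publish(self, contact, capabilities, status_text="", hyper_available=False):
        """Called when a contact's client registers its capabilities and status."""
        self._capabilities[contact] = RCS_SERVICES & set(capabilities)
        self._status[contact] = {"text": status_text, "hyper": hyper_available}

    def visible_services(self, contact):
        """Services the enhanced address book should display for this contact."""
        return sorted(self._capabilities.get(contact, set()))

server = PresenceServer()
server.publish("alice", {"voice", "chat", "video_share"}, "At the beach", True)
print(server.visible_services("alice"))   # only the services alice supports
```

A caller is never offered, say, content sharing towards a contact whose device has not published that capability - the "caller is never disappointed" property the text describes.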

Operators are already launching RCS market trials, and full deployments are expected in 2010.  One such operator is Korea Telecom, which recently announced its early RCS deployment. Many European operators are also in trials, or beginning them. RCS could become the next big evolution in mobile services, and at its core will be the presence server - this time driven by service need rather than architectural possibilities.

With presence no longer a niche application, but forming the backbone of Rich Communication Suite applications and services, it is on its way to revolutionising communication, making it more streamlined, social and multi-dimensional. Telecom companies around the world are researching innovative ways to tap into the functionality of presence. For example, some cable companies in the US would like to use presence to promote messaging between their subscribers by allowing them to show their friends what they are currently watching on TV. That is just one specific application of presence - the options are limitless.

One initiative that is driving the evolution of presence is the FMCA (Fixed-Mobile Convergence Alliance) open presence trial, where six major operator members are trialling the interconnection and interworking of rich presence applications. The FMCA are also exploring technical, operational and commercial models for presence enabled services in areas such as unified communications, VoIP, mobile IM, IPTV, social networking, content sharing services, networked gaming and wholesale services within the trial. Such initiatives will ultimately give users the capability to extend presence well beyond service silos and network boundaries, finally making presence a global feature.

So once again, presence has been earmarked to become a component in the network that is integrated with all services. However, history has shown us how this vision can stumble, so the industry must work together to ensure the roll-out of presence is a success this time around. The focus must be on business needs where the business case is improved by the addition of a presence server and the cost is justified. It is time for presence to grow up and deliver the potential we have been waiting to see. The next steps will be both critical and exciting to watch.

About the author: Keith Gibson is CEO of Colibria.

Mobile operators must contend with stagnating revenue growth resulting from reduced consumer spending. To improve profit margins in this environment, companies must find ways to simplify their operations and refocus scarce resources on activities that offer the best returns. European Communications runs extracts from a white paper from CapGemini that looks at how mobile operators can succeed in this quest for margin

CapGemini's Telecom, Media and Entertainment team analysed various cost reduction measures across three key areas: network operating expenditure (OPEX), subscriber acquisition (SAC) and retention costs (SRC), and the costs of servicing customers. It modeled the potential savings that could accrue from adoption of these measures, and its analysis shows that a typical mobile operator in Europe is positioned to improve EBITDA margins by up to four percentage points within four years by the judicious implementation of these measures. However, significant challenges exist in doing so.

The context
Telecom operators in Europe have faced some of their toughest times in recent months. After a period of high growth, mobile telcos are now confronted with a credit crunch that is impacting their growth plans and an economic slowdown that is affecting consumer spending. For some time, strong growth in mobile revenues had diverted operators' focus from driving down costs. In a growing and competitive market, operators had concentrated on launching a wide portfolio of voice and data products, technology upgrades and ramping up their customer service functions, resulting in complex structures and systems.

In light of the current revenue challenge, mobile operators now have to shift their focus from growth strategies to simplifying their businesses and driving down costs to sustain healthy margins - particularly since operating costs for most operators have been gradually rising over the past few years, and there is clear scope for targeted OPEX improvement measures.

Network Opex
For the mobile operator that we have modeled, network OPEX accounts for over 26% of total OPEX. We have identified three key areas of network expenses that operators can focus on in their drive to cut costs. We estimate that cost savings initiatives focused on network OPEX are likely to result in a 2.7-3.8 percentage-point rise in EBITDA margins, depending on the extent of the measures deployed. The EBITDA uplift is loaded towards the end of the four-year period due to the progressive deployment schedule that the measures entail.

Backhaul Ownership
With rapid increases in backhaul capacity driven by network upgrades, most operators are caught in a situation where their increasing share of payouts to backhaul owners are driving down their current margins. This has prompted some operators to venture out into building their own transmission networks.

For instance, Vodafone Germany has embarked on an initiative to build its own backhaul and estimates that it could save up to €60 million annually in OPEX as a result. In Italy, the company has already migrated over 80% of its backhaul to its own links.

However, savings through backhaul ownership are closely tied to the traffic requirements of the operator. We have modeled our analysis on the assumption that base stations would require a backhaul capacity of up to 6 E1 lines, as opposed to the current average of 2. As such, we believe operators that are seeing a strong upswing in traffic, or those already operating at high capacity utilisation rates, are likely to benefit most from taking ownership of their backhaul.
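The capacity assumption is easy to sanity-check: an E1 line carries 2.048 Mbit/s, so the modelled move from 2 to 6 E1s per base station triples per-site backhaul. A minimal sketch (the E1 rate is standard; everything else is just the text's assumption restated):

```python
# Back-of-envelope check of the backhaul assumption in the text: base
# stations moving from an average of 2 E1 lines to up to 6. An E1 carries
# 2.048 Mbit/s.

E1_MBPS = 2.048

def backhaul_capacity_mbps(e1_lines):
    return e1_lines * E1_MBPS

current = backhaul_capacity_mbps(2)   # ~4.1 Mbit/s per site today
modelled = backhaul_capacity_mbps(6)  # ~12.3 Mbit/s per site in the model
print(f"per-site uplift: {modelled - current:.3f} Mbit/s "
      f"({modelled / current:.0f}x capacity)")
```

It is this tripling of leased-line payouts per site that makes self-owned microwave or fibre backhaul attractive for high-traffic operators.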

Our analysis reveals a potential upside of 1-1.85 percentage points in EBITDA margins from implementing this measure. In bringing backhaul in-house, operators will need to follow a phased approach: first identify the sites, then prioritise them based on capacity utilisation forecasts, and finally select the appropriate technology, microwave or fiber.

Energy Savings
Our analysis suggests that by deploying focused initiatives around improving cooling efficiencies and reducing energy consumption at mast sites, operators stand to realise tangible savings.

Integrating these measures into our cost savings model, we believe that savings of up to 4.5% can be obtained on an operator's electricity OPEX. These savings translate into a direct uplift of EBITDA margins by 0.16-0.19 percentage points. We have modeled these savings as a one-time measure for implementation on existing sites.
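A rough sketch of how the 4.5% electricity saving could map to the quoted margin uplift. The electricity-cost share of revenue used here is an assumption chosen for illustration, not a figure from the study:

```python
# Illustrative arithmetic behind the electricity-savings figure. The 4.5%
# saving is from the text; the electricity OPEX share of revenue is an
# assumption chosen to show how a 0.16-0.19 pp uplift could arise.

def ebitda_uplift_pp(electricity_share_of_revenue_pct, saving_pct):
    # Every euro of OPEX saved lifts EBITDA by the same euro, so the
    # margin uplift is simply the saved share of revenue, in percentage points.
    return electricity_share_of_revenue_pct * saving_pct / 100.0

for share in (3.6, 4.2):   # assumed electricity OPEX as % of revenue
    print(f"share {share}% -> uplift {ebitda_uplift_pp(share, 4.5):.2f} pp")
```

With electricity at roughly 3.6-4.2% of revenue, a 4.5% saving lands exactly in the 0.16-0.19 percentage-point range quoted.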

Network Sharing
For larger operators, the key advantage is the opportunity to monetise assets that have already significantly depreciated, thereby offering them a steady revenue stream. For smaller operators, the case for network sharing appears even more attractive as these operators can convert significant parts of their CAPEX into OPEX and in the process also achieve a faster rollout.

An analysis of the potential savings that can accrue through sharing of network elements, including the Radio Access Network, reveals that operators with moderate coverage can achieve EBITDA upsides of around 1.0 percentage point while operators with nationwide coverage can achieve an EBITDA improvement of over 1.4 percentage points (see Figure 6).

Subscriber Acquisition and Retention Costs
Subscriber acquisition and retention costs (SAC/SRC) form the single largest OPEX element for most mobile operators. Handset subsidies account for the bulk of these costs with a 69% share while dealer commissions account for almost 15%.

Increasing Contract Duration
The duration of contracts offered by operators is closely tied to the amount of handset subsidy that the operator incurs. Consequently, operators are experimenting with varying contract durations to reduce the impact of high subsidies for feature phones and smartphones.

In the European context, we have modeled a scenario where the current average of 18-month contracts is extended to 24 months. An increase of over 40% in customer lifetime value can be achieved by extending the duration of the contract. However, consumers are likely to resist any extension of contract durations. In order to drive uptake of extended contracts, operators will need to create loyalty benefit plans that encourage customer stickiness.
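The lifetime-value effect of a longer contract can be illustrated with simple arithmetic: the handset subsidy is a one-off cost, so stretching the contract raises the margin earned per subsidy euro. The monthly margin and subsidy figures below are invented for illustration, not CapGemini's model inputs:

```python
# Illustrative lifetime-value sketch. The subsidy is paid once, while
# margin accrues monthly, so extending an 18-month contract to 24 months
# lifts lifetime value disproportionately. Figures are assumptions.

def lifetime_value(monthly_margin_eur, months, subsidy_eur):
    return monthly_margin_eur * months - subsidy_eur

ltv_18 = lifetime_value(12.0, 18, 60.0)
ltv_24 = lifetime_value(12.0, 24, 60.0)
print(f"18m: {ltv_18:.0f} EUR, 24m: {ltv_24:.0f} EUR, "
      f"uplift {100 * (ltv_24 / ltv_18 - 1):.0f}%")
```

Under these assumed figures the uplift is around 46%, consistent with the "over 40%" claim in the text.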

Our analysis shows that by extending contracts and implementing progressive loyalty benefits, operators can realize an EBITDA uplift of 0.44-0.48 percentage points by the end of the fourth year. However, challenges arise around managing revenues, customer expectations, and the distribution of subsidies. Nevertheless, the challenges are not insurmountable and the measure, in itself, offers scope for operators to embark on a new low-cost subsidy path.

Direct Sourcing of Handsets
Our analysis shows that large operators with significant purchasing power can reduce handset sourcing costs by procuring handsets directly from original design manufacturers (ODMs). ODMs have in-house design and manufacturing facilities and offer a significantly faster turnaround time than traditional OEMs. Moreover, the lack of a strong ODM brand, and the relative scale of the operator, give the latter significant bargaining power in negotiating handset procurement. Indeed, operators such as Vodafone have seen a price differential of over 16% between an OEM and an ODM for comparable budget handsets.

We have modeled a progressive rise in sourcing low cost handsets from ODMs, with the upper limit capped at 35% of budget handsets at the end of year four. Our analysis reveals a potential upside of 0.12-0.2 percentage points to EBITDA margins over the same period. Sourcing higher volumes and feature-rich handsets from ODMs is likely to result in significantly higher savings for operators. However, a key challenge for operators will be to ensure sustained after-sales support from the ODM.

Customer Service Costs
Our analysis of cost cutting measures focused on customer service reveals three initiatives that have not been extensively implemented by operators in Europe and that have potential for margin uplift:

Paperless Billing
Research on the cost differential between paper and e-bills shows a differential of up to 59%. Building these savings into our analysis shows scope for an EBITDA margin uplift of 0.1 percentage points for operators at the end of year four, assuming a rise of 3% in the number of subscribers opting for an e-bill. Operators could strive to increase uptake through focused promotions and enhanced e-bill functionality to drive up savings.
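The savings mechanics can be sketched as follows. The 59% differential comes from the research cited above; the per-bill cost and subscriber base are assumptions for illustration only:

```python
# Rough sketch of the e-billing saving. The 59% cost differential is from
# the cited research; the per-bill cost and subscriber counts are assumed.

PAPER_BILL_EUR = 1.00                    # assumed cost of a posted paper bill
EBILL_EUR = PAPER_BILL_EUR * (1 - 0.59)  # 59% cheaper, per the cited figure

def annual_saving(subscribers_converted, bills_per_year=12):
    per_bill = PAPER_BILL_EUR - EBILL_EUR
    return subscribers_converted * bills_per_year * per_bill

# e.g. 3% of an assumed 10M subscriber base switching to e-bills
print(f"{annual_saving(0.03 * 10_000_000):,.0f} EUR per year")
```

Under these assumptions a modest 3% conversion already yields roughly €2 million per year, which is why opt-out migration strategies (as in the Hutchison example below) pay off so quickly.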

Hutchison (3) Austria initiated a drive to migrate its customers to e-bills in mid-2007. At that point, 3 was sending out over 480,000 paper invoices per month, each running between 5 and 100 pages. Having seen limited success with opt-in strategies, 3 opted for aggressive opt-out measures, with strong results: it achieved a conversion rate of over 85% against a target of 65%.

Unstructured Supplementary Service Data (USSD) based Self-Care
USSD is a real-time messaging service that functions on all GSM phones and has seen multiple deployments across emerging markets. Operators could build mobile portals that could be accessed through USSD, and benefit from the lower costs and faster query resolution that the service offers. By offloading some of the most common customer service queries such as those around bill payments, balance and validity checks, and status of service requests, operators can reduce the burden on their contact centers, and consequently, the cost involved in servicing each consumer. However, lack of regulation and limited interoperability among operators for consumers who are roaming have resulted in the service seeing limited traction in Europe. Operators will need to collaborate among themselves to ensure uptake of USSD services.
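At its core, a USSD self-care front end is a dispatcher from short codes to account queries that would otherwise land in a contact centre. The sketch below is purely illustrative; the service codes, MSISDN and account data are invented:

```python
# Hypothetical sketch of a USSD self-care dispatcher: short codes map to
# the common account queries the text mentions (balance, validity).
# Codes, numbers and account records are invented for illustration.

ACCOUNTS = {"+4369912345678": {"balance_eur": 7.50, "valid_until": "2010-03-01"}}

def handle_ussd(msisdn, code):
    """Resolve a USSD request into the text shown on the handset."""
    account = ACCOUNTS.get(msisdn)
    if account is None:
        return "Unknown subscriber"
    if code == "*101#":                     # balance check
        return f"Balance: {account['balance_eur']:.2f} EUR"
    if code == "*102#":                     # validity check
        return f"Valid until {account['valid_until']}"
    return "Unknown service code"

print(handle_ussd("+4369912345678", "*101#"))   # Balance: 7.50 EUR
```

Each request resolved this way is one fewer call into the contact centre, which is where the per-subscriber servicing cost reduction comes from.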

A Time Slot Approach to Customer Calls
Our measure envisages a scenario where customers are assigned specific time slots during which they can contact customer service, with calls outside the time slot being treated as regular charged calls. However, such a measure will have to be tempered by a minimum Quality of Service (QoS) guarantee, and offset by incentives (such as free minutes) for a drop in QoS.

By utilising the time slot approach to customer calls, we believe that operators could achieve a reduction of over 37% in the resources deployed. However, the implementation of this measure is likely to be challenging, given the complex analytics that drive the slot designs and the need to manage customers' apprehension. Nevertheless, we believe that sound implementation of this measure will result in an EBITDA margin uplift of 0.2 percentage points by the end of year four.

In conclusion, telcos will need to concentrate on gaining tactical benefits from cost reduction initiatives in the near-term and create sustainable cost advantages, with an emphasis on operating margins, before they can look at creating long term value through growth strategies. Operators will have to identify complexities in their systems, processes and cost structures and develop a roadmap to systematically mitigate them. Mobile operators will also need to identify activities that offer the maximum value realisation and redirect financial and operational resources on these activities to create lean and efficient businesses.

About the authors:
Jerome Buvat is the Global Head of CapGemini's Telecoms Media & Entertainment Strategy Lab.
Sayak Basu is a senior consultant in the TME Strategy Lab.

Centralising service and policy control management will mean operators can manage traffic and users in a holistic manner across 3G and 4G networks

Mobile data services over 3G networks are proving successful in the market. 3G subscribers account for 350 million of the 3.5 billion mobile subscribers worldwide, with more than 30 million being added every quarter. As 3G services grow in popularity, mobile operators face several challenges.

First, unlimited flat rate plans and competitive pricing pressures are accelerating data usage, putting pressure on business models as revenues fail to keep pace with mobile data traffic growth.

Second, the network is evolving to include not only person-to-person communication, but person-to-machine communication and machine-to-machine communication as more subscribers and devices become connected.

Third, the popularity of application stores and the proliferation of new multimedia applications are changing what subscribers expect from operators. They want more personalised services, access to a broader range of applications, and more interactive features to engage with their social networks.

Finally, new devices such as smartphones, smart meters, and healthcare devices offer improved ways to communicate and connect, access the Internet, interact and collaborate, entertain, and mobilise the enterprise. 

Mobile operators are realising the need to optimise their network and service architectures to continue to grow capacity, lower costs, improve network performance, manage devices, and meet subscriber expectations. 

The LTE Opportunity
LTE technology is emerging as the next generation wireless technology that will lead the growth of mobile broadband services in the next decade. Its adoption by operators around the world has the potential to generate economies of scale unmatched by any previous generation of wireless networking technology.

LTE is critical to delivering the lower cost per bit, higher bandwidth, and subscriber experience needed to address the challenges of mobile broadband. It promises a whole new level of mobile broadband experience as services become personal, participation becomes more social, behaviour becomes more virtual, and usage reaches the mass market. It offers:
- Significant speed and performance improvements for multimedia applications at a lower cost;
- Enhanced applications such as video blogging, interactive TV, advanced gaming, mobile office, and social networking; and
- A wider variety of devices such as smartphones, netbooks, laptops, gaming and video devices, as well as machine-to-machine supported applications including healthcare, transportation, and weather devices.
To be a significant contributor to end-to-end service creation and enrich the subscriber experience, the LTE network must support an agile, scalable and open approach. This will depend on:
- The network's capacity to support peak user data rates, high average data throughputs, and low latency;
- The ability to leverage existing 3G infrastructure investments with a network migration path to LTE;
- Ensuring service continuity for existing revenue-critical 3G services, while supporting the rollout of new 4G services; 
- Balancing insatiable demand for mobile data services with LTE rollout plan dependencies on spectrum availability, and a device, services and applications ecosystem; and   
- Innovative service plans that encourage mass market adoption.

The LTE Evolved Packet Core (EPC) plays an important role in meeting these challenges and is a fundamental shift towards a service-aware, all-IP infrastructure. It has the potential to deliver a higher quality of experience at a lower cost, and improved management of subscribers, applications, devices and mobile data traffic.

Mobile operators are beginning to invest in LTE radio, transport, and core infrastructure to address the growth in mobile data traffic. However, bandwidth is a limited resource in much the same way as electricity. In the utility sector, smart meters are being used to manage electricity consumption by encouraging consumers and businesses to increase usage during off-peak hours with lower rates and decrease usage at peak hours.

Operators will need to adopt a similar approach by supplementing capacity improvements with controls that manage the flow of, and demand for, data. This is where the key control components of the EPC - the Home Subscriber Server (HSS), Policy Controller (PCRF), and inter-working functions (3GPP AAA) - come into play. Together they form the central control plane: they house the main repository for subscriber and device information, provide authorisation and authentication functions for services, apply policies to manage network resources, applications, devices, and subscribers, and ensure inter-working with other access networks such as EVDO, WiMAX, and WiFi.
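The policy role of the PCRF described above can be caricatured as a single decision function: given who the subscriber is and how loaded the network is, decide what to authorise. The tiers, load threshold and bandwidth caps below are illustrative assumptions, not 3GPP-defined values:

```python
# Simplified sketch of policy-based traffic management: a PCRF-like
# function that authorises or throttles a session based on subscriber
# tier and current cell load. All tiers, thresholds and rates are
# illustrative assumptions only.

def policy_decision(subscriber_tier, cell_load_pct, requested_kbps):
    """Return the bandwidth (kbit/s) to authorise for a session."""
    caps = {"gold": 10_000, "silver": 4_000, "bronze": 1_000}  # assumed tiers
    allowed = min(requested_kbps, caps.get(subscriber_tier, 500))
    if cell_load_pct > 80 and subscriber_tier != "gold":
        allowed = allowed // 2     # shed load from non-premium users at peak
    return allowed

print(policy_decision("silver", 90, 6_000))   # throttled at peak
print(policy_decision("gold", 90, 6_000))     # premium tier unaffected
```

This is the "smart meter" idea from the previous paragraph applied to bandwidth: demand is shaped at peak rather than all traffic being treated identically.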

As the cornerstone for mobile personalisation and management, these ‘smart’ subscriber, service, and policy controls enable mobile operators to moderate data traffic and entice subscribers with innovative, personalised services.

Getting Ready for LTE
Many leading operators are deploying subscriber, service, and policy controls in 3G networks. Over 65% of mobile operators polled in a recent Yankee Group survey require policy control currently or within the next 12 months to manage mobile data growth in their 3G networks and are not waiting for LTE. Operators can achieve significant benefits by centralising control across mobile access technologies. Benefits include smoother service migration and better management of mobile data traffic and applications such as the ability to direct traffic and applications to the optimal access network.

LTE is well positioned to meet the requirements of next-generation mobile networks as subscribers embrace multimedia services and as M2M applications are adopted. It represents a significant opportunity for mobile operators to meet the challenges and opportunities of exponential mobile data growth by complementing capacity and infrastructure investments with smart subscriber, service and policy control. This approach enables operators to control capital costs, manage the flow of data traffic, and create innovative and personalised service offers that entice subscribers and ensure profitability. 

About the author: David Sharpley is Senior Vice President, Bridgewater Systems.

Building a future-proof fibre optic infrastructure is as much about the business model you will follow, as it is the technical decisions you face

FTTx is vital if we are to fulfil the huge demand for large bandwidths in tomorrow's world. One of the options is to use FTTC (Fibre-to-the-Curb) to bring the DSL port closer to the customer. However, copper-based DSL transmission can only be pushed so far. Fibre optic transmission all the way to the customer using FTTH (Fibre-to-the-Home), on the other hand, will provide sufficient bandwidth for the next 20 years.

The telecommunications industry has had more than ten years' experience with active and passive optical networks, and debates about the advantages and disadvantages of these networks have been running for at least that long. Fibre optic networks can be laid directly to households (Fibre-to-the-Home, FTTH) using Passive Optical Networks (PONs) or Active Optical Networks (AONs).

The key technical difference between active and passive access technology is that passive optical networks use a passive splitter, whereas active optical networks function with an Ethernet Point-to-Point architecture. The objective of both passive and active optical networks is to bring the fibre optics as close as possible to - and ideally right into - the subscribers' houses and apartments. This FTTH solution is technically the best option as regards transmission quality and bandwidth.

Business case challenge
Using fibre optic cable promises virtually unlimited bandwidth; in the last mile, however, the network operator almost always has only the copper wire line. So when DSL technology is no longer adequate, new optical cables have to be laid.

The high investment costs of setting up this infrastructure, combined with telecommunications providers' falling revenue, mean it is often difficult to put a business case to investors and network providers' management boards. The ICT industry is still accustomed to returns on investment of one to three years, but expansion of FTTH and FTTB networks (regardless of whether PON or Ethernet Point-to-Point technology is used) sometimes takes more than 10 years to show a return on investment. Nevertheless, business cases vary greatly with the application and prevailing conditions, and with whether passive or active access technology is used for FTTH rollouts.

Passive Optical Networks (PONs)
As regards the core network, the first network element of a PON is the Optical Line Termination unit (OLT), which provides n x 1 Gbit/s and n x 10 Gbit/s Ethernet interfaces to the core network and PON interfaces to the subscribers. The PON types usually used here today are Ethernet PON (EPON) and Gigabit PON (GPON), and in future Gigabit-Ethernet-PON (GEPON) or WDM-PON. EPON installations are currently found primarily in the Far East, GPON on the other hand in the US and Europe.

In a PON, the signal on the fibre optic to the subscribers is partitioned by a passive splitter into optical subscriber connections. The splitter is located either in an outdoor housing or directly in the cable run, for example in a sleeve. In other words, the network structure is a Point-to-Multipoint (PMP) structure.

In an FTTH network architecture, subscriber access is implemented via an optical network termination (ONT), which terminates the optical signal and feeds it into one or more electrical interfaces, such as 100BaseTx, POTS, or ISDN. ONTs with VDSL interfaces are available for FTTB to bridge the existing subscriber access lines in the property. In this case, each subscriber receives a VDSL modem as the network termination.

Ethernet-Point-to-Point (PtP)
As regards Ethernet Point-to-Point network structures, every subscriber gets their own fibre optic, which is terminated at an optical concentrator (Access Node, AN). Metro-Ethernet switches or IP edge routers, which were not originally conceived for the FTTH/FTTB environment, are normally used here. KEYMILE designed MileGate, a Multi-Service Access Node (MSAN), for this type of application. MileGate can be called an optical DSLAM because the system has a very high density of optical interfaces and at the same time fulfils all the demands of a DSLAM. MileGate uses standard optical Ethernet interfaces based on 100 Mbps (for example 100BaseBX) or Gigabit Ethernet. Because of this transmission interface, mini or micro DSLAMs that ensure distribution of data in individual properties can be used in FTTB architectures too.

All network topologies can be implemented with PON and Ethernet-PtP. However, a network operator should decide early on which architecture will still be able to meet demand in 15-20 years, because infrastructure investments should achieve a return over about 10 years; modifications should not become necessary after just five.

Initially, network operators save real money with a Point-to-Multipoint structure (of the type required for PON systems), as they have to lay fewer fibre optics than with a Point-to-Point structure from the very beginning. However, the optical splitter is a weak point. This network component might have to be replaced if customers need greater bandwidth or, if the worst comes to the worst, even be bypassed with additional fibre optics to achieve a Point-to-Point structure.

A comparison of passive optical and Point-to-Point structures:
PtP technology is much better in terms of bandwidth per subscriber: the maximum bandwidth per subscriber is far higher, and the flexibility to allocate different bandwidths to individual subscribers (e.g. for corporate customers) is also greater than with PON systems. Depending on the splitting factor, a PON connection via fibre optics can supply less bandwidth than a VDSL2 connection via copper wire. When it comes to increasing bandwidth, too, PtP architecture is superior to the PON's PMP architecture: just by converting boards, subscribers can obtain an upgrade, without the network architecture or the service of other subscribers having to be changed.
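The splitting-factor arithmetic behind this comparison is simple to check. The GPON downstream figure is the nominal 2.488 Gbit/s line rate; the split ratios are typical examples rather than values from the article:

```python
# Back-of-envelope comparison: shared PON downstream bandwidth per
# subscriber versus a dedicated point-to-point Ethernet fibre. The GPON
# line rate is nominal; split ratios are typical examples.

GPON_DOWNSTREAM_MBPS = 2488   # nominal GPON downstream line rate
PTP_MBPS = 100                # e.g. a dedicated 100BaseBX link

def pon_per_subscriber_mbps(split_ratio):
    """Best-case even share of the PON tree's downstream capacity."""
    return GPON_DOWNSTREAM_MBPS / split_ratio

for split in (16, 32, 64):
    print(f"1:{split} split -> {pon_per_subscriber_mbps(split):.1f} Mbit/s "
          f"per subscriber (PtP: {PTP_MBPS} Mbit/s dedicated)")
```

At a 1:64 split the even share drops below what a dedicated 100 Mbit/s PtP link guarantees, which is the basis for the VDSL2 comparison in the text.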

Within a PON tree, all the subscribers sit on the same optical segment. If an ONT causes faulty synchronisation, or produces an optically indefinable signal, remote localisation of the malfunction to the ONT involved might not be possible. With PtP, on the other hand, both the fibre optic path and the end customer's ONT can be clearly assessed, and in the worst case the laser on the AN can be deactivated for each subscriber from the control centre. As regards availability, the PON is also at a disadvantage compared with PtP because, to date, there are no plans to connect customers redundantly in a PON.

Currently, when the same functions are offered, there are no significant differences in the costs of the subscribers' terminal equipment (CPEs, ONTs). Because PtP Ethernet installations use standard Ethernet interfaces, however, substantial price falls are to be expected as more and more flood the market. Despite standardisation, ONTs in today's PON environment are not interchangeable between different manufacturers' systems, which means the selection of models is restricted and the savings from higher production volumes are negligible. However, in terms of price per subscriber, and because the optical paths can be used several times over, PON is at an advantage compared with Ethernet-PtP.

That advantage, however, is eaten up by the subsequent cost of upgrades, since an upgrade affects an entire PON tree. Thanks to the finer granularity of the ANs and the separation of customers in a PtP network, upgrades in an active optical network can be tailored to individual subscribers. The flexibility of PtP really bears fruit with business customers: their requirements are invariably highly individual, whereas PON network concepts tend to be static. In this case the active approach is clearly superior.

A generic technology comparison can only provide an initial overview. While network operators in Asia prefer passive optical networks, a study by the FTTH Council Europe showed that in Europe over 80% of FTTH/FTTB installations are based on Ethernet PtP.

About the author: Klaus Pollak is Head of Consulting & Projects, Keymile.

Although smaller and quieter than in previous years, ITU Telecom World 2009 offered an opportunity for industry and governments from all round the globe to meet, and examine how ICT technologies can play their part in the development of societies and economies

Many said it would be a disaster. They said that without the big European and western manufacturers footing the bill then the event couldn't go ahead. No Nokia, no Ericsson, no Alcatel-Lucent, no show.

Well, despite the fact that at 18,000 visitors the event mustered only a quarter of the attendees that came to the 2003 show in Geneva, ITU Telecom World 2009 felt like a success to many who were there, as it took on a different tone from past shows. Others, though, found that business was slow and regretted their decision to attend.

The show lost its focus as a glossy showcase for the headline products of all the world's major manufacturers, and instead became a meeting point for those concerned with how best to plot the course of the development of all the world's markets.

So this time, the focus shifted to the southern and emerging markets. And the noise came not from the western manufacturers but from the Chinese vendors, and from Russia and the host of national pavilions that made up most of the show floor. There was also news around legislation and standardization from the ITU itself, to go with the focus on what ICT technologies can bring to the economies of nations across the world.

And there was debate too, whether it was a warning from the head of the ITU on the need for vigilance in combating security threats in the IP sphere, or on standardisation development, or the latest research on the state and size of the markets.

4.6 billion mobile subscriptions and the broadband divide
The ITU's latest statistics, published in The World in 2009: ICT facts and figures, revealed rapid ICT growth in many world regions in everything from mobile cellular subscriptions to fixed and mobile broadband, and from TV to computer penetration - with mobile technology acting as a key driver.

The data, forecasts and analysis on the global ICT market showed that mobile growth is continuing unabated, with global mobile subscriptions expected to reach 4.6 billion by the end of the year, and mobile broadband subscriptions to top 600 million in 2009, having overtaken fixed broadband subscribers in 2008.

Mobile technologies are making major inroads toward extending ICTs in developing countries, with a number of nations launching and commercially offering IMT2000/3G networks and services. But ITU's statistics also highlight important regional discrepancies, with mobile broadband penetration rates still low in many African countries and other developing nations.

More than a quarter of the world's population is online and using the Internet, as of 2009. Ever-increasing numbers are opting for high-speed Internet access, with fixed broadband subscriber numbers more than tripling from 150 million in 2004 to an estimated 500 million by the end of 2009.

Rapid high-speed Internet growth in the developed world contrasts starkly with the state of play in the developing world. In Africa, for example, there is only one fixed broadband subscriber for every 1,000 inhabitants, compared with Europe where there are some 200 subscribers per 1,000 people. The relative price for ICT services (especially broadband) is highest in Africa, the region with the lowest income levels. The report finds that China has the world's largest fixed broadband market, overtaking its closest rival, the US, at the end of 2008.

ITU estimates show that three quarters of households now own a television set and over a quarter of people globally - some 1.9bn - now have access to a computer at home. This demonstrates the huge market potential in developing countries, where TV penetration is already high, for converged devices, as the mobile, television and Internet worlds collide.
Sami Al Basheer, Director, Telecommunication Development Bureau, said, "We are encouraged to see so much growth, but there is still a large digital divide and an impending broadband divide which needs to be addressed urgently."

New ITU standard opens doors for unified ‘smart home' network
The G.hn standard for wired home networking gained international approval at Telecom World, as the ITU approved a standard that it said will usher in a new era in ‘smart home' networking systems and applications.

Called ‘G.hn', the standard is intended to help service providers deploy new offerings, including High Definition TV (HDTV) and digital Internet Protocol TV (IPTV), more cost effectively. It will also provide a basis for consumer electronics manufacturers to network all types of home entertainment, home automation and home security products, and simplify consumers' purchasing and installation processes. Experts predict that the first chipsets employing G.hn will be available in early 2010.

G.hn-compliant devices will be capable of handling high-bandwidth rich multimedia content at speeds of up to 1 Gbit/s over household wiring options, including coaxial cable and standard phone and power lines. It will deliver many times the throughput of existing wireless and wired technologies.

Approval of the new standard will allow manufacturers of networked home devices to move forward with their R&D programmes and bring products to market more rapidly and with more confidence.

"G.hn is a technology that gives new use to the cabling most people already have in their homes. The remarkable array of applications that it will enable includes energy efficient smart appliances, home automation and telemedicine devices," said Malcolm Johnson, Director of ITU's Telecommunication Standardisation Bureau.

The physical layer and architecture portion of the standard were approved by ITU-T Study Group 15 on October 9. The data link layer of the new standard is expected to garner final approval at the group's next meeting in May 2010.

The Home Grid Forum, a group set up to promote G.hn, is developing a certification programme together with the Broadband Forum that will aid semiconductor and systems manufacturers in building and bringing standards-compliant products to market, with products that fully conform to the G.hn standard bearing the HomeGrid-certified logo.
Also agreed at the recent ITU-T Study Group 15 meeting was a new standard that focuses on coexistence between G.hn-based products and those using other technologies. Known as G.9972, the standard describes the process by which G.hn devices will work with power line devices that use technologies such as IEEE P1901. In addition, experts say that they will develop extensions to G.hn to support SmartGrid applications.

Shake up the standardization landscape
Nineteen CTOs from some of the world's key ICT players called upon ITU to provide a lead in an overhaul of the global ICT standardization landscape.

The CTOs agreed on a set of recommendations and actions that will better address the evolving needs of a fast-moving industry; facilitate the launch of new products, services and applications; promote cost-effective solutions; combat climate change; and address the needs of developing countries regarding greater inclusion in standards development.
Participants reaffirmed the increasing importance of standards in the rapidly changing information society. Standards are the ‘universal language' that drives competitiveness by helping organizations optimize their efficiency, effectiveness, responsiveness and innovation, the CTOs agreed.

Malcolm Johnson, Director, Telecommunication Standardization Bureau, ITU, said, "There are many examples of successful standards collaboration, but a fragile economic environment and an ICT ecosystem characterized by convergence make it all the more important to streamline and clarify the standardization landscape. We have agreed on a number of concrete actions that will help us move towards this goal and strengthen understanding of standards' critical role in combating climate change, while better reflecting the needs of developing countries."

The standardization landscape has become complicated and fragmented, with hundreds of different industry forums and consortia. CTOs agreed that it has become increasingly tough to prioritise standardisation resources, and called on ITU - as the preeminent global standards body - to lead a review to clarify the standardization scenario.

ITU will host a web portal providing information on the interrelationship of standards and standards bodies, facilitating the work of industry and standards makers while promoting cooperation and collaboration and avoiding duplication.

War in cyberspace?
The next world war could take place in cyberspace, Hamadoun Toure, secretary-general of the ITU warned during the conference.

"The next world war could happen in cyberspace and that would be a catastrophe. We have to make sure that all countries understand that in that war, there is no such thing as a superpower," Hamadoun Toure said. "The best way to win a war is to avoid it in the first place," he added. "Loss of vital networks would quickly cripple any nation, and none is immune to cyberattack," said Toure.

Toure said that cyberattacks and crimes have also increased, referring to such attacks as the use of "phishing" tools to get hold of passwords to commit fraud, or attempts by hackers to bring down secure networks. Individual countries have started to respond by bolstering their defences.

US Secretary for Homeland Security Janet Napolitano announced that she has received the green light to hire up to 1,000 cybersecurity experts to ramp up the United States' defenses against cyber threats.

South Korea has also announced plans to train 3,000 "cyber sheriffs" by next year to protect businesses after a spate of attacks on state and private websites.

Warning of the magnitude of cybercrimes and attacks, Carlos Solari, Alcatel-Lucent's vice-president for central quality, security and reliability, told an ITU forum that breaches in e-commerce are already running to "hundreds of billions."

One high profile victim in recent years was Estonia, which suffered high profile cyber attacks on government websites and leading businesses in 2007. Estonian Minister for Economic Affairs and Communications Juhan Parts said in Geneva that "adequate international cooperation" was essential. "If something happens on cyberspace it's a border crossing issue. We have to have horizontal cooperation globally," he added.

To meet this goal, 37 ITU member countries have joined forces in the International Multilateral Partnership against Cyber Threats (IMPACT), set up this year to "proactively track and defend against cyberthreats." Another 15 nations are holding advanced discussions, according to the ITU.

Experts say that a major problem is that the current software and web infrastructure has the same weaknesses as those produced two decades ago.

"The real problem is that we're putting on the market software that is as vulnerable as it was 20 years ago," said Cristine Hoepers, general manager at Brazilian National Computer Emergency Response Team.

Brands can play a huge part in customer retention strategies, but operators must have systems that are flexible enough to support the development of personalised and real-time customer offers

In the complex landscape of next-generation services, it is essential for wireless communications service providers (CSPs) not just to acquire new subscribers - using the most cost-effective methods - but to keep them as well. Reducing churn drives down costs and provides a platform on which to build ARPU and boost margins. Accordingly, an increasingly important element in attracting and retaining subscribers is the application of customer relationship principles integrated with the development of brand loyalty.

A truly sticky relationship between subscriber and CSP can be achieved when the subscriber recognises and develops a personal association with key elements of the CSP's brand - be it quality, unique content, unique handset offers, ease of use or even customer service excellence. Witness the response to the loss of O2's iPhone exclusivity in the UK: the news of Orange's iPhone deal, followed 24 hours later by Vodafone's intention to enter the iPhone market, was the most widely read story on BBC Technology's online news site over that 24-hour period.

Understanding how to build brand characteristics in order to give subscribers something that's personally attractive and recognisable, yet perceptively different, is a fundamental objective of today's CSP marketeers. To develop that brand affinity it is important to move as close as possible to the subscriber, developing a CRM and communications strategy which has resonance for each market segment and sub-segment. This requires quite granular demographic information for each segment, as well as the ability to market directly to subscribers or prospective subscribers.

In short, it requires real-time responsiveness and a single view of the subscriber. Data needs to be transformed into information, and information must yield intelligence. With such intelligence CSPs can create innovative mobile services packages for greater subscriber intimacy. Equally, such an approach should change how subscribers interact with their CSP over the long-term.

Modelling Post-Paid, Addressing Pre-Paid
A new business model can be built around the subscriber, their interaction with the CSP, their service preferences, even their relationships with third party content providers, enabled using the CSP's network. This is not just about the relationship between the subscriber, the CSP and its commercial partners and merchants, but could conceivably include the use of intelligence about a subscriber's family, friends, acquaintances, personal interests, careers, behaviour and lifestyle.

This model encourages the emergence of a far more interactive brand experience, with subscriber attributes becoming less defined around segments and more defined around individual attributes - indeed the elusive market of one could become the norm for the consumer wireless market in time.

Such an approach works optimally for post-paid subscribers, where the CSP has a wealth of information about behaviour and preferences. To support it across the entire subscriber base, however, a CSP needs to recognise and understand how to utilise the business intelligence potential within its OSS and BSS infrastructure.
With pre-paid only services, the relationship between the subscriber and the CSP is arguably more tenuous. Increasingly, regulators are requiring that a name, address and related contact information be validated before a CSP offers pre-paid services. In many markets, however, such information is still not captured, so knowledge about a subscriber starts and ends with the ‘International Mobile Subscriber Identity' (IMSI) and/or ‘Mobile Identification Number' (MIN), and possibly a credit-card number.

Mediation, charging and billing systems represent a cost-effective source of business intelligence, but intelligence on its own is only one part of the equation. Customer care and billing systems need to be enabled to allow customers to model their own preferences and requirements. In charging and billing terms, this could mean providing a subscriber with the opportunity to access and model their own account structure preferences, thus yielding significant cost and service improvements for both subscriber and CSP.

To support such an approach, a single "view" of subscriber interaction with the CSP can be created on the charging and billing platform, regardless of subscription type. This view is presented to users in a simple, understandable way that addresses the subscriber, their available and subscribed services and products, and a visual financial dimension. Such an approach lends itself naturally to self-care access screens.

Polkomtel, through its New Billing System (NBS) using Intec's Singl.eView convergent charging and billing platform, is enabling itself - through its Plus+ brand - to realise the true potential of 3G and HSDPA supported voice, data and next generation services.

The operator launched an innovative pre-paid service, branded 36.6, to attract the lucrative youth market; as part of this the Chill Bill service was launched. With Chill Bill, subscribers receive a PLN 10 monthly account top-up to spend on any 36.6 voice or data service, in exchange for listening to three one-and-a-half-minute advertisements, which they connect to via a freephone IVR number. Chill Bill is also available to subscribers with a zero account balance.

In the 12 months following launch Plus increased its subscriber base to 14.2 million, to become Poland's biggest operator; it was also the first operator worldwide to launch a Chill Bill style service. Phase 2 of Polkomtel's NBS project has recently commenced, to service the operator's post-paid customer base, which will enable them to launch even more innovative, new products and services at a rapid rate to both pre- and post-paid subscribers, all on a single charging engine.

Pre-Paid, Post-Paid & ‘Now-Pay'
As demonstrated by Polkomtel, divergence between pre-paid and post-paid subscription models should no longer be a barrier to a convergent CRM strategy. Previously post-event billing and customer care systems operated in batch mode, and could not support the real-time call decisions required by the pre-paid business model.

A combination of new technology and new process models now makes it possible to remove the barriers separating pre-paid and post-paid subscriptions, barriers which have made neither commercial nor technical sense for some time.

For CSPs to provide a further enhanced subscriber experience and differentiated service, they should also now be looking to offer several payment approaches.

This reflects the emergence of a new payment category associated with m-commerce transactions - ‘Now-Pay' transactions - whereby transaction request, authorisation and payment take place in real time, using models similar to well-established transaction models from the fixed Internet.

Requirements have evolved stipulating a single view of all transactions, whether pre-paid, post-paid or now-pay, to help provide superior customer care and improved operational efficiency. In parallel, the multi-service nature of new generation networks also means that concurrent voice, data and content services need to be supported in real-time on the same account.

Offering subscribers charging options in real time further strengthens a CSP's CRM credentials, improves its revenue-generation and revenue-assurance options, and reinforces that all-important brand equity and differentiation.

Obviously, such a combination of diverse payment methods and converged, combined services demands an accommodating charging, billing and revenue management system: not just a revenue system to manage accounts receivable, but one that supports the integration of real-time mediation processes and offers flexible rating, guiding and discounting, all while running in real time on highly available, distributed server configurations.

BSS/OSS for Pre-paid CRM
Having examined some of the challenges associated with providing convergent CRM for next generation services across the entire subscriber base, what types of BSS/OSS requirements do these challenges imply?

To maximise the subscriber relationship, a unified approach to customer care and subscriber management is required, with a single point of control regardless of pre-paid or post-paid subscription type, supporting:
- Service authorisation in real-time, to ensure the best subscriber experience while also providing a revenue assurance control, thus protecting a service provider's liability to its content partners
- Dynamic account selection, to enable subscribers to control payment options and decide for themselves which of their accounts is used on a per session basis
- Service-specific spending limits allowing subscribers themselves to specify spending limits and account guiding, thus putting them in control of their own accounts
- Advice of charge, giving subscribers the confidence to make use of services
- Real-time charging and billing, to enable creative pricing strategies and protect the service provider from revenue loss, including fraud 
- The ability to calculate discounts and volume based charges for pre-paid and post-paid subscribers
- High availability, to ensure consistency between the IN based call control and balance management elements and the customer care and billing system infrastructure.
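A minimal sketch of how the first four requirements might fit together in a single real-time authorisation step is shown below. It is purely illustrative: the class and method names are invented for this example, not those of any actual charging product, and a real implementation would reserve units against the network session rather than deduct immediately.

```python
# Illustrative real-time charging authorisation combining dynamic account
# selection, service-specific spending limits, advice of charge and a
# simple revenue-assurance balance check. All names are hypothetical.

class ChargingSession:
    def __init__(self, accounts: dict[str, float], limits: dict[str, float]):
        self.accounts = accounts          # account name -> available balance
        self.limits = limits              # service -> spending limit
        self.spent: dict[str, float] = {} # running spend per service

    def advice_of_charge(self, service: str, units: int, rate: float) -> float:
        """Quote the charge before the subscriber commits to the session."""
        return units * rate

    def authorise(self, service: str, account: str, units: int, rate: float) -> bool:
        """Authorise in real time against the subscriber-chosen account."""
        charge = self.advice_of_charge(service, units, rate)
        limit = self.limits.get(service)
        if limit is not None and self.spent.get(service, 0.0) + charge > limit:
            return False                  # service-specific spending limit hit
        if self.accounts.get(account, 0.0) < charge:
            return False                  # insufficient balance: revenue assurance
        self.accounts[account] -= charge  # deduct in real time
        self.spent[service] = self.spent.get(service, 0.0) + charge
        return True

# Dynamic account selection: the subscriber chooses the "family" account.
s = ChargingSession(accounts={"personal": 5.0, "family": 20.0},
                    limits={"video": 10.0})
print(s.advice_of_charge("video", units=4, rate=2.0))             # 8.0
print(s.authorise("video", account="family", units=4, rate=2.0))  # True
print(s.authorise("video", account="family", units=4, rate=2.0))  # False: limit
```

Even this toy version shows why the decision must sit in one place: the spending limit, the account choice and the balance check all interact on every request.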

CSPs need not shy away from exploring the value of the trusted relationship with their entire subscriber base, pre-paid or post-paid. Doing so helps them understand not only the commercial and technical challenges involved, but also the massive potential revenue and margin benefits of providing next generation services, and consistent, brand-differentiating customer relationship management, to every subscriber.

About the author: Ben Bannister is Director, Mobile Solutions for Intec Telecom Systems

Just as mobile operators looked set to conquer the world, along came a rival expedition that threatened to stake its claim for territory that operators regarded as rightfully theirs, says Keith Dyer

For years, operators have been grappling with the issue of how to force, enable, direct and persuade users to start using the services and applications available to them. It was hard work, but operators did not doubt that they would eventually achieve success. After all, they held the aces - the SIM card, the billing relationship, and the relevant customer data. One thing operators would not have feared even three years ago was that these problems would have been solved - but solved in a manner that threatens the very revenue streams they have had as their end-goal for so long.

For while the operators have been labouring through the polar ice on foot, dragging a heavy burden of legacy equipment behind them, their rivals (Apple, Google, Nokia, RIM) have noted the mistakes they made, and have hitched their sleds to teams of keen huskies, packed the pemmican and set off for the promised land.

But can the operators fight back? Vodafone thinks so. It has teamed up with China Mobile, SoftBank and Verizon to create a formidable potential subscriber base one billion strong. Its aim is to create a developer community to rival Google's or Apple's, but one available to users on a broad variety of handsets and OS platforms, rather than being limited to Android or the Apple OS.

The operator launched in September with two dedicated handsets developed on the LiMo Foundation platform, and also with an offering that can be downloaded to a choice of Symbian handsets, with compatibility with other handsets promised for later. In time it sees its JIL applications environment integrating, and interoperating, with other industry standard approaches such as OMTP BONDI. That would mean that more operators could join in with their own app stores, but tap into a broad range of applications without having to re-invent the wheel.

It's really a matter of opinion whether you think this is another example of operators attempting to seal off and control an area of revenue growth, or whether you think operators are finally grabbing hold of the apps opportunity in an open way. The dedicated handsets on LiMo look like a closed loop, but the commitment to wider compatibility looks like an open one. The JIL environment looks geared to the operators' advantage, but the commitment to integrate with other standards looks more open.

The operators know that they must stop bleeding revenues to Apple, Blackberry, Nokia et al. We have seen a very real indication of how they intend to stem the bleed. The question is, have they reached for the sticking plaster, or the long term cure?

Lynd Morley
I can't remember when I first met Lynd Morley, but it would have been shortly after the publishing company I was working for merged with the publishing company Lynd was working for. Although geography and the nature of our respective titles meant we didn't work together closely, the merger made us stable mates; and I found over the years that when we did catch up Lynd would always be a sound source of advice. Opinions were delivered with her wry smile never far away, and events were always kept in perspective.
And now, in unhappy circumstances, I find myself taking over from her at the title that she made her own over the last 17 years. It is a testament to her work that everyone involved has made producing this issue a much easier task than it could have been. The sense of warmth and respect for the magazine that I encountered when planning the issue was tangible, and so much of that is surely down to Lynd. We have a full appreciation of Lynd on pages 6-7 and this issue is, of course, dedicated to her memory.  

Service providers are moving to Carrier Ethernet in response to the need to handle fast-growing traffic from consumers and business, as well as to support wholesale applications such as mobile backhaul, says David Noguer Bau

Carrier Ethernet - also known as transparent or native LAN, Ethernet, Gigabit Ethernet, GigE, metro Ethernet, Ethernet private line, Ethernet virtual private line, Layer 2 virtual private network, Ethernet access, and virtual private LAN service - represents a large market. In recent years, Carrier Ethernet has been widely adopted in response to diverse demand: business services, broadband aggregation, wireless backhaul, NGN deployments and so on. According to telecoms market watchers Insight Research, US enterprises and consumers alone are expected to spend more than $27 billion over the next five years on Ethernet services provided by carriers. The market is growing at around 25% per year to 2014 - a rare double-digit growth rate in a telecoms market.

Indeed, some researchers think the current economic downturn will give the technology a boost as it's less expensive to deploy than alternative legacy equipment such as TDM and, as a consequence, is currently growing faster than overall telecom CAPEX. But Carrier Ethernet demand has not always been the same throughout its history and the network requirements have also been evolving.

The early implementations of Carrier Ethernet by service providers around the world were led by business services and the demand to lower the cost of high-speed data connections. With almost 98% of WAN data traffic originating and terminating on Ethernet ports, an Ethernet-optimised transport network offers both cost savings and simplification, replacing costly CPEs with basic Ethernet switch/routers.

These early Metro Ethernet networks were deployed as parallel infrastructures, completely separate from the existing networks and dedicated exclusively to providing a simple enterprise data service. They were based on enterprise switches relying on Spanning Tree protocols and simple VLAN domains.

The industry has realised the importance of Ethernet to service providers' success. As broadband became more popular, bandwidth requirements grew exponentially and DSLAM vendors introduced high-speed Ethernet uplinks to reduce the cost of the infrastructure. Ethernet was therefore gradually deployed to every central office, extending availability for business users.

The architecture evolved from dedicated Ethernet infrastructure to converged metro networks with full carrier grade attributes. Scalable Carrier Ethernet MPLS platforms were introduced to provide layer 2 and layer 3 services including IP VPNs and a full range of Metro Ethernet Forum services.

With the growth in wireless data services, Ethernet is again being asked to solve the backhaul challenge. Existing attributes such as scalability, huge bandwidth and availability are not sufficient on their own: Carrier Ethernet is now expected to provide clock synchronisation and stronger OAM techniques to effectively replace TDM in this space.

As we've seen, across its short history Carrier Ethernet has repeatedly reinvented itself to become the technology of choice for a variety of applications, providing bandwidth at the right cost for booming over-the-top traffic alongside the rich services enabled by MPLS. But what's next?

To avoid commoditisation and thereby increase network monetisation, service providers must quickly evolve their business models from ones based around connectivity to content and application models. Content delivery networks and cloud computing, for example, are new areas to explore for their future success. The requirements for successful service provider networks have evolved once more. The nature of cloud computing services, built around data centres and virtualised servers, introduces elasticity as a new requirement. Demand is no longer predictable, as cloud computing must be able to absorb peaks, and the subscriber will no longer be paying just for the connection: the SLA will be linked to the uptime and availability of the applications. Carrier Ethernet has the flexibility to allocate bandwidth on demand, and this can be coordinated at any time with cloud requirements. Extending server virtualisation across multiple data centres maximises CPU utilisation, but it requires scalable layer 2 connectivity to transport a large number of VLANs; this can be achieved by integrating the data centres into the Carrier Ethernet network.
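The bandwidth-on-demand coordination described here could, in principle, look something like the following sketch: a data centre reports its current demand and the network resizes the committed rate of an Ethernet virtual connection between a floor and the ceiling the port allows. The class, method and parameter names are entirely hypothetical, not any vendor's actual API:

```python
# Hypothetical sketch of coordinating Carrier Ethernet bandwidth with
# fluctuating cloud demand: an EVC's committed information rate (CIR)
# is resized between a contracted floor and the physical port ceiling.

class ElasticEvc:
    def __init__(self, cir_mbps: int, floor_mbps: int, ceiling_mbps: int):
        self.cir = cir_mbps          # current committed rate
        self.floor = floor_mbps      # minimum contracted rate
        self.ceiling = ceiling_mbps  # maximum the port can carry

    def on_demand_report(self, demand_mbps: int) -> int:
        """Resize the EVC toward reported demand, with ~20% headroom."""
        target = int(demand_mbps * 1.2)
        self.cir = max(self.floor, min(self.ceiling, target))
        return self.cir

evc = ElasticEvc(cir_mbps=100, floor_mbps=50, ceiling_mbps=1000)
print(evc.on_demand_report(400))   # 480: scaled up for a traffic peak
print(evc.on_demand_report(20))    # 50: scaled back down to the floor
```

The design point is the one the paragraph makes: because the subscriber pays for application availability rather than a fixed pipe, the network side must be able to follow demand rather than be provisioned for the worst case.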

Evolving services towards content and applications introduces new requirements, forcing the network to become content-aware. A large number of network appliances have recently been introduced to perform such advanced functions as deep packet inspection, video monitoring, session border control, firewalling, and intrusion detection and prevention - but such a variety of platforms adds complexity and incremental opex.

In the same way that Carrier Ethernet helped the convergence of multiple networks, it can now simplify advanced service deployment by consolidating all the content-aware appliances. During the second half of 2008, Juniper Networks announced, as part of its Intelligent Services Edge initiative, a large number of layer 2 to layer 7 services for the MX and M Series platforms, including security (IPS and firewalling), session border control and deep packet inspection, leveraging its in-house technologies in this space.

Juniper Networks and TelecomTV recently ran a survey of service providers in the Europe, Middle East and Africa region to identify the importance of Carrier Ethernet in their networks. The survey covered business aspects, services and technologies.

For our carrier respondents, the best strategies for growing profitability in the wireline business involved cost-cutting and creating new services, followed by converging services onto Ethernet. All of these strategies are linked to the key attributes of Carrier Ethernet.

We also asked about the most demanding services for Carrier Ethernet. There doesn't appear to be any consensus over which services and applications put the most strain on networks, as all attracted about the same proportion of votes, but business and residential services seem to be a popular choice.

When asked about deployment drivers, broadband aggregation attracted the greatest number of responses, although the other options weren't far behind. It's interesting to see that data centre connectivity was a driver for just over a third of our carrier respondents, which underlines the importance of Carrier Ethernet in this area.

As to the greatest benefits from the introduction of the technology: the ability to provide high bandwidth is still rated high or very high by all the carrier respondents, while cost, simplicity and flexibility all came in roughly equal second.

As stated by Kireeti Kompella when discussing the Purple Line story (see European Communications 2008), the traditional service provider organizational structure separates transmission from IP groups. We wanted to check how the market is organized on this point: while many carriers in our sample still maintain the separation, nearly as many already have a converged group responsible for both transport and IP services.

On the technical side, the majority of respondents are running MPLS/VPLS in their metro networks. It's also interesting to see that they have plans to extend it towards the access network to gain full advantage of a seamless MPLS deployment.
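For readers unfamiliar with what an MPLS/VPLS metro deployment actually involves, the sketch below shows the general shape of a BGP-signalled VPLS routing instance in Junos-style configuration. It is illustrative only: the instance name, interface, AS number and identifiers are hypothetical, and a real deployment would add interface, IGP and BGP configuration around it.

```
/* Hypothetical BGP-signalled VPLS instance (Junos-style sketch) */
routing-instances {
    metro-vpls-example {
        instance-type vpls;
        interface ge-0/0/1.100;            /* customer-facing logical interface */
        route-distinguisher 65000:100;     /* distinguishes routes per instance */
        vrf-target target:65000:100;       /* auto-discovery of member PEs */
        protocols {
            vpls {
                site dc-site-1 {
                    site-identifier 1;     /* unique per site in the VPLS */
                }
            }
        }
    }
}
```

The appeal for carriers is that the same MPLS control plane used in the core can signal these layer 2 instances, which is what makes the "seamless MPLS" extension towards the access attractive.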

A couple of interesting points emerged on service deployment: Carrier Ethernet services seem to be widely available in the metro, followed closely by nationwide services. Even more interesting, Ethernet services spanning multiple service provider networks are gaining traction, which reflects the maturity of these services.

Looking at future services to be delivered over Carrier Ethernet networks, the surprise is IP-VPN in first position, followed by high-value services to complete the convergence of services and network monetization.

We also asked service providers about the advanced applications they would like to see running on top of Carrier Ethernet platforms. Around 70% of respondents would like to have IP services to avoid L3 overlay networks. It is also interesting that 40-50% of respondents are looking for integrated features such as subscriber management, firewall, SBC or video quality analysis.

Carrier Ethernet is now a consolidated technology that has moved from being a simple solution for business services to the mainstream of NGN transformation. The future seems to lie in running advanced services on the same platforms to simplify the architecture, consolidate devices and converge services.

David Noguer Bau is Head of Carrier Ethernet Marketing, Juniper Networks, EMEA

To view the tables that accompanied this feature, please refer to the version of the article (on pages 30-32) published in the summer 2009 issue of European Communications magazine: http://viewer.zmags.co.uk/publication/a9f29d6e#/a9f29d6e/1


