Features

The mobile broadband battle is hotting up, but does it really matter which technology rules the day, and is the notion of competition really just a distraction?

In mobile broadband, the temperature of debate is rising rapidly. It's WiMAX vs. LTE vs. HSPA+, with a torrent of propaganda washing over sensible comment. Despite the cacophony of competing claims and over-promises, this "battle" is really just a chicane, one that diverts attention from critical business issues that will determine success or failure as the technologies evolve.

The crux of the argument centres on "Mbps", with partisans for all three contenders trotting out their peak data rates to savage opponents. In the HSPA+ camp, pundits fire out theoretical peak data rates of 42Mbps DL and 23Mbps UL. The WiMAX forces respond with theoretical peak data rates of 75Mbps DL and 30Mbps UL. LTE joins the fray by unleashing its theoretical peak data rates of 300Mbps DL and 75Mbps UL. All hell breaks loose, or so it would appear. Were it not for the word "theoretical", we could all go home to sleep soundly and wake refreshed, safe in the knowledge that might is right. The reality is very different.

Sprint has stated that it intends to deliver services at between 2 and 4 Mbps to its customers with Mobile WiMAX. In the real world, HSPA+ and LTE are likely to give their users single digit Mbps download speeds.

So in the one metric that really matters - end user experience - all three technologies will be much of a muchness. Data rates will offer a noticeable improvement on what you see via your home WiFi, or whilst surfing the web on a train, but not quite enough to herald the dawn of a new age in mobile. Despite this reality, the campaigns currently targeting end users have the same annoying ringtone as the campaign that preceded 3G. Remember all the hype around video calls? Remember the last time you actually saw someone making a video call? 3G has certainly transformed the way that people think about and use their mobile phones, but not in the way we were led to expect.

The pointless stoking of customer expectations around 3G set our industry back years, and we cannot afford a repeat performance with mobile broadband. Disappointed customers spend less money on handsets and services because the experience they were promised has not quite materialised. Disappointment is shared with friends and family and across the social networks we are trying so hard to monetise. All of this dampens uptake and diminishes expectations.

Meanwhile, the pundits bang on about their pet technology. One claims that HSPA+ might delay the deployment of LTE. Another posits that WiMAX might be adopted, predominantly, in the laptop or netbook market. A third insists that LTE could replace large swathes of legacy technologies. These scenarios might happen ... or not. The most likely, if less stirring, outcome is that they are all coming, will be rolled out to hundreds of millions of subscribers and, within five years, will be widespread.

Confusion unsettles investors, who move to other markets and starve us of the R&D funds needed to deliver mobile broadband. At street level, early adopters hold off on buying the next wave of technology while they "wait it out." Who wants to end up with a Betamax if VHS might ultimately 'win'?

What we all want are ecstatic customers who can't help but show off their device. We need to produce a 'Wow' factor that generates momentum in the market.

Where we should focus, urgently, is on the two issues that demand open discussion and debate: are we taking the delivery of a winning user experience seriously, and are we ready to cope with the tidal wave of data traffic that will follow a successful launch?

The first issue concerns delivery to the end user of a seamless application experience that successfully converts the improved data rates to improvements on their device. This can mean anything from getting LAN-like speeds for faster email downloads through to slick, content-rich and location-aware applications. As we launch mobile broadband technologies, we must ensure that new applications and capabilities are robust and stable. More effort must be spent developing and testing applications so that the end user is blown away by their performance.

The second issue, the tidal wave of data, should force us to be realistic about the strain placed on core networks by an exponential increase in data traffic. We have seen 10x increases in traffic since smartphones began to boom. Mobile device makers, network equipment manufacturers and application developers must accept that there will be capacity shortages in the short term and, in response, must design, build and test applications rigorously. We need applications with realistic data throughput requirements, and the ability to catch data-greedy applications before they reach the network.

At Anite, we see the demands placed on test equipment by mobile broadband technologies first hand. We are responding to growing demand for new tools that measure end user experience by testing applications and simulating the effects of anticipated capacity bottlenecks. Unfortunately, not everyone is thinking that far ahead. On the current evidence, applications that should be "Wow" in theory may end up producing little more than a murmur of disappointment in the real world.

So let's stop this nonsense about how one technology trounces another. Conflict may be interesting to journalists, but end users simply do not care. As an industry, our energy needs to be focused on delivering services and applications that exceed customer expectations, regardless of whether they access the network via WiMAX, LTE or HSPA+. Rather than fighting, we should be learning from one another's experiences. Do that and our customers will reward us with growing demand. And if we all get sustained growth, then don't we all win?

About the author: Dominic Rowles is business unit director at Anite.

Operators are seeing a dramatic increase in data traffic - but now need to operate smart controls to protect their revenues and the user experience, says Merav Bahat

The growing adoption of Apple's iPhone and Google's G1 phone, combined with flat-rate data plans, is creating a dramatic increase in mobile Internet traffic. This increased mobile data usage could eventually suffocate network bandwidth and clog wireless networks.

According to a Forrester Research report released last month, more than a third of consumers in Western Europe will access the Internet from their mobile phones by 2014.

In addition to the growing number of users, each subscriber will be using bandwidth-hungry services such as mobile video. The Cisco VNI Forecast predicts that Western Europe will have the most mobile video traffic of all regions in 2013, accounting for 73 percent of mobile data traffic.

Increased usage under flat-rate plans will erode profitability and put financial pressure on operators. At the same time, they will face new competition from Internet content and VoIP players such as Skype, Google, and Apple.

There is a way that mobile operators can ride the tsunami and come out ahead. By providing subscribers with mobile Internet services that improve the customer experience through personalised information, better content delivery, and the benefit of safe browsing, operators can strengthen their brand, increase customer loyalty and make more efficient use of network resources.

Don't block, manage
Mobile data traffic is expected to double every year through 2013, according to Cisco Systems. This boost in traffic is eating up bandwidth and posing a threat to everyone's user experience.

Current methods of preventing heavy traffic, such as blocking and throttling, are alienating subscribers. A more effective solution is to implement traffic management systems that notify subscribers when they are near usage quotas, provide a temporary bandwidth boost, offer a service extension, or enable subscribers to set personalised usage caps that can be updated in real time. This way, users are in control not only of their quality of experience, but also of their expenses.
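The logic behind such real-time quota handling is straightforward. The sketch below is purely illustrative, with hypothetical thresholds and action names rather than any vendor's actual implementation:

```python
# Illustrative only: a simple usage-quota check with a notification threshold
# and a user-defined cap. All names and values are hypothetical, not taken
# from any operator's actual system.

def check_quota(used_mb, quota_mb, warn_at=0.9, user_cap_mb=None):
    """Return the action to take for a subscriber's current usage."""
    cap = user_cap_mb if user_cap_mb is not None else quota_mb
    if used_mb >= cap:
        return "offer_extension_or_boost"  # e.g. temporary boost or quota top-up
    if used_mb >= warn_at * cap:
        return "notify_near_quota"         # real-time warning to the subscriber
    return "ok"

# Example: about 9.3 GB used against a personalised 10 GB cap -> notify the user
print(check_quota(used_mb=9500, quota_mb=10240, user_cap_mb=10240))
```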

At Russia's CDMA operator Skylink, such capabilities are already deployed: subscribers in Moscow can select pricing plans based on the applications they use and the time of day. For example, subscribers can choose unlimited access to social networks, email only at night, news in the morning, or mobile video after work hours. Differentiated pricing reduces overall subscriber prices, thereby increasing user adoption, and results in fairer usage by enabling subscribers to pay for the bandwidth they want when they need it.

In addition to choosing which content is accessed and when, users who have immediate visibility of their expenses can avoid sticker shock and can add quotas in real time, within their financial limits. The subscriber has more control over the mobile Internet service, and network resources are allocated more efficiently based on need and a user's willingness and ability to pay.

By offering flexible pricing plans which appeal to the needs of different segments of users, operators can maximize both their profits and network resources. Customer loyalty is also strengthened due to increased user interaction as interactive account management empowers subscribers to take charge of their service plans.

Advise, don't advertise
By enabling subscribers to opt in to downloading selected content, operators can advise rather than advertise, making targeted promotions a welcome service. For example, users browsing for information on their favorite band can receive recommendations to download the band's latest music video or buy the latest wallpaper from the operator's portal, paying through convenient methods made possible by the unique billing relationship with their operator.

Users that choose to have targeted content delivered to their device benefit from quicker, personalised navigation and more relevant offers. Content providers benefit too, since personalised content is more likely to be shared with friends, especially when subscribers are also using social media on their mobile.

The key to delivering mobile content is to make discovery unintrusive. A simple icon appearing on the mobile phone screen can be clicked whenever the user chooses, eliminating the practice of sending unsolicited SMS or MMS messages, which can be considered disruptive or even annoying and often result in automatic deletion.

By creating a highly personalised content environment that is automatically updated and easily shared, operators provide a service that adds real value and brand differentiation. By giving subscribers control over which content they receive, and when and how they access it, the mobile experience becomes highly personalised and satisfying, building long-lasting customer loyalty.

Content Control
With the average age of new subscribers dropping to seven years old, there is concern that the mobile Internet enables young subscribers to browse websites and exchange user-generated video and images without parental supervision. Recognizing the risk, a total of 26 mobile operators and content providers serving 96% of EU mobile customers signed the GSMA framework for a safer mobile Internet.

In addition, to protect their brand and build customer loyalty, mobile operators such as T-Mobile and Telefónica Movistar have introduced advanced content control solutions that ensure safe browsing.

Content control solutions include an easy-to-use, web-based application that enables parents to control and monitor web content access for each of their children, including customizing hours and content types and restricting access after bedtime or during school hours. In addition, parents can choose to grant or block access to certain applications, such as chat, video, and image exchange. Sophisticated content control gives parents more insight into where their children spend time online, encouraging an open dialogue between parents and their children.

Parents can also choose to monitor Internet behavior by receiving immediate notification if access is attempted to dangerous content categories such as suicide, drugs, pornography, or anorexia. For young children, parents can create a "walled garden" or protected environment by only giving them access to a limited number of age-appropriate sites.
These new user customization features transform content control from a basic blacklist service into a revenue-generating solution and a key differentiator for mobile operators, positioning them as friendly and socially responsible.

Smarter pipes = better service
Increased usage of Internet-centric phones, more multimedia-rich applications, and the growth in data sharing from mobile devices pose a threat to existing capacity and operator profitability.

By using technology to enable subscribers to control their own service levels, the type of content they receive, the amount of bandwidth they need, and what they are willing to pay, operators demonstrate a customer-centric strategy. The ability to build, monitor, and fund their own service puts the subscriber in the driver's seat and makes the operator a service partner, which is an important strategy for operators asserting their position in the value chain.

The operator is thus in a unique position to provide differentiating mobile Internet services, and to use them as an incentive to build lasting customer loyalty. Once subscribers become aware that they can receive more varied information, better content delivery, and the benefit of safe browsing from their operators, they will welcome smart pipes that make browsing the mobile Internet a pleasurable and worry-free experience.

About the author:
Merav Bahat is Vice President of Marketing, Flash Networks

Management World Americas will have a strong focus on explaining and understanding what cloud-based services can mean for communications management

Judging by the amount of media coverage, cloud computing is the biggest thing since sliced bread. While telecoms services may have been the original cloud (just plug in and pay for what you use), technology giants like Amazon, Microsoft and Google are now leveraging their massive data centers to offer virtual or 'cloud' computing services. With enterprise customers chomping at the bit to slash costs and simplify their lives, it's a market that seems to have great appeal. But before jumping on to the cloud, it's maybe a good idea to step back and look at some of the issues that may be barriers to making this market take off. After all, customers are handing over sensitive and valuable internal information to a third party.

Cloud services aren't really new - I remember my first real encounter with computing in the mid-1980s on timeshared ICL and IBM mainframes. Back then, few people could afford to own their own computer; today it's the management costs, and the fact that few local machines run flat out all day every day, that are driving the appeal of virtual services. Ideas like this have a place and time to be a success: application service providers (ASPs), for example, nosedived after their hype earlier this decade, but Software as a Service (SaaS) offerings like Salesforce.com have quietly succeeded and grown significantly in recent years.

There are a number of types of cloud service, but the basic premise is that they are virtual and online, and don't require the customer to purchase costly licenses and maintenance agreements from software vendors or upgrade their data centers. Cloud computing, along with cloud storage and 'cloud' broadband networking, forms the basic building blocks of a potentially major shift in the way we interact with software, with huge implications for the PC business. To be viable, it has needed good quality fixed and mobile broadband to be readily available, plus the massive computing and storage horsepower that companies like Google and Amazon have built for their core business and can now market at marginal cost.
The really big pricing advantage comes from the fact that most servers use only a fraction of their power over a 24-hour period - in other words, most of the computing horsepower is wasted. Virtual or cloud services can deliver very attractive pricing because they don't dedicate a server to particular applications: the computing is managed on a virtual basis and can be used at nearly full capacity all of the time. Couple that with instant availability, great flexibility and the promise of fewer day-to-day hassles and headaches, and what company won't want to look closely at cloud computing?

But what happens after you've turned things over to a third party? For example, if I put my data into a cloud-based service, will I ever be able to get it back out again? What are the standards, what is the service level agreement, and what recourse do I have if they let me down? Will I ever be able to switch to another provider? And how can I compare cloud pricing if there's no standard list of features that all cloud services should have? If a particular cloud interface is proprietary, how can I possibly shift my data from one provider to another, and is the concept of data portability even possible?

Hey, you, get off of my cloud
Security is a huge concern. With most reputable cloud providers, it's a fair bet that the supplier isn't looking to make a quick buck by selling your data out the back door or rifling through it, but every company has dishonest or disgruntled employees, and at least if it's your own data centre you have some physical security and control you can exercise. With cloud, it's much more difficult. I recently heard about a business exchange in Russia where anyone with stolen passports or scammed data, bank account numbers and other personal information can sell it to the highest bidder. This information isn't obtained by people tapping into phone lines; rather, it comes from inside jobs within companies trusted with this sensitive data.

Security is going to be a big barrier to uptake of cloud services, and providers are going to have to work hard to convince enterprises that this aspect is under control. Almost weekly debacles in which governments and companies admit they have 'lost' sensitive information don't exactly help. Customers have to know exactly how secure the service is, how easily they can retrieve their data, and how they can ensure nothing happens to the data when it's being hosted out of sight.

One angle here is the somewhat odd notion of private clouds. Private clouds use all of the same virtualization and high-utilization approaches as public cloud services, but are closed and internal to a specific company. It may well be that private cloud services are where large commercial enterprises first enter the market, using public cloud services for less security-conscious applications and users.

Security is also coming back to haunt the original cloud: network services. I recently had my eyes opened on VoIP security issues where innovative scammers can record conversations running over IP networks and slice and dice them to pick out words you've said and reassemble them so they can make your voice authorise a bank transfer or something else that's not on the up and up.

Even if any kind of voice recognition security software is in play, it wouldn't be of any use since it's really your voice saying those things. Today VoIP is a relatively small part of the overall voice market, but in a 4G world where everything is IP, we'll have billions of conversations happening daily that clever people will be able to do nefarious things with.
So getting back to security, I'm looking at the issue in the sense of how secure is something from an ownership perspective and how certain are you that your data isn't someone's plaything.

As we've all learned by now, just about anything can be monetized, and this becomes a very real issue when we have stolen information being trafficked out in the open.

While we've seen ASPs and SaaS operating successfully in point applications, the wider adoption of cloud computing could turn the entire computing model on its head if customer issues can be successfully ironed out. Today, most of the world's data (I've seen numbers as high as 95 percent) resides in a home or office safely tucked behind firewalls and other security measures.

Cloud computing will change all that as more and more data floats around in the ether, and how we address security and other important issues will determine how much of a success it becomes.

Blowing the clouds away
At the TM Forum we're launching a focus on cloud services, leveraging the expertise that communications companies have acquired in delivering high quality services at least cost to customers, and looking at how some of these techniques can be applied to cloud services in general. We will particularly be looking at user needs and what barriers they perceive to the take-off of cloud services.

At our Management World Americas this December in Orlando, we will be strongly featuring cloud services in a number of areas - we'll be having a special track that will focus on cloud computing, and cloud will be a hot topic at several of our conference sessions. In addition, we'll be offering a seminar entitled Cloud Computing for the Global Enterprise that's geared to C-level executives and business managers. The seminar will provide an understanding of what cloud computing is, an overview of the cloud computing landscape, the pros and cons of cloud computing and the factors you need to consider in order to be ready to embrace cloud computing and what your next steps ought to be. You will also hear about some real-life examples of how cloud computing has been adopted in other organizations and the benefits of using it.

  • Learn from Cloud experts who will discuss the different business models and the roles Communication Service Providers and Cable MSOs can play
  • Hear how Cloud based services are driving innovation and the implementation and management issues around performance visibility, network and data security, identity management and ensuring service levels
  • Get the latest on the importance of standards for the Cloud

If your organisation is looking seriously at cloud services, there's no better place to learn all about the subject and add your voice to the debate than Management World Americas. We hope to see you there.

See more information at www.tmforum.org
About the author: Keith Willetts is Chairman and CEO, TM Forum

 

Operators that have VoIP peering relationships need to ensure that any service issues are identified and corrected before they affect service levels and customer relationships. This means having end-to-end visibility of networks and performance through comprehensive monitoring

VoIP services are proving to be an extremely popular way of reducing telecommunications costs and adding new functionality. The very success and continued growth of VoIP services is putting ever greater pressure on operators' network resources, however. And one area in which this pressure has become apparent in a very public way is in VoIP peering, or interconnection.

Peering is a well established process by which service providers exchange traffic with one another by establishing direct connections between their networks, rather than routing it through the public Internet. The most popular business model for this is one based on mutuality: peering is free-of-charge as long as both partners exchange relatively equal amounts of traffic.

However, high-profile IP peering disputes - such as the one between Cogent and Sprint - have shown that relationships can quickly break down when the ratio of traffic traded by the partners swings out of balance. If no commercial agreement can be reached, the ‘disadvantaged' operator may, in a worst case scenario, switch off the connections to the peering partner's network - leaving customers high and dry.

To mitigate the risks associated with these often fragile relationships, best practice for operators is to implement measures to ensure that the peering ecosystem delivers the highest level of service quality to the subscriber.

While the majority of IP peering traffic to date has been Internet-related, real-time VoIP and video services are increasing in volume and importance. Such real-time services are critically reliant on high-quality connections to prevent distortion, noise and dropped calls. As a result, VoIP service providers must ensure, day to day, that their traffic is accorded the same Quality of Service (QoS) on partner networks as their own.

The quality challenge
There are two fundamental reasons for entering into VoIP peering relationships.
First, peering helps reduce capital and operational costs, thanks to the minimal interconnection fees and lower equipment and facilities expenditure.

Second, peering helps improve service quality. A VoIP call may pass between PSTN and IP networks several times, and must be transcoded at every intersection. By minimizing such transitions, VoIP peering reduces the number of transcoding steps, and the opportunity for packet loss, delay and jitter, as the call moves from one network to another.

While it is critical that peering partners establish service level agreements (SLAs), quality targets and problem-resolution processes, enforcing them has been a challenge. Without adequate tools to monitor service quality continuously, faults and non-compliance with SLAs can often not be identified and addressed quickly enough to prevent subscriber traffic from being affected.

The alternative of waiting for customers to complain about service quality is hardly an acceptable, or sustainable, approach however - especially when customer acceptance of VoIP is critical to its growth, and service quality is a key differentiator. It is therefore not enough to rely solely on reactive troubleshooting tools to address problems.

Analyses of trouble-tickets have shown that it can take hours to properly identify a VoIP-related problem within an operator's own network. The current lack of adequate monitoring tools means that placing test calls to replicate a fault is frequently the only way to trace its source. The fault-finding process becomes distinctly more complex - and lengthy - for VoIP peering, where sessions may transit several partner networks. Worse still, a fault on one partner's network is also likely to have a knock-on effect on other partners' networks.

Proactive monitoring 

This issue highlights the need for end-to-end visibility of networks and service performance through comprehensive, proactive monitoring and automated processes that enable issues to be identified, and rectified, before they surface among customers.

Monitoring VoIP quality throughout the peering ecosystem is not just of interest to network operations. It also has direct value for network engineering and planning, as well as for front-office functions like customer service.

Being able to segment voice quality data by peering partner is critical to the value of monitoring. Operational staff need to have a bird's-eye view of the peering ecosystem and the ability to drill down to analyse traffic flow and individual call routes.

As many operators have evolved their operational and business support infrastructure in line with the growth of their service offerings, network engineering and planning information is often spread across multiple management systems. To enable efficient planning, this information needs to be available centrally from a monitoring solution.

By defining service quality thresholds within the monitoring system, an operator can make sure that alarms are raised automatically if QoS levels drop below a specific level, or if faults arise. This ability enables quality engineers to address problems at a much earlier stage.
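As an illustration of that threshold idea, the sketch below raises alarms when per-partner voice-quality samples cross pre-set limits. The metric names and threshold values are hypothetical, not parameters of any particular monitoring product:

```python
# Illustrative sketch of threshold-based alarming on voice-quality metrics,
# segmented by peering partner. Metrics and thresholds are hypothetical.

THRESHOLDS = {
    "mos": 3.5,              # alarm if Mean Opinion Score drops below this
    "packet_loss_pct": 1.0,  # alarm if packet loss exceeds this percentage
    "jitter_ms": 30.0,       # alarm if jitter exceeds this many milliseconds
}

def evaluate(sample, partner):
    """Return a list of alarm messages for one sample from a peering partner."""
    alarms = []
    if sample["mos"] < THRESHOLDS["mos"]:
        alarms.append(f"{partner}: MOS {sample['mos']:.2f} below {THRESHOLDS['mos']}")
    if sample["packet_loss_pct"] > THRESHOLDS["packet_loss_pct"]:
        alarms.append(f"{partner}: packet loss {sample['packet_loss_pct']}% too high")
    if sample["jitter_ms"] > THRESHOLDS["jitter_ms"]:
        alarms.append(f"{partner}: jitter {sample['jitter_ms']} ms too high")
    return alarms

print(evaluate({"mos": 3.2, "packet_loss_pct": 0.4, "jitter_ms": 12.0}, "partner-A"))
```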

In this way, operators can save valuable engineering time and staff costs: in a recent operator study, Empirix found that staff time spent on troubleshooting was reduced by more than one-third after the operator deployed a proactive monitoring strategy. The same study also highlighted that monitoring can reduce service and partner activation costs by automating interoperability testing for new partners and service verification for subscribers.

Another area in which monitoring can make a major difference is customer service. By addressing faults more proactively, operators can reduce the volume of in-bound calls and their contact centre agents will have fewer calls to handle. Monitoring can also help predict levels of trouble-tickets more accurately - enabling the operator to dimension contact centre resources more realistically and save costs as a result.

Avoiding the ‘blame game'

With relationships between peering partners often being unable to withstand conflict - and well-defined, enforceable SLAs being hard to impose - it is relatively easy for operators to switch from one peering network to another. Most operators have common peering points with each other anyway, to ensure redundancy for disaster recovery, and as traffic is mostly exchanged free of charge, there is no financial incentive to stay loyal to any one partner.

In this fragile environment, disputes and fault conditions should not be protracted, nor should relationships with partners be tarnished by playing the ‘blame game'. It is therefore critical that service quality can be measured equally stringently on both sides of the interconnect. Proactive VoIP peering monitoring can play a crucial role in making sometimes fragile peering relationships stronger and more manageable. It enables service providers to exert greater control over the peering ecosystem by establishing an independent basis for negotiating and measuring service levels.

There are also some practical benefits. For instance, most VoIP service providers use least-cost routing and peering routes that change dynamically through the day. Monitoring could help them determine which interconnect partner offers the best voice quality for the lowest cost. This visibility could act as an incentive for partners to deliver high-quality connections. As communications services become commoditized, subscriber decision-making will be increasingly influenced by more intangible differentiators such as service quality.

Actionable data can make a critical contribution to VoIP service quality and ensure greater customer satisfaction, with a positive impact on the bottom line.

Neil McKinlay, Director, Product Marketing, EMEA, Empirix, www.empirix.com

Achieving sustainable revenue growth requires tight controls on quality of service across a range of services - and carriers must choose the right platform

Telecommunications service providers are seeking ways to increase Average Revenue Per User (ARPU) through value-added services such as IPTV, Video on Demand (VoD), video collaboration and interactive applications such as online gaming. These services must be delivered in conjunction with VoIP and Internet access, over a single subscriber connection. With the cost of customer acquisition at ~$300 and annual per-subscriber multi-play service revenues at over $1000, subscriber loyalty is critical. Poor Quality of Service (QoS) results in subscribers switching to competing offerings (cable, satellite, etc). Adequate service quality must therefore be maintained. Of course, service offerings must also be price-competitive, so tight control over costs (CapEx and OpEx) is important.

Early approach to QoS
The basic challenge of delivering multi-play services arises because the services have different characteristics. Voice has stringent latency and jitter requirements, although bandwidth per channel is low. High quality video has low jitter tolerance, however, bandwidth per channel is much higher. Latency tolerance varies with the video service - VoD tolerates higher latency than video gaming. Data traffic (e.g., email, browsing) is largely agnostic to jitter and latency.

Prior to the era of ubiquitous rich content, social networking and file sharing, it was assumed that Internet traffic would remain a tiny component of overall traffic. Hence it was believed that multi-play QoS could be guaranteed with minor over-provisioning of bandwidth.

Bandwidth no QoS guarantee
With initial multi-play service deployments, it became apparent that though Internet use generated far less traffic than voice or video, it is bursty. With increasing rich content, the spikes get bigger. With the recent widespread use of bandwidth-hogging applications (YouTube, eDonkey, BitTorrent, etc.), the situation has worsened. Several users accessing such applications simultaneously place a significant load on the network. In networks where traffic is not intelligently managed, excess traffic is dropped when provisioned bandwidth is exceeded. Indiscriminate traffic drops result in information loss for all types of traffic, and potentially for all subscribers, even those not using those applications. From the perspective of the end user, loss in Internet data traffic is hardly perceptible, because re-transmission mechanisms recover losses before presentation. Conversely, even small traffic losses cause unacceptable quality degradation of voice and video.

Bandwidth over-provisioning at very high levels may guarantee QoS, however, it is often infeasible in many access networks and certainly not affordable.

Looking ahead, there is no crystal ball to predict what new services will be deployed, what new applications will be invented, and how usage patterns will evolve. What is certain is that, for the foreseeable future, Internet traffic and its temporal unpredictability will increase. According to studies published in the Cisco Visual Networking Index, Internet video (excluding P2P video file sharing) already constitutes a third of all consumer Internet traffic, and is expected to grow at an annual rate of 58% over the next four years. New security threats will also emerge.

Networks must be engineered to continually adapt to changing conditions. Wholesale equipment replacement is unaffordable. Software upgradeability is an imperative.

Attributes of NGNs
Networks must be built with platforms that offer adaptability, intelligence, security, control, and scalability.

Adaptable platforms enable the creation of "learning" network elements. With learning network elements, better traffic management, different protocols and services can be supported without hardware upgrades. Rather, improvements are enabled by software upgrades. For example, new traffic management algorithms are deployed as software upgrades to adapt to new traffic patterns. New protocols are also handled with software upgrades. An upcoming example is the imminent migration of access networks to IPv6, driven by the Internet address shortage problem of existing IPv4 based access nodes.
Intelligence encompasses service and subscriber isolation, traffic management (buffer management, policing, traffic shaping and scheduling), and the ability to dynamically configure algorithms for different network conditions. Service and subscriber isolation involves identifying traffic based on service type or origin, subscriber, etc, and separating it into distinct queues. Traffic management algorithms make discard decisions and regulate traffic flow in various queues to meet service-specific requirements modulated with subscriber-specific SLAs. Many algorithms have been devised for each function, e.g. Weighted Round Robin (WRR) for traffic scheduling. Software-based implementations of these algorithms have the advantage that they can be refined and dynamically tuned for specific network characteristics. Ideally, it should be possible to select from a menu of algorithms, so that service providers can appropriately select the algorithm and tune the parameters for each node in their network.
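As a toy example of the scheduling function mentioned above, the sketch below shows how WRR queue weights translate into relative service shares. It is purely illustrative: the implementations discussed here run in network-processor data planes, not in application code like this.

```python
from collections import deque

# Toy Weighted Round Robin (WRR) scheduler: each service queue is served up to
# 'weight' packets per round, so weights set the relative priorities.
# Illustrative only; queue names and weights are hypothetical.

queues = {
    "voice": deque(["v1", "v2"]),            # low bandwidth, latency-sensitive
    "video": deque(["tv1", "tv2", "tv3"]),   # high bandwidth, jitter-sensitive
    "data":  deque(["d1", "d2", "d3", "d4"]),
}
weights = {"voice": 3, "video": 2, "data": 1}

def wrr_round():
    """Serve one WRR round and return the packets transmitted, in order."""
    sent = []
    for name, weight in weights.items():
        for _ in range(weight):
            if queues[name]:
                sent.append(queues[name].popleft())
    return sent

print(wrr_round())  # e.g. ['v1', 'v2', 'tv1', 'tv2', 'd1']
```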

In broadband access, security often refers to stopping Denial of Service (DoS) attacks. Security is implemented through a variety of mechanisms - Access Control Lists, rate control of host-directed traffic, etc. An adaptable platform is essential to accommodate new threats, protocols and services.

Controls typically limit network misuse with respect to SLAs or regulatory frameworks. Policing and shaping algorithms guarantee users do not exceed bandwidth allotment of their SLAs. Service providers also want to better manage traffic related to bandwidth-hogs. In these cases, application recognition is used to identify the specific traffic that is filtered or de-prioritized.

Scalable platforms address the development of cost-effective portfolios for a wide range of performance and functionality, deployable across a broad spectrum of service providers.
A simple case study demonstrates the importance of these attributes. In WRR, queue-specific weights specify the relative priorities of services, but experience has shown that WRR alone is inadequate for triple-play QoS. To guarantee triple-play QoS, LSI recently implemented a sophisticated multi-level hierarchical scheduling algorithm, which could be deployed as a software upgrade on existing nodes built on the LSI APP communications processor.

Choosing the right platform
Fixed-function and Ethernet switch devices have significant drawbacks in many respects. Both include hardwired traffic management, and most do not meet even today's requirements. Adaptability and functional scalability are non-existent or highly restricted. Hence they are not suitable for subscriber-facing linecards in next generation network elements. Note that Ethernet switches are suitable, and often used, in network elements for other purposes such as internal interconnects.

Programmable platforms offer these desirable attributes and are recommended for building learning network elements. However, they differ greatly in their degree of support, so a deeper assessment is recommended. All the attributes should be affordable from a total cost of ownership (TCO) perspective (TCO includes the cost of development, maintenance and upgrade of software through the product lifecycle). The architecture should support predictable performance in a variety of scenarios. Programmable, hardware-based scheduling with multi-level hierarchy support is critical. For known standards and worldwide service provider requirements (DSL Forum TR-101, IPv6 enablement, a menu of traffic management algorithms, etc), pre-packaged, platform-optimized software must be available. For adaptability, it is equally important that the platform vendor is committed to investing in a software roadmap. For differentiation, the architecture must be easy to program, and source code with modification rights, complemented by robust tools, must be available. For long-term requirements, the programmable platform evolution roadmap must not only consider new hardware functions but also incorporate a simple software migration strategy.

An example of a platform with all the desirable attributes is the LSI Broadband Access Platform Solution including APP communications processors and Broadband Access Software. The LSI Tarari Content Processors represent a good example of a platform to implement application-specific controls.

It is indeed possible to build cost-effective, future-proofed, next-generation networks that meet the requirements of multi-play services.

About the Author: Sindhu Xirasagar is Product Line Manager, Networking Components Division, LSI

Clearwire's CTO says backhaul is the highest cost of a network deployment, so what solutions provide the best answers for the future of mobile backhaul, asks Alan Solheim.

The evolution from voice-only cellular systems to 3G+ (HSPA, WiMAX, LTE...) has revolutionized the way we interact, share information, work and play. In order to deliver on this promise, the entire network - from the handheld device to the core of the network - has to change at every level. One area that has not only changed, but has been turned inside out, is backhaul. Long considered just a cost of doing business, backhaul is starting to be seen by many carriers as an enabler for new service delivery and, in some instances like Clearwire in North America, a competitive advantage over traditional players. Clearwire has built the largest greenfield WiMAX network to date, and its CTO, John Saw, says: "It's what I call the elephant in the room that nobody talks about. Backhaul is probably the highest cost in deploying a network. Anyone who wants to roll out a real wireless broadband network nationwide needs a cheaper solution than current models." The same sentiment is true for anyone planning to build a 3G+ network: without radical change to the backhaul, the applications will be starved for bandwidth, the user experience will be unacceptable, and the network economics will not be favorable.

So what are the required changes? The most obvious one is in bandwidth. 2G voice networks need only a single E1 connection to the base station to provide the required capacity. The advent of GPRS and EDGE to provide data services resulted in an increase to as many as 4 E1s per base station, but leased circuits or low capacity TDM microwave radios were able to provide the increased capacity. The introduction of High Speed Packet Access (HSPA) and HSPA+ has driven the capacity requirement per base station up by a factor of 10, straining the throughput capability of these types of microwave radio and making leased E1 circuits cost prohibitive.

As 3G base stations began to support native Ethernet interfaces, enabling the use of packet microwave or leased Ethernet backhaul, a variety of approaches were adopted to support legacy E1 interfaces. One method has been to leave the legacy E1 transport for the 2G base stations intact while adding an overlay to support new HSPA/HSPA+ base stations. This has been most prevalent among operators who have used leased E1 circuits for the 2G backhaul. Alternative deployments have included hybrid TDM/Ethernet microwave, or packet microwave with pseudowire. Finally, fibre has generally been used when available at the cell site; however, fibre penetration is very low, even in developed countries.

With the advent of 4G technologies (WiMAX and LTE), the network is IP end to end, and the backhaul load per base station has again gone up by almost another order of magnitude. Furthermore, in order to deliver the desired user experience the base station density has to increase: between 1.5X and 2.5X depending upon the amount of radio access spectrum available to the operator. The net result is a requirement to deploy a new backhaul technology that can deliver the necessary capacity, is packet based, and can easily add the new base stations as needed. Again, if fibre is present, it is the preferred technology. If fibre is not already present at the base station, however, the relative economics of fibre vs microwave must be taken into account.
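To put rough numbers on that progression, the arithmetic below uses the standard 2.048 Mbps E1 rate and the multipliers quoted above. It is indicative only, not an operator measurement:

```python
# Indicative per-site backhaul demand, built from the standard E1 rate
# (2.048 Mbps) and the multipliers quoted in the text. Real dimensioning
# varies widely by operator, vendor and traffic mix.

E1_MBPS = 2.048

per_site = {
    "2G voice (1 x E1)":       1 * E1_MBPS,
    "GPRS/EDGE (up to 4 E1s)": 4 * E1_MBPS,
}
per_site["HSPA/HSPA+ (~10x)"] = 10 * per_site["GPRS/EDGE (up to 4 E1s)"]
per_site["4G (~10x again)"]   = 10 * per_site["HSPA/HSPA+ (~10x)"]

for label, mbps in per_site.items():
    print(f"{label:25s} ~{mbps:7.1f} Mbps")
# With 1.5x-2.5x more sites needed for 4G coverage, aggregate backhaul
# demand grows even faster than these per-site figures suggest.
```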

Business Case for Fibre
The cost to deploy fibre is dominated by the installation expense and is thus distance sensitive: the longer the fibre lateral that must be constructed, the higher the cost of the backhaul. Microwave, on the other hand, does not change significantly in cost as distance increases; however, there is an ongoing annual charge for the tower space rental (if the towers are not owned by the operator) and the backhaul spectrum lease. The break-even distance for a fibre construction varies with local conditions, but is typically less than 1000 meters. Given the large number of new sites and the low fibre penetration, the majority of base stations will be served by packet microwave, so it makes sense to look at packet microwave systems in more detail.
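The break-even logic can be written down in a few lines. The cost figures in the sketch below are hypothetical placeholders (the article itself notes that the break-even point varies with local conditions), so only the structure of the comparison matters:

```python
# Fibre-vs-microwave break-even sketch: fibre cost grows with lateral distance,
# microwave cost is roughly flat in distance but carries recurring lease fees.
# All cost figures are hypothetical placeholders, chosen only for illustration.

def fibre_cost(distance_m, build_cost_per_m=100.0):
    return distance_m * build_cost_per_m

def microwave_cost(years=10, capex=20_000.0, annual_lease=8_000.0):
    # tower space rental and spectrum lease dominate the 10-year total
    return capex + years * annual_lease

def break_even_distance_m(build_cost_per_m=100.0, **mw_kwargs):
    return microwave_cost(**mw_kwargs) / build_cost_per_m

print(f"Break-even lateral length: ~{break_even_distance_m():.0f} m")
# With these placeholder numbers, fibre wins only for laterals shorter than
# about 1 km, consistent with the 'typically less than 1000 meters' figure above.
```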

Business Case for Packet Microwave
The 10-year cost of ownership for packet microwave is influenced by the capital cost of the radios only to a minor extent (even though this tends to be the focus of the purchasing process). The majority of costs are driven by lease costs for space and for spectrum. These costs are very dependent upon local regulatory conditions, and on whether or not the operator owns the tower and site locations. A 10-year TCO analysis should therefore be done for every network that is considered. Current generation packet microwave systems have a number of features that can be used to mitigate these costs.

First of all, packet microwave systems are not limited to the SDH hierarchy of bit rates and can deliver throughput of up to 50 Mbps per 7 MHz of spectrum with average-sized packets. (Note that quoted throughput increases with smaller packets, and some manufacturers quote these artificially high rates; in practice, throughput at the average packet size is a much better measure of real-world system capability.) Channel sizes are software defined and can be up to 56 MHz, if allowed by the regulator, for a throughput of 400 Mbps. Polarization multiplexing can be used to double this capacity, but at a cost per bit that is more than double. A feature known as adaptive modulation (the ability to adjust the modulation and/or coding to optimize throughput under varying propagation conditions) allows these systems to deliver maximum capacity under normal conditions and maintain high-priority traffic under poor conditions. Both of these translate to higher throughput, reduced antenna sizes and higher spectral utilization, resulting in a lower cost per bit.
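The scaling implied by those figures is easy to check using the article's own numbers; actual throughput depends on packet size, modulation and coding, as noted above:

```python
# Simple scaling check using the figures quoted in the text. Actual throughput
# depends on packet size, modulation and coding conditions.

mbps_per_7mhz = 50.0
channel_mhz = 56.0

single_polarisation = mbps_per_7mhz * (channel_mhz / 7.0)  # ~400 Mbps
dual_polarisation = 2 * single_polarisation                 # ~800 Mbps, at more than 2x the cost per bit

print(f"56 MHz channel:        ~{single_polarisation:.0f} Mbps")
print(f"With polarisation mux: ~{dual_polarisation:.0f} Mbps")
```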

The Importance of Ring and Mesh Network Topologies
Ring and mesh network topologies can further reduce the network cost per bit by decreasing the required redundancy costs and minimizing the average antenna size. Traffic engineering, which allows the use of statistical multiplexing, makes use of all the available paths in a ring/mesh network and leverages packet-based prioritization to maintain priority traffic in the event of a failure condition. This can increase the effective network capacity by at least a factor of 4, further reducing the network's average cost per bit. Ease of installation and the reduction in site lease costs can be addressed by the use of an all-outdoor system design, where the RF and baseband electronics are integrated into a single outdoor unit, eliminating the need for co-location space in the cabinet. The net result of these factors is at least a 10-fold reduction in the network cost per bit.

The Future of Backhaul
Looking into the future is always subject to error; however, we should expect a continuation of the trend towards smaller cell sizes in order to deliver higher capacity per user and make better use of the radio access spectrum. This will require ongoing innovation in the backhaul in terms of cost and integration levels. Capacity in excess of 1 Gbps per link is required to allow packet microwave to be used for the aggregation layer in the network and not just the final link to the base station. Traffic patterns that link base stations directly to one another, rather than hubbing all the traffic back to a central site (as proposed in the LTE standards), will further drive the need for ring/mesh network topologies rather than conventional hub-and-spoke designs. Finally, spectral efficiency improvements at all channel sizes are required in order to deliver higher levels of network capacity without exhausting the available spectrum. We are by no means at the end of the road when it comes to innovation and evolution, if not outright revolution, of the backhaul network.

About the author: Alan Solheim is VP Product Management, DragonWave.

Olivier Suard argues that policy control is shifting away from a network-centric and defensive approach to more flexible applications, such as bandwidth management and roaming cost control, with a tight link to charging. This, he asserts, is how communications service providers can realise the real potential of policy control.

In the past year interest in policy control has grown dramatically, to the point where it has become one of the hottest subjects for communications service providers (CSPs). Driving this is the fact that many CSPs are experiencing a growth in data (IP) traffic that is far outstripping the growth in their revenue, and affecting the performance of their network. This is a trend that is clearly unsustainable, so CSPs are looking for ways to redress the situation. They not only want to make better use of available resources, but also offer more personalized services to their customers, with the hope of generating more revenue.

Policy control is seen as the means to achieve these goals. It has a multitude of definitions, objectives and solutions, but broadly speaking, policy control is about managing the access to services by subscribers.

Historically, policy control can be thought of as having two very separate origins. In the fixed environment, policy control was about network resource allocation. An early example of policy control in action is the use of Classes of Service (CoS) in MPLS networks to differentiate the delivery of enterprise services. In the mobile space, policy control was about charging - for example, taking action when a pre-paid customer's credit runs out.

Now, with the advent of the broadband era, policy control is stepping out of those silos and coming of age. Most importantly, policy control is shifting from a network-centric and defensive approach to one that puts the customer experience first.

To achieve this, policy control has become far more dynamic, taking a multitude of factors into account in real time. These factors include not only the type of service but also the current network conditions, the user's profile (e.g. business or consumer, gold or standard, high or low spender), the type of device being used to access the service and even the location of the user.

A good illustration of a flexible application that ensures a high customer experience is bandwidth management. The initial problem can still be seen as a classic network-centric one: CSPs want to ensure that the bandwidth available to users does not become squeezed as a result of excessive use by a minority of subscribers who do not contribute proportionately to revenues, such as heavy peer-to-peer (P2P) download users. When that situation occurs, the majority of users experience a reduced quality of experience (QoE), which may lead to churn. This problem is most acute in mobile networks, where bandwidth is clearly limited.

State-of-the-art bandwidth management solutions allow CSPs to monitor usage in real time and, when congestion occurs, to dynamically adjust the access for specific services and specific users (at cell level for mobile operators) to free up capacity. Such a solution is not just about defending the network - it's about providing the optimum broadband experience for the majority of users.

Policy control solutions can also be used to help subscribers manage their spending, by informing them when certain pre-set, personal credit limits are reached. At first glance, it may seem counterproductive for CSPs to help their subscribers control their spending, but in actual fact, the consequences of a customer receiving an unexpectedly large bill (commonly referred to as "bill-shock") is likely to be far more damaging to the CSP in terms of churn, bad publicity and liability for interconnect charges (regardless of any settlement reached with the subscriber). Furthermore, this "cost control" feature can be offered as a service, enhancing the personalization of the relationship between the CSP and the customer.

This last example hints at an important aspect of policy control: its tight link with charging. This is most clearly illustrated by the differentiated price plans now being offered to broadband customers. For example, one APAC operator offers a monthly flat fee of $30 that covers 10 Gbytes of usage, with additional usage charged at one cent per 10 Mbytes and the total capped at $40. Enforcing such a price plan means that policy control needs to be aware of both the price plan and the usage for individual customers.
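Translated into a rating rule, the APAC plan quoted above looks roughly like the sketch below (a simplified illustration; real charging systems deal with currency rounding, taxes and pro-rating):

```python
# Rating sketch for the price plan quoted above: a $30 flat fee covers 10 GB,
# additional usage costs one cent per 10 MB, and the total is capped at $40.
# Simplified illustration only.

def monthly_charge(usage_mb):
    flat_fee = 30.00
    included_mb = 10 * 1024      # 10 GB allowance
    overage_rate = 0.01 / 10     # one cent per 10 MB
    cap = 40.00

    overage_mb = max(0.0, usage_mb - included_mb)
    return min(cap, flat_fee + overage_mb * overage_rate)

print(monthly_charge(8_000))    # within allowance -> 30.00
print(monthly_charge(15_000))   # ~4.8 GB over     -> ~34.76
print(monthly_charge(60_000))   # heavy use        -> capped at 40.00
```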

This connection with charging is recognised by the 3GPP standard for policy control. Originally drawn up in the context of IMS (IP Multimedia Subsystem), this standard defines a number of key components, including the Subscriber Profile Repository (SPR), the Policy and Charging Rules Function (PCRF - the part of a policy control solution that makes the decisions), the Policy and Charging Enforcement Function (PCEF - the part that implements the decisions), as well as the Offline Charging System (OFCS) and Online Charging System (OCS) to handle post-paid and pre-paid charging, respectively.

Many CSPs currently investigating policy control solutions are demanding compliance with the 3GPP and other related standards. However, policy control deployments typically take place in existing environments, and some compromises need to be made. For example, a bandwidth management solution could be deployed using existing network capabilities to throttle usage rather than introducing a standards-compliant PCEF.

It should also be noted that the 3GPP standard is about the logical capability (or functionality), not the physical architecture. So a 3GPP-compliant implementation need not have separate physical "boxes" for each of the components. As mentioned earlier, for example, there is a tight link between the PCRF on the one hand, and the OCS and OFCS on the other. So implementing these together in one solution makes a great deal of sense.

Ultimately, CSPs must not forget that policy control should be about the customer experience and driven by marketing needs, rather than about network issues. Therein lies the real potential of policy control.

Olivier Suard is marketing director, OSS, Comptel.

With the recent announcement of the proposed T-Mobile and Orange merger in the UK, a number of analysts have commented on the possibility of a consolidation of the mobile sector, not only in the UK but in a number of countries. At the same time, competition authorities are increasingly wary of the emergence of large firms with significant market power that have both the means and incentives to act in an anticompetitive manner.

So what would the proposed deal entail and should we be worried about large telecoms firms?

A "synergistic" deal?
Two of the five UK mobile network operators are proposing to merge their operations to become the largest operator in the UK, with a 37% share of the market and 28.4 million customers, ahead of O2, Vodafone and 3UK. The underlying rationale for the deal is based on synergies that have been valued by the merging parties at about £4bn. These benefits would arise as a result of network sharing, operating under one brand, and the centralisation of some operations. According to the merging parties, the deal would result in better coverage and more efficient operations, leading to a better service for customers. The current rivalry in the sector has significantly eroded margins, as mobile operators often struggle to amortise their increasingly high subscriber acquisition costs over a long enough period before their customers switch.

Why would such mergers be of concern?
Economic theory suggests that without Government intervention some firms would become so dominant that they would be able to set prices at anticompetitive levels, drive competitors out of the market (and/or acquire them) and benefit from a quasi monopoly position. While this would provide the shareholders of the dominant entity with great wealth (think Rockefeller) it would result in artificially high prices that would penalise the whole economy and therefore would not be in the interest of its citizens. That's why competition authorities have been tasked with keeping an eye on abuse of dominance cases and have such far reaching powers to enforce their decisions. The antitrust case against Rockefeller's Standard Oil in 1910 resulted in a forced break up ... into 47 pieces.

Mergers often result in increased market concentration and competition authorities through their merger control powers have to ensure that, overall, consumers will not be harmed.

Striking the right balance
The UK was, until now, the only country in Europe with five infrastructure-based operators. The merger would therefore result in a market with four operators, a configuration found in a number of other European countries. Orange and T-Mobile will no doubt try to persuade competition authorities that the merger is pro-competitive and would not lessen the level of rivalry within the sector. While it may sound surprising at first, Vodafone and O2 are unlikely to put up a big fight to prevent the merger from going through, as fewer players in the market means less competition overall, which is good for them. So it will be up to the competition authorities to clear the deal in one form or another. Most probably, a number of conditions aimed at ensuring that the market remains competitive will be imposed on the merged entity. Selling distribution outlets, auctioning portions of a customer base, redistributing spectrum or mandating wholesale access are some of the measures that might be proposed to alleviate the concerns of competition officials. Operators such as 3UK are also insisting on more efficient number portability processes to ensure that switching from one operator to another is easier. Given that 3UK's 3G network is shared with T-Mobile and that its 2G roaming partner is Orange, it is clear that it will be monitoring the merger process carefully. 3UK's exit options (to be acquired by another UK mobile operator), however, have been seriously curtailed.

Towards a more concentrated sector?
Over the next six months or so, competition authorities will have to analyse the likely impact of the proposed merger on competition and decide under which conditions it should be cleared. These conditions, if any, will have to be designed to ensure that prices are kept at a competitive level and that innovation is not harmed. No doubt Yoigo in Spain and Wind in Italy will be following these developments closely ... as they may be next in line for merger talks.

Benoit Reillier is a Director and co-head of the European telecoms practice at global economics advisory firm LECG. The views expressed in this column are his own.

Turn on the evening news or look at any Internet news site and you'll probably see at least one story optimistically declaring the recession over. We recently marked one year since the collapse of US-based financial services firm Lehman Brothers, which many point to as the cataclysmic event that turned the eyes of the world to the looming threat of a total global financial meltdown.

Sure, we haven't sunk into a depression like many analysts predicted, but we're still a long way from climbing out of the hole we've gotten ourselves into. And this applies to the communications industry as well as to automotive, banking or any other vertical market.

Relatively speaking, communications has weathered the storm fairly well. We didn't see a mass exodus of people running away from their service provider contracts. In fact, consumers appear to be jumping on the smartphone bandwagon as eagerly as ever, as we saw with the recent launches of the iPhone 3GS, the Palm Pre and other new devices. Apple's iTunes App Store was also able to claim 1 billion downloads after just nine months of service. But that doesn't mean communication service providers (CSPs) can just sit back and wait for the money to keep rolling in.

There's a reason the tagline for our upcoming Management World Americas conference in Orlando this December is "Surviving to Thriving: New Business Models, New Services, New Profits." We're saying it's not enough to simply make it through the financial downturn in one piece. Just making it out alive doesn't constitute success, and in fact if that's all you're planning to do, you may as well write your epitaph now.

In this brave new world of global 4G mobile Internet access with blazing speeds capable of supporting applications and services we can only dream about today, it's far from a given that providers will continue to be the money-making operations they've been in the past.

With new networks and capabilities, CSPs also have to face the very real specter of market saturation. They may have a share of the worldwide $1.4 trillion communications market, but that number is growing very slowly. And the ones that are currently gaining the most ground in the market are providers of what we call "over-the-top" services, that is the video, music, games and other services that ride along an incumbent carrier's infrastructure. While there is no reason that incumbents have to be relegated to bit carriers in this scenario, in many cases the revenue sharing just isn't there, and they are being cut out of potentially lucrative deals.

Adapt or Die
As traditional CSPs watch the success of the iTunes App Store and similar services, where small companies or people in their basements come up with useful (and sometimes ridiculous) applications that people are willing to pay 99 cents for, where does that leave the providers who own the means of delivery to the end customer?

If they are smart, they will take a page from Charles Darwin and keep evolving and changing to avoid extinction. This means starting with reducing operating costs, which all providers should have begun doing well over a year ago. The next step is to reduce the operational complexity within their organization. This includes streamlining the OSS/BSS infrastructure and processes, which I'll admit is no small task, but it's absolutely vital to staying afloat and stopping the hemorrhaging of funds.

Last, but certainly not least, is to create new opportunities to bring in revenues. This goes well beyond just focusing on existing end user services and thinking over-the-top services are something to be viewed with fear and loathing.

Quite the contrary, if incumbent operators take that attitude, they will surely fall by the wayside. But if they break the habits of past business models and embrace new areas like cloud computing, personalized services, mobile advertising and more, they will quickly move from being dreaded bit pipes to actually being enablers of innovative new services and opportunities. It won't be easy, but if providers can make this transition, they will be able to survive this and future economic slowdowns and thrive no matter what comes their way.

Keith Gibson says that presence is on a journey from the green and red icon to providing the basis for the future of communication

At the turn of the century presence was seen as the coming saviour of communication. Starting off as a simple available/not available green and red icon, the industry was quick to recognise its massive potential. The vision of having presence integrated everywhere was compelling - presence to update your selected community on exactly where you are, what device you are using, what mood you are in, what you are doing; even to check the status of your mail order from last Tuesday. The next logical evolution was to have presence embedded rapidly in the operator's network as a key integration point for information.

However, the reality of this "all seeing, all knowing" utopia proved much too hard to achieve. The main drawback was that presence would have to be integrated into every service on the network to achieve this holistic approach - a feat that was too much for the industry to bear. Building a presence server to meet all these demands was very complex and costly, and no single service requiring presence could justify the cost of such a server, so the business case for presence fell apart.

Technical hype also contributed to the slow growth of presence. Much of the industry was focused on the functionality presence could bring, and a great deal of time, effort and resources were devoted to developing these functions. Unfortunately, most of these projects failed to place the user at the centre of the business model. If they had, they could have created value-added services that would have driven rapid consumer adoption of, and demand for, presence.

As a result of these drawbacks, many companies simply opted for siloed presence solutions, or deployed IP applications without presence. The few exceptions were some corporate-scale applications and stand-alone uses of presence, such as internet and mobile instant messaging.

The vision of integrating presence at the heart of the network to show status information across services appeared to die, or at least be set aside. The market dipped and many vendors shelved their presence server development programs.

But in the past twelve to eighteen months, the tides have turned and the benefits of presence are beginning to become a reality. Presence has been rolling out onto networks slowly, particularly where it can add value to an end-to-end service. For example, some operators such as Mobilkom are using the technology to deliver innovative services such as intelligent call routing across IP and traditional networks in order to deliver calls to the device that the user is most likely to be using at the time. 

Presence has also become a major piece of the Rich Communications Suite (RCS) initiative being driven by the GSMA, which now has over 80 operator and vendor members. The GSMA RCS project is a collaborative effort to speed up and facilitate the introduction of next generation rich communication services over mobile networks. Presence is key to RCS because RCS orders communication around the address book, so that each individual can see how their contacts can be reached, along with their social presence information. This is important to operators because it promotes communication.

Some key aspects of RCS include:
- Enhanced address book: Allows users to share key aspects of their social presence with their address book. These include hyper-availability (like a ‘shout', it indicates the user wants to talk right now), a portrait icon, status text (what am I feeling like today?) and a favourite link (a personal website, for example). The presence server manages all of this information and lets the user decide who sees it. The enhanced address book also displays the services available for each contact; the presence server tracks each user's capabilities so that only available services are displayed against the contact being viewed (a simple data-model sketch follows this list).
- Content sharing: Users can exchange video, still images, or files whilst on a voice call or outside of one. Again the presence server tracks the capabilities of each user to receive these types of calls, ensuring the caller is never disappointed when they try to communicate.
- Enhanced messaging: presents a conversational threaded view of SMS and MMS within the phone client and also adds chat services using instant messaging. Again presence allows the user to see who is available for a chat session enhancing the possibility of group discussion.
- Fixed mobile convergence: RCS works across mobile and ‘fixed' networks, allowing the user to have one address book that is visible, whether they are using their mobile phone, netbook or PC.
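
To make that data model concrete, the following is a minimal, hypothetical sketch (in Python) of the kind of social-presence record a presence server might hold and filter per watcher. It is not Colibria's implementation or the actual RCS schema, just the idea in miniature.

from dataclasses import dataclass, field

@dataclass
class PresenceRecord:
    """Hypothetical social-presence record kept by an RCS-style presence server."""
    user: str
    hyper_available: bool = False        # the 'shout': user wants to talk right now
    portrait_icon: str = ""              # link to the portrait image
    status_text: str = ""                # "what am I feeling like today"
    favourite_link: str = ""             # personal website link etc.
    capabilities: set = field(default_factory=set)    # e.g. {"chat", "image_share"}
    allowed_watchers: set = field(default_factory=set)

    def view_for(self, watcher: str) -> dict:
        """Return only what this watcher is authorised to see, plus the
        services that can actually be offered against this contact."""
        if watcher not in self.allowed_watchers:
            return {}                    # not authorised: expose nothing
        return {
            "hyper_available": self.hyper_available,
            "portrait_icon": self.portrait_icon,
            "status_text": self.status_text,
            "favourite_link": self.favourite_link,
            "services": sorted(self.capabilities),
        }

# The enhanced address book asks what Bob may see about Alice.
alice = PresenceRecord(user="alice", hyper_available=True,
                       status_text="Stuck on a train",
                       capabilities={"chat", "image_share"},
                       allowed_watchers={"bob"})
print(alice.view_for("bob"))   # Bob sees Alice's status and her available services
print(alice.view_for("eve"))   # {} - Eve is not authorised

The essential point is the last method: the same record is filtered differently for each watcher, which is what lets the user control who sees what while the address book shows only services that will actually work.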

Operators are already launching RCS market trials, and full deployments are expected in 2010.  One such operator is Korea Telecom, which recently announced its early RCS deployment. Many European operators are also in trials, or beginning them. RCS could become the next big evolution in mobile services, and at its core will be the presence server - this time driven by service need rather than architectural possibilities.

With presence no longer a niche application, but forming the backbone of Rich Communication Suite applications and services, it is on its way to revolutionising communication and making it more streamlined, social and multi-dimensional. Telecom companies around the world are researching innovative ways to tap into the functionality of presence. For example, some cable companies in the US would like to use presence to promote messaging between their subscribers by allowing them to show their friends what they are currently watching on TV. That is just one specific application of presence - the options are limitless.

One initiative that is driving the evolution of presence is the FMCA (Fixed-Mobile Convergence Alliance) open presence trial, in which six major operator members are trialling the interconnection and interworking of rich presence applications. Within the trial, the FMCA is also exploring technical, operational and commercial models for presence-enabled services in areas such as unified communications, VoIP, mobile IM, IPTV, social networking, content sharing, networked gaming and wholesale services. Such initiatives will ultimately give users the capability to extend presence well beyond service silos and network boundaries, finally making presence a global feature.

So once again, presence has been ear-marked to become a component in the network that is integrated with all services.  However, history has shown us how this vision can stumble, so the industry must work together to ensure the roll-out of presence is a success this time around. The focus must be on the business needs where the business case is improved by the addition of a presence server, and the cost is justified. It is time for presence to grow up and deliver the potential we have been waiting to see. The next steps will be both critical and exciting to watch.

About the author: Keith Gibson is CEO of Colibria.

Mobile operators must contend with stagnating revenue growth resulting from reduced consumer spending. To improve profit margins in this environment, companies must find ways to simplify their operations and refocus scarce resources on activities that offer the best returns. European Communications runs extracts from a white paper from CapGemini that looks at how mobile operators can succeed in this quest for margin

CapGemini's Telecom, Media and Entertainment team analysed various cost reduction measures across three key areas: network operating expenditure (OPEX), subscriber acquisition (SAC) and retention costs (SRC), and the costs of servicing customers. It modeled the potential savings that could accrue from adoption of these measures, and its analysis shows that a typical mobile operator in Europe is positioned to improve EBITDA margins by up to four percentage points within four years through the judicious implementation of these measures. However, there are significant challenges in doing so.

The context
Telecom operators in Europe are facing some of their toughest times in recent months. After a period of high growth, mobile telcos are now faced with a credit crunch that is impacting their growth plans and an economic slowdown that is affecting consumer spending. For some time, strong growth in mobile revenues had diverted the focus of operators from driving down costs. In a growing and competitive market, operators had focused on launching a wide portfolio of voice and data products, technology upgrades and ramping up their customer service functions, resulting in complex structures and systems.

In light of the current revenue challenge, mobile operators now have to shift their focus from growth strategies to simplifying their businesses and driving down costs to sustain healthy margins. This is particularly true since operating costs for most operators have been rising gradually over the past few years, and there appears to be scope for targeted OPEX improvement measures.

Network Opex
For the mobile operator that we have modeled, network OPEX accounts for over 26% of total OPEX. We have identified three key areas of network expense that operators can focus on in their drive to cut costs. We estimate that cost savings initiatives focused on network OPEX are likely to result in a 2.7-3.8 percentage point rise in EBITDA margins, depending on the extent of the measures deployed. The EBITDA uplift is loaded towards the end of the four year period due to the progressive deployment schedule that the measures entail.

Backhaul Ownership
With rapid increases in backhaul capacity driven by network upgrades, most operators are caught in a situation where their increasing payouts to backhaul owners are driving down their margins. This has prompted some operators to venture into building their own transmission networks.

For instance, Vodafone Germany has embarked on an initiative to build its own backhaul and estimates that this shift could save it up to €60 million annually in OPEX. In Italy, Vodafone has already migrated over 80% of its backhaul to self-owned links.

However, savings through backhaul ownership are closely tied to the traffic requirements of the operator. We have modeled our analysis on the assumption that base stations would require a backhaul capacity of up to 6 E1 lines, as opposed to the current average of 2. As such, we believe operators that are seeing a strong upswing in traffic, or those already operating at high capacity utilisation rates, are likely to benefit most by taking ownership of their backhaul.

Our analysis reveals a potential upside of between 1 and 1.85 percentage points in EBITDA margins from implementing this measure. In bringing backhaul in-house, operators will need to follow a phased approach in which they first identify the sites, prioritise them based on capacity utilisation forecasts and finally select the appropriate technology, microwave or fibre.
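
As a rough, purely illustrative sketch of the underlying economics (the lease, capex and opex figures below are hypothetical placeholders, not numbers from the white paper), the per-site comparison looks something like this:

# Hypothetical per-site comparison: leased E1 backhaul vs. a self-built microwave link.
# All cost figures are illustrative placeholders, not CapGemini's model inputs.

E1_MBPS = 2.048                                   # capacity of one E1 line

def leased_cost(e1_lines, annual_lease_per_e1=6000.0):
    """Annual cost of leasing E1 backhaul for one base station."""
    return e1_lines * annual_lease_per_e1

def owned_cost(capex=25000.0, lifetime_years=10, annual_opex=1500.0):
    """Annualised cost of a self-built link (straight-line amortisation plus running costs)."""
    return capex / lifetime_years + annual_opex

for e1_lines in (2, 6):                           # today's average vs. the modeled requirement
    print(f"{e1_lines} x E1 ({e1_lines * E1_MBPS:.1f} Mbit/s): "
          f"leased {leased_cost(e1_lines):,.0f} vs owned {owned_cost():,.0f} per year")

The point of the comparison is simply that the owned-link cost is largely fixed, so the more E1-equivalents of traffic a site carries, the stronger the case for taking ownership of the backhaul.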

Energy Savings
Our analysis suggests that by deploying focused initiatives around improving cooling efficiencies and reducing energy consumption at mast sites, operators stand to realise tangible savings.

Integrating these measures into our cost savings model, we believe that savings of up to 4.5% can be obtained on an operator's electricity OPEX. These savings translate into a direct uplift in EBITDA margins of 0.16-0.19 percentage points. We have modeled these savings as a one-time measure implemented on existing sites.

Network Sharing
For larger operators, the key advantage is the opportunity to monetise assets that have already significantly depreciated, thereby offering them a steady revenue stream. For smaller operators, the case for network sharing appears even more attractive as these operators can convert significant parts of their CAPEX into OPEX and in the process also achieve a faster rollout.

An analysis of the potential savings that can accrue through sharing of network elements, including the Radio Access Network, reveals that operators with moderate coverage can achieve EBITDA upsides of around 1.0 percentage point, while operators with nationwide coverage can achieve an EBITDA improvement of over 1.4 percentage points.

Subscriber Acquisition and Retention Costs
Subscriber acquisition and retention costs (SAC/SRC) form the single largest OPEX element for most mobile operators. Handset subsidies account for the bulk of these costs with a 69% share while dealer commissions account for almost 15%.

Increasing Contract Duration
The duration of contracts offered by operators is closely tied to the amount of handset subsidy that the operator incurs. Consequently, operators are experimenting with varying the duration of the contract to reduce the impact of high subsidies for feature phones and smartphones.

In the European context, we have modeled a scenario where the current average of 18-month contracts is extended to 24 months. An increase of over 40% in customer lifetime value can be achieved by extending the duration of the contract. However, consumers are likely to resist any extension of contract durations. In order to drive uptake of extended contracts, operators will need to create loyalty benefit plans that encourage customer stickiness.
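
A back-of-the-envelope sketch shows why extending the contract lifts lifetime value: the fixed acquisition cost (dominated by the handset subsidy) is spread over more months of margin. The ARPU, margin and subsidy figures below are invented for illustration and are not the white paper's model inputs.

def customer_lifetime_value(months, arpu=30.0, gross_margin=0.45, handset_subsidy=60.0):
    """Undiscounted contribution over one contract: monthly margin earned over the
    contract term, minus the up-front handset subsidy. All figures are illustrative."""
    return months * arpu * gross_margin - handset_subsidy

clv_18 = customer_lifetime_value(18)
clv_24 = customer_lifetime_value(24)
print(f"18-month contract: {clv_18:.0f}")
print(f"24-month contract: {clv_24:.0f} ({(clv_24 / clv_18 - 1) * 100:.0f}% uplift)")

With these placeholder numbers, the six extra months add roughly 44% to the contract's contribution, which is in the same ballpark as the uplift cited above.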

Our analysis shows that by extending contracts and implementing progressive loyalty benefits, operators can realize an EBITDA uplift of between 0.44 and 0.48 percentage points by the end of the fourth year. However, challenges arise in managing revenues, customer expectations and the distribution of subsidies. Nevertheless, the challenges are not insurmountable and the measure, in itself, offers scope for operators to embark on a new low-cost subsidy path.

Direct Sourcing of Handsets
Our analysis shows that large operators with significant purchasing power can reduce the costs involved in handset sourcing by procuring handsets directly from ODMs. ODMs have in-house design and manufacturing facilities and offer a significantly faster turnaround time than traditional OEMs. Moreover, the lack of a strong brand for the ODMs, and the relative scale of the operator, gives the latter significant bargaining power in negotiating handset procurement. Indeed, operators such as Vodafone have experienced a price differential of over 16% between an OEM and an ODM when sourcing comparable budget handsets.

We have modeled a progressive rise in sourcing low cost handsets from ODMs, with the upper limit capped at 35% of budget handsets by the end of year four. Our analysis reveals a potential upside of 0.12-0.2 percentage points to EBITDA margins by the end of year four. Sourcing higher volumes and feature-rich handsets from ODMs is likely to result in significantly higher savings for operators. However, a key challenge for operators will be to ensure sustained after-sales support from the ODM.

Customer Service Costs
Our analysis of cost cutting measures focused on customer service reveals three initiatives that have not been implemented extensively by operators in Europe and that offer potential for margin uplift:

Paperless Billing
Research on the cost differential between paper bills and e-bills shows a differential of up to 59%. Building these savings into our analysis shows scope for an EBITDA margin uplift of 0.1 percentage points for operators at the end of year four, assuming a 3% rise in the number of subscribers opting for an e-bill. Operators could strive to increase uptake through focused promotions and by providing enhanced functionality in e-bills to drive up savings.

Hutchison (3) Austria initiated a drive to migrate its customers to e-bills in mid 2007. At that point, 3 was sending out over 480,000 paper invoices per month, each having between 5 and 100 pages. Having seen limited success with opt-in strategies, 3 opted for aggressive opt-out measures, with strong results: it achieved a conversion rate of over 85%, against a conversion target of 65%.

Unstructured Supplementary Service Data (USSD) based Self-Care
USSD is a real-time messaging service that functions on all GSM phones and has seen multiple deployments across emerging markets. Operators could build mobile portals accessed through USSD and benefit from the lower costs and faster query resolution that the service offers. By offloading some of the most common customer service queries, such as those around bill payments, balance and validity checks, and the status of service requests, operators can reduce the burden on their contact centers and, consequently, the cost involved in servicing each consumer. However, a lack of regulation and limited interoperability between operators for roaming consumers have resulted in the service seeing limited traction in Europe. Operators will need to collaborate among themselves to ensure uptake of USSD services.
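
The logic behind such a self-care portal can be very simple. The sketch below is a hypothetical, gateway-agnostic illustration of how a handful of common queries might be answered without an agent; the menu, subscriber data and phone number are invented for the example.

# Hypothetical USSD self-care menu: answers routine queries (balance, validity,
# bill status) in-session, so they never reach the contact centre.

ACCOUNTS = {  # toy subscriber database, keyed by MSISDN
    "+447700900123": {"balance": 4.20, "valid_until": "2010-01-31", "last_bill": "paid"},
}

MENU = "1. Balance\n2. Validity\n3. Bill status"

def handle_ussd(msisdn, user_input):
    """Return the text to display for one step of a USSD session."""
    account = ACCOUNTS.get(msisdn)
    if account is None:
        return "Unknown subscriber. Please call customer services."
    if user_input == "":            # initial *code# dial: show the menu
        return MENU
    if user_input == "1":
        return f"Your balance is GBP {account['balance']:.2f}"
    if user_input == "2":
        return f"Your credit is valid until {account['valid_until']}"
    if user_input == "3":
        return f"Your last bill is {account['last_bill']}"
    return "Invalid option.\n" + MENU

print(handle_ussd("+447700900123", ""))    # shows the menu
print(handle_ussd("+447700900123", "1"))   # balance query answered without an agent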

A Time Slot Approach to Customer Calls
Our measure envisages a scenario where customers are assigned specific time slots during which they can contact customer service, with calls outside the time slot being treated as regular charged calls. However, such a measure will have to be tempered by a minimum Quality of Service (QoS) guarantee, and offset by incentives (such as free minutes) for a drop in QoS.

By utilising the time slot approach to customer calls, we believe that operators could achieve a reduction of over 37% in the contact centre resources deployed. However, the implementation of this measure is likely to be challenging, given the complex analytics that drive the slot designs and the need to manage customers' apprehension. Nevertheless, we believe that sound implementation of this measure will result in an EBITDA margin uplift of 0.2 percentage points by the end of year four.
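
To give a flavour of the analytics involved, the sketch below uses the standard Erlang-C queueing formula to show why smoothing call arrivals reduces staffing: the same daily workload needs fewer agents when it arrives evenly across assigned slots than when it piles into a busy-hour peak. The traffic figures are invented for the example and are not from the CapGemini model.

import math

def erlang_c(agents, offered_load):
    """Erlang-C probability that a caller has to wait, for offered_load in Erlangs."""
    if agents <= offered_load:
        return 1.0                      # unstable queue: everyone waits
    top = (offered_load ** agents / math.factorial(agents)) * (agents / (agents - offered_load))
    bottom = sum(offered_load ** k / math.factorial(k) for k in range(agents)) + top
    return top / bottom

def agents_needed(offered_load, max_wait_probability=0.2):
    """Smallest number of agents keeping the probability of waiting below the target."""
    n = int(offered_load) + 1
    while erlang_c(n, offered_load) > max_wait_probability:
        n += 1
    return n

# Same total workload, two arrival patterns: an unmanaged 20-Erlang busy-hour peak
# versus a slotted, steady 10-Erlang load across the day.
print("Agents for a 20-Erlang peak hour:", agents_needed(20))
print("Agents for a steady 10-Erlang load:", agents_needed(10))

Staffing has to cover the worst hour, so flattening the arrival profile is where the headcount saving comes from; the real design problem is choosing slots that actually achieve that flattening without alienating customers.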

Conclusion
In conclusion, telcos will need to concentrate on gaining tactical benefits from cost reduction initiatives in the near term and on creating sustainable cost advantages, with an emphasis on operating margins, before they can look at creating long term value through growth strategies. Operators will have to identify complexities in their systems, processes and cost structures and develop a roadmap to systematically mitigate them. Mobile operators will also need to identify the activities that offer the maximum value realisation and redirect financial and operational resources to these activities to create lean and efficient businesses.

About the authors:
Jerome Buvat is the Global Head of CapGemini's Telecoms Media & Entertainment Strategy Lab.
Sayak Basu is a senior consultant in the TME Strategy Lab.

Centralising service and policy control management will mean operators can manage traffic and users in a holistic manner across 3G and 4G networks

Mobile data services over 3G networks are proving successful in the market. 3G subscribers account for 350 million of the 3.5 billion mobile subscribers worldwide, with more than 30 million being added every quarter. As 3G services grow in popularity, mobile operators face several challenges.

First, unlimited flat rate plans and competitive pricing pressures are accelerating data usage, putting pressure on business models as revenues fail to keep pace with mobile data traffic growth.

Second, the network is evolving to include not only person-to-person communication, but person-to-machine communication and machine-to-machine communication as more subscribers and devices become connected.

Third, the popularity of application stores and the proliferation of new multimedia applications are changing what subscribers expect from operators. They want more personalised services, access to a broader range of applications, and more interactive features to engage with their social networks.

Finally, new devices such as smartphones, smart meters, and healthcare devices offer improved ways to communicate and connect, access the Internet, interact and collaborate, entertain, and mobilise the enterprise. 

Mobile operators are realising the need to optimise their network and service architectures to continue to grow capacity, lower costs, improve network performance, manage devices, and meet subscriber expectations. 

The LTE Opportunity
LTE technology is emerging as the next generation wireless technology that will lead the growth of mobile broadband services in the next decade. Its adoption by operators around the world has the potential to generate economies of scale beyond those of any previous generation of wireless networking technology.

LTE is critical to delivering the lower cost per bit, higher bandwidth, and subscriber experience needed to address the challenges of mobile broadband. It promises a whole new level of mobile broadband experience as services become personal, participation becomes more social, behaviour becomes more virtual, and usage reaches the mass market. It offers:
- Significant speed and performance improvements for multimedia applications at a lower cost;
- Enhanced applications such as video blogging, interactive TV, advanced gaming, mobile office, and social networking; and
- A wider variety of devices such as smartphones, netbooks, laptops, gaming and video devices, as well as machine-to-machine supported applications including healthcare, transportation, and weather devices.
 
To be a significant contributor to end-to-end service creation and to enrich the subscriber experience, the LTE network must support an agile, scalable and open approach. This will depend on:
- The network's capacity to support peak user data rates, high average data throughputs, and low latency;
- The ability to leverage existing 3G infrastructure investments with a network migration path to LTE;
- Ensuring service continuity for existing revenue-critical 3G services, while supporting the rollout of new 4G services; 
- Balancing insatiable demand for mobile data services with LTE rollout plan dependencies on spectrum availability, and a device, services and applications ecosystem; and   
- Innovative service plans that encourage mass market adoption.

The LTE Evolved Packet Core (EPC) plays an important role in meeting these challenges and is a fundamental shift towards a service-aware, all-IP infrastructure. It has the potential to deliver a higher quality of experience at a lower cost, and improved management of subscribers, applications, devices and mobile data traffic.

Mobile operators are beginning to invest in LTE radio, transport, and core infrastructure to address the growth in mobile data traffic. However, bandwidth is a limited resource in much the same way as electricity. In the utility sector, smart meters are being used to manage electricity consumption by encouraging consumers and businesses to increase usage during off-peak hours with lower rates and decrease usage at peak hours.

Operators will need to adopt a similar approach by supplementing capacity improvements with controls that manage the flow of, and demand for, data. This is where the key control components of the EPC, including the Home Subscriber Server (HSS), Policy Controller (PCRF) and inter-working functions (3GPP AAA), come into play. Together they form the central control plane: they include the main repository for subscriber and device information, provide authorisation and authentication functions for services, apply policies to manage network resources, applications, devices and subscribers, and ensure inter-working with other access networks such as EVDO, WiMAX and WiFi.

As the cornerstone for mobile personalisation and management, these ‘smart' subscriber, service, and policy controls enable mobile operators to moderate data traffic and entice subscribers with innovative, personalised services.
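
As a highly simplified, hypothetical illustration of what 'smart' policy control means in practice (this is not the 3GPP Gx/Rx interface, just the idea in miniature): a PCRF-style function combines what the subscriber repository knows about a user with the application in use and returns the treatment a session should receive.

from dataclasses import dataclass

@dataclass
class SubscriberProfile:
    """A small subset of what an HSS-like subscriber repository might hold. Illustrative only."""
    tier: str                  # e.g. "premium" or "basic"
    monthly_quota_gb: float
    used_gb: float
    roaming: bool

def policy_decision(profile, application):
    """Toy PCRF-style rule set: map subscriber state and application to a session treatment."""
    if profile.used_gb >= profile.monthly_quota_gb:
        # over quota: throttle best-effort traffic rather than cutting the user off
        return {"max_downlink_kbps": 256, "charging": "flat", "priority": "low"}
    if application == "video" and profile.tier == "premium":
        return {"max_downlink_kbps": 8000, "charging": "flat", "priority": "high"}
    if profile.roaming:
        return {"max_downlink_kbps": 1000, "charging": "per_mb", "priority": "normal"}
    return {"max_downlink_kbps": 4000, "charging": "flat", "priority": "normal"}

user = SubscriberProfile(tier="premium", monthly_quota_gb=5.0, used_gb=5.2, roaming=False)
print(policy_decision(user, "video"))   # the quota rule wins: the session is throttled

Centralising rules of this kind across 3G and LTE access is what lets an operator manage traffic and users consistently as subscribers move between networks.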

Getting Ready for LTE
Many leading operators are deploying subscriber, service, and policy controls in 3G networks. Over 65% of mobile operators polled in a recent Yankee Group survey require policy control currently or within the next 12 months to manage mobile data growth in their 3G networks and are not waiting for LTE. Operators can achieve significant benefits by centralising control across mobile access technologies. Benefits include smoother service migration and better management of mobile data traffic and applications such as the ability to direct traffic and applications to the optimal access network.

LTE is well positioned to meet the requirements of next-generation mobile networks as subscribers embrace multimedia services and as M2M applications are adopted. It represents a significant opportunity for mobile operators to meet the challenges and opportunities of exponential mobile data growth by complementing capacity and infrastructure investments with smart subscriber, service and policy control. This approach enables operators to control capital costs, manage the flow of data traffic, and create innovative and personalised service offers that entice subscribers and ensure profitability. 

About the author: David Sharpley is Senior Vice President, Bridgewater Systems.

Building a future-proof fibre optic infrastructure is as much about the business model you will follow, as it is the technical decisions you face

FTTx is vital if we are to fulfil the huge demand for large bandwidths in tomorrow's world. One of the options is to use FTTC (Fibre-to-the-Curb) to bring the DSL port closer to the customer. However, DSL transmission over copper wire can only be pushed so far. Fibre optic transmission to the customer using FTTH (Fibre-to-the-Home), on the other hand, will provide sufficient bandwidth for the next 20 years.

The telecommunications industry has had more than ten years' experience with active and passive optical networks, and debates about the advantages and disadvantages of these networks have been running for at least that long. Fibre optic networks can be laid directly to households (Fibre-to-the-Home, FTTH) using Passive Optical Networks (PONs) or Active Optical Networks (AONs).

The key technical difference between active and passive access technology is that passive optical networks use a passive splitter, whereas active optical networks are built on an Ethernet Point-to-Point architecture. The objective of both passive and active optical networks is to bring the fibre as close as possible to, or ideally right into, the subscribers' houses and apartments. This FTTH solution is technically the best option as regards transmission quality and bandwidth.

Business case challenge
Using fibre optic cable promises virtually unlimited bandwidth. However, with very few exceptions, the network operator today has only the copper wire line in the last mile. So if DSL technology is no longer adequate, new optical cables always have to be laid.

The high investment costs of setting up this infrastructure, combined with telecommunications providers' falling revenue, mean it is often difficult to put a business case to investors and network providers' management boards. The ICT industry is still spoilt with returns on investment of one to three years, but expansion of FTTH and FTTB networks (regardless of whether PON or Ethernet Point-to-Point technology is used) sometimes takes more than 10 years before a return on investment is seen. Nevertheless, business cases vary greatly depending on the application, the conditions at the time, and whether passive or active access technology is used for the FTTH rollout.

Passive Optical Networks (PONs)
As regards the core network, the first network element of a PON is the OLT (Optical Line Termination unit), which provides n x 1 Gbit/s and n x 10 Gbit/s Ethernet interfaces towards the core network and PON interfaces towards the subscribers. The PON types used today are usually Ethernet PON (EPON) and Gigabit PON (GPON), and in future Gigabit Ethernet PON (GEPON) or WDM-PON. EPON installations are currently found primarily in the Far East, GPON on the other hand in the US and Europe.

In PON's case, the signal on the fibre optic to the subscribers is partitioned by a passive splitter into optical subscriber connections. The splitter is either located in an outdoor housing or directly in the cable run, for example in a sleeve. In other words, the network structure is a Point-to-Multipoint structure (PMP).

In an FTTH network architecture, subscriber access is implemented via an optical network termination (ONT) that terminates the optical signal and feeds it into one or more electrical interfaces, such as 100BaseTx, POTS or ISDN. ONTs with VDSL interfaces are available for FTTB to bridge the existing subscriber access lines in the property. In this case, each subscriber receives a VDSL modem as network termination.

Ethernet-Point-to-Point (PtP)
In Ethernet Point-to-Point network structures, every subscriber gets their own fibre, which is terminated at an optical concentrator (AN: Access Node). Metro Ethernet switches or IP edge routers, which were not originally conceived for the FTTH/FTTB environment, are normally used here. KEYMILE designed MileGate, its Multi-Service Access Node (MSAN), for this type of application. MileGate can be called an optical DSLAM because the system has a very high density of optical interfaces and at the same time fulfils all the demands made of a DSLAM. MileGate uses standard optical Ethernet interfaces based on 100 Mbps (for example 100BaseBX) or Gigabit Ethernet. Because of this transmission interface, mini or micro DSLAMs that distribute data within individual properties can also be used in FTTB architectures.

All network topologies can be implemented with both PON and Ethernet PtP. However, a network operator should decide early on which architecture will still be in a position to respond to demand in 15-20 years. Infrastructure investments should achieve an ROI over about 10 years, so modifications should not have to be made after just five.

Initially, network operators save real money with a Point-to-Multipoint structure (of the type required for PON systems), as they have to lay fewer fibres than if they used a Point-to-Point structure from the very beginning. However, the optical splitter is a weak point. This network component might have to be replaced if customers need greater bandwidth or, if the worst comes to the worst, even be bypassed with additional fibres to upgrade to a Point-to-Point structure.

A comparison of passive optical and Point-to-Point structures:
PtP technology is much better in terms of bandwidth per subscriber: the maximum bandwidth per subscriber is a lot higher, and the flexibility to allocate different bandwidths to individual subscribers (e.g. for corporate customers) is also greater than when PON systems are used. Depending on the splitting factor, a PON connection via fibre can end up supplying less bandwidth per subscriber than a VDSL2 connection over copper wire. When it comes to increasing bandwidth, too, PtP architecture is superior to the PON's PMP architecture: just by swapping boards, subscribers can obtain an upgrade, without the network architecture or the service to other subscribers having to be changed.
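
A rough worked example of the split-factor point (the line rate and split ratios used are typical published figures, and the arithmetic is only illustrative):

# Illustrative shared-bandwidth arithmetic for a GPON tree vs. a dedicated PtP port.
gpon_downstream_mbps = 2488            # nominal GPON downstream line rate, shared by the tree

for split in (32, 64):
    per_subscriber = gpon_downstream_mbps / split
    print(f"1:{split} split -> about {per_subscriber:.0f} Mbit/s per subscriber if all are active")

ptp_port_mbps = 100                    # a dedicated 100BaseBX port per subscriber
print(f"PtP: {ptp_port_mbps} Mbit/s dedicated, upgradable per subscriber (e.g. to Gigabit Ethernet)")

At a 1:64 split the shared figure drops to roughly 39 Mbit/s, below what VDSL2 can deliver on short copper loops, which illustrates the comparison made above.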

Within a PON tree, all the subscribers sit on the same optical segment. If an ONT causes faulty synchronisation, or produces an optically indefinable signal, remote localisation of the malfunctioning ONT might not be possible. With PtP, on the other hand, both the fibre path and the end customer's ONT can be clearly assessed, and in the worst case the laser on the AN for each individual subscriber can be deactivated from the control centre. As regards availability, PON is at a disadvantage compared with PtP because, to date, there are no plans to connect customers redundantly within one PON.

Currently, when the same functions are offered, there are no significant differences in the cost of the subscribers' terminal equipment (CPEs, ONTs). Because PtP Ethernet installations use standard Ethernet interfaces, however, substantial price falls are to be expected as more and more devices flood the market. Despite standardisation, ONTs in today's PON environment are not interchangeable between different manufacturers' systems, which means the selection of models is restricted and the economies of scale from higher production volumes are negligible. However, in terms of price per subscriber, and because the optical paths can be used in several ways, PON is at an advantage compared with Ethernet PtP.

This advantage is eaten up by the subsequent costs of upgrades, because an entire PON tree is affected by an upgrade. Thanks to the better granularity of the ANs and the separation of customers (PtP), customised upgrades can be carried out in the active optical network. The advantages of PtP flexibility really bear fruit where business customers are concerned: requirements from large customers are always highly individual, whereas PON network concepts tend to be more static. In this case, therefore, the active approach is a lot better.

A generic comparison of the technologies can only provide an initial overview. While network operators in Asia prefer passive optical networks, a study by the FTTH Council Europe showed that in Europe over 80% of FTTH/FTTB installations are based on Ethernet PtP.

About the author: Klaus Pollak is Head of Consulting & Projects, Keymile.

Although smaller and quieter than in previous years, ITU Telecom World 2009 offered an opportunity for industry and governments from all round the globe to meet, and examine how ICT technologies can play their part in the development of societies and economies

Many said it would be a disaster. They said that without the big European and western manufacturers footing the bill then the event couldn't go ahead. No Nokia, no Ericsson, no Alcatel-Lucent, no show.

Well, despite the fact that, at 18,000 visitors, the event mustered only a quarter of the attendees that came to the 2003 show in Geneva, ITU Telecom World 2009 felt like a success to many that were there, as it took on a different tone from past shows. Others, though, found that business was slow and regretted their decision to attend.

The show lost its focus as a glossy showcase for the headline products of all the world's major manufacturers, and instead became a meeting point for those concerned with how best to plot the course of the development of all the world's markets.

So this time, the focus shifted to the southern and emerging markets. And the noise came not from the western manufacturers but from the Chinese vendors, and from Russia and the host of national pavilions that made up most of the show floor. There was also news around legislation and standardization from the ITU itself, to go with the focus on what ICT technologies can bring to the economies of nations across the world.

And there was debate too, whether it was a warning from the head of the ITU on the need for vigilance in combating security threats in the IP sphere, or on standardization development, or the latest research on the state and size of the markets.

4.6 Billion Mobile Subscriptions and the broadband divide
The ITU's latest statistics, published in The World in 2009: ICT facts and figures, revealed rapid ICT growth in many world regions in everything from mobile cellular subscriptions to fixed and mobile broadband, and from TV to computer penetration - with mobile technology acting as a key driver.

The data, forecasts and analysis on the global ICT market showed that mobile growth is continuing unabated, with global mobile subscriptions expected to reach 4.6 billion by the end of the year, and mobile broadband subscriptions to top 600 million in 2009, having overtaken fixed broadband subscribers in 2008.

Mobile technologies are making major inroads toward extending ICTs in developing countries, with a number of nations launching and commercially offering IMT2000/3G networks and services. But ITU's statistics also highlight important regional discrepancies, with mobile broadband penetration rates still low in many African countries and other developing nations.

More than a quarter of the world's population is online and using the Internet, as of 2009. Ever-increasing numbers are opting for high-speed Internet access, with fixed broadband subscriber numbers more than tripling from 150 million in 2004 to an estimated 500 million by the end of 2009.

Rapid high-speed Internet growth in the developed world contrasts starkly with the state of play in the developing world. In Africa, for example, there is only one fixed broadband subscriber for every 1,000 inhabitants, compared with Europe where there are some 200 subscribers per 1,000 people. The relative price for ICT services (especially broadband) is highest in Africa, the region with the lowest income levels. The report finds that China has the world's largest fixed broadband market, overtaking its closest rival, the US, at the end of 2008.

ITU estimates show that three quarters of households now own a television set and over a quarter of people globally - some 1.9bn - now have access to a computer at home. This demonstrates the huge market potential in developing countries, where TV penetration is already high, for converged devices, as the mobile, television and Internet worlds collide.
Sami Al Basheer, Director, Telecommunication Development Bureau, said, "We are encouraged to see so much growth, but there is still a large digital divide and an impending broadband divide which needs to be addressed urgently."

New ITU standard opens doors for unified ‘smart home' network
The G.hn standard for wired home networking gained international approval at Telecom World, with the ITU saying the standard will usher in a new era in ‘smart home' networking systems and applications.

Called ‘G.hn', the standard is intended to help service providers deploy new offerings, including High Definition TV (HDTV) and digital Internet Protocol TV (IPTV), more cost effectively. It will also provide a basis for consumer electronics manufacturers to network all types of home entertainment, home automation and home security products, and simplify consumers' purchasing and installation processes. Experts predict that the first chipsets employing G.hn will be available in early 2010.

G.hn-compliant devices will be capable of handling high-bandwidth rich multimedia content at speeds of up to 1 Gbit/s over household wiring options, including coaxial cable and standard phone and power lines. It will deliver many times the throughput of existing wireless and wired technologies.

Approval of the new standard will allow manufacturers of networked home devices to move forward with their R&D programmes and bring products to market more rapidly and with more confidence.

"G.hn is a technology that gives new use to the cabling most people already have in their homes. The remarkable array of applications that it will enable includes energy efficient smart appliances, home automation and telemedicine devices," said Malcolm Johnson, Director of ITU's Telecommunication Standardisation Bureau.

The physical layer and architecture portion of the standard were approved by ITU-T Study Group 15 on October 9. The data link layer of the new standard is expected to garner final approval at the group's next meeting in May 2010.

The Home Grid Forum, a group set up to promote G.hn, is developing a certification programme together with the Broadband Forum that will help semiconductor and systems manufacturers build and bring standards-compliant products to market; products that fully conform to the G.hn standard will bear the HomeGrid-certified logo.
Also agreed at the recent ITU-T Study Group 15 meeting was a new standard that focuses on coexistence between G.hn-based products and those using other technologies. Known as G.9972, the standard describes the process by which G.hn devices will work with power line devices that use technologies such as IEEE P1901. In addition, experts say that they will develop extensions to G.hn to support SmartGrid applications.

Shake up the standardization landscape
Nineteen CTOs from some of the world's key ICT players called upon ITU to provide a lead in an overhaul of the global ICT standardization landscape.

The CTOs agreed on a set of recommendations and actions that will better address the evolving needs of a fast-moving industry; facilitate the launch of new products, services and applications; promote cost-effective solutions; combat climate change; and address the needs of developing countries regarding greater inclusion in standards development.
Participants reaffirmed the increasing importance of standards in the rapidly changing information society. Standards are the ‘universal language' that drives competitiveness by helping organizations optimize their efficiency, effectiveness, responsiveness and innovation, the CTOs agreed.

Malcolm Johnson, Director, Telecommunication Standardization Bureau, ITU, said, "There are many examples of successful standards collaboration, a fragile economic environment and an ICT ecosystem characterized by convergence makes it all the more important to streamline and clarify the standardization landscape. We have agreed on a number of concrete actions that will help us move towards this goal and strengthen understanding of standards' critical role in combating climate change, while better reflecting the needs of developing countries."

The standardization landscape has become complicated and fragmented, with hundreds of different industry forums and consortia. CTOs agreed that it has become increasingly tough to prioritise standardisation resources, and called on ITU - as the preeminent global standards body - to lead a review to clarify the standardization scenario.

ITU will host a web portal providing information on the interrelationship of standards and standards bodies, which would facilitate the work of industry and standards makers while promoting cooperation and collaboration and avoiding duplication.

War in cyberspace?
The next world war could take place in cyberspace, Hamadoun Toure, secretary-general of the ITU, warned during the conference.

"The next world war could happen in cyberspace and that would be a catastrophe. We have to make sure that all countries understand that in that war, there is no such thing as a superpower," Hamadoun Toure said. "The best way to win a war is to avoid it in the first place," he added. "Loss of vital networks would quickly cripple any nation, and none is immune to cyberattack," said Toure.

Toure said that cyberattacks and crimes have also increased, referring to such attacks as the use of "phishing" tools to get hold of passwords to commit fraud, or attempts by hackers to bring down secure networks. Individual countries have started to respond by bolstering their defences.

US Secretary for Homeland Security Janet Napolitano announced that she has received the green light to hire up to 1,000 cybersecurity experts to ramp up the United States' defenses against cyber threats.

South Korea has also announced plans to train 3,000 "cyber sheriffs" by next year to protect businesses after a spate of attacks on state and private websites.

Warning of the magnitude of cybercrimes and attacks, Carlos Solari, Alcatel-Lucent's vice-president for central quality, security and reliability, told an ITU forum that breaches in e-commerce are now already running to "hundreds of billions."

One high profile victim in recent years was Estonia, which suffered cyber attacks on government websites and leading businesses in 2007. Estonian Minister for Economic Affairs and Communications Juhan Parts said in Geneva that "adequate international cooperation" was essential. "If something happens on cyberspace it's a border crossing issue. We have to have horizontal cooperation globally," he added.

To meet this goal, 37 ITU member countries have joined forces in the International Multilateral Partnership against Cyber Threats (IMPACT), set up this year to "proactively track and defend against cyberthreats." Another 15 nations are holding advanced discussions, according to the ITU.

Experts say that a major problem is that the current software and web infrastructure has the same weaknesses as those produced two decades ago.

"The real problem is that we're putting on the market software that is as vulnerable as it was 20 years ago," said Cristine Hoepers, general manager at Brazilian National Computer Emergency Response Team.

    
