Features

IMS - the next generation network architecture - was to become the great unifier for all our disparate access technologies and the cure-all for vendor interoperability issues. Done the wrong way, however, it can create an overly complex, difficult-to-manage architecture. This realisation has put a renewed focus on interoperability testing and network monitoring. Chad Hart explores the challenges and makes the case for a lifecycle approach to testing and monitoring of NGNs and IMS networks

Most operators want an NGN, but few have actually been deployed. One major reason is that making these networks operate reliably is challenging, and many initiatives never make it out of the lab. This is especially true for IP Multimedia Subsystems (IMS). New approaches to quality assurance are, however, changing this - which is where testing and monitoring comes into the picture.

Those responsible for looking after the quality of NGNs and IMS-based networks face many challenges. In the first instance, they are complex beasts. Almost by definition, they are made up of many devices, offer several different kinds of services, interface with many legacy networks, and have to interact with other providers' networks.

The IMS architecture is especially complicated. It comprises many different protocols, dozens of standardized functions, and a bewildering number of interfaces. Coping with this seemingly endless detail is yet another challenge for quality engineers.

Theoretically speaking, specifications should make designing and implementing advanced networks easier. The standards should provide a good guide for everyone to follow. But in reality, many standards - particularly those for IMS - are incomplete, or have major pieces missing. What compounds this is that there are many industry bodies developing different specifications that apply to NGNs, including the IETF, ETSI TISPAN and 3GPP. Furthermore, these bodies frequently update their work, making it an arduous task to keep track of which versions to adhere to.

Thirdly, engineers face the challenge of identifying and sourcing all the pieces of this jigsaw puzzle, and then making them work together. Because of the complex nature of the IMS architecture, and the many ambiguities in the standards, interoperability becomes a serious issue. Often the components from one vendor do not work with those of another without a significant amount of integration work.

Furthermore, because no one vendor does everything exceptionally well, operators are confronted with the challenge of dealing with each one's weaknesses, and must go through an often laborious vendor interoperability testing process. Alternatively, operators could pick a vendor that has already interoperated with best-of-breed components - but even this is not without its challenges.

Finally, the biggest and single most important challenge is to keep customers and subscribers happy. The end user does not care about how the services he uses are implemented - all he is after is a high-quality, reliable, secure and affordable service. So it becomes crucial for IMS implementers to hide the complexity from users while providing consistent - or even higher - levels of service quality. Meeting these challenges requires a more advanced approach to ensuring quality.

You'd think all these challenges make it almost impossible for any NGN - never mind an IMS network  - to make it to market. But operators are dealing with them. They are providing their customers with top-notch services, and we believe this is because they have realigned their quality assurance processes and invested time and money into continuously testing and monitoring their networks.

Before progressing from a concept to a deployed network offering a service, operators put their networks through gruelling tests, and these often take place across several distinct lifecycle stages. Characteristically, they start with the infrastructure vendors and transition to the operator, covering research and development, quality assurance, production, field trials, deployment, and ongoing maintenance. Within each phase, quality assurance should be applied.

Handled in the traditional way, each group has its own employees, equipment, processes, and test plans, with little being shared between groups. However, because of the many challenges created by IMS, the traditional approach to managing quality requires that the testing process becomes more flexible - there is simply too much that can go wrong.

With too few quality engineers to meet today's needs, the lifecycle function needs to be adaptable. Increasingly, these separate groups are collaborating more closely in order to carry out thorough, implementation-specific testing. That collaboration can take the form of shared test methodologies, shared lab equipment, shared test metrics, shared test scripts, or even shared test engineers; what is critical is that no testing takes place in isolation.

When doing any job it's fundamental to use the right tools. Therefore, when managing the lifecycle approach to quality assurance, it's imperative your teams are armed with the best tools to help them get the job done - especially if your quality assurance is to remain watertight.

Lifecycle testing and monitoring consists of several different elements; typically these include:
Subscriber simulation/call generation - the slowest and least sophisticated way to test a network is to make manual calls into it and report on the result of each one. Although this works for simple tests, it is not practical for complex feature and scenario testing, which would take hours to run and be difficult to manage. And it would take thousands of callers with dozens of phones each to even begin to reach the traffic levels needed for today's load tests.

Call generation tools can typically emulate specific end-point devices from a signalling and media perspective as well as simulate end-user calling behaviours. These tools usually have specialised capabilities for feature testing, load testing, and test automation; they often support advanced voice quality measurements and offer reporting capabilities that manual testing cannot match.
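
To make the contrast with manual dialling concrete, here is a minimal sketch - in Python, and purely illustrative rather than based on any particular vendor's tool - of how an automated generator can drive hundreds of concurrent simulated calls and tally the results; the call set-up itself is stubbed out.

```python
import asyncio
import random

async def place_call(caller_id: int, hold_time: float) -> bool:
    """Simulate one test call: set-up, media hold, tear-down.
    A real tool would drive SIP/RTP here; this stub just sleeps."""
    await asyncio.sleep(random.uniform(0.05, 0.2))   # simulated call set-up
    await asyncio.sleep(hold_time)                   # simulated media phase
    return random.random() > 0.02                    # ~2% simulated failures

async def load_test(concurrent_calls: int, hold_time: float = 1.0) -> None:
    results = await asyncio.gather(
        *(place_call(i, hold_time) for i in range(concurrent_calls)))
    passed = sum(results)
    print(f"{passed}/{concurrent_calls} calls completed successfully")

if __name__ == "__main__":
    asyncio.run(load_test(concurrent_calls=500))
```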

Infrastructure emulation - legacy networks have one main network switching component, known as the Class 5 switch/softswitch or MSC. The IMS model separates this into several dozen distinct software and component functions such as CSCFs, ASs, BGCFs and a whole slew of other acronyms. As a consequence, most of today's IMS core infrastructure devices require a considerable amount of interaction with other infrastructure devices in order to function. Unfortunately, fitting all these devices into a test lab is rarely practical or feasible. With infrastructure emulation tools, quality assurance engineers can emulate specific infrastructure devices as well as the distinct vendor implementations of those devices. What's more, this saves operators a significant amount of physical space, configuration time and capital equipment cost.

Network emulation - labs are typically set up in a single room with all the devices connected to a single data switching infrastructure. Real-world IP networks are quite different: several switches and routers connect an array of different devices across hundreds of miles, via many differing network topologies. This causes packet loss and delay that you simply do not see in a lab environment. Network emulation products let you reproduce these real-world network conditions, and even allow you to introduce jitter, bit error rates, and link outages.
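
As a rough picture of what a network emulator does, the following Python sketch (an illustrative simulation, not a real emulation product) applies configurable delay, jitter and random loss to a stream of packets:

```python
import random

def emulate_link(packets, base_delay_ms=80.0, jitter_ms=20.0, loss_rate=0.01):
    """Apply WAN-like impairments (delay, jitter, random loss) to a packet list.
    Returns (packet, arrival_delay_ms) pairs for the packets that survive."""
    delivered = []
    for pkt in packets:
        if random.random() < loss_rate:              # packet dropped
            continue
        delay = base_delay_ms + random.gauss(0, jitter_ms)
        delivered.append((pkt, max(delay, 0.0)))
    return delivered

# Example: push 1,000 numbered packets through the emulated link
survivors = emulate_link(range(1000))
print(f"{len(survivors)} of 1000 packets delivered")
```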

Troubleshooting and diagnostics - being able to identify limitations and problems is the sign of a good test. But how can you tell whether an issue was caused by the network rather than by faulty testing? With troubleshooting and diagnostic tools, engineers can isolate and analyse each problem. The information gathered is invaluable to the development engineers, as it allows them to fix any bugs discovered. Typical diagnostic tools for IMS networks include low-level signalling message decoding and voice quality analysis capabilities.
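
The decoding step such tools perform can be illustrated with a minimal sketch. The Python below parses the start line and headers of a raw SIP request; it is a toy decoder, far simpler than a real diagnostic tool:

```python
def decode_sip(raw: str) -> dict:
    """Split a raw SIP message into its start line and header fields -
    the first step any signalling decoder performs."""
    head, _, body = raw.partition("\r\n\r\n")
    lines = head.split("\r\n")
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return {"start_line": lines[0], "headers": headers, "body": body}

sample = ("INVITE sip:bob@example.com SIP/2.0\r\n"
          "Via: SIP/2.0/UDP client.example.com;branch=z9hG4bK776\r\n"
          "From: <sip:alice@example.com>;tag=1928\r\n"
          "To: <sip:bob@example.com>\r\n"
          "Call-ID: a84b4c76e66710\r\n"
          "CSeq: 1 INVITE\r\n\r\n")
msg = decode_sip(sample)
print(msg["start_line"], msg["headers"]["call-id"])
```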

Service monitoring - because of the intricate make-up of advanced networks, it's foreseeable that problems can arise over time, even after thorough lab testing has taken place ahead of deployment. Therefore it's important to proactively monitor the quality of service the network is delivering after being rolled out to customers, and to swiftly respond to any problems that may arise.

In order to achieve this, most service providers deploy a monitoring system. This may be passive, simply listening to network traffic; active, making measurements against system-generated test calls; or a mixture of both. In either case, such systems characteristically include reporting metrics that are useful to network operations personnel, as well as specialised diagnostic and analysis tools that help them find and resolve network problems.
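
A hedged sketch of the active side of such monitoring: the Python below keeps a rolling window of quality scores from system-generated test calls (assumed here to be MOS-like values) and raises an alert when the average dips below a threshold. The class and threshold are illustrative, not taken from any product:

```python
from collections import deque

class QualityMonitor:
    """Rolling quality check over the most recent test calls.
    Scores are assumed to be MOS-like values between 1.0 and 5.0."""
    def __init__(self, window: int = 50, threshold: float = 3.5):
        self.scores = deque(maxlen=window)
        self.threshold = threshold

    def record(self, score: float) -> None:
        self.scores.append(score)

    def alert(self) -> bool:
        """True when the rolling average drops below the threshold."""
        if not self.scores:
            return False
        return sum(self.scores) / len(self.scores) < self.threshold

monitor = QualityMonitor()
for s in (4.2, 4.0, 3.1, 2.9, 3.0):
    monitor.record(s)
print("raise alarm:", monitor.alert())   # True: average has slipped to 3.44
```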

The testing and monitoring requirements for today's NGNs and emerging IMS networks are substantially broader and deeper than the industry has ever seen before. Creating a comprehensive test programme that can be applied across the various layers, functions, applications and lifespan of such a network is challenging, but it is not impossible. By using the advanced tools and techniques available in the marketplace you can tackle quality assurance issues from day one, and beyond.

So, even if you're only in the initial stages of designing your NGN or IMS network, testing and monitoring should be at the top of your priority list - if they're not, it could spell doom for the entire project.

Chad Hart is Product Marketing Manager, Empirix

What next for WiMAX? Ihsen Fekih explores the options

The key to answering this question is in understanding some of the industry dynamics at play for WiMAX, a contender to the 4G throne. Spectrum is yet to be allocated in some countries although it is fair to assume it will be limited and therefore its usage needs to be maximised. Vendors and other stakeholders in the WiMAX infrastructure value chain are currently responding to RFPs and there is a great deal of network yet to be completed outside of North America, with European WiMAX subscribers estimated to represent 40 per cent of worldwide WiMAX subscribers by 2009. Then, there are the devices which will support WiMAX services and of course the services themselves. Does anyone really know what these services will be or what the experience will be like for the subscribers?


There's been a great deal of excitement around what WiMAX could deliver for subscribers - whether it's basic services in developing countries or more sophisticated interactive mobile broadband elsewhere. In fact, it's the subscribers who will decide the level of success of mobile WiMAX and other 4G technologies, and many of them will sign up with some pre-conceived ideas of what it will be like.


The pressure is now on the operators to deliver the network, device support and services that will prove compelling to users and accelerate subscriber acquisition. Yet, there are a number of challenges faced by operators in getting to this point, and it's overcoming these that provides much of the "what next?" for WiMAX. Not least of which is actually recognising subscribers when they access the network, and making sure that they get the services and experience to which they are entitled.


Crucial to the monetisation of the WiMAX network is, quite simply, attracting subscribers onto it. Those with experience of mobile broadband will generally be used to the services provided by 3G networks. WiMAX does have the advantage of greater bandwidth in some instances, as well as wider coverage and the prospect of greater interactivity and roaming. But by the same token, WiMAX must deliver an experience that is at least comparable to 3G as subscribers hop on and off the network; otherwise the seamlessness between WiMAX, WiFi, 3G and cellular networks will be lost, and with it, many of the subscribers themselves.


This means there needs to be a seamless and intuitive handover between networks, even during the same data session. Currently, operators largely have no way of recognising existing subscribers when they move onto the WiMAX network without a laborious login procedure - one that does not differentiate existing from new users and does not allow existing subscribers to easily 'carry' their service entitlements with them. This could potentially jeopardise not only their future subscriber base but their existing one as well.


To overcome this challenge, operators need to amalgamate subscriber information - including service entitlements, access credentials and credits - and centralise it in a subscriber profile. This profile details what subscribers are entitled to, allows the network to 'recognise' them, and applies the relevant policy to their mobile broadband experience.
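
As an illustration of what such a profile might hold, here is a minimal Python sketch; the field names and the entitlement check are assumptions for the example, not a description of any vendor's schema:

```python
from dataclasses import dataclass, field

@dataclass
class SubscriberProfile:
    """Centralised view of a subscriber: credentials, entitlements, credit.
    Field names are illustrative only."""
    subscriber_id: str
    access_credentials: dict                     # e.g. per-network auth material
    entitlements: set = field(default_factory=set)
    credit_remaining_mb: int = 0

    def is_entitled(self, service: str) -> bool:
        return service in self.entitlements and self.credit_remaining_mb > 0

profile = SubscriberProfile(
    subscriber_id="sub-001",
    access_credentials={"3g": "sim-auth", "wimax": "eap-ttls"},
    entitlements={"video_on_demand", "mobile_tv"},
    credit_remaining_mb=2048,
)
print(profile.is_entitled("video_on_demand"))    # True
```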


More sophisticated uses of policy could include automatically pushing subscribers onto the WiMAX network when greater bandwidth is needed for a service and they are in range. Policy could also give an operator the opportunity to upsell a service that it knows the subscriber enjoys using in the 3G world, or even a way of better targeting mobile advertising based on real-time subscriber data such as location and presence.


Because the subscriber policy is always changing to reflect the personal needs of each individual subscriber, it is also the key asset operators can use to market new services to subscribers once they are on the network. Policy helps them build a relationship with the subscriber: in the near future it will be possible to personalise services based on where the subscriber is, what device they are using and what their preferences are, in real time.


However, in order to do this, operators must first establish a strong pricing model that may, by necessity, need to buck the trend for flat fees, and that certainly calls for some creative thinking.


WiMAX subscribers are expected to benefit from a wide range of services from voice in remote areas to interactive visual services such as video in other regions. But, in instances where spectrum will be limited, this suggests there is a need to transition from traditional flat fee models to service models that are based on metered bandwidth.  There are several models for doing this based on the value of the service, the service tier, the amount of data used or available at any point in time, or in fact whether the service is subsidised by advertising.
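
A simple worked example helps show how metered, tiered pricing differs from a flat fee. The Python sketch below uses invented tariffs purely for illustration:

```python
def metered_charge(mb_used: int, tier: str) -> float:
    """Illustrative metered-bandwidth pricing: each tier bundles an allowance
    and charges per additional MB. All figures are invented for the example."""
    tiers = {
        "basic":   {"fee": 10.0, "allowance_mb": 1_000, "per_mb": 0.02},
        "premium": {"fee": 25.0, "allowance_mb": 5_000, "per_mb": 0.01},
    }
    plan = tiers[tier]
    overage = max(mb_used - plan["allowance_mb"], 0)
    return plan["fee"] + overage * plan["per_mb"]

print(metered_charge(1_500, "basic"))    # 10.0 + 500 * 0.02 = 20.0
print(metered_charge(1_500, "premium"))  # 25.0, still within the allowance
```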


Some analysts have extrapolated that this could be the end of the flat fee pricing model, particularly when new services are likely to be bandwidth intensive and have the potential to use up bandwidth very rapidly. When operators do the maths, they may find that flat fee pricing encourages subscribers to ‘eat all they can' - and they may be biting off more than operators are willing to let them chew.


So, it's likely that operators will need to create different service models based on subscriber policies that enable the operators to manage access to the network, ensure fair usage, but also open the network up to those early adopters who may well want video-on-demand or any of the other broadband services which have been touted and who will be willing to pay for them.


Those subscribers will need the next generation of WiMAX devices, which - I would expect - will have large screens, multiple air interfaces, sophisticated onboard graphics and audio processing technologies, and batteries that allow more than a few minutes of viewing. These 4G WiMAX devices (including laptops) will need to be more flexible towards new services, especially given the unprecedented 'openness' of the WiMAX network. That openness is not only about the range of new services that could be developed; it is also about the way that consumer demand is affecting the device marketplace.


Operators will not be the sole stockists for WiMAX devices - they will be available from retail shops and will therefore not be tied to a specific network or service.  With device delivery now distinct from service delivery, the challenge for operators is to attract as many subscribers as is possible, but more importantly to make sure the network is as easy to access and use as possible.


At the moment, there's no clear way to ensure that WiMAX devices are compatible with services and that subscribers can be easily registered on the network and use those services without a hitch. Subscribers will tend to buy devices directly from a retailer rather than a network operator, which certainly reduces the financial pressure of subsidising equipment, but also means operators must be able to support Over-the-Air (OTA) device configuration, activation and provisioning. Offering subscriber provisioning as a standard capability will enable operators to get the jump on their competitors.


Indeed, mobile devices - including phones, laptops and multimedia players with WiMAX modules - will not simply ‘work out of the box' as normal cellular devices do. Subscribers will need to choose for themselves who they subscribe with, what service package they buy and a number of other variables, reflecting that the future of telecoms services really is to meet consumer demand for any service, any time, anywhere.


Soon, OTA will drive the proliferation of open WiMAX networks and services by allowing subscribers to activate their own subscriptions, receive firmware updates direct to the device that automatically enable new services or functionality, and select their own service features.


WiMAX already offers the openness that subscribers want, so operators need to be able to create subscriber policies that reflect the entitlements and changing demands of the subscriber. If they can master the network, service and charging models, and devices, encapsulating these in a policy, then they are in a prime position to begin the next phase of WiMAX and attract subscribers onto the network. By putting subscriber policy at the heart of their WiMAX service strategy, service providers can build a relationship that provides subscribers a personalised WiMAX experience that will improve subscriber retention and drive greater uptake of 4G services.

Ihsen Fekih is EMEA Managing Director at Bridgewater Systems
www.bridgewatersystems.com

WiMAX is often regarded as an economically attractive technology in rural areas with no wired networks, but it is also being increasingly positioned as an alternative to DSL in metro areas within developed countries, says Howard Wilcox

The global opportunity for WiMAX 802.16e to deliver 'local loop' broadband connectivity will begin to take off over the 2009 to 2011 period, according to Fixed WiMAX: Opportunities for Last Mile Broadband Access 2008 - 2013, a new report from Juniper Research.   There are significant prospects for WiMAX as a DSL substitute technology, and the fixed WiMAX subscriber base is forecast to approach 50 million globally by 2013. 

  
Currently, there are over 250 802.16e WiMAX networks being trialled across the world, and a relatively small but rapidly growing number of commercial networks in service.   With a profusion of trial and network contract announcements over the last 12 to 18 months, WiMAX is now much more of a market threat to existing broadband access technologies such as DSL. 

     
An analysis of the primary target market focus of each of over 50 service providers which have announced commercial network contracts revealed that the stand-out market focus is offering an alternative to DSL.   The analysis illustrated that WiMAX is well suited to rapid deployment in many underserved areas. 


Developing countries in Eastern Europe, the Middle East and Africa, and Asia have shown most interest in WiMAX to date: many of these countries are part of the "underserved" world from a broadband perspective and are seeking pure Internet connectivity - fast. These countries can enjoy the technology "leapfrog" effect, jumping from no or limited connectivity to multimegabit, state of the art broadband.

In Poland, for example, four carriers received nationwide 3.6 GHz WiMAX licences in 2006, including Netia, cable television operator Multimedia Polska, Crowley and Exatel. Netia has contracted with Alvarion for a 20-city national network for business and residential users, while Crowley has contracted with Redline and Multimedia Polska with Airspan, but Exatel's network has been delayed. Multimedia Polska is targeting homes in Central and Eastern Poland that have been previously underserved with Internet access.

Meanwhile, Russia is a very fragmented market, but with a growing number of existing and aspiring broadband operators. All of the operators are focusing in the short to medium term on providing fixed services in underserved areas. In mid May 2008, there was a significant development with Virgin Group entering Russia via the nationwide launch of its high-speed broadband WiMAX network - known as Virgin Connect - and operated by Trivon; the service has been launched in 32 Russian regions including Moscow, St. Petersburg and the 20 largest cities.

Although WiMAX is often regarded as an economically attractive technology in rural areas with no wired networks, it is being increasingly positioned as an alternative to DSL in both rural and metro areas in developed countries. Typically, WiMAX service providers are differentiating their services either by offering higher speeds than DSL, for example for customers located at the distance limit from their local exchange, or by emphasising ease and speed of set-up for customers. WiMAX will therefore both cater for broadband growth and replace some existing DSL connections. Service providers in a number of developing countries such as India are also targeting rural areas that have no wired networks at all, to provide basic telephony as well as more advanced services. In these communities, WiMAX services will need to be priced at affordable levels.


The next most popular market focus is high-end business users - those typically spending $400 to $500 per month on broadband services - who require secure, very high-speed connections, who have more demanding bandwidth needs such as hosting their own servers, but who also require some element of nomadic working. Again, WiMAX is proving attractive here to subscribers who have used DSL up until now.


The survey showed that the vast majority of service providers are concentrating on providing fixed broadband services to begin with, although many have the intention of developing mobile offerings once their networks and services are established.

However, there are a number of issues that the WiMAX ecosystem needs to address, including:

  • Availability of suitable devices: WiMAX has great potential to integrate broadband connectivity in a wide range of consumer devices such as MP3 players, cameras and satellite navigation units as well as more traditional items such as laptops and dongles. The industry must ensure that reliable, certified devices are readily available so that customers are not held back or discouraged from subscribing due to supply issues. In early April 2008 the WiMAX Forum announced that the first eight Mobile 802.16e WiMAX products received the WiMAX Forum Certified Seal of Approval. There is an opportunity to drive and sustain market takeoff through a steady stream of innovative devices. The "push" to achieve market launch needs to be counterbalanced by ensuring the availability of components and volume of production to meet anticipated demand - at the right attractive price point.
  • Timely network construction: service providers need to complete build programmes on time to achieve sustainable WiMAX based businesses and they also need to translate the many, usually well-publicised trials, into commercial networks offering reliable and attractively-packaged services. In future, users will take this as a given, and will become less tolerant of unreliability as broadband becomes inextricably linked with everyday life. The announcement by Sprint and Samsung in mid May 2008 that WiMAX has met Sprint's commercial acceptance criteria including overall performance, handoff performance and handoff delay is a very timely boost for the technology: the eyes of the (WiMAX and mobile broadband) world are on developments there. Commercial launches in Baltimore and Washington DC are planned for later in 2008 by Sprint. Further success will counteract the view in some parts of the industry that WiMAX is always coming tomorrow.
  • Brand identification and service differentiation: WiMAX service providers need to avoid entering the market on the basis of price: this will be a difficult battle to win against established DSL and mobile operators, especially in developed markets like Western Europe. These established (usually 3G) operators already have strong brand image and sophisticated marketing, and in some countries such as Ireland and Scandinavia are already enjoying success in the DSL substitution market.

With the plethora of broadband access technologies available - such as DSL, satellite, cable, HSPA, EVDO, WiMAX - not to mention future technologies such as LTE, people often ask if there is going to be a technology that wins out over the rest. Juniper Research discussed this issue with around 30 executives from a variety of vendors, service providers and industry associations. Respondents were unanimous in viewing WiMAX as complementary and took a pragmatic approach: if there is a use for it, and the business case is sustainable, it will be deployed. Telecoms operators need to consider all the alternatives when making an investment.
Most new technology launches face issues like these, and with the impetus that WiMAX now has in the marketplace, it is well-placed to grow.  Juniper's headline forecasts include:

  • The annual fixed WiMAX global market size will exceed 13m subscribers by 2013
  • The WiMAX device market - comprising CPE, chipsets, minicards, and USB dongles - will approach $6bn pa by 2013
  • The top 3 regions (Far East, N. America and W. Europe) will represent over 60 per cent of the $20bn p.a. global WiMAX service revenues by 2013.

In fact, WiMAX is forecast to substitute for nearly 50 million subscribers - 12 per cent of the DSL and mobile broadband subscriber base globally - by 2013.
 
Howard Wilcox is a Senior Analyst with Juniper Research in the UK.
www.juniperresearch.com

The delivery of voice services over next generation networks has never been a comfortable journey. The business reality takes companies upwards and downwards, twists carriers in the wind of market challenges and throws them in heavy seas of competition. Konstantin Nikashov looks at the current market situation to explain how VoIP softswitches ensure the efficient performance of carriers' networks

VoIP adoption is in full swing worldwide. ABI Research predicts a seven-fold increase in the number of residential voice-over-IP subscribers between 2006 and 2013, while Frost & Sullivan forecasts enterprise VoIP services revenues to surge to $3.3 billion in 2010. VoIP is already taken for granted in Europe and the USA, while Asia and Latin America are registering incredible interest in the technology. With this promising rise in demand for VoIP services, telecom carriers should definitely keep their finger on the pulse of where the industry is heading.


Today's telecom landscape offers carriers numerous margin drivers that seem irresistibly tempting to anyone in the business. VoIP calls are just packets of data travelling across the Internet, and the technology can be called virtual since it is not tied to physical locations or devices. Carriers rent out their VoIP capabilities to gain higher revenues and traffic volumes. Virtualization makes it easy to launch services to an unlimited number of subscribers and to add new phone lines wherever and whenever needed. VoIP allows for flexible control over the system (either by the system administrator or by subscribers) and for redundancy to help service providers manage risks.


However, revenue-generating opportunities go hand in hand with industry challenges. Cable companies and Internet service providers that compete for market share with conventional telcos take advantage of the fact that VoIP can easily be bundled with other services. Triple- and even quad-play propositions increase the load on networks. Carriers therefore need to ensure that their switching platforms can handle huge volumes of traffic with top-level reliability.


Obviously, there are two major tasks that telcos striving to succeed should complete today. The most urgent is choosing the basic functionality that VoIP solutions deployed on their networks must deliver. At the same time, service providers should always look ahead in terms of developing their networks, so the software has to keep up with industry upgrades. The other task is to set criteria for judging a solution's performance. Along with basic features, some vendors offer unique capabilities that generate additional business value and significantly raise carriers' revenues.


The galloping migration to NGN technologies that started at the turn of the 21st century instigated the intensive use of VoIP softswitches as core elements of carriers' networks. However, not all softswitches are created equal.


The primary softswitch functionality includes call routing, call control and signalling, and delivery of media services. Many carriers particularly value strong routing capabilities. The whole concept of the softswitch is advantageous because it decouples software from hardware: new services can be added and removed easily, and the deployed solution can be operated in a flexible manner. Compared with traditional circuit switches, softswitches deliver more elaborate functionality, leave carriers with more freedom and save up to 15-20 per cent on capex and opex.


Analysts argue, nevertheless, that the move towards converged IP communications makes vendors emphasize the session border controller (SBC) functionality of VoIP softswitches.
SBCs are carrier-grade systems designed to facilitate the interconnection of disparate IP networks. Carriers deploy softswitches with session border controller capabilities on the border between two adjacent networks to overcome protocol and codec conversion challenges. SBCs also allow NAT and firewall traversal, provide for access control, topology hiding and lawful interception compliance, and ensure that only authorized calls are admitted across network borders. Session border controllers give a competitive edge to service providers looking for ways to easily combine calls and services across multi-vendor networks.


Other VoIP softswitches offered on the market today are more sophisticated - and so is the philosophy of their deployment. What used to be simply a router has evolved into a complex traffic transit management system. Best-of-breed softswitches perform intelligent routing based on a variety of route hunting criteria, and keep and regularly update all the information about the rates and tariffs of peering partners. On top of that, the operational and QoS analysis tools of industry-leading softswitches enable carriers to come up with competitive, customer-driven service offerings, make profitability forecasts and select the best partners. Such softswitches are enhanced with session border control functions and include elements for easy integration into the carrier's network, such as a real-time billing interface. Some software manufacturers add even more capabilities to their solutions - ENUM lookup, IPv4 to IPv6 interworking support, and tools for interaction with B/OSS applications. These innovations contribute to a softswitch's viability in today's ever-changing VoIP environment. All-in-one solutions meet the requirements of carriers that must react promptly to network challenges and wish to run significant volumes of VoIP calls efficiently.
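
The route-hunting idea at the heart of such systems can be sketched very simply. The Python below picks the cheapest peering partner for a destination that still meets a minimum answer-seizure ratio; the data and the single quality criterion are illustrative assumptions, and a production softswitch would weigh far more factors (longest prefix match, time of day, capacity and so on):

```python
def select_route(destination: str, routes: list, min_asr: float = 0.4):
    """Pick the cheapest peering partner for a destination prefix that still
    meets a minimum answer-seizure ratio (ASR). Data is illustrative only."""
    candidates = [r for r in routes
                  if destination.startswith(r["prefix"]) and r["asr"] >= min_asr]
    return min(candidates, key=lambda r: r["rate_per_min"], default=None)

routes = [
    {"partner": "carrier_a", "prefix": "44",   "rate_per_min": 0.011, "asr": 0.52},
    {"partner": "carrier_b", "prefix": "44",   "rate_per_min": 0.009, "asr": 0.31},
    {"partner": "carrier_c", "prefix": "4420", "rate_per_min": 0.013, "asr": 0.61},
]
best = select_route("442079460000", routes)
print(best["partner"])   # carrier_a: cheapest route that clears the ASR floor
```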


Functionality of modern softswitches can be defined depending on the purposes of a particular deployment. Vendors usually focus on the routing or session border controller capabilities or design comprehensive intelligent traffic management systems. Each finds its niche in the current market situation.


Even if carriers are sure about the desired basic softswitch features they often need to evaluate how successful a particular solution will be when deployed on the network. Software products can always be customized to address the carrier's needs, but some capabilities are a must for any VoIP solution offered as a cost-efficient competitive softswitch.


The first and foremost requirement of a good softswitch is reliability. As the focal point of a VoIP network, processing several million minutes of traffic per month, the softswitch has to guarantee business-critical dependability. Top-level fault tolerance can be ensured by a modular softswitch architecture: if one module fails, its functions are taken over by other modules, depending on the current workload. This mechanism enables carriers to choose between various redundancy schemes and to set up complete or partial backup scenarios.
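
A toy illustration of that redundancy principle, assuming a simple least-loaded redistribution policy rather than any specific product's mechanism:

```python
class ModuleCluster:
    """Toy picture of modular redundancy: work from a failed module is
    redistributed to the least-loaded surviving modules."""
    def __init__(self, names):
        self.load = {name: 0 for name in names}

    def assign(self, sessions: int) -> str:
        target = min(self.load, key=self.load.get)   # least-loaded module
        self.load[target] += sessions
        return target

    def fail(self, name: str) -> None:
        orphaned = self.load.pop(name)
        for _ in range(orphaned):
            self.assign(1)                           # re-home each session

cluster = ModuleCluster(["mod-1", "mod-2", "mod-3"])
for _ in range(9):
    cluster.assign(1)
cluster.fail("mod-2")
print(cluster.load)   # the surviving modules absorb the failed module's sessions
```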


Advanced business logic embedded in the softswitch is another important criterion of successful performance. For instance, IP-based PBX solutions are more attractive to enterprise users than TDM-based systems. An IP platform is generally more service-oriented, so delivery of voice-to-email, fax-to-email, email-to-fax and other popular services is easier. Best-of-breed IP Centrex solutions offer 20-40 value added services crucial for businesses.


Today's softswitches are often appraised by their ability to operate in the IMS environment. The IP Multimedia Subsystem is an access-independent platform for multimedia service delivery; it is based on IP technology but is designed to take VoIP to an entirely new level of development. This does not mean, however, that carriers should look for solutions other than a softswitch. Softswitches can perform the Call Session Control Function (CSCF) well within the IMS architecture. One of the most important requirements here is the ability to control quality of service and interact effectively with network devices - which makes the session border controller functionality of softswitches especially relevant.


The last but not least thing to consider when choosing a VoIP softswitch is the price-to-quality ratio. Open source solutions are free, but they do not guarantee the reliability, 24x7 professional support and other benefits offered by proven VoIP developers. At the same time, it does not seem reasonable to overpay for mere basic features under a widely promoted brand. Today's market for VoIP solutions is highly competitive, and mid-sized developers often help retail and wholesale carriers find the golden mean, supplying reasonably priced, full-featured softswitches with the capabilities that carriers need most.

Modern VoIP softswitches have great potential to dramatically shorten carriers' way to the top and lay a solid ground for further innovation. However, the choice of robust VoIP solutions is always defined by carriers' needs and the ability of a softswitch to meet certain criteria of satisfactory performance. Ultimately, carriers that take a thorough and thoughtful approach to equipment deployment will always benefit from best-in-class VoIP softswitches.

Konstantin Nikashov is CEO of MERA Systems
www.mera-systems.com

In a world where Corporate Social Responsibility (CSR) continues to make its mark in the boardroom, companies are looking for ways in which carbon footprints can be reduced, and employees' time can be used more efficiently and productively.  Conferencing technology is seen as one way of achieving this.  Meetings can be set up within minutes, even if the people involved are spread across the world. But how can costs be kept under control? Aaron McCormack looks at some of the options

Conferencing has taken huge steps forward in recent years.  Calls are much easier and quicker to organise, for example. For a formal meeting, you may still want to plan ahead but if you want to gather a few people together for a quick impromptu discussion, that's just as easy. A growing number of people maintain virtual ‘meeting rooms' they can dial into whenever they like.


Conferencing has become a much richer experience as well. Using tools such as Microsoft Live Meeting with an audio conference, people can see and work on documents while they speak, as if gathered around one PC. You can use these conferencing services to make presentations as well. As you talk, your audience can see your slides or your product demonstration on their screen.


And if you still think these services are a poor substitute for actually being there, the next generation of video conferencing services made possible by equipment from Cisco and other suppliers should change your mind. By placing large screens and cameras carefully around a meeting table, they make it possible to look people in the eye. Facial expressions and gestures are as clearly visible as if participants were in the same room.
No wonder, then, that more and more people are adopting conferencing as a time- and money-saving alternative to face-to-face meetings.


Gartner has described conferencing as a ‘birthright' application for high-performance workplaces and it's easy to see why. Thanks to globalisation, partners, suppliers and colleagues can be spread across the world. But despite their distance, you need a close and effective working relationship with everyone in your circle. Yes - it's good to visit them from time to time. In between, though, you need an effective and efficient alternative - something more personal and interactive than email.


Conferencing is also good for the bottom line. Time that would have been spent travelling can be put to better use, and conference calls are much cheaper than plane tickets - even if you travel on budget airlines!


The problem for many companies, though, is that the costs of travel and conferencing fall in different areas or budgets. While one manager smiles at the savings, another winces as phone bills escalate, apparently out of control.


Fortunately, there is a win-win solution. Those who have already completed the introduction of IP telephony across their organisation and have connected their various premises through IP VPNs have the most to gain.


As with any distributed organisation, a great many phone calls - often, the majority - are internal. This is particularly true of conferencing. Up to 20 per cent of minutes carried on an enterprise's phone network can result from audio conferences. And 50 to 60 per cent of conference calls are between people from the same enterprise.
The cost of connecting these people through public phone networks can account for 30 or 40 per cent of the total cost of a conference call. If employees are located in different countries, for example, some may need to make expensive international calls to connect to the conference. Where mobile phones are used, the costs can be even higher.


However, if organisations use their IP VPNs to connect everyone, much of this cost can be saved. By adding a managed conferencing service to their corporate network, the majority of calls can be brought 'on net', and savings of 20 per cent or more can be achieved as a result. Better still, from a cost management perspective, the variable cost of calls to public conferencing services is replaced by the fixed cost of providing enough network capacity to handle the additional calls.
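
Taking the percentages quoted above at face value, a back-of-the-envelope calculation (with an invented spend figure) shows why savings of around 20 per cent are plausible:

```python
# Rough illustration using the percentages quoted above; all inputs are invented.
monthly_conference_spend  = 100_000.0   # total conferencing cost (any currency)
internal_call_share       = 0.55        # 50-60% of conference calls are internal
public_network_cost_share = 0.35        # PSTN access is 30-40% of a call's cost

# Moving internal participants on-net removes most of their PSTN access cost
potential_saving = (monthly_conference_spend
                    * internal_call_share
                    * public_network_cost_share)
print(f"Potential monthly saving: {potential_saving:.0f} "
      f"(~{potential_saving / monthly_conference_spend:.0%} of total spend)")
```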


Of course, while complete convergence is the ideal, most organisations have yet to reach it.
Surveys suggest that the 80:20 rule applies. While 80 per cent of organisations have started to introduce IP telephony, only 20 per cent have completed deployment. The remainder still operate a mix of old and new telephony systems across their facilities.
It's a situation that can persist for many years. New offices might be equipped with IP telephony, for example, while outlying branches continue to use their traditional PBX solutions.


So what can you do to gain control of conferencing costs in the meantime?
One option is to select a single global supplier for conferencing services, which can help remove the need to make expensive international phone calls.


Imagine someone in New York wants to arrange a conference involving colleagues in Europe and Japan. If the call is set up through a US-based supplier, those joining from other countries will have to make international calls to join it. If the call is arranged through a global supplier, everyone will be able to connect by calling a number in their own country. Costs are reduced considerably as a result.


Another option is for organisations to choose a global supplier's hosted conferencing service. In most respects, this is the same as choosing an on-net managed solution. There is one important difference, though - the conferencing platform is located on the supplier's premises and connected to the enterprise's IP telephony systems through VPNs. Calls from employees using the IP telephony systems are effectively 'on net'. Everyone else connects in by dialling the local access number for their country, with their calls being connected to the conferencing platform through the supplier's global network. These solutions leverage the enterprise's network investment and create a highly cost-effective service.

Whichever option you choose, there is one more thing you need to do to maximise the benefit your organisation gains from conferencing - embed it in your company's culture.


If you've invested in a managed or hosted conferencing service, it makes sense to get as much use from it as you can. Additional calls make little difference to the cost of providing the service, but can reduce travel bills and improve employee productivity.
The chances are you'll have little difficulty in convincing 10 to 15 per cent of your staff to use the service. They'll be your early adopters - the people who are keen to try something new and have probably been using conferencing for some time.


The main issue is in convincing the rest of your organisation and changing their behaviour. Unless someone is telling people why it is beneficial to change, showing them how to do it, training them in the technology and monitoring their usage, it won't happen.


The problem comes from the fact that companies rarely have any experience to build on when they begin to introduce their conferencing solutions and drive up adoption. They're doing it for the first time and may never need to do it again. The results they achieve suffer as a result.


Where a supplier can make a big difference is by bringing a wealth of practical experience and understanding to the table. Over the years, they will have come across almost every situation and will know how best to address it. They'll also have 'out of the box' CRM tools and information systems to provide effective support and a range of 'tried and tested' training and communication programmes. Armed with this, they can help enterprises introduce conferencing cultures quickly and painlessly, bringing forward return on investment.


Because they've done it many times before, global conferencing service providers are good at these sorts of education programme. They know what will get people's attention and how to get the message across. As a result, they can change an organisation's culture from face-to-face to conferencing much more quickly than might otherwise be possible.
And with the continual pressure on organisations to reduce costs and improve efficiency that has to be a good thing.

Aaron McCormack is CEO, BT Conferencing

Benoit Reillier provides an update on the key regulatory topics that will shape the telecoms market over the next few years

Members of the European Parliament (MEPs), Commission officials, the Council of Ministers, as well as lobbyists and advisors are currently busy in Brussels negotiating the wording of the proposal for a new EU regulatory framework. Many changes have been proposed since the initial proposal was put forward by the Commission last November and time is running out for a consensus to emerge. The stakes are high as the resulting package will have to be transposed into law by all member states in 2010 and 2011 and will effectively provide the rules and regulations for the telecoms sector until the next review.


While no decisions have been made, it is likely that several of the controversial proposals that were put forward by the Commission last November will be diluted before an agreement can be reached.


One of the most contentious proposals was the creation of a powerful pan-European regulator (European Electronic Communications Market Authority or EECMA) with a range of additional powers. This idea was seriously criticised by both the Council of Ministers and the European Parliament and is therefore likely to be replaced by a more official recognition of the role of the existing European Regulatory Group (ERG), made up of representatives from national regulators, in coordinating national and pan-European regulation. Details about the financing, powers, name and status of this "enhanced" ERG are yet to be finalised.
The Commission's rather ambitious proposals for the development of a more market-based approach to spectrum allocation and management across member states are also unlikely to survive the current round of negotiations. A "mixed spectrum management regime", balancing economic and public policy considerations, is likely to be proposed instead. Unfortunately this may have more to do with the significant lobbying power of broadcasting institutions in national markets (which often benefit from "free" spectrum at the moment) than with sound economics... Some adjustments providing better coordination between member states are likely to be made, but any significant spectrum reform will probably have to wait until the next framework review.


The ability for national regulators to mandate functional separation still appears to be on the agenda. Needless to say, incumbent operators are quite worried about the prospect of being forced to split up their operations. It is anticipated however that the final wording of this proposal will reflect the last resort nature of this particularly intrusive remedy. It is likely for example that the economic analysis to be carried out by national regulators to support the case for separation will have to be very robust and take into account investment incentives.


The Commission also proposed that markets deemed competitive be removed from the list of ex-ante markets. These proposed changes, which would broadly result in a move away from the regulation of retail markets (which are increasingly competitive) to focus on wholesale markets (where infrastructure providers can be dominant), are less divisive than some of the other proposals and are therefore likely to go through.


The EU Parliament and the Council of Ministers also asked that a number of important topics that were somewhat overlooked in the original proposal be addressed by the new framework. For example, the Commission was asked to clarify its position on the regulation of investment in Next Generation Networks (cf. last column on this topic).


While none of the above issues have been decided yet, a consensus is required soon and significant modifications of the original text will have to be made for an agreement to be reached before the end of the year. All the players in the telecoms market are anxious to know the new rules of the game that they will soon have to play... and win.

Benoit Reillier is a Director and European head of the telecommunications and media practice of global economics advisory firm LECG.  He can be contacted via: breillier@lecg.com
The views expressed in this column are his own.

As the telecoms industry embraces transformation - and all that implies - Alex Leslie argues that billing, or the now more fashionable "revenue management", remains strategic in a deeply unpredictable marketplace

There are a few sayings that I remember from about ten years ago. A couple came from the revenue assurance managers - ‘Billing reveals the sins of the entire company'; ‘My billing processes were perfect - the day before we launched' and perhaps less amusing but very true was this: ‘Billing is where you implement the rules of the business'.
There are a couple of problems with this saying. The first is that billing is no longer billing, in the traditional sense, and second, knowing what your business rules are going to be in two, three or five years time is impossible in the current circumstances.


But the saying is still true, in spirit, even though billing has basically been redefined. Various people have spent much of the past 15 years redefining it - mainly in bars late at night, after a full day of conference sessions. We have now come to the conclusion that it should be called revenue management, whether that revenue comes from a prepaid, postpaid, sponsored or some hybrid transaction. The revenue management process is about assigning a value to a service or product, and managing that value so that it becomes revenue (and then profit, and then shareholder value). Ultimately revenue management is about the customer experience - presenting products and services to the customer in such a way that the customer continues to use your services, satisfied that he is getting value from the service and being charged correctly. The bill itself must become a value statement, not simply a demand for payment.


The reason that we do not know what the business rules are going to be in a few years' time is that markets are changing, the pace of competition keeps increasing and the array of competitors and players keeps multiplying. Just a couple of years ago convergence was about services and payment methods converging onto one platform, putting the customer at the centre of our universe. It meant consolidating the many systems that supported single product lines onto one or two systems that enabled this move to a customer-centric view to happen. But convergence now means that entire markets, entire ecosystems, are changing around us. Google, Apple and Microsoft now represent the innovators of the communications industry, the new service providers - not the more familiar telecoms names. The very definition of service provider or communications provider is changing too.


Competition is coming from all directions, and the telecoms world is not as powerful as it was ten years ago, when it was twice the size of the media industry. If it is not careful and quick it will be marginalised. Some of these new players are quite prepared, even keen, to go round or ‘over the top' of the network operator, in order to offer customers what they want.
A symptom of this is that real time charging is now one of the topics of the year, a feature of almost every industry event. Not, I suspect, because the communications industry has decided it is about time to do something about the buzzword that has been around for ten years but because real time is the way that ISPs and content providers think and operate. Post paid is too slow, too traditional for the new world, and real time is now the answer. We must not be left behind and even though ‘real time' is only appropriate in some instances, not all situations, the capability must be in place.


So, if billing is now about managing value whilst enhancing the customer experience, and the strategy that defines the business rules changes so fast that you are not entirely sure whether your business should look like a supermarket or a gas company - what do you do?
Both business models are possible and valid. There has been much discussion about the communications company as a supermarket. You should be offering, or offering access to, a comprehensive range of products, some your own brand, some branded by others. You should have inventory systems that are second to none, partner/supplier relationship management systems to be proud of. You must have logistics capabilities to beat them all and a point of sale system that is seamless, easy and flexible. And of course it must be able to provide information and reports that support the management decisions that define the strategy. The comparisons are obvious between a value added communications provider and a physical supermarket, and the emerging ‘services' markets and frameworks are being set up to offer a huge range of services, simply and quickly.


Alongside this the loyalty schemes are emerging in the communications world, schemes that the big supermarkets are so good at - business intelligence is competitive advantage nowadays.


I often wondered why my supermarket offers me occasional vouchers for things that I have not bought from them. It took me a while to realise that the things that I did buy were being profiled against a particular type of customer (and not just ‘male', over 40), and that what they were trying to do was to get me to buy the things from them that I would normally go to another store to buy - garden tools (male, over 40), tee shirts with improbable slogans on them (male, over 40, probably has a teenage child), or motor bike accessories (male, over 40, planning a mid life crisis any day now). Their profiling is actually so sophisticated that I am probably a hundred times more predictable to a supermarket than I am to myself.
Or there is the gas company model - the slimmed down, lean machine that delivers bandwidth and access, without frills. Simple, clear tariffs and options, which is what the customer wants. Some companies will thrive on this business model.


Either way, or whichever way - there are many models and many choices to be made - the world of telecoms is going through a huge transformation, slimming down and getting as fit as it can as fast as it can, to be ready for whatever life throws at it - from whichever direction.


There was a survey done at a recent conference. A question was put to the audience about how many of their companies were planning, implementing or had implemented a business transformation project. Over 80 per cent of the audience of telecoms companies answered ‘yes' to one of those three options.


Which takes us back to the question of readiness. How do we get ready? What do we transform into? And if we do not know, how do we prepare? If a company is driven by strategy, that strategy must be supported by the processes and systems of the company. And even if that strategy is to be ready, flexible and able to react to market change and market opportunity, then the processes and architectures must be able to support that strategy and implement those as yet undefined business rules.


And the next question is this - do your processes and systems currently support the strategy of the company? It is quite likely that, having upgraded or replaced legacy systems eight years ago and bought the software that solved the problem of the day - the race for market share, fast tariff change capabilities - you now find this is not what is needed in the new, customer-centric, as yet undefined world. You and your management are nervous about doing it all again. The memories of sleepless nights may still be with you. I am sure that the inclination is to wait and see, to play it safe.


And while time ticks away, you are probably working around the edges of the problem - automating pieces of processes (an absolute necessity), opening up new channels and payment methods, but the big problem is still there.


Ideally you are going to need something so flexible that whether your management says ‘gas company model' or ‘supermarket' or even ‘gas company that sells shampoo' you are there, proving my point of the last six years - that billing is strategic. You must be ready for the meetings with Marketing and Management, and able to offer suggestions for innovative products and bundles that could be presented to customers, now.
The other, bigger problem is that it is all very well for someone who writes articles, and does not get his hands dirty, to say that you need to change; but change is hugely risky, and the likelihood is that your management is risk averse.


The innovative software that is needed to be ready for the ‘next big thing' is generally produced by small, innovative companies, and we both know how popular that is going to be when you are putting the business case to management. But I also know that the worst scenario is the realisation that your processes cannot support the new product or service that your competitors have just launched. There are ways round this, of course: there are large systems integrators who are well aware that communications companies need to innovate, but also need to feel safe when doing so, and they are addressing this problem. Your existing partners are also well aware of the challenges and implications, and are able to help.


But the bottom line is that innovation is essential, and soon. In this world, it only takes one of your competitors to be first and fast and successful and the game will change, and you will be struggling to catch up.


The saying may still be true - that billing, or revenue management, is where you implement your business rules - but we do not have the luxury of knowing the whole strategy before we have to implement the systems and processes that must support the unknown rules of tomorrow.
We should prepare.

Alex Leslie is a Communications Industry Consultant, and can be contacted via alex.leslie@btinternet.com

The move to open source in content delivery could be the much-needed catalyst for driving the mobile web forward, says Mark Watson

The analyst community has often painted a rosy picture of the mobile Internet industry. Take, for example, Forrester's recent prediction that 38 per cent of European users will access the mobile Internet by 2013, with the number of 3.5G devices overtaking GSM/GPRS devices in the market by that date. Meanwhile, other analyst forecasts for the global market size for mobile content range from a conservative £3bn to an ambitious £10bn over the next three years. Yet, while forecasts are routinely upbeat about the industry's potential, to many involved in the mobile content market it has become evident that these predictions are taking much longer than expected to come to fruition; the abundance of compelling content and web applications that exists on the traditional Internet has not yet arrived on the mobile web.


To many it is market fragmentation that has stalled the development - and ultimately consumer uptake - of mobile content. Faster mobile networks, rich browsers, and compelling devices are now all established features of mobile Internet services. In fact it is the diversity of choice and capability in handsets in particular that has created a frustrating degree of fragmentation in the market. Fixed-line Internet developers, for example, are tasked with building compelling applications and making them available on the web, and can be confident that they will be accessible - language issues apart - to pretty much any PC user who can reach them. Developers for the mobile web have to build and test an application separately on most of the devices that Vodafone, Orange, T-Mobile and any other operator supports, globally. They have to change it or extend it every time a new phone is released, and must plan for upcoming new handsets.


Fragmentation has created the very real need for the many mobile content delivery and transcoding services that are available in the market. This technology enables operators and content owners to increasingly move to a ‘write once, view anywhere' content strategy that reduces the complexity of managing mobile content. Server-based solutions that automatically convert PC websites in real time to work on thousands of wireless devices, without the need for high-end hardware or specialised software, instantly make a vast number of PC websites available to most mobile phone users across the globe. Transcoders are part of a stop-gap solution, but they can intelligently filter, condition, analyse and distil PC website content before optimising and delivering web pages that are formatted in real time, both visually and technically, for individual devices. Pages can have custom headers, footers and messages inserted within the conversion process to maintain full control over branding and service presentation.
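
By way of illustration only, the kind of request-time adaptation a transcoding proxy performs might be sketched as follows. The device database, attribute names and rewriting rules here are assumptions invented for the sketch, not the behaviour of any commercial transcoder:

```python
# Minimal sketch of request-time content adaptation by a transcoding proxy.
# Device attributes and rewriting rules are illustrative assumptions only.

DEVICE_DB = {
    # keyed by a User-Agent fragment; values are assumed device capabilities
    "NokiaN95": {"screen_width": 240, "supports_js": False},
    "iPhone":   {"screen_width": 320, "supports_js": True},
}
DEFAULT_PROFILE = {"screen_width": 176, "supports_js": False}

def lookup_profile(user_agent: str) -> dict:
    """Pick a capability profile by matching a fragment of the User-Agent header."""
    for fragment, profile in DEVICE_DB.items():
        if fragment.lower() in user_agent.lower():
            return profile
    return DEFAULT_PROFILE

def transcode(html: str, user_agent: str, operator_header: str = "") -> str:
    """Condition a PC page for a handset: scale images, strip scripts if unsupported,
    and insert an operator-branded header, as described in the article."""
    profile = lookup_profile(user_agent)
    page = html
    if not profile["supports_js"]:
        # crude illustration only: a real transcoder parses and rewrites the DOM
        page = page.replace("<script>", "<!-- script removed ").replace("</script>", " -->")
    page = page.replace('width="800"', f'width="{profile["screen_width"]}"')
    return operator_header + page

if __name__ == "__main__":
    sample = '<html><img width="800" src="hero.jpg"><script>heavy()</script></html>'
    print(transcode(sample, "Mozilla/5.0 (NokiaN95)", operator_header="<p>MyOperator</p>"))
```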


Indeed, usability and presentation of content will prove to be a key component of future mobile Internet success. Take, for example, the much-hyped launch of the iPhone. It is a device that has undoubtedly had an impact on mobile Internet usage. Despite running over an older connectivity technology in EDGE, mobile content usage among iPhone users is among the highest of any device according to analyst firm StatCounter, with many pundits suggesting that this is because the device is built around intuitive use of the mobile Internet. New devices such as the recently launched HTC Touch Diamond and this summer's much-anticipated Nokia N96 have also taken up the mantle, with easy-to-use interfaces that promote the use of mobile content and applications.
If a fundamental issue is the presentation of content, then a major causal factor in the slow uptake of the mobile web could be that the software which can provide a universally improved user experience is being withheld - because of entrenched proprietary software licensing models - from the very people who should be building the new mobile web.
The success of the traditional Internet can largely be attributed to its openness - browsers are relatively standardised and the tools to create databases and complex systems, such as Linux and SQL, are widely and freely available as Open Source Software (OSS) through General Public Licenses (GPL). This environment has made it easy to develop for the web and has enabled the community to focus on what they do best - create fresh and compelling content, rather than worry about how to deliver it. With the mobile Internet, however, the story couldn't be more different - the market has always been highly fragmented, with an overwhelming array of devices with diverse characteristics, operating systems and networks jostling for position. And as smart phones get smarter and newer platforms - Google's Android and Apple's iPhone, most recently - continue to enter the market, the gulf between high-end devices and low-cost, mass-market handsets is only set to widen.
In this environment, it's not possible for content providers to just put a mobile web application "out there" and see the immediate uptake that they'd expect on the wider Internet. Instead, they need access to the right enabling technology to reach the mass market - software that can overcome fragmentation issues, as well as scale to support applications as they become increasingly successful across multiple markets in the longer term.


To date, the expensive licenses surrounding such software have meant that this all-important access has been limited or even non-existent for many smaller developers and content providers. And, without ubiquitous access, the growth of the mobile Internet industry as a whole has been held back.


In the traditional Internet environment this access has been provided through OSS models, so couldn't the same principle be applied to mobile? OSS has the ability to provide an underlying platform for the management and delivery of mobile content and applications, and offer a common and scalable basis upon which individual content owners can develop differentiated and compelling products.


To this end, Volantis Systems has set its software free too, with the Mobility Server open for developers to download and use for free, and 1.2 million lines of code available to extend and improve as the community sees fit. It is a move that will bring openness to the mobile web and will help to overcome the difficulties of divergence between networks, handsets, and browsers. In all, the result of seven years of development has been opened up to the industry, along with access to a device database containing 653 attributes for more than 5,100 devices. That has to be good news for content owners, who need easy-to-use tools in order to help their creativity come alive.


It is undoubtedly true that both the developer and operator communities are supportive of an industry wide move to open source. It will encourage developers to start extending the capabilities of the software currently out there and make available some new and compelling mobile content. Moreover, OSS mobility software, with licensing terms favourable to the enterprise audience, will open up mobile Internet development to a vast array of new companies. It is that content - the long tail - that will enable the mobile web to start to fulfil its potential and at least some of the analyst predictions made about it.
Operators too have expressed support for the community standards process, which has been driven by the World Wide Web Consortium (W3C), to create the Device Independent Authoring Language (DIAL) specification. Web development mark-up languages that comply with the DIAL specification, such as XDIME, can be used interchangeably to create content viewable on any mobile device.


It is a truism that the more open source applications we see on mobile devices, the more likely the industry is to be freed from restrictive licensing costs. In recent times the industry has seen the development of numerous handsets based on the LiMo platform, and Google's Android platform has already helped to build an ecosystem of mobile developers. It's proof that the openness of the traditional Internet is slowly coming to the mobile ecosystem too. Indeed, the mobile web should become the platform upon which mobile data revenues are based, with open source helping to overcome the limitations imposed on content creation by license fees.


What of those analyst predictions mentioned earlier? Sizing a market as dynamic and rapidly changing as the mobile Internet is not without its difficulties; hence the vast array of differing opinions. What we can be certain of is that better handsets, faster networks, and intuitive mobile web based phones will certainly help to drive uptake of mobile content. But as an industry we are now helping ourselves. The move to open source in content delivery and transcoding could be the catalyst that drives the mobile web forward. OSS will redefine the extent to which content publishers will be able to utilise and capitalise on the mobile web's opportunity and enable the emergence of the long tail of content.

Mark Watson is CEO and co-founder of Volantis Systems
www.volantis.com

There's a stark dynamic framing the telecoms Operations Support Systems (OSS) market. Until recently networks were expensive, while the price tags for the OSS systems used to assure the services running across them were, by comparison, puny. Today that's all changed - not because OSS systems have become significantly more costly, but because network components are a fraction of the capital cost they were 15 years ago. The result is an apparent cost disparity that may be causing some operators to swallow hard and think about putting off their OSS investments, Thomas Sutter, CEO of Nexus Telecom, tells Ian Scales. That would be a huge mistake, he says, because next generation networks actually need more OSS handholding than their predecessors, not less

Naturally, Thomas has an interest. Nexus Telecom specializes in data collection, passive monitoring and network and service investigation systems and, while Nexus Telecom's own sales are still on a healthy upswing (the company is growing in double figures), he's growing increasingly alarmed at some of the questions and observations he's hearing back from the market. "There is a whole raft of issues that need exploring around the introduction of IP and what that can and can't do," he says. "And we need to understand those issues in the light of the fundamental dynamics of computer technology. I think what's happening in our little area of OSS is the same as what tends to happen right across the high technology field. As the underlying hardware becomes ten times more powerful and ten times as cheap, it changes the points of difference and value within competing product sets." If you go back and look at the PC market, says Thomas, as you got more powerful hardware, the computers became cheaper but more standard and the real value and product differentiation was, and still is, to be found in the software. "And if you look at the way the PC system itself has changed, you see that when microcomputers were still fairly primitive in the early 1980s all the processor power and memory tended to be dedicated to the actual application task  - you know, adding up figures in a spreadsheet, or shuffling words about in a word processor. But as PC power grew, the excess processing cycles were put to work at the real system bottleneck: the user interface. Today my instincts tell me that 90 per cent of the PC's energy is spent on generating the graphical user interface.  Well I think it's very similar in our field. In other words, the network infrastructure has become hugely more efficient and cost effective and that's enabled the industry to concentrate on the software. And the industry's equivalent of the user interface, from the telco point of view at least, is arguably the OSS. "You could even argue that the relative rise in the cost of OSS is a sign that the telecoms market as a whole is maturing." That makes sense, but if that's the case what are these other issues that make the transformation to IP and commodity network hardware so problematical from an OSS point of view?

"There's a big problem over perceptions and expectations. As the networks transform and we go to 'everything over IP', the scene starts to look different and people start to doubt whether the current or old concepts of service assurance are still valid. "So for example, people come to our booth and ask, 'Do you think passive probe monitoring is still needed?  Or even, is it still feasible?  Can it still do the job?' After all, as the number of interfaces decrease in this large but simplified network, if you plug into an interface you're not going to detect immediately any direct relationships between different network elements doing a telecom job like before, all you'll see is a huge IP pipe with one stream of IP packets including traffic from many different network elements and what good is that? "And following on from that perception, many customers hope that the new, big bandwidth networks are somehow self-healing and that they are in less danger of getting into trouble. Well they aren't.  If anything, while the topological architecture of the network is simplifying things (big IP pipes with everything running over them), the network's operating complexity is actually increasing." As Thomas explains, whenever a new technology comes along it seems in its initial phases to have solved all the problems associated with the last, but it's also inevitably created new inefficiencies. "If you take the concept of using IP as a transport layer for everything, then the single network element of the equation does have the effect of making the network simpler and more converged and cost effective. But the by-product of that is that the network elements tend to be highly specialized engines for passing through the data  - no single network element has to care about the network-wide service." So instead of a top-down, authoritarian hierarchy that controls network functions, you effectively end up with 'networking by committee'. And as anyone who has served on a committee knows, there is always a huge, time-consuming flow of information between committee members before anything gets decided.  So a 'flat' IP communications network requires an avalanche of communications in the form of signaling messages if all the distributed functions are to co-ordinate their activities. But does that really make a huge difference; just how much extra complexity is there? "Let's take LTE [Long Term Evolution], the next generation of wireless technology after 3G. On the surface it naturally looks simpler because everything goes over IP. But guess what? When you look under the bonnet at the signaling it's actually much more complicated for the voice application than anything we've had before. "We thought it had reached a remarkable level of complexity when GSM was introduced. Back then, to establish a call we needed about 11 or 12 standard signaling messages, which we thought was scary. Then, when we went into GPRS, the number of messages required to set up a session was close to 50.  When we went to 3G the number of messages for a handover increased to around 100 to set up a standard call. Now we run 3GPP Release 4 networks (over IP) where in certain cases you need several hundred signaling messages (standard circuit switching signaling protocol) to perform handovers or other functions; and these messages are flowing between many different logical network element types or different logical network functions. 
"So yes of course, when you plug in with passive monitoring you're probably looking at a single IP flow and it all looks very simple, but when you drill down and look at the actual signaling and try to work out who is talking to who, it becomes a nightmare. Maybe you want to try to draw a picture to show all this with arrows - well, it's going to be a very complex picture with hundreds of signaling messages flying about for every call established. "And if you think that sort of complexity isn't going to give you problems:  one of my customers - before he had one of our solutions I hasten to add - took  three weeks using a protocol analyzer to compile a flow chart of signaling events across his network. You simply can't operate like that - literally. And by the way, keep in mind that even after GSM networks became very mature, all the major operators went into SS7 passive monitoring to finally get the last 20 per cent of network optimization and health keeping done. So if this was needed in the very mature environment of GSM, what is the driver of doubting it for less mature but far more complex new technologies? ''

Underpinning a lot of the questions about OSS from operators is the cost disparity between the OSS and the network it serves, says Thomas. "Today our customers are buying new packet switched network infrastructure and to build a big network today you're probably talking about 10 to 20 million dollars. Ten or 15 years ago they were talking about 300 to 400 million, so in ten years the price of network infrastructure has come down by a huge amount while network capacity has actually risen. That's an extraordinary change. 
"But here's the big problem from our point of view.  Ten years ago when you spent $200 million on the network you might spend $3 million on passive probe monitoring.  Today it's $10 million on the network and $3 million on the passive probing solution. Today, also, the IP networks are being introduced into a hybrid, multiple technology network environment so during this transition the service assurance solution is getting even more complex. "So our customers are saying, ‘Hey!  Today we have to pay a third of the entire network budget on service assurance and the management is asking me, 'What the hell's going on?' How can it be that just to get some quality I need to invest a third of the money into service assurance?' "You can see why those sorts of conversations are at the root of all the doubts about whether they'll now need the OSS - they're asking: 'why isn't there a magic vendor who can deliver me a self-healing network so that I don't have to spend all this money?" Competitive pressures don't help either. "Today, time-to-market must be fast and done at low cost," says Thomas, "so if I'm a shareholder in a network equipment manufacturing company and they have the technology to do the job of delivering a communication service from one end to the other, I want them to go out to the market.  I don't want them to say, 'OK, we now have the basic functionality but please don't make us go to the market, first can we build self-healing capabilities, or built-in service assurance functionality or built-in end-to-end service monitoring systems - then go to the market?'  This won't happen." The great thing about the 'simple' IP network was the way it has commoditized the underlying hardware costs, says Thomas. "As I've illustrated, the 'cost' of this simplicity is that the complexity has been moved on rather than eliminated - it now resides in the signaling chatter generated by the ad hoc 'committees' of elements formed to run the flat, non-hierarchical IP network. "From the network operator's point of view there's an expectation problem: the capital cost of the network itself is being vastly reduced, but that reduction isn't being mirrored by similar cost reductions in the support systems.  If anything, because of the increased complexity the costs of the support systems are going up. "And it's always been difficult to sell service assurance because it's not strictly quantitative. The guy investing in the network elements has an easy job getting the money - he tells the board if there's no network element there's no calls and there's no money. But with service assurance much more complicated qualitative arguments must be deployed. You've got to say, 'If we don't do this, the probability is that 'x' number of customers may be lost. And there is still no exact mathematical way to calculate what benefits you derive from a lot of OSS investment."
The problem, says Thomas, is as it's always been: building the cloud of network elements - the raw capability, if you like - is always the priority, and what you do about ensuring there's a way of fixing the network when something goes wrong is always secondary. "When you buy, you buy on functionality. And to be fair it's the same with us when we're developing our own products. We ask ourselves, what should we build first? Should we build new functionality for our product, or should we concentrate on availability, stability, ease of installation and configuration? If I do too much of the second I'll have fewer features to sell and I'll lose the competitive battle. "The OSS guy within the operator's organization knows that there's still a big requirement for investment, but for the people in the layer above it's very difficult to decide - especially when they've been sold the dream of the less complex architecture. It's understandable that they ask: 'Why does it need all this investment in service assurance systems when it was supposed to be a complexity-buster?'" So on each new iteration of technology, even though they've been here before, service providers have a glimmer of hope that 'this time' the technology will look after itself. We need to look back at our history within telecoms and take on board what actually happens.

The mobile advertising sector will be worth $18.5 billion by 2010, largely because advertisers want to take advantage of the most exciting channel for delivering targeted messaging in the history of advertising, but also because operators want to supplement their traditional business with an additional revenue stream, says Cathal O'Toole

Mobile advertising presents a huge opportunity for both the relatively young mobile community and the well-established advertising and media industry. For both, there is the chance to be in at the start of something big. For mobile, it is expected to be one of the most important revenue-generating opportunities presented by mobile technology. For the advertisers, this exciting means of mass-audience targeting and message delivery will, as Telefonica O2's CEO Peter Erskine predicted in 2007, grow even faster than Internet advertising, which has already surpassed radio. He added: "It seems inevitable that the mobile screen - just as cinema, TV and PCs before - will be used for advertising, and when you consider that there are a lot more mobiles than any other device, the rise and rise of mobile advertising is unstoppable."


In the context of the wider world, mobile advertising is something which will soon positively affect everyone who has a mobile phone, so it is important that both the advertising and the mobile telecoms industries make its design successful from the start. Through a co-operative, considered approach, they can create an optimum mobile advertising ecosystem from the beginning.


There are, however, a number of difficulties that must be overcome in order to take advantage of this opportunity. Firstly, the lack of mutual understanding that exists between the two sectors must be addressed through co-operation. Historically, advertising and mobile communications have evolved at different times and in very different ways, resulting in very different industrial cultures, languages and approaches to doing business. For the mobile operators to take advantage of the rise of mobile advertising, they will need to implement solutions that speak the language of the advertising community - Cost per Impression, Cost per Click, Cost per Acquisition - rather than Messages per Second, Transactions per Second, and so on. And these solutions, once implemented, must allow the easy placement of advertisements by media and advertising agencies, using interfaces similar to those they are used to for the Internet or, indeed, for traditional media.
Secondly, the technical fragmentation and complexity presented by the mobile channel offers a potentially confusing variety of options as to where to place an advert, such as: text message, multi-media message, ringtone, during an interactive voice menu, during browsing or downloading, or on-line chat sessions. In addition, there is a bewildering breadth of terms to describe these options - SMS, MMS, RBT, IVR, IM, etc - and technical restrictions on the nature of the advert that can be displayed on these different options in terms of such variables as size, timing, and ease of response. A text message, for example, offers a very basic text experience but is compatible with all handsets, while a multi-media message offers a much richer media option but it is not compatible with every mobile device. Each of these technical options offers different possibilities for the delivery of advertising campaigns. Some will be suited to one type of campaign whilst others will be ideal for different types of campaign.


As a result of this complicated picture, it will be crucial for the mobile industry to guide the advertising world through this maze in order to achieve optimum results. Operators must work with the advertising industry to simplify the use of these technologies. Organisations like the GSMA and MMA are already working to set guidelines and advisory statements to guide the development of this business, and they will be instrumental in designing the best way forward for mobile advertising. In 2007, the GSMA announced its ‘Mobile Advertising Programme' to ensure the establishment of guidelines and standards in support of this new sector of the industry. The operators must continue to support this work and to develop an environment that encourages advertisers and agencies to deliver campaigns over mobile networks.


Thirdly, there is the challenge of commercial unfamiliarity on both sides. Continuous debate is taking place about how to price an advertisement on a mobile and how to adjust this price based on the degree of relevance, the timing, the media content and the ability to respond. For their part, the operators are afraid of losing out due to their inexperience of the market and uncertainty about what the value of their assets is to the advertising community. The advertisers, on the other hand, are afraid that the ‘unproven' advertising channel offered by mobile communications will be ineffective, and so they are hesitant about committing large percentages of their marketing budgets to this embryonic vehicle.


Overcoming these obstacles will only happen with time and experience in the form, perhaps, of some early trial agreements. But it is even more important that all parties enter into a co-operative atmosphere conducive to learning what will or will not work for all involved. If the above three difficulties are addressed then the advertising industry and the operators can quickly take a strong position.


Mobile advertising can take many forms, each of which has its own characteristics that makes it suitable for specific campaigns. Broadly, the advertising media can be broken down into three categories: advertising over messaging, advertising during browsing, or advertising using media or VAS applications.


Advertising over messaging is where advertisements are sent using SMS, MMS, Instant Messaging, or other messaging media. Mobile subscribers have been experiencing push advertising over SMS for a number of years but this has taken the form of unsophisticated, outbound, and untargeted marketing messages. Still, SMS remains an extremely powerful vehicle for mobile advertising delivery. Indeed, SMS delivery alone is projected by some industry sources to account for US$9 billion of mobile advertising revenues by 2011.
For SMS and MMS options to work in the delivery of a mobile advert, the operator's mobile advertising platform, or advertising engine, needs to communicate with the network nodes. For peer-to-peer messages coming from the sender/originator, the SMSC recognises whether or not the sender is an advertising subscriber; if so, the SMSC alerts the operator's advertising engine, which selects the appropriate targeted advert for the subscriber based on specific user-profile criteria - for example time, sender profile and receiver profile - inserts the advert into the SMS/MMS, and sends it back to the SMSC (or MMSC), which then completes message delivery to the recipient.
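
As a rough illustration of that flow - with the interface, profile fields and targeting rule below being assumptions for the sketch rather than any vendor's actual API - the logic at the advertising engine might look like this:

```python
# Illustrative sketch of the P2P flow described above: the SMSC hands an opted-in
# subscriber's message to the advertising engine, which selects a targeted advert,
# appends it and returns the message for delivery. All names are assumptions.

from datetime import datetime
from typing import Optional

AD_SUBSCRIBERS = {"+353860000001"}          # senders who have opted in to receive ads
PROFILES = {"+353860000001": {"age_band": "18-30"}}
AD_INVENTORY = [
    {"text": "2-for-1 cinema tickets tonight!", "target_age": "18-30", "hours": range(17, 23)},
    {"text": "Free coffee with any breakfast", "target_age": "any", "hours": range(6, 11)},
]

def select_ad(sender: str, now: datetime) -> Optional[str]:
    """Very simple targeting: match the sender's age band and the time of day."""
    profile = PROFILES.get(sender, {"age_band": "any"})
    for ad in AD_INVENTORY:
        if now.hour in ad["hours"] and ad["target_age"] in (profile["age_band"], "any"):
            return ad["text"]
    return None

def handle_p2p_sms(sender: str, body: str, now: datetime) -> str:
    """Called per P2P message; returns the (possibly ad-enriched) body for delivery."""
    if sender not in AD_SUBSCRIBERS:
        return body                          # not an advertising subscriber: deliver unchanged
    ad = select_ad(sender, now)
    return f"{body}\n--\n{ad}" if ad else body

print(handle_p2p_sms("+353860000001", "See you at 8?", datetime(2008, 6, 1, 20, 0)))
```
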
The specific campaign being run by the advertiser will determine the exact content of the messages. Consumers targeted by the ad message might, for example, be asked to send a text to a 5-digit short code promoted through existing TV, radio, online or print media, in order to receive the offered product, service or other brand information. In this way, traditional standalone forms of advertising, such as outdoor billboard and TV, are being turned into interactive media through the power of text messaging, enabling the target audience, for example, to text in for free samples of any number of products, services and consumables.


In addition, some of the larger mobile operator brands are eliminating the use of direct mail and using multimedia messaging instead. This enables the marketing teams to create graphically rich messages incorporating animations, audio, images and streaming video. Not only will such campaigns be more cost-effective without print and mailing charges, but they will also be more rapid in their execution and more effective in eliciting a response from the audience than other means. With phone in hand, the willing recipient can respond immediately to an advertising offer with calls to call centres (or by requesting in-bound calls to the handset) or click-throughs to mobile Internet sites.


Secondly, there is advertising via an Internet browser offering a similar experience, though in miniature, to advertising on the Internet. With a huge surge in the number of mobile Internet sites available to advertisers, typically as companion sites to traditional web pages, combined with the continuing rise in numbers and sophistication of mobile audiences, it is becoming more and more viable for advertisers to consider the positioning of display-type ads on such mobile sites.


As with traditional online banner ads, mobile Internet ads consist of text and/or graphics, and offer the target consumer a variety of response options. A simple click-through, for instance, may reach a product registration page, or there may be a click-to-call option initiating an outbound call to a call centre. A click-to-buy option is also one possible route, with a mobile Internet purchase appearing on the consumer's normal phone bill. There may also be a simple click requesting a text message reply for further product or service information. At present, mobile Internet display advertising has been shown to be up to ten times more effective than Internet banner ads in terms of response rates.
Thirdly, there is media/VAS-related advertising and handset/content-related advertising. Media/VAS-related advertising is where an advert is inserted into the service experience of, say, Ringback Tones or Interactive Voice Response (IVR), on mobile TV, or using idle screen time on a handset. An IVR message, for example, may say to the caller, "before entering your PIN to retrieve your messages, did you know that ‘Brand Name' is on offer..." or similar promotional messages.


The methodology behind this option is similar to messaging domain advertising although the advertising message is received via some form of application such as Voice Messaging or Ringback Tone service provided by a third party content provider outside the network.
Although the media/VAS domain, as a vehicle for mobile advertising, is the newest option, one major brand has already utilised this method using idle screen time to great effect in the Far East last year. Targeting subscribers on the AIS network in Thailand, an interactive content campaign was run on behalf of the Honda motor company, with messages broadcast twice daily appearing on millions of users' phone screens but only when they were idle. Offering tips about motorcycle safety and fuel efficiency, the campaign's main aim was to promote the brand and encourage user response through the incentive of a click-through prize-draw.


Within three weeks, more than three million unique impressions, targeted at subscribers in the Bangkok area, were generated, and more than 100,000 users clicked to participate in the prize draw - and receive more information from Honda. These results show the ability of the mobile phone to be a truly mass-market advertising vehicle.


It should be noted, however, that any mobile advertising campaign may draw on a number of the above mobile possibilities, combining, say, idle-screen with SMS, mobile TV and non-mobile advertising media. And each module of a campaign may also allow the user to interact with another form - for example, offering short codes to viewers of a TV advert so that they can follow, via their mobile, the next leg of the campaign journey.


It is important that the mobile operator moves quickly to discuss the mobile channel with a solution provider that has experience building advertising solutions, and with a media agency that has early experience or understanding in the delivery of advertisements on the mobile channel.


It will then be essential to set up trial advertising campaigns in order for the advertising side to begin building experience and understanding of the operator organisation. O2 Telefonica ran trials last year with "encouraging signs of customer receptivity" and "no negative impact on overall customer experience or brand perception".

Once all operators take part in such evaluations, they will find that the opportunities for driving advertising revenues will grow, and revenue will start to be realised from previously unprofitable traffic on the network.

Cathal O'Toole is Product Manager, Jinny Software  www.jinny.ie

Testing and monitoring have transitioned from being a ‘necessary evil' needed to maintain the network into a key business enabler, according to Michele Campriani, allowing operators to rapidly expand their service portfolios and at the same time reduce operating expenses

The race is on as mobile operators across the globe accelerate their migration path to mobile data services.  This rush stems from explosive customer demand for web-based services, the opportunity to capture increased average revenue per user (ARPU) while reducing operating expenses, as well as the desire to stay one step ahead of the competition. Most industry insiders agree - in the not-so-distant future, it will be rich mobile services such as videoconferencing, mobile gaming and presence that will set mobile operators apart. 
As the technical early adopters have experienced, however, the transition to converged networks supporting mobile data services presents complicated challenges.  At the heart of the problem lies the fact that most of the time, IP and PSTN-based services are operating in parallel during the convergence phase. While the operator may well understand how to measure service quality and SLAs on legacy traffic, new IP-based services are much different, and require vastly different monitoring techniques and service quality measurements. 
Hence this new world of mobile data services has greatly expanded the role of network troubleshooting and monitoring tools. In essence, they have transitioned from being a ‘necessary evil' to maintain the network, to a key business enabler, allowing operators to rapidly expand their service portfolios and at the same time reduce operating expenses.  As the nature of networks and services has evolved, so have these tools.  In the sections below, we take a closer look at how a mobile operator can most effectively utilize protocol analysis and network monitoring tools to successfully launch and manage mobile data services.


New access technologies such as 3G/UMTS and 3.5G/HSPA have finally provided the cost-effective ‘edge' bandwidth required to offer mobile data services. Once these services are offered, the operator will likely experience a burst in network usage, driven by adoption of the new services as well as by the porting of legacy traffic onto the new infrastructure.
At this point, the core of the mobile network is one of the most vulnerable areas. This is due to the fact that the core network is typically tailored to access network bandwidth... so as the operator migrates from GSM to UMTS to HSPA, for example, it will amplify core infrastructure weaknesses.

   
It is critical at this point that the operator be able to correlate and monitor all activity occurring over network elements interconnected through various protocols and interfaces. The operator must simultaneously correlate information exchanged by each device involved in transactions, including those on the internal signaling network as well as external connections for calls made to/from subscribers of other operators.


The complexity lies in the fact that there are many signaling exchanges and interfaces involved during data transactions, each giving a different level of visibility into network and service issues.  For example, in a UMTS network, the Gn interface is the most crucial in providing overall visibility into the network (eg for ‘macro' problems and issues related to authentication with external networks), while the Gi interface provides information on the quality of IP traffic and services.  So unless the Gn and Gi interfaces are correlated, there is no way to test the interconnection between the operator data network and the external data network (e.g. the Internet).


Thus managing each of these interfaces, as well as all of the traffic traversing them, becomes highly important for delivering high-quality mobile data services. This type of monitoring is readily provided by a new breed of distributed monitoring system. Here are example interface correlations and the key information they provide (a minimal correlation sketch follows the list):

  • Gn/Gi - to check the interconnection between the operator data network and the external data network for a snapshot of the state of services. After the Gn/Gi correlation, monitoring either the Gb or the IuPS interface helps triangulate the location of a problem
  • Iu/Gn/Gi - comprehensive view of both the core and access network signaling messages of a 3G / 3.5G data session (PDP context)
  • Gb/Gr - to decrypt signaling messages of a data session over the Gb
  • Iu/Gr - to analyze the data session activation and authentication phases
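
In practice such a correlation amounts to joining core-side signalling records with user-plane quality records for the same session. The sketch below assumes simplified monitoring records keyed by IMSI and tunnel identifier; the field names are invented for illustration, not those of any monitoring product:

```python
# Illustrative correlation of Gn and Gi records for PDP contexts, keyed on an
# assumed tunnel identifier (TEID) and IMSI. Real probes expose far richer records.

gn_records = [
    {"imsi": "272011234567890", "teid": "0xA1", "event": "CreatePDPContext", "cause": "accepted"},
    {"imsi": "272019876543210", "teid": "0xB2", "event": "CreatePDPContext", "cause": "missing or unknown APN"},
]
gi_records = [
    {"teid": "0xA1", "tcp_resets": 0, "dns_failures": 1, "throughput_kbps": 384},
]

def correlate(gn, gi):
    """Join core-side (Gn) signalling with user-plane (Gi) quality per session."""
    gi_by_teid = {r["teid"]: r for r in gi}
    for record in gn:
        quality = gi_by_teid.get(record["teid"])
        yield {
            "imsi": record["imsi"],
            "activation": record["cause"],
            "user_plane": quality or "no Gi traffic seen (session never reached the external network)",
        }

for session in correlate(gn_records, gi_records):
    print(session)
```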

Compared with voice services, mobile data services require vastly different monitoring and troubleshooting techniques. For legacy voice, it is generally assumed that if the network has high QoS, service quality is good. For data services, this is no longer true. This is due to the fact that most mobile data services are UDP/IP and TCP/IP-based, with many of them being high-bandwidth and interactive, hence highly affected by packet loss, TCP resets/latency and application-layer issues such as DNS and HTTP anomalies.


Thus for data services, understanding the service actually experienced by the end user - the ‘quality of experience', or QoE - now becomes the important metric. Unfortunately, QoE cannot be provided by the usual network-element metrics and tests. Here are basic measurement guidelines for the major mobile data service types (a simple sketch mapping these guidelines to measurements follows the list).

  • Background services such as web, FTP and e-mail are not time-sensitive, so delay and jitter are not a big issue. More important for these services are throughput per call, traffic per call and packet loss.
  • Streaming services such as webcasting and video viewing are much more real-time sensitive. For these services, delay-related measurements such as jitter and delay are most important.
  • Conversational services such as video calls and mobile gaming must be based on all of the above... throughput, packet loss, jitter and delay.

Thus the operator must now integrate more measurement types, and know how each affects the other.
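
One simple way to capture the guidelines above is a mapping from service class to the measurements that matter for it. The threshold values below are illustrative assumptions chosen purely for the sketch, not standardised targets:

```python
# Mapping of service classes to the measurements that matter most, per the guidelines
# above. Threshold values are illustrative assumptions, not standards.

SERVICE_KPIS = {
    "background":     {"throughput_kbps": 64, "packet_loss_pct": 1.0},                  # web, FTP, e-mail
    "streaming":      {"jitter_ms": 50, "delay_ms": 300},                               # webcasting, video
    "conversational": {"throughput_kbps": 64, "packet_loss_pct": 0.5, "jitter_ms": 30, "delay_ms": 150},
}

def evaluate(service_class: str, measured: dict) -> list:
    """Return the KPIs that breach their (assumed) targets for this service class."""
    targets = SERVICE_KPIS[service_class]
    breaches = []
    for kpi, target in targets.items():
        value = measured.get(kpi)
        if value is None:
            continue
        # throughput must stay above its target; loss, jitter and delay must stay below
        bad = value < target if kpi == "throughput_kbps" else value > target
        if bad:
            breaches.append((kpi, value, target))
    return breaches

print(evaluate("conversational", {"throughput_kbps": 80, "packet_loss_pct": 0.2, "jitter_ms": 45, "delay_ms": 120}))
```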

As an example of these new measurements, the screenshot in Figure 2 shows a few key TCP user-plane statistics that can be used either at a summary level or at a drill-down level to assess how TCP resets and TCP delay times are affecting services. TCP is a very important protocol to monitor for mobile data services, as it most directly affects service responsiveness once a service link has been established.


Perhaps the most important element of service monitoring and troubleshooting lies in understanding what is happening at the application layer. After all, it is protocols such as DNS and HTTP that ultimately determine service availability. In Figure 3, we see a summary-level view of DNS response codes over a selected set of records. In this example, we see clearly that we must drill down further and investigate the cause of the DNS name failures in order to achieve high service availability. Other DNS-related information we might want to investigate includes average DNS response times, top DNS addresses and their occurrences, and DNS query types (host, mail, domain name, etc). This measurement and the one above are only a small sampling of the new measurements that must be learned.
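
As an indication of the kind of drill-down this implies - with the response codes, names and values below invented purely for illustration - a summary of monitored DNS answers might be computed like this:

```python
# Illustrative summary of DNS response codes and response times from monitored records.
# The record format and all values are assumptions made for the sketch.

from collections import Counter

dns_records = [
    {"query": "wap.example.com", "rcode": "NOERROR",  "response_ms": 42},
    {"query": "mms.example.net", "rcode": "NXDOMAIN", "response_ms": 61},
    {"query": "wap.example.com", "rcode": "NOERROR",  "response_ms": 38},
    {"query": "m.example.org",   "rcode": "SERVFAIL", "response_ms": 950},
]

codes = Counter(r["rcode"] for r in dns_records)
failures = [r for r in dns_records if r["rcode"] != "NOERROR"]
avg_ms = sum(r["response_ms"] for r in dns_records) / len(dns_records)

print("response codes:", dict(codes))
print("failure rate: %.0f%%" % (100 * len(failures) / len(dns_records)))
print("average response time: %.0f ms" % avg_ms)
print("failing names:", sorted({r["query"] for r in failures}))
```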

  
In summary, while the lure of competitive differentiation, OPEX reduction and increased ARPU are driving operators towards mobile data services, there are significant challenges that must be overcome for successful introduction. At the most basic level, the operator must integrate a multitude of new measurements and network monitoring techniques that provide insights into the IP-based services.  The good news is that significant advancements have been made in capabilities of network monitoring and protocol analysis systems. Mastery of these new tools is critical for the operator hoping to introduce and manage multimedia mobile data services. 

Michele Campriani is General Manager of Sunrise Telecom's Protocol Product Group. www.sunrisetelecom.com

It has now been proven that access to digital communications has a direct and measurable impact on economic growth. Yet despite this, Janne Hazell explains, huge numbers of emerging market communities, often located at the very source of the natural resources which fuel the economy, still remain cut off from basic telecommunications. Herein lies the paradox of the Digital Divide: providing communications to drive economic development is, in itself, cost prohibitive. Though several key barriers have been overcome, one still remains - the cost of transmission to, from and between these people

Significant advances in wireless technologies, together with the economies of scale resulting from hugely successful global initiatives such as GSM, have combined to provide much of the world's population with cost-effective wireless communications. Competition between equipment suppliers, increased government subsidies and important initiatives, such as the GSM Association's ultra low-cost wireless handset initiative, have all helped to drive down network costs. Mobile network operators (MNOs) and, ultimately, the wireless users themselves have benefited from this wireless industry evolution, and we can now experience cost-effective broadband communications. Many MNOs are now focusing almost exclusively on operational overheads, looking to outsource the operation of their networks and drive down costs further. While the drive to reduce the operational cost of running networks has taken centre stage globally, within emerging markets - where ARPU is below US$10 per month - operational overhead costs can often be a bridge too far, leaving communities cut off from basic communications.


While costs vary depending on local factors, one key cost stands out consistently across the world's mobile networks - transmission costs, in particular transmission costs to and from base stations. Many alternative technologies exist today. Within reach of the terrestrial telecommunications grid, optical fibre or copper are the dominant technologies, and short-range microwave links are also common. Cost-effective microwave technology is also dominant for longer distances, although the cost of installing and managing towers has made this an increasingly expensive option. As a result, the use of satellite as a backhaul technology has accelerated, yet this too brings with it operational cost barriers.
Therefore, the focus has now shifted to the reduction in transmission costs to and from wireless base stations and, with the majority of communications (particularly within emerging markets) being local within the communities themselves, the development of technology to address this local communications requirement.


While the price for network equipment and mobile terminals has fallen sharply since the introduction of global digital mobile networks in 1991, the same deflationary trend has not been visible for transmission costs. Transmission costs have been growing steadily to the point where it is now estimated that anything from 15 to 80 per cent of the total cost of ownership of a BTS relates to backhaul transmission costs. While it is clear that the need for cellular backhaul will never be eliminated, it is also clear that for any area with costly backhaul it is a waste of resources to carry local communications back and forth over the backhaul link. That is to say, when two subscribers are located in the same area, there is no reason why their intercommunications must travel across the transmission network just to return again to the same area. Industry estimates for such local traffic are as high as 60 to 80 per cent. MNOs are losing significant profit margins because technology has not been introduced to address the wasted cost of local traffic being unnecessarily carried back and forth across the network instead of being switched locally. By switching calls locally, a very significant share of the operational overhead can be avoided, thus making remote rural wireless service more economical for the service providers.


Another cost driver for cellular backhaul is the signaling specifications to which suppliers must adhere. The GSM specification calls for one or more 2 Mbit/s backhaul links (1.5 Mbit/s in North America) for any base station, depending on capacity. In reality, most remote rural sites only require a fraction of a 2 Mbit/s link. Any excess capacity is wasted and adds operational overhead to sites that are a financial challenge to start with. A solution to the problem is to change the structure of the backhaul link. The trend is to move from PCM-coded TDM backhaul links to IP, as IP lends itself particularly well to dynamic link utilization and manipulation. While niche suppliers have introduced stand-alone equipment that converts the backhaul link to IP and removes all unnecessary transmission and silence, Ericsson, the leading telecom supplier, has built-in support for optimizing the cellular backhaul link over IP. The integrated functionality ensures the best possible performance, even under severe conditions.


Not only does the high cost of backhaul in remote areas put a strain on the business case for wireless coverage, but the infrastructure and operating expenses must also be shared by fewer users than at urban sites. This leads to requirements for lower-cost sites and lower-cost deployments. With fewer subscribers active on the site, idle load on the backhaul link also comes into play. A 3 kbit/s idle load corresponds to a 12-hour voice call every 24 hours, or 20,000 minutes per month - the same volume as the expected voice traffic from a 100-subscriber community. From a backhaul perspective, idle load thus doubles the backhaul cost per call.
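
The arithmetic behind that claim can be checked, assuming roughly 6 kbit/s per optimised voice channel (the mid-point of the 5-8 kbit/s range quoted later in this article):

```python
# Back-of-the-envelope check of the idle-load claim above, assuming ~6 kbit/s per
# optimised voice channel (an assumption taken from the 5-8 kbit/s range cited below).

idle_load_kbps = 3
voice_channel_kbps = 6                  # assumed mid-point of the 5-8 kbit/s range
seconds_per_day = 24 * 3600

idle_kbits_per_day = idle_load_kbps * seconds_per_day            # 259,200 kbit/day
equivalent_call_hours = idle_kbits_per_day / voice_channel_kbps / 3600
minutes_per_month = equivalent_call_hours * 60 * 30

print(f"idle load carries {idle_kbits_per_day:,} kbit per day")
print(f"equivalent to a {equivalent_call_hours:.0f}-hour call every day")
print(f"or roughly {minutes_per_month:,.0f} voice minutes per month")
```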


Based on the above, the ideal solution for small remote communities is a base station with sophisticated link optimisation for long-distance traffic and local connectivity for local calls, which eliminates the idle load when no revenue-generating traffic is ongoing. The base station should be easy to deploy and maintain, require minimal space and work under generous environmental specifications.


We believe that Altobridge's Remote Community solution meets the above criteria. Not only does it deliver cost-effective wireless services to communities with 50 to 500 subscribers in remote or hard to reach areas, it also provides operators with the opportunity to switch locally in any network scenario, e.g. downtown Paris. The technology reduces the backhaul cost to levels below that of sites in the macro network.


The company has addressed the provision of wireless services to communities in the most hard-to-reach areas of the globe. Focusing on the use of legacy satellite technology for backhaul, Altobridge's Split BSC made GSM on commercial jets an attractive proposition for AeroMobile, with Emirates being the first airline to launch the service commercially. That same technology made GSM on merchant ships with as few as 21 crew members a commercially interesting opportunity for Blue Ocean Wireless, with the key to success in both those cases being the ability to minimise backhaul bandwidth utilisation, using existing low bandwidth satellite channels. And now, the same technology is being used to make the delivery of mobile communications to remote communities an interesting proposition for MNOs.


The core of the Remote Community solution is the Split BSC. In simple terms, the part of the BSC handling Radio Resource Management and all other communication with the BTS and the mobiles has been moved out to the BTS site, while the part handling communication with the MSC/Media Gateway is kept centrally. This split allows the BSC to manage the BTS and mobiles without any signaling going back and forth over the backhaul link. Similarly, the MSC ‘sees' all the traffic channels without having actual contact with them. Once a call is set up, the two halves of the Split BSC establish a connection over the satellite link and the call can be completed.


As the Split BSC resides on either side of the satellite backhaul link, it is in full control of how and what is transferred over the link. In addition to just optimising the payload by removing padding and silent frames, the Split BSC transcodes the signal. The transcoded signal requires 5-8 kbit/s per active voice channel, compared to 17-25 kbit/s required by competing solutions. In many areas of the world, the difference represents 4-5 US cents per minute, effectively the entire profit margin. Add to that the idle load and it becomes clear that the Remote Community solution is ideal for addressing small remote communities, profitably.
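
Working backwards from the figures just given - an illustrative calculation only, with the satellite tariff derived from the article's own numbers rather than any quoted price - the bandwidth saving translates into cost per minute roughly as follows:

```python
# Illustrative check of the per-minute saving quoted above. The implied satellite
# tariff is derived from the article's own figures and is an assumption, not a price list.

optimised_kbps = (5 + 8) / 2          # Split BSC: 5-8 kbit/s per active voice channel
competing_kbps = (17 + 25) / 2        # competing solutions: 17-25 kbit/s

saved_kbit_per_min = (competing_kbps - optimised_kbps) * 60
saved_mbyte_per_min = saved_kbit_per_min / 8 / 1024

# The article equates this saving with 4-5 US cents per minute, which implies a
# satellite transport price in the region of:
implied_usd_per_mbyte = 0.045 / saved_mbyte_per_min

print(f"bandwidth saved: {saved_kbit_per_min:.0f} kbit per minute (~{saved_mbyte_per_min:.3f} MB)")
print(f"implied satellite price: about ${implied_usd_per_mbyte:.2f} per MB")
```
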
To further improve both the business case and the user experience, the Remote Community solution includes the Local Connectivity solution, which handles all local calls locally. With 50 per cent local calls, the backhaul cost drops a further 50 per cent. The company's patented Local Connectivity functionality is unique in that call control, charging, supplementary service management and O&M remain under the control of the MSC. This transparency ensures that the investment in optimising the central core network and service layer is protected, and no expensive and complex distributed architecture is introduced. As Local Connectivity eliminates double satellite hops for local calls, users will also experience improved network quality, which usually leads to longer call holding times.


For larger subscriber groups beyond the populations addressed by the Remote Community solution, the Local Connectivity feature is also available through Ericsson on their entire range of GSM base stations (see boxed text).


Using such a solution, operators have an attractive opportunity to provide communication to an untapped market - remote communities - that was previously considered too small or too costly to address. Universal service obligation now becomes a profit opportunity rather than a regulatory liability, and first movers will be able to lock in new subscriber groups if they recognize the technology now exists to do so.
The digital divide has shrunk considerably!


Janne Hazell is Altobridge General Manager, Remote Community Communications

To enable a successful Web 2.0, we also need Internet 2.0, says Richard Lowe

Ten years ago the Internet had capacity to spare and the applications that it supported consumed relatively few network resources. The advent of Web 2.0 has changed all of that. We have seen, and continue to see, a proliferation of complex applications demanding ever-more bandwidth. This has led many to wonder exactly who will foot the bill for the necessary upgrades to the network.


Web 2.0 applications are very much a success story - services like Wikipedia, Facebook, YouTube and the wide variety of text and video blogs all seem to defy demographic boundaries and continue to experience stratospheric growth in users. However, there are genuine fears that the demands these place on bandwidth resources may ultimately overload the network and cause Internet meltdown. To enable a successful Web 2.0, we also need Internet 2.0.


The problem is that the Internet was never designed to deal with the increasing demands that are being placed on it. In this respect it bears a close resemblance to modern motorway infrastructure. In the past no one predicted the number of cars that would eventually be on our roads; as a result, commuters are faced with chronic gridlock, especially during rush hour. Similarly, no one could have predicted the popularity of next-generation Internet applications. Interactive and video-rich Web 2.0 applications demand a great deal of bandwidth, which consequently clogs the networks carrying the information and degrades overall performance. Furthermore, the IP traffic generated by Web 2.0 applications does not follow the one-to-many, top-down approach of most original web applications. As one senior network architect of my acquaintance frequently explains, "traffic can come from strange directions".


Though it may sound counterintuitive, the solution is not as simple as merely building larger networks; in much the same way as on a motorway, extra capacity is soon consumed. It is a vicious circle: the more capacity that is provisioned, the more innovative bandwidth-hungry applications are developed to exploit it. Of course, new capacity has to be built to ensure continued innovation on the web, but what we also need to do is enable the intelligent management of network resources to support valuable services.


This is not about creating a two-tier Internet. Some people may express the value they see through increased payments to their service provider. Others may choose to prioritise their access-line resources towards gaming at no extra cost, and sponsors or advertisers may choose to fund incremental network capacity for services like IPTV. We have learned over the last few years not to second-guess what business models might emerge. After all, who would have believed a few years ago that one of the world's most valuable companies would offer a free consumer search service funded by contextual advertising?


Internet operators already have to contend with the challenge of managing network resources for multi-play packages that include ‘walled-garden' services such as VoIP and IPTV. However, the problem is exacerbated when ‘over-the-top', web-based services - such as YouTube or the BBC's iPlayer - seek to exploit greater network capacity for substitute services without bearing the network costs. Real-time services such as video can be severely impaired, or fail altogether, if insufficient network resources are available to them.


This places operators in a difficult position. They cannot tolerate a decline in the overall quality of their network, but nor can they turn their back on third party services which drive broadband adoption and are highly valued by customers. In the same way as content providers need to monetise their content, network providers need to monetise their networks.


Operators can no longer expect to make sufficient profits through the sale of voice lines. Nor is Internet service delivery the salvation it once appeared to be. Most of the important global markets are open to alternative network providers, and this has resulted in fierce competition - eroding prices and eating into the profitability of broadband provision. Many telcos are facing the possibility of becoming little more than a bit pipe for the provision of over-the-top services by third-party suppliers.

  
This can only be countered by the provision of compelling value-added services that exploit the unique capabilities offered by ownership of the access and aggregation network, while ensuring that these premium services enjoy the quality of service required to differentiate them from over-the-top providers. Clearly there needs to be a proven economic case for the allocation of network resources to these services, rather than allowing all services to scramble for resources in an ‘unallocated pipe'.


If telcos are to retain autonomy over the services they provide, they need to move into the sphere of rich media and web application provision, and leverage the best asset they have to compete effectively - the network itself. In order to do so, however, they will need fresh network management tools and a new business model.


The traditional approach to managing network resources for a particular service is partitioning. This reserves bandwidth specifically for VoIP, IPTV and so on, and only admits the number of concurrent sessions that the reserved resource can support in the access network - or what my friend, the network architect, calls "sterilizing bandwidth". Service quality is therefore only guaranteed when the network is ‘over-provisioned'. This is an untenable and inherently risky approach in the Web 2.0 era.
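
To make the waste concrete, here is a minimal illustrative sketch (in Python, with hypothetical class names and figures not drawn from any vendor product) of why statically partitioned capacity strands bandwidth: a session can be rejected even though the link as a whole has plenty of room.

# Illustrative sketch only: static partitioning vs. a shared pool.
# All class names and figures are hypothetical.

class PartitionedLink:
    def __init__(self, partitions):
        # partitions: service -> capacity reserved for it, in Mbps
        self.capacity = dict(partitions)
        self.used = {service: 0 for service in partitions}

    def admit(self, service, mbps):
        # A session may only draw on its own partition, even if the
        # other partitions are sitting idle ("sterilized" bandwidth).
        if self.used[service] + mbps > self.capacity[service]:
            return False
        self.used[service] += mbps
        return True

class SharedLink:
    def __init__(self, total_mbps):
        self.capacity = total_mbps
        self.used = 0

    def admit(self, service, mbps):
        # Any service may use any spare capacity on the link.
        if self.used + mbps > self.capacity:
            return False
        self.used += mbps
        return True

# A 100 Mbps link: 60 Mbps reserved for IPTV, 40 Mbps for data.
partitioned = PartitionedLink({"iptv": 60, "data": 40})
shared = SharedLink(100)

for link in (partitioned, shared):
    link.admit("data", 25)               # light web traffic
    ok = link.admit("iptv", 70)          # HD video burst
    print(type(link).__name__, "admits the HD burst:", ok)

# The partitioned link rejects the burst (only 60 Mbps is reserved
# for IPTV) even though 75 Mbps of the link is actually idle; the
# shared link admits it.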


For one thing, partitioning is a backward-looking approach, because the basic goal of migrating to all-IP networks is to have a common shared resource that is service and application agnostic. Partitioning the network only results in higher capital and operational expenditure because it is an inherently wasteful process: in seeking to ensure quality, it leaves highly in-demand network assets ‘stranded' and idle.


In the early days of IP networks, when the dominant traffic was voice with very little video, over-provisioning and partitioning were still possible, albeit inefficient, because voice consumes far less bandwidth than video and its growth and peak traffic patterns are more predictable. Video traffic, by contrast, is very bandwidth hungry and subject to large peaks - a bit like a motorway changing from nearly empty, with good flow and speed, to overloaded with vehicles in less than a minute, causing endless delays and stoppages with no apparent reason or warning. Trying to over-provision and partition for such demands will be economically impossible for Service Providers.


Actual usage patterns may not match the capacity plan, leaving customers unable to access a service or application when, in fact, sufficient capacity exists in another partition or ‘silo'. Not only is this an inefficient way of managing current services, but every time a new service is launched a brand new capacity plan has to be drawn up alongside it. This leads to extended time-to-market, duplication of work and expense, and an inflexibility that is a disadvantage in competitive converged media markets.


A modern approach needs to be agile - bandwidth is not unlimited, and there is no justification for wasting resources. The solution lies in technologies that allow the carrier to treat the network as a holistic resource available to all applications.


The ETSI standards-based Resource and Admission Control Subsystem (RACS) permits available resources in the access network to be allocated dynamically rather than being pre-provisioned, thus ensuring that they are exploited in the most efficient way. Operax has enhanced the basic standards by proposing that the functionality operators require is "dynamic Resource and Admission Control" (dRAC). This brings dynamic topology awareness into the admission control and policy enforcement process - thus ensuring that services and sessions are guaranteed QoS on the basis of the resources that are actually available.
The functionality sits between the application layer and the network, a position from which it can act as the single point of contact through which applications request bandwidth - effectively isolating the service from the network resources. It is then able to enforce subscriber and service policies, allocating resources on a real-time, per-session basis and removing any need for applications to understand the underlying topology of the network.


In the same way, dynamic Resource and Admission Control is able to intelligently manage applications, services, subscribers and network resources according to the carrier's business policies. All the different points of bandwidth contention are identified and automatically checked before a session is set up. dRAC tracks the available bandwidth into a consumer's home and can ensure that a session is not set up if the necessary bandwidth is unavailable.
At present, applications are competing for bandwidth on a best-effort network. Automated management of bandwidth will not only ensure that service quality can be guaranteed for premium real-time services such as VoIP and IPTV, but can also ensure that over-the-top services have reasonably free access to resources. Quality can be guaranteed in the premium tiers of the network while still leaving room for innovation in web-based services.
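
As a rough illustration of the per-session check described above, the sketch below models a subscriber's path as a list of potential contention points and admits a session only if every one of them has headroom. The topology, class names and figures are hypothetical; this is not the ETSI RACS interfaces or the Operax product, just the general idea.

# Minimal sketch of per-session admission control across the
# contention points on a subscriber's path. Hypothetical model.

class Segment:
    # One potential point of bandwidth contention
    # (access line, aggregation link, and so on).
    def __init__(self, name, capacity_kbps):
        self.name = name
        self.capacity_kbps = capacity_kbps
        self.reserved_kbps = 0

    def headroom(self):
        return self.capacity_kbps - self.reserved_kbps

class AdmissionController:
    def __init__(self, paths):
        # paths: subscriber -> ordered list of Segments towards the core
        self.paths = paths

    def request(self, subscriber, service, kbps):
        # Admit the session only if every segment on the path can
        # carry it; otherwise reject before the session is set up.
        path = self.paths[subscriber]
        bottleneck = min(path, key=lambda s: s.headroom())
        if bottleneck.headroom() < kbps:
            return False, "blocked at " + bottleneck.name
        for segment in path:
            segment.reserved_kbps += kbps
        return True, service + " admitted at " + str(kbps) + " kbps"

# Hypothetical topology: a 16 Mbps access line behind a shared
# 100 Mbps aggregation link.
access = Segment("access-line-42", 16000)
aggregation = Segment("agg-link-7", 100000)
controller = AdmissionController({"sub-42": [access, aggregation]})

print(controller.request("sub-42", "hd-iptv", 8000))   # admitted
print(controller.request("sub-42", "hd-iptv", 8000))   # admitted
print(controller.request("sub-42", "voip", 100))       # blocked at the access line

The same check could equally refuse a session because a shared aggregation link, rather than the subscriber's own access line, is the bottleneck - which is exactly the topology awareness the dRAC approach emphasises.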


More than merely saving operational expenditure by providing the most efficient technical support for services, this method of automated management may also allow operators to open new revenue streams and pursue new business opportunities. By treating bandwidth as a commodity that can be allocated dynamically, quality of service can itself become a monetisation strategy. For example, if an individual customer wishes to subscribe to a ‘gold' standard of quality for a service, such as high-definition (HD) IPTV, the RACS can monitor the capacity and automatically inform the customer of the available levels of quality. If there is only capacity for a ‘bronze' standard-definition (SD) class of service, the customer can be alerted before payment and charged appropriately if they choose to proceed. Alternatively, they can be offered a discount and priority if they prefer to access the ‘gold' session through a network digital video recorder (DVR) at a later time.
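
The decision flow in that example might look something like the short sketch below. The tier names, bitrates and the shape of the capacity check are assumptions made purely for illustration, not a description of the RACS interfaces or any billing system.

# Illustrative sketch of the tiered-offer flow described above.
# Tier names, bitrates and the capacity figure are assumptions.

HD_KBPS = 8000    # 'gold' high-definition stream (assumed rate)
SD_KBPS = 2500    # 'bronze' standard-definition stream (assumed rate)

def offer_iptv_session(available_kbps):
    # Decide what to offer the customer before any charge is made,
    # based on the capacity the admission control reports as free.
    if available_kbps >= HD_KBPS:
        return "gold (HD) available - confirm and charge the premium rate"
    if available_kbps >= SD_KBPS:
        return ("only bronze (SD) available now - charge the standard rate, "
                "or offer a discounted 'gold' session via a network DVR later")
    return "no live capacity - offer network DVR playback at a later time"

print(offer_iptv_session(12000))  # plenty of headroom -> HD offer
print(offer_iptv_session(4000))   # constrained -> SD offer with DVR option
print(offer_iptv_session(1000))   # congested -> DVR only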


There is plainly a middle ground to be found between the current Internet model, which allows free access to all services, and a controlled, tiered Internet. Operators rightly want to see a return on their investment in network technologies, but not at the expense of a competitive market. Personalisation is very much a buzzword of the Web 2.0 era, and rather than the unknown quantity of provisioning through network partitions, automated resource and admission control allows the operator to tailor service levels to individual subscribers - ensuring guaranteed quality for tiered services while still leaving capacity for innovation in over-the-top services.

Richard Lowe is CEO, Operax

The ‘battle for the home' is a key development currently taking place within the telecoms industry, as mobile operators, fixed operators and VoIP providers fight for what was once the sole territory of fixed operators. Although low-cost VoIP and increased coverage continue to be key benefits associated with fixed-mobile convergence, the focus has now shifted beyond voice, as mobile operators realise that by taking control of the connected home they can also open the doors to new revenue-generating services and applications.  Steve Shaw takes a look inside

Increasing usage of mobile data has led subscribers' homes to become the next telecoms battleground. With multiple providers struggling to increase share-of-voice and gain their cut of the available revenue, it has become a strategic imperative for mobile operators to own the home.


Analyst house Infonetics predicts that the FMC market worldwide will be worth $46.3 billion by 2010, so there is no doubt that the future will be a connected home. But the battle to own that space has only just begun, as operators across Europe develop and launch homezone services based on dual-mode handsets or femtocells. All eyes will be on the industry to see who rises to the challenge and how the market will develop.


Indoor voice and data usage represents one of the largest growth opportunities for mobile operators today. European operators are investing in ‘homezones' to attract new subscribers and increase customer loyalty. A Home Zone 2.0 (HZ2.0) service enables carriers to deliver mobile voice and data over the IP network, rather than the expensive outdoor macro network, when the consumer is within the home or office zone. This can have huge financial benefits for the carrier and also opens the door to high-bandwidth services, such as downloading/uploading pictures.


The first HZ2.0 services are already live across Europe, and include dual-mode GSM/WiFi offerings from Orange and TeliaSonera. And with leading mobile operators, including T-Mobile and Telefonica O2, announcing femtocell trials across the continent, we will soon see the launch of femtocell-based homezones.


Orange's Unik service is a good example of the potential of the HZ2.0 concept. In France, the service has been deployed since September 2006. It has delivered a 10 per cent increase in average revenue per user (ARPU); 15 per cent of subscribers who take the service are new to Orange mobile; and Unik subscribers churn three times less than standard Orange mobile subscribers.


Since HZ2.0 services usually offer low-cost or free calls from within the home, cost is an obvious benefit from the consumer's point of view - but one that VoIP providers and even fixed operators can also offer. With rising consumer demand for mobile data services, operators can make their HZ2.0 services work harder for them. By capitalising on this growing demand, operators can provide a compelling mobile data experience at home at a vastly reduced cost, forming the foundations for their ownership of the connected home.
In the home of the future all devices will be connected (TVs, DVRs, cameras, game consoles, etc.). This picture has long been discussed, but what has never been understood is quite how that will be enabled. For the mobile operator to create that network and truly own the home, it needs to go beyond local data offload and improved coverage, and make the mobile handset the central device in managing and maintaining the connected home.
With the rise of a new generation of mobile handsets designed not only for voice services, but also data, such as the iPhone and Blackberry, the handset is fast becoming the primary access mode for e-mails, basic web browsing and social networking.
The vision of a connected home is an important part of the strategy of key players in the telecoms market, including Orange and Apple. Orange for example, has recently announced the ‘Soft at Home' initiative, a joint venture with Sagem and Thomson that aims to facilitate the deployment and interoperability of digital equipment in the home.
Apple, a newcomer to the telecoms space, connects its AirPort WiFi router to its range of computers and laptops, which in turn access the iTunes service, synchronise with the Apple TV set-top box and work with the WiFi-enabled iPod touch. This vision consolidates around the iPhone - a central device that could bring the connected home together.
In the battle for the home, mobile operators will not only fight with fixed operators and VoIP providers, but also with other home-service providers (such as Virgin and Sky) and even device manufacturers. And with new players like Apple and Google entering the telecoms space, the battle has intensified further.


The key challenge facing mobile operators is to position the handset as the central device in the home, expanding the way consumers use and experience their phones and shifting the focus from voice to data services. The handset is without any doubt a key player in this game, as it needs to operate as the link between all the different devices that form the home.


More importantly, operators have a vital advantage over their rivals: they manage both the macro network and the ‘homezone', and so can provide a seamless experience across the two. For mobile operators, FMC becomes the vehicle that enables them to own and add value to the connected home.


By understanding customers' service expectations, operators can create a home network and deliver valuable, wanted services to a multitude of connected devices - thereby creating, owning and using the network within the home.
From a consumer's point of view, the convenience of the connected home is a natural next step, enabling the customer not only to access better coverage and lower-cost services, but also to personalise the handset and the services it controls according to his or her preferences and lifestyle.


We have already seen the shift towards data services, with consumers using handsets to access e-mail, download music and videos, browse the web and use social networks. In the future this trend will only intensify, with consumers further personalising their phones and the content they access.


The launch of HZ2.0 services in key European markets and the upcoming femtocell launches give mobile operators the opportunity to move beyond the proposition of low-cost voice. With new rivals entering the mobile space and service providers refining their strategies, the battle for the home has become fiercely competitive.


By taking ownership of the mobile handset within the home and delivering high-quality, unique services, mobile operators can begin to build the case for their control of the connected home. From there, because they understand consumers' service expectations, they can create, own and use the home network. By proving value through home service delivery to the mobile handset, operators can craft a connected home network and deliver valuable, wanted services to a multitude of connected devices, with the handset at the heart of this new connected environment.


The homezone vision enables carriers to stay one step ahead of the competition and develop a long-term connected home proposition that cements the mobile operator as the service provider of choice for in-building communications and puts the mobile phone firmly at the centre of the next-generation in-home network.

Steve Shaw is Associate Vice President Marketing, Kineto Wireless 

    
