Features

Mobile operators must tackle the transition to next-generation Ethernet if they are to successfully meet consumer demands for new services, argues Vinay Rathore

Mobile operators are in a period of fundamental transition, as consumers increasingly demand access to personal high-bandwidth services while on the move. The growth of 3G data services, mobile broadband and the availability of powerful new mobile devices such as the Apple iPhone are placing significant strain on mobile networks and operators are investing in additional capacity to support this bandwidth explosion.

However, the operational costs associated with traditional mobile backhaul (defined as the access portion of the network transporting traffic between the mobile base station and gateways to the packet network and voice switched network) are increasing faster than the revenue generated by new data services. Until recently, most network operators have added backhaul capacity primarily by leasing additional TDM-based E1 circuits at high cost. Worldwide, TDM backhaul accounts for 20 to 40 per cent of mobile network operating expense (OPEX). All this is untenable in a competitive market of shrinking average revenue per user (ARPU).

The challenge facing mobile operators is how to increase network bandwidth, both in terms of capacity and speed, while reducing the total cost of running the network and growing top line services revenue.

In the UK, for example, demand for mobile Internet services is increasing while the adoption of mobile broadband is heating up competition among fixed and mobile operators. Approximately one in eight UK consumers have either replaced their fixed-line Internet connection with a mobile alternative or chosen a mobile broadband service from the outset, according to research published by YouGov. Similarly, Ofcom's latest Communications Market Report (August 2008) found that two million people have already used mobile broadband via a dongle, 3G datacard or similar device, with monthly sales rising from 69,000 in February to 133,000 in June this year alone.

In addition, a recent report from Nielsen Mobile highlighted that the UK has the second highest number of active mobile Internet users in the world (12.9 per cent), behind only the US. Furthermore, the availability of powerful new mobile devices such as the 3G iPhone is driving mobile Internet usage by promising consumers easier access to mobile business, personal and entertainment services. For example, 37 per cent of iPhone users watch video, 82 per cent access the Internet, 17 per cent stream music and 76 per cent send email on their phones, according to Nielsen Mobile.

Operators have been quick to capitalise on people's desire to access and download Internet content on their mobile phones, launching a suite of new services. A recent independent survey (June 2008) conducted on behalf of Quickplay found that two in five people in the UK had already watched TV and video content on their mobile phone, with many now regularly using such services: 18 per cent of those who had tried a mobile TV and video service watch on a weekly basis.

However, while mobile operators are continuously upgrading their wireless networks to support this bandwidth explosion, it's not just about adding capacity. In fact, network quality is the most important driver of satisfaction with the mobile Internet, accounting for 79 per cent of overall satisfaction according to Nielsen Mobile.

These trends highlight the need for mobile operators to invest in next-generation network infrastructure to accommodate increasing bandwidth demand and deliver a high quality user-experience while maintaining profitability.

Current networks were designed to transport voice traffic over Time Division Multiplex (TDM) networks, with E1 circuits providing backhaul transport from the base station to the network controller, and over SONET/SDH networks for voice traffic from the controller to a Mobile Switching Center (MSC). With the advent of 2.5G mobile networks and the data services they enabled, the backhaul network has evolved to accommodate increased data traffic by including Frame Relay, ATM and IP, but in large part this data still travels over TDM circuits using ATM/IMA.

The current TDM-based backhaul network is being overwhelmed by the rapid increase in bandwidth demand that comes with the introduction of 3G (HSPA, EV-DO) and 4G (LTE, UMB and WiMAX) data services. For example, to ensure all users have access to mobile TV services, the network must scale to support thousands of multicast video streams, such as broadcast TV, as well as unicast streams such as YouTube.

As a result, mobile operators must reduce the cost-per-bit of data transport in the backhaul network while continuing to ensure voice quality, maintain carrier-grade Operations, Administration and Maintenance (OAM), and provide circuit-like resilience.

Carriers can take advantage of advanced Ethernet technology to address the challenges in the mobile backhaul network and reduce their dependency on E1 leased lines and expensive SDH infrastructure. Carrier Ethernet is far more economical: it lowers the cost-per-bit and operational expenses while offering carrier-class management and Quality of Service. High-performance Carrier Ethernet solutions offer larger pipes - essentially, more bandwidth - to meet end-user bandwidth requirements while lowering the overall infrastructure cost and ensuring high quality of service. Using Ethernet, operators can scale the network more easily to meet the demands of mobile services and applications without scaling costs. By building an Ethernet mobile backhaul, operators can burst up to the full speed of the link and use the same circuit to carry different types of traffic. While there is still some concern about carrying time-sensitive traffic such as voice over Ethernet, the industry is working to resolve this issue in multiple ways, including the development of TDM-over-Ethernet standards.

However, while advanced Ethernet technology provides a solution to backhaul problems, the backhaul network does not operate in isolation. To work efficiently and realise the Operational Expense (OPEX) advantages, the mobile core network must also evolve as the access network migrates to Ethernet. Building a Carrier Ethernet network infrastructure using standards defined by the Metro Ethernet Forum provides operators with a long-term, low-cost strategy to replace their existing SDH infrastructure while maintaining carrier-class reliability.
However, as carriers have invested heavily in their current mobile networks, they cannot afford simply to tear out and replace current legacy radio infrastructure. It is crucial that their mobile backhaul and core network strategy still supports legacy traffic and services while allowing them to gradually transition to next-generation infrastructure that is more scalable and economical.

There is no ‘one size fits all' approach to building out an Ethernet backhaul network.  Each tower has different requirements based on available infrastructure, bandwidth requirements, and geography.  Most Ethernet backhaul networks will be a hybrid of fibre, microwave and copper.  In addition, it is likely that operators will lease portions of their network and own portions in order to balance CAPEX and OPEX budgets.  Additionally, operators will need to support TDM, ATM and Ethernet networks during the transition phase.  With all of these varied requirements, operators must seek out vendors that supply a comprehensive Ethernet portfolio that can be gradually applied as the network demands evolve.
Fixed telecom operators are already benefiting from the migration from TDM-centric to next generation Ethernet centric networks. Mobile operators must manage this aggressive transition to next-generation Ethernet to maximise the investment in existing mobile and network infrastructures while maintaining a quality of service that minimises subscriber churn.

Vinay Rathore is Director of Product Marketing, EMEA, Ciena
www.ciena.com

Francois Mazoudier argues that fixed-mobile convergence is not the be all and end all in the evolution of telecoms

The term Fixed Mobile Convergence (FMC) has captured the minds of businesses across the telecoms industry, with many players believing that its growing presence is causing a communications revolution. To date, the term has attracted a great deal of attention across the world, with anybody who is anybody in the telecoms space talking about FMC. However, in reality, is FMC all just ‘hype' and, if so, what are the alternative communications solutions out there?

By converging fixed and mobile communication, FMC provides a synergistic combination of technologies, enabling all-in-one communication systems that allow voice to switch between networks on an ad-hoc basis using a single mobile device. However, as these solutions start to impact the telecommunications ecosystem, do FMC players really believe it is the best answer for businesses that are looking to adopt the latest communications models? Or are the telecoms vendors and operators making a last ditch attempt to breathe life back into their ever-decreasing profit margins by stamping a larger footprint into the office environment in the name of FMC?

Until now, mobility was designed as an extension to the office-based hardware telephony system, perceived as a luxury that was too expensive to handle all business calls. With the universal adoption of mobile phones, it is now the fixed-to-mobile element that is complex and expensive, with calls to mobile phones, rather than to traditional fixed-line handsets, the main form of voice communication. So why not go all mobile?

In today's busy offices the majority of workers still have to juggle a desk-based phone and a mobile device, a situation that is not only inconvenient but also costly for businesses that support and cover the costs of both phones. As mobility increases, and more staff work outside the traditional office environment, employees are forced to give out and use multiple phone numbers. As FMC gives users a single dual-mode handset that can be used anywhere, at any time, it seems like the perfect solution to this issue. However, away from the hype, integrating mobile and fixed-line networks is a complex matter and there is a far simpler single-handset solution already on the market - the mobile phone itself!

It does not have to be expensive to introduce an all mobile telephony solution into a business environment: all the necessary hardware is already supplied and paid for, as employees already have mobile devices - no other hardware is needed. Calls made to the office number are handled exactly as they would be on an ordinary telephone system, directed to the user's mobile phone over a GSM network of choice, rather than over a complicated office fixed-line telephony network to a desk-based phone. This makes them reliable and cost effective, particularly compared to the FMC solution.

True mobility enables employees to be contacted on just one number whether they are in the office or not. All mobile solutions enable this unprecedented access without compromising on the features users require from their desk phone. This new level of access allows business workers to stay in contact with colleagues and customers with ease; they never miss an important call again.

The simple-to-use system can be provisioned and activated instantly via the web and takes ten minutes to set up. Put bluntly, an all mobile solution eradicates all the complexity of a fixed-line, and therefore an FMC solution, as there is no need for hardware, techies, long-term contracts and expensive upgrades. Handover between fixed and mobile is inherent - because it is all mobile.

All mobile phone solutions have the potential to dismiss the FMC 'hype' in the same way that network-enabled voicemail ended the office answering-machine hardware business when it was launched back in the 1990s. Back then each office had a tape recorder at reception, and this hardware was eventually superseded by network-enabled voicemail. Today there is an all mobile system that can supersede the office PBX, turning the business telephony industry (selling mostly hardware) into a service industry.

In addition, an all mobile system solves the problems found by all companies today when buying/changing phone systems: complexity, high upfront costs, hidden ongoing costs, high dependency on technical specialists, costly ongoing upgrades and, most importantly, expensive monthly bills as their employees are increasingly mobile and incoming calls are redirected to them at full mobile rates. 

Despite the fact that FMC has existed as a concept for over ten years, its penetration is likely to be as little as 8.8 per cent of the total business subscriber base by 2012 (according to Informa), and, when you look behind the complex scenes of FMC, it remains to be seen whether businesses will want to replace existing infrastructure. It is far easier to achieve mobility without such high-level infrastructure investment.

I am not saying that FMC won't take off on a large scale or even change the way we communicate. It probably will, but by using an all mobile phone solution we can change the very nature of office communication without the need for the fixed-line.

FMC is really about the move to mobile where everyone's phone is wireless. Therefore let's stop talking about FMC and instead talk about accelerating such a move.
Francois Mazoudier is CEO at GoHello
www.GoHello.com

We've been hearing a great deal about ‘converged', ‘21st Century' and ‘next generation' networks, and what they will mean for business.  But what does it all actually mean in terms of technology? Peter Thompson takes a look

Next generation networks, while promising great strides for business, entail - in terms of technology - a radical shift from circuit switching, where fixed resources are allocated to a session (such as a telephone call) for as long as it lasts, regardless of whether they are actually being used at any particular moment, to packet switching, which allocates transmission resources for only as long as it takes to forward the next packet. This is more efficient, since most sources of packets only generate them occasionally (though sometimes in bursts). Equally important is the inherent flexibility of packet switching to cope with variations in demand, and hence to support a wide range of different applications and services. While several packet switching standards have been used, the clear favorite is IP (Internet Protocol), which is the basis of an ever-expanding web of enterprise and service provider networks that link together to form the Internet.

For the enterprise, shifting to a converged all-IP network translates into immediate productivity gains through integration of different functions - now available in easy-to-use Unified Communications packages - and medium-term cost savings from toll bypass and consolidation of network infrastructure.

If this all sounds a bit too good to be true, then it should. Allowing streams of packets from different applications to share resources in a free-for-all makes the network simple and cheap, but causes the service each application gets to be extremely variable. Whenever packets turn up faster than a network link can forward them, queues form (a condition known as congestion), causing packets to be delayed, and buffers may overflow, causing some packets to be lost. Traditional data applications such as email transfer don't mind this too much, but new real-time services such as IP telephony are very intolerant of such behavior.
The upshot of all this is that despite all its benefits, a converged packet network can't be considered a reliable substitute for a circuit-switched one without having something in place to ensure that it provides an appropriate quality of service (QoS) for all critical (and particularly real-time) applications. This means giving each application enough bandwidth, and keeping packet loss and end-to-end delay within bounds. Loss and delay can only get worse as a stream of packets crosses a network, so it makes sense to think in terms of allocating an end-to-end budget for these parameters across different network segments. Different parts of the network can then attempt to meet their budgets using a variety of methods.
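As a back-of-envelope way to express this budgeting idea (my notation, not taken from the article): for a path crossing network segments 1 to n, end-to-end delay is additive, and for small per-segment loss rates the end-to-end loss is approximately additive too, so the per-segment allowances simply have to sum to less than the application's budget:

D_{\mathrm{e2e}} = \sum_{i=1}^{n} D_i \le D_{\mathrm{budget}}, \qquad
p_{\mathrm{e2e}} = 1 - \prod_{i=1}^{n} (1 - p_i) \approx \sum_{i=1}^{n} p_i \le p_{\mathrm{budget}}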

A technique used in the high-bandwidth and high-connectivity core of a network is to control the routes that streams of packets take so as to avoid congestion almost entirely.
MPLS, with its traffic engineering extensions, is a standardized way to do this, but there are also proprietary mechanisms that some of the IXCs use that work well enough for them to carry billions of call minutes annually over converged IP networks using VoIP.
Move towards the edge of the network, however, and the number of alternate routes diminishes. The capacity of the individual links also goes down, making occasional congestion much harder to avoid. At the level of an individual WAN access link it becomes almost inevitable, so packets will often be queued up to cross it. Delivering QoS then becomes a matter of managing this queuing process to assure service for critical packet flows even when the link is saturated. This can be very tricky when several different applications are all ‘critical' but have wildly different throughput requirements and sensitivities to packet loss and delay. This problem is a major drag on the uptake of converged networks, causing them to be widely regarded as ‘complicated' and ‘difficult', when they ought to be making life easier.

One reason that the available QoS mechanisms don't help as much as they might is that they fail to take account of the intrinsic interaction between the different QoS parameters, or rather between the resource competitions that affect them. At a congestion point, packet streams compete for the outgoing link bandwidth, and since having more traffic than capacity is the definition of congestion in the first place, a lot of ‘QoS' implementations focus on managing this one competition, i.e. they provide a way to allocate bandwidth. However this isn't the only limited resource, as queued packets have to be stored somewhere, and any buffer can only hold so many; consequently there is another competition between the streams, for access to this buffer, which determines their packet loss rate.

Finally there is the limitation that, however fast the network link, it only sends one packet at a time, and so there is a third competition to be selected for transmission from the buffer, which determines queuing delay. These three competitions are interlinked; for example increasing the amount of buffering to reduce packet loss results in more packets being queued up to send and hence increases average delay. Even assuming a series of QoS mechanisms can be combined to manage all three of these competitions, the behind-the-scenes interactions between them will sabotage every attempt to deliver precise and reproducible QoS. In practice, the effect of this is that reasonable QoS can only be achieved by leaving substantial headroom, resulting in very inefficient use of the link, which can become a high price to pay for a solution that was supposed to save money!
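To make that interaction concrete, here is a toy simulation (my own illustration, not any vendor's mechanism): a single outgoing link modelled as a FIFO queue with a finite buffer, fed by random arrivals at roughly 98 per cent load. Sweeping the buffer size shows the trade-off described above - enlarging the buffer reduces packet loss but pushes up average queuing delay, because the three competitions cannot be tuned independently.

import random

def simulate(buffer_pkts, load=0.98, packets=200_000, service_us=120.0, seed=1):
    """Return (loss_ratio, mean_delay_ms) for a FIFO queue with a finite buffer."""
    rng = random.Random(seed)
    mean_gap = service_us / load            # mean inter-arrival time for the offered load
    clock = 0.0
    queue = []                              # departure times of packets currently queued
    lost = 0
    delays = []
    for _ in range(packets):
        clock += rng.expovariate(1.0 / mean_gap)   # random (Poisson) arrivals
        while queue and queue[0] <= clock:         # drain packets already transmitted
            queue.pop(0)
        if len(queue) >= buffer_pkts:              # buffer full: the loss competition
            lost += 1
            continue
        start = queue[-1] if queue else clock      # wait for the link: the delay competition
        queue.append(start + service_us)           # one packet at a time on the link
        delays.append(queue[-1] - clock)
    return lost / packets, sum(delays) / len(delays) / 1000.0

# service_us of 120 corresponds roughly to a 1500-byte packet on a 100 Mbit/s link
for buf in (10, 50, 200, 1000):
    loss, delay_ms = simulate(buf)
    print(f"buffer {buf:4d} pkts: loss {loss:6.2%}, mean delay {delay_ms:5.2f} ms")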

Predictable multi-service QoS
Fortunately a new generation of QoS solutions is emerging that manages the key resource competitions at a network contention point using a single, general mechanism rather than a handful of special-purpose ones. This not only controls the intrinsic interactions but even allows trade-offs between different packet streams, for example giving a voice stream lower delay and a control stream lower loss within the same overall bandwidth. By starting from a multi-service perspective, multiple critical applications can all be prioritized appropriately without any risk of one dominating and destroying the performance of the others. Embracing the inherently statistical nature of packet-based communications makes the resulting QoS both predictable, eliminating surprises when the network device is configured, and efficient: up to 90 per cent of a link's capacity can be used for packet streams requiring QoS, with the rest filled with best-effort traffic.

Applying this technology at severe contention points, such as the WAN access link, enables the biggest potential losses of QoS for critical applications to be controlled. This makes the QoS ‘budget' for the rest of the network achievable using established techniques such as route control and bandwidth over-provisioning.

For the business, this QoS technology is most useful for managing the WAN access link to the rest of the network. Combining it with session awareness, NAT/firewall/router functions and the ability to convert legacy applications such as conventional telephony to converged applications such as SIP VoIP produces a new class of device called a Multi-service Business Gateway (MSBG). Such a device can be managed either by a service provider delivering managed services or by a business buying simple connectivity services from a provider. It also offers a convenient point for QoS assurance functions, such as VoIP quality measurement, to ensure that SLAs are not breached. Overall it is an enabler for reliable, converged, packet-based services, allowing the full potential of 21st Century networks to be realized. We are only just beginning to see the changes this will bring to both business processes and everyday life.

Peter Thompson is Chief Scientist at U4EA Technologies and can be contacted at peter.thompson@u4eatech.com
www.u4eatech.com

Next generation access ("NGA") networks are slowly being rolled out in a range of European, American and Asia-Pacific countries. Even BT, which had until recently been reluctant to commit to NGA investments, has announced a five-year £1.5 billion plan to roll out fibre-based NGA infrastructure to replace parts of the legacy copper network. The target architecture chosen will initially deliver services at speeds of between 40 Mb/s (for existing lines) and 100 Mb/s (some new builds), with the potential for speeds of up to 1,000 Mb/s in the future. It is anticipated that the NGA will be rolled out to 10 million UK households by 2012. Similar plans have been announced in a number of other European countries.

Most operators, however, have stated, like BT, that such investment plans were conditional on a "supportive and enduring regulatory environment". What does this mean? What are the regulatory options available? To what extent will these impact the competitive dynamics of the market and end users?

The regulation of NGA investment raises a number of important regulatory issues. While not regulating wholesale access to these new infrastructures may have a positive impact on investment, it would also create significant barriers to entry for third party providers and may therefore result in a less competitive environment. Existing operators using unbundled copper loops may, for example, see their investments stranded as new fibre-based networks are rolled out. This could translate into less choice and higher prices for end users. At the other end of the spectrum, an aggressive regulatory regime mandating cost-based wholesale access for all may stifle investment and result in suboptimal outcomes for all stakeholders.

Operators, regulators and Governments worldwide are currently grappling with these questions. Regulatory measures that are being considered include:

Permanent and temporary forbearance - this entails placing no regulatory requirement on NGA operators, either for a period of time or permanently. The US approach is to forbear from regulation of fibre access and this seems to have stimulated NGA investment from operators such as Verizon. The German Government proposed a regulatory holiday for Deutsche Telekom along the same lines as the US, but was subsequently criticised by the EC and had to withdraw its proposals following the threat of legal action.

Cost based access - the regulator determines access prices for NGA based on the cost of providing access. A number of cost modelling approaches could be used for this including long run incremental cost ("LRIC") commonly used in the telecommunication sector or the Regulatory Asset Base ("RAB") approach often used for the regulation of utilities.
Retail minus - the regulator determines the access price on the basis of the retail price charged by the incumbent operator less the costs avoided by not having to retail the service. This can be thought of as a "no margin squeeze" rule, which ensures that the gap between the wholesale access charge the integrated company levies on competing service providers and its own retail broadband price is large enough to cover the retail costs it avoids (a simple worked illustration follows the list of approaches below).

Anchor product regulation - in addition to providing access to all wholesale products on a retail minus and equivalent basis, the wholesale operator also provides an "anchor" product, a service they already provide, which they then continue to supply at the same price. For example, if the current copper network is capable of providing a 5 Mb/s broadband service, then the anchor NGA product would also be a 5 Mb/s broadband service and the integrated company would be required to provide it at the current service price.
Geographic market regulation - this is a variant on the forbearance approach. The regulator may forbear from regulating access where there are competing NGA operators.
Each approach has its own advantages and disadvantages that will need to be considered carefully in the context of domestic market characteristics.
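As a purely hypothetical illustration of the retail minus rule (the figures below are invented for illustration and are not taken from the article or from any regulator): if the incumbent's retail broadband price is P_retail and the retail costs it avoids at the wholesale level (sales, billing, customer care) are C_avoided, then the regulated wholesale price is capped at

P_{\mathrm{wholesale}} = P_{\mathrm{retail}} - C_{\mathrm{avoided}}, \qquad \text{e.g. } \pounds 20 - \pounds 6 = \pounds 14 \text{ per line per month.}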

Another major difficulty, from a regulatory viewpoint, is the specification of the NGA products that should be regulated. Traditional remedies such as the leasing of the copper pair to third parties (a process known as "unbundling") are often difficult to implement - for both technical and economic reasons - with fibre networks. Regulators may therefore want to regulate other products, such as access to unused fibre, or even ducts, to ensure that competition is not harmed.

The challenge for regulators will be to develop a regulatory approach that provides incentives for efficient and timely investment in NGA as well as regulatory visibility for stakeholders.

Benoit Reillier is a Director and European head of the telecommunications and media practice of global economics advisory firm LECG.  The views expressed in this column are his own.
breillier@lecg.com

Could VPLS offer the answer to the seemingly inevitable future bandwidth crunch?  Chris Werpy explores the options

With any communications network, the most common demand from multinational enterprises is for a reliable, secure and cost-effective communication channel between their globally dispersed offices, which requires guaranteed end-to-end bandwidth performance. However, bandwidth is under growing pressure from the increasing popularity of multimedia communications and converged voice, video and data applications, such as VoIP and video conferencing. As a result, the threat of traffic bottlenecks occurring between LANs is looming and corporations are looking for safe and guaranteed LAN to LAN connectivity that is scalable to meet whatever future bandwidth they may require.


Although carriers and service providers have been offering VPN services based on traditional TDM, Frame Relay, and ATM for some time now, the cost of operating separate networks to provide these services, coupled with the greater bandwidth consumption pressures, is forcing them to move to more cost-effective technologies: namely IP and MPLS.
Enter global virtual private LAN service (VPLS) into the networking spotlight. VPLS is a multipoint-to-multipoint Ethernet-based transport service that allows businesses to securely extend their LAN across the entire WAN. VPLS benefits from the scalability and reliability of an MPLS core, with no legacy Frame Relay or ATM networks to integrate, and access to the existing network infrastructure and equipment. It scales well to national or international domains while preserving quality of service (QoS) guarantees, with the added privacy and reliability of a Layer 2, carrier-class service.


The question is: are there any alternative solutions available that can compete with VPLS, such as Private IP, point-to-point solutions (such as Virtual Leased Line), Ethernet-in-Ethernet, L2TP and Border Gateway Protocol (BGP)/MPLS VPNs? The simple answer is "No". Simplicity and transparency are the name of the game for VPLS.


VPLS lets customers maintain control of their networks while allowing them to order bandwidth increments on the fly for multiple sites, instead of being constrained by traditional legacy services. Configuration is also very straightforward - only the peer PE routers for a VPLS instance need to be specified. VPLS uses edge routers that can learn, bridge and replicate on a per-VPN basis. These routers can be connected by a full mesh of tunnels, enabling any-to-any connectivity. Customers can use either routers or switches with a VPLS solution, as opposed to Private IP. VPLS always offers an Ethernet port handoff (customer demarcation) between the carrier and the customer router or a simple LAN switch, allowing higher-bandwidth service at a lower cost of deployment. Unlike IP VPN, where the customer hand-off can range from Ethernet to Frame Relay or IP over TDM, with VPLS the customer hand-off to the WAN is always Ethernet. VPLS is also access technology-agnostic. The list of advantages is substantial.


BGP/MPLS VPNs (also known as 2547bis VPNs), on the other hand, require in-depth knowledge of routing protocols. As the number of instances increases, service provisioning systems are often recommended in both cases to ease the burden on the administrator, particularly for Layer 3 VPNs. Layer 2 VPNs also enjoy a clear separation between the customer's network and the provider's network - a fact that has contributed heavily to their increasing popularity. Each customer is still free to run any routing protocol it chooses, and that choice is transparent to the provider. Layer 3 VPNs are geared towards the transport of IP traffic only. Although IP is nearly ubiquitous, there could be niche applications that require IPX, AppleTalk or other non-IP protocols. VPLS solutions support both IP and non-IP traffic. One significant security and performance advantage is that there is no IP interaction at the connection between the provider edge and the customer's devices.


Another differentiator is that VPLS offers greater flexibility and cost reductions, by putting the customer in control of the network and removing the need for equipment upgrades. End users have the flexibility to allocate different bandwidths at different sites, ranging from as little as 1 Mbps (for example at a low traffic-generating sales site) to Gigabit Ethernet (which could be needed for the company's headquarters and/or data centre). Furthermore, as customers increase the bandwidth, there is no need to buy new cards for the existing CPE. I estimate that customers with a 50-site network can save up to 20 per cent in networking costs by moving over to VPLS.
VPLS solutions also score highly in the areas of compatibility and scalability. They are transparent to higher layer protocols, so that any type of traffic can be transported and tunneled seamlessly. VPLS auto-discovery and service provisioning simplifies the addition of new sites, without requiring reconfiguration at existing sites.


The most effective VPLS offerings are delivered using Ethernet connectivity, in the form of VLANs. These VLANs can be provisioned across TDM connections (E1, T1, E3, T3, etc) when native Ethernet is not available.


Dan O'Connell, research director for Gartner is a staunch supporter of VPLS, stating recently that "VPLS is a major new growth area for Ethernet. Customers are already very familiar with Ethernet in their local area networks. Extending to the wide area network is a natural progression, especially for those business and government customers seeking a clear IP migration path to enable convergence of their multiple legacy networks."


So with VPLS displaying so many powerful capabilities, it would be hard to imagine any circumstances where VPLS would be less than optimal. However, there is one such application and it's called multicast. Unlike Ethernet networks, where there is native support for multicast traffic, VPLS requires the replication of such packets to each PE over each pseudo-wire in order for multicast packets to reach all PE routers in that VPLS instance. The problem is further exacerbated in metro networks, where ring-based physical topologies are often deployed. Clearly, this replication is expensive, wastes bandwidth and is applicable at best when multicast traffic is expected to be a small proportion of overall traffic. Alternative solutions that the industry is researching include the establishment of shared trees within the VPLS domain, but this research has a long way to go.
Despite this caveat, VPLS is gaining momentum. Maria Zeppetella, Senior Analyst in Business Communications Services at Frost & Sullivan, agrees with this trend, but makes the point that most carriers are not planning to shut down legacy networks, as they still obtain a steady, albeit shrinking, revenue stream from them. Sprint is one exception, however: it has announced that it will shut down its legacy networks in 2009 and will have fully migrated to IP by then.


As for what the future holds for VPLS, I believe it will become the biggest network solution adoption of 2008 and 2009 for globally dispersed enterprises. Service providers will look to enhance their offerings for the early adopters and will introduce more customer network control applications and features. End users are always looking to simplify their network connections, while optimising transport effectiveness and keeping costs low. For businesses that are globally distributed and want to extend the benefits of the lower costs and simplicity of Ethernet technology throughout their entire network, global VPLS based on IP/MPLS is the solution of choice.

Chris Werpy is Director of Sales Engineering at Masergy, and can be contacted via tel: +1 (866) 588-5885; e-mail: chris.werpy@masergy.com
www.masergy.com

It's no secret that voice traffic is still expensive to carry in many regions of the world. And even though the related transmission costs are decreasing with the introduction of network entities such as 2G-3G media gateways, many operators are still struggling to deliver consistent OPEX reductions while evolving their networks to support ever-increasing traffic levels that now consist of voice, data and video content.

But even while multimedia traffic continues to grow, the fact is that voice services are still the core revenue stream for the majority of operators worldwide. Whether deploying cellular or satellite infrastructures or some combination of both, operators must optimize voice trunks in order to offer price-competitive services including pre-paid calling cards, private business lines and call centers.  And, with the telecom bubble of the late 1990s long gone, brute-force bandwidth provisioning is no longer a viable option. Now more than ever operators must plan the evolution of the transmission network pragmatically and look at enhancements that are available at low cost and deliver immediate ROI.

Want to know a secret?

Adding bandwidth capacity is not the only way to relieve network congestion or increase service capacity. Used in telephony networks for many years, DCME (Digital Circuit Multiplication Equipment) solutions have earned a solid reputation for providing advanced PCM (Pulse Code Modulation) voice compression over transmission media such as satellite and microwave links.

Driven by standardization advancements in the ITU-T, DCME technologies have continued to evolve and now achieve impressive compression ratios, allowing operators to provide extra bandwidth without provisioning additional capacity. Adopted and valued by thousands of operators worldwide, DCME optimization and compression technologies offer time-tested, field-proven results that drive more bandwidth from existing assets while sustaining - or in some cases even improving - voice quality in networks where media gateways are deployed, resulting in substantial OPEX savings and improved profitability.

The secret to cost savings without sacrificing voice quality

Many operators are still carrying voice across plain PCM [G.711] 64 kbit/s channels or Adaptive Differential Pulse Code Modulation (ADPCM) [G.726] 32 kbit/s channels over satellite links (even though the link costs are substantial), simply because they believe more aggressive voice codecs will cause a degradation in voice quality. But advances in today's DCME technologies have resulted in codecs that offer up to 16:1 bandwidth reduction on voice trunks while preserving quality of service.

For example, consider a link consisting of 8 x E1s, carrying 240 voice channels and 8 x SS7 signaling channels. Assuming a conservative 35% silence ratio, today's DCME solutions will reduce the bandwidth required on the satellite link from 4,300 kbit/s to less than 1,000 kbit/s. This translates into substantial yearly savings, with a payback period of only a few months.

A similar scenario can be repeated for leased-line backhaul or congested PDH (E3) microwave links. In the above example, the required backhaul capacity would be reduced from 8 x E1s (or a single E3) to a single E1.
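As a rough sanity check on those figures - the article does not state the codec mix, so the inputs below are my assumptions - suppose the existing link carries the 240 channels at 16 kbit/s each plus the 8 SS7 links at 64 kbit/s, and that DCME applies a 16:1 reduction to the 64 kbit/s PCM equivalent of each voice channel and an 8:1 optimization to the SS7 links:

# Back-of-envelope check of the satellite-link example, under assumed inputs
# (illustrative only; the article does not give the exact codec mix).
VOICE_CHANNELS = 240
SS7_LINKS = 8

before_kbps = VOICE_CHANNELS * 16 + SS7_LINKS * 64            # assumed 16 kbit/s voice + 64 kbit/s SS7
after_kbps = VOICE_CHANNELS * 64 / 16 + SS7_LINKS * 64 / 8    # 16:1 on PCM-equivalent voice, 8:1 on SS7
print(f"before: ~{before_kbps} kbit/s, after: ~{after_kbps:.0f} kbit/s")
# Prints roughly 4,352 kbit/s before and 1,024 kbit/s after - the same order of
# magnitude as the 4,300 kbit/s and "less than 1,000 kbit/s" quoted in the text.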

In an Ater link configuration, voice traffic is carried between the BSC and the MSC in a compressed format (usually 16kbit/s per voice channel). But as traffic increases it typically becomes necessary to migrate voice traffic from the Ater link to an A link where it is not compressed, requiring at least four times more transmission capacity and significantly increasing network OPEX.

However, by equipping the A link with a DCME solution, transmission bandwidth requirements are reduced by up to 4:1, delivering significant OPEX savings and liberating existing bandwidth assets to support additional backhaul capacity for future growth and new services.

The secret to cost-effective, secure disaster recovery

Mobile networks are increasingly being incorporated into public disaster response plans, further emphasizing network availability as a critical component of any network planning. Network outages - which can account for 30% to 50% of all network faults and hundreds of thousands of dollars in lost revenue - are a particular concern for operators using third party leased lines or unprotected fiber links.

DCME technologies offer a cost-effective, reliable A/E link back-up solution that uses satellite backhaul on an as-needed basis to tightly control OPEX budgets without sacrificing reliability and security requirements.

DCME - the secret is out

Word is spreading that DCME solutions have come a long way from their humble beginnings as PCM voice compression solutions. Today's advanced solutions offer a vast range of interfaces as well as varied processing capacity, allowing connectivity in diverse environments - from E1 up to STM-1 for large trunks, or even IP/Ethernet connecting to an MPLS core over Fast Ethernet or Gigabit Ethernet interfaces (electrical or optical).

But while there is a vast array of solutions available, operators must research vendors carefully to ensure the solution supports key criteria such as bandwidth management, a crucial capability that helps preserve revenue continuity with carrier-grade voice quality during traffic congestion. Other criteria can include 16:1 bandwidth compression (20:1 for telephony), 8:1 SS7 optimization, high-quality mobile codecs, voice and data aggregation, backbone protocol independence, integrated traffic monitoring and versatile connection capability. The checklist might seem exhaustive, but it is only with this complete feature set that a DCME solution can deliver the bandwidth efficiency, OPEX savings, exceptional voice quality, and network reliability demanded by today's mobile operators.

What do mobile operators do now that many of their services are reaching saturation point?  How do they continue to develop new and innovative ways for people to communicate that are as universally embraced as voice and text? Allen Scott contends that Mobile Instant Messaging will become the third key operator communications channel in the future

Mobile messaging services have never been so popular, with SMS still the hands-down winner. According to recent forecasts by Gartner (December 2007), 2.3 trillion messages will be sent across major markets worldwide this year. That is almost a 20 per cent increase from the 2007 total. Whilst the growth in traffic has been (and is predicted to remain) nothing short of phenomenal, for most operators the growth in volumes will not be anywhere near matched by a growth in revenues. Many are already seeing the flattening of messaging revenues.


Operator margins on messaging services are going to become ever slimmer as competition and market saturation bite deeper. Gartner estimates that the compound annual growth rate (CAGR) for SMS revenues will fall by almost 20 per cent over the next four years, to just 9.9 per cent. Rather than trading blows in a competitive game of one-upmanship over heavily discounted bundled text tariffs - or even giving text away for free - forward-thinking mobile operators are beginning to realise that they should be planning right now for a future where current messaging services are fully commoditised and the margins greatly reduced.


The challenge is to replace that revenue: operators need to develop new services that drive additional income, increase customer loyalty and open new revenue streams. The challenge has not changed in the last five years. Yet a host of services - from MMS to WAP and from mobile television to video calling - have failed to capture the imagination of the public.


Perhaps it is time to go back to basics.  So far, the most successful operator services have all been channels of communication rather than specific services.  What do I mean?  Voice and text are both simple to understand and use.  Most importantly, both provide channels with which to communicate.  No one tells users what to say or write in voice and text. What is provided is a simple communications channel for the user to use as he or she sees fit.
Operator services that have failed to capture users' imaginations have tended to provide very specific services rather than communications channels - or they have tried to recreate a PC experience in a way that is not suitable for the mobile. WAP, for example, is a specific service providing browsing on a mobile phone. However, it is slower than browsing on a PC, and the results are usually difficult to read and to navigate. Similarly, MMS is a specific service offering the user an opportunity to send photographs from mobile to mobile. Ultimately, volumes of photo messaging are likely to stall (probably in the next year or two) as mobile subscribers increasingly share photos through mobile communities and social network portals rather than sending them directly to one another.


So what are the new communications channels?  The most obvious one today is mobile Instant Messaging.  Mobile IM offers operators a new opportunity because it provides a comprehensive new communications channel and not a specific service. Though mobile IM's uptake today compared to SMS is still relatively small, operators are taking the service seriously.  Indeed, early indications are that mobile IM has a growth pattern that matches, and in some cases exceeds, that of SMS at the same time in its development.
IM has at times been unfairly seen as little more than a service extension to the PC. But the reality offers so much more, with operators increasingly recognising the opportunity to create a global partnership to truly take mobile to the Internet, and vice versa. The combination of messaging, presence, and conversations, plus the ability to attach links or pictures, provides an incredibly vibrant solution for the mobile environment. It's about much more than driving additional active subscribers.


IM is actually ‘text talk.' It is differentiated by two key points: presence and interactivity. Presence means subscribers can tell their connections what they want to do.  Are they busy? Are they free to communicate? With SMS, there is a restricted level of interactivity and uncertainty as to whether a text message recipient is ready or able to communicate. With Mobile IM, interactivity delivers a text conversation in its truest form - and offers an unmatched user experience.  IM allows that conversation to become more real-time, more intuitive, and more content-driven.


Instant messaging can, and already does, have numerous different forms. There are multiple applications (business, social, educational) and multiple channels (via ISP or mobile operator) for delivery.


A lot has been intimated about the risk to SMS revenues due to ‘cannibalisation' from mobile IM.  This has not been the experience within the operator community.    Turkcell has already announced that they have seen an increase in revenue from mobile IM users who are now using more voice and SMS.  The nature of the medium is conversational, chatty, and public.  Turkcell users were starting conversations in mobile IM and then breaking off to call or text one of the participants whilst the conversation continued.


So far, there have been more than 60 mobile IM launches across the globe.  Operators as diverse as Vodafone, Vimpelcom, Turkcell, 3 UK, TIM, and Tele2 have launched mobile IM services.  Some are market leaders and others are new entrants.  Some have launched ISP branded services, whilst others have launched their own branded services.  Some have even launched both!  The common bond amongst them is that they see benefit and revenues from the mobile IM opportunity.  NeuStar's operator customers alone have an end user base in excess of a third of a billion mobile subscribers.


Most important, though, is the potential mobile IM has to transform the delivery of information and data to mobile across the Internet. Browsing on a mobile is a slow and frustrating activity. Through the use of IM ‘information buddies', mobile IM services can provide users with access to information like news and sport headlines, train timetables, weather information, and so forth. This is relevant, useful information delivered in a way that is suitable for the mobile environment - short, sharp bursts of relevant information delivered quickly and in an easily digestible form.


Furthermore, the ability to share is a fundamental cornerstone of mobile IM.  Today it is sharing messages, but tomorrow it could be any number of things: bookmarks, photos, video, phone numbers, or location-based information.


To sustain growth over the next few years, operators are likely to look to IM and social networking applications to drive traffic, working either with popular established ISPs and social networking sites or creating new communities in which people can gather and communicate.


The growth of mobile IM has been impressive.  A number of operators went public with announcements in the latter half of 2007 and the early part of 2008 regarding service success so far.  These included 3 UK, who announced more than one billion mobile IMs sent in less than a year.  Vodafone Portugal announced an early success milestone of 100 million messages, and Vimpelcom commented favourably on the launch of its Beeline service.
The challenge for the industry now is to seize the moment. SMS became a ubiquitous service when it opened its doors to interoperability. Before this, users needed to know which network someone was on in order to message them. Mobile IM is even more complex, with interoperability issues between different ISPs as well as different operators. But consumers do not care about this. They simply want to communicate. The opportunity is there to be seized. If operators engage with the ISPs and with other operators, then there are real achievements to be secured. We, as an industry, must keep looking forward to the bigger picture. The danger is in emulating the growth of the Internet, which took ten years to deliver real value - and a lot of money was lost on the way.


This may all sound ambitious, but as long as operators embrace the vision and address the demands of interoperability, IM has the propensity to drive new revenues in a host of ways.  No one expected SMS to be the roaring success that it has been since it embraced interoperability.  This year, mobile operators will have 2.3 trillion reasons to be thankful it has been.  Mobile IM has as much, if not more, potential to succeed.  The technology is in place - it just needs to be well executed. 

Allen Scott is General Manager of NeuStar NGM www.neustar.biz/ngm/

IMS - the ultra next generation network architecture - was to become the great unifier for all our disparate access technologies and the cure-all for vendor interoperability issues. If done the wrong way, however, it can create an overly complex, difficult-to-manage architecture. This realisation has put a renewed focus on interoperability testing and network monitoring. Chad Hart explores the challenges and makes the case for a lifecycle approach to testing and monitoring of NGNs and IMS networks

Most operators want an NGN, but few have actually been deployed. One major reason is that making these networks operate reliably is challenging, and many initiatives never make it out of the lab. This is especially true for IP Multimedia Subsystems (IMS). New approaches to quality assurance are, however, changing this - which is where testing and monitoring comes into the picture.

Those responsible for looking after the quality of NGNs and IMS-based networks face many challenges. In the first instance, they are complex beasts. Almost by definition, they are made up of many devices, offer several different kinds of services, interface with many legacy networks, and have to interact with other providers' networks.

The IMS architecture is especially complicated. It comprises many different protocols, dozens of standardized functions, and even more interfaces than one can imagine. Because of this, coping with these seemingly infinite details creates another challenge for quality engineers to deal with.

Theoretically speaking, specifications should make designing and implementing advanced networks easier. The standards should provide a good guide for everyone to follow. But in reality, many standards - particularly those for IMS - are incomplete or have major pieces missing. Compounding this, there are many industry bodies developing different specifications that apply to NGN networks, including the IETF, ETSI TISPAN and 3GPP. Furthermore, these bodies frequently update their work, making it an arduous task to keep track of the new versions to adhere to.

Thirdly, engineers face the challenge of identifying and sourcing all the pieces in this jigsaw puzzle, and then making them work together. Because of the complex nature of the IMS architecture, and the many ambiguities in the standards, interoperability becomes a serious issue. Often the components from one vendor do not work with those of another without a significant amount of integration work.

Furthermore, because no one vendor does everything exceptionally well, operators are confronted with the challenge of dealing with each one's weaknesses, and must go through an often laborious vendor interoperability testing process - alternatively, operators could pick a vendor that has already interoperated with best-of-breed components. But even this is not without its challenges.

Finally, the biggest and single most important challenge is to ensure that your customers and subscribers remain happy. The end user does not care about how the services he uses are implemented - all he is after is a high-quality, reliable, secure and affordable service. So it becomes crucial for IMS implementers to hide the complexities from their users while providing consistent - or even higher - service quality levels. Meeting these challenges requires a more advanced approach to ensuring quality.

You'd think all these challenges make it almost impossible for any NGN - never mind an IMS network  - to make it to market. But operators are dealing with them. They are providing their customers with top-notch services, and we believe this is because they have realigned their quality assurance processes and invested time and money into continuously testing and monitoring their networks.

Before progressing from a concept to a deployed network offering a service, operators implement gruelling tests, and these take place in several distinct lifecycle stages. Characteristically, they start with the infrastructure vendors and transition to the operator, covering research and development, quality assurance, production, field trials, deployment, and ongoing maintenance. Within each phase, quality assurance must be applied.

Traditionally, each group has had its own employees, equipment, processes, and test plans assigned to this work, with little being shared between groups. However, because of the many challenges created by IMS, the testing process has to become more flexible than this traditional approach to managing quality allows - there is simply too much that can go wrong.

With too few quality engineers to meet today's needs, the lifecycle function needs to be adaptable. Increasingly, these separate groups are collaborating more in order to carry out thorough and implementation-specific testing. This can take the form of shared test methodologies, shared lab equipment, shared test metrics, shared test scripts, or even shared test engineers; what is critical is that no testing takes place in isolation.
When doing any job it's fundamental to try and use the right tools. Therefore, when managing the lifecycle approach to quality assurance, it's absolutely imperative your teams are armed with the best tools to help them get the job done - this is especially important if you're to ensure your quality assurance remains watertight.

Lifecycle testing and monitoring consists of several different elements; typically these include:
Subscriber simulation/call generation - the slowest and least sophisticated way to test a network is to make manual calls into the network and report on the result of each one. Although this works for simple tests, it is not the best approach to complex feature and scenario testing, as it takes hours to run and is difficult to manage. For example, it would require thousands of callers, each with dozens of phones, to even begin to reach the traffic levels needed for today's load tests. (A minimal scripted signalling probe is sketched after this list.)

Call generation tools can normally emulate the precise end-point devices from a signalling and media perspective as well as simulate end-user calling behaviours. These tools usually have specialised capabilities for feature testing, load testing and test automation, and often support advanced voice quality measurements and reporting capabilities that are not feasible with manual testing.

Infrastructure emulation - legacy networks have one main network switching component, known as the Class 5 switch/softswitch or MSC. The IMS model separates this into several dozen clear-cut software and component functions such as CSCFs, ASs, BGCFs and a whole slew of other acronyms. As a consequence, most of today's IMS core infrastructure devices demand a considerable amount of interaction with other infrastructure devices in order to function. The problem, unfortunately, is that fitting all these devices into a test lab is not practical or feasible. By using infrastructure emulation tools, quality assurance engineers can emulate specific infrastructure devices as well as the distinct vendor implementations of those devices. What's more, this helps operators save a significant amount of physical space, configuration time and capital equipment cost.

Network emulation - labs are typically set up in a single room, with all the devices connected to a single data switching infrastructure. Real-world IP networks are quite different: several switches and routers connect an array of different devices across hundreds of miles, via many differing network topologies. This causes packet losses and delays that you will not see in a lab environment. Network emulation products let you reproduce these wide-area conditions in the lab, and even allow you to introduce jitter, bit error rates and link outages.

Troubleshooting and diagnostics - a good test is one that identifies limitations and problems. But how can you tell whether an issue was caused by the network rather than by faulty testing? With troubleshooting and diagnostic tools, engineers can isolate and analyse each problem, and the information gathered is invaluable to development engineers when fixing the bugs discovered. Typical diagnostic tools for IMS networks offer low-level signalling message decoding and voice quality analysis.
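Voice quality analysis usually reduces low-level measurements to a single score. The sketch below uses a simplified E-model-style calculation (loosely in the spirit of ITU-T G.107) to turn one-way delay and packet loss into an R-factor and an approximate MOS. The constants, including the loss-robustness factor, are illustrative defaults and not a conformant implementation of any standard.

```python
def r_factor(one_way_delay_ms: float, loss_pct: float,
             ie: float = 0.0, bpl: float = 10.0) -> float:
    """Simplified E-model-style R-factor from delay and packet loss.

    ie  - codec impairment factor (0 assumed here for G.711)
    bpl - packet-loss robustness factor (illustrative, codec dependent)
    """
    id_ = 0.024 * one_way_delay_ms
    if one_way_delay_ms > 177.3:
        id_ += 0.11 * (one_way_delay_ms - 177.3)
    ie_eff = ie + (95.0 - ie) * loss_pct / (loss_pct + bpl)
    return 93.2 - id_ - ie_eff

def mos(r: float) -> float:
    """Map an R-factor onto the 1.0-4.5 MOS scale."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + 7.0e-6 * r * (r - 60.0) * (100.0 - r)

print(f"150 ms delay, 1% loss -> MOS ~ {mos(r_factor(150, 1.0)):.2f}")
```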

Service monitoring - because of the intricate make-up of advanced networks, it's foreseeable that problems can arise over time, even after thorough lab testing has taken place ahead of deployment. Therefore it's important to proactively monitor the quality of service the network is delivering after being rolled out to customers, and to swiftly respond to any problems that may arise.

To achieve this, most service providers deploy a monitoring system. This may be passive, simply listening to network traffic; active, making measurements against system-generated test calls; or a mixture of both. In each case it typically includes reporting metrics useful to network operations personnel, together with specialised diagnostic and analysis tools that help them find and resolve network problems.
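An active monitor, in outline, places a test call periodically, records the measured metrics and raises an alert when a threshold is crossed. The sketch below assumes a hypothetical `place_test_call` probe (here simulated with random values) and invented thresholds; it is meant only to show the shape of the loop.

```python
import random
import time

def place_test_call():
    """Hypothetical probe: returns (setup_ok, post_dial_delay_s, mos)."""
    return (random.random() > 0.01,
            random.uniform(0.5, 3.0),
            random.uniform(3.0, 4.4))

def monitor(interval_s: float = 60.0, cycles: int = 5,
            min_mos: float = 3.5, max_pdd_s: float = 2.5) -> None:
    for _ in range(cycles):
        ok, pdd, score = place_test_call()
        if not ok:
            print("ALERT: test call failed to set up")
        elif score < min_mos or pdd > max_pdd_s:
            print(f"ALERT: degraded quality (MOS={score:.2f}, PDD={pdd:.1f}s)")
        else:
            print(f"OK: MOS={score:.2f}, PDD={pdd:.1f}s")
        time.sleep(interval_s)

if __name__ == "__main__":
    monitor(interval_s=1.0)   # short interval for demonstration only
```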

The testing and monitoring requirements for today's NGNs and emerging IMS networks are substantially broader and deeper than the industry has ever seen. Creating a comprehensive test programme that spans the various layers, functions, applications and lifespan of such a network is demanding but achievable: with the advanced tools and techniques now available, you can tackle quality assurance from day one, and beyond.

So, regardless of whether you're only in the initial stages of designing your NGN or IMS network, design, testing and monitoring should be at the top of your priority list - if it's not, it could certainly spell doom for the entire project.

Chad Hart is Product Marketing Manager, Empirix

What next for WiMAX - and how can operators turn its promise into paying subscribers? Ihsen Fekih explores the options

The key to answering this question is in understanding some of the industry dynamics at play for WiMAX, a contender to the 4G throne. Spectrum is yet to be allocated in some countries although it is fair to assume it will be limited and therefore its usage needs to be maximised. Vendors and other stakeholders in the WiMAX infrastructure value chain are currently responding to RFPs and there is a great deal of network yet to be completed outside of North America, with European WiMAX subscribers estimated to represent 40 per cent of worldwide WiMAX subscribers by 2009. Then, there are the devices which will support WiMAX services and of course the services themselves. Does anyone really know what these services will be or what the experience will be like for the subscribers?


There's been a great deal of excitement around what WiMAX could deliver for subscribers - whether it's basic services in developing countries or more sophisticated interactive mobile broadband elsewhere. In fact, it's the subscribers who will decide the level of success of mobile WiMAX and other 4G technologies, and many of them will sign up with some pre-conceived ideas of what it will be like.


The pressure is now on the operators to deliver the network, device support and services that will prove compelling to users and accelerate subscriber acquisition. Yet, there are a number of challenges faced by operators in getting to this point, and it's overcoming these that provides much of the "what next?" for WiMAX. Not least of which is actually recognising subscribers when they access the network, and making sure that they get the services and experience to which they are entitled.


Crucial to the monetisation of the WiMAX network is, quite simply, attracting subscribers onto it. Those with experience of mobile broadband will generally be used to the services provided by 3G networks. WiMAX has the advantage of greater bandwidth in some instances, as well as wider coverage and the prospect of greater interactivity and roaming. But by the same token, WiMAX must deliver an experience at least comparable to 3G as subscribers hop on and off the network; otherwise the seamlessness between WiMAX, WiFi, 3G and cellular networks will be lost - and with it, many of the subscribers themselves.


This means there needs to be a seamless and intuitive handover between networks, even during the same data session. Currently, operators largely have no way of recognising existing subscribers when they move onto the WiMAX network without a laborious login procedure that neither differentiates existing from new users nor allows existing subscribers to easily 'carry' their service entitlements with them. This could potentially jeopardise not only their future subscriber base but their existing one as well.


To overcome this challenge, operators need to amalgamate subscriber information - service entitlements, access credentials and credits - and centralise it in a subscriber profile. This profile details what each subscriber is entitled to, allows the network to 'recognise' users and applies the policy to their mobile broadband experience.
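One way to picture such a centralised profile is as a single record combining identity, entitlements, credit and permitted access networks, consulted whenever the subscriber attaches. The sketch below is a minimal illustration under that assumption; the field names and example values are invented, not drawn from any vendor's schema.

```python
from dataclasses import dataclass, field

@dataclass
class SubscriberProfile:
    subscriber_id: str
    entitlements: set = field(default_factory=set)       # e.g. {"video", "email"}
    credit_remaining_mb: int = 0
    allowed_accesses: set = field(default_factory=set)   # e.g. {"3g", "wimax"}

def admit(profile: SubscriberProfile, access: str, service: str) -> bool:
    """Apply policy when a known subscriber attaches via a new access network."""
    return (access in profile.allowed_accesses
            and service in profile.entitlements
            and profile.credit_remaining_mb > 0)

alice = SubscriberProfile("alice", {"video", "email"}, 500, {"3g", "wimax"})
print(admit(alice, "wimax", "video"))    # True: entitlements carried across networks
print(admit(alice, "wimax", "gaming"))   # False: not part of her package
```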


More sophisticated uses of policy could see subscribers automatically pushed onto the WiMAX network when a service needs greater bandwidth and they are in range. It could equally be an opportunity for an operator to upsell a service it knows the subscriber enjoys in the 3G world, or a way of better targeting mobile advertising based on real-time subscriber data such as location and presence.


Because the subscriber policy is always changing to reflect the personal needs of each individual subscriber, it is also the key asset operators have for marketing new services to subscribers once they are on the network. Policy helps them build a relationship with the subscriber in which, in the near future, it will be possible to personalise services based on where the subscriber is, what device they are using and what their preferences are, in real time.


However, in order to do this, operators must first establish a strong pricing model that may, by necessity, need to buck the trend for flat fees, and that certainly calls for some creative thinking.


WiMAX subscribers are expected to benefit from a wide range of services from voice in remote areas to interactive visual services such as video in other regions. But, in instances where spectrum will be limited, this suggests there is a need to transition from traditional flat fee models to service models that are based on metered bandwidth.  There are several models for doing this based on the value of the service, the service tier, the amount of data used or available at any point in time, or in fact whether the service is subsidised by advertising.


Some analysts have extrapolated that this could be the end of the flat fee pricing model, particularly when new services are likely to be bandwidth intensive and have the potential to use up bandwidth very rapidly. When operators do the maths, they may find that flat fee pricing encourages subscribers to ‘eat all they can' - and they may be biting off more than operators are willing to let them chew.
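The difference between the two approaches can be made concrete with a simple comparison of a flat tariff against a basic tiered, metered one for a given monthly consumption. The tariff figures below are invented purely for illustration and do not reflect any operator's pricing.

```python
def flat_fee(usage_gb: float, monthly_fee: float = 20.0) -> float:
    """Flat-rate: revenue is independent of consumption."""
    return monthly_fee

def metered(usage_gb: float, included_gb: float = 3.0,
            base_fee: float = 10.0, per_gb: float = 4.0) -> float:
    """Tiered/metered: a small bundle plus a per-gigabyte charge beyond it."""
    overage = max(0.0, usage_gb - included_gb)
    return base_fee + overage * per_gb

for usage in (1, 3, 10, 30):
    print(f"{usage:>3} GB: flat {flat_fee(usage):.2f} vs metered {metered(usage):.2f}")
```

Light users cost less under the metered model, while heavy users pay in proportion to the bandwidth they consume - which is precisely the behaviour flat fees fail to price.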


So, it's likely that operators will need to create different service models based on subscriber policies that enable the operators to manage access to the network, ensure fair usage, but also open the network up to those early adopters who may well want video-on-demand or any of the other broadband services which have been touted and who will be willing to pay for them.


Subscribers will also need the next generation of WiMAX devices, which - I would expect - will have large screens, multiple air interfaces, sophisticated onboard graphics and audio processing, and batteries that allow more than a few minutes' viewing. These 4G WiMAX devices (including laptops) will need to be more flexible towards new services, especially given the unprecedented 'openness' of the WiMAX network. That openness is not only about the range of new services that could be developed; it is just as much about the way consumer demand is reshaping the device marketplace.


Operators will not be the sole stockists of WiMAX devices - devices will be available from retail shops and will therefore not be tied to a specific network or service. With device delivery now distinct from service delivery, the challenge for operators is to attract as many subscribers as possible and, more importantly, to make the network as easy to access and use as possible.


At the moment, there's no clear way to ensure that WiMAX devices are compatible with services, and that subscribers can be easily registered on the network and use those services without a hitch. Subscribers will tend to buy devices directly from a retailer rather than a network operator, which certainly reduces the financial pressure of subsidising equipment, but also means operators must be able to support Over-the-Air (OTA) device configuration, activation and provisioning. Offering subscriber provisioning as standard will enable operators to get the jump on their competitors.


Indeed, mobile devices - including phones, laptops and multimedia players with WiMAX modules - will not simply ‘work out of the box' as normal cellular devices do. Subscribers will need to choose for themselves who they subscribe with, what service package they buy and a number of other variables, reflecting that the future of telecoms services really is to meet consumer demand for any service, any time, anywhere.


Soon, OTA will drive the proliferation of open WiMAX networks and services by allowing the subscriber to activate their own subscription, receive firmware updates direct to the device that automatically supports new services or functionality, and select their own service features.


WiMAX already offers the openness that subscribers want, so operators need to be able to create subscriber policies that reflect the entitlements and changing demands of the subscriber. If they can master the network, the service and charging models, and devices, encapsulating these in policy, they will be in a prime position to begin the next phase of WiMAX and attract subscribers onto the network. By putting subscriber policy at the heart of their WiMAX service strategy, service providers can build a relationship that gives subscribers a personalised WiMAX experience, improving retention and driving greater uptake of 4G services.

Ihsen Fekih is EMEA Managing Director at Bridgewater Systems
www.bridgewatersystems.com

WiMAX is often regarded as an economically attractive technology in rural areas with no wired networks, but it is also being increasingly positioned as an alternative to DSL in metro areas within developed countries, says Howard Wilcox

The global opportunity for WiMAX 802.16e to deliver 'local loop' broadband connectivity will begin to take off over the 2009 to 2011 period, according to Fixed WiMAX: Opportunities for Last Mile Broadband Access 2008 - 2013, a new report from Juniper Research.   There are significant prospects for WiMAX as a DSL substitute technology, and the fixed WiMAX subscriber base is forecast to approach 50 million globally by 2013. 

  
Currently, there are over 250 802.16e WiMAX networks being trialled across the world, and a relatively small but rapidly growing number of commercial networks in service.   With a profusion of trial and network contract announcements over the last 12 to 18 months, WiMAX is now much more of a market threat to existing broadband access technologies such as DSL. 

     
An analysis of the primary target market focus of each of over 50 service providers which have announced commercial network contracts revealed that the stand-out market focus is offering an alternative to DSL.   The analysis illustrated that WiMAX is well suited to rapid deployment in many underserved areas. 


Developing countries in Eastern Europe, the Middle East and Africa, and Asia have shown most interest in WiMAX to date: many of these countries are part of the "underserved" world from a broadband perspective and are seeking pure Internet connectivity - fast. These countries can enjoy the technology "leapfrog" effect, jumping from no or limited connectivity to multimegabit, state-of-the-art broadband.

In Poland, for example, four carriers received nationwide 3.6 GHz WiMAX licences in 2006: Netia, cable television operator Multimedia Polska, Crowley and Exatel. Netia has contracted with Alvarion for a 20-city national network for business and residential users, while Crowley has contracted with Redline and Multimedia Polska with Airspan, but Exatel's network has been delayed. Multimedia Polska is targeting homes in Central and Eastern Poland that have previously been underserved with Internet access.

Russia, meanwhile, is a very fragmented market, but with a growing number of existing and aspiring broadband operators, all of which are focusing in the short to medium term on providing fixed services in underserved areas. In mid-May 2008 there was a significant development when Virgin Group entered Russia via the nationwide launch of its high-speed broadband WiMAX network - known as Virgin Connect and operated by Trivon; the service has been launched in 32 Russian regions including Moscow, St. Petersburg and the 20 largest cities.

Although WiMAX is often regarded as an economically attractive technology in rural areas with no wired networks, it is being increasingly positioned as an alternative to DSL in both rural and metro areas in developed countries.   Typically, WiMAX service providers are differentiating their services either by offering higher speeds than DSL, for example, for customers located at the distance limit from their local exchange, or by emphasising ease and speed of set-up for customers.   WiMAX will therefore both cater for broadband growth, and replace some existing DSL connections.   Service providers in a number of developing countries such as India are also targeting rural areas that have no wired networks at all, to provide basic telephony as well as more advanced services.   In these communities, WiMAX services will need to be priced at affordable levels.


The next most popular market focus is high-end business users - those typically spending $400 to $500 per month on broadband services - who require secure, very high-speed connections, have more demanding bandwidth needs such as hosting their own servers, but also need some element of nomadic working. Here again, WiMAX is proving attractive to subscribers who have used DSL until now.


The survey showed that the vast majority of service providers are concentrating on providing fixed broadband services to begin with, although many have the intention of developing mobile offerings once their networks and services are established.
However, there are a number of issues that WiMAX as an ecosystem needs to address, including:

  • Availability of suitable devices: WiMAX has great potential to integrate broadband connectivity in a wide range of consumer devices such as MP3 players, cameras and satellite navigation units as well as more traditional items such as laptops and dongles. The industry must ensure that reliable, certified devices are readily available so that customers are not held back or discouraged from subscribing due to supply issues. In early April 2008 the WiMAX Forum announced that the first eight Mobile 802.16e WiMAX products received the WiMAX Forum Certified Seal of Approval. There is an opportunity to drive and sustain market takeoff through a steady stream of innovative devices. The "push" to achieve market launch needs to be counterbalanced by ensuring the availability of components and volume of production to meet anticipated demand - at the right attractive price point.
  • Timely network construction: service providers need to complete build programmes on time to achieve sustainable WiMAX-based businesses, and they also need to translate the many, usually well-publicised, trials into commercial networks offering reliable and attractively packaged services. In future, users will take this as a given, and will become less tolerant of unreliability as broadband becomes inextricably linked with everyday life. The announcement by Sprint and Samsung in mid-May 2008 that WiMAX has met Sprint's commercial acceptance criteria - including overall performance, handoff performance and handoff delay - is a very timely boost for the technology: the eyes of the (WiMAX and mobile broadband) world are on developments there. Commercial launches in Baltimore and Washington DC are planned by Sprint for later in 2008. Further success will counteract the view in some parts of the industry that WiMAX is always coming tomorrow.
  • Brand identification and service differentiation: WiMAX service providers need to avoid entering the market on the basis of price: this will be a difficult battle to win against established DSL and mobile operators, especially in developed markets like Western Europe. These established (usually 3G) operators already have strong brand image and sophisticated marketing, and in some countries such as Ireland and Scandinavia are already enjoying success in the DSL substitution market.

With the plethora of broadband access technologies available - DSL, satellite, cable, HSPA, EVDO, WiMAX - not to mention future technologies such as LTE, people often ask whether one technology will win out over the rest. Juniper Research discussed this issue with around 30 executives from a variety of vendors, service providers and industry associations. Respondents were unanimous in viewing WiMAX as complementary and took a pragmatic approach: if there is a use for it, and the business case is sustainable, it will be deployed. Telecoms operators need to consider all alternatives when making an investment.
Most new technology launches face issues like these, and with the impetus that WiMAX now has in the marketplace, it is well-placed to grow.  Juniper's headline forecasts include:

  • The annual fixed WiMAX global market size will exceed 13m subscribers by 2013
  • The WiMAX device market - comprising CPE, chipsets, minicards, and USB dongles - will approach $6bn pa by 2013
  • The top 3 regions (Far East, N. America and W. Europe) will represent over 60 per cent of the $20bn p.a. global WiMAX service revenues by 2013.

In fact, WiMAX is forecast to substitute for nearly 50 million subscribers - some 12 per cent of the global DSL and mobile broadband subscriber base - by 2013.
Howard Wilcox is a Senior Analyst with Juniper Research in the UK.
www.juniperresearch.com

The delivery of voice services over next generation networks has never been a comfortable journey. Business reality takes companies up and down, twists carriers in the winds of market challenge and throws them into the heavy seas of competition. Konstantin Nikashov looks at the current market situation to explain how VoIP softswitches ensure the efficient performance of carriers' networks

VoIP adoption is in full swing worldwide. ABI Research predicts a seven-fold increase in the number of residential voice-over-IP subscribers between 2006 and 2013, while Frost & Sullivan forecasts enterprise VoIP services revenues to surge to $3.3 billion in 2010. VoIP is already taken for granted in Europe and the USA, while Asia and Latin America are registering incredible interest in the technology. With this promising rise in demand for VoIP services, telecom carriers should definitely keep their finger on the pulse of where the industry is heading.


Today's telecom landscape offers carriers numerous margin drivers that seem irresistibly tempting to anyone in the business. VoIP calls are but packets of data travelling across the Internet, and the technology can be called virtual since it is not tied to physical locations or devices. Carriers rent out their VoIP capabilities to gain higher revenues and traffic volumes. Virtualisation makes it easy to launch services to an unlimited number of subscribers and to add new phone lines wherever and whenever needed. VoIP also allows flexible control over the system (by the system administrator or by subscribers) and redundancy that helps service providers manage risk.


However, revenue-generating opportunities go hand in hand with industry challenges. Cable companies and Internet service providers competing for market share with conventional telcos take advantage of the fact that VoIP can easily be bundled with other services. Triple- and even quad-play propositions increase the load on networks, so carriers need to ensure that their switching platforms can handle huge volumes of traffic with top-level reliability.


Obviously, there are two major tasks that telcos striving to succeed should complete today. The most urgent is choosing the basic functionality that VoIP solutions deployed on their networks must deliver; at the same time, service providers should always look ahead in terms of developing their networks, so the software has to keep up with industry upgrades. The other task is to define the criteria for a solution's successful performance. Along with basic features, some vendors offer unique capabilities that generate additional business value and significantly raise carriers' revenues.


The galloping migration to NGN technologies that began at the turn of the 21st century prompted the intensive use of VoIP softswitches as core elements of carriers' networks. However, not all softswitches are created equal.


The primary softswitch functionality includes call routing, call control and signalling, and delivery of media services. Many carriers particularly value strong routing capabilities. The whole concept of the softswitch is advantageous because it decouples software from hardware: new services can be added and removed easily, and the deployed solution can be operated in a flexible manner. Compared with traditional circuit switches, softswitches deliver richer functionality, give carriers more freedom and save up to 15-20 per cent on capex and opex.


Analysts argue, nevertheless, that the move towards converged IP communications makes vendors emphasize the session border controller (SBC) functionality of VoIP softswitches.
SBCs are carrier-grade systems designed to facilitate the interconnection of disparate IP networks. Carriers deploy softswitches with session border controller capabilities on the border between two adjacent networks to overcome protocol and codec conversion challenges. SBCs also handle NAT and firewall traversal, provide access control, topology hiding and lawful interception compliance, and ensure that only authorised calls are admitted across network borders. Session border controllers give a competitive edge to service providers searching for ways to easily combine calls and services from multi-vendor networks.
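In outline, two of those functions - call admission control and topology hiding - come down to deciding whether a new session may cross the border and rewriting any headers that would expose internal addresses. The sketch below is a deliberately simplified, hypothetical illustration of that idea, not a description of any real SBC; all addresses and limits are invented.

```python
MAX_CONCURRENT_SESSIONS = 1000
active_sessions = set()
TRUSTED_PEERS = {"198.51.100.10", "203.0.113.5"}   # example peer addresses

def admit_session(call_id: str, source_ip: str) -> bool:
    """Admission control: only authorised peers, and only up to capacity."""
    if source_ip not in TRUSTED_PEERS:
        return False
    if len(active_sessions) >= MAX_CONCURRENT_SESSIONS:
        return False
    active_sessions.add(call_id)
    return True

def hide_topology(sip_headers: dict, border_ip: str = "192.0.2.1") -> dict:
    """Topology hiding: replace internal addresses with the SBC's own border address."""
    hidden = dict(sip_headers)
    for name in ("Via", "Contact", "Record-Route"):
        if name in hidden:
            hidden[name] = f"<sip:{border_ip}>"
    return hidden
```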


Other VoIP softswitches on the market today are more sophisticated, and so is the ideology of their deployment. What used to be simply a router has evolved into a complex system of traffic transit management. Best-of-breed softswitches perform intelligent routing based on a variety of route-hunting criteria, and keep and regularly update all the information about the rates and tariffs of peering partners. On top of that, the operation and QoS analysis tools of industry-leading softswitches enable carriers to come up with competitive, customer-driven service offerings, make profitability forecasts and select the best partners. Such softswitches are enhanced with session border control functions and include elements for easy integration into the carrier's network, such as a real-time billing interface. Some software manufacturers add even more capabilities to their solutions - ENUM lookup, IPv4-to-IPv6 interworking and tools for interaction with B/OSS applications. These innovations contribute to the softswitch's viability in today's ever-changing VoIP environment. All-in-one solutions meet the requirements of carriers that must react promptly to network challenges and wish to run significant volumes of VoIP calls efficiently.
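The route-hunting idea can be pictured as scoring each peering partner's route against several criteria at once - rate, measured quality, spare capacity - rather than on price alone. The minimal sketch below assumes an invented scoring formula and made-up partner data; real softswitches weigh far more factors.

```python
routes = [
    # partner,   rate per min, answer-seizure ratio, free capacity (channels)
    ("CarrierA", 0.012,        0.55,                 120),
    ("CarrierB", 0.009,        0.38,                 300),
    ("CarrierC", 0.015,        0.62,                 10),
]

def score(rate: float, asr: float, free_channels: int) -> float:
    """Blend cost and quality: higher is better. Weights are purely illustrative."""
    if free_channels <= 0:
        return float("-inf")
    return 2.0 * asr - 100.0 * rate

def select_route(candidates):
    return max(candidates, key=lambda r: score(r[1], r[2], r[3]))

best = select_route(routes)
print(f"route call via {best[0]} at {best[1]:.3f}/min (ASR {best[2]:.0%})")
```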


Functionality of modern softswitches can be defined depending on the purposes of a particular deployment. Vendors usually focus on the routing or session border controller capabilities or design comprehensive intelligent traffic management systems. Each finds its niche in the current market situation.


Even if carriers are sure about the desired basic softswitch features they often need to evaluate how successful a particular solution will be when deployed on the network. Software products can always be customized to address the carrier's needs, but some capabilities are a must for any VoIP solution offered as a cost-efficient competitive softswitch.


The first and foremost capability of a good softswitch is reliability. As the focal point of a VoIP network, processing several million minutes of traffic per month, the softswitch has to guarantee business-critical dependability. Top-level fault tolerance can be ensured by a modular softswitch architecture: if one module fails, its functions are taken over by other modules according to the current workload. This mechanism lets carriers choose between various redundancy schemes and set up complete or partial backup scenarios.
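The redistribution mechanism described above might look, in outline, like the toy sketch below: a failed module's sessions are rehomed one by one onto the least-loaded surviving modules until their headroom runs out. It is a conceptual illustration only, not modelled on any particular product.

```python
class Module:
    def __init__(self, name: str, capacity: int):
        self.name, self.capacity, self.sessions, self.alive = name, capacity, 0, True

def fail_over(modules, failed):
    """Redistribute a failed module's sessions across the surviving modules."""
    failed.alive = False
    orphaned, failed.sessions = failed.sessions, 0
    survivors = [m for m in modules if m.alive]
    while orphaned and survivors:
        target = min(survivors, key=lambda m: m.sessions / m.capacity)
        if target.sessions >= target.capacity:
            break                      # no headroom left: remaining sessions are shed
        target.sessions += 1
        orphaned -= 1
    return orphaned                    # sessions that could not be rehomed

a, b, c = Module("A", 1000), Module("B", 1000), Module("C", 1000)
a.sessions, b.sessions, c.sessions = 600, 400, 300
print("dropped:", fail_over([a, b, c], a), [(m.name, m.sessions) for m in (b, c)])
```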


Advanced business logic embedded in the softswitch is another important criterion of successful performance. For instance, IP-based PBX solutions are more attractive to enterprise users than TDM-based systems: an IP platform is generally more service-oriented, so delivery of voice-to-email, fax-to-email, email-to-fax and other popular services is easier. Best-of-breed IP Centrex solutions offer 20 to 40 value-added services crucial for businesses.


Today's softswitches are often appraised by their ability to operate in an IMS environment. The IP Multimedia Subsystem is an access-independent platform for multimedia service delivery; it is based on IP technology but designed to take VoIP to an entirely new level. That does not mean carriers should look for solutions other than a softswitch, however: softswitches can perform the Call Session Control Function (CSCF) well within an IMS architecture. One of the most important requirements here is the ability to control quality of service and interact effectively with network devices, which makes the session border controller functionality of softswitches especially relevant.


Last but not least, when choosing a VoIP softswitch, consider the price-to-quality ratio. Open source solutions are free, but they do not guarantee the reliability, 24x7 professional support and other benefits offered by proven VoIP developers. At the same time, it is not reasonable to overpay for mere basic features under a widely promoted brand. Today's market for VoIP solutions is highly competitive, and mid-sized developers often help retail and wholesale carriers find the golden mean, supplying reasonably priced, full-featured softswitches with the capabilities carriers need most.
Modern VoIP softswitches have great potential to dramatically shorten carriers' path to the top and lay solid ground for further innovation. However, the choice of a robust VoIP solution is always defined by the carrier's needs and the softswitch's ability to meet certain criteria of satisfactory performance. Ultimately, carriers that take a thorough and thoughtful approach to equipment deployment always benefit from best-in-class VoIP softswitches.

Konstantin Nikashov is CEO of MERA Systems
www.mera-systems.com

In a world where Corporate Social Responsibility (CSR) continues to make its mark in the boardroom, companies are looking for ways in which carbon footprints can be reduced, and employees' time can be used more efficiently and productively.  Conferencing technology is seen as one way of achieving this.  Meetings can be set up within minutes, even if the people involved are spread across the world. But how can costs be kept under control? Aaron McCormack looks at some of the options

Conferencing has taken huge steps forward in recent years.  Calls are much easier and quicker to organise, for example. For a formal meeting, you may still want to plan ahead but if you want to gather a few people together for a quick impromptu discussion, that's just as easy. A growing number of people maintain virtual ‘meeting rooms' they can dial into whenever they like.


Conferencing has become a much richer experience as well. Using tools such as Microsoft Live Meeting with an audio conference, people can see and work on documents while they speak, as if gathered around one PC. You can use these conferencing services to make presentations as well. As you talk, your audience can see your slides or your product demonstration on their screen.


And if you still think these services are a poor substitute for actually being there, the next generation of video conferencing services made possible by equipment from Cisco and other suppliers should change your mind. By placing large screens and cameras carefully around a meeting table, they make it possible to look people in the eye. Facial expressions and gestures are as clearly visible as if participants were in the same room.
No wonder, then, that more and more people are adopting conferencing as a time- and money-saving alternative to face-to-face meetings.


Gartner has described conferencing as a ‘birthright' application for high-performance workplaces and it's easy to see why. Thanks to globalisation, partners, suppliers and colleagues can be spread across the world. But despite their distance, you need a close and effective working relationship with everyone in your circle. Yes - it's good to visit them from time to time. In between, though, you need an effective and efficient alternative - something more personal and interactive than email.


Conferencing is also good for the bottom line. Time that would have been spent travelling can be put to better use, and conference calls are much cheaper than plane tickets - even if you travel on budget airlines!


The problem for many companies, though, is that the costs of travel and conferencing fall in different areas or budgets. While one manager smiles at the savings, another winces as phone bills escalate, apparently out of control.


Fortunately, there is a win-win solution. Those who have already completed the introduction of IP telephony across their organisation and have connected their various premises through IP VPNs have the most to gain.


As with any distributed organisation, a great many phone calls - often, the majority - are internal. This is particularly true of conferencing. Up to 20 per cent of minutes carried on an enterprise's phone network can result from audio conferences. And 50 to 60 per cent of conference calls are between people from the same enterprise.
The cost of connecting these people through public phone networks can account for 30 or 40 per cent of the total cost of a conference call. If employees are located in different countries, for example, some may need to make expensive international calls to connect to the conference. Where mobile phones are used, the costs can be even higher.


However, if organisations use their IP VPNs to connect everyone, much of this cost can be saved. By adding a managed conferencing service to their corporate network, the majority of calls can be brought ‘on net'. Savings of 20 per cent or more can be achieved as a result. Even better as far as cost management is concerned is that the variable cost of calls to public conferencing services is replaced by the fixed cost of providing enough network capacity to handle the additional calls.
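The arithmetic behind those percentages can be sketched simply: if roughly half of conference-call minutes are internal and can be carried on-net at little marginal cost, the saving on the total conferencing bill follows directly. The per-minute figures below are invented for illustration and are not BT tariffs.

```python
def conferencing_saving(total_minutes: float,
                        on_net_share: float = 0.55,   # ~50-60% internal conference legs
                        ppm_public: float = 0.05,     # illustrative public per-minute price
                        ppm_on_net: float = 0.01):    # illustrative marginal on-net cost
    """Estimate the monthly saving from moving internal conference legs on-net."""
    before = total_minutes * ppm_public
    after = (total_minutes * (1 - on_net_share) * ppm_public
             + total_minutes * on_net_share * ppm_on_net)
    return before, after, 100.0 * (before - after) / before

before, after, pct = conferencing_saving(100_000)
print(f"before {before:,.0f}, after {after:,.0f}, saving {pct:.0f}%")
```

With these assumed rates the saving is well above the 20 per cent cited; the exact figure depends entirely on the on-net share and the tariffs actually in place.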


Of course, while complete convergence is the ideal, most organisations have yet to reach it.
Surveys suggest that the 80:20 rule applies. While 80 per cent of organisations have started to introduce IP telephony, only 20 per cent have completed deployment. The remainder still operate a mix of old and new telephony systems across their facilities.
It's a situation that can persist for many years. New offices might be equipped with IP telephony, for example, while outlying branches continue to use their traditional PBX solutions.


So what can you do to gain control of conferencing costs in the meantime?
One option is to select a single global supplier for conferencing services, which can help remove the need to make expensive international phone calls.


Imagine someone in New York wants to arrange a conference involving colleagues in Europe and Japan. If the call is set up through a US-based supplier, those joining from other countries will have to make international calls to join it. If the call is arranged through a global supplier, everyone will be able to connect by calling a number in their own country. Costs are reduced considerably as a result.


Another option is for organisations to choose a global supplier's hosted conferencing service. In most respects, this is the same as choosing an on-net managed solution. There is one important difference, though: the conferencing platform is located on the supplier's premises and connected to the enterprise's IP telephony systems through VPNs. Calls from employees using the IP telephony systems are effectively 'on net', while everyone else connects by dialling the local access number for their country, with their calls carried to the conferencing platform over the supplier's global network. These solutions leverage the enterprise's network investment and create a highly cost-effective service.

Whichever option you choose, there is one more thing you need to do to maximise the benefit your organisation gains from conferencing - embed it in your company's culture.


If you've invested in a managed or hosted conferencing service, it makes sense to get as much use from it as you can. Additional calls make little difference to the cost of providing the service, but can reduce travel bills and improve employee productivity.
The chances are you'll have little difficulty in convincing 10 to 15 per cent of your staff to use the service. They'll be your early adopters - the people who are keen to try something new and have probably been using conferencing for some time.


The main issue is in convincing the rest of your organisation and changing their behaviour. Unless someone is telling people why it is beneficial to change, showing them how to do it, training them in the technology and monitoring their usage, it won't happen.


The problem comes from the fact that companies rarely have any experience to build on when they begin to introduce their conferencing solutions and drive up adoption. They're doing it for the first time and may never need to do it again. The results they achieve suffer as a result.


Where a supplier can make a big difference is by bringing a wealth of practical experience and understanding to the table. Over the years, they will have come across almost every situation and will know how best to address it. They'll also have 'out of the box' CRM tools and information systems to provide effective support, and a range of tried-and-tested training and communication programmes. Armed with this, they can help enterprises introduce conferencing cultures quickly and painlessly, bringing forward the return on investment.


Because they've done it many times before, global conferencing service providers are good at these sorts of education programme. They know what will get people's attention and how to get the message across. As a result, they can change an organisation's culture from face-to-face to conferencing much more quickly than might otherwise be possible.
And with the continual pressure on organisations to reduce costs and improve efficiency that has to be a good thing.

Aaron McCormack is CEO, BT Conferencing

Benoit Reillier provides an update on the key regulatory topics that will shape the telecoms market over the next few years

Members of the European Parliament (MEPs), Commission officials, the Council of Ministers, as well as lobbyists and advisors are currently busy in Brussels negotiating the wording of the proposal for a new EU regulatory framework. Many changes have been proposed since the initial proposal was put forward by the Commission last November and time is running out for a consensus to emerge. The stakes are high as the resulting package will have to be transposed into law by all member states in 2010 and 2011 and will effectively provide the rules and regulations for the telecoms sector until the next review.


While no decisions have been made, it is likely that several of the controversial proposals that were put forward by the Commission last November will be diluted before an agreement can be reached.


One of the most contentious proposals was the creation of a powerful pan-European regulator (European Electronic Communications Market Authority or EECMA) with a range of additional powers. This idea was seriously criticised by both the Council of Ministers and the European Parliament and is therefore likely to be replaced by a more official recognition of the role of the existing European Regulatory Group (ERG), made up of representatives from national regulators, in coordinating national and pan-European regulation. Details about the financing, powers, name and status of this "enhanced" ERG are yet to be finalised.
The Commission's rather ambitious proposals for the development of a more market-based approach to spectrum allocation and management across member states are also unlikely to survive the current round of negotiations. A "mixed spectrum management regime", balancing economic and public policy considerations, is likely to be proposed instead. Unfortunately this may have more to do with the significant lobbying power of broadcasting institutions in national markets (which often benefit from "free" spectrum at the moment) than with sound economics... some adjustments providing better coordination between member states are likely to be made, but any significant spectrum reform will probably have to wait until the next framework review.


The ability for national regulators to mandate functional separation still appears to be on the agenda. Needless to say, incumbent operators are quite worried about the prospect of being forced to split up their operations. It is anticipated however that the final wording of this proposal will reflect the last resort nature of this particularly intrusive remedy. It is likely for example that the economic analysis to be carried out by national regulators to support the case for separation will have to be very robust and take into account investment incentives.


The Commission also proposed that markets deemed competitive be removed from the list of ex-ante markets. These proposed changes, which would broadly result in a move away from the regulation of retail markets (which are increasingly competitive) to focus on wholesale markets (where infrastructure providers can be dominant), are less divisive than some of the other proposals and are therefore likely to go through.


The EU Parliament and the Council of Ministers also asked that a number of important topics that were somewhat overlooked in the original proposal be addressed by the new framework. For example, the Commission was asked to clarify its position on the regulation of investment in Next Generation Networks (cf. last column on this topic).


While none of the above issues have been decided yet, a consensus is required soon and significant modifications of the original text will have to be made for an agreement to be reached before the end of the year. All the players in the telecoms market are anxious to know the new rules of the game that they will soon have to play... and win.

Benoit Reillier is a Director and European head of the telecommunications and media practice of global economics advisory firm LECG.  He can be contacted via: breillier@lecg.com
The views expressed in this column are his own.

As the telecoms industry embraces transformation - and all that implies - Alex Leslie argues that billing, or the now more fashionable "revenue management", remains strategic in a deeply unpredictable marketplace

There are a few sayings that I remember from about ten years ago. A couple came from the revenue assurance managers - ‘Billing reveals the sins of the entire company'; ‘My billing processes were perfect - the day before we launched' and perhaps less amusing but very true was this: ‘Billing is where you implement the rules of the business'.
There are a couple of problems with this last saying. The first is that billing is no longer billing in the traditional sense; the second is that knowing what your business rules are going to be in two, three or five years' time is impossible in the current circumstances.


But the saying is still true, in spirit, even though billing has basically been redefined. Various people have spent much of the past 15 years redefining it - mainly in bars late at night, after a full day of conference sessions. We have now come to the conclusion that it should be called revenue management, whether that revenue comes from a prepaid, postpaid, sponsored or some hybrid transaction. The revenue management process is about assigning a value to a service or product, and managing that value so that it becomes revenue (and then profit, and then shareholder value). Ultimately revenue management is about the customer experience - presenting products and services to the customer in such a way that the customer continues to use your services, satisfied that he is getting value from the service and being charged correctly. The bill itself must become a value statement, not simply a demand for payment.


The reason that we do not know what the business rules are going to be in a few years' time is that markets are changing, the pace of competition keeps increasing and the array of competitors and players keeps multiplying. Just a couple of years ago convergence was about services and payment methods converging onto one platform, putting the customer at the centre of our universe. It meant consolidating the many systems that supported single product lines onto one or two systems that enabled this move to a customer-centric view. But convergence now means that entire markets, entire ecosystems, are changing around us. Google, Apple and Microsoft now represent the innovators of the communications industry, the new Service Providers - not the more familiar telecoms names. The very definition of service provider or communications provider is changing too.


Competition is coming from all directions, and the telecoms world is not as powerful as it was ten years ago, when it was twice the size of the media industry. If it is not careful and quick it will be marginalised. Some of these new players are quite prepared, even keen, to go round or ‘over the top' of the network operator, in order to offer customers what they want.
A symptom of this is that real-time charging is now one of the topics of the year, a feature of almost every industry event. Not, I suspect, because the communications industry has decided it is about time to do something about a buzzword that has been around for ten years, but because real time is the way that ISPs and content providers think and operate. Postpaid is too slow, too traditional for the new world, and real time is now the answer. We must not be left behind, and even though 'real time' is only appropriate in some instances, not all, the capability must be in place.


So, if billing is now about managing value whilst enhancing the customer experience, and the strategy that defines the business rules changes so fast that you are not entirely sure whether your business should look like a supermarket or a gas company - what do you do?
Both business models are possible and valid. There has been much discussion about the communications company as a supermarket. You should be offering, or offering access to, a comprehensive range of products, some your own brand, some branded by others. You should have inventory systems that are second to none, partner/supplier relationship management systems to be proud of. You must have logistics capabilities to beat them all and a point of sale system that is seamless, easy and flexible. And of course it must be able to provide information and reports that support the management decisions that define the strategy. The comparisons are obvious between a value added communications provider and a physical supermarket, and the emerging ‘services' markets and frameworks are being set up to offer a huge range of services, simply and quickly.


Alongside this the loyalty schemes are emerging in the communications world, schemes that the big supermarkets are so good at - business intelligence is competitive advantage nowadays.


I often wondered why my supermarket offers me occasional vouchers for things that I have not bought from them. It took me a while to realise that the things that I did buy were being profiled against a particular type of customer (and not just ‘male', over 40), and that what they were trying to do was to get me to buy the things from them that I would normally go to another store to buy - garden tools (male, over 40), tee shirts with improbable slogans on them (male, over 40, probably has a teenage child), or motor bike accessories (male, over 40, planning a mid life crisis any day now). Their profiling is actually so sophisticated that I am probably a hundred times more predictable to a supermarket than I am to myself.
Or there is the gas company model - the slimmed down, lean machine that delivers bandwidth and access, without frills. Simple, clear tariffs and options, which is what the customer wants. Some companies will thrive on this business model.


Either way, or whichever way - there are many models and many choices to be made - the world of telecoms is going through a huge transformation, slimming down and getting as fit as it can as fast as it can, to be ready for whatever life throws at it - from whichever direction.


There was a survey done at a recent conference. A question was put to the audience about how many of their companies were planning, implementing or had implemented a business transformation project. Over 80 per cent of the audience of telecoms companies answered ‘yes' to one of those three options.


Which takes us back to the question of readiness. How do we get ready? What do we transform into? And if we do not know, how do we prepare? If a company is driven by strategy, that strategy must be supported by the processes and systems of the company. And even if that strategy is to be ready, flexible and able to react to market change and market opportunity, then the processes and architectures must be able to support that strategy and implement those as yet undefined business rules.


And the next question is this: do your processes and systems currently support the strategy of the company? Quite likely you upgraded or replaced legacy systems eight years ago and bought the software that solved the problem of the day - the race for market share, fast tariff-change capabilities - and that is not what is needed in the new, customer-centric, as yet undefined world. You and your management are nervous about doing it all again. The memories of sleepless nights may still be with you. I am sure the inclination is to wait and see, to play it safe.


And while time ticks away, you are probably working around the edges of the problem - automating pieces of processes (an absolute necessity), opening up new channels and payment methods, but the big problem is still there.


Ideally you are going to need something so flexible that whether your management says ‘gas company model' or ‘supermarket' or even ‘gas company that sells shampoo' you are there, proving my point of the last six years - that billing is strategic. You must be ready for the meetings with Marketing and Management, and able to offer suggestions for innovative products and bundles that could be presented to customers, now.
The other, bigger problem is that this is all very well for someone who writes articles, and does not get his hands dirty, to say you need to change, but it is hugely risky and the likelihood is that your management is risk averse.


The innovative software that is needed to be ready for the ‘next big thing' is generally produced by small, innovative companies, and we both know how popular that is going to be when you are putting the business case to management. But I also know that the worst scenario is the realisation that your processes would not be able to support the new product or service that your competitors just launched. There are ways round this of course, there are large systems integrators who are well aware that communications companies need to innovate, but also need to feel safe when doing so. They are addressing this problem. Your existing partners are also well aware of the challenges and implications, and are able to help.


But the bottom line is that innovation is essential, and soon. In this world, it only takes one of your competitors to be first and fast and successful and the game will change, and you will be struggling to catch up.


The saying may still be true - that billing, or revenue management is where you implement your business rules, but we do not have the luxury of knowing the whole strategy before we have to implement systems or processes that support the unknown rules of tomorrow.
We should prepare.

Alex Leslie is a Communications Industry Consultant, and can be contacted via alex.leslie@btinternet.com

    
