Features

Benoit Reillier examines the economics of bandwidth  

The relationship between infrastructure investment and economic growth has been established by many studies. In our ‘knowledge-based' economies, investment in new communications infrastructure, in particular, is seen as increasingly critical to long-term economic growth.

In much of the developed world, most of the fixed line telephony and internet services used today are based on old copper pairs that were designed for voice telephony in the early 1900s.  While clever technological innovations (such as DSL technologies) have recently allowed operators to breathe new life into these old local loops, this is not quite the gift of immortality that some may have hoped for.

But replicating or upgrading the existing local loop with new, fibre based technology to facilitate the availability of high bandwidth services is a difficult and costly exercise.
And while some argue that demand may not yet exist for very high bandwidth, it is worth keeping in mind that Moore's law (which roughly states that the processing power of computer chips doubles every two years) has held remarkably true since the 1960s. There is no reason to assume that this trend will stop overnight, and therefore every reason to believe that tomorrow's digital cameras, TVs and computers will have a higher resolution than those of today. The processing power of our computers and the storage space required will increase accordingly, and so will bandwidth requirements.  If the infrastructures cannot cope, they will increasingly represent a bottleneck.
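As a rough illustration of how quickly this kind of compounding adds up (the starting figure and horizon below are assumptions, not numbers from this article), a few lines of Python make the point:

```python
# Illustrative only: compound doubling every two years, Moore's-law style.
# The starting bandwidth and time horizon are hypothetical.

def doublings(years: int, period: int = 2) -> int:
    """Number of doublings over a span of years, one per 'period' years."""
    return years // period

start_mbps = 2          # assumed typical demand per household today
horizon_years = 10

projected = start_mbps * 2 ** doublings(horizon_years)
print(f"If demand doubled every 2 years, {start_mbps} Mbps today "
      f"would become {projected} Mbps in {horizon_years} years.")
```

On those assumptions, 2 Mbps of demand today becomes 64 Mbps within a decade, which is the kind of gap an ageing copper loop struggles to close.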

Operators are therefore increasingly considering the roll out of so-called New Generation Networks (NGNs). Given that copper does not carry signals well over long distances, there is a direct relationship between how much copper is used and the speed and quality of services provided over it. The key question, therefore, is: how close to our homes will these new networks be rolled out?

Many operators talk about NGN investment in the context of using optical fibre in the core of their network. Needless to say, while helpful overall, from an economic viewpoint this is not the kind of transformational investment plan that would drastically enhance the experience of users, as it would leave the ancient copper loop infrastructure intact - and not getting any younger. Some talk about New Generation Access (NGA) networks, and these too can have different characteristics that may or may not provide users with the full new generation experience. Fibre to the Home (FTTH) is more costly than the alternative Fibre to the Cabinet (or Curb) solution (FTTC); however, it offers higher speeds and, perhaps more importantly, greater reliability, as the network is no longer dependent on a copper line to the home.  In the US, Verizon is rolling out FTTH to around 19 million households, while other US telcos are following an FTTC strategy.  It is not yet clear which strategy will be the most effective.  In both cases, NGA requires very significant investments that operators (and their shareholders) are often reluctant to commit to in light of the uncertainties associated with the financial returns available.

The New Regulatory Framework being negotiated in Brussels at the moment will have a significant impact on many of the underlying economic drivers that operators are considering, and it is likely that large scale NGA investment plans will be somewhat delayed in a number of countries, at least until more regulatory visibility is provided, probably at the end of the year.  The framework also requires regulators to focus on how regulation affects investment decisions, in the short and long term, rather than simply transferring wealth from suppliers to consumers.
Of course, other technologies, often wireless-based like the much-hyped WiMAX standard, are possible substitutes for local loop investment. Users are, in fact, technology agnostic and couldn't care less about the underlying delivery mechanisms as long as the quality, reliability and features expected are made available at a reasonable price. One thing is sure, though: our century-old copper pairs are unlikely to provide the communications capabilities that our economies' future growth will require.

Benoit Reillier is a Director and European head of the telecommunications and media practice of global economics advisory firm LECG.  He can be contacted via: breillier@lecg.com
The views expressed in this column are his own.

 

 

A new services development paradigm is driving the communications industry, says John Janowiak

What exactly is Web 2.0?  How relevant is it to the service provider business model going forward? The short answer: very relevant.  Carriers who remain focused on traditional voice services and video are missing the larger transformational drift of the communications industry. It's no longer just about service providers inventing services and then selling them to customers; it's about platforms on which customers share communications and entertainment experiences with one another, building ever-larger communities of friends, colleagues, and customers.

If anything characterizes the Web 2.0 world - and, by extension, the new soft service provider world - it is openness. In order to interact richly with colleagues, friends, customers, and business partners, end users are pushing a model in which 1) they have a hand in developing and defining the services they themselves want, and 2) interacting with the network itself is easy and efficient. This is the end game of the network-as-software model: one in which software and applications live on the network, are accessed by the network, and indeed are created via the open-access network.

For example, at SOFNET 08 - a new conference produced by the International Engineering Consortium in April - Microsoft will discuss its Connected Services Sandbox as an example of this new paradigm. Through Sandbox, operators can open their networks to next-generation Web 2.0 applications that can be mashed together with traditional services to create new connected services. The goal is to facilitate the rapid development and market deployment of new service offerings, creating new opportunities for participants and delivering new options for consumers and businesses.

"In the new soft service provider environment, operators will be able to offer hundreds, if not thousands, of new services that enable them to target specific customer segments, reduce ‘churn' and drive new revenues." says Michael O'Hara, general manager for the Communications Sector at Microsoft. "By embracing the principles of Web 2.0 and leveraging the significant customer relationships and assets they already have in place, operators have the opportunity to redefine the models for doing business."

Matt Bross, Group CTO for BT, in an interview recently with Light Reading, noted: "The innovation genie is out of the bottle.  We need to do more mash-ups, and we need to connect together for innovation. There are major innovation possibilities by opening up collaboration opportunities. We're moving towards a real-time global innovation model, and moving from a closed to an open model. It's a big challenge."

Getting a handle on these mash-ups (that is, creating a new service by putting together two existing ones), as well as opening the network to third-party innovators, is the course forward, according to Bross, who will serve as overall conference chair at SOFNET 08.
"We need to change our mindsets and focus on how we can enhance the quality of people's lives and how they do business," he said. "We need to innovate at the speed of life."

John R. Janowiak, President, International Engineering Consortium

SOFNET 08 runs from April 28th to May 1st 2008, at the Olympia National Hall, London.
www.iec.org

The increasing complexity of service provision is creating new revenue leakage risks, says Adam Boone

New services, innovative service bundling, and emerging content-distribution business models open the door to a host of potential new risks for the typical telecommunications service provider.  Increasingly complex new content partnerships and revenue sharing arrangements create potential for new forms of revenue leakage.  Smarter end-user devices, content-based services and converged network environments create new potential for fraud. 

In short, the new world of next-generation services means new risks, and these emerging problems are foremost in the minds of service providers around the world as they seek to roll out new service offerings and adopt more competitive business practices.
In mid-2007, telecoms industry research specialists Analysys undertook the fifth annual global survey of service providers' attitudes to revenue management, touching on topics like fraud, revenue assurance, and other sources of revenue leakage.   The survey, which was underwritten by Subex Limited, identified a continued increase in revenue leakage across the globe and provided insight into the main causes.  The study examined regional differences in revenue leakage and the approaches to combat it, showing some intriguing differences in how European operators address the problem when compared with operators elsewhere.

Globally, Analysys reported, the overall average level of revenue leakage from all causes stood at 13.6 per cent of total revenues. The Middle East/Africa region experienced revenue loss of more than 20 per cent, with Asia close behind at just below 20 per cent and Central and Latin America at more than 15 per cent. Western Europe ranked lowest of all regions at about 7 per cent, just below Central and Eastern Europe at 8 per cent, with North America at just about the 13 per cent average.

Breaking losses down by operator type, mobile operators continue to lose the most, at nearly 14 per cent, while by size, mid-sized operators with between 100,000 and one million subscribers rack up the most loss, at more than 18 per cent. For comparison, the largest operators are losing only 6 per cent of revenue per year.

Significant growth in fraud losses and revenue assurance problems related to the launching of new products and pricing has driven the overall increase in losses.  In addition to fraud, the three primary sources of revenue leakage cited by respondents are poor processes and procedures, poor systems integration, and problems associated with applying new products and pricing schemes.

The level of revenue loss that operators find ‘acceptable' has risen this year to 1.8 per cent, from 1.1 per cent in 2006. This is the largest single increase since the survey was started five years ago. Major incumbents were least tolerant (1.2 per cent) and fixed line alternate operators most tolerant (2 per cent). Operators in Central and Latin America reported the highest level of ‘acceptable' loss at 2 per cent, with operators in the Middle East and Africa accepting lower levels of loss (1.4 per cent) than the average.

At the planning stage for new products, most operators take into account most causes of loss as part of their preparation for new service launch.  However, 32 per cent of operators do not use any third party help to address revenue leakage issues. Yet the findings show that operators who use third-party solution providers for revenue assurance lose 30 per cent less than those who use no external help.

The survey found that managers responsible for revenue assurance and fraud management feel a great deal of uncertainty as they look to the future and consider next-generation networks and converged services.

The findings showed that dramatically more revenue assurance and fraud managers are concerned about the impact of next-generation networks and services on revenue management than in previous years.  In fact, around half of the survey respondents reported that addressing revenue management issues for these new technologies will be a chief concern in the next three years.  Much of the anxiety may stem from the unknown.  New products like IPTV have yet to reach mass-market subscriber penetration, and unanticipated revenue leakage issues may emerge as these products reach peak subscriber growth and market uptake.

The new converged, IP network represents new opportunities for fraud, especially as these services incorporate content that must be delivered and billed for across a converged infrastructure. Further, end user devices are increasingly intelligent, opening the door for new hacking techniques and mobile malware. These are significantly different from the risks associated with a traditional fixed-line telephone network and therefore present a higher degree of vulnerability for operators.  This risk is heightened as operators are under growing pressure to deliver these new services rapidly in order to stay ahead of the competition.  Compounding these challenges, the competitive environment facing most operators means they must achieve faster time-to-market for new offerings and shorter product lifecycles.  As a result, billing processes must be able to adapt to this accelerated pace of change, or what has been described as the need for greater service agility and operational dexterity.
Another area to take into consideration is the implication of delivering new content-based services that involve third parties. In the Analysys research, 30 per cent of respondents cited interconnect/partner payment errors as one of the main causes of revenue leakage across the business. If an operator intends to offer content-based services, it may no longer be responsible solely for a service's connectivity, but also for the delivered content, whether provided on its own platform or by a third party.  This creates additional complexity at the accounting stage due to payment handling, tariff management and revenue sharing. With more parties involved in the delivery of the service to the end-user - the operator, the content owner, the content host - there is potentially greater opportunity for interconnect or invoicing system errors, which need to be assessed.

An emerging key strategy for addressing many facets of this transformation, and for maximising the benefit of revenue management efforts, is to establish a Revenue Operations Centre (ROC), a consolidated collection of systems that monitors the health of the revenue chain and the impact on costs. Just as a Network Operations Centre (NOC) enables the tracking of service quality and network health, the ROC is a centralised monitoring and control infrastructure that integrates an operator's individual revenue assurance, fraud management, cost management and risk management solutions to better monitor revenues and costs.  This end-to-end approach takes into consideration all the processes involved in delivering the service to the subscriber, and helps the service provider to understand the impact of operational processes and outcomes on profit.

A ROC allows operators to monitor the financial performance (eg total revenue, arpu, subscriber growth), revenue performance (eg revenue/cost by category, revenue/fraud loss) and operational performance (eg revenue/fraud/ bad debt loss by root cause) across their networks.  It also enables an operator to track costs associated with delivering services, and arrive at an understanding of the profitability of different service types, different subscribers, different market segments, and other relevant business metrics.
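As a simplified illustration of the ROC idea, the sketch below pulls invented revenue assurance, fraud and cost figures into one consolidated view; a real ROC would of course integrate live feeds from the operator's OSS/BSS systems.

```python
# A toy consolidation of revenue assurance, fraud and cost feeds into one view.
# All feeds and figures are invented for illustration.

feeds = {
    "revenue_assurance": {"billed": 9_400_000, "expected": 9_750_000},
    "fraud_management":  {"confirmed_fraud_loss": 120_000},
    "cost_management":   {"interconnect_overrun": 45_000},
}

leakage = (feeds["revenue_assurance"]["expected"]
           - feeds["revenue_assurance"]["billed"])
total_at_risk = (leakage
                 + feeds["fraud_management"]["confirmed_fraud_loss"]
                 + feeds["cost_management"]["interconnect_overrun"])

print(f"Revenue leakage:       {leakage:,}")
print(f"Total revenue at risk: {total_at_risk:,}")
print(f"As % of expected:      "
      f"{total_at_risk / feeds['revenue_assurance']['expected']:.1%}")
```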

For operators offering next-generation wireless and wire-line services, implementing an end-to-end approach to monitoring and protecting revenues and managing down costs will become an even greater requirement, as services become more complex.  An approach like a ROC enables operators to compare and consolidate information from across network, operations and business systems to monitor revenue chain integrity, detect cost overruns and, hence, achieve sustainable profitability.

Adam Boone is VP Strategic Marketing, Subex Limited

adam.boone@subexworld.com

www.subexworld.com

Jonathan Bell explains how operators can make ageing Intelligent Network infrastructure flexible for both the markets of today and of tomorrow

All industries are future-focused and the telecoms industry is no exception. Next generation services such as IPTV and mobile VoIP have been the talk of the industry for a number of years. However, it is important that service providers do not get distracted from their core revenue drivers. In Western Europe these are undoubtedly voice and messaging. In 2007, 95 per cent of mobile telephony revenues (€29.5 of a total arpu of €30.4) came from person-to-person (P2P) voice and messaging. Although most analysts predict that messaging and voice revenues will decrease, the latest estimates predict that revenue share will stay above 80 per cent for the next five years, still representing substantial revenues for the operator.
In order to make the most of revenue opportunities, a major focus of the industry should be on the best ways to innovate within existing person-to-person communication services. Introducing a well-targeted variation of an existing service generally leads to much higher acceptance and adoption than launching completely new, unproven services.

However, this innovation is only happening to a limited extent. Despite the technology being available to enhance these services, the telecoms model that has stood for the last 20 years has not really changed. Globally, operators are still providing standardised, homogenised, utilitarian voice and messaging services, effectively providing access and connectivity only.
With increased competition and regulatory changes - particularly in roaming - telephony prices are falling and margins are shrinking.  The spectre of telecoms companies becoming bit-pipe providers in a commodity market is already here. If operators do not take action now, established markets will begin to slip away and the opportunity to develop an existing, semi-captive market will be missed.

There is a great deal of opportunity for operators to extend their services beyond the same limited voice and messaging product set, and focusing innovation on these services makes sound business sense. Value-added features such as presence or visual voicemail extend already popular services and do not require the conceptual leap that something like mobile TV does, making them easier to sell.

But if the technology is available and the market is ready, why have operators not explored these avenues more comprehensively? The reason lies in the construction of the Intelligent Network (IN) platforms that mobile networks use.

The IN authorises and controls connectivity, metering and charging for calls and data sessions.  This is an exacting and complex task requiring low latency as the IN sits in the call signalling path.  In order to achieve this with exceptional reliability and with an enormous volume of concurrent activities, IN platforms were engineered - 10 to 20 years ago - as tightly integrated stacks of software and hardware.  For the same reasons, the telecommunications services that they host are streamlined, so that they are relatively simple and standardised, utilitarian services.

Today's IN platforms were conceived and designed in a different era, one where there were significantly fewer telecommunications services and networking technologies.  Each line of IN code has to be crafted and the service logic and interdependencies tested by a small number of highly skilled IN engineering staff.  Software engineering has also moved on in the interim.  Modern software engineering approaches design systems as separate, horizontal layers that provide services to the other layers, which are decoupled from each other.  This helps to provide a "safe" runtime environment that isolates the behaviour of the application code from the platform and other applications or services. 
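A minimal sketch of that layered, decoupled approach is shown below; the class names are illustrative only and not drawn from any particular IN product.

```python
# Upper-layer service logic depends only on a narrow platform interface,
# so the application cannot reach into (or destabilise) the platform internals.

from abc import ABC, abstractmethod

class ChargingPlatform(ABC):
    """Lower layer: exposes a small, stable interface to the services above."""
    @abstractmethod
    def reserve(self, subscriber: str, amount: float) -> bool: ...

class PrepaidPlatform(ChargingPlatform):
    def __init__(self) -> None:
        self.balances = {"alice": 5.00}          # invented sample data
    def reserve(self, subscriber: str, amount: float) -> bool:
        return self.balances.get(subscriber, 0.0) >= amount

class VoiceService:
    """Upper layer: application logic that only knows the platform interface."""
    def __init__(self, platform: ChargingPlatform) -> None:
        self.platform = platform
    def handle_call_attempt(self, subscriber: str, setup_cost: float) -> str:
        return "connect" if self.platform.reserve(subscriber, setup_cost) else "reject"

service = VoiceService(PrepaidPlatform())
print(service.handle_call_attempt("alice", 0.15))   # -> connect
```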

IN platforms are expensive in themselves and represent a substantial opex investment for the operator.  Excessive ‘spare' capacity is therefore undesirable: it translates into dead weight in terms of assets, and most operators organise their IN capacity to limit it. The result is that many operators are working at full capacity, further limiting their ability to roll out new services.

Further exacerbating the situation is the evolutionary rather than revolutionary approach that operators have taken to their networks.  As new technologies have become available, mobile telephony services have been added to and modified, resulting in a disparate array of equipment and IN platforms that do not readily communicate with each other.
As a consequence, the level of expertise needed to adapt the IN is very high and such engineering is expensive.  This acts as a real barrier to service innovation - operators need to be almost 100 per cent certain that a new service will deliver before they can even embark upon a trial, a level of certainty which is rarely possible, meaning that valuable opportunities are often missed.  Indeed, the high cost of service creation and the long lead times associated with getting a service developed often invalidate the tentative business case for new services before they can be explored and trialled.

The IMS (IP Multimedia Subsystem) architecture, the future blueprint of mobile networks, is expected to solve a lot of these issues. The network will have an all-IP core, which is intended to reduce costs and enable the creation of new services on a uniform platform. In particular, the IMS Session Initiation Protocol (SIP) Application Server concept is designed to facilitate service innovation and eliminate the rigidities of today's IN platforms.
However, IMS is a costly infrastructure investment and rollout, so operators are reluctant to make a full migration without immediate service requirements. Today, nearly all mobile subscribers are on the SS7 circuit-switched TDM network and all the applications and services they value are based upon this technology.  Making the business case for service innovation based on a strategy of IMS rollout and mass subscriber migration to IMS is extremely challenging.

The problem remains that the core business areas of person-to-person communication - chiefly voice and messaging currently - are under fierce price pressure and all operators are providing their customers with the same, standardised, utilitarian services.  If everyone is selling the same thing, then the only way you can differentiate is on price or customer service.  Premier customer service and low price are in direct conflict. 
There is therefore a strong case to be made for service innovation in core person-to-person communications: targeted services designed to meet the specific needs of a segmented customer base, rather than "one-size-fits-all, pile them high, sell them cheap".
At the same time of course, operators need to progress towards the long-term goal of creating a core mobile network based on the 3GPP IMS architecture. The modern operator is stuck between a rock and a hard place. Innovating existing services is a painful process and abandoning their existing infrastructure would be too great a loss of assets.
There is a way through the woods, however.  Just as IN was originally added to augment the telecommunications switch, allowing extra capabilities to be added to the network without requiring significant changes to the switch, it is also possible to augment the capabilities of the IN platform that telecoms operators are so dependent upon - and which, ironically, is also at the heart of their inability to innovate in their core services.  IN augmentation, rather than replacement, maintains the IN's benefits while overcoming its inflexibilities.

Chaining in the service layer enables the operator to configure subscriber-specific service logic for session or call routing and new service integration, effectively providing the flexibility promised by IMS, but on today's TDM networks.  IN augmentation results in fast and cost-effective service introduction, providing the ability to launch, refine, enhance and (if appropriate) withdraw services and variations, including integration with an online charging system for real-time pricing.
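A rough sketch of the chaining idea follows, with invented example services applied in order to a call event before normal call handling continues; it is illustrative only, not OpenCloud's implementation.

```python
# Per-subscriber service chaining: each step may modify or reject the call.
# The services, numbers and chain below are hypothetical.

call = {"caller": "alice", "callee": "2001", "action": "continue"}

def vpn_short_code(call):
    # Hypothetical corporate dial-plan service: expand short codes.
    if call["callee"] == "2001":
        call["callee"] = "+44123456789"
    return call

def premium_rate_screening(call):
    # Hypothetical screening service: block premium-rate numbers.
    if call["callee"].startswith("+4490"):
        call["action"] = "reject"
    return call

# The subscriber-specific logic is simply an ordered list of service steps.
chain_for_alice = [vpn_short_code, premium_rate_screening]

for step in chain_for_alice:
    call = step(call)
    if call["action"] == "reject":
        break

print(call)   # -> expanded number, action still "continue"
```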

Inter-network multi-protocol gateway capabilities bring further benefits.  By providing cross-network access, operators can migrate subscribers to IMS without needing a full service portfolio available in the IMS domain.  The additional benefit is that the vast majority of their subscribers can access the new services provided on the IMS network.
Operators need to respond to market pressures now, using the strength of their two highest revenue-generating services (voice and messaging) to uncover new revenue generating opportunities. This will enable them to differentiate now, providing services that are sticky and reduce subscriber churn, and to charge a premium for services that meet the specific needs of individual customers.  For an industry that is characterised by long term investment and comparatively slow ROI, IN augmentation holds a great deal of appeal. With promiscuous customers and competition from outside the sector, can operators really afford not to explore this?

Jonathan Bell is VP Product Marketing for OpenCloud
http://www.opencloud.com

As communications, media and entertainment services converge and competition increases, the billing system is pivotal in determining operators' ability to embrace or adapt to the potential of next generation business models, says Wolfgang Kroh

The evolution of the telecommunications industry has reached a crossroads. Emerging technologies and radical new business models have the potential to cause a fundamental change in direction for the industry. Now, more than ever before, communications providers look to billing systems vendors to equip them with the tools to manage this uncertainty and provide the agility required to meet the demands of a rapidly changing business landscape.
In fixed line markets, VoIP services are making huge dents in the subscriber bases of the incumbent operators. In 2006, European fixed line operators lost over 10 million subscribers. At the same time new VoIP entrants attracted over 14 million new subscriptions, with just 3 million of those being won by traditional fixed line operators attempting to win back part of the VoIP market share.

In many mobile markets, deregulation and competition have driven down call rates, and the proliferation of MVNOs targeting niche market segments with highly competitive lifestyle, brand or language-based offerings is leading to a near-commoditisation of mobile voice services. Furthermore, Wi-Fi and WiMAX technologies have the potential to make a serious impact on traditional mobile services.

Increased competition and pressure on traditional business models is not a phenomenon solely facing mature markets. Emerging markets such as South Asia and the Middle East and Africa are growing at a rapid pace. In 2007, mobile subscriber numbers in India grew by 7 million per month and the recently launched Etisalat Egypt has built a subscriber base of 3.5 million in just 10 months. Whilst such markets have comparatively low levels of mobile penetration, competition is intensifying and new operators are already deploying state-of-the-art convergent billing architectures and launching innovative value-added content services in order to differentiate themselves from existing providers amid increasing competition.

Whilst there may be uncertainty over the direction in which the communications industry is heading, it is clear that the current climate of increased competition, market penetration and emerging technologies is giving rise to innovation - in terms of new services and new applications but, perhaps more significantly, new business models and new participants in the value chain.

The challenge now facing communications providers is to ensure that they are as market responsive as possible. This requires high levels of business flexibility to rapidly deploy innovative value-added services and applications, and work with a variety of new business partners under non-traditional business conditions. The billing system is therefore pivotal in determining their ability to either embrace or adapt to the potential of these next generation business models.

Communications providers, and in particular mobile operators, threatened with becoming mere ‘bit pipes’, have been keen to acquire compelling content to offer the value added services that, as little as two years ago, were viewed merely as marketing or customer retention tools, but are now considered by many operators as strategic differentiators and, moreover, critical revenue generators. In 2006 the global mobile content market was valued at around $89 billion but, with increased cooperation with the entertainment and media industries and the increased speed of HSDPA, this is forecast to exceed $150 billion by 2011.
However, acquiring content is not necessarily a simple linear transaction. For example, music content distributed over mobile networks requires the licensing of multiple rights, including the right to copy and transmit both the musical composition and the sound recording. Depending on the country, an operator may therefore have to work with multiple partners, (including copyright institutions offering standard, non-negotiable licensing schemes), each requiring reporting, settlements and payment.

The communication provider’s billing systems and processes must therefore have the adaptability to support these potentially complex non-negotiable partner licence agreements, whether they are based on revenue share, rate per use or combinations of these payment models and associated reporting requirements.  A particular challenge can be the obligation, to a rights owner, to calculate the correct proportion of advertising and sponsorship revenues which may have been sold across the communication provider’s entire portal.

Music is an example of the many new services being introduced involving multiple parties in the value chain. In order to promote mobile music services, real-time charging and balance management become increasingly important to the revenue management process. With the diversification of services, many subscribers are still unfamiliar with new content-based services and require reassurance over costs. Billing systems must therefore support real-time charging and balance management to enable real-time advice-of-charge messages, ensuring that subscribers are comfortable using the service. At the same time, they must allow the communications provider to ensure the credit-worthiness of the subscriber with real-time balance authorisation and reservation capabilities. Bad debt resulting from high levels of content consumption, including music, carries the added exposure of third party content licence costs in addition to lost service revenue.
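A simplified sketch of the reserve-then-commit pattern behind real-time charging and advice-of-charge is shown below, with invented prices and an in-memory balance store standing in for a real online charging system.

```python
# Advice-of-charge, then reserve the price before delivery; commit on success,
# release on failure. All data here is hypothetical.

balances = {"alice": 4.00}
reservations = {}

def advice_of_charge(item: str, price: float) -> str:
    return f"{item} will cost {price:.2f}. Reply YES to confirm."

def reserve(subscriber: str, price: float) -> bool:
    if balances[subscriber] >= price:
        balances[subscriber] -= price
        reservations[subscriber] = price
        return True
    return False

def commit(subscriber: str) -> None:
    reservations.pop(subscriber, None)            # reservation becomes a charge

def release(subscriber: str) -> None:
    balances[subscriber] += reservations.pop(subscriber, 0.0)

print(advice_of_charge("Full-track download", 1.50))
if reserve("alice", 1.50):
    delivered = True                              # pretend the content arrived
    if delivered:
        commit("alice")
    else:
        release("alice")
print(f"Remaining balance: {balances['alice']:.2f}")
```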

Multi-play services are increasingly being deployed as a means to differentiate and gain market share. Traditionally this has been a strategy of the broadband cable operators aggressively moving into the telecommunications space. However it is now also being deployed by new market entrants seeking to rapidly gain market share in highly saturated markets, who have the advantage of state-of-the-art billing architectures rather than the ‘siloed’ legacy systems often operated by incumbent providers.
One such example is EITC, in the UAE, operating under the brand name ‘du’. At the time of launch, in February 2007, mobile penetration in the UAE already exceeded 120 per cent but after just 10 months of operations du had attracted over 1.9 million subscribers, accounting for some 30 per cent of market share.

Central to du’s strategy to enter the UAE market was the premise of offering subscribers simplicity and convenience. With du’s ‘At Home’ package, subscribers were offered innovative voice, data, video and content packages that are not only simple to use but also easy to purchase, to pay for and to get support for.

du’s state-of-the-art, ‘any-play’ billing architecture was fundamental to this strategy. It enabled all network technologies and all services to be supported within one system. As a result, du is able to offer a triple-play fixed-line, Internet and pay-TV package with all elements of the service covered by a single monthly bill. In addition, the system facilitates fully integrated customer care, whereby a subscriber can receive support for all services through a single point.

The convergent capabilities of du’s billing system were also a key enabler of the highly targeted cross-service campaigns and promotions that were another key feature of its aggressive launch into the UAE market. It enabled du to offer compelling, tailored packages and marketing promotions to targeted segments of its subscriber base, across a range of communications media, in both the consumer and business sectors. One such promotion is du’s ‘Free Time’, a cross-service promotion whereby subscribers earn credits for every second of every international call made.  The credit accumulates and is displayed on the bill each month. It can then be redeemed against any kind of usage, monthly fees or value-added services.

Perhaps the greatest potential change to the mobile market place is the arrival of advertising supported services. 2007 saw the launch of Blyk, the UK’s first advertising based mobile service provider, offering subscribers a certain number of free calls and texts in exchange for agreeing to receive targeted advertising messages on their mobile phones. Historically, mobile operators have been able to charge a premium on mobile call rates for the intrinsic value of mobility; however, with the advent of what are, at least in part, free mobile services, it could be that we are witnessing the beginning of a paradigm shift in the mobile communications business model.

In addition, with growing interest in mobile IPTV, but with no clear business models emerging, it seems that advertising supported services are likely to play a major part in the evolution of telecommunications, and in particular, mobile business models. Mobile operators could soon find themselves competing with providers who are offering equivalent services together with compelling content, free of charge.

Whilst advertising-supported mobile service business models are still emerging, it is clear they will play a major role in the development of the telecommunications business landscape. It is also another example of the business uncertainty that drives the billing system requirements of today’s communication provider.
With the uncertainty over the direction of the communications industry, providers are facing some difficult decisions: which technologies to embrace? Which business models to adopt? Which partners to work with?

However, what is clear is the necessity to invest in a multi-technology ready billing system that provides the convergent, business adaptive billing environment to support sophisticated charging scenarios, including advertising rate plans, and complex partner settlements. This should also include open and service orientated architecture, providing the ability to easily and quickly upgrade in accordance with new technology such as IMS, rapidly launch services and integrate third party applications.
Only those players that have made the necessary preparations to their billing environment and have geared up for innovation will be best placed to maximise the potential of these next generation business models.

Wolfgang Kroh is CEO at LHS and can be contacted via info@lhsgroup.com www.lhsgroup.com

With the latest buzzword ‘transformation' ringing in everybody's ears, European Communications takes a look at what will be on offer at the TM Forum's Management World

The great and the good (and sundry others) from the OSS/BSS world will be descending en masse on Nice again this May, to learn, debate, observe and participate in the on-going, fast-paced and - some might say - disturbing developments that are affecting their everyday working lives.

The TM Forum's Management World - re-titled to reflect the organisation's expansion from a purely telecoms brief into the broader (and more complicated) world of communications, information, entertainment and media - runs from May 18th - 22nd at the Acropolis Convention Centre.  With these separate but increasingly converging industries still going through what Martin Creaner, TM Forum's President and Chief Technical Officer, describes as "a total sea change", the Forum's role in bringing together the movers and shakers from - staying with the sea analogy - the octopus' various legs, is a crucial element in the transformation which many players are now having to undergo. 

Sometimes viewed - it might be said unfairly, given the dull and bureaucratic implications - as essentially a standards organisation, the TM Forum aspires to be, and often is, much more than that.  This is in no small part due to the drive and vision of its CEO, Keith Willetts, who was proselytising the theories of lean and agile corporations when telecoms, as a whole, was still trying to dislodge its boots from the sticky monopoly mud.

Willetts is still banging the drum for optimizing business processes and automating them end-to-end through integrated systems, noting that while it is certainly important to be highly efficient, fast to market and delivering great services, it is no longer enough.  The industry buzzword now is ‘transformation', which Willetts describes as being as much about acquiring new skills and competencies, as it is about putting new kit into central office buildings.  "It's about changing the way companies think and act, as much as it is about new service ideas," he says. "The watchwords are: innovation, partnering, exploiting assets, and taking risks."

Management World in Nice is intended to reflect all these aspects, and give those attending the opportunity to share information as they navigate transformation.  The areas to be tackled at the event, therefore, include ‘Business Transformation Strategies', which looks at the proposition that service providers are continually adapting to market dynamics through transformation strategies - and that only by investing in technology for managing the service lifecycle will they compete in the 21st century; and ‘Technology Transformation Strategies', which will argue that the right use of technology, coupled with effectively managed systems migration, will enhance both operations and business support systems.  ‘Business Enablers and Managing the Content Lifecycle', meanwhile, discusses the fact that systems and processes that worked for more traditional telecom services may no longer be up to the challenge of delivering content-based services.  The next generation of services will be more sophisticated - delivering a mix of content and media to a diverse and increasingly mobile subscriber base. 

Other conference sessions will include SOA for Next Generation Operations; the TM Forum's Prosspero and NGOSS in the Real World; Strategies for Optimising the Customer Experience; Revenue Management and Assurance; and Delivering Innovative Services to Cable Customers.  A specific Focus on China will look at the proposition that understanding what it takes to succeed in China as a service provider, integrator or vendor can be difficult in such a large and high stakes market - but that the cost of failing to enter that market could be significant.  Speakers from China Mobile, China Telecom, China Unicom and Guoxin Lucent will bring to the session their experience and knowledge of living and working in the region.
Reflecting the industry-breadth of its 650-plus members, TM Forum is fielding a number of keynote speakers from different parts of the converging communications industry, including Sol Trujillo, CEO, Telstra; Alan Bell, Executive Vice President and Chief Technology Officer, Paramount Pictures; Paul Reynolds, CEO, Telecom New Zealand; and Stephan Scholz, CTO, Nokia Siemens Networks.

The ever-popular Catalyst Showcases will also, of course, feature at the event. The Catalyst program directly supports the TM Forum's objectives to provide practical solutions, in order to improve the management and operation of information and communication services.  It also aims to provide an environment where service providers can pose real-world challenges and directly influence the system integrators, hardware, software, and middleware providers to define, develop, and demonstrate solutions. Projects within the program are delivered in a very short timeframe, typically six to nine months, and the results are presented at the Showcase during the event.  This year's crop includes, among the ten featured showcases: End-to-End B/OSS Framework; Delivering an Industry Information Infrastructure; Zero Touch Deployment; and Operator User Management.

Alongside the Expo, where vendors, equipment manufacturers, system integrators and service providers get the chance to show their wares, check each other out, and compare notes, the networking events at Management World always prove to be a considerable draw.  As well as the expo cocktail reception, and the networking event party, this year will also see the second Excellence Awards Ceremony and gala dinner, where awards covering such areas as Best New Management Product; Most Innovative Integrated Marketing Campaign; and Best Practices - Service Supplier will be handed out to the winners, among the glitz and glamour of the Palais de la Mediterranee. 

Management World 2008, 18th - 22nd May, The Acropolis Convention Centre, Nice.
www.tmforum.org/ManagementWorld2008

While service providers are still tempted to use price as a competitive weapon, Tony Amato argues that they should be investing capital in enhanced revenue generating VAS applications - and ensuring that they are tested across every element of the network

"I want to be able to download my music and also be online with my friends anywhere-anytime, among other things, and I need all of this very economically", asserts the technology savvy end-customer. The underlying message translates simply to a convenience-at-your-fingertips idiom with a pocket-friendly ulterior motive. If you are a service provider or a network operator, you might already be overwhelmed by such a paradoxical sentiment. On one hand lies the enormity of improving average revenue per user (arpu) and profitability each successive ‘accounting period'. On the other is the seemingly perilous choice of selection and rollout of new Value Added Services (VAS) that appeal to the imagination of the masses. Now consider the following stark realities:

  • Fixed-line revenues are dwindling owing to an increased fixed-to-mobile substitution
  • In geographies that are witnessing positive subscriber growth, arpu figures are either flat or seem headed downwards, although profitability figures have shown improvement in some cases
  • In highly saturated markets arpu growth has shown a direct impact on profitability
  • New and innovative VAS such as multimedia messaging, presence, gaming, mobile commerce (m-Commerce), mobile office and location-based services are starting to contribute significantly to the data component of total arpu (i.e. voice + data)
  • The data component of total arpu is growing, but not fast enough to offset the decline in the voice component of total arpu

Market studies have established VAS to be the prime driver for arpu growth. From an average current revenue share of 8-12 per cent worldwide, VAS implementations are poised to account for at least 15-20 per cent of service providers' top lines in the next couple of years. However, VAS implementations have to be operationally supported by the deployed network and systems. For them to be operationally efficient, service providers and operators are realising the importance of effective utilisation of their network infrastructure. The need of the hour is for new service rollouts to provide ample revenue-improving opportunities, while also dealing with a shortened time-to-market cycle. For them to succeed, all innovative marketing techniques for the new services have to rest on the solid foundation of a harmonised network configuration.

Service providers (both fixed and wireless) deciding to offer bundled/converged (used inter-changeably with VAS) services often find it difficult to deal with intricacies at key stages of the product lifecycle. At the very outset, the high-level business consulting process needs to be focused on assessing the existing operations and management systems to discover potential gaps and recommend solutions (in the order of priority). This forms an inherent part of a future-state transition plan that has strategic as well as tactical ramifications. The overriding motive is to use Business Process Modelling (BPM) to evolve to an operationally efficient state that delivers optimal resource utilisation, improves productivity and reduces the possibility of a substantial overhaul. This improved operations efficiency will streamline processes that work to further enable future VAS.  To help enable a flexible service delivery environment, this stage should also consider prevailing market trends and preferences. The planning process has a bearing on the eventual returns on investment (ROI) and arpu. Service providers have increasingly started to rely on the services of their partners and specialist vendors to chalk-out strategic roadmaps for optimising their networks and service rollouts.

VAS implementation and integration are also fraught with numerous challenges. While content acquisition, its management, and spectrum regulations (for video/data applications) pose a common threat to all providers, the actual implementation and integration effort provides the differentiation from competition. This stage attempts to convert the optimised functional models (suggested during the consulting phase) into action. This may require replacement/retirement of legacy components and introduction of new COTS systems that seamlessly plug into the network. Once the final selection of components is made, their seamless integration into the network follows. Effective customer relationship management and network management are the desired outcomes of this phase. The success of this phase determines the ease of deployment of current and new services, as well as their financial viability, through reduced opex as the result of integrated, end-to-end systems in support of services.

SLA-based managed testing is another interesting trend in the communications space. By removing testing silos and adopting a single testing strategy, service providers and network operators tend to dramatically reduce the operational costs associated with managing a multi-vendor environment across all their networks, devices and applications. As operators focus on the launch of VAS, they are also striving to reduce the manpower and maintenance overheads of their product line. The managed testing partner brings an in-depth knowledge of technology development and testing to support end-customer SLAs and Key Performance Indicators (KPIs).

Extreme competitive pressures are forcing operators to reduce R&D costs, while simultaneously ensuring that VAS are tested across every element of the network.
The touch points at various interfaces of the network core, the various OSS/BSS elements, as well as the main application and network components, also need managed testing. This ensures a thorough verification of all features, functionality, performance and quality metrics before and after service launch. It also improves the predictability and visibility of the costs the operator may need to spend on testing, year on year. Prominent types of managed testing services include test engineering and consulting, end-to-end integration testing, test automation, user acceptance, and multi-vendor interoperability testing services.
Managed testing vendors also have the capability to conduct end-to-end testing scenarios in a controlled network environment. They assume responsibility for the entire service lifecycle. This may include a lab setup to emulate the entire network deployment architecture to conduct various testing scenarios in heterogeneous access networks, with multi-protocol implementations of converged services. Such labs also allow communications service providers to address and monitor critical issues such as performance, latency, voice quality, retransmission, security, QoS and policy, enabling a smooth launch of services, maybe even ahead of the competition.
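As a toy example of the kind of post-launch KPI check a managed testing programme might automate, the sketch below compares measured values against SLA thresholds; the KPIs, thresholds and measurements are all invented.

```python
# Flag any KPI that breaches its SLA threshold. Data is hypothetical.

sla_thresholds = {                 # KPI -> maximum acceptable value
    "call_setup_time_ms": 3000,
    "mms_delivery_failure_pct": 1.0,
    "portal_response_ms": 800,
}

measured = {
    "call_setup_time_ms": 2450,
    "mms_delivery_failure_pct": 1.6,
    "portal_response_ms": 640,
}

breaches = {kpi: value for kpi, value in measured.items()
            if value > sla_thresholds[kpi]}

if breaches:
    print("SLA breaches detected:", breaches)
else:
    print("All KPIs within SLA.")
```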

VAS is leading the way in driving arpu growth and improving profitability. But, this path must be trodden carefully, backed up by the capability of a ‘fine-tuned' network and the associated management systems. Operators cannot afford inefficiency and poor management of their own systems and hope to be competitive at the same time.
Service providers often use price as a competitive weapon when the services market faces extreme competitive pressures. They find it simply easier to offer better pricing for a longer-term contract commitment with early termination fees to suppress churn, than to invest capital in enhanced revenue generating VAS applications. This temptation has to be curbed in favour of optimising their networks to achieve long-term sustainable arpu growth. Strategic partnerships with specialist telecom vendors are enabling them to achieve operational efficiencies to make their networks ready for new service rollouts. This also helps them rationalise their operational expenses.

Rather than worry about the maintenance and deployments of their networks, innovative operators are focusing their energies on managing and growing their businesses through VAS. After all, the end-customer will continue to request more innovative services regardless of any operational challenges a service provider might be facing.

Tony Amato, AVP Network Services Solutions, Aricent, can be contacted via tel: +1 516 795 0082,  e-mail: anthony.amato@aricent.com

Cable operators must streamline their networks for faster service rollout if they are to guard against hungry telcos, says Bill Bondy

As telcos race to roll out IPTV along with Internet access, VoIP, e-mail, messaging and security services, cable operators cannot rest on their laurels by relying on their strongholds in the entertainment and broadband industries. Despite cable's solid brand recognition and established customer loyalty, telcos could gain considerable ground on cable turf by boasting "on-demand" TV capabilities and personalisation of "blended lifestyle services" in their quad plays.

If IPTV subscriptions grow to 36.8 million by 2009, as predicted by Multimedia Research Group, this personalisation will be a significant differentiator.

To stay ahead, MSOs must recognise the many identities of a person as he or she transitions between personal, professional and leisure profiles. A subscriber can be a wife, a mum, an office manager, a tennis player, an antiques collector or a dancer at different times in the same day. The fact that a subscriber could opt to change service settings according to the time of day, location or situation could be leveraged to open the door to increased loyalty through improved perception of service quality.

The problem is that embracing the customer and the seamless hand-offs among TV, fixed telephony, broadband and cellular networks will require substantial engineering feats. Of paramount importance will be the ability to instantly access information about bandwidth requirements, QoS, permissions, pricing plans, credit balances, locations and device types.
To achieve this, there needs to be a one-stop shop for data, and an understanding of how dynamic services fit into rigid legacy networks with silo data storage structures.
While service management, control and security can be greatly simplified with the unification of subscriber-specific data, the fact remains that multitudes of protocols and access methods cut across many components (eg RADIUS, AAA, session accounting, policy management and HSS). That makes consolidation a very daunting task.

With so many different types of databases to manage - each with its own protocols and access methods - there is often a duplication rate of up to 35 per cent. More often than not, manual processes and forklift migrations are the status quo for re-synchronising databases with networks in order to support and keep up with increasingly rapid service changes.
The new-world view of data centralisation is more dynamic, as it focuses on real-time capabilities and on-the-fly transactions. These capabilities require a move away from historical, report-oriented strategies that sat at the core of monstrous data warehousing initiatives and did not have rigorous latency and response time requirements. Monolithic libraries of information now have to give way to intelligent databases that "grip" data for deeper personalisation of services and performance at increasingly higher levels.
To do so, cable companies have to break away from reliance on "transform layers" or "federation layers" that sit on top of multiple databases as an ad hoc "glue". While these layers help applications and clients to better understand the nature of queries, they cease being real-time responsive when dealing with, say, 50 databases. Because each data repository possesses its own access interfaces and protocols, the glue will no longer be enough when cross-database access within the network is required. Core network service and application performance lags are a major liability.

A centralised view will instead depend on the creation of one logical database to house all subscriber data with a discoverable, published common subscriber profile, as well as a single set of interfaces for managing that data (eg LDAP, Telnet, SNMP, etc). The single logical database will co-exist with data federation to allow a gradual, step by step migration of data on a silo by silo basis until the operator has consolidated all required subscriber data to the degree that is possible.
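As a sketch of what "one logical database, one set of interfaces" can look like in practice, the example below retrieves a consolidated subscriber profile with a single LDAP query using the open-source ldap3 Python library; the directory layout, attribute names and credentials are hypothetical rather than any vendor's schema.

```python
# One query against the consolidated directory returns identity, service and
# policy data that previously lived in separate RADIUS, AAA and policy silos.
# Host, DIT layout, attributes and credentials are invented for illustration.

from ldap3 import Server, Connection, ALL

server = Server("ldap://nds.example.net", get_info=ALL)
conn = Connection(server,
                  user="cn=provisioning,dc=example,dc=net",
                  password="secret",
                  auto_bind=True)

conn.search(search_base="ou=subscribers,dc=example,dc=net",
            search_filter="(uid=alice)",
            attributes=["mobile", "serviceTier", "voipEnabled", "policyProfile"])

for entry in conn.entries:
    print(entry)

conn.unbind()
```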

Subscriber data is at the heart of control for the user experience and quality across networks. By consolidating customer data, MSOs enable provisioning and maintenance from one centralised location. A one-step process for adding all data for subscribers and services to a single database would give cable companies a huge opportunity to activate complex services within seconds of customer orders, rather than in some cases hours or days.
Instant access to synchronised data will greatly improve the customer experience, as well as create tremendous opex and capex savings. Potentially, miles of racks and servers could be eliminated if terabytes of data were moved to pizza-box sized hardware rather than complicated SANs and larger servers.

To realise capex and opex benefits, there are certain components that are crucial to centralising subscriber data among different network layers: a hierarchical extensible database, real-time performance, massive linear scalability, continuous availability, standard, open interfaces and a common information model. To help prepare for the day when IMS becomes a reality, leaving room for a software upgrade to a full-blown HSS will become important. 

As cable operators integrate to PacketCable 2.0 environments, building and maintaining a subscriber-centric architecture will be key to services that require very fast, reliable and resilient repositories that concurrently serve multiple applications. After all, latency is not tolerated in pre-IMS networks today, which could spell doom for quad plays that don't build on a consolidated subscriber-centric architecture.

A network directory server (NDS) is the first step in freeing and directing customer data out of silos, as an NDS puts a directory in the heart of the network. With a centralised repository, service logic can be separated from subscriber data, enabling a cable operator to have VoIP and associated services working on WiFi, because the subscriber data can be reused across various access networks (eg VoIP on cable, CDMA or GSM).

Additionally, the application-independent and hierarchical nature of an NDS makes it extremely flexible and extensible, and suitable for hosting data for multiple applications and multiple access networks, compared with embedded relational databases. A proper NDS directory structure is better suited to the disparate nature of the data prevalent in converged networks, which involve dynamic, real-time relationships. An NDS directory is object-oriented in nature, with a data model that is published, enforced and maintained by the directory itself.

For a network directory server to provide these capabilities in the core of the MSO network, it is critical that it be highly performant, massively scalable, and geographically resilient.
Typical disk-based databases and legacy directories don't offer the read/write speed operators need to consolidate data in a live core network. Average latencies of three milliseconds for a query and less than five milliseconds for an update are needed to maintain the performance customers expect. Update performance matters just as much as query performance, and highly distributed, memory-resident directory databases can offer update (as well as query) transaction scalability at the point of access.
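To make those latency budgets concrete, a rough measurement loop along the following lines could be used to check observed averages against the three- and five-millisecond targets; the lookup and update functions are placeholders for whatever access API the directory actually exposes.

```python
# Illustrative latency check against the 3 ms query / 5 ms update budgets.
import time
import statistics

QUERY_BUDGET_MS = 3.0
UPDATE_BUDGET_MS = 5.0

def timed_ms(operation, repetitions=1000):
    """Return the average wall-clock time of `operation` in milliseconds."""
    samples = []
    for _ in range(repetitions):
        start = time.perf_counter()
        operation()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.mean(samples)

# `lookup_profile` and `update_profile` stand in for the directory's real
# query and update calls; they do nothing here.
def lookup_profile():
    pass

def update_profile():
    pass

print("query within budget:", timed_ms(lookup_profile) <= QUERY_BUDGET_MS)
print("update within budget:", timed_ms(update_profile) <= UPDATE_BUDGET_MS)
```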

As critical as performance is availability: downtime is lost business, so a consolidated single logical database must always be available. The network directory must provide continuous availability even in the event of multiple points of failure throughout the network - ideal for geographically dispersed networks and for business continuity reassurance. NDS technology can be scaled massively, using data partitioning and distribution to host virtually unlimited quantities of data. Transactions and resilience are scaled by replicating data in real time over multiple local and geographically distributed servers.

To make this scalability cost-effective, the hardware must be compact, inexpensive and non-proprietary, and the NDS software must be able to scale linearly with the hardware. In fact, the hardware necessary for high transaction rates with the aforementioned low latency is actually very small. A small network directory system can yield 10,000 transactions per second for a couple of million subscriber profiles on a handful of dual-core processor servers running Linux.
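As a back-of-the-envelope illustration of that sizing claim (the per-server throughput and average profile size below are assumed values for the example, not vendor benchmarks), the arithmetic runs roughly as follows:

```python
# Rough sizing arithmetic; per-server rate and profile size are assumptions.
subscribers = 2_000_000
target_tps = 10_000
assumed_tps_per_server = 2_500      # hypothetical sustained rate per Linux box
profile_size_kb = 4                 # hypothetical average profile size

servers_needed = -(-target_tps // assumed_tps_per_server)   # ceiling division
memory_needed_gb = subscribers * profile_size_kb / 1_048_576

print(f"servers for {target_tps} tps: {servers_needed}")
print(f"approx. in-memory data: {memory_needed_gb:.1f} GB")
```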

That is a big difference from relational systems, which rely on expensive and complex hardware to scale to high transaction rates and directory sizes. Relational systems often struggle to utilise more than a single server or operating system footprint to scale capacity, forcing much more expensive hardware into a network and increasing both opex and capex. Relational databases do have their place - they are better suited to batch-mode, complex billing- and CRM-type operations - but for voice, SMS and Internet services, distributed in-memory directories are better able to handle real-time use, serving data when and where it is needed.

Directories also help to simplify integration by supporting access through common IT technologies and protocols, such as LDAP, XML/SPML and SOAP. Using IT technologies and protocols broadens the pool of qualified professionals who can support such a system. This translates into substantial cost savings, as operators can implement open interfaces on off-the-shelf hardware and operating systems. It's important to keep network components adaptable to a wide range of equipment to bring down support and maintenance costs.
Furthermore, to realise all the benefits of an NDS it is critical that forethought be put into designing a common information model (CIM). This is the foundation for a useful, extensible data model that encourages data re-use while allowing applications to co-exist peacefully in a multi-application, single logical database environment. The CIM arranges subscriber, network and application data in several categories: subscriber identities, common shared global data, application-specific shared data, and private data.
Unfortunately, no standard model exists, as every operator has its own information model and its own methodology for migrating and consolidating applications. However, most MSOs can build a common data repository within their network using an evolutionary approach. Starting with a single application that fulfils an emerging need of the MSO (eg presence, IM), the CIM data model framework can be established. This provides the foundation upon which other application data may be integrated and built. From then on, new applications (eg WiFi, AAA or policy management) can build on the existing model. The key is to establish the proper foundation first and then add to it incrementally.
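As a purely illustrative sketch of the four CIM categories described above, a common subscriber profile might be organised along these lines; all field names and values are hypothetical.

```python
# Hedged sketch of the CIM categories as a simple Python data structure.
from dataclasses import dataclass, field

@dataclass
class SubscriberProfile:
    # Subscriber identities (MSISDN, SIP URI, account number, ...)
    identities: dict = field(default_factory=dict)
    # Common shared global data re-used by every application
    common: dict = field(default_factory=dict)
    # Application-specific data that is still shared between some applications
    app_shared: dict = field(default_factory=dict)
    # Private data owned by exactly one application
    private: dict = field(default_factory=dict)

profile = SubscriberProfile(
    identities={"msisdn": "447700900123", "sip": "sip:alice@example.net"},
    common={"status": "active", "language": "en"},
    app_shared={"presence": {"state": "available"}},
    private={"voicemail": {"pin_hash": "..."}},
)
print(profile.common["status"])
```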

The CIM allows cable and telco operators to share data in a single logical database, as it houses re-usable data that can serve new applications and services. As new applications are added and existing ones evolve, data models are analysed and changes are often required. Where data is part of the common model, changes can be applied to existing application data models using virtualisation techniques. So-called virtualisation is the ability to provide application clients with different views of the common data based on the identity of the accessing agent. This allows the common data model to be filtered, re-organised or enhanced to fit each individual application client's requirements, while keeping the core data model intact and un-entangled with any specific application.
As data is "virtualised", objects can be viewed according to different characteristics: for example, an application may see only the attributes specific to it, or object distinguished names may be presented differently depending on the accessing application or user. That means data is implemented once and managed as one instance, yet it can be viewed in different ways over and over again.
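A minimal sketch of that idea, assuming hypothetical application names and attributes, might look like this: one stored instance of the common entry, with a per-client view filtered from it.

```python
# Illustrative-only "virtualised" views: one stored instance of the data,
# filtered per accessing application. Names and attributes are hypothetical.
COMMON_ENTRY = {
    "msisdn": "447700900123",
    "status": "active",
    "presence": "available",
    "voicemail_pin_hash": "...",
}

VIEW_POLICY = {
    "presence_server": {"msisdn", "presence"},
    "billing": {"msisdn", "status"},
}

def view_for(client: str) -> dict:
    """Return the subset of the common entry that `client` is allowed to see."""
    allowed = VIEW_POLICY.get(client, set())
    return {k: v for k, v in COMMON_ENTRY.items() if k in allowed}

print(view_for("presence_server"))   # {'msisdn': ..., 'presence': 'available'}
print(view_for("billing"))           # {'msisdn': ..., 'status': 'active'}
```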

As the CIM evolves, cable companies will need to find the synergies that let applications share common data. Once objects are shared, the process continues: schemas are designed for new applications and merged into the common model.

As operators consolidate their subscriber data, the platform they choose must offer a seamless migration path to supporting IMS data via an HSS. This prevents an operator from deploying yet another silo if and when it decides to deploy IMS. An HSS can also source its data from the NDS, storing it as part of the CIM, thereby allowing IMS applications as well as non-IMS applications to source their data from the NDS. This gives non-IMS and IMS applications a way to share common data and services across different access planes. An HSS essentially sits on top of the NDS, continuing the evolution of the consolidation process as it enhances the CIM with an operator's IMS subscribers, the characteristics of their connected devices, and their preferences for those services.

For cable operators to guard their markets against hungry telcos charging toward IPTV, Internet service, VoIP, and other traditionally 'cable' services, they must start planning how to streamline their networks for faster service rollout. To achieve a quad play set of offerings, consolidating subscriber data into unified views of customer profiles across multiple services is essential.

Bill Bondy is CTO Americas for Apertio, and can be contacted via: bill.bondy@apertio.com

 

VoIP might well be a boon to users, but it can also be a network manager's headache.  Martin Anwyll offers some pain-killing suggestions

It's been a long time coming but VoIP deployments are now a common feature across the business spectrum.  It's become the network manager's responsibility to ensure performance and availability, monitor security threats, and ensure call quality. 
Handling the voice service has not always been the network manager's concern.  In larger enterprises, particularly, voice telephony was a separate though allied discipline. With VoIP, however, the divide between voice and data has become blurred and now all network managers are expected to deal with voice - quality, traffic and availability - plus the underlying hardware and application issues.

It's no easy task replicating the quality and reliability of a circuit switched network.  As the number of users and services can only increase over time, network managers are looking at a serious challenge on every level.

How do network managers ensure they are providing the level of service required by users?
It's voice - but not as we've known it so far.  Of all the technology-driven services in the workplace, voice is the one that users have come to take totally for granted.  Every now and then call quality may be impaired, but extended downtime - or any downtime at all - is a rare occurrence.

However much money the business stands to save using VoIP, it counts for nothing if productivity is lost or the relationship with customers or suppliers is undermined.  Suffice to say anything less than 99.999 per cent reliability is unacceptable.  
The reality is that VoIP is a complex application.  It runs on a generally complex network delivering a set of equally complex services.  It is a packet-switched application, which means it is impaired by traffic congestion, traffic spikes, and all those other network events that the voice service has so far avoided.

There are a huge number of variables that can impact on VoIP service performance outside the application itself - including servers, the supporting network, system components, middleware, operating systems as well as other associated applications.
How do network managers identify problems automatically, avoiding any situation in which they're seen to be reacting to user complaints, or obvious degradations in service such as echo, delays or distortion?  The answer is deceptively simple: they ask their supplier for a comprehensive VoIP management solution that covers every aspect of the VoIP infrastructure across the entire lifecycle.  The solution should view and manage the VoIP network as a business service, and demonstrate a practical understanding of its interactions with other services and applications.

A solution that allows access to information on all relevant network components and applications from a single console is more efficient and makes life a lot easier.
At a very basic level, a VoIP network management solution should combine voice-specific monitoring tools that detect jitter, packet loss, delay and call quality, with network management facilities that give an insight into the devices, port configurations and network availability.

The very fluid nature of VoIP - in which conditions can change second by second - dictates that management functions should be operational constantly and in real time.  An effective VoIP management solution will drill deep into the network to pinpoint the root of a problem.   The best tools will monitor jitter, packet loss, throughput, volume issues, delay, and other quality issues both from within the network and external applications such as voicemail and call centres.
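As an example of how such monitoring data can be turned into a call-quality score, the sketch below applies a simplified version of the ITU-T G.107 E-model to measured one-way delay and packet loss. The codec impairment constants are illustrative defaults, and the calculation is not drawn from any particular management product.

```python
# Simplified E-model sketch (after ITU-T G.107): estimate an R-factor and MOS
# from measured one-way delay and packet loss. Ie/Bpl are illustrative codec
# defaults rather than values from any specific monitoring tool.
def estimate_mos(one_way_delay_ms: float, packet_loss_pct: float,
                 ie: float = 0.0, bpl: float = 4.3) -> float:
    r = 93.2
    # Delay impairment (simplified, assuming echo is well controlled)
    r -= 0.024 * one_way_delay_ms
    if one_way_delay_ms > 177.3:
        r -= 0.11 * (one_way_delay_ms - 177.3)
    # Equipment impairment including packet loss
    r -= ie + (95.0 - ie) * packet_loss_pct / (packet_loss_pct + bpl)
    # Map the R-factor to a Mean Opinion Score
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

# 80 ms one-way delay with 1 per cent loss yields a MOS of roughly 3.7
print(round(estimate_mos(one_way_delay_ms=80, packet_loss_pct=1.0), 2))
```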

The ability to monitor the VoIP infrastructure in real time is, arguably, the defining factor in delivering a reliable, high audio quality VoIP service.  Continuous monitoring is not only important in identifying and automatically resolving potential problems; it's the vital first stage in the planning and optimisation of this critical process.
A VoIP management solution should provide a range of monitoring facilities - coupled with automated corrective actions - including call quality, call success rates and fault and performance management.

Any thoughts of adapting existing management tools should be dismissed right away.  Standard network management tools don't fit the bill, as they only pinpoint faults at fixed points along the network path where links between devices fail.  This approach won't work for VoIP, which is a dynamic, packet-switched application in which no two paths will ever be the same.

There is already a huge contingent of products and ‘solutions' on the market.  However, there is a growing consensus that customers should opt for a comprehensive, integrated solution that encompasses the entire VoIP environment and lifecycle.  A truly effective solution should seamlessly integrate the best available technologies.
Security is the one area in which most VoIP management solution providers have yet to offer a credible response.  Traditional firewalls don't protect VoIP calls, as voice packets must be encrypted yet traverse the firewall without undue latency.   Any endpoint with an IP address is vulnerable to unauthorised calls, spammers, information theft and other malicious activity by hackers, and to DoS (Denial of Service) attacks that can, at best, adversely impact call quality. In a worst-case scenario, the entire network can be at risk during a VoIP security breach.

Security, therefore, should be a priority in the buying equation.  To be effective from a security perspective, the VoIP network management system must provide an automated security layer that monitors the entire VoIP environment in real time to increase protection levels and ensure layered defences. It should be capable of correlating security events, alerting on security breaches, and performing analysis and forensics - all in real time.
Our definition of a solution goes beyond excellent products seamlessly integrated into a scalable, flexible whole.   The framework on which that solution is based is fundamental in enabling users to control rather than simply manage their VoIP services and environment.  Once the framework is in place, network managers can add the flexible tools, management modules and customisable scripts they need - as and when they are needed - to ensure a high quality of experience (QoE) for users.

We're all used to talking about Quality of Service (QoS), which is a term that relates to network optimisation.  QoS is about ensuring network elements apply consistent treatment to traffic flows as they traverse the network.
Quality of Experience (QoE) is the crucial end result of QoS.  QoE refers to the user's perception of quality, which, as we all know, is ultimately the measure by which the success of a VoIP service will be defined.

Martin Anwyll is Product Line Specialist, VoIP Solutions (EMEA), Attachmate

Business and residential subscribers are constantly demanding more bandwidth. Historically the majority of this demand has been in the downstream direction, with users accessing content from the Internet. Now, though, users are generating copious amounts of content of their own explains Piyush Sevalia

Accompanying the trend in content creation are changes in network utilization and new applications that are now demanding greater bandwidth upstream - forcing carriers to swim against competitive currents in this same direction.  Video serves as a good example.  To compete with cable and satellite providers, many carriers are now offering Internet protocol television (IPTV), video on demand (VoD) and digital video recording (DVR), as well as additional value-added services that require greater upstream bandwidth. Plus, new bandwidth-hungry applications are certain to increase demand in the foreseeable future.
The leading technology for delivering higher bi-directional bandwidth is Very-high-bit-rate Digital Subscriber Line (VDSL), which has been standardised by the ITU-T as the VDSL2 standard. VDSL2 was designed to take full advantage of a carrier's broadband infrastructure with its increasing fibre optic capacity to the node, curb or building/basement. VDSL2 can deliver 100 Mbps symmetrical broadband bandwidth, which puts carrier services on par with LAN switching to the desktop.

Carriers around the world are experiencing tremendous success with their initial VDSL service offerings. As a result, most of these carriers are planning to add new or enhanced services to generate additional revenue streams, potentially as an upgrade to existing (and increasingly inadequate) legacy services.
Users, carriers and equipment vendors alike have a long history of underestimating the need for speed in order to keep costs under control. But the inevitable "forklift" upgrades are ultimately disruptive and costly.  
Consider what has occurred in the local loop over the past couple of decades.  Data communications in the public switched telephone network (PSTN) began with 300 baud modems, which were ultimately replaced with ones supporting 1200 bps, then 2400, then 9600.  A major breakthrough came with the advent of the 14.4 Kbps modem and then the really "fast" 28.8K modem. After a tweak to 33.6 Kbps, the modem reached its maximum potential with 56K technology.  Even here, the initial V.90 standard was soon followed by V.92, which improved upstream rates.
Despite these advances, users wanted more.  Integrated services digital network (ISDN) offered great hope for the industry, but the advent of digital subscriber line (DSL) technology began to erode ISDN market share. Some forms of DSL even began to supplant traditional T/E-carrier services, especially T1 and E1. Asymmetric DSL (ADSL) became the most popular rendition for consumers.  But even ADSL has seen its fair share of changes - from ADSL to ADSL2 to ADSL2+ - all of which required reinvestment in infrastructure.
Current access bandwidth compromises involve the upstream direction. The various versions of ADSL all deliver asymmetrical bandwidth and this lack of adequate upstream throughput is now becoming obvious in the marketplace.
Business subscribers were the first to recognise ADSL's limitations. The reason businesses need bandwidth symmetry is fairly straightforward: content. Organisations consume and generate a significant amount of content, requiring adequate bandwidth in both directions. Where fibre optic cabling is available to the premises, businesses often turn to DS-3 or Fractional DS-3 services. And where only copper is available, carriers have found it necessary to multiplex T/E1 or DSL services to meet the bandwidth demand. 
Residential subscribers now feel the same pain. Although a vast amount of consumer-oriented content continues to reside outside the home on the Internet, the balance is changing as consumers generate their own content. Residential applications, detailed below, will require greater bandwidth in the upstream direction:  
Home Networks: The home network is starting to resemble that of a small business, with multiple client PCs and a shared server. Multiple household members all accessing the Internet require higher bandwidth in both directions to ensure that the Internet experience remains acceptable.
Telecommuting:  There has been a dramatic increase in the number of people telecommuting full- or part-time. Creating an "office-like" work environment at home requires adequate symmetrical bandwidth for uploading content such as presentations and spreadsheets.
Peer-to-Peer (P2P) Applications:  As traffic patterns go, P2P is driving consumer bandwidth usage worldwide, and is also causing bottlenecks. Asymmetric bandwidth is simply insufficient for many of these needs as the PC is both client and server. A University of Washington study concluded that P2P bandwidth dominates Internet bandwidth and contributes to its "peaky" traffic patterns. According to the study, the 24 per cent of Internet users who use P2P consume over 90 per cent of the bandwidth. The upstream bandwidth also is much higher than downstream bandwidth because users typically share audio and video files, which are much larger in size than data files.
Videoconferencing:  While people have become accustomed to digital quality, most videoconferencing offerings have not been ready for consumer use because of poor image and sound quality. Adequate bi-directional bandwidth is the only remaining hurdle for carriers to achieve high definition television (HDTV) quality videoconferencing.
Multimedia Messaging (MMS):  MMS and other forms of instant messaging are standard applications today, but in the future they must become more robust to support video.  Inexpensive built-in or add-on cameras allow users to send video-mail and video messages, or conduct a video chat session quite easily if they have sufficient bandwidth.
Video Monitoring/Surveillance:  Affordable webcams enable users to "check up" on things from remote locations. In order to have decent video quality, the upstream data rate must be capable of being dynamically partitioned and sufficient to support the application.
Content Creation and Publishing:  Blogs and video blogs are gaining in popularity, as consumers become Internet publishers. The trend toward "rich media" and full multimedia productions is resulting in an ever-increasing demand for upstream bandwidth.
Interactive Gaming:  Home PCs owe much of their popularity to games and "edutainment" applications. As gamers seek new competition from around the world, they need additional bandwidth to ensure an uninterrupted gaming experience.
Remote Desktop Control:  Many applications benefit from the ability to remotely control one PC from another. The complexity of PCs also now makes it beneficial for the Help Desk to have such access.  Without bi-directional broadband bandwidth, this capability can be painfully slow and inefficient.

These new and emerging bandwidth-intensive applications, along with competitive pressures from cable and satellite providers, are forcing carriers to rethink their strategies. Newer technologies, especially hybrid fiber coax (HFC) and broadband wireless, threaten to undermine the inherent strategic advantage of a carrier's copper/fibre infrastructure. What carriers need is a robust bi-directional broadband solution that can be provisioned profitably for business and residential subscribers alike, and stand the test of time.
Most carriers are driving fibre deeper into their networks because new or enhanced, revenue-generating services require copious amounts of bandwidth. Fibre offers virtually unlimited bandwidth potential and, as a result, is a rock-solid investment that will endure the test of time.  But laying fibre to every single subscriber is difficult to justify financially - even in the face of increasing competition. Complicating factors of a fibre deployment include trenching driveways, drilling holes in walls, and setting up two-hour appointment windows and then keeping them - all of which inconvenience the consumer and increase time to revenue.
One DSL technology was designed to enable carriers to take full advantage of a fibre build-out:  VDSL2.
VDSL2 technology delivers fibre-like bi-directional bandwidth over ordinary unshielded twisted pair wiring.   Of all the DSL technologies available, VDSL2 is simply the fastest, delivering up to 100 Mbps in both the downstream and upstream directions.  
This data rate (100 Mbps) is significant.  Switched 100 Mbps is the predominant choice today for desktop connectivity in the LAN. The power of delivering the same 100 Mbps service in the access network represents a major breakthrough. For this reason, it will be a long time until VDSL2's potential bandwidth capabilities are exhausted.  

VDSL2 can deliver ADSL2+-like connectivity to all subscribers and affords its highest level of performance to those subscribers closest to the carrier's central office (CO) or remote terminal (RT).  With this robust rate/reach profile, carriers have greater flexibility to offer full broadband interactive services to offices and homes closer to the CO/RT and basic Internet connectivity to consumers at longer distances. VDSL2 solutions are available in full-featured DSL access multiplexers (DSLAMs) or as remote gateways/concentrators that can be deployed either in the CO or RT. The customer premises equipment (CPE) is typically a single-port gateway or "modem" incorporating a DSL transceiver.

Carriers around the world are successfully deploying VDSL, among them AT&T, Verizon, NTT and Belgacom.
AT&T is just one service provider that is capitalising on existing copper infrastructure. For its U-Verse deployment, AT&T is building out fibre to the node (FTTN) and using VDSL2 to turbo-charge the existing copper loops entering homes. AT&T estimates that this architecture costs only about $360 per user to deploy - roughly a fifth of the cost of Verizon's all-fibre build.

Verizon and NTT, too, are using VDSL2 in a hybrid approach to deliver broadband services to multiple dwelling units (MDUs), such as apartment complexes or condominiums. In this scenario, they are using VDSL2 as the last mile technology because deploying fibre in restricted riser space is incredibly challenging.
NTT in Japan: NTT began deploying VDSL-DMT in 2002 with an initial asymmetric offering of 50 Mbps downstream and 11 Mbps upstream. Within a year or two, NTT rolled out two enhanced platforms:  50/30 Mbps and 70/30 Mbps downstream/upstream.  In 2004, NTT added a 100/50 Mbps service. NTT subsequently deployed platforms that are capable of offering both 100/50 Mbps and 100/100 Mbps services.
AT&T in the US: AT&T is aggressively rolling out its U-Verse service based on VDSL technology in the last mile. U-Verse is part of the carrier's $4 billion initiative to expand fibre optics deeper into residential neighbourhoods to deliver IPTV, voice and data broadband services.
Verizon in the US: Verizon is using VDSL2 to deliver high-performance, copper-based broadband services to multiple dwelling units (MDUs), such as apartment complexes or condominiums.
Belgacom in Europe: The Broadway project extends fibre infrastructure to street cabinets at over 14,000 nodes throughout Belgium. VDSL2 is a key technology in the Broadway project and enables Belgacom to offer revenue-enhancing triple play services, which include multiple simultaneous channels of high definition Internet protocol television (IPTV). Belgacom expects that its upgraded network will pass 60 per cent of Belgian households by spring 2008, enabling it to extend its market leadership by being the first carrier in Europe to announce high definition television (HDTV) service over VDSL2.

The lack of adequate upstream bandwidth has begun to place limitations on the types of services carriers can offer their subscribers. Fortunately, carriers still enjoy an inherent advantage over the competition:  a basic infrastructure capable of cost-effectively delivering bi-directional broadband.  

With VDSL2 technology, carriers have a more versatile and universal way to offer a wide assortment of new or enhanced - and quite lucrative - services.

A variety of VDSL2 DSLAMs, concentrators and gateways have been deployed in pilots or rolled out in full production networks. The day will eventually come when even 100 Mbps upstream and downstream is insufficient for many applications. But until then, carriers have a long and lucrative opportunity with VDSL and VDSL2.

Piyush Sevalia is Vice President of Marketing, Access Products Group, Ikanos Communications

Growth drives long-term value, but what drives growth? The answer could make the difference between thriving and barely surviving in Western Europe's telecom industry. Asmus Komm and Sven Smit look at telecom growth opportunities, and explore eight megatrends and microtrends that can open new revenue pools to industry players

McKinsey's research, detailed in ‘The granularity of growth' by Mehrdad Baghai, Sven Smit and Patrick Viguerie, clearly illustrates how growth plays a crucial role in long-term profitability and survival.

Companies that grow at above-GDP rates are six times less likely, on average, to go bankrupt or be acquired. Furthermore, company growth that exceeds GDP expansion corresponds to a 28 per cent greater long-term total return to shareholders (TRS). Even ‘cash cows' deliver inferior long-term TRS on average, and are more likely to ‘die', if their growth is slow.

Unfortunately, analysts expect most large Western European telecom players to grow at rates below GDP, planting them firmly in the high-death rate category. This represents a dramatic turnaround from the industry's pre-2001 performance, when most of Western Europe's telco players enjoyed strong double-digit growth, driven primarily by the expansion of the mobile and broadband markets. Even during the transitional period from 2001 to 2005, most players grew in the two to four per cent annual range, largely matching GDP levels.

Telco incumbents that focus primarily on their core (home) markets will find it difficult to achieve growth rates that align with or exceed GDP. While nominal GDP in Western Europe is forecast to grow at 3.7 per cent per year through 2009, the telco core market, made up of fixed and mobile voice and basic data, will only grow at about 1.6 per cent annually. In addition, the average incumbent player still retains a high share of slow- or no-growth fixed voice revenues, which could limit its core telecom portfolio to an annual growth rate of only about one per cent.

In line with this general picture, capital markets do not predict much growth for Western Europe's telco incumbents, since current performance explains most of their entity value. Furthermore, the share that reflects expected performance improvement continues to decline, and now represents less than 10 per cent of total value.

So should telcos in Western Europe forget about growth and focus instead on the bottom line, returning dividends to shareholders or pursuing share buy-backs?  The answer is: it depends. As for many utilities, returning high dividends and pursuing share buy-backs is a perfectly viable way to create value. On the other hand, expansive "growers" - for example, those pursuing emerging markets - have succeeded in finding valuable growth.  For those that do consider growth, we have postulated eight trends shaping the future telecom market that can support growth strategies.

Eight trends shape future telecom markets
Our research indicates that eight telco megatrends will shape the Western European telecom market through the end of the decade. These trends will threaten most incumbent business models, but will also give rise to substantial new growth pools. We will examine each of the eight trends in some detail.
Trend I: Convergence.  Convergence has been and remains much talked about, both in terms of fixed and mobile convergence and of content/infrastructure convergence.  Some evidence suggests that fixed/mobile convergence could accelerate as technical and usage barriers disappear. Driven by Internet protocol (IP) proliferation, data and voice traffic will converge. While a huge potential for new products and business models will emerge, few additional revenue streams will be created directly for the "infrastructure business". Telco players are likely to be challenged to monetise the additional customer value resident in newly converged offers, as convergence can lead to more competition.
Trend II: The Commoditisation of Traffic. Competitive pressure on usage-based voice and data pricing will accelerate the shift towards flat rate type offers.  Incumbents will increasingly compete as access "pipes", and will likely find themselves unable to fully rebalance their declining traditional traffic revenues with higher access revenues unless markets consolidate a lot more than today.
Trend III: Broadband Proliferation. Spurred by continuing price declines, fixed-line broadband penetration will continue to rise, but the additional revenue potential will be limited without value-added services and content. Growing broadband penetration and usage is an opportunity in nomadic and mobile data applications, leading to more revenues from WiFi hot spots and 3G data networks despite likely pressure on price levels.
Trend IV: Value-Added Services (VAS) and Content-Driven Traffic. Fixed and mobile broadband networks will enable a multitude of value-added services and new forms of content. These services represent substantial revenue potential beyond the traditional telecom business - examples include eHealth applications, gaming and gambling, and telematics. For telcos, the challenge - as with the emergence of the Internet - is to capture revenues beyond a higher price for bandwidth. In that sense, VAS is a diversification opportunity for telcos, although there is limited proof of success to date.
Trend V: Reshaping the Value Chain. Both regulation and technological progress increasingly enable attackers to break up the existing integrated incumbent value chain and compete on their favoured parts (eg, city/local access or call origination/termination via the Internet), thereby putting pressure on the most valuable pockets.  Investing in attackers abroad is the growth opportunity for telcos, as at home this trend challenges revenues.
Trend VI: Consolidation in Western Europe. Incumbents face limited organic growth opportunities in their core home markets. Large players will increasingly seek the opportunity to grow inorganically and to form global players by acquiring small and medium sized players in other markets. The development of the US market is a case in point.
Trend VII: Regulatory Focus on Wholesale Favouring Attackers. Regulators will continue to shift focus from retail to wholesale prices in order to foster competition. With mobile penetration approaching saturation levels, mobile operators with significant market power could face the same rigid regulatory pressure as fixed-line incumbents.
Trend VIII: Growth in Emerging Markets. Rapid economic growth in emerging markets (eg, in Eastern Europe, the Middle East and Asia), driven by mobile voice and broadband, will remain a major revenue pool for telecom players following a geographical diversification strategy. The challenge is to find assets in this space that are not yet fully valued.

From megatrends to growth pools
Based on the above eight telco megatrends, we identified over 20 growth pools with a collective growth potential of USD 63 billion from 2006 to 2009. The identified growth pools will grow from USD 121 billion in 2006 to USD 184 billion in 2009 (Figure 3).
Players should prioritise and select a growth portfolio based on these revenue pools.  To be effective, they should make this assessment based on the specific profile, positioning, and capabilities of a given company, and consider it in terms of three different dimensions:

Accessibility
Players need to assess growth opportunities in terms of barriers to entry and familiarity from their own individual perspective.  Barriers to entry include legal, regulatory or technological aspects, while familiarity reflects the extent to which a player already operates in or near a particular market segment. The most favourable growth pools combine a sufficient level of familiarity with some substantial barriers to entry that limit competitive pressure.
Profitability

Growth opportunities differ widely in their size, expected EBITDA margins and required capex.  By ranking growth pools along these dimensions, players can identify growth ‘nuggets' with high margins at relatively low capex and, more generally, select the most favourable trade-offs between capex demand and likely operating margins.

Timing
The development of growth pools typically follows an S-curve characterised by a moderate start, a rapid uptake phase, and a moderate maturation phase. Incumbents should ideally move when the pool enters the uptake phase in order to leverage scale-up capabilities and to avoid over-paying.

These three dimensions should be applied as filters for potential growth initiatives to identify the most promising portfolio of growth pools based on a player's needs, capabilities, and assets.

Several industry players have seen the ‘writing on the wall' and are already moving into selected growth pools.

If players aggressively leverage the identified growth pools in their core businesses and especially in adjacent markets, total annual growth rates of 8 to 9 per cent appear within reach.

Total revenues can be fuelled by three sources: inorganic growth (ie, acquisitions), market growth, and organic share gains. Inorganic growth and organic share gains have historically contributed three to four per cent and one per cent of growth, respectively, and will likely remain at that level for the best-performing players. Therefore, in order to achieve growth rates significantly above GDP, players need to tap market growth pools, and our research indicates that determined moves into a well-selected portfolio of growth opportunities can deliver an additional four per cent, for an overall growth rate of eight to nine per cent.
Reaching these growth levels will not be easy, as they often represent strategic shifts for the companies involved. Players will have to weigh these possibilities against the alternative of dividends and share buy-backs, and against their own capabilities.

Asmus Komm is a principal and Sven Smit is a director in McKinsey & Company's Hamburg and Amsterdam offices, respectively. Sven Smit is also co-author of ‘The Granularity of Growth: Making choices that drive enduring company performance' (with Patrick Viguerie and Mehrdad Baghai) published by Marshall Cavendish and Cyan Books.

Successful data migration is vital to the effective transformation of telcos into lean and agile competitors in the communications marketplace.  Celona Technologies' Charles Andrews, CEO and MD, and Tony Sceales, CTO, talk to Lynd Morley about overcoming companies' fear of failure, and the best way to achieve a winning migration

A data migration project that works, comes in on time, within budget, and without causing major disruption to the business may sound like an extraordinary piece of wishful thinking to those experienced in the pitfalls of the exercise (after all, Bloor Research puts the failure or overrun figure at 80 per cent among Forbes 2000 companies), but it is exactly what Celona Technologies CEO, Charles Andrews, stresses can now be delivered.

Andrews believes that at the heart of any successful data migration project is the clear recognition that migration is a business issue.  "Keeping the business aware and in control of the migration is the first and biggest challenge - but absolutely critical to getting it right," he says. "Simply put, most of the decisions that need to be made during the process require business rather than technical knowledge.  Sure, the analysts understand about data formats and interface requirements, but they can often only guess at what the data they are processing means to the business."

Successful data migration must surely be a central plank of any telco's plans to tackle the business transformation now so essential to survival in the highly competitive communications market.  Along with innovation and business process optimisation, transformation is certainly the dominant theme in the industry at the moment, and Andrews agrees that innovation is seen as a key differentiator for many businesses.
"Innovation is not a fad - it is here to stay," he says.  "It is a mantra that drives businesses, and will continue to move them forward.

"In the 1990s businesses became adept at sales and marketing, branding, re-branding and growth through merger and acquisition. With the support of the Internet, businesses opened up new global markets and the barriers to setting up a business lowered. This provided a host of new opportunities, but it also introduced a range of new threats - not least that increased numbers of competitors made differentiation harder, and premiums for particular products and services more difficult to maintain.

"Today, each innovation is scrutinised, copied and the advantage negated that much quicker - thanks to the power of the Internet-supported global market. GE's Jeffrey Immelt, for example, explains that now ‘constant re-invention is the central necessity...we're all just a moment away from commodity hell'.

"The ability to respond to change, to continually innovate and to get product to market quickly and reliably are the new hallmarks of business success. Or, in Rupert Murdoch's words: ‘big will not beat small anymore. It will be the fast beating the slow'."
Andrews, who before joining Celona at the beginning of this year had worked with both IBM and Sun Microsystems, clearly has his eye firmly on the business issues, but is more than well grounded in IT.  He stresses that IT is a central player in a business' search for both innovation and differentiation.  "IT's critical role in supporting an organisation's innovation fitness was underlined in a recent survey conducted by Capgemini Consulting. The survey revealed that two-thirds of CIOs believe that IT is critical to business innovation, but only 25 per cent feel their IT function is actually driving business innovation. Capgemini's Eric Monnoyer, BIS Global Leader, comments that the requirement to balance operation and innovation is ‘a constant challenge' for CIOs, although the survey indicates that 60 per cent of CIOs believe it's possible to do both," Andrews says.

"Seemingly it's the old, old problem of how to have your cake and eat it," he continues. "CIOs are being asked to ensure that IT is continuing to function efficiently, to comply with legislation and regulation, and to be secure against an ever-wider range of threats. They're also expected to perform the usual upgrades, renewals and maintenance on legacy infrastructures, and to ‘manage' (as in maintain or reduce) IT budgets. But as if doing all of this were not enough, IT is now required to ‘innovate' to support businesses that are being fundamentally re-engineered for the new economy. All of which has far reaching effects on IT infrastructures, budgets and goals.

"So why aren't more IT departments supporting business innovation effectively? Well to some extent we have already answered this question. Many CIOs and IT departments are busy just keeping IT running and measuring performance against vital key performance indicators. Often IT is seen as a cost centre that needs to be measured, optimised and controlled, rather than as the powerhouse of business innovation. And CIOs may have little time or budget to innovate, due to the fact that such a large chunk of existing IT budgets, resources and staff are committed simply to keeping legacy infrastructure running. The scale of this problem was revealed in a recent white paper by Erudine's Dr Toby Sucharov and Philip Rice who noted that: ‘The cost of legacy systems [from industry polls] suggest that as much as sixty to ninety per cent of IT budget is used for legacy system operation and maintenance'."

Andrews believes that IT underpins the business process, whether it is the customer relationship management systems, the billing system, the provisioning system or whatever. IT can either be an enabler or an inhibitor. Frequently, he explains, different people in the business see the same IT system as both.

"This is a tough place for a CIO to be. If you want to re-align your IT to business needs there are two options: to tactically manage the issue (for example, by extending systems or by partial replacement of infrastructure) or to strategically redesign your infrastructure. While the second approach will yield the most benefits in the long run, in practice the first approach is taken by most companies. The migration of mission-critical applications and their associated data have a risk, degree of difficulty, and such a poor track record of being delivered on time or to budget, that businesses shy away from this approach. The compounded effect of using a tactical approach to solve legacy IT problems over a number of years is the unbelievable complexity that is now responsible for sucking IT budgets dry.
"We now have a seemingly intractable ‘chicken or egg' conundrum of innovation versus operation," Andrews notes, but stresses that a solution to this problem is offered by the new generation of application migration technology that is coming to market.
"So-called ‘third generation' migration solutions are very different from preceding generations of migration technology. Notably, they are highly adept at dealing with the thorny problem of business logic held in legacy systems and are flexible enough to enable ‘business-driven' migrations. CIOs that have employed this technology have achieved business-driven application migration and consolidation projects on time and to budget. They are benefiting both from a lower legacy infrastructure cost and the ability to offer new products and services to their customers - supporting innovation and opening up new revenue streams.

"Take early adopter BT, for example, who wanted to migrate the legacy billing system that supported its Featurenet customers to Convergys's Geneva system, but who also desired a ‘completely seamless transition' to the new system. It achieved a successful migration in just six months (a full 13 months ahead of schedule) using the Evolve tool from Celona Technologies. BT Retail has since credited the successful project with creating more than ?148 million in new revenues, thanks to its ability to launch innovative new services to Featurenet customers.

"Third-generation migration technology could be the CIO's best friend - the key to unlocking the budget and resources trapped in legacy systems, by enabling effective, low-risk application migration and consolidation. And, by significantly reducing both the risk and cost of consolidating and renewing legacy infrastructure, it allows more resources and effort to be targeted at innovation."

Keenly aware of the trends now fashioning service delivery in the telecoms sector, Andrews highlights the importance of successful data migration to the effectiveness of the new delivery platforms.

"Two main trends are clearly emerging," he says. "The first is the standardisation of components in the SDP (Service Delivery Platform), as opposed to bespoke development, and the second is that the key adoption drivers are now commercial rather than technological. 

"Business is demanding that technology should not inhibit change. SDPs promise vital competitive advantage, enabling service providers to roll out new services, faster and cheaper than before. However, realising all the benefits offered by SDPs also requires service providers to have the key application data in the right place. The move to a standardised set of applications, with more re-use of functionality, means that the application data will need to be moved into the new applications.

"The traditional way of moving this data involves either people-based techniques or primitive data migration using extract-transform-load (ETL) techniques," Andrews explains. "Unfortunately, the downside of these approaches - such as the inability to scale or to respond to the changing business requirements, as well as high cost - are diametrically opposed to the reasons for implementing an SDP.  SDPs put the business in control rather than the technology, which means that data must be where and when the business needs it to be, rather than something that the technology controls.

"The key to delivering the vital benefits provided by SDPs is, therefore, the ability to move critical data on time, without loss of service and without spiralling costs and budgets. A survey we conducted amongst IT management revealed that 60 per cent of respondents thought a principal cause of failed migrations is that data complexity and cleanliness are poorly understood; 36 per cent said that they did not think they would be able to get some or all of their data across.

"These challenges cannot be ignored: data migration needs to move into the era of SDPs and SOA (Service Oriented Architecture) - with re-usable standard components, and with the business directing the use of the technology."

Celona, Andrews believes, can answer both the concerns of the surveyed IT managers and the vital needs of service delivery. "Data migration is our core competence, and we have gone back to basics, beginning with a definition of the types of migration. There are five possible approaches that can be applied to a migration: 1) Don't migrate; 2) Event based; 3) Incremental; 4) Bulk-load; 5) Big-bang. No single approach fits every project's requirements: any programme of transformation must ensure that a range of approaches can be delivered. Celona is able to deliver each approach and can adapt and change between approaches depending on requirements.

"Even a single project may move through a number of approaches over time or even combine approaches, in parallel," he continues. "For example, to get a new customer service up and running, without delay, an enterprise might decide to go with a ‘Don't Migrate' approach initially. Some information may be synchronised with the existing systems, eg revenues written back to the old accounts receivable system. Following the launch and trial, with new customers, existing customers who take up the new service are migrated with their old service information on an event-by-event basis. After the new systems have stabilised, then incremental or even bulk-load strategies might be added, whilst continuing to migrate individual customers as each new service order is received."
Conscious of the uphill battle Celona might well have in persuading companies that data migration need not be the agonising (and ultimately unsuccessful) undertaking many imagine, the company has refined what it describes as the ‘Four golden rules of data migration' - describing the common characteristics shared by proven, successful data migrations. 

Tony Sceales, Celona's CTO, explains: "The first two rules stress that data migration is a business issue, and that business knows best.  Putting business in the driving seat means that before we ask ‘how do we migrate data?' we first answer a series of important related questions that help to frame and scope the project.  These are: ‘Why are we migrating data?'; ‘What data should be migrated?'; and ‘When should it be migrated?'.  These questions cannot be answered by technicians, but only by business managers."
Sceales goes on to stress that ensuring the business makes the decisions and drives the project also frees up IT to do what it does best - the technical aspects of moving the data.
"At the same time," he adds, "the second rule stresses that business drivers, not technical ones should take precedence.  It is critically important that business goals should define the solution and approach selected, and not the other way around.  To be successful, the chief business stakeholders must not only define their requirements, but must also take responsibility for driving the project."

The third ‘golden rule' states that no one needs, wants or will pay for perfect data.  Sceales explains: "While enhancing data quality is a worthwhile goal, it's really important not to go off on a tangent mid-project in the quest for perfect data quality.  Data owners and users need to determine the level of quality they require at the start of a project so that the technologists have an appropriate goal to aim at."

The fourth rule also addresses data quality, noting that ‘if you can't count it, it doesn't count'.  Again, Sceales explains: "The challenge is how to measure data quality in order to assess the state of your legacy data and determine the level of quality your business users require.  To make matters worse, data quality is not static, but erodes and improves over time.  It's really important that the measures used make sense to business users and not just to technologists.  This allows deliverables to be measured, gap analyses to be performed, and ongoing data quality to be monitored and improved."
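As a hypothetical illustration of such countable measures, completeness and validity rates for a legacy data set could be computed along these lines; the field names and the validity rule are invented for the example.

```python
# Illustrative data-quality metrics: completeness and validity of one field.
import re

records = [
    {"account": "A1001", "postcode": "SW1A 1AA", "msisdn": "447700900123"},
    {"account": "A1002", "postcode": "", "msisdn": "not-a-number"},
]

POSTCODE_RE = re.compile(r"^[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}$")

def completeness(field: str) -> float:
    """Share of records where `field` is populated at all."""
    filled = sum(1 for r in records if r.get(field))
    return filled / len(records)

def postcode_validity() -> float:
    """Share of records whose postcode matches the expected pattern."""
    valid = sum(1 for r in records if POSTCODE_RE.match(r.get("postcode", "")))
    return valid / len(records)

print(f"postcode completeness: {completeness('postcode'):.0%}")
print(f"postcode validity:     {postcode_validity():.0%}")
```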

Celona is a small company, very much at the forefront of solving a big problem - a position that Charles Andrews is clearly very proud of.  "Data migration," he says, "is all about getting the data in the right place at the right time, and we are solely focused on this.
"We have built a platform, a method and experience/best practice which can deliver the promise by managing the detail and allowing the business to decide on the speed of the migration.  We are calling it progressive migration - it could be called migrating at the speed that the business needs to be able to drive innovation and new products and services into the market."

What's in a name and why the need for a Common Language? Well, the difference between profit and loss, for a start,  says Allen Seidman

The educated layman might wonder what semantics - essentially the study of the meaning of words - has to do with our industry's business of moving speech, content and data around the world in ever faster and cheaper ways.

The answer lies in the fact that unless we can employ standardised methods to define and describe the many components, attributes and functions that make up the world's networks, then we'll be left floundering in an unmanageable sea of proprietary definitions and descriptions that will make the Tower of Babel sound simple by comparison.
This certainly isn't a new problem in human history. While many creation myths contain a common tale of the first men naming plants and animals, neither the physical nor life sciences would have progressed without standardised systems of measurement or nomenclature. Given that the future success of our industry relies on combining ever-larger numbers of devices, servers and network elements into a single functional entity, the need for a rational common language that can describe all these assets in consistent and meaningful ways is essential, not optional.

This issue, however, isn't just of concern to those focused on the back office or on engineering operations. Without the ability to clearly define and communicate about equipment, network connections, locations and services, we'll be completely unable to implement the wider shared ‘Information Infrastructure' concepts that cross-boundary services based on Web 2.0 rely on. How can different applications, content and network owners share information and hardware assets to create meshed and merged services without common ways of defining and communicating about those assets? Or even know which geographies they reside in? Integration costs even within a single company can be an onerous overhead, and the absence of standard terms makes inter-company cooperation even more complex and time-wasting.

Service providers have certainly already spent many hundreds of millions of dollars over the last decade on trying to rationalise their network inventory systems and find better ways of extracting the maximum value from their fixed assets. Anecdotal evidence suggests that some operators had, in the past, even managed to ‘mislay' significant chunks of their physical networks through poor record keeping, incompatible data formats and the departure or retirement of experienced engineers. Additional surveys from organisations like the Yankee Group show that ‘dirty' and inaccurate data will collectively cost US and European operators a total of $6.3 billion each year by 2010, with knock-on effects that impact heavily on provisioning, fault finding and network engineering. All too often, expensive human resources are wasted trying to resolve problems caused by misleading data records.
Significant too in this drive for a clearer understanding of the cost/performance issues of the infrastructure itself have been the various TMF initiatives such as the enhanced Telecom Operations Map (eTOM) and the Shared Information/Data (SID) model. Although also heavily focused on the semantics of our industry, these have increasingly been adopted by both service providers and vendors to solve real-world problems and remove unnecessary costs from both business relationships and operations engineering.   While extremely valuable, these approaches only go part of the way to solving the overall problem and are insufficiently granular in depth and detail to drill down to the kinds of elemental component details that are really required. Ultimately, poor network equipment information impacts in numerous ways across almost every aspect of service providers' operations - and on their relationships with their vendors, customers and partners.

But how exactly does - or should - a common language for describing components work in the telecoms industry? Can we quantify the benefits to service providers of using a standardised naming system for the multiplicity of different elements - inevitably from different manufacturers, with inevitably different naming conventions - that have to be combined to create a network and deliver a service?

Consider one of the most basic, bottom-up building blocks of any communications infrastructure - the humble plug-in card or blade. Each will inevitably carry multiple markings such as part numbers, revision numbers, com codes and stencilling - but might not even carry the name of the actual manufacturer or vendor. To add to the potential for confusion and hassle, most boards have to be removed from their racks before their provenance can be properly identified, with all the knock-on impact that has in interrupted services.
Even where individual service providers create local ‘asset tags' in an attempt to standardise their own inventory management systems, these often result in multiple representations of the same device type, adding still further to the overall confusion and waste. If service providers are unable to correctly identify equipment, they leave themselves open to confusion about their available inventory, their assets can become effectively ‘stranded', and they will often have to invoke costly data synchronisation, reconciliation and stocktaking measures to resolve these problems.

That concept extends not just to the equipment or its location, but also to how it is connected to other equipment, what the significance or ‘context' of that connection is, and what might happen to it if there's a requirement to swap out equipment at either end. Clearly, once we take into consideration equipment, its locations and the connections between them, we're talking about the entire Information Infrastructure I mentioned earlier. And that's far-reaching stuff, especially when you take into account that analyst firms such as Stratecast are already referring to Information Infrastructure as ‘the next OSS battleground'.
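To make that idea concrete, here is a minimal sketch - purely illustrative, with invented identifiers and field names rather than any vendor's or registry's actual data model - of how equipment, locations and connections might be held together so that the impact of swapping out a unit can be traced:

    # Illustrative only: equipment, locations and the connections between them
    # modelled together, so the impact of replacing a card can be traced.
    from dataclasses import dataclass, field

    @dataclass
    class Equipment:
        equipment_id: str          # normalised identifier, not a raw part number
        location: str              # e.g. site / rack / shelf / slot

    @dataclass
    class Connection:
        a_end: str                 # equipment_id at one end
        z_end: str                 # equipment_id at the other end
        context: str               # what the link carries, e.g. 'core uplink'

    @dataclass
    class Inventory:
        equipment: dict = field(default_factory=dict)
        connections: list = field(default_factory=list)

        def add(self, eq: Equipment):
            self.equipment[eq.equipment_id] = eq

        def connect(self, a: str, z: str, context: str):
            self.connections.append(Connection(a, z, context))

        def impact_of_swap(self, equipment_id: str):
            # List the connections (and their context) touched if this unit is replaced.
            return [c for c in self.connections
                    if equipment_id in (c.a_end, c.z_end)]

    inv = Inventory()
    inv.add(Equipment("EQ-001", "London/Rack4/Shelf2/Slot7"))
    inv.add(Equipment("EQ-002", "London/Rack4/Shelf2/Slot8"))
    inv.connect("EQ-001", "EQ-002", "core uplink")
    print(inv.impact_of_swap("EQ-001"))   # shows which links are at risk

Even in a toy model like this, the value only materialises if every party populates the identifier and location fields the same way - which is precisely the common language problem.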

For example, Arun Dharbal, SVP Communications Industry Solutions at SAP, has told me that SAP continues to see and enable an increased focus on the management of information across the enterprise. Its Master Data Management solution embraces this trend, with the capability to synchronise and unlock the value of information across a spectrum of systems and data management areas spanning the product, customer, asset and service domains.
However, the wider Information Infrastructure issue also manifests itself in the naming problem I mentioned earlier, and it impacts heavily on the vendor community. Although many service providers use the proprietary naming conventions of each of their vendors, this approach creates its own kinds of confusion. Because manufacturers have their own multiple drivers for identifying equipment - tracking manufacturing changes, marketing, ordering and so on - there is very rarely a strict one-to-one relationship between equipment type and part number. Suppliers might, for example, use several different part numbers for a single equipment type sold across different geographical markets. Conversely, multiple equipment types can be represented by the same part number and revision codes - making it impossible to rely on vendor-supplied part numbers and revisions to uniquely identify equipment types, either within or between suppliers.
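A small, entirely hypothetical example makes the point - the part numbers and equipment types below are invented, but the pattern mirrors what is described above:

    # Hypothetical catalogue entries: the same equipment type sold under
    # different part numbers in different markets, and one part number
    # reused for two distinct types.
    vendor_catalogue = [
        {"part_number": "PN-1000-EU", "revision": "B", "equipment_type": "10G line card"},
        {"part_number": "PN-1000-US", "revision": "B", "equipment_type": "10G line card"},
        {"part_number": "PN-2000",    "revision": "A", "equipment_type": "2-port OC-48 card"},
        {"part_number": "PN-2000",    "revision": "A", "equipment_type": "4-port OC-12 card"},
    ]

    def types_for(part_number, revision):
        return {e["equipment_type"] for e in vendor_catalogue
                if e["part_number"] == part_number and e["revision"] == revision}

    print(types_for("PN-2000", "A"))
    # {'2-port OC-48 card', '4-port OC-12 card'} - part number plus revision
    # alone cannot uniquely identify the equipment type.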

There is huge variation among vendors in how they record their own equipment. As a result, there are no clear guidelines on how revision and interchangeability information should be interpreted. The widely differing formats also make it impossible to create a normalised equipment identification mechanism based solely on a vendor-assigned part number and/or revision codes. Yet this concept of a normalised equipment identifier is central to any sensible model for equipment tracking, as it ensures that equipment can still be tracked, and interchangeable units recognised, across equipment revisions.
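By way of illustration only - the identifiers below are invented, and a real registry such as Common Language assigns and governs its own - a normalised identifier is essentially a mapping from each vendor's part number and revision onto one canonical key:

    # Sketch of a normalised equipment identifier: a registry (here just a
    # dictionary) maps each vendor's (part number, revision) pair onto one
    # canonical identifier, so market variants and interchangeable revisions
    # of the same card resolve to a single record.
    REGISTRY = {
        ("AcmeNet", "PN-1000-EU", "B"): "NEID-000123",
        ("AcmeNet", "PN-1000-US", "B"): "NEID-000123",   # same card, different market
        ("AcmeNet", "PN-1000-EU", "C"): "NEID-000123",   # interchangeable revision
        ("AcmeNet", "PN-3000",    "A"): "NEID-000456",
    }

    def normalise(vendor, part_number, revision):
        # Return the canonical equipment identifier, or flag the gap for review.
        return REGISTRY.get((vendor, part_number, revision), "UNREGISTERED")

    assert normalise("AcmeNet", "PN-1000-US", "B") == normalise("AcmeNet", "PN-1000-EU", "C")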
Adding to this complexity is the issue of how equipment information is actually managed and distributed by the vendors. Numerous non-standardised formats are inevitably used, with important documents stored as Microsoft Word and Adobe PDF files and distributed on CDs, in hard-copy manuals or via web links. With each supplier providing different sets of attributes and attribute values, it becomes difficult to model the information with any consistency, and doing so often involves complex mapping algorithms.
For an industry that's betting its future on a move towards ‘just-in-time' service creation and provisioning principles, many service providers are still stuck managing their inventories with very 20th-century technologies. While we now have a range of tools available to help with data capture and warehouse management - linear bar-code labels, standard 2-D symbology labels, RFID tags and auto-discovery - these will only work efficiently if they are supported by standardised equipment information formats.
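As a rough sketch of what that support might look like - assuming an invented two-field payload rather than any actual tagging standard - the tag itself would carry only a normalised identifier and a serial number, with everything else resolved from the shared registry at scan time:

    # Illustrative only: a compact, standardised payload suitable for a
    # 2-D barcode or RFID tag, decoded and resolved against the registry.
    import json

    def encode_tag(normalised_id: str, serial: str) -> str:
        return json.dumps({"id": normalised_id, "sn": serial}, separators=(",", ":"))

    def decode_tag(payload: str) -> dict:
        return json.loads(payload)

    tag = encode_tag("NEID-000123", "SN98765")
    print(tag)              # {"id":"NEID-000123","sn":"SN98765"}
    print(decode_tag(tag))  # looked up in the shared registry at scan time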

Although some good work has already been done in this area by various industry bodies, actual implementation has to be carried out by the individual service providers, network owners and vendors. It's here that realities on the ground often interfere with good intentions from above, resulting in multiple standards-based but still proprietary identification mechanisms, with heterogeneous part number and revision variances.
On top of this, both vendors and operators often end up wasting valuable time and energy forming cross-business teams of engineers and procurement staff in attempts to create internal naming and identifier conventions. With each service provider or vendor around the world attempting to deal with this complexity in its own way, the management overheads involved place a very heavy burden on an industry already trying to streamline its operations as much as possible.

Tony Gladden, Director of Products and Technology at SITACORP, has told me that data problems aren't new, but as the global marketplace demands ever more automation across corporate boundaries, data issues become more visible and manifest themselves in exception processing, creating higher costs and lost revenue in areas like invoice reconciliation, order fulfilment, shipping and receiving. So, clearly, this is an issue that lies at the heart of operators' ability to make progress.

To resolve these problems, Telcordia developed its Common Language® Information Services initiative, which has grown in recent years to become the industry's de facto centralised information registry and clearinghouse, providing the structure, format, unique and meaningful identifiers, syntax and common language registries needed to reduce operational and capital expenses. With nearly 100 service providers and 1,000 equipment vendors now using Common Language, there is hard evidence that some service providers are able to reduce their master data administration and maintenance costs by as much as 90 per cent for equipment, location, connections and service master data. Significant savings are also made in supporting areas such as spares inventories, systems integration and network utilisation.

These benefits extend beyond global operators to the firms they serve. SAP's Arun Dharbal says that enterprises are keen to exploit synergies between organisations, and that SAP's work with industry solutions such as Common Language gives its customers a single view of their assets across the corporation. Ultimately, this challenge needs to be addressed on a broader basis than the individual enterprise - the industry needs a strategy for managing information across corporate boundaries, as the information processes of each operator are intertwined with a broader ecosystem of trading partners and equipment vendors.

Although we are all at the beginning of solving the Information Infrastructure issue, and its size dictates that a Common Language won't be implemented overnight, some operators and enterprises are making rapid progress. Daniele Fracasso, Common Language Director at Telecom Italia, for example, says that Common Language has helped Telecom Italia achieve up to 95 per cent flow-through in its operations, reducing its cost of systems integration.

Colin Orviss, Senior VP at Patni Telecoms Consulting, emphasises the benefits of Common Language to the entire industry - operators, systems integrators and vendors alike. He says that it takes standards to a whole new level by providing a unique global implementation that no individual systems integrator or internal IT department could achieve on its own. And that's pretty powerful.

The last couple of decades have seen an explosion in the complexity of an already complex industry. What was once a largely closed community of national operators and vendors now includes members from the broadcasting, IT and consumer electronics sectors - each with their own ways of identifying and managing component equipment. If the industry ‘previously known as telecoms' doesn't put its house in order soon, much of the power of new technologies and business strategies will remain mired in the complexities of managing increasingly heterogeneous networks and systems, creating an internal Tower of Babel that does little for its customers' own need to communicate.

Allen Seidman is Vice President, Business Development and Marketing, Telcordia, and can be contacted via: aseidman@telcordia.com

    
