Features

On the eve of the TM Forum's Management World, Keith Willetts notes that the imminent arrival of a true digital economy represents a massive opportunity for expanded communications services. The key question, however, is whether it also opens up a whole new set of revenue streams for the service providers.

Not sure I've seen any fireworks lighting up the sky, but it's now a full 25 years since the first telecom deregulation. In that time just about every market in the world has gone down the path to competitive communications. So what, fundamentally, has happened in that time? Competition and regulatory pressures have transformed prices, but as the communications world discovered the laws of market elasticity, rising volumes and the phenomenal growth of mobile have meant that revenues have continued to rise. In reality, the business model for communication services hasn't changed much in that time - we've just sharpened up the marketing of the old one.

But just as the financial markets found out, all good things come to an end! According to IDATE, the global communications market grew by only 4.2 per cent in 2008, to $1.37 trillion, and most of that growth came from still-expanding markets like India and China. In mature markets any volume growth was more than cancelled out by price declines on mobile and broadband. Poor old fixed line revenues fell by five per cent. Prices for everything are declining as we head not only into a recession but maybe a deflationary period as well - I can't imagine a scenario where communications prices will go up; indeed, they are likely to follow a form of Moore's law.
In Europe, mobile penetration now exceeds 100 per cent, with no more market left to trawl. So how do you continue to grow your business? The stock answer from CEOs is an exciting story of new mobile broadband; mobile TV; IPTV; unlimited music; online books - you name it, they will claim it. But that question has been asked, and similar answers given, for a long time now, and there is little evidence to show that the service providers can realistically generate new, innovative revenues.

Remember when location-based services would make us all rich? Well, the market took so long defining standards for exposing the location data that the handset guys have just got around it by putting GPS chips in their phones. Same for MMS - too hard, too slow and too user-unfriendly to get a mass market going. The only truly new services, like iTunes, have come to market from ‘over the top' players, not the communications companies.

So the question has to be asked - can service providers realistically generate sufficient new revenue from the services they sell to their current customers to replace the falls in price on traditional services as markets saturate? And if the answer to that is maybe not, what are they going to do for an encore? Until recently you could point to diversified services like the outsourcing of corporate communications networks as a ray of sunshine - that was until one major carrier started posting profit warnings and admitted over-stating the profitability of that business.

Clearly, service providers are quickly coming to a fork in the road when it comes to their core business model - just who are their customers and their competitors; what services should they be selling and how are they going to monetize them?

Pioneering services like Amazon, Google, iTunes, and Hulu have shown that entire markets can be shifted to a digital economy model at much less cost, but where everyone can still make money - apart from bricks and mortar stores, of course. We are seeing a similar thing in publishing - more and more publications are going online and eschewing expensive printing and shipping. Books and newspapers may well follow music and videos in going online through products like Amazon's Kindle.

In fact, the global recession will push almost every business on the planet to look at what cheaper and better online approaches they can exploit. Thanks to advances in communications - fibre, 4G wireless and femtocells (putting cell sites within the home) - the market for digitally enabled services may well explode on a myriad of consumer devices, from net-enabled TVs to online gas and electricity meters, fridges and cars.

This mushrooming of devices and a true digital economy represents a huge array of opportunities for expanded communications services.  The key question is - does it also open up a whole new set of revenue streams for the service providers? Do they get commoditized into bit pipe players? Would that matter?

Almost as long ago as deregulation, Michael Porter (Competitive Advantage, 1985) outlined the concept of companies maximizing their core competencies and minimizing any reliance on what they are not good at. So what is it that communications companies are good, and not so good, at? How many wildly successful new services have been introduced in the past 10 years? Apart from DSL (Alexander Graham Bell with knobs on) you really have to scratch your head to come up with anything - most are basically variations on a theme: voice minutes in all-you-can-eat packages with texting thrown in, and different bundles with broadband.

Of the mass market, innovative, successful services - Google, Facebook, iTunes, Kindle, Hulu, and so on - none has come out of a communications company. All of them could have been invented by a communications player - they certainly have the brains - but their business models get in the way: their DNA is just not geared to taking risk, moving quickly and launching anything that might damage current lines of business.

But on the other hand, none of these new services could exist without the innovations of the communications industry. What the service providers are good at is being a great enabler of other people's services - after all, for a hundred years phone companies have enabled us to talk to other people - they didn't do the talking!

Being a service enabler presents a new business model or, at least, significantly extends an old one. Providing a range of enabling capabilities can unlock a different charging model, such as taking a percentage of the revenues of the services that are enabled. This gives much more scalable revenues than, say, flat bandwidth charging approaches. It opens up new revenue streams by opening up the software and process infrastructure of a comms company - transport obviously (maybe at various qualities of service) plus capabilities like billing; settlements; authentication; cloud computing; user information and so on: in other words, a super-wholesale enabler. But to open your mind up to that, you have to get your head around the fact that you are accepting that someone else is going to be the provider of service to the end user. And it's tough to pursue both a provider model and an enabler business model in the same company because they are usually in conflict. You can just imagine the schizophrenia that can result.
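
To make the contrast concrete, here is a toy comparison - the figures are invented purely for illustration - of a flat wholesale bandwidth charge against a 10 per cent share of the revenue of the services being enabled:

```python
# Illustrative comparison only, with made-up numbers: why a revenue-share
# enabler model scales with the services it enables, while flat bandwidth
# charging does not.

flat_monthly_fee = 100_000   # assumed flat wholesale bandwidth charge ($)
revenue_share = 0.10         # assumed 10% cut of enabled-service revenue

for service_revenue in (500_000, 2_000_000, 10_000_000):
    share = revenue_share * service_revenue
    print(f"enabled revenue ${service_revenue:>10,}: "
          f"flat ${flat_monthly_fee:,} vs share ${share:,.0f}")
```

The numbers are arbitrary; the point is that the enabler's revenue line tracks the growth of the services it enables, while the flat charge does not.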

At TM Forum's Management World Nice this May, we're hosting a session on exactly this subject. Werner Vogels, CTO of Amazon, will talk about how his company has successfully played both sides of the fence: providing services to its own end users but also providing a lot of capability to enable third parties to sell through Amazon.

This business model is starting to be better understood and taken more seriously by communications companies, but you'd have to say the jury is still out on which fork in the road providers are going to take. Will it be the model of trying to develop innovative new services for individual end users and businesses, or will it be more the role of a behind-the-scenes enabler?

I think the next two to three years will be crucial to answering this question. A "do nothing" approach probably means service providers getting backed further and further into the role of a commodity bit carrier. Being the ‘Intel Inside' of numerous new and exciting services is a much better place to be than a bystander watching the action from the sidelines. Enabling other people's services is something that communications companies can do to leverage their real core competencies.

Let's put a traffic camera by that fork and watch which way the punters go.

For more information on TM Forum and Management World in Nice please visit
www.tmforum.org/mw09
Keith Willetts is Chairman and CEO, TM Forum
www.tmforum.org

Mobile network operators often ask 'how can we leverage the social networking phenomenon?' A better question, says Jouko Ahvenainen, is 'how can we mine the social network we already have - that is, the network of our subscribers?'

Mobile phones generate better and more useful information about consumers than any other technology, including the web. How people use their phone is a very personal thing. Who people call, text and save in their phones is more closely related to their 'real' network of contacts than the people they connect with on Facebook, Twitter or MySpace. In the book I helped write, Social Media Marketing, one passage reads:

"The mobile device is a key element of the digital footprint since it is oriented towards capturing information (which is a driver of the digital footprint). The real question for the telecoms and mobile industry is what can they do with all this information? More importantly, what could they do in future with all this information?"

By recognising this advantage and looking at how this data can be utilised, while also maintaining strict standards of trust and privacy with subscribers, mobile phone operators can understand the best asset they already have - the 'goldmine' of subscriber data.

This data can open exciting new revenue channels and give deep customer insight so operators can offer better products and services. They can also create much more effective anti-churn campaigns. In addition to behaviour and demographic information, subscriber social network and influence information can be uncovered. Who a subscriber has influence over is a critical point when it comes to churn. If person A churns, and brings persons B, C and D with them, the problem has been quadrupled. By identifying that person A might churn, and that he or she will bring three other leavers along, mobile operators can make churn-busting campaigns far more economical, with far greater impact. Operators can also become a source of market research data in the future, as they can collect more data than traditional market research firms can.
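
To make the idea concrete, here is a minimal sketch - the names, numbers and scoring rule are invented for illustration, not drawn from any operator's actual analytics - of how retention targets might be ranked by the expected loss if a subscriber churns and pulls contacts along, rather than by raw churn risk alone:

```python
from collections import defaultdict

# Hypothetical call records mined from CDRs: (caller, callee) pairs.
calls = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "A"), ("E", "A")]

# Distinct contacts each subscriber regularly reaches.
influence = defaultdict(set)
for caller, callee in calls:
    influence[caller].add(callee)

# Hypothetical per-subscriber churn probabilities from a separate model.
churn_risk = {"A": 0.6, "B": 0.2, "C": 0.1, "D": 0.1, "E": 0.3}

def expected_loss(subscriber, follow_prob=0.25):
    """Expected subscribers lost if this person churns: themselves,
    plus an assumed chance that each direct contact follows them."""
    followers = len(influence.get(subscriber, set()))
    return churn_risk[subscriber] * (1 + follow_prob * followers)

# Rank retention targets by expected loss, not raw churn risk alone.
for s in sorted(churn_risk, key=expected_loss, reverse=True):
    print(s, round(expected_loss(s), 2))
```

In this toy example, person A (moderate risk, but three dependent contacts) outranks subscribers with similar individual risk but no influence, which is exactly the economics described above.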

Of course, maintaining subscriber privacy is critical in all this, and it is certainly achievable. But it requires a shift in culture. There is a stark difference between trying to 'know' your customer and trying to 'own' your customer. Web companies continue to 'know' their customers better, while many phone operators stick to an outdated notion that they 'own' their customers. The longer the telecoms industry delays knowing the customer better, the longer the industry will lose out. Embracing the power of social networking information is key to this transformation.

Many operators hope that call data records (CDRs) give enough information to gain improved customer insight. They are a good starting point for making the operator's own marketing activities more effective, but they are not enough to be a platform for advertising and social services in mobile. We need to know more about how the subscriber interacts and uses service elements. There are three generally accepted things that social media requires to know a customer better:
1. You must incentivise a customer to share more personal information about him/herself. This assumes that all privacy / confidentiality standards are adhered to.
2. An open ecosystem / platform is needed. At present, Facebook and Google Android are the best examples of where third parties interact on open playing fields, and as such, do the work to grow the ecosystem. The best part about this is it can be exploited without direct monetary rewards.
3. We need touch points: places where the operator can interact directly with the customer and generate two-way communication. Broadcasting information at customers without interaction (a one-way street) is an outdated approach that no longer works in today's Twitter age. Advertisements are the best option (an alternative, call centres, are too expensive). And social networking can make these advertisements personal, intuitive and relevant - rather than annoying to the subscriber. Open systems are the best location for the advertisement to be placed.

With these first steps in place, operators can start by thinking about marketing and services in a new way. Marketing can no longer be a one-way broadcast of messages to customers; it must be a much more interactive relationship that also supports word-of-mouth. And it is the same with services: operators cannot make or select all services for the subscribers; users must be able to create their own services and choose which ones they want to use. Web 2.0 truly is coming to mobile, and it offers a platform where people can do what they want to do - not a place to push selected ideas and models at them. And web 2.0 is not the only evolution; there is also CRM 2.0, in which subscribers can utilise their own data and data analytics for their own benefit. For example, subscribers can explore their own social network and manage their own connections. This becomes a way to motivate people to share their data, because they get benefits from doing so.

Following this, a measurement system must be created and agreed upon. Operators cannot act until there is a way to manage social media programmes. One approach has been to track user behaviour at the network level. An example of this is the mechanism Phorm's service is based upon. Phorm has become a pressing issue in the UK because, even if you did get permission to undertake this level of tracking, people do not understand networks and don't trust what they don't understand. People are normally okay to share small amounts of information if they get something back, but they don't like one-way spying models.

By working at higher levels of the stack than the network level (usually the application level), and by making 'knowing the customer better' the goal rather than 'owning the customer', there are huge marketing gains to be had, as well as a new level of trust to be engendered with subscribers. The only way this can be achieved is through aggregated data: not working with specific individuals or specific transactions, but rather with aggregates and patterns derived from the data.
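
A minimal sketch of the principle might look like this - the field names, segments and minimum group size are assumptions for illustration - with only segment-level aggregates, never individual records, leaving the analytics layer:

```python
from collections import defaultdict

# Hypothetical per-subscriber usage records (kept inside the operator).
records = [
    {"segment": "18-24", "mb_used": 310, "sms_sent": 420},
    {"segment": "18-24", "mb_used": 250, "sms_sent": 380},
    {"segment": "25-34", "mb_used": 120, "sms_sent": 90},
    {"segment": "25-34", "mb_used": 140, "sms_sent": 110},
]

# Only aggregates are released, and only for groups large enough that
# no individual can be singled out (a k-anonymity style threshold).
K_MIN = 2

groups = defaultdict(list)
for rec in records:
    groups[rec["segment"]].append(rec)

aggregates = {
    seg: {
        "subscribers": len(recs),
        "avg_mb": sum(r["mb_used"] for r in recs) / len(recs),
        "avg_sms": sum(r["sms_sent"] for r in recs) / len(recs),
    }
    for seg, recs in groups.items()
    if len(recs) >= K_MIN
}
print(aggregates)
```

The marketing layer sees segment patterns it can act on; no individual transaction ever crosses the boundary.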

Data aggregation for marketing purposes also presents a solid business case for the converged operator. If an operator owns customer touchpoints via not only the mobile phone, but also broadband, TV, landline, etc, the data presents a richer picture of the customer, and it becomes easier to engender trust when the customer only has to share information once, with one brand they trust.

As mentioned before, by understanding people's behaviours in the context of this network, operators can pull out and define a member's measure of influence, and use this for clever new mobile and online marketing techniques. London-based Flirtomatic, the web and mobile social network for flirting for ages 18+, is using this superior customer insight to engage in more targeted marketing and services for its influential members - those who have word-of-mouth impact on other members. Flirtomatic applies 'social intelligence' to create more compelling services for each customer segment, and targeted, relevant and personal marketing and promotions, via web and mobile. Flirtomatic is focused on generating viral take-up throughout its community via word-of-mouth marketing. This approach works two-fold, because influential members have both direct pull over purchasing decisions (by recommending products to their friends) and indirect pull (through friends' desire to imitate or mimic their purchases). Flirtomatic is using Xtract's Social Links product for this insight.

This works because studies show that social influence is more important than any other factor in consumers' purchasing decisions. One piece of research on car buying showed that 71 per cent of car buyers were influenced by what their friends said, whereas only 17 per cent were influenced by TV ads.

This insight can be applied by the operator to its own marketing, such as churn campaigns, where improvements in campaign effectiveness of as much as 20 per cent have been achieved, or to generating new revenue through third-party advertising schemes. And this market is growing fast; eMarketer has predicted that spending on behavioural targeting will reach $3.8 billion by 2011.

Flirtomatic's CEO Mark Curtis recently said: "The early results from customer segmentation are very insightful and exciting. We can now see considerable potential, as the business scales, to directly improve our revenues through a sophisticated view of our customers, their behaviour and the pattern of their relationships. The tool hands us an effective, powerful CRM solution."

Jouko Ahvenainen is co-founder and VP at Xtract, and co-author of Social Media Marketing. He can be contacted via Jouko.Ahvenainen@xtract.com
www.xtract.com

The growth in the African telecommunications market over the past five years has been nothing less than phenomenal. Although growth rates are expected to slow, Julia Lamberth and Serge Thiemelé explain that Africa should continue to be the fastest growing market in the world for the next five years

The growth in the African telecoms market has turned the telephone from a luxury item into a basic necessity in many countries. However, the expansion has not been universal across the continent. Some countries, such as South Africa and Libya, have already passed the 100 per cent mobile penetration rate, while others, such as Ethiopia and Eritrea, still have penetration rates under 10 per cent.

According to Ernst & Young's recently released Africa Connected survey, the growth up until now has been driven almost entirely by GSM voice. While voice should continue to be the largest component of the market for the foreseeable future, it is expected that data is going to be an ever-increasing component of operator revenues in the future.

Internet penetration on the continent is still substantially lower than in any other part of the world, with only nine countries on the continent having penetration rates above one per cent. The construction of submarine cable systems, the first of which should be operational by the middle of the year, is likely to be the catalyst for accelerated growth in African Internet penetration. Alongside the construction of the submarine cable systems, which should, to a large extent, address the problems posed by inadequate international connectivity, there has been significant investment in terrestrial fixed line infrastructure.

This investment has been made by both private operators, especially in countries such as Nigeria and South Africa, as well as by governments, in countries such as Angola, Malawi, Botswana and the Democratic Republic of the Congo. While the impact of this investment will not be felt immediately by consumers in many countries, it will provide the basis for cheaper and more reliable telecommunications in the next few years, particularly in rural areas. It is likely that the vast majority of the next wave of African Internet users will not connect to the Web through services that rely on fixed networks, but instead use the infrastructure provided by mobile and fixed wireless service providers.

One of the reasons for the slow pace of telecommunications growth on the continent in the past has been the historical lack of basic infrastructure. Poor infrastructure was one of the areas identified by operators as a key challenge to the development of telecommunications. This weakness has manifested itself in a number of ways, including limited access to core telecommunications infrastructure, as well as the lack of a reliable electricity network needed to keep networks up and running. This weakness in basic services has a negative impact on the ability of operators to rapidly deploy their own infrastructure, and having to make contingencies for it carries a significant cost. Safaricom in Kenya, for example, reportedly spends more than a million euros a month on diesel to power the generators it needs to keep its network running.

This situation has resulted in operators exploring alternative sources of energy such as wind or solar power to supplement other power generation options.

Operators identified attracting and retaining talent as their largest operational issue. This applies to both technical and management skills, with operators struggling to fill vacancies across the spectrum. While they acknowledged the importance of training, the issue of staff being poached by rivals was identified as an ongoing challenge.
The increased vigilance of regulators on the continent has heightened the need to ensure network reliability, as regulators take a more active consumer protection role. Examples of this include operators being barred from marketing their services in Nigeria until quality of service reached an acceptable level.

In addition, operators interviewed voiced concern over the perceived political interference in the regulatory process. It is this lack of consistency that creates difficulties for operators, as they are unsure of how changes in the local regulatory framework will impact their businesses, especially if these changes are being driven by a political, rather than a pure regulatory agenda.

Operators highlighted the high rates of taxation they are subjected to, with the average across the continent coming in at over 30 per cent. Governments across the continent have chosen to place a heavy tax burden on mobile operators by taxing profits at a higher rate, instituting mobile specific taxes or raising license fees. Operators also raised the issue of excise taxes on imported handsets, making them less affordable to consumers and hampering the ability of companies to reach potential customers in lower income brackets.
Scale is considered one of the key elements of future success in the African market, and competition for new licenses and existing operations is keen. It is likely that we are going to see significant consolidation in the next few years, as smaller operators feel the effects of increased competition.

The global economic crisis is not likely to leave the African market unscathed, as many operators may find it more difficult to raise the funding needed to continue the level of investment needed to remain competitive. Especially in key markets such as Nigeria where the multinational operators are investing heavily in network expansion, the smaller operators may find it difficult to keep pace with either the network coverage or the technological innovation of the large regional and multi-national operators.

The issue of infrastructure sharing and outsourcing of parts of the business is one way for operators to cut the costs of doing business. However, operators surveyed were resistant to this, preferring rather to have control over the infrastructure and services that they consider their competitive advantage. More recently, however, some operators have said that they are looking to cooperate with competitors wherever possible to bring down the cost of deploying new infrastructure.

Operators, specifically in more developed markets, are also starting to look at broadening their set of services to include targeting the wider ICT market. This has seen operators acquiring companies in the information and communications technologies sector. This broader focus is setting the stage for a divide in the market between operators that choose to create a converged services offering and those that focus on offering voice and basic data services at a lower cost.

The rollout of these converged services, which include fixed and mobile services as well as offerings that have traditionally been the reserve of the Internet service providers, such as hosting and business continuity, will further drive the development of the African ICT market. While the initial focus of these services will be in the developed markets, it is expected that these services will rapidly be driven out to corporate customers in all the territories in which these companies operate in Africa.

This investment by operators, as well as the infrastructure that is being deployed will set the stage for the rapid adoption of more data-focused services for both governments and corporates across the continent. While these types of organisations are likely to access these new networks via new fixed-line networks, consumers should benefit from the deployment of high-speed wireless services with the attendant increase in bandwidth and broadband.

It is anticipated that networks based on 3G will dominate the market for broadband wireless access with CDMA EVDO and WiMax offering some competition as well as providing access where the GSM-based service is not suitable.

We expect the next five years to see a continuation of the growth in African telecommunications, with increased Internet and broadband penetration across the continent. At the same time the market is likely to undergo a period of considerable consolidation, with the existing African operators continuing to expand their reach across the continent. It is our view that operators who do not already have an African presence could have a difficult time challenging both the strong regional and global operators (MTN, Vodafone, France Telecom and Zain, for example) and a plethora of new licensees (more than 40 per cent of the market is still in one per cent market share slices). Operators launching as the fourth or fifth license holder in a country may face challenges in generating profits, especially where one operator already has a dominant position. Countries such as Angola and Ethiopia, where none of the large regional players have established a presence, are being viewed as real opportunities for future expansion.
Julia Lamberth and Serge Thiemelé are co-leaders of the Ernst & Young Global Telecommunications Center - Africa.

The Africa Connected survey was compiled from interviews conducted with operators from across Africa. For further information please visit www.ey.com/telecommunications or contact globaltelecommunicationscenter@uk.ey.com

With mobile operators keen to implement impending network upgrades in the most effective manner, Colin Garrett explores how they can limit network planning costs in the face of the economic downturn

Mobile operators are under increasing pressure to provide the best service to their customers at the most competitive rates.  The next 12 months will see operators across Europe struggling to strike a sensible balance between the need to roll out the latest network upgrades and avoiding passing additional costs on to the end user.  With the difficult economic situation affecting industries across Europe, all eyes are on reducing costs across the board, and for mobile operators this means reviewing spend involved in the initial planning stages of the network through to the training of customer-facing staff.

With the rise in popularity of the smartphone during 2008, consumers and business users are demanding improved mobile data speeds to access more content via mobile. The race is on for mobile operators to boost data speeds by rolling out HSPA and LTE networks as soon as possible. The first step in upgrading existing mobile networks is to gather sufficient network data to identify areas of high mobile penetration and expose any areas that may be lacking in coverage and capacity, before choosing which areas of the network require the most urgent upgrade work.

A common cost-effective approach to testing the network is to seed drive test tools in business van fleets. Drive test systems enable wireless operators to view their own and their competitors' wireless voice and data services from the perspective of the subscriber by providing critical quality-of-service (QoS) measurements. Network designers can then use portable test transmitters to verify optimal antenna positioning, and as a low-power source for testing the design and functionality of RF repeaters and base stations. This allows operators to limit infrastructure costs by identifying the correct products for network upgrades.
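
To make that concrete, a purely illustrative sketch follows - the record format, fields and thresholds are assumptions, not any vendor's actual log format - showing how raw drive-test samples reduce to the per-cell QoS figures that point to where upgrade work is most urgent:

```python
# Illustrative only: reducing hypothetical drive-test samples to the
# kind of per-cell QoS figures used to target upgrade work.

samples = [
    {"cell": "NW-01", "rscp_dbm": -78, "call_dropped": False},
    {"cell": "NW-01", "rscp_dbm": -101, "call_dropped": True},
    {"cell": "NW-02", "rscp_dbm": -85, "call_dropped": False},
    {"cell": "NW-02", "rscp_dbm": -88, "call_dropped": False},
]

WEAK_SIGNAL_DBM = -95  # assumed coverage threshold

for cell in sorted({s["cell"] for s in samples}):
    cell_samples = [s for s in samples if s["cell"] == cell]
    drop_rate = sum(s["call_dropped"] for s in cell_samples) / len(cell_samples)
    weak = sum(s["rscp_dbm"] < WEAK_SIGNAL_DBM
               for s in cell_samples) / len(cell_samples)
    print(f"{cell}: drop rate {drop_rate:.0%}, weak-signal share {weak:.0%}")
```

Cells with high drop rates or a large share of weak-signal samples become the candidates for capacity or coverage investment.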

It has become widely accepted in the ICT industry that the correct method for analysing the cost of a vendor's products or services is a Total Cost of Ownership (TCO) analysis. Rather than focusing solely on price, buyers of ICT products and services must consider the additional, often hidden, costs of training, operating, managing and upgrading their purchases. Looking at the purchase price alone is not enough.

TCO is more than the original cost of purchasing the system - we have found that more than 70 per cent of the TCO lies in non-purchasing activities - and it must include all direct and indirect costs associated with mobile network data gathering and drive test systems. Drive test systems have a typical life span of five years. At some institutions this life span may be more like ten years, but in both cases the older units are eventually removed and abandoned as redundant because they cannot be used to test and measure the latest network infrastructure upgrades.

There are many factors and elements that make up the TCO for mobile network data gathering with drive test systems. Over the last ten years, the TCO for drive test tools has continued to increase due to technological advancements, drive test product limitations and increased Mobile Network Operator (MNO) competition. Those institutions that have already developed strategies and programmes to reduce the cost of ownership of mobile network data gathering systems are now seeing the benefits. Institutions that have not yet addressed this issue are probably not seeing any cost reduction; in fact, institutions and companies that have not addressed TCO are continuing to experience out-of-control cost increases for mobile network data gathering and drive test systems.

Sweeping changes and improvements in technology continue to challenge the mobile industry to reshape and redefine how best to deploy mobile network data gathering systems. Individual organisations will find that it can prove expensive to stay current unless they have a handle on what it takes to acquire, implement and support drive test tools. By addressing the components that make up the TCO, an institution will be in a position to take full advantage of the latest innovations in mobile network data gathering techniques. It will become very difficult, even impossible, to implement an institutional mobile network data gathering and drive test methodology aimed at including HSPA results if an enterprise is using a bespoke, technology- and frequency-limited system.

Introducing the wrong drive test systems to your network can be very costly. Being aware of the TCO components is the first step in lowering your mobile network data gathering cost. We have found that limiting choices and setting standards are the best methods for starting to get control of your drive test systems cost. While ensuring that all parties use a single type of system is usually the fastest way to bring mobile network data gathering and drive test systems costs under control, it is not always easy to implement when both individuals and group networks have developed enough expertise and knowledge to be able to specify and utilise their own drive test systems.

The implementation of "soft standards", including significant economies of scale, simplified purchasing procedures and centralised training support, will work best in bringing the entire enterprise to accept a standard and limited choice. Nevertheless, limited choice should still offer enough variety to cover the end user's requirements, including engineering (optimization and integration), special coverage groups (in-building and special coverage projects), marketing (benchmarking) and management (key network performance indices).
As already established, institutional TCO consists of more than simply the original purchase of hardware and software. We have defined seven base elements that make up the cost components for drive test systems: the purchase price for all hardware and software, staff training costs, installation and implementation costs, support services and update costs, the cost of required functional upgrades, technology upgrade costs, and interoperability costs (a simple worked example follows the descriptions below). Each of these base elements includes several types of expenditure.
The purchase price includes all direct and indirect purchases for a drive test system, namely the drive test tool hardware, software, supported data collection devices, and log file (output) manipulation. The price should also include warranties, extended warranties and maintenance agreements.

Training costs will include all direct and indirect expenditures for training activity required to effectively run the drive test system. Formal and informal training usually occurs with the installation of the drive test system. Costs and methods vary according to vendor.

Installation and implementation costs include all direct and indirect expenditures involved in ensuring that the system is installed correctly and meets an institution's standard operating procedures. This may vary from tools needed for hardware installation to server configuration to accommodate the storage and access of log files.

Support services costs include all staff costs incurred in providing adequate personnel support to the drive test system. This includes on-site technical support, as well as remote support via telephone, e-mail and the Internet. Installers, troubleshooters and skilled support staff are all involved in maintaining the system.

Functional change upgrade costs comprise both direct and indirect expenditures necessary to make ongoing changes to the drive test system's operation. This allows the institution to increase its drive test efficiencies, including the deployment of the latest software updates, the addition of extra parameters, and the improvement of data displays.
Technology upgrade costs should take into account both the direct and indirect costs involved in acquiring new tools or upgrading the current system to be compatible with new mobile devices as well as the latest mobile technologies, e.g. CDMA 1x to EVDO or HSPA to LTE.
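
Pulling those base elements together, a simple worked example - the lifetime figures are invented purely for illustration - shows how the calculation runs, and why the purchase price can easily be the minority of the total:

```python
# Hypothetical figures only, to illustrate the calculation: total cost
# of ownership over the system's life is the sum of the seven base
# elements, not the purchase price alone.

tco_elements = {  # five-year lifetime costs, illustrative
    "purchase_price":      100_000,
    "staff_training":       45_000,
    "installation":         30_000,
    "support_services":     90_000,
    "functional_upgrades":  60_000,
    "technology_upgrades":  80_000,
    "interoperability":     40_000,
}

total = sum(tco_elements.values())
non_purchase = total - tco_elements["purchase_price"]

print(f"Five-year TCO: ${total:,}")
print(f"Non-purchase share: {non_purchase / total:.0%}")  # ~78% here
```

With these invented numbers the non-purchase share comes out at roughly 78 per cent, consistent with the observation above that more than 70 per cent of TCO lies outside the purchase itself.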

Through a careful step-by-step consideration of each of the elements that constitute the TCO for a vendor's drive test system, mobile operators can reach an informed decision as to the cost effectiveness of a vendor's tool set. Although wireless network data gathering comprises only one aspect of the network planning process, an accurate TCO evaluation for drive test systems is a great place to start in order to ensure maximum cost and performance efficiency across an institution's entire remit. At a time when businesses need to evaluate every area of their spend in order to retain the highest possible competitive advantage in a saturated market, mobile operators cannot afford to base buying decisions solely on purchase price, but must instead consider all aspects of TCO across their wireless networks.

Colin Garrett is Product Manager, Test and Measurement Systems, Andrew
www.andrew.com

This is the first in a series of columns focusing on issues surrounding the management of today's communications business models. For this debut effort, I thought I would talk about voice over IP and its impact on communications, or perhaps I should say the lack of it.

I read an interesting article recently that said voice over IP (VoIP) was stalling, even though not so many years ago it looked like it would sweep the board. Indeed, VoIP usage appears to be declining; a recent report by independent British communications regulator Ofcom says that only 14 per cent of broadband subscribers are even using the technology.
Adding fuel to the fire is the rumor floating around that eBay is looking to sell VoIP provider Skype, which it purchased in 2005 for over $2 billion. The article even quotes Skype's CEO as saying it's a great standalone business. Surely that's a big hint at what may be to come.
So this is quite an interesting turn of events we have on our hands. The reason I'm focusing on this bit of news that VoIP uptake appears to be waning is that, back about 15 years ago when I was working with BT, I saw my first demo of the technology. I remember one of BT's board members being rather panicked and saying VoIP would kill off their business, and that the world was coming to an end.

Obviously that never happened. But what has happened is that pricing on traditional circuit-switched calls has fallen lower and lower over the past 15 years. Nowadays, most people have some sort of flat-rate fixed-line or mobile calling plan that's priced very aggressively. Sure, Skype-to-Skype calls are free, but today's consumer is interested in a lot more than just a free lunch.

Also, the convenience of VoIP just isn't there. With PC-to-PC calling, as with Skype, you're anchored to your PC and stuck at your desk. If all parties are using the same service, the call usually works as intended, but if you're on a raw IP connection, or someone is using the conventional phone network, all bets are off.

And contrary to the common perception that if something is free you can't complain about it, consumers are much more savvy, and demand that every form of communications they touch lives up to the high standards of the traditional PSTN.

Back when mobile phones were brand new, and the novelty of being able to call from the middle of a field or the top of a hill still had a shine on it, people didn't really care if calls dropped or quality was poor. But after a while that novelty started to wane, and today you can get mobile service in tunnels, on trains and just about anywhere else, with high call quality.
So we have lower priced traditional voice calls and customers who are demanding - and getting - higher quality of service. And that is exactly what the Internet has not been able to achieve in terms of voice.

It exposes the myth that people don't care about quality if something is free. And nowhere is voice call quality more of an issue than in the corporate world. Can you imagine the Fortune 500 companies using a VoIP configuration that is going over the general Internet where there is no packet priority and jitter and delay are common? The Internet is great for email, downloading video and anything else where it's not a huge deal if packets are sent and received out of order or with latency. But the inconvenience of having a VoIP call dropped or sounding like static just isn't cutting it in the corporate world.

I'm the furthest thing from a Luddite, but the call quality, the inconvenience of being stuck making calls from your PC and other factors are hindering VoIP's potential to be a voice communications game-changer.

Keith Willetts is Chairman and CEO, TM Forum
kwilletts@tmforum.org

A wide range of factors is driving mobile broadband demand as our lifestyles become increasingly digital. Howard Wilcox asks whether LTE is the natural future standard of choice

LTE is a global mobile broadband standard that is the natural development route for GSM/HSPA network operators and is also the next generation mobile broadband system for many CDMA operators. The overall aim of LTE is to improve capacity to cope with ever-increasing volumes of data traffic in the longer term. The key LTE objectives include:

  • Significantly increased peak data rates - up to 100 Mbps on the downlink and up to 50 Mbps on the uplink
  • Improved cell edge performance and reduced latency for a better user experience
  • Reduced capex/opex via a simple architecture, re-use of existing sites and multi-vendor sourcing
  • A wide range of terminals - in addition to mobile phones and laptops, many further devices, such as ultra-mobile PCs, gaming devices and cameras, will employ LTE embedded modules.

3GPP's core network has been undergoing SAE (System Architecture Evolution), optimising it for packet mode and for IMS (IP Multimedia Subsystem), which supports all access technologies. SAE is therefore the name given by 3GPP to the new all-IP packet core network that will be required to support the evolved LTE radio access network (RAN): it has a flat network architecture based on an evolution of the existing GSM/WCDMA core network. LTE and SAE together constitute 3GPP Release 8 and have been designed from the beginning to enable mass usage of any service that can be delivered over IP. The LTE RAN specification was completed at the end of 2008, with further work required to complete SAE by March 2009: this work is on track for completion of the full Release 8 standard at that time.

Beyond LTE to 4G
LTE is often quoted as a 4G mobile technology. However, at this point there is no agreed global definition of what is included in 4G: the ITU is establishing criteria for 4G (also known as IMT-Advanced) and will be assessing technologies for inclusion. The two next generation technology candidates are mobile WiMAX 802.16m (WiMAX Release 2) and LTE Advanced. Both these products will meet the IMT Advanced specification with, for example, up to 1 Gbit/s on the downlink at low mobility.

There is a wide range of factors driving mobile broadband demand as our lifestyles become increasingly digital.

Personal connectivity:  "Always On"
Anytime, anywhere connectivity as an overall concept is becoming a clear user expectation. Increased connectivity drives applications, user preferences and broadband demand, which in turn drives the demand for access. The demand for increased access leads to bigger investments in mobile and broadband networks, in turn making access cheaper and supporting higher bandwidths and ubiquitous connectivity. As available bandwidth grows, so does the variety and sophistication of devices. As the volume of devices increases, prices become more attractive, so driving user demand. This completes the cycle of demand.

However, each demand driver can equally impact any of the others: smarter devices, for example, clearly drive more sophisticated applications and services, whilst the knowledge that increased bandwidth is available means that more users are likely to demand services.

Economic stimulus
Fixed broadband already plays a vital part in developing the economy, connecting the population at large, businesses, and governments, and enabling commerce. Mobile broadband is also being driven by the need to provide broadband where it is not possible to easily, quickly and economically deliver fixed broadband, particularly in developing countries, but also in underserved or rural areas in developed countries.

Emerging mobile youth generation
The younger generation (particularly the under 18s but also the 18 to 30 age group) are the future employees and workers, as well as being the momentum behind popular applications such as social networking, gaming and music and the earliest adopters of ICT devices. They are also amongst the most skilled, innovative and fastest learning users of technology. These skills and expectations as users are derived not only from their mobile phones but from the increasing ubiquity of broadband at home, and the teen generation is highly likely to carry forward this level of expectation (and more) into adulthood.

New applications and services 
New applications and services (some of which may well be unknown now) are going to be key drivers of mobile broadband and faster and faster data rates. Aspects include: 

  • Growth of mobile commerce

Over the past 12 to 18 months there has been significant activity and growth in mobile payments (particularly digital and physical goods purchases) and mobile banking. These services and applications, along with contactless NFC, mobile money transfer, ticketing and coupons, are forecast to grow rapidly over the next five years.

  • Mobile web 2.0

Before long, anything you can do at your desktop, you will be able to do on the road with a laptop or other mobile device. Users want the same capabilities wherever they are located and however they are connected - as fixed, mobile or nomadic subscribers.
This means that mobile broadband will provide personalised, interactive applications and services, such as multiplayer gaming, social networking and other video/multimedia applications: anytime and anywhere. The mercurial rise of social networking sites and user-generated content has rekindled users' interest in accessing web-based services on the move.

The difference between current 3G applications and mobile broadband at the speeds envisaged is that LTE mobile broadband will enable greater user-generated content and uploading/downloading, along with person-to-person connectivity.

  •  Portable video revolution

One application that is crucial to driving demand for mobile broadband is video. A variety of video applications can be offered, including video calling, video clip streaming, live mobile TV and video clip uploads and downloads (especially for sites such as YouTube, MySpace etc). Video clip downloading in particular is extremely popular, and the demand to watch videos on the go has been ignited by the emergence of the video iPod, with similar devices following from other vendors.

  • Impact on network traffic growth

In January 2009 Cisco forecast that globally, mobile data traffic will double every year, increasing 66 times between 2008 and 2013. Mobile data traffic will grow at a CAGR of 131 per cent between 2008 and 2013, reaching over 2 exabytes per month by 2013. Confirming the paragraphs above, Cisco said that almost 64 per cent of the world's mobile traffic will be video by 2013. Mobile video will grow at a CAGR of 150 per cent between 2008 and 2013.
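
A quick arithmetic check shows how those figures fit together: a CAGR of 131 per cent is a multiplier of 2.31 each year, which compounds to roughly 66 times over the five years from 2008 to 2013.

```python
# Reconciling the quoted figures: a 131 per cent CAGR is a multiplier
# of 2.31 per year, which compounds to roughly 66x over five years.
factor_per_year = 1 + 1.31
total_growth = factor_per_year ** 5
print(f"{factor_per_year:.2f}x per year -> {total_growth:.0f}x over five years")
```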

  • The need for mobility

The number of mobile subscribers worldwide has grown by a factor of more than 15 over the last ten years, and actually surpassed the worldwide fixed line base in 2001-2002. Mobile subscriber density has shown strong growth ever since, while fixed line density has experienced low or no growth. In the same period, the number of PCs has grown by a factor of nearly three, whilst the number of Internet users has grown more than 11 times. Fixed lines are very much the poor relation, and in the last couple of years the number of fixed lines has begun to decline.

LTE market opportunity
There will be considerable change to the global mobile technology base over the next five years:

  • Subscribers in developed nations and regions will migrate upwards from 3G to existing mobile broadband such as HSPA
  • A limited number of high end enterprise and consumer subscribers in developed nations and regions will then migrate further upwards to LTE
  • Developing nations and regions will see considerable growth in 2G and 2.5G as people and businesses seek first time connectivity ahead of more sophisticated services, and sometimes instead of acquiring fixed network access
  • A limited number of high end subscribers in developing nations will migrate towards newer generation technologies

Juniper Research forecasts that the LTE service revenue opportunity for mobile network operators will exceed $70bn pa by 2014, with the main regional markets in North America, Western Europe and the Far East & China.

This article is based on Juniper Research's report: LTE: The Future of Mobile Broadband 2009 - 2014.
Howard Wilcox is a Senior Analyst with Juniper Research.
www.juniperresearch.com

The recent focus on privacy issues surrounding behavioural advertising is only the tip of the iceberg, says Lynd Morley

European Telecoms Commissioner Viviane Reding has been placing the issue of privacy firmly on the communications agenda of late, and the subject has - particularly in the UK - been causing quite a stir.  Even the British national press has been exercised about it - something of an unusual occurrence, given their more normal propensity to fill pages with scandals that are more accessible and simpler to understand than the complexities of the gradual erosion of privacy now taking place.

The current fuss is largely due to the fact that the European Commission could pursue legal action against the UK Government, because the latter has paid little attention to the Commission's concerns about the use of Phorm software to monitor the Internet browsing habits of users without their consent.

The Phorm system, used, for instance, in a number of trials carried out by BT over its broadband network, offers a behavioural advertising facility, targeting adverts at users based on the types of sites they have visited.  The catch as far as the Commission is concerned is that neither BT nor Phorm asked users' permission to gather and use this information.

The EU directive on privacy and electronic communications basically says that member states must ensure the confidentiality of data on communications and related data traffic by prohibiting unlawful interception and surveillance unless the users concerned have consented to such activity.

Reding reinforces the sentiment in a recent statement, noting: "Europeans must have the right to control how their personal information is used.  European privacy rules are crystal clear - your information can only be used with your prior consent."

Clearly, there should be considerable cause for concern in the UK - not only among its citizens whose rights to privacy under European directives are being ignored, but also in Government, which now risks legal action by the EU.

But while the Phorm affair has served to raise the profile (if only en passant) of privacy issues, it is by no means the only privacy concern that Europe should be turning its attention to. Reding has certainly pointed to other areas within communications technology that warrant close observation, including the significant amounts of data that social networking sites hold on their users, and the increasing use of RFID chips in a wide range of products. And while the UK Government might fairly be accused of a certain laxity in its attitude to privacy issues, the country's Information Commissioner's Office (ICO) has been focusing attention on the sometimes complex requirements central to establishing effective information privacy practices. At the end of last year, for instance, the ICO issued an in-depth document on the subject - Privacy by Design. Prepared by the Enterprise Privacy Group, the report is intended as a first step in the privacy by design programme, which aims to encourage public authorities and private organisations to ensure that, as information systems that hold personal information are developed, privacy concerns are identified and addressed from first principles.

The ICO noted in its introduction to the report: "The capacity of organisations to acquire and use our personal details has increased dramatically since our data protection laws were first passed.  There is an ever-increasing amount of personal information collected and held about us as we go about our daily lives.  Although we have seen a dramatic change in the capability of organisations to exploit modern technology that uses our information to deliver services, this has not been accompanied by a similar drive to develop new effective technical and procedural privacy safeguards."

Toby Stevens, Director of the Enterprise Privacy Group, notes that the barriers to successful adoption of privacy safeguards include not only an ongoing lack of awareness of privacy needs at executive management level within organisations - often driven by uncertainty about the potential commercial benefits of privacy-friendly practices - but also the fundamental conflict between privacy needs and the pressure to share personal information within and outside organisations.

"Addressing privacy issues at the start of systems development," he explains, "can have significant business benefits, and in some circumstances ensure that new ventures do not run into privacy problems that can severely delay time to market."
www.ico.gov.uk
www.privacygroup.org

Jon Wells discusses how pressures in emerging markets are forcing OSS to change, for the benefit of all

The emerging telecoms markets are no less demanding than those in Western Europe or North America, but they do present substantially different requirements. These markets often present challenges that telcos in developed markets have not had to contend with, but may soon find themselves facing - particularly regarding the global economic slowdown. They are also extremely lucrative, with OSS Observer forecasting revenue growth in emerging markets at 11 per cent from 2007-2012.

OSS is essential for telcos in emerging markets since it helps them operate efficiently, leverage economies of scale, keep up with intense competition, engage with increasingly technology-aware consumers and create innovative services. It also helps telcos manage technology refresh, whether initiated to reach new customers with next-generation services, to replace creaking infrastructure or to "leap-frog" to next-generation networks (NGN).

In the West, the traditional approach to OSS is ‘best of breed', an approach potentially unsuited to emerging markets. In contrast, Unified OSS - an open, NGOSS-based, modular, pre-integrated, end-to-end OSS solution - presents operators in emerging markets with sophisticated OSS without the associated long lead times and high costs. Market analysts, such as Frost & Sullivan and Yankee Group, are increasingly aware of the opportunity that Unified OSS presents to operators seeking sophisticated OSS.

Falling average revenue per user (arpu), increased customer focus and technology refresh are having an impact globally. Furthermore, many predict that 2009-2010 will be a period of market contraction and pronounced arpu shrinkage for North America and Western Europe, but that emerging markets, such as APAC, will be less affected. This combination will make the OSS practices of APAC even more applicable in developed markets. With local pressures pushing operators in emerging markets towards a ‘quantum leap' in OSS, what lessons can emerging markets offer the global OSS community?

Most operators in emerging markets must contend with comparatively low arpu. The estimated arpu in India is around US$8 per month - only slightly lower than in Indonesia, the Philippines, Malaysia, Thailand and China, but around a tenth that of some Western European operators. However, this low arpu is offset by a huge potential for customer growth. For operators in emerging markets, the key is in accessing their large, often rural populations that typically have low tele-density, thus supporting business models based on rapid growth and high customer subscription. For example, India covers three million square kilometres, and 70 per cent of its 1.1 billion population lives in rural areas with tele-density of around two per cent. While the opportunity for customer growth is clear, automation and intelligent management of manual activities, leading to operational efficiency, are critically important when maintaining services over such a geographical extent.

Some operators in Asia are achieving ratios of staff to subscribers that are almost half that of counterparts in Western Europe and North America; one Indian operator is achieving a ratio of 1:1750. This is being achieved initially through rapid growth in subscribers but to sustain this and turn it into operational efficiency, operators look to their OSS to automate and manage the end-to-end operational processes.

Operators in Eastern Europe and the Commonwealth of Independent States (CIS) are challenging their legacy platforms as they experience demand for broadband services. OSS Observer forecasts that residential broadband subscriptions will grow faster than revenue, at a compound annual growth rate of 27 per cent, as the service is still relatively new and arpus are low. Simply put, the legacy OSS cannot efficiently, rapidly and reliably deliver the order-to-cash process, despite network availability and a consumer base demanding higher-value services. Many operators are replacing legacy with new OSS, often delivering many functions simultaneously. One Eastern European operator recently started an OSS project covering inventory, order management, activation, field-force logistics and trouble-tickets. But time is of the essence, and the transfer of subscribers from low to high-value services cannot wait for traditional OSS lead-times.

In emerging markets, an OSS must take the strain of a rapidly expanding customer base, since this offsets low arpu. Expansion can be extremely rapid - some operators in emerging markets achieve tens of millions of subscribers within a few years, and monthly growth of one million subscribers is fairly common. Where the subscriber base already exists, as in Eastern Europe, the OSS must support consumer demands to rapidly transition from low to high revenue services.

Operators in emerging markets need OSS that helps them "go-live" with services quickly and manage the transition from low to high revenue services. This rapid increase in subscriber numbers or service revenue is often essential for the business plan. This is doubly important because operators in emerging markets have often invested heavily in infrastructure and strive for high utilisation through customer growth to balance costs. One emerging market operator estimates that the right choice of OSS saved around US$200M in lifetime integration costs and delivered sophisticated OSS functionality two years earlier, when compared to ‘best of breed' OSS. Within seven months of starting up, they were the country's largest mobile operator.

Subscribers in emerging markets are technology literate, and competition is relentless throughout this intense growth period. Competition is a major reason why India has some of the lowest mobile rates in the world, at two cents per minute. The need to defend market share and capture new subscribers drives innovation in service offerings. In addition to coping with the demands of growth, the OSS for emerging markets must reduce time-to-market for new products. Demands for 12-15 new products and features per year for mobile service providers in emerging markets are not unheard of, and are being supported by Unified OSS today.

A common misconception is that subscribers in emerging markets are not demanding. In reality, customers in emerging markets have extremely high expectations. The level of competition for subscribers may drive to extinction those operators that do not address customer experience, innovate, and improve their product portfolios and service level agreements (SLAs).

Just as in developed markets, an OSS must intelligently map network status, planned outages and provisioning key performance indicators (KPIs) to customer-facing SLAs, and coordinate and prioritise responses when SLAs are in jeopardy or breach. Whilst automation and efficient manual processes remain the fundamental means of maintaining excellent customer experience, SLA management can gauge and improve that experience, focusing management on the subscriber's needs.
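
As a minimal sketch of that jeopardy logic - the data model below is an assumption made for illustration, not any real OSS schema - impacted SLAs can be ranked by time remaining to breach and by the penalty at stake:

    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import List, Set, Tuple

    # Hypothetical model: each customer-facing SLA lists the network
    # resources it depends on, so network events can be mapped to it.
    @dataclass
    class Sla:
        customer: str
        resource_ids: Set[str]      # resources the SLA depends on
        restore_within: timedelta   # contractual restoration window
        penalty: float              # cost of a breach, used for ranking

    @dataclass
    class NetworkEvent:
        resource_id: str
        started: datetime

    def slas_in_jeopardy(slas: List[Sla], events: List[NetworkEvent],
                         now: datetime) -> List[Tuple]:
        """Rank impacted SLAs: nearest to breach and most costly first."""
        ranked = []
        for event in events:
            for sla in slas:
                if event.resource_id in sla.resource_ids:
                    time_left = sla.restore_within - (now - event.started)
                    # negative time_left means the SLA is already breached
                    ranked.append((time_left, -sla.penalty,
                                   sla.customer, event.resource_id))
        return sorted(ranked)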

The same is true when viewed from the customer perspective. Customers expect the call-centre staff to be informed, to map the customer reported fault to a known network fault intelligently, give reassurance that the resolution is progressing and provide a restoration time. Only the OSS is positioned to support this.

In ‘best of breed' OSS, maintaining customer centric perspectives is often the culmination of years of evolution. To meet demands of customers in emerging markets, telecoms providers simply cannot wait. Unified OSS can implement customer centric management without major integration projects.

For many developing countries, next-generation technologies are not a long-term aim but a starting point, since they can solve many of the problems facing operators.

Various operators in emerging markets are building broadband optic fibre networks, completely bypassing the copper lines still used in many developed countries. In just a few years, India-based Reliance Communications has built the world's largest IP-enabled optic fibre cable network, with 230,000km now laid. Compared to copper cable, optical fibre provides far more bandwidth, whilst being cost comparable and less subject to theft. Telekom Malaysia's HSBB project will receive RM2.4B of investment from the Malaysian government, as the country proactively tackles its relatively low broadband penetration.

Singapore's Government recently announced that its Next Generation National Broadband Network will be nationwide by 2012, providing all homes and offices with access to the new, pervasive, all-fibre network. Similar government initiatives are found around the world. For example, entry into the European Union is driving infrastructure refresh in Eastern European and CIS countries.

Instead of deploying copper or fibre, many countries are deploying wireless coverage to provide an instant broadband service. Wireless broadband is an excellent means of reaching rural or transient populations and coverage ‘black spots'. Unlike copper cable, wireless broadband equipment can be secured against theft, and it removes much of the cost of laying and maintaining hundreds of kilometres of infrastructure.

One shared characteristic of most emerging markets is that they are a hive of innovation and experimentation. In Africa, 3G and CDMA2000 are currently capturing public interest, but this may be challenged by WiMAX, while technologies such as Power Line Communications (PLC) continue to exploit niche opportunities. Operators in Africa are evaluating technology, looking for the best fit for their specific challenges, and OSS must support this evolution. With current residential broadband penetration at only one per cent, there is huge potential for rapid expansion of service.

Unified OSS focuses on simplification through pre-integration, consolidation of operational data and centralised workflow spanning end-to-end operational processes; from SLA management to field-force logistics. Unified OSS can deploy faster and with lower risk than ‘best of breed' OSS solutions, avoiding integration and data synchronisation costs. It helps operators in emerging markets achieve ROI on their infrastructure investments sooner and, through simplicity and flexibility, allows operators to engage their subscribers with innovative products over evolving networks.

With arpus falling worldwide, operators are now desperately adding value to their services and, increasingly, medium and high arpu countries may feel the bite of revenue reductions on their operations and question whether their networks provide the tools necessary to exploit economies of scale. With 2009-2010 set to be particularly challenging years in terms of revenue, the parallels between OSS practices in emerging and developed countries are that much more pertinent. The approaches emerging markets have taken to overcome these problems have been hard learned, and Western operators ignore them at their peril.

Jon Wells is OSS consultant at Clarity International
www.clarity.com

While IP appears to have simplified telecoms, Christoph Kupper, Executive Vice President of Marketing at Nexus Telecom, tells Lynd Morley that the added complexity of monitoring the network - due largely to exploding data rates - has led to a new concept providing both improved performance and valuable marketing information

Nexus Telecom is, in many ways, the antithesis of the now predominant imperative in most industries - and certainly in the telecoms industry - which requires wholesale commoditisation of services; an almost exclusive focus on speed to market; and a fast response to instant gratification.

Where the ruling mantra is in danger of becoming "quantity not quality" in a headlong rush to ever greater profitability (or possibly, mere survival), Nexus Telecom calls something of a halt, focussing the spotlight on the vital importance of high quality, dependable service that not only ensures the business reputation of the provider, but also leads to happy - and therefore loyal - customers.

Based in Zurich, Nexus Telecom is a performance and service assurance specialist, providing data collection, passive monitoring and network service investigation systems.  The company's philosophy centres around the recognition that the business consequences of any of the network's elements falling over are enormous - and only made worse if the problem takes time to identify and fix.  Even in hard economic times, the investment in reliability is vital.

The depressing economic climate does not, at the moment, appear to be hitting Nexus Telecom too directly.  "Despite the downturn, we had a very good year last year," comments Christoph Kupper, Executive Vice President of Marketing at Nexus Telecom.  "And so far, this year, I don't see any real change in operator behaviour. There may be some investment problems while the banks remain hesitant about extending credit, but on the whole, telecom is one of the solid businesses, with a good customer base, and revenues that are holding up well."

The biggest challenge for Nexus Telecom is not so much the economy as one of perception and expectation, with some operators questioning the value and cost of OSS tools - which, relative to the total cost of the network, have increased over the years.  In the past few years the price of network infrastructure has come down by a huge amount, while network capacity has risen.  But while the topological architecture of the network is getting simpler - everything running over big IP pipes - the network's operating complexity is vastly increasing.  So the operator sees the capital cost of the network being massively reduced, but that reduction isn't being mirrored by similarly falling costs in the support systems.  Indeed, because of the increased complexity, the costs of the support systems are going up.

Complexity is not, of course, always a comfortable environment to operate in.  Kupper sees some of the culture clash that arises whenever telecom meets IT, affecting the ways in which the operators are tackling these new complexities.

"In my experience, most telecom operators come from the telco side of the road, with a telecom heritage of everything being very detailed and specified, with very clear procedures and every aspect well defined," he says.

"Now they're entering an IP world where the approach is a bit looser, with more of a ‘lets give it a try' attitude, which is, of course, an absolute horror to most telcos."

Indeed, there may well be a danger that network technology is becoming so complex that it is now getting ahead of some CTOs and telecom engineers.

"There can be something of a ‘fear factor' for the engineers, if ever they have an issue with the network," Kupper says.  "And there are plenty of issues, given that these new switching devices can be configured in so many ways that even experienced engineers have trouble doing it right.

"Once the technical officers become fully aware of these issues, the attraction of a system such as ours, which gives them better visibility - especially independent visibility across the different network domains - is enormous.

"It only takes one moment in a CTO's life when he loses control of the network, to make our sale to him very much easier."

The sales message, however, depends on the recognition that increased complexity in the network requires more, not less, monitoring, and that tools which may be seen as desirable but not absolutely essential (after all, the really important thing is to get the actual network out there - and quickly) are, in fact, vital to business success.  That is not always an easy message to get across to those whose background in engineering means they do not always think in terms of business risk.

Kupper recognises that the message is not as well established as it might be. "We're not there yet," he says.  "We still need to teach and preach quite a lot, especially because the attraction of the ‘more for less' promise of the new technology elements hides the fact that operational expenditure on the management of a network with vastly increased traffic and complexity, is likely to rise."

The easiest sales are to those technical officers who have a vision, and who are looking for the tools to fulfil it.  "They want to have control of their networks," says Kupper. "They want to see their capacity, be able to localise it, and see who's affected."

And once Nexus Telecom's systems are actually installed, he stresses, no one ever questions their necessity. 

"The asset and value of these systems is hard to prove - you can't just put it on the table. It's a more complicated qualitative argument that speaks to abstract concepts of Y resulting from the possible failure of X, but with no exact mathematical way to calculate what benefits your derive from specific OSS investment."

So the tougher sales are to the guys who don't grasp these concepts, or who remain convinced that any network failure is the responsibility of the network vendors, who must therefore provide the remedy - without taking into account how long that might take, the subsequent impact on client satisfaction and, ultimately, business success.

These concepts, of course, are relevant to the full range of suppliers, from wireline and cable operators to the new mobile kids on the block.  Indeed, Kupper stresses that with the advent of true mobile data broadband availability, following the change to IP, and the introduction of flat rates allowing users to make unlimited use of the technology, the cellular operator has positioned himself as a true contender against traditional wireline and cable operators.

Kupper notes: "For years in telecommunications, voice was the data bearer that did not need monitoring - if the call didn't work, the user would hang up and redial - a clearly visible activity in terms of signalling procedure analysis.

"But with mobile broadband data, the picture has changed completely.  It is the bearer that needs analysis, because only the bearer enables information to be gleaned on the services that the mobile broadband user is accessing.  The network surveillance tools, therefore, must not only analyse the signalling procedure but also, and most importantly, the data payload.  It is in the payload that we see if, for example, Internet browsing is used, which URL is accessed, which application is used, and so forth. And it is only the payload, for which the subscriber pays!"

He points out that as a consequence of the introduction of flat rates and the availability of 3G, data rates have exploded.

"It is now barely possible to economically monitor such networks by means of traditional surveillance tools.  A new approach is needed, and that approach is what we call ‘Intelligent Network Monitoring'. At Nexus Telecom we have been working on the Intelligent Network Monitoring concept for about two years now, and have included that functionality with every release we have shipped to customers over that period.  Any vendor's monitoring systems that do not include developments incorporating the concepts of mass data processing will soon drown in the data streams of  telecom data networks."

Basically, he explains, the monitoring agents on the network must have the ability to interpret the information obtained from scanning the network ‘on the fly'.  "The network surveillance tools need a staged intelligence in order to process the vast amount of data; from capturing to processing, forwarding and storing the data, the system must, for instance, be able to summarise, aggregate and discard data while keeping the essence of subscriber information and its KPI to hand - because, at the end of the day, only the subscriber experience best describes the network performance. And this is why Nexus Telecom surveillance systems provide the means always to drill down in real-time to subscriber information via the one indicator that everyone knows - the subscriber's cell phone number."
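
In skeleton form, that staged approach amounts to folding each captured record into rolling per-subscriber KPIs and then discarding the raw data. The record fields and KPI names below are assumptions made for this sketch, not Nexus Telecom's actual data model:

    from collections import defaultdict

    class SubscriberKpiAggregator:
        """Staged reduction: inspect each record once, keep only KPIs."""
        def __init__(self):
            self.kpis = defaultdict(lambda: {"sessions": 0, "bytes": 0,
                                             "failures": 0})

        def ingest(self, record: dict) -> None:
            # Fold one monitoring record into rolling per-subscriber KPIs,
            # keyed by the subscriber's phone number (MSISDN).
            k = self.kpis[record["msisdn"]]
            k["sessions"] += 1
            k["bytes"] += record.get("bytes", 0)
            k["failures"] += 1 if record.get("failed") else 0
            # The raw payload record is now discarded, not stored.

        def drill_down(self, msisdn: str) -> dict:
            # Real-time lookup by the one key everyone knows: the number.
            return dict(self.kpis[msisdn])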

All this monitoring and surveillance obviously plays a vital role in providing visibility into complicated, multi-faceted next generation systems behaviour, facilitating fast mitigation of current and potential network and service problems to ensure a continuous and flawless end-customer experience.  But it also supplies a wealth of information that enables operators to better develop and tailor their systems to meet their customers' needs.  In other words, a tremendously powerful marketing tool.

"Certainly,' Kupper confirms, "the systems have two broad elements - one of identifying problems and healing them, and the other a more statistical, pro-active evaluation element.  Today, if you want to invest in such a system, you need both sides.  You need the operations team to make the network as efficient as possible, and you also need marketing - the service guys who can offer innovative services based on all the information that can be amassed using such tools."

Kupper points out that drawing in other departments and disciplines may, in fact, be essential in amassing sufficient budget to cover the system.  The old days when the operations manager could simply say ‘I need this type of tool - give it to me' are long gone, and in any case their budgets, these days, are nothing like big enough to cover such systems.  Equally, however, the needs of many different disciplines and departments for the kind of information Nexus Telecom systems can provide are increasing, as the highly competitive marketplace makes responding to customer requirements and preferences absolutely vital.  Thus the systems can prove to be of enormous value to the billing guys, the revenue assurance and fraud operations, not to mention the service development teams.  "Once the system is in place," Kupper points out, "you have information on every single subscriber regarding exactly which devices and services he most uses, and therefore his current, and likely future, preferences.  And all this information is real-time."

Despite the apparent complexity of the sales message, Nexus Telecom is in buoyant mood, with good penetration in South East Asia and the Middle East, as well as Europe.  These markets vary considerably in terms of maturity, of course, and Kupper points out that OSS penetration is very much a lifecycle issue.  "When the market is very new, you just push out the lines," he comments.  "As long as the growth is there - say the subscriber growth rate is bigger than ten per cent a year - you're probably not too concerned about the quality of service or of the customer experience.

"The investment in monitoring only really registers when there are at least three networks in a country and the focus is on retaining customers - because the cost of gaining new customers is so much higher than that of hanging on to the existing ones.

"Monitoring systems enable you to re-act quickly to problems.  And that's not just about ensuring against the revenue you might lose, but also the reputation you'll lose.  And today, that's an absolutely critical factor."

The future of OSS is, of course, intrinsically linked to the future of the telcos themselves.  Kupper notes that the discussion - which has been ongoing for some years now - around whether telcos will become mere dumb pipe providers, or will arm themselves against a variety of other players with content and tailored packages, has yet to be resolved.  In the meantime, however, he is confident that Nexus Telecom is going in the right direction.

"I believe our strategy is right.  We currently have one of the best concepts of how to capture traffic and deal with broadband data.

"The challenge over the next couple of years will be the ability to deal with all the payload traffic that mobile subscribers generate.  We need to be able to provide the statistics that show which applications, services and devices subscribers are using, and where development will most benefit the customer - and, of course, ultimately the operator."

Lynd Morley is editor of European Communications

Over the past few years, demand for data centre services has been expanding hugely, boosted by the growth of content-rich services such as IPTV and Web 2.0. With the increased bandwidth available, enterprises are hosting more of their applications and data in managed data centre facilities, as well as adopting the Software-as-a-Service (SaaS) model. David Noguer Bau notes that there's a long list of innovations ready to improve the overall efficiency and scalability of the data centre, but network infrastructure complexity may prevent such improvements - putting at risk emerging business models such as SaaS, OnDemand infrastructure, and more

The data centre is supposed to be the house of data - storage and applications/servers - but after a quick look at any data centre it's obvious that a key enabler is also hosted there: the network and security infrastructure.

The data centre network has become overly complex, costly, and extremely inefficient, limiting flexibility and overall scalability. Arguably, it is the single biggest hurdle preventing businesses from fully reaping the productivity benefits offered by other innovations occurring in the data centre, including server virtualisation, storage over Ethernet, and evolution in application delivery models. Traditional architectures that have stayed unchanged for a decade or more employ excessive switching tiers, largely to work around the low-performance, low-density characteristics of the devices used in those designs. Growth in the number of users and applications is almost always accompanied by an increase in the number of "silos" of more devices - both for connectivity and for security. Adding insult to injury, these upgrades introduce new, untested operating systems to the environment. The ensuing additional capital expense, rack space, power consumption, and management overhead directly contribute to the overall complexity of maintaining data centre operations. Unfortunately, instead of containing the costs of running the data centre and reallocating the savings into the acceleration of productivity-enhancing business practices, the IT budget continues to be misappropriated into sustaining existing data centre operations.

Data centre consolidation and virtualisation trends are accelerating in an effort to optimise resources and lower costs. Consolidation, virtualisation and storage services are placing higher performance and security demands on the network infrastructure. While server virtualisation improves server resource utilisation, it also greatly increases the amount of data traffic across the network infrastructure. Applications running in a virtualised environment require low latency, high throughput, robust QoS and high availability (HA). Increased traffic-per-port and performance demands tax the traditional network infrastructure beyond its capabilities. Furthermore, the future standardisation of Converged Enhanced Ethernet (CEE) - with the aim of integrating low-latency storage traffic - will place even greater bandwidth and performance demands on the network infrastructure.

Additionally, new application architectures, such as Service Oriented Architecture (SOA) and Web Oriented Architecture (WOA), and new services - cloud computing, desktop virtualisation, and Software as a Service (SaaS) - introduce new SLA models and traffic patterns. These heightened demands often require new platforms in the data centre, contributing to increased complexity and cost. Data centres are rapidly migrating to a high-performance network infrastructure - scalable, fast, reliable, secure and simple - to improve data centre-based productivity, reducing operational cost while lowering time to market for new data centre applications.

The way data centre networks have traditionally been designed is very rigid, based on multiple tiers of switches and unresponsive to the real demands of highly distributed applications and virtualised servers. By employing a mix of virtualisation technologies in the data centre network architecture as well - such as clusters of switches with VLANs and MPLS-based advanced traffic engineering, VPN-enhanced security, QoS, VPLS, and other virtualisation services - the model becomes more dynamic. These technologies address many of the challenges introduced by server, storage and application virtualisation. For example, the Juniper Networks Virtual Chassis technology supports low-latency server live migration from server to server in completely different racks within a data centre, and from server to server between data centres in a flat Layer 2 network when those data centres are within reasonably close proximity. Furthermore, Virtual Chassis combined with MPLS/VPLS allows the Layer 2 domain to extend across data centres to support live migration from server to server when data centres are distributed over significant distances. These virtualisation technologies provide the low latency, throughput, QoS and HA required by server and storage virtualisation. MPLS-based virtualisation addresses these requirements with advanced traffic engineering to provide bandwidth guarantees, label switching and intelligent path selection for optimised low latency, traffic separation as a security element, and fast reroute for HA across the WAN. MPLS-based VPNs enhance security, with QoS to efficiently meet application and user performance needs.

As we can see, adding virtualisation technologies at the network level, as well as at the server and application levels, serves to improve efficiency and performance with greater agility while simplifying operations. For example, acquisitions and new networks can be quickly folded into the existing MPLS-based infrastructure without reconfiguring the network to avoid IP address conflicts. This approach creates a highly flexible and efficient data centre WAN.

A major trend is data centre consolidation. Many service providers are looking to reduce from tens of data centres to three or four very large ones. The architecture of each new data centre network is challenging, and collapsing layers of switches alleviates this. However, with consolidation, the large number of sub-10Gbps security appliances (FW, IDP, VPN, NAT, with the corresponding HA and load balancing) becomes unmanageable and represents a real bottleneck. Traditionally, organisations have been forced to balance and compromise on network security versus performance. In the data centre space this trade-off is completely unacceptable, and the infrastructure must provide the robust network security desired, with performance to meet the most demanding application and user environments.

The evolution and consolidation of data centres will provide significant benefits; that goal can be achieved by simplifying the network, collapsing tiers, and consolidating security services. This network architecture delivers operational simplicity, agility and greater efficiency to the data centre. Applications and service deployments are accelerated, enabling greater productivity with less cost and complexity. The architecture addresses the needs of today's organisations as they leverage the network and applications for the success of their business.

David Noguer Bau, Service Provider Marketing EMEA, Juniper Networks
www.juniper.net

As users become increasingly intolerant of poor network quality, Simon Williams, Senior VP Product Marketing and Strategy at Redback Networks tells Priscilla Awde that, in order to meet the huge demand for speed and efficiency, the whole industry is heading in the same direction - creating an all IP Ethernet core using MPLS to prioritise packets regardless of content

Speed, capacity, bandwidth, multimedia applications and reliable any time, anywhere availability from any device - tall orders all, but these are the major issues facing every operator whether fixed or mobile. Meeting these needs is imperative given the global telecoms environment in which providing consistently high quality service levels to all subscribers is a competitive differentiator. There is added pressure to create innovative multimedia services and deliver them to the right people, at the right time, to the right device but to do so efficiently and cost effectively.

Operators are moving into a world in which they must differentiate themselves by the speed and quality of their reactions to rapid and global changes. Networks must become faster, cheaper to run and more efficient, to serve customers increasingly intolerant of poor quality or delays. It is a world in which demand for fixed and mobile bandwidth hungry IPTV, VoD and multimedia data services is growing at exponential rates leaving operators staring at a real capacity crunch.

To help operators transform their entire networks and react faster to demand for capacity and greater flexibility, Ericsson has created a Full Service Broadband initiative which marries its considerable mobile capabilities with similar expertise in fixed broadband technologies. With the launch of its Carrier Ethernet portfolio, Ericsson is leveraging the strength of the Redback acquisition to develop packet backbone network solutions that deliver converged applications using standards based IP MPLS (Multi Protocol Label Switching), and Carrier Ethernet technologies.

Committed to creating a single end-to-end solution from network to consumer, Ericsson bought Redback Networks in 2007, thereby establishing the foundation of Ericsson IP technology but most importantly acquiring its own router and IP platform on which to build up its next generation converged solution.

In the early days of broadband deployment, subscriber information and support was centralised, the amount of bandwidth used by any individual was very low and most were happy with best effort delivery. All that changed with growth in bandwidth hungry data and video applications, internet browsing and consumer demand for multimedia access from any device. The emphasis is now on providing better service to customers and faster, more reliable, more efficient delivery. For better control, bandwidth and subscriber management plus content are moving closer to customers at the network edge.

However, capacity demand is such that legacy systems are pushed to the limit both in handling current applications, let alone future services, and guaranteeing quality of service. Existing legacy systems are inefficient, expensive to run and maintain compared to the next generation technologies that transmit all traffic over one intelligent IP network. Neither do they support the business agility or subscriber management systems that allow operators to react fast to changing markets and user expectations.

Despite tight budgets, operators must invest to deliver and ultimately to save on opex. They must reduce networking costs and simplify existing architectures and operations to make adding capacity where it is needed faster and more cost effective.

The questions are: which are the best technologies, architectures and platforms and, given the current economic climate, how can service providers transform their operations cost effectively? The answers lie in creating a single, end-to-end intelligent IP network capable of efficiently delivering all traffic regardless of content and access devices. In the new IP world, distinctions between fixed and mobile networks, voice, video and data traffic and applications are collapsing. Infonetics estimates the market for consolidating fixed and mobile networks will be worth over $14 billion by 2011, and Ericsson, with Redback's expertise, is uniquely positioned to exploit this market opportunity.

Most operators are currently transforming their operations and, as part of the solution, are considering standards-based Carrier Ethernet as the broadband-agnostic technology platform. Ethernet has expanded beyond early deployments in enterprise and Metro networks: Carrier Ethernet allows operators to guarantee end-to-end service quality across their entire network infrastructure, enforce service level agreements, manage traffic flows and, importantly, scale networks.

With roots in the IT world where it was commonly deployed in LANs, Ethernet is fast becoming the de facto standard for transport in fixed and mobile telecoms networks. Optimised for core and access networks, Carrier Ethernet supports very high speeds and is a considerably more cost effective method of connecting nodes than leased lines. Carrier Ethernet has reached the point of maturity where operators can quickly scale networks to demand; manage traffic and subscribers and enforce quality of service and reliability.
 

"For the first time in the telecoms sector we now have a single unifying technology, in the form of IP, capable of transmitting all content to any device over any network," explains Simon Williams, Senior VP Product Marketing and Strategy at Redback Networks, an Ericsson company. "The whole industry is heading in the same direction: creating an all IP Ethernet core using MPLS to prioritise packets regardless of content.
 

"In the future, all operators will want to migrate their customers to fixed/mobile convergent and full service broadband networks delivering any service to any device anytime, but there are a number of regulatory and standards issues which must be resolved. Although standards are coming together, there are still slightly different interpretations of what constitutes carrier Ethernet and discussions about specific details of how certain components will be implemented," explains Williams.

Despite debates about different deployment methods, Carrier Ethernet, MPLS-ready solutions are being integrated into current networks, and Redback has developed one future-proof box capable of working with any existing platform.

Redback is expert in creating distributed intelligence and subscriber management systems for fixed operators, and now for mobile carriers; its solutions are both backward and forward compatible and can support any existing platform, including ATM, Sonet, SDH or frame relay. Redback is applying its experience in fixed broadband architectures to solving the capacity, speed and delivery problems faced by mobile operators. As the amount of bandwidth per user rises, the management of mobile subscribers and data is being distributed in much the same way as happened in the fixed sector.

Redback has developed SmartEdge routers and solutions to address packet core problems and operators' needs to deliver more bandwidth reliably. SmartEdge routers deliver data, voice or video traffic to any connected device via a single box connected to either fixed or mobile networks. Redback's solutions are designed to give operators a gradual migration path to a single converged network which is more efficient and cost effective to manage and run.

In SmartEdge networks with built-in distributed intelligence and subscriber management functionality, operators can deliver the particular quality of service, speed, bandwidth and applications appropriate to individual subscribers.

Working under the Ericsson umbrella and with access to considerable R&D budgets, Redback is expanding beyond multiservice edge equipment into creating metroE solutions, mobile backhaul and packet LAN applications. Its new SM 480 Metro Service Transport is a carrier class platform which can be deployed in fixed and mobile backhaul and transport networks; Metro Ethernet infrastructure and to aggregate access traffic. Supporting fixed/mobile convergence, the SM 480 is a cost effective means of replacing legacy transport networks and migrating to IP MPLS Carrier Ethernet platforms. The system can be used to build packet based metro and access aggregation networks using any combination of IP, Ethernet or MPLS technologies.

Needing to design and deliver innovative converged applications quickly to stay competitive, operators must build next generation networks. Despite the pressures on the bottom line, most operators see the long-term economic advantages of building a single network architecture. Moving to IP MPLS packet based transmission and carrier Ethernet creates a content and device agnostic platform over which traffic is delivered faster and over a future proof network. Operators realise the cost and efficiency benefits of running one network in which distinctions between fixed and mobile applications are eliminated.

Although true convergence of networks, applications and devices may be a few years away, service providers are deploying the necessary equipment and technologies. IP MPLS and carrier Ethernet support both operators' needs for speed, flexibility and agility and end user demand for quality of service, reliability and anywhere, anytime, any device access.
 

"Ultimately however, there should be less focus on technology and more on giving service providers and their customers the flexibility to do what they want," believes Williams. "All operators are different but all need to protect their investments as they move forward and implement the new technologies, platforms and networks. Transformation is not only about technology but is all about insurance and investment protection for operators ensuring that solutions address current and future needs."

Priscilla Awde is a freelance communications journalist

With each day, the complexity of market offerings from telecommunication operators grows in scope. It is therefore vital to present individual offers to end customers in an attractive, simple and understandable manner. Together with meeting target profits and other financial measures, this is the principal goal of the marketing department for all communication service providers, says Michal Illan

Within the OSS/BSS environment, forming clear and understandable market offerings is as important for the business as the factors described above. There is a huge difference between maintaining all key information about market offerings through various GUIs and different applications, and having it instantly at your fingertips in an organised manner. The latter option saves time and reduces the probability of human error, which makes a significant difference in both time-to-market and the accuracy of the offering, ordering and charging processes experienced by the end customer.

Market offerings have the following principal aspects, usually defined during the offer design process (a simplified sketch of how such an offering might be modelled follows the list):

  • General idea (defining the scope of the offer)
  • Target market segment
  • Selection of applicable sales channels
  • Definition of services and their packaging
  • Definition of pricing
  • Definition of ordering specifics
  • Definition of the order fulfilment process
  • Marketing communication (from the first advertising campaign through to communication at points of sale or scripts prepared for call centre agents)
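
As an illustration of how these aspects might hang together as data - every class and field below is an assumption made for this sketch, not any vendor's schema:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class PriceComponent:
        name: str        # e.g. "monthly fee", "per-minute rate"
        amount: float
        unit: str        # "month", "minute", "MB", ...

    @dataclass
    class Service:
        name: str
        prices: List[PriceComponent]

    @dataclass
    class MarketOffering:
        name: str
        target_segment: str
        sales_channels: List[str]
        services: List[Service]
        fulfilment_process: str   # reference to an order-flow definition
        marketing_copy: str

    offer = MarketOffering(
        name="Student Mobile 500",
        target_segment="students",
        sales_channels=["web", "retail"],
        services=[Service("voice",
                          [PriceComponent("per-minute rate", 0.02, "minute")])],
        fulfilment_process="standard-mobile-activation",
        marketing_copy="500 minutes for less.",
    )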

It is apparent that market offerings aren't static objects at all; on the contrary, they are very dynamic entities, and most of a communication provider's OSS/BSS departments have some stake in their success.

This leads directly to the key question: "Which environment can support a market offering and enable unified and cooperative access to it by appropriate teams during the proper phases of its lifecycle?"

If it is to exist in practice, the environment that addresses all of the above-mentioned aspects must be materialised in the form of an information system or application.

Putting Clarity into Practice
The closest match to the requirements described above is an OSS/BSS building block called Product Catalogue. 
Product Catalogue is usually represented by the following three aspects:

  • A unified GUI that enables all key operations for managing a Market Offering during its lifecycle
  • Back-end business logic and a configuration repository
  • Integration with key OSS/BSS systems

In terms of integration, the functions supported by an ideal Product Catalogue also define which OSS/BSS systems it must connect to. Product Catalogue should be integrated with a market segmentation system (i.e. some BI or analytical CRM), ordering, order fulfilment, provisioning, charging and billing, and CRM. These systems should either provide some data to Product Catalogue or use it as the master source of information related to market offerings.
The necessity of integration in general is unquestionable; the only remaining issues are how the integration will be done and what the overall cost will be. Which type of integration takes place depends on a number of factors, discussed below.
 
The principal dilemma
There are three major options for positioning Product Catalogue within the OSS/BSS environment. Product Catalogue can be deployed as:

  • A standalone application
  • Part of a CRM system
  • Part of a Charging & Billing system

Product Catalogue as a Standalone Application
This option appears tempting at first because: "Who can have a better Product Catalogue than a company exclusively specialising in its development?" Unfortunately, troubles tend to surface later on, regardless of the attractiveness of the application's GUI.

When a telecommunications operator has intelligent charging and billing processes in place, an advanced standalone Product Catalogue can still produce massive headaches on the integration and customisation side of its deployment. Generally, telecom vendors are highly unlikely to guarantee compatibility with surrounding OSS/BSS systems, or to provide confidential pricing-logic definitions (or other advanced features) to a third-party vendor. What the operator gets is either a never-ending investment in customisation without clear TCO or ROI, or multiple incompatible systems.

The key point is that all the charming features of a standalone Product Catalogue are effectively useless without the surety of seamless integration and excellent support from the surrounding OSS/BSS systems.

Product Catalogue as part of a CRM system
This is without a doubt a better option than the first choice, because at least one side of the integration is guaranteed - and if ordering is part of the overall CRM system, then two sides are in the safe zone.

The only disadvantage of such an approach is that the pricing-logic richness of a CRM system's Product Catalogue is quite low, if present at all. Consequently, there is no principal gain in implementing a unified Product Catalogue as long as the definition of the price model and some additional key settings remain on the charging and billing system side. Such a setup is quite far from the ‘unified environment' described at the beginning of this article.

Product Catalogue as part of a charging and billing system
Complex pricing logic/modelling is not only the major differentiator of an operator's market offering; it is also the key to profitability in every price-sensitive market. Even in markets where consumers demand inexpensive flat-rate offers, it is still VAS offers (many using complex pricing logic) driving profits.

Implementation on the side of charging and billing is quite often the most challenging when compared to ordering or CRM, for example. Order fulfilment can also be quite a challenge, especially when considering the example of introducing complex, fixed-mobile convergent packages for the corporate segment; however, Product Catalogue itself has no major effect on its simplification.

We can say that out-of-the box compatibility between Product Catalogue and charging and billing significantly decreases the opex of a service provider as well as markedly shortens time-to-market for the introduction of new market offerings and the modification of existing ones.

Because the overall functional richness and high flexibility in the areas of pricing and convergence are really the key features of charging and billing systems nowadays, out-of-the-box compatibility and reduced costs should facilitate the greatest gains on the service provider's side.

Business benefits
There are a variety of direct and indirect benefits linked to the implementation of Product Catalogue into the OSS/BSS environment. All of them are related to three qualities that accompany any successful introduction of Product Catalogue - clarity, accessibility and systematisation.

Clarity
Product Catalogue's design supports the management of market offering lifecycles, bringing all involved parties within the telecommunication operator a better understanding of the related subjects, the level of their involvement and their role within the process. This decreases the level of confusion, which is usually unavoidable regardless of how well the processes are described on paper.

Accessibility
All market offerings are accessible and visible within a single environment, including the history of their changes and each offering's sub-elements. Anyone, according to their access rights, can view the sections of Product Catalogue applicable to their role.

There is no risk of discrepancies between market offering-related data in various systems, provided that the Product Catalogue repository is the master data source, as stated above. Access to correct data is an important aspect of information accessibility in general.

Systematisation
Product Catalogue not only enforces a certain level of systematisation of market offering creation and maintenance processes but also stores and presents all related business entities in a systematic manner, by default taking their integrity enforced by business logic into account.

Measurable benefits
All three qualities - clarity, accessibility and systematisation - can be translated into two key terms - time and money. A successful implementation of Product Catalogue brings significant savings on the telecommunication operator's side as well as guarantees a considerable shortening of time-to-market for introducing new market offerings. If these two goals are not accomplished by implementing Product Catalogue, such a project must be considered a failure.

A full version of this article can be found here

Michal Illan is Product Marketing Director, Sitronics Telecom Solutions
www.sitronics.com

Ensuring the effectiveness and reliability of complex next generation networks is a major test and measurement challenge.  Nico Bradlee looks for solutions

Almost without exception the world's major service providers are building flat hierarchical next generation networks (NGNs), capable of carrying voice, data and video traffic. They are creating a single core, access independent network, promising lower opex and enabling cost effective, efficient service development and delivery.

Easy on paper, but not so easy to realise the promised capex and opex savings, speedy service launches and business agility. Unlike traditional PSTNs, where equipment handles specific tasks, the IP multimedia subsystem (IMS) is a complex functional architecture in which devices receive a multitude of signals. Ensuring QoS and guaranteeing reliability in such a complex network is a test and measurement (T&M) nightmare. Top of the list of operators' priorities are equipment interoperability, protocol definitions, capacity and roaming, which the industry is working to resolve.

According to Frost & Sullivan, the global T&M equipment market earned revenues of $27.4 million in 2007, expected to rise to $1.2 billion in 2013. Ronald Gruia, principal analyst at Frost & Sullivan, suggests a change in thinking is needed: operators must reconsider capacity requirements and new ways of testing if they are to avoid surprises.

In the IMS environment there are exponentially more protocols and interfaces with networks and devices - legacy, fixed and wireless. Numerous functions interwork with others, and the number of signalling messages is an order of magnitude higher than in traditional networks. The situation is further complicated by a multi-vendor environment in which each function can be provided by different suppliers and, although conforming to standards, equipment may include proprietary features. The advantage is that operators can buy best-of-breed components and, providing they work together and conform to specifications, telcos can add functionality without investing in new platforms or changing the whole network architecture.

Like many new standards, IMS is somewhat fluid and open to interpretation. Although standards have been approved, they are often incomplete, are still evolving or may be ambiguous. Further, each of the different IMS standards organisations, which include 3GPP, ETSI, TISPAN and IETF, publishes regular updates. Vendors interpret standards according to the needs of their customers and may introduce new innovations which they refer to standards bodies for inclusion in future releases. "IMS standards don't define interoperability but interfaces and functions which may be misinterpreted or differently interpreted by vendors," explains Dan Teichman, Senior Product Marketing Manager, voice service assurance at Empirix.

The many IP protocols have advanced very rapidly but standards are still evolving so there is considerable flexibility and variation. "This is a new and exciting area," says Mike Erickson, Senior Product Marketing Manager at Tektronix Communications, "but it is very difficult to test and accommodate error scenarios which grow exponentially with the flexibility provided in the protocol.
 

"Rapid technology changes and variety make it difficult for people to become experts and it is no longer possible for customers to build their own T&M tools," continues Erickson. "However, new T&M systems are more intelligent, automated, easier to use and capable of testing the different types of access networks interfacing with the common core. Operators must be able to measure QOS and ensure calls can be set up end-to-end with a given quality - this facility must be built into the series of test tools used both in pre-deployment and in live networks."

IMS networks must be tested end-to-end: from the access to the core, including the myriad network elements, functions and connections/interfaces between them. While the types of tests vary little from those currently used in traditional networks, their number is exponentially higher. "Tests break down into functional tests; capacity testing, to ensure network components can handle both sustained traffic levels and surges; media testing - confirming multimedia traffic is transmitted reliably through the network; troubleshooting; and 24x7 network monitoring to identify anomalies and flag up problems," says Erickson. "The difference is that in relatively closed PSTNs, four to five basic protocols are being considered compared to hundreds in more open VoIP and IMS networks."

No single vendor or operator has the facilities to conduct comprehensive interoperability, roaming, capacity or other tests to ensure equipment conforms to different iterations of IMS, or to test the multiple interfaces with devices, gateways and protocols typical in NGNs. The MultiService Forum, a global association of service and system providers, test equipment vendors and users, recently concluded its GMI 2008 comprehensive IMS tests of over 225 network components from 22 participating vendors. Five host labs on three continents were networked together, creating a model of the telecoms world. Roger Ward, MSF President, says: "The results showed the overall architecture is complex and the choice of implementation significantly impacts interoperability. IMS protocols are generally mature and products interoperate across service provider environments. Most of the problems encountered were related to routing and configuration rather than protocols. IMS demonstrated the ability to provide a platform for convergence of a wide range of innovative services such as IPTV."

These essentially positive results support the need for continuous testing and monitoring before and during implementation, the results of which can be fed back into vendors' test and measurement teams for product development.

"Building products to emulate IMS functions means operators can buy equipment from multiple vendors, emulate and test functions before implementation and without having to build big test labs," says Teichman. "In IMS networks, T&M is not confined to infrastructure: the huge variety of user interfaces must be tested before implementation to avoid network service outages and QOS problems. While they have to test more functional interfaces, most traditional tests are still valid: although the methodology may be the same, the complexity is higher as many more tests are required to get the same information."

Operators face scalability issues as the number of VoIP users increases. The question, suggests Tony Vo, Senior Product Manager at Spirent, is whether IMS can support thousands of users. "Test solutions must generate high loads of calls. All tests are focused around SIP, so tests must emulate different applications. GMI 2008 verified the issues and companies can now develop solutions. However, from a T&M perspective, no one solution can solve all problems."
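
A skeleton of such call-load generation is sketched below; attempt_call() is a hypothetical stand-in that merely simulates a SIP INVITE/response exchange rather than implementing one, so the output is illustrative only:

    import asyncio, random, time

    async def attempt_call(callee: str) -> bool:
        # Stand-in for a real SIP transaction: simulated set-up delay
        # and a simulated 2 per cent set-up failure rate.
        await asyncio.sleep(random.uniform(0.01, 0.2))
        return random.random() > 0.02

    async def run_load(total_calls: int, concurrency: int) -> None:
        sem = asyncio.Semaphore(concurrency)   # cap simultaneous set-ups
        async def one(i: int) -> bool:
            async with sem:
                return await attempt_call(f"user{i}@example.net")
        start = time.perf_counter()
        results = await asyncio.gather(*(one(i) for i in range(total_calls)))
        elapsed = time.perf_counter() - start
        print(f"{sum(results)}/{total_calls} calls set up in {elapsed:.1f}s "
              f"({total_calls / elapsed:.0f} call attempts per second)")

    asyncio.run(run_load(total_calls=1000, concurrency=100))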

Nico Bradlee is a freelance business and communications journalist

In an era of increased competition, convergence, and complexity, workforce management has become more important than ever. Field technicians represent a large workforce, and any improvements in technician productivity or vehicle expense can show huge benefits. Likewise, the effectiveness of these technicians directly impacts the customer experience. Deft management of this workforce is more important than ever and requires sophisticated tools, says Seamus Cunningham

Today's communications service providers (CSPs) in the wireless, wireline, or satellite market are providing service activation and outage resolution to their customers - and need to continually do it better, faster, and cheaper. Further, they must do it in an environment of increasing complexity, with new and converged services and networks, and with an ever-growing base of customers. CSPs additionally face global challenges (e.g. soaring gasoline prices and increased concern about carbon emissions), competitive pressures (e.g. corporate mergers, triple play offerings, and new entrants), and technological change. To achieve their desired results with such variables impacting their businesses, CSPs must take control of their workforce operations and focus on some combination of key business case objectives including:

  • Reduce operational costs
  • Improve overall customer experience
  • Rapidly deploy new and converged services.

Operational costs for a CSP are significant, especially given the current global financial and economic situation. Consider the total wireline operations of three US Regional Bell Operating Companies (RBOCs), which include operations related to voice and high-speed internet access in the local and interexchange parts of the network:

  • There are over 82,000 outside technicians and over 21,000 inside technicians.
  • Outside technicians have approximately 144 million hours (or 18 million days) and inside technicians have 37 million hours (or 4.6 million days) of productive time a year.
  • There are over 77 million outside dispatches a year and over 96 million inside dispatches a year.
  • The loaded (including salary and benefits) annual labour cost for outside technicians is $7.6 billion (or 15 per cent of their annual cash expense). The loaded annual labour cost for inside technicians is $1.8 billion (or 4 per cent of their annual cash expense).
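
Taken at face value, those outside-technician figures imply roughly 1,750 productive hours per technician per year, at a loaded cost of around $53 per hour - a back-of-envelope check using only the numbers quoted above:

    # Back-of-envelope check on the quoted RBOC outside-technician figures.
    outside_hours = 144e6      # productive hours per year
    outside_techs = 82_000
    outside_cost = 7.6e9       # loaded annual labour cost, USD

    print(outside_hours / outside_techs)  # ~1,756 productive hours per tech
    print(outside_cost / outside_hours)   # ~$53 loaded cost per hour

At that hourly rate, even a one per cent productivity gain is worth tens of millions of dollars a year.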

These are just a subset of the operational costs of a wireline CSP. Similarly, there are significant operational costs in the wireless and satellite markets. Increasing competition continues to put pressure on CSPs to reduce expenses and increase profitability. Some areas that need to be addressed are discussed below.

Technicians are the single largest expense for CSPs. Therefore, introducing labour efficiency is critical to meeting expense objectives. CSPs can fit more customer visits into less time by ensuring the right technician is assigned to the right job at the right time. All too often, technicians are unable to do their assigned job because they do not have the right skill set or the time to complete it.

Technician productivity can be increased further by optimising technician routes and reducing travel and other unproductive time. This has the added benefit of cutting fuel and vehicle maintenance expenses, and can deliver significant savings in carbon emissions.
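
To make this concrete, below is a minimal Python sketch of skill- and travel-aware assignment. Every name, the grid coordinates, and the straight-line travel estimate are invented for illustration; a real workforce management system would use road travel times, live schedules, and far richer constraints.

from dataclasses import dataclass

@dataclass
class Technician:
    tech_id: str
    skills: set        # skill codes this technician holds
    hours_left: float  # productive hours remaining today
    location: tuple    # (x, y) grid position, a stand-in for geocoding

@dataclass
class Job:
    job_id: str
    skill: str         # skill code the job requires
    duration: float    # estimated hours on site
    location: tuple

def travel_hours(a, b, speed=30.0):
    # Crude straight-line estimate; real systems use road travel times.
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 / speed

def assign(jobs, techs):
    # Greedily give each job to the qualified technician who can reach it
    # soonest and still has enough productive time left for the work.
    plan = {}
    for job in jobs:
        fits = [t for t in techs if job.skill in t.skills and
                t.hours_left >= job.duration + travel_hours(t.location, job.location)]
        if not fits:
            plan[job.job_id] = None  # exception: escalate to a dispatcher
            continue
        best = min(fits, key=lambda t: travel_hours(t.location, job.location))
        best.hours_left -= job.duration + travel_hours(best.location, job.location)
        best.location = job.location  # technician finishes the day's job on site
        plan[job.job_id] = best.tech_id
    return plan

Assigning the nearest qualified technician is what trims both windscreen time and fuel burn; the None branch is precisely the "out of norm" case picked up next.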

A CSP can increase dispatcher productivity by automating existing dispatcher functions such as work assignment and load imbalance resolution, thereby making the dispatcher an exception handler. This way, a dispatcher can focus on "out of norm" conditions rather than on functions that can be automated.
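
The load-imbalance half of that automation can be sketched just as simply (again in Python, with invented structures): surplus work moves automatically to whoever has spare capacity, and only unplaceable jobs ever reach a human.

def rebalance(queues, capacity):
    # queues: {tech_id: [job, ...]}; capacity: max jobs per technician.
    # Shift work off overloaded technicians automatically; only jobs that
    # cannot be placed anywhere are surfaced to a dispatcher.
    exceptions = []
    for tech in list(queues):
        while len(queues[tech]) > capacity:
            job = queues[tech].pop()
            target = min(queues, key=lambda t: len(queues[t]))
            if len(queues[target]) < capacity:
                queues[target].append(job)
            else:
                exceptions.append(job)  # no capacity anywhere: a human decides
    return exceptions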

Consolidation of dispatch systems and processes can reduce CSP expenses and increase efficiency. Integration of dispatch systems for wireless, wireline, or satellite telecommunications operators can sequence, schedule, and track field operations activities for:

  • Service activation and service assurance work for all types of circuits and services
  • All technicians (outside, inside central/switching office, installation and repair, cable maintenance, cell tower technicians)
  • Broadband or narrowband networks
  • A complete range of technologies, products, and services, eg triple play (video, data, and voice networks), fibre (FTTx), DSL, HFC, SONET/SDH, ATM, and copper.

Maintaining separate dispatch systems or processes for different areas of the business is expensive and inefficient. A single workforce management system that manages all technicians across all parts of the company can help.

A CSP can reduce time-to-market for new products and services by streamlining the integration of its workforce management system with business and operations support systems (eg service fulfilment, service assurance, customer relationship management [CRM], and field access systems) and by automating the flow-through of service orders and trouble tickets. For some CSPs, this may mean integrating with multiple service activation, trouble ticketing, and CRM systems.

When providing service or outage resolution to their customers, CSPs need to ensure their customers are satisfied and that a customer's overall experience while dealing with the CSP is positive. Certainly, it is impossible to keep everyone happy all of the time; however, there are things the CSP can do to help ensure the customer experience is a positive one.

For example, CSPs can improve appointment management by giving service representatives the means to offer valid, attainable appointments (based on actual technician availability) and then to meet those appointments. CSPs must also be able to offer narrow appointment windows and provide automated, same-day commitment management. No one wants to wait a long time for a technician to begin with, much less wait and then have the technician show up late or not at all!
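
One way to derive narrow, attainable windows from real availability rather than quoting a fixed grid is sketched below (Python; the two-hour window length and single-technician booking board are illustrative assumptions):

from datetime import datetime, timedelta

def offer_windows(day_start, day_end, booked, visit_len=timedelta(hours=2)):
    # Return appointment windows a representative may offer, derived from
    # actual availability. `booked` is a list of (start, end) commitments.
    windows, cursor = [], day_start
    for start, end in sorted(booked):
        while cursor + visit_len <= start:
            windows.append((cursor, cursor + visit_len))
            cursor += visit_len
        cursor = max(cursor, end)
    while cursor + visit_len <= day_end:
        windows.append((cursor, cursor + visit_len))
        cursor += visit_len
    return windows

# A technician booked 10:00-12:00 leaves 08:00-10:00 and 12:00-18:00 free,
# ie four offerable two-hour windows.
day = datetime(2009, 6, 1)
free = offer_windows(day.replace(hour=8), day.replace(hour=18),
                     [(day.replace(hour=10), day.replace(hour=12))])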

The overall customer experience can also be improved by keeping the customer informed. For example, keeping the customer up to date on a technician's estimated time of arrival at the premises goes a long way toward overall satisfaction. Keeping the technician well informed about the services a given customer has helps too: a technician who can answer questions accurately, and show the customer how to use the services, adds to a positive customer experience.

Finally, through effective and efficient workforce monitoring and operations management, CSPs can track key performance metrics, such as mean time to repair (MTTR), to gauge the effect of business changes on service activation and network outage times. CSPs also need to ensure they meet their customers' Service Level Agreements (SLAs): customers have paid for a certain level of installation or maintenance support and should get it.
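
MTTR itself is just an average over closed tickets, as the short sketch below shows (the ticket times are invented); the value of the metric comes from trending it against the SLA target:

from datetime import datetime

def mttr_hours(tickets):
    # Mean time to repair across closed trouble tickets, in hours.
    # Each ticket is a (reported, restored) datetime pair.
    total = sum((restored - reported).total_seconds()
                for reported, restored in tickets)
    return total / len(tickets) / 3600.0

tickets = [
    (datetime(2009, 6, 1, 9, 0),  datetime(2009, 6, 1, 13, 30)),  # 4.5 h
    (datetime(2009, 6, 2, 8, 15), datetime(2009, 6, 2, 10, 15)),  # 2.0 h
]
print(mttr_hours(tickets))  # 3.25 - compare against the SLA target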

Another key business case objective is to rapidly deploy new services (eg triple play) and improve time-to-market through easy integration with new systems and services.

CSPs must integrate their existing operations and system algorithms with new technologies (eg xPON, FTTx, bonded DSL). To get a new service or technology to market quickly, CSPs must be able to update their business processes and systems to support it just as quickly, so that they can focus on delivering and maintaining the new service for their customers.

By using a flexible, configurable workforce management system, CSPs can meet their ever-changing business needs through user-tunable reference data that enhances their work flows. This allows the CSP to process a new service differently from other services. For example, a new offering may carry additional service information that the workforce management system can use to route, job-type, and price data and video work differently, as sketched below.
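
As a minimal illustration, assume a Python-style rule table (the service names, job types, skills, and durations below are invented): a new offering then becomes a row of reference data rather than a code change.

# Illustrative reference data: adding a row is a configuration change,
# not a code change, so a new offering can flow through immediately.
SERVICE_RULES = {
    "IPTV": {"job_type": "VIDEO_INSTALL", "skill": "FTTX", "duration_h": 3.0},
    "HSI":  {"job_type": "DATA_INSTALL",  "skill": "DSL",  "duration_h": 1.5},
}
DEFAULT_RULE = {"job_type": "GENERIC", "skill": "COPPER", "duration_h": 1.0}

def classify(order):
    # Derive routing, job-typing, and pricing attributes for an order
    # from the tunable reference data above.
    rule = SERVICE_RULES.get(order["service"], DEFAULT_RULE)
    return {**order, **rule}

print(classify({"order_id": "A100", "service": "IPTV"}))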

CSPs must make next-generation assignment and service information readily available to all technicians, with easy access to all necessary data, so that minimal effort is needed to understand the relationships between domains (eg infrastructure, DSL, Layer 2/3 services). And by modelling those relationships between domains, the system can minimise truck rolls and the number of trouble tickets by correlating root-cause problems that impact multiple domains (eg a Layer 1 outage as the root cause of Layer 2 and Layer 3 troubles).
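
The correlation idea fits in a few lines, assuming a hypothetical dependency map from each service down to the facility it rides on:

# Hypothetical dependency map: which lower-layer facility each service uses.
DEPENDS_ON = {
    "L3:vpn-17":   "L2:vlan-204",
    "L2:vlan-204": "L1:fibre-9",
}

def root_cause(trouble, outages):
    # Walk down the layer dependencies; if an underlying facility is in
    # outage, report it as the root cause instead of dispatching on the symptom.
    node = trouble
    while node in DEPENDS_ON:
        node = DEPENDS_ON[node]
        if node in outages:
            return node
    return trouble  # no underlying outage found: treat as its own trouble

# Three reported troubles collapse to one fibre cut: one truck roll, not three.
troubles = ["L3:vpn-17", "L2:vlan-204", "L1:fibre-9"]
print({t: root_cause(t, outages={"L1:fibre-9"}) for t in troubles})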

The decisions a CSP makes about its workforce management solution will greatly affect business results. CSPs can make the right decisions by considering all aspects of workforce management operations: process, people, network, technology, and leadership. It is not just about selecting a system, but about understanding the impact of the process on employees - and, ultimately, delivering excellent customer satisfaction.

Seamus Cunningham is Principal Product Manager at Telcordia.
www.telcordia.com


