Features

With mobile operators keen to implement impending network upgrades in the most effective manner, Colin Garrett explores how they can limit network planning costs in the face of the economic downturn

Mobile operators are under increasing pressure to provide the best service to their customers at the most competitive rates.  The next 12 months will see operators across Europe struggling to strike a sensible balance between the need to roll out the latest network upgrades and avoiding passing additional costs on to the end user.  With the difficult economic situation affecting industries across Europe, all eyes are on reducing costs across the board, and for mobile operators this means reviewing spend involved in the initial planning stages of the network through to the training of customer-facing staff.

With the rise in popularity of the smartphone device during 2008, consumers and business users are demanding improved mobile data speeds to access more content via mobile.  The race is on for mobile operators to boost data speeds by rolling out HSPA and LTE networks as soon as possible.  The first step in upgrading existing mobile networks is to gather sufficient network data to identify areas of high mobile penetration and expose any areas that may be lacking in coverage and capacity before choosing which areas of the network require the most urgent upgrade work. 

A common cost-effective approach to test the network is to seed drive test tools in business van fleets.  Drive test systems enable wireless operators to view their own and their competitors' wireless voice and data services from the perspective of the subscriber by providing critical quality-of-service (QoS) measurements. Network designers can then use portable test transmitters to verify optimal antenna positioning and as a low power source for testing the design and functionality of RF repeaters and base stations. This allows operators to limit infrastructure costs by identifying the correct products for network upgrades.
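
To make the idea concrete, here is a minimal sketch, in Python, of how drive-test log records might be reduced to the kind of QoS measurements described above. The field names and coverage threshold are hypothetical, not those of any particular drive test product:

# Illustrative only: field names and thresholds are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Sample:
    rscp_dbm: float        # received signal power at this drive point
    call_attempted: bool
    call_dropped: bool

def qos_summary(samples, coverage_floor_dbm=-100.0):
    attempts = sum(s.call_attempted for s in samples)
    drops = sum(s.call_dropped for s in samples)
    covered = sum(s.rscp_dbm >= coverage_floor_dbm for s in samples)
    return {
        "dropped_call_rate": drops / attempts if attempts else 0.0,
        "coverage_ratio": covered / len(samples) if samples else 0.0,
    }

Areas whose summaries show poor coverage or high drop rates are the natural candidates for the most urgent upgrade work.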

It has become widely accepted in the ICT industry that the correct method for analysing the cost of a vendor's products or services is to do a Total Cost of Ownership (TCO) analysis. Rather than focusing solely on price, buyers of ICT products and services must consider the additional, often hidden, costs of training, operating, managing and upgrading their purchases. Looking at the purchase price alone gives an incomplete picture.

TCO is more than the original cost of purchasing the system: it must include all direct and indirect costs associated with mobile network data gathering and drive test systems. We have found that more than 70 per cent of the TCO lies in non-purchasing activities. Drive test systems have a typical life span of five years. At some institutions this life span may be closer to ten years, but in both cases the older units are eventually removed and abandoned as redundant because they cannot be used to test and measure the latest network infrastructure upgrades.

There are many factors and elements that make up the TCO for mobile network data gathering with drive test systems. Over the last ten years, the TCO for drive test tools has continued to increase due to technological advancements, drive test product limitations and increased Mobile Network Operator (MNO) competition. Institutions that have already developed strategies and programmes to reduce the cost of ownership of mobile network data gathering systems are now seeing the benefits. Those that have not are unlikely to be seeing any cost reduction; indeed, they are continuing to experience out-of-control cost increases for mobile network data gathering and drive test systems.

Sweeping changes and improvements in technology continue to challenge the mobile industry to reshape and redefine how best to deploy mobile network data gathering systems. Individual organisations will find that it can prove expensive to stay current unless they have a handle on what it takes to acquire, implement and support drive test tools. By addressing the components that make up the TCO, an institution will be in a position to take full advantage of the latest innovations in mobile network data gathering techniques. It will become very difficult, even impossible, to implement an institutional mobile network data gathering and drive test methodology aimed at including HSPA results if an enterprise is using a bespoke, frequency-limited system.

Introducing the wrong drive test systems to your network can be very costly. Being aware of the TCO components is the first step in lowering your mobile network data gathering cost. We have found that limiting choices and setting standards are the best ways to start getting control of your drive test systems cost. While ensuring that all parties use a single type of system is usually the fastest way to bring mobile network data gathering and drive test systems costs under control, it is not always easy to implement once individuals and group networks have developed enough expertise and knowledge to specify and utilise their own drive test systems.

The implementation of "soft standards" - bringing significant economies of scale, simplified purchasing procedures and centralised training support - will work best in bringing the entire enterprise to accept a standard and limited choice. Nevertheless, limited choice should still offer enough variety to cover the end user's requirements, including engineering (optimisation and integration), special coverage groups (in-building and special coverage projects), marketing (benchmarking) and management (key network performance indices).

As already established, institutional TCO consists of more than simply the original purchase of hardware and software. We have defined seven base elements that make up the cost components for drive test systems: purchase price for all hardware and software, staff training, installation and implementation, support services and updates, functional upgrades, technology upgrades, and interoperability. Each of these base elements includes several types of expenditure.

The purchase price includes all direct and indirect purchases for a drive test system, namely the drive test tool hardware, software, supported data collection devices, and log file (output) manipulation. The price should also include warranties, extended warranties and maintenance agreements.

Training costs will include all direct and indirect expenditures for training activity required to effectively run the drive test system. Formal and informal training usually occurs with the installation of the drive test system. Costs and methods vary according to vendor.

Installation and implementation costs include all direct and indirect expenditures involved in ensuring that the system is installed correctly and meets an institution's standard operating procedures. This may vary from tools needed for hardware installation to server configuration to accommodate the storage and access of log files.

Support services costs include all staff costs incurred in providing adequate personnel support to the drive test system. This includes on-site technical support, as well as remote support via telephone, e-mail and the Internet. Installers, troubleshooters and skilled support staff are all involved in maintaining the system.

Functional change upgrade costs comprise both direct and indirect expenditures necessary to make ongoing changes to the drive test system's operation. This will allow the institution to increase its drive test efficiencies, including the deployment of the latest software updates, the addition of extra parameters, and the improvement of data displays.

Technology upgrade costs should take into account both the direct and indirect costs involved in acquiring new tools or upgrading the current system to be compatible with new mobile devices as well as the latest mobile technologies, e.g. CDMA 1x to EVDO or HSPA to LTE.
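
By way of illustration, the seven base elements can be rolled into a simple lifetime cost model. The figures below are invented purely to show the arithmetic, but note how easily the non-purchase share exceeds the 70 per cent observed above:

# Hypothetical five-year TCO roll-up for a drive test system.
# All figures are invented for illustration only.
cost_elements = {
    "purchase_price":      100_000,   # hardware, software, warranties
    "training":             25_000,
    "installation":         15_000,
    "support_services":     90_000,   # staff support over the life span
    "functional_upgrades":  40_000,
    "technology_upgrades":  60_000,   # e.g. upgrading from HSPA to LTE
    "interoperability":     20_000,
}
tco = sum(cost_elements.values())
non_purchase_share = 1 - cost_elements["purchase_price"] / tco
print(f"TCO: {tco:,}; non-purchase share: {non_purchase_share:.0%}")  # ~71%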

Through a careful step-by-step consideration of each of the elements that constitute the TCO for a vendor's drive test system, mobile operators can reach an informed decision as to the cost effectiveness of a vendor's tool set.  Although wireless network data gathering comprises only one aspect of the network planning process, an accurate TCO evaluation for drive test systems is a great place to start in order to ensure maximum cost and performance efficiency across an institution's entire remit.  At a time when businesses need to evaluate every area of their spend in order to retain the highest possible competitive advantage in a saturated market, mobile operators cannot afford to base buying decisions solely on purchase price, but must instead consider all aspects of TCO across their wireless networks.

Colin Garrett is Product Manager, Test and Measurement Systems, Andrew
www.andrew.com

This is the first in a series of columns focusing on issues surrounding the management of today's communications business models. For this debut effort, I thought I would talk about voice over IP and its impact on communications, or perhaps I should say the lack of it.

I read an interesting article recently that said voice over IP (VoIP) was stalling, even though not so many years ago it looked like it would sweep the board. Far from sweeping the board, VoIP usage appears to be declining; a recent report by independent British communications regulator Ofcom says that only 14 per cent of broadband subscribers are even using the technology.

Adding fuel to the fire is the rumour floating around that eBay is looking to sell VoIP provider Skype, which it purchased in 2005 for over $2 billion. The article even quotes Skype's CEO as saying it's a great standalone business. Surely that's a big hint at what may be to come.

So this is quite an interesting turn of events we have on our hands. The reason I'm focusing on this bit of news that VoIP uptake appears to be waning is that about 15 years ago, when I was working with BT, I saw my first demo of the technology. I remember one of BT's board members being rather panicked, saying VoIP would kill off their business and that the world was coming to an end.

Obviously that never happened. But what has happened is pricing on traditional circuit-switched calls has become lower and lower in the past 15 years. Nowadays, most people have some sort of flat-rate fixed-line or mobile calling plan that's priced very aggressively. Sure, Skype-to-Skype calls are free, but today's consumer is interested in a lot more than just a free lunch.

Also, the convenience of VoIP just isn't there. With PC-to-PC calling, as with Skype, you're anchored to your PC and stuck at your desk. If all parties are using the same service, the call usually works as intended, but if you're on a raw IP connection, or someone is using the conventional phone network, all bets are off.

And contrary to the common perception that if something is free you can't complain about it, consumers are much more savvy, demanding that every form of communications they touch lives up to the high standards of the traditional PSTN.

Back when mobile phones were brand new, and the novelty of being able to call from the middle of a field or the top of a hill still had a shine on it, people didn't really care if calls dropped or quality was poor. But after a while that novelty started to wane, and today you can get mobile service in tunnels, on trains and just about anywhere else, with high call quality.

So we have lower-priced traditional voice calls and customers who are demanding - and getting - higher quality of service. And that is exactly what the Internet has not been able to achieve in terms of voice.

It exposes the myth that people don't care about quality if something is free. And nowhere is voice call quality more of an issue than in the corporate world. Can you imagine the Fortune 500 companies using a VoIP configuration that runs over the general Internet, where there is no packet prioritisation and jitter and delay are common? The Internet is great for email, downloading video and anything else where it's not a huge deal if packets are sent and received out of order or with latency. But the inconvenience of having a VoIP call dropped or sounding like static just isn't cutting it in the corporate world.

I'm the furthest thing from a Luddite, but the call quality, the inconvenience of being stuck making calls from your PC and other factors are hindering VoIP's potential to be a voice communications game-changer.

Keith Willetts is Chairman and CEO, TM Forum
kwilletts@tmforum.org

A wide range of factors is driving mobile broadband demand as our lifestyles become increasingly digital. Howard Wilcox asks whether LTE is the natural future standard of choice

LTE is a global mobile broadband standard that is the natural development route for GSM/HSPA network operators and is also the next generation mobile broadband system for many CDMA operators. The overall aim of LTE is to improve capacity to cope with ever-increasing volumes of data traffic in the longer term. The key LTE objectives include:

  • Significantly increased peak data rates - up to 100 Mbps on the downlink and up to 50 Mbps on the uplink
  • Faster cell edge performance and reduced latency for better user experience
  • Reduced capex/opex via simple architecture, re-use of existing sites and multi-vendor sourcing
  • Wide range of terminals - in addition to mobile phones and laptops, many further devices, such as ultra-mobile PCs, gaming devices and cameras, will employ LTE embedded modules.

3GPP's core network has been undergoing SAE (System Architecture Evolution), optimising it for packet mode and IMS (IP Multimedia Subsystem), which supports all access technologies. SAE is therefore the name given by 3GPP to the new all-IP packet core network that will be required to support the evolved LTE radio access network (RAN) interfaces: it has a flat network architecture based on evolution of the existing GSM/WCDMA core network. LTE and SAE together constitute 3GPP Release 8 and have been designed from the beginning to enable mass usage of any service that can be delivered over IP. The LTE RAN specification was completed at the end of 2008, with further work required to complete SAE by March 2009: this work is on track for completion of the full Release 8 standard at that time.

Beyond LTE to 4G
LTE is often quoted as a 4G mobile technology. However, at this point there is no agreed global definition of what is included in 4G: the ITU is establishing criteria for 4G (also known as IMT-Advanced) and will be assessing technologies for inclusion. The two next generation technology candidates are mobile WiMAX 802.16m (WiMAX Release 2) and LTE Advanced. Both these products will meet the IMT Advanced specification with, for example, up to 1 Gbit/s on the downlink at low mobility.

There is a wide range of factors driving mobile broadband demand as our lifestyles become increasingly digital.

Personal connectivity: "Always On"
Anytime, anywhere connectivity as an overall concept is becoming a clear user expectation. The increase in connectivity is seen to be driving applications, user preferences and broadband demand, which in turn drives the demand for access. The demand for increased access is leading to bigger investments in mobile and broadband networks, in turn making access cheaper and supporting higher bandwidths and ubiquitous connectivity. As available bandwidth grows, so does the variety and sophistication of devices. As the volume of devices increases, prices become more attractive, so driving user demand. This completes a cycle of demand.

However, each demand driver can equally impact any of the others: for example, smarter devices clearly drive more sophisticated applications and services, whilst the knowledge that increased bandwidth is available means that more users are likely to demand services.

Economic stimulus
Fixed broadband already plays a vital part in developing the economy, connecting the population at large, businesses, and governments, and enabling commerce. Mobile broadband is also being driven by the need to provide broadband where it is not possible to easily, quickly and economically deliver fixed broadband, particularly in developing countries, but also in underserved or rural areas in developed countries.

Emerging mobile youth generation
The younger generation (particularly the under 18s but also the 18 to 30 age group) are the future employees and workers, as well as being the momentum behind popular applications such as social networking, gaming and music and the earliest adopters of ICT devices. They are also amongst the most skilled, innovative and fastest learning users of technology. These skills and expectations as users are derived not only from their mobile phones but from the increasing ubiquity of broadband at home, and the teen generation is highly likely to carry forward this level of expectation (and more) into adulthood.

New applications and services 
New applications and services (some of which may well be unknown now) are going to be key drivers of mobile broadband and faster and faster data rates. Aspects include: 

  • Growth of mobile commerce

Over the past 12 to 18 months there has been significant activity and growth in mobile payments (particularly digital and physical goods purchases) and mobile banking. In addition, these services and applications, along with contactless NFC, mobile money transfer, ticketing and coupons, are forecast to grow rapidly over the next five years.

  • Mobile web 2.0

Before long, anything you can do at your desktop, you will be able to do on the road with a laptop or other mobile device. Users want the same capabilities wherever they are located and however they are connected - as fixed, mobile or nomadic subscribers.

This means that mobile broadband will provide personalised, interactive applications and services, such as multiplayer gaming, social networking, or other video/multimedia applications: anytime and anywhere. The meteoric rise of social networking sites and user-generated content has rekindled users' interest in accessing web-based services on the move.

The difference between current 3G applications and mobile broadband at the speeds envisaged is that LTE mobile broadband will enable greater user-generated content and uploading/downloading, along with person-to-person connectivity.

  •  Portable video revolution

One application that is crucial to driving demand for mobile broadband is video. A variety of video applications can be offered, including video calling, streamed video clips, live mobile TV, and video clip uploads and downloads (especially for sites such as YouTube, MySpace and the like). Video clip downloads in particular are proving extremely popular: the demand to watch videos on the go has been ignited by the emergence of the video iPod, with similar devices following from other vendors.

  • Impact on network traffic growth

In January 2009 Cisco forecast that globally, mobile data traffic will double every year, increasing 66 times between 2008 and 2013. Mobile data traffic will grow at a CAGR of 131 per cent between 2008 and 2013, reaching over 2 exabytes per month by 2013. Confirming the paragraphs above, Cisco said that almost 64 per cent of the world's mobile traffic will be video by 2013. Mobile video will grow at a CAGR of 150 per cent between 2008 and 2013.
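
The two headline figures are mutually consistent, since a growth factor over five years equals (1 + CAGR) raised to the fifth power. A quick check in Python:

# Sanity check on the forecast figures quoted above.
def growth_factor(cagr, years=5):
    return (1 + cagr) ** years

print(round(growth_factor(1.31)))   # ~66x: total mobile data, 2008-2013
print(round(growth_factor(1.50)))   # ~98x: mobile video over the same period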

  • The need for mobility

Worldwide mobile subscribers have grown more than 15-fold over the last ten years and surpassed the worldwide fixed line base in 2001-2002. Mobile subscriber density has been showing strong growth ever since, while fixed line density has been experiencing low or no growth. In the same period, the number of PCs has grown by a factor of nearly three, whilst Internet users have grown more than 11-fold. Fixed lines are very much the poor relation, and in the last couple of years the number of fixed lines has begun to decline.

LTE market opportunity
There will be considerable change to the global mobile technology base over the next five years:

  • Subscribers in developed nations and regions will migrate upwards from 3G to existing mobile broadband such as HSPA
  • A limited number of high end enterprise and consumer subscribers in developed nations and regions will then migrate further upwards to LTE
  • Developing nations and regions will see considerable growth in 2G and 2.5G as people and businesses seek first time connectivity ahead of more sophisticated services, and sometimes instead of acquiring fixed network access
  • A limited number of high end subscribers in developing nations will migrate towards newer generation technologies

Juniper Research forecasts that the LTE service revenue opportunity for mobile network operators will exceed $70bn pa by 2014, with the main regional markets in North America, Western Europe and the Far East & China.

This article is based on Juniper Research's report: LTE: The Future of Mobile Broadband 2009 - 2014.
Howard Wilcox is a Senior Analyst with Juniper Research.
www.juniperresearch.com

The recent focus on privacy issues surrounding behavioural advertising is only the tip of the iceberg, says Lynd Morley

European Telecoms Commissioner Viviane Reding has been placing the issue of privacy firmly on the communications agenda of late, and the subject has - particularly in the UK - been causing quite a stir.  Even the British national press has been exercised about it - something of an unusual occurrence, given their more normal propensity to fill pages with scandals that are more accessible and simpler to understand than the complexities of the gradual erosion of privacy now taking place.

The current fuss is largely due to the fact that the European Commission could pursue legal action against the UK Government, because the latter has paid little attention to the Commission's concerns about the use of Phorm software to monitor the Internet browsing habits of users without their consent.

The Phorm system, used, for instance, in a number of trials carried out by BT over its broadband network, offers a behavioural advertising facility, targeting adverts at users based on the types of sites they have visited.  The catch, as far as the Commission is concerned, is that neither BT nor Phorm asked users' permission to gather and use this information.

The EU directive on privacy and electronic communications basically says that member states must ensure the confidentiality of data on communications and related data traffic by prohibiting unlawful interception and surveillance unless the users concerned have consented to such activity.

Reding reinforces the sentiment in a recent statement, noting: "Europeans must have the right to control how their personal information is used.  European privacy rules are crystal clear - your information can only be used with your prior consent."

Clearly, there should be considerable cause for concern in the UK - not only among its citizens whose rights to privacy under European directives are being ignored, but also in Government, which now risks legal action by the EU.

But while the Phorm affair has served to raise the profile (if only en passant) of privacy issues, it is by no means the only privacy concern that Europe should be turning its attention to.  Reding has certainly pointed to other areas within communications technology that warrant close observation, including the significant amounts of data that social networking sites hold on their users, and the increasing use of RFID chips in a wide range of products.

And while the UK Government might fairly be accused of a certain laxity in its attitude to privacy issues, the country's Information Commissioner's Office has been focussing attention on the sometimes complex requirements central to establishing effective information privacy practices. At the end of last year, for instance, the ICO issued an in-depth document on the subject - Privacy by design.  Prepared by the Enterprise Privacy Group, the report is intended as a first step in the privacy by design programme, which aims to encourage public authorities and private organisations to ensure that, as information systems that hold personal information are developed, privacy concerns are identified and addressed from first principles.

The ICO noted in its introduction to the report: "The capacity of organisations to acquire and use our personal details has increased dramatically since our data protection laws were first passed.  There is an ever-increasing amount of personal information collected and held about us as we go about our daily lives.  Although we have seen a dramatic change in the capability of organisations to exploit modern technology that uses our information to deliver services, this has not been accompanied by a similar drive to develop new effective technical and procedural privacy safeguards."

Toby Stevens, Director of the Enterprise Privacy Group, notes that the barriers to successful adoption of privacy safeguards include not only an ongoing lack of awareness of privacy needs at executive management level within organisations - often driven by uncertainty about the potential commercial benefits of privacy-friendly practices - but also the fundamental conflict between privacy needs and the pressure to share personal information within and outside organisations.

"Addressing privacy issues at the start of systems development," he explains, "can have significant business benefits, and in some circumstances ensure that new ventures do not run into privacy problems that can severely delay time to market."
www.ico.gov.uk
www.privacygroup.org

Jon Wells discusses how pressures in emerging markets are forcing OSS to change, for the benefit of all

The emerging telecoms markets are no less demanding than those in Western Europe or North America, but they do present substantially different requirements. These markets often present challenges that telcos in developed markets have not had to contend with, but may soon find themselves facing - particularly regarding the global economic slowdown. They are also extremely lucrative, with OSS Observer forecasting revenue growth in emerging markets of 11 per cent from 2007 to 2012.

OSS is essential for telcos in emerging markets since it helps them operate efficiently, leverage economies of scale, keep up with intense competition, engage with increasingly technology-aware consumers and create innovative services. It also helps telcos manage technology refresh, whether initiated to reach new customers with next-generation services, to replace creaking infrastructure or to "leap-frog" to next-generation networks (NGN).

In the West, the traditional approach is to employ ‘best of breed' OSS, an approach potentially unsuited to emerging markets. In contrast, Unified OSS - an open, NGOSS-based, modular, pre-integrated, end-to-end OSS solution - presents operators in emerging markets with sophisticated OSS without the associated long lead times and high costs. Market analysts, such as Frost and Sullivan and Yankee Group, are increasingly aware of the opportunity that Unified OSS presents to operators seeking sophisticated OSS.

Falling average revenue per user (arpu), increased customer focus and technology refresh are having an impact globally. Furthermore, many predict that 2009-2010 will be years of market contraction and pronounced arpu shrinkage for North America and Western Europe, but that emerging markets, such as APAC, will be less affected. This combination will make the OSS practices of APAC even more applicable in developed markets. With local pressures pushing operators in emerging markets towards a ‘quantum leap' in OSS, what are the lessons emerging markets can offer to the global OSS community?

Most operators in emerging markets must contend with comparatively low arpu. The estimated arpu in India is around US$8 per month - only slightly lower than in Indonesia, the Philippines, Malaysia, Thailand and China, but around a tenth that of some Western European operators. However, this low arpu is offset by a huge potential for customer growth. For operators in emerging markets, the key is in accessing their large, often rural populations that typically have low tele-density, thus supporting business models based on rapid growth and high customer subscription. For example, India covers three million square kilometres, and 70 per cent of its 1.1 billion population lives in rural areas with tele-density of around two per cent. While the opportunity for customer growth is clear, automation and intelligent management of manual activities, leading to operational efficiency, are critically important when maintaining services over such a geographical extent.
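
A back-of-envelope calculation with the figures just quoted shows the scale of that rural opportunity (illustrative arithmetic only):

# Rough sizing of India's rural opportunity from the figures in the text.
population = 1.1e9
rural_share = 0.70
rural_teledensity = 0.02            # around two per cent

rural_population = population * rural_share             # ~770 million
rural_connected = rural_population * rural_teledensity  # ~15 million
print(f"rural: {rural_population/1e6:.0f}M, "
      f"already connected: {rural_connected/1e6:.0f}M")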

Some operators in Asia are achieving staff-to-subscriber ratios almost half those of their counterparts in Western Europe and North America; one Indian operator is achieving a ratio of 1:1750. This has been achieved initially through rapid growth in subscribers, but to sustain it and turn it into operational efficiency, operators look to their OSS to automate and manage the end-to-end operational processes.

Operators in Eastern Europe and the Commonwealth of Independent States (CIS) are challenging their legacy platforms as they experience demand for broadband services. OSS Observer forecasts that residential broadband will grow faster than revenue, at a compound annual growth rate of 27 per cent, as the service is still relatively new and arpus are low. Simply put, the legacy OSS cannot efficiently, rapidly and reliably deliver the order-to-cash process, despite network availability and a consumer base demanding higher value services. Many operators are replacing legacy with new OSS, often delivering many functions simultaneously. One Eastern European operator recently started an OSS project covering inventory, order management, activation, field-force logistics and trouble ticketing. But time is of the essence, and the transfer of subscribers from low to high value services cannot wait for traditional OSS lead times.

In emerging markets, an OSS must take the strain of a rapidly expanding customer base, since this offsets low arpu. Expansion can be extremely rapid - some operators in emerging markets achieve tens of millions of subscribers within a few years and a monthly growth of one million subscribers is fairly common. Where the subscriber base already exists, as in Eastern Europe, the OSS must support consumer demands to rapidly transition from low to high revenue services.

Operators in emerging markets need OSS that helps them "go-live" with services quickly and manage the transition from low to high revenue services. This rapid increase in subscriber numbers or service revenue is often essential for the business plan. This is doubly important because operators in emerging markets have often invested heavily in infrastructure and strive for high utilisation through customer growth to balance costs. One emerging market operator estimates that the right choice of OSS saved around US$200M in lifetime integration costs and delivered sophisticated OSS functionality two years earlier, when compared to ‘best of breed' OSS. Within seven months of starting up, they were the country's largest mobile operator.

Subscribers in emerging markets are technology literate and competition is relentless throughout this intense growth period. Competition is a major reason why India has some of the lowest mobile rates in the world, at two cents per minute. The need to defend market share and capture new subscribers drives innovation in service offerings. In addition to coping with the demands of growth, the OSS for emerging markets must reduce time-to-market for new products. Demands for 12-15 new products and features per year for mobile service providers in emerging markets are not unheard of, and are being supported by Unified OSS today.

A common misconception is that subscribers in emerging markets are not demanding. In reality, customers in emerging markets have extremely high expectations. The level of competition for subscribers may drive operators that do not address customer experience, innovate and improve their product portfolio and service level agreements (SLAs) to extinction.

Just as in developed markets, an OSS must intelligently map network status, planned outages and provisioning key performance indicators (KPIs) to customer-facing SLAs, and coordinate and prioritise responses when SLAs are in jeopardy or breach. Whilst automation and efficient manual processes remain the fundamental means of maintaining excellent customer experience, SLA management can gauge and improve that experience, focusing management on the subscriber's needs.
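
In outline, the mapping described above - network status scored against customer-facing SLAs, with responses prioritised as an SLA approaches jeopardy or breach - might be sketched as follows. The data model and thresholds are hypothetical, not any vendor's implementation:

# Illustrative SLA triage sketch; field names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class SlaState:
    customer: str
    allowed_outage_min: float   # monthly outage budget under the SLA
    used_outage_min: float      # outage minutes already consumed

def triage(slas, jeopardy_ratio=0.8):
    """Return at-risk SLAs, most urgent first (breached, then in jeopardy)."""
    def urgency(s):
        return s.used_outage_min / s.allowed_outage_min
    at_risk = [s for s in slas if urgency(s) >= jeopardy_ratio]
    return sorted(at_risk, key=urgency, reverse=True)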

The same is true when viewed from the customer perspective. Customers expect the call-centre staff to be informed, to map the customer reported fault to a known network fault intelligently, give reassurance that the resolution is progressing and provide a restoration time. Only the OSS is positioned to support this.

In ‘best of breed' OSS, maintaining customer centric perspectives is often the culmination of years of evolution. To meet demands of customers in emerging markets, telecoms providers simply cannot wait. Unified OSS can implement customer centric management without major integration projects.

For many developing countries, next-generation technologies are not a long-term aim but a starting point, since they can solve many of the problems facing operators.

Various operators in emerging markets are building broadband optical fibre networks, completely bypassing the copper lines still used in many developed countries. In just a few years, India-based Reliance Communications has built the world's largest IP-enabled optical fibre network, with 230,000 km now laid. Compared to copper cable, optical fibre provides far more bandwidth, whilst being cost comparable and less subject to theft. Telekom Malaysia's HSBB project will receive RM2.4B investment from the Malaysian government as it proactively tackles the country's relatively low broadband penetration.

Singapore's Government recently announced that its Next Generation National Broadband Network will be nationwide by 2012, providing all homes and offices with access to the new, pervasive, all-fibre network. Similar government initiatives are found around the world. For example, entry into the European Union is driving infrastructure re-fresh in Eastern European and CIS countries.

Instead of deploying copper or fibre, many countries are deploying wireless coverage to provide an instant broadband service. Wireless broadband is an excellent means of reaching rural or transient populations and coverage ‘black spots'. Unlike copper cable, wireless broadband equipment can be secured against theft and removes much of the cost of laying and maintaining hundreds of kilometres of infrastructure.

One shared characteristic of most emerging markets is that they are a hive of innovation and experimentation. In Africa, 3G and CDMA2000 are currently capturing public interest, but this may be challenged by WiMAX, while technologies such as Power Line Communications (PLC) continue to exploit niche opportunities.  Operators in Africa are evaluating technology, looking for the best fit for their specific challenges, and OSS must support this evolution. With current residential broadband penetration at only one per cent, there is huge potential for rapid expansion of service.

Unified OSS focuses on simplification through pre-integration, consolidation of operational data and centralised workflow spanning end-to-end operational processes; from SLA management to field-force logistics. Unified OSS can deploy faster and with lower risk than ‘best of breed' OSS solutions, avoiding integration and data synchronisation costs. It helps operators in emerging markets achieve ROI on their infrastructure investments sooner and, through simplicity and flexibility, allows operators to engage their subscribers with innovative products over evolving networks.

With arpus falling worldwide, operators are now urgently adding value to their services and, increasingly, medium or high arpu countries may feel the bite of revenue reductions on their operations and question whether their network is providing them with the necessary tools to exploit economies of scale. With 2009-2010 set to be particularly challenging years in terms of revenue, the parallels between OSS practices in emerging and developed countries are that much more pertinent. The approaches emerging markets have taken to overcome these problems have been hard learned, and Western operators ignore them at their peril.

Jon Wells is OSS consultant at Clarity International
www.clarity.com

While IP appears to have simplified telecoms, Christoph Kupper, Executive Vice President of Marketing at Nexus Telecom, tells Lynd Morley that the added complexity of monitoring the network - due largely to exploding data rates - has led to a new concept providing both improved performance and valuable marketing information

Nexus Telecom is, in many ways, the antithesis of the now predominant imperative in most industries - and certainly in the telecoms industry - which requires wholesale commoditisation of services; an almost exclusive focus on speed to market; and a fast response to instant gratification.

Where the ruling mantra is in danger of becoming "quantity not quality" in a headlong rush to ever greater profitability (or possibly, mere survival), Nexus Telecom calls something of a halt, focussing the spotlight on the vital importance of high quality, dependable service that not only ensures the business reputation of the provider, but also leads to happy - and therefore loyal - customers.

Based in Zurich, Nexus Telecom is a performance and service assurance specialist, providing data collection, passive monitoring and network service investigation systems.  The company's philosophy centres around the recognition that the business consequences of any of the network's elements falling over are enormous - and only made worse if the problem takes time to identify and fix.  Even in hard economic times, the investment in reliability is vital.

The depressing economic climate does not, at the moment, appear to be hitting Nexus Telecom too directly.  "Despite the downturn, we had a very good year last year," comments Christoph Kupper, Executive Vice President of Marketing at Nexus Telecom.  "And so far, this year, I don't see any real change in operator behaviour. There may be some investment problems while the banks remain hesitant about extending credit, but on the whole, telecom is one of the solid businesses, with a good customer base, and revenues that are holding up well."

The biggest challenge for Nexus Telecom is not so much the economy as one of perception and expectation, with some operators questioning the value and cost of OSS tools - which, relative to the total cost of the network, have increased over the years.  In the past few years the price of network infrastructure has come down by a huge amount, while network capacity has risen.  But while the topological architecture of the network is simplifying matters - everything running over big IP pipes - the network's operating complexity is vastly increasing.  So the operator sees the capital cost of the network being massively reduced, but that reduction isn't being mirrored by similarly falling costs in the support systems.  Indeed, because of the increased complexity, the costs of the support systems are going up.

Complexity is not, of course, always a comfortable environment to operate in.  Kupper sees some of the culture clash that arises whenever telecom meets IT, affecting the ways in which the operators are tackling these new complexities.

"In my experience, most telecom operators come from the telco side of the road, with a telecom heritage of everything being very detailed and specified, with very clear procedures and every aspect well defined," he says.

"Now they're entering an IP world where the approach is a bit looser, with more of a ‘lets give it a try' attitude, which is, of course, an absolute horror to most telcos."

Indeed, there may well be a danger that network technology is becoming so complex that it is now getting ahead of some CTOs and telecom engineers.

"There can be something of a ‘fear factor' for the engineers, if ever they have an issue with the network," Kupper says.  "And there are plenty of issues, given that these new switching devices can be configured in so many ways that even experienced engineers have trouble doing it right.

"Once the technical officers become fully aware of these issues, the attraction of a system such as ours, which gives them better visibility - especially independent visibility across the different network domains - is enormous.

"It only takes one moment in a CTO's life when he loses control of the network, to make our sale to him very much easier."

The sales message, however, depends on the recognition that increased complexity in the network requires more not less monitoring, and that tools which may be seen as desirable but not absolutely essential (after all, the really important thing is to get the actual network out there - and quickly) are in fact, vital to business success.  Not always an easy message to get across to those whose background in engineering means they do not always think in terms of business risk.

Kupper recognises that the message is not as well established as it might be. "We're not there yet," he says.  "We still need to teach and preach quite a lot, especially because the attraction of the ‘more for less' promise of the new technology elements hides the fact that operational expenditure on the management of a network with vastly increased traffic and complexity, is likely to rise."

The easiest sales are to those technical officers who have a vision, and who are looking for the tools to fulfil it.  "They want to have control of their networks," says Kupper. "They want to see their capacity, be able to localise it, and see who's affected."

And once Nexus Telecom's systems are actually installed, he stresses, no one ever questions their necessity. 

"The asset and value of these systems is hard to prove - you can't just put it on the table. It's a more complicated qualitative argument that speaks to abstract concepts of Y resulting from the possible failure of X, but with no exact mathematical way to calculate what benefits your derive from specific OSS investment."

So the tougher sales are to the guys who don't grasp these concepts, or who remain convinced that any network failure is the responsibility of the network vendors, who must therefore provide the remedy - without taking into account how long that might take, and the subsequent impact on client satisfaction and, ultimately, business success.

These concepts, of course, are relevant to the full range of suppliers, from wireline and cable operators to the new mobile kids on the block.  Indeed, Kupper stresses that with the advent of true mobile data broadband availability, following the change to IP, and the introduction of flat rates to allow users to make unlimited use of the technology, the cellular operator has positioned himself as a true contender against traditional wireline and cable operators.

Kupper notes: "For years in telecommunications, voice was the data bearer that did not need monitoring - if the call didn't work, the user would hang up and redial - a clearly visible activity in terms of signalling procedure analysis.

"But with mobile broadband data, the picture has changed completely.  It is the bearer that needs analysis, because only the bearer enables information to be gleaned on the services that the mobile broadband user is accessing.  The network surveillance tools, therefore, must not only analyse the signalling procedure but also, and most importantly, the data payload.  It is in the payload that we see if, for example, Internet browsing is used, which URL is accessed, which application is used, and so forth. And it is only the payload, for which the subscriber pays!"

He points out that as a consequence of the introduction of flat rates and the availability of 3G, data rates have exploded.

"It is now barely possible to economically monitor such networks by means of traditional surveillance tools.  A new approach is needed, and that approach is what we call ‘Intelligent Network Monitoring'. At Nexus Telecom we have been working on the Intelligent Network Monitoring concept for about two years now, and have included that functionality with every release we have shipped to customers over that period.  Any vendor's monitoring systems that do not include developments incorporating the concepts of mass data processing will soon drown in the data streams of  telecom data networks."

Basically, he explains, the monitoring agents on the network must have the ability to interpret the information obtained from scanning the network ‘on the fly'.  "The network surveillance tools need a staged intelligence in order to process the vast amount of data; from capturing to processing, forwarding and storing the data, the system must, for instance, be able to summarise, aggregate and discard data while keeping the essence of subscriber information and its KPIs to hand - because, at the end of the day, only the subscriber experience best describes the network performance. And this is why Nexus Telecom surveillance systems provide the means always to drill down in real time to subscriber information via the one indicator that everyone knows - the subscriber's cell phone number."
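
The staged intelligence Kupper describes - capture, process, aggregate and discard, while keeping per-subscriber KPIs addressable by phone number - can be caricatured in a few lines of Python. This is a sketch of the concept only; real probes operate at vastly higher rates, and the record fields here are hypothetical:

# Conceptual sketch: reduce raw records to per-subscriber KPIs,
# discarding the bulk data while keeping the essence to hand.
from collections import defaultdict

def aggregate(records):
    kpis = defaultdict(lambda: {"sessions": 0, "bytes": 0, "failures": 0})
    for r in records:      # r: {"msisdn": ..., "bytes": ..., "ok": ...}
        k = kpis[r["msisdn"]]
        k["sessions"] += 1
        k["bytes"] += r["bytes"]
        k["failures"] += 0 if r["ok"] else 1
    return kpis            # drill down by the subscriber's phone number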

All this monitoring and surveillance obviously plays a vital role in providing visibility into complicated, multi-faceted next generation systems behaviour, facilitating fast mitigation of current and potential network and service problems to ensure a continuous and flawless end-customer experience.  But it also supplies a wealth of information that enables operators to better develop and tailor their systems to meet their customers' needs.  In other words, a tremendously powerful marketing tool.

"Certainly,' Kupper confirms, "the systems have two broad elements - one of identifying problems and healing them, and the other a more statistical, pro-active evaluation element.  Today, if you want to invest in such a system, you need both sides.  You need the operations team to make the network as efficient as possible, and you also need marketing - the service guys who can offer innovative services based on all the information that can be amassed using such tools."

Kupper points out that drawing in other departments and disciplines may, in fact, be essential in amassing sufficient budget to cover the system.  The old days when the operations manager could simply say ‘I need this type of tool - give it to me' are long gone, and in any case their budgets, these days, are nothing like big enough to cover such systems.  Equally, however, the needs of many different disciplines and departments for the kind of information Nexus Telecom systems can provide are increasing, as the highly competitive marketplace makes responding to customer requirements and preferences absolutely vital.  Thus the systems can prove to be of enormous value to the billing guys, the revenue assurance and fraud operations, not to mention the service development teams.  "Once the system is in place," Kupper points out, "you have information on every single subscriber regarding exactly which devices and services he most uses, and therefore his current, and likely future, preferences.  And all this information is real-time."

Despite the apparent complexity of the sales message, Nexus Telecom is in buoyant mood, with good penetration in South East Asia and the Middle East, as well as Europe.  These markets vary considerably in terms of maturity of course, and Kupper points out that OSS penetration is very much a lifecycle issue.  "When the market is very new, you just push out the lines," he comments.  "As long as the growth is there - say the subscriber growth rate is bigger than ten per cent a year - you're probably not too concerned about the quality of service or of the customer experience. 

"The investment in monitoring only really registers when there are at least three networks in a country and the focus is on retaining customers - because the cost of gaining new customers is so much higher than that of hanging on to the existing ones.

"Monitoring systems enable you to re-act quickly to problems.  And that's not just about ensuring against the revenue you might lose, but also the reputation you'll lose.  And today, that's an absolutely critical factor."

The future of OSS is, of course, intrinsically linked to the future of the telcos themselves.  Kupper notes that the discussion - which has been ongoing for some years now - around whether telcos will become mere dumb pipe providers, or will arm themselves against a variety of other players with content and tailored packages, has yet to be resolved.  In the meantime, however, he is confident that Nexus Telecom is going in the right direction.

"I believe our strategy is right.  We currently have one of the best concepts of how to capture traffic and deal with broadband data.

"The challenge over the next couple of years will be the ability to deal with all the payload traffic that mobile subscribers generate.  We need to be able to provide the statistics that show which applications, services and devices subscribers are using, and where development will most benefit the customer - and, of course, ultimately the operator."

Lynd Morley is editor of European Communications

Over the past few years the demand for data centre services has been expanding hugely, boosted by the growth of content-rich services such as IPTV and Web 2.0. With the increased bandwidth available, enterprises are hosting more of their applications and data in managed data centre facilities, as well as adopting the Software-as-a-Service (SaaS) model. David Noguer Bau notes that there's a long list of innovations ready to improve the overall efficiency and scalability of the data centre, but network infrastructure complexity may prevent such improvements - putting at risk emerging business models such as SaaS, OnDemand infrastructure, and more

The data centre is supposed to be the house of data - storage and applications/servers - but a quick look at any data centre makes it obvious that a key enabler is also hosted there: the network and security infrastructure.

The data centre network has become overly complex, costly, and extremely inefficient, limiting flexibility and overall scalability. Arguably, it is the single biggest hurdle preventing businesses from fully reaping the productivity benefits offered by other innovations occurring in the data centre, including server virtualisation, storage over Ethernet, and evolution in application delivery models. Traditional architectures that have stayed unchanged for a decade or more employ excessive switching tiers, largely to work around the low performance and low density of the devices used in those designs. Growth in the number of users and applications is almost always accompanied by an increase in the number of "silos" of further devices - both for connectivity and for security. Adding insult to injury, these upgrades introduce new, untested operating systems to the environment. The ensuing additional capital expense, rack space, power consumption and management overhead directly contribute to the overall complexity of maintaining data centre operations. Unfortunately, instead of containing the costs of running the data centre and reallocating the savings into the acceleration of productivity-enhancing business practices, the IT budget continues to be misappropriated into sustaining existing data centre operations.

Data centre consolidation and virtualisation trends are accelerating in an effort to optimise resources and lower costs. Consolidation, virtualisation and storage services are placing higher network performance and security demands on the network infrastructure. While server virtualisation improves server resource utilisation, it also greatly increases the amount of data traffic across the network infrastructure. Applications running in a virtualised environment require low latency, high throughput, robust QoS and high availability. Increased traffic per port and performance demands tax the traditional network infrastructure beyond its capabilities. Furthermore, the future standardisation of Converged Enhanced Ethernet (CEE) - which aims to integrate low-latency storage traffic - will place even greater bandwidth and performance demands on the network infrastructure.

Additionally, new application architectures, such as Service Oriented Architecture (SOA) and Web Oriented Architecture (WOA), and new services - cloud computing, desktop virtualisation, and Software as a Service (SaaS) - introduce new SLA models and traffic patterns. These heightened demands often require new platforms in the data centre, contributing to increased complexity and cost. Data centres are rapidly migrating to a high-performance network infrastructure - scalable, fast, reliable, secure and simple - to improve data centre-based productivity, reducing operational cost while lowering time to market for new data centre applications.

The way data centre networks have traditionally been designed is very rigid, based on multiple tiers of switches and not responding to the real demands of highly distributed applications and virtualised servers. By employing a mix of virtualisation technologies in the data centre network architecture as well - such as clusters of switches with VLANs and MPLS-based advanced traffic engineering, VPN-enhanced security, QoS, VPLS, and other virtualisation services - the model becomes more dynamic. These technologies address many of the challenges introduced by server, storage and application virtualisation. For example, the Juniper Networks Virtual Chassis technology supports low-latency live migration from server to server in completely different racks within a data centre, and from server to server between data centres in a flat Layer 2 network when those data centres are within reasonably close proximity. Furthermore, Virtual Chassis combined with MPLS/VPLS allows the Layer 2 domain to extend across data centres to support live migration from server to server when data centres are distributed over significant distances. These virtualisation technologies support the low latency, throughput, QoS and HA required of server and storage virtualisation. MPLS-based virtualisation addresses these requirements with advanced traffic engineering to provide bandwidth guarantees, label switching and intelligent path selection for optimised low latency, traffic separation as a security element, and fast reroute for HA across the WAN. MPLS-based VPNs enhance security with QoS to efficiently meet application and user performance needs.

As we can see, adding virtualisation technologies at the network level, as well as at server and application level, serves to improve efficiency and performance with greater agility while simplifying operations. For example, acquisitions and new networks can be quickly folded into the existing MPLS-based infrastructure without reconfiguring the network to avoid IP address conflicts. This approach creates a highly flexible and efficient data centre WAN.
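
The reason acquired networks can be folded in without renumbering is that MPLS VPNs keep each network's routes in a separate virtual routing table, so identical prefixes can coexist. A toy illustration of that idea in Python (a model of the concept, not a router implementation):

# Toy model: per-VPN route tables let overlapping prefixes coexist.
from ipaddress import ip_network

vrf_routes = {}   # (vpn_name, prefix) -> next hop

def add_route(vpn, prefix, next_hop):
    vrf_routes[(vpn, ip_network(prefix))] = next_hop

# The same 10.0.0.0/24 lives in both VPNs without conflict.
add_route("acquired-co", "10.0.0.0/24", "pe-router-3")
add_route("existing-dc", "10.0.0.0/24", "pe-router-7")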

A major trend is data centre consolidation: many service providers are looking to reduce from tens of data centres to three or four very large ones. The architecture of each new data centre network is challenging, and collapsing layers of switches alleviates this. However, with consolidation, the large number of sub-10Gbps security appliances (FW, IDP, VPN, NAT, with the corresponding HA and load balancing) becomes unmanageable and represents a real bottleneck. Traditionally, organisations have been forced to balance and compromise on network security versus performance. In the data centre space this trade-off is completely unacceptable, and the infrastructure must provide the robust network security desired with the performance to meet the most demanding application and user environments.

The evolution and consolidation of data centres will provide significant benefits; that goal can be achieved by simplifying the network, collapsing tiers, and consolidating security services. This network architecture delivers operational simplicity, agility and greater efficiency to the data centre. Applications and service deployments are accelerated, enabling greater productivity with less cost and complexity. The architecture addresses the needs of today's organisations as they leverage the network and applications for the success of their business.

David Noguer Bau, Service Provider Marketing EMEA, Juniper Networks
www.juniper.net

As users become increasingly intolerant of poor network quality, Simon Williams, Senior VP Product Marketing and Strategy at Redback Networks tells Priscilla Awde that, in order to meet the huge demand for speed and efficiency, the whole industry is heading in the same direction - creating an all IP Ethernet core using MPLS to prioritise packets regardless of content

Speed, capacity, bandwidth, multimedia applications and reliable any time, anywhere availability from any device - tall orders all, but these are the major issues facing every operator whether fixed or mobile. Meeting these needs is imperative given the global telecoms environment in which providing consistently high quality service levels to all subscribers is a competitive differentiator. There is added pressure to create innovative multimedia services and deliver them to the right people, at the right time, to the right device but to do so efficiently and cost effectively.

Operators are moving into a world in which they must differentiate themselves by the speed and quality of their reactions to rapid and global changes. Networks must become faster, cheaper to run and more efficient, to serve customers increasingly intolerant of poor quality or delays. It is a world in which demand for fixed and mobile bandwidth hungry IPTV, VoD and multimedia data services is growing at exponential rates leaving operators staring at a real capacity crunch.

To help operators transform their entire networks and react faster to demand for capacity and greater flexibility, Ericsson has created a Full Service Broadband initiative which marries its considerable mobile capabilities with similar expertise in fixed broadband technologies. With the launch of its Carrier Ethernet portfolio, Ericsson is leveraging the strength of the Redback acquisition to develop packet backbone network solutions that deliver converged applications using standards based IP MPLS (Multi Protocol Label Switching), and Carrier Ethernet technologies.

Committed to creating a single end-to-end solution from network to consumer, Ericsson bought Redback Networks in 2007, thereby establishing the foundation of Ericsson IP technology but most importantly acquiring its own router and IP platform on which to build up its next generation converged solution.

In the early days of broadband deployment, subscriber information and support was centralised, the amount of bandwidth used by any individual was very low and most were happy with best effort delivery. All that changed with growth in bandwidth hungry data and video applications, internet browsing and consumer demand for multimedia access from any device. The emphasis is now on providing better service to customers and faster, more reliable, more efficient delivery. For better control, bandwidth and subscriber management plus content are moving closer to customers at the network edge.

However, capacity demand is such that legacy systems are pushed to the limit handling current applications, let alone future services, while guaranteeing quality of service. Existing legacy systems are inefficient and expensive to run and maintain compared to next generation technologies that transmit all traffic over one intelligent IP network. Nor do they support the business agility or subscriber management systems that allow operators to react fast to changing markets and user expectations.

Despite tight budgets, operators must invest to deliver and ultimately to save on opex. They must reduce networking costs and simplify existing architectures and operations to make adding capacity where it is needed faster and more cost effective.

The questions are: which are the best technologies, architectures and platforms and, given the current economic climate, how can service providers transform their operations cost effectively? The answers lie in creating a single, end-to-end intelligent IP network capable of efficiently delivering all traffic regardless of content and access devices. In the new IP world, distinctions between fixed and mobile networks, voice, video and data traffic and applications are collapsing. Infonetics estimates the market for consolidating fixed and mobile networks will be worth over $14 billion by 2011 and Ericsson, with Redback's expertise, is uniquely positioned to exploit this market opportunity.

Most operators are currently transforming their operations and, as part of the solution, are considering standards-based Carrier Ethernet as the broadband-agnostic technology platform. Ethernet has expanded beyond early deployments in enterprise and metro networks: Carrier Ethernet allows operators to guarantee end-to-end service quality across their entire network infrastructure, enforce service level agreements, manage traffic flows and, importantly, scale networks.

With roots in the IT world where it was commonly deployed in LANs, Ethernet is fast becoming the de facto standard for transport in fixed and mobile telecoms networks. Optimised for core and access networks, Carrier Ethernet supports very high speeds and is a considerably more cost effective method of connecting nodes than leased lines. Carrier Ethernet has reached the point of maturity where operators can quickly scale networks to demand; manage traffic and subscribers and enforce quality of service and reliability.
 

"For the first time in the telecoms sector we now have a single unifying technology, in the form of IP, capable of transmitting all content to any device over any network," explains Simon Williams, Senior VP Product Marketing and Strategy at Redback Networks, an Ericsson company. "The whole industry is heading in the same direction: creating an all IP Ethernet core using MPLS to prioritise packets regardless of content.
 

"In the future, all operators will want to migrate their customers to fixed/mobile convergent and full service broadband networks delivering any service to any device anytime, but there are a number of regulatory and standards issues which must be resolved. Although standards are coming together, there are still slightly different interpretations of what constitutes carrier Ethernet and discussions about specific details of how certain components will be implemented," explains Williams.

Despite debates about different deployment methods, Carrier Ethernet, MPLS-ready solutions are being integrated into current networks, and Redback has developed one future-proof box capable of working with any existing platform.

An expert in creating distributed intelligence and subscriber management systems for fixed operators, and now for mobile carriers, Redback builds solutions that are both backward and forward compatible and can support any existing platform, including ATM, SONET, SDH or frame relay. Redback is applying its experience in fixed broadband architectures to solving the capacity, speed and delivery problems faced by mobile operators. As the amount of bandwidth per user rises, the management of mobile subscribers and data is being distributed in much the same way as happened in the fixed sector.

Redback has developed SmartEdge routers and solutions to address packet core problems and operators' needs to deliver more bandwidth reliably. SmartEdge routers deliver data, voice or video traffic to any connected device via a single box connected to either fixed or mobile networks. Redback's solutions are designed to give operators a gradual migration path to a single converged network that is more efficient and cost effective to manage and run.

In SmartEdge networks with built-in distributed intelligence and subscriber management functionality, operators can deliver the particular quality of service, speed, bandwidth and applications appropriate to individual subscribers.

Working under the Ericsson umbrella and with access to considerable R&D budgets, Redback is expanding beyond multiservice edge equipment into metroE solutions, mobile backhaul and packet LAN applications. Its new SM 480 Metro Service Transport is a carrier-class platform that can be deployed in fixed and mobile backhaul and transport networks, in Metro Ethernet infrastructure, and to aggregate access traffic. Supporting fixed/mobile convergence, the SM 480 is a cost effective means of replacing legacy transport networks and migrating to IP MPLS Carrier Ethernet platforms. The system can be used to build packet-based metro and access aggregation networks using any combination of IP, Ethernet or MPLS technologies.

Needing to design and deliver innovative converged applications quickly to stay competitive, operators must build next generation networks. Despite the pressures on the bottom line, most operators see the long-term economic advantages of building a single network architecture. Moving to IP MPLS packet-based transmission and Carrier Ethernet creates a content- and device-agnostic, future-proof platform over which traffic is delivered faster. Operators realise the cost and efficiency benefits of running one network in which distinctions between fixed and mobile applications are eliminated.

Although true convergence of networks, applications and devices may be a few years away, service providers are deploying the necessary equipment and technologies. IP MPLS and carrier Ethernet support both operators' needs for speed, flexibility and agility and end user demand for quality of service, reliability and anywhere, anytime, any device access.
 

"Ultimately however, there should be less focus on technology and more on giving service providers and their customers the flexibility to do what they want," believes Williams. "All operators are different but all need to protect their investments as they move forward and implement the new technologies, platforms and networks. Transformation is not only about technology but is all about insurance and investment protection for operators ensuring that solutions address current and future needs."

Priscilla Awde is a freelance communications journalist

With each day, the complexity of market offerings from telecommunication operators grows in scope. It is therefore vital to present the individual offers to end customers in an attractive, simple and understandable manner. Together with meeting target profits and other financial measures, this is the principal goal of the marketing department for all communication service providers says Michal Illan

Within the OSS/BSS environment, forming clear and understandable market offerings is equally important for the business as the factors described above. There is a huge difference between maintaining all key information about market offerings through various GUIs and different applications, and having it instantly at your fingertips in an organised manner. The latter option saves time and reduces the probability of human error, which makes a significant difference in both time-to-market and the accuracy of the offering, ordering and charging processes experienced by the end customer.

Market offerings have the following principal aspects, usually defined during the offer design process (a minimal data model sketch in code follows the list):

  • General idea (defining the scope of the offer)
  • Target market segment
  • Selection of applicable sales channels
  • Definition of services and their packaging
  • Definition of pricing
  • Definition of ordering specifics
  • Definition of the order fulfilment process
  • Marketing communication (from the first advertising campaign through to communication at points of sale and scripts prepared for call centre agents)
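
As a rough illustration of how these aspects might be held together in one organised structure - the "fingertips" view described above - the sketch below models a market offering as a single data object. The classes and field names are invented for the example and not drawn from any particular OSS/BSS product.

```python
# Minimal sketch of a market offering as a Product Catalogue entity.
# All class and field names are illustrative, not from a specific OSS/BSS product.
from dataclasses import dataclass
from typing import List

@dataclass
class PriceComponent:
    description: str
    amount: float          # e.g. monthly fee or per-unit rate
    unit: str              # "month", "minute", "MB", ...

@dataclass
class MarketOffering:
    name: str
    target_segment: str                  # e.g. "residential", "youth prepaid"
    sales_channels: List[str]            # e.g. ["web", "retail", "call centre"]
    services: List[str]                  # packaged services
    pricing: List[PriceComponent]
    fulfilment_process: str              # reference to an order fulfilment flow
    lifecycle_state: str = "design"      # design -> launched -> retired

offer = MarketOffering(
    name="Family Triple Play",
    target_segment="residential",
    sales_channels=["web", "retail"],
    services=["voice", "broadband", "IPTV"],
    pricing=[PriceComponent("bundle fee", 29.90, "month")],
    fulfilment_process="standard-triple-play-flow",
)
print(offer.lifecycle_state)  # "design": the offer is still being shaped
```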

It is apparent that market offerings are not static objects; on the contrary, they are very dynamic entities, and most of a communication provider's OSS/BSS departments have some stake in their success.

This leads directly to the key question: "Which environment can support a market offering and enable unified and cooperative access to it by appropriate teams during the proper phases of its lifecycle?"

The environment that addresses all of the above-mentioned aspects must take the form of an information system or application if it is to exist in practice.

Putting Clarity into Practice
The closest match to the requirements described above is an OSS/BSS building block called Product Catalogue. 
Product Catalogue is usually represented by the following three aspects:

  • A unified GUI that enables all key operations for managing a Market Offering during its lifecycle
  • Back-end business logic and a configuration repository
  • Integration with key OSS/BSS systems

In terms of integration, the functions supported by an ideal Product Catalogue also define the OSS/BSS systems it must connect to. Product Catalogue should be integrated with a market segmentation system (i.e. some BI or analytical CRM), ordering, order fulfilment, provisioning, charging and billing, and CRM. These systems should either provide data to Product Catalogue or use it as the master source of information related to market offerings.

The necessity of integration in general is unquestionable; the only remaining issue is determining how the integration will be done and what the overall cost will be. Which type of integration will take place depends on a number of factors discussed below.
 
The principal dilemma
There are three major options for positioning Product Catalogue within the OSS/BSS environment. Product Catalogue can be deployed as:

  • A standalone application
  • Part of a CRM system
  • Part of a Charging & Billing system

Product Catalogue as a Standalone Application
This option appears tempting at first: who can have a better Product Catalogue than a company exclusively specialising in its development? Unfortunately, troubles tend to surface later on regardless of the attractiveness of the application's GUI.

When a telecommunications operator has intelligent charging and billing processes in place, an advanced standalone Product Catalogue can still produce massive headaches related to the integration and customisation side of its deployment. Generally, telecom vendors are highly unlikely to guarantee compatibility with the surrounding OSS/BSS systems, or to provide confidential pricing logic definitions (or other advanced features) to a third-party vendor. What the operator gets is either a never-ending investment in customisations without a clear TCO or ROI, or multiple incompatible systems.

The key point is that all the charming features of a standalone Product Catalogue are effectively useless without the surety of seamless integration and excellent support from the surrounding OSS/BSS systems.

Product Catalogue as part of a CRM system
This is without a doubt a better option than the first choice because at least one side of the integration is guaranteed; if ordering is part of the overall CRM system, then two sides are in the safe zone.

The only disadvantage of such an approach is that the pricing logic of a CRM system's Product Catalogue is limited, if present at all. Consequently, there is no principal gain in implementing a unified Product Catalogue as long as the definition of the price model and some other key settings remain on the charging and billing side. Such a setup is quite far from the ‘unified environment' described at the beginning of this article.

Product Catalogue as part of a charging and billing system
Complex pricing logic and modelling is not only the major differentiator of an operator's market offering; it is also the key to profitability in every price-sensitive market. Even in markets where consumers demand inexpensive flat-rate offers, it is still VAS offers (many using complex pricing logic) that drive profits.
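
As a simple illustration of what pricing logic can mean in practice, the sketch below rates a call against an invented tiered tariff; real charging systems layer many more dimensions (time of day, destination, balance type) on top of this idea.

```python
# Illustrative tiered rating of a single call (invented tariff, not a real price plan).
def rate_call(duration_min: float) -> float:
    # (tier_size_minutes, rate_per_minute): first 5 min premium, next 25 cheaper, rest cheapest
    tiers = [(5, 0.20), (25, 0.10), (float("inf"), 0.05)]
    charge, remaining = 0.0, duration_min
    for tier_size, rate in tiers:
        used = min(remaining, tier_size)
        charge += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return round(charge, 2)

print(rate_call(3))   # 0.6 -> entirely within the first tier
print(rate_call(40))  # 4.0 -> 5*0.20 + 25*0.10 + 10*0.05
```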

Implementation on the charging and billing side is quite often the most challenging when compared to ordering or CRM, for example. Order fulfilment can also be quite a challenge, especially when introducing complex fixed-mobile convergent packages for the corporate segment; however, Product Catalogue itself has no major effect on simplifying it.

We can say that out-of-the-box compatibility between Product Catalogue and charging and billing significantly decreases a service provider's opex as well as markedly shortens time-to-market for the introduction of new market offerings and the modification of existing ones.

Because the overall functional richness and high flexibility in the areas of pricing and convergence are really the key features of charging and billing systems nowadays, out-of-the-box compatibility and reduced costs should facilitate the greatest gains on the service provider's side.

Business benefits
There are a variety of direct and indirect benefits linked to the implementation of Product Catalogue in the OSS/BSS environment. All of them are related to three qualities that accompany any successful introduction of Product Catalogue - clarity, accessibility and systematisation.

Clarity
Product Catalogue's design supports the management of market offering lifecycles, giving all involved parties within the telecommunication operator a better understanding of the related subjects, the level of their involvement and their role within the process. This decreases the level of confusion, which is usually unavoidable regardless of how well the processes are described on paper.

Accessibility
All market offerings are accessible and visible within a single environment, including the history of their changes and each offering's sub-elements. Anyone, according to their access rights, can view the sections of Product Catalogue applicable to their role.

There is no risk of discrepancies between market offering-related data in various systems, provided that the Product Catalogue repository is the master data source as stated above. Access to correct data is an important aspect of information accessibility in general.

Systematisation
Product Catalogue not only enforces a certain level of systematisation of market offering creation and maintenance processes but also stores and presents all related business entities in a systematic manner, by default taking their integrity enforced by business logic into account.

Measurable benefits
All three qualities - clarity, accessibility and systematisation - translate into two key terms: time and money. A successful implementation of Product Catalogue brings significant savings for the telecommunication operator as well as a considerable shortening of time-to-market for introducing new market offerings. If these two goals are not accomplished, such a project must be considered a failure.

Michal Illan is Product Marketing Director, Sitronics Telecom Solutions
www.sitronics.com

Ensuring the effectiveness and reliability of complex next generation networks is a major test and measurement challenge.  Nico Bradlee looks for solutions

Almost without exception the world's major service providers are building flat hierarchical next generation networks (NGNs), capable of carrying voice, data and video traffic. They are creating a single core, access independent network, promising lower opex and enabling cost effective, efficient service development and delivery.

Easy on paper, but not so easy to realise the promised capex and opex savings, speedy service launches and business agility. Unlike traditional PSTNs, where equipment handles specific tasks, the IP multimedia subsystem (IMS) is a complex functional architecture in which devices receive a multitude of signals. Ensuring QoS and guaranteeing reliability in such a complex network is a test and measurement (T&M) nightmare. Top of the list of operators' priorities are equipment interoperability, protocol definitions, capacity and roaming, which the industry is working to resolve.

According to Frost & Sullivan, the global T&M equipment market earned revenues of $27.4 million in 2007, a figure expected to rise to $1.2 billion by 2013. Ronald Gruia, principal analyst at Frost & Sullivan, suggests a change in thinking is needed: operators must reconsider capacity requirements and new ways of testing if they are to avoid surprises.

In the IMS environment there are exponentially more protocols and interfaces with networks and devices - legacy, fixed and wireless. Numerous functions interwork with others, and the number of signalling messages is an order of magnitude higher than in traditional networks. The situation is further complicated by a multi-vendor environment in which each function can be provided by a different supplier and, although conforming to standards, equipment may include proprietary features. The advantage is that operators can buy best-of-breed components and, provided they work together and conform to specifications, telcos can add functionality without investing in new platforms or changing the whole network architecture.

Like many new standards, IMS is somewhat fluid and open to interpretation. Although standards have been approved, they are often incomplete, are still evolving or may be ambiguous. Further, each of the different IMS standards organisations, which include 3GPP, ETSI, TISPAN and IETF, publishes regular updates. Vendors interpret standards according to the needs of their customers and may introduce new innovations which they refer to standards bodies for inclusion in future releases. "IMS standards don't define interoperability but interfaces and functions which may be misinterpreted or differently interpreted by vendors," explains Dan Teichman, Senior Product Marketing Manager, voice service assurance at Empirix.

The many IP protocols have advanced very rapidly but standards are still evolving so there is considerable flexibility and variation. "This is a new and exciting area," says Mike Erickson, Senior Product Marketing Manager at Tektronix Communications, "but it is very difficult to test and accommodate error scenarios which grow exponentially with the flexibility provided in the protocol.
 

"Rapid technology changes and variety make it difficult for people to become experts and it is no longer possible for customers to build their own T&M tools," continues Erickson. "However, new T&M systems are more intelligent, automated, easier to use and capable of testing the different types of access networks interfacing with the common core. Operators must be able to measure QOS and ensure calls can be set up end-to-end with a given quality - this facility must be built into the series of test tools used both in pre-deployment and in live networks."

IMS networks must be tested end-to-end: from the access to the core, including the myriad network elements, functions and the connections/interfaces between them. While the types of tests vary little from those currently used in traditional networks, their number is exponentially higher. "Tests break down into functional tests; capacity testing to ensure network components can handle both sustained traffic levels and surges; media testing - confirming multimedia traffic is transmitted reliably through the network; troubleshooting; and 24x7 network monitoring to identify anomalies and flag up problems," says Erickson. "The difference is that in relatively closed PSTNs, four to five basic protocols are being considered compared to hundreds in more open VoIP and IMS networks."

No single vendor or operator has the facilities to conduct comprehensive interoperability, roaming, capacity or other tests to ensure equipment conforms to different iterations of IMS or to test the multiple interfaces with devices, gateways and protocols typical in NGNs. The MultiService Forum, a global association of service and system providers, test equipment vendors and users, recently concluded its GMI 2008 comprehensive IMS tests of over 225 network components from 22 participating vendors. Five host labs on three continents were networked together creating a model of the telecoms world. Roger Ward, MSF President, says: "The results showed the overall architecture is complex and the choice of implementation significantly impacts interoperability. IMS protocols are generally mature and products interoperate across service provider environments. Most of the problems encountered were related to routing and configuration rather than protocols. IMS demonstrated the ability to provide a platform for convergence of a wide range of innovative services such as IPTV."

These essentially positive results support the need for continuous testing and monitoring before and during implementation, the results of which can be fed back into vendors' test and measurement teams for product development.

"Building products to emulate IMS functions means operators can buy equipment from multiple vendors, emulate and test functions before implementation and without having to build big test labs," says Teichman. "In IMS networks, T&M is not confined to infrastructure: the huge variety of user interfaces must be tested before implementation to avoid network service outages and QOS problems. While they have to test more functional interfaces, most traditional tests are still valid: although the methodology may be the same, the complexity is higher as many more tests are required to get the same information."

Operators face scalability issues as the number of VoIP users increases. The question, suggests Tony Vo, Senior Product Manager at Spirent, is whether IMS can support thousands of users. "Test solutions must generate high loads of calls. All tests are focused around SIP so tests must emulate different applications. GMI 2008 verified the issues and companies can now develop solutions. However, from a T&M perspective, no one solution can solve all problems."

Nico Bradlee is a freelance business and communications journalist

In an era of increased competition, convergence, and complexity, workforce management has become more important than ever. Field technicians represent a large workforce, and any improvements in technician productivity or vehicle expense can show huge benefits. Likewise, the effectiveness of these technicians directly impacts the customer experience. Deft management of this workforce is more important than ever and requires sophisticated tools, says Seamus Cunningham

Today's communications service providers (CSPs) in the wireless, wireline, or satellite market are providing service activation and outage resolution to their customers - and need to continually do it better, faster, and cheaper. Further, they must do it in an environment of increasing complexity, with new and converged services and networks, and with an ever-growing base of customers. CSPs additionally face global challenges (eg soaring gasoline prices and increased concern about carbon emissions), competitive pressures (eg corporate mergers, triple play offerings, and new entrants), and technological change. To achieve their desired results with such variables impacting their businesses, CSPs must take control of their workforce operations and focus on some combination of key business case objectives including:

  • Reduce operational costs
  • Improve overall customer experience
  • Rapidly deploy new and converged services.

Operational costs for a CSP are significant, especially given the current global financial and economic situation. Consider the total wireline operations of three US Regional Bell Operating Companies (RBOCs), which include operations related to voice and high-speed internet access in the local and interexchange parts of the network:

  • There are over 82,000 outside technicians and over 21,000 inside technicians.
  • Outside technicians have approximately 144 million hours (or 18 million days) and inside technicians have 37 million hours (or 4.6 million days) of productive time a year.
  • There are over 77 million outside dispatches a year and over 96 million inside dispatches a year.
  • The loaded (including salary and benefits) annual labour cost for outside technicians is $7.6 billion (or 15 per cent of their annual cash expense). The loaded annual labour cost for inside technicians is $1.8 billion (or 4 per cent of their annual cash expense).

These are just a subset of the operational costs of a wireline CSP. Similarly, there are significant operational costs in the wireless and satellite markets. Increasing competition continues to put pressure on CSPs to reduce expenses and increase profitability. Some areas that need to be addressed are discussed below.

Technicians are the single largest expense for CSPs. Therefore, introducing labour efficiency is critical for meeting expense objectives. CSPs could increase the number of customer visits in less time by ensuring the right technician is assigned to the right job at the right time. All too often, technicians are unable to do their assigned job because they do not have the right skill set or time to complete it.

Technician productivity can be further increased by optimising technician routes and reducing travel time and unproductive time. This has the added benefit of reducing fuel and vehicle maintenance expenses, and can deliver significant fuel and carbon emission savings.
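
A minimal sketch of the "right technician, right job" idea discussed above: a greedy heuristic that assigns each job to the nearest qualified technician. The data and the heuristic are invented for illustration; production dispatch systems optimise over shifts, appointment windows and many further constraints.

```python
# Greedy sketch: assign each job to the nearest qualified technician.
# Data and heuristic are invented; real dispatch systems do far richer optimisation.
import math

technicians = [
    {"id": "T1", "skills": {"DSL", "copper"}, "pos": (0.0, 0.0)},
    {"id": "T2", "skills": {"FTTx"},          "pos": (5.0, 1.0)},
]
jobs = [
    {"id": "J1", "skill": "FTTx", "pos": (4.0, 2.0)},
    {"id": "J2", "skill": "DSL",  "pos": (1.0, 1.0)},
]

def travel(a, b):
    return math.dist(a, b)  # straight-line distance as a proxy for drive time

for job in jobs:
    qualified = [t for t in technicians if job["skill"] in t["skills"]]
    if not qualified:
        print(f"{job['id']}: no qualified technician")
        continue
    best = min(qualified, key=lambda t: travel(t["pos"], job["pos"]))
    best["pos"] = job["pos"]  # the technician continues from this job site
    print(f"{job['id']} -> {best['id']}")  # J1 -> T2, J2 -> T1
```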

A CSP can increase dispatcher productivity by automating existing dispatcher functions such as work assignments and load imbalance resolution and thereby make the dispatcher an exception handler. This way, a dispatcher can focus on the "out of norm" conditions rather than on functions that can be automated.

Consolidation of dispatch systems and processes can reduce CSP expenses and increase efficiency. Integration of dispatch systems for wireless, wireline, or satellite telecommunications operators can sequence, schedule, and track field operations activities for:

  • Service activation and service assurance work for all types of circuits and services
  • All technicians (outside, inside central/switching office, installation and repair, cable maintenance, cell tower technicians)
  • Broadband or narrowband networks
  • A complete range of technologies, products, and services, eg triple play (video, data, and voice networks), fibre (FTTx), DSL, HFC, SONET/SDH, ATM, and copper.

Maintaining separate dispatch systems or processes for different areas of business is expensive and inefficient. A single workforce management system to manage all technicians across all aspects of the company can help.

A CSP can reduce time-to-market for new products and services by streamlining their workforce management system integration with business and operations support systems (e.g., service fulfilment, service assurance, customer relationship management [CRM], and field access systems) and automating their flow-through of service orders and tickets. For some CSPs, this could involve integrating with multiple service activation, trouble ticketing, and CRM systems.

When providing service or outage resolution to their customers, CSPs need to ensure their customers are satisfied and that a customer's overall experience while dealing with the CSP is positive. Certainly, it is impossible to keep everyone happy all of the time; however, there are things the CSP can do to help ensure the customer experience is a positive one.

For example, CSPs can improve appointment management by providing the means for service representatives to offer valid, attainable appointments to their customers (based on actual technician availability) and then successfully meet those appointments. CSPs must also make provisions to offer narrow appointment windows to customers as well as provide automated, same-day commitment management. No one wants to wait a long time for a technician to begin with, much less wait and then have the technician show up late or not at all!

The overall customer experience can be improved by keeping the customer up-to-date and informed through increased communication. For example, keeping the customer up-to-date on a technician's estimated time of arrival at the customer premises can go a long way toward overall customer satisfaction. Also, keeping the technician well informed about the services a given customer has, so the technician is prepared to answer customer questions accurately, as well as provide instruction on how to use the services, can add to a positive customer experience.

Finally, through effective and efficient workforce monitoring and operations management, CSPs can track key performance metrics, such as mean time to repair (MTTR), which will help them see the effect of business changes on their service activation and network outage times. CSPs also need to ensure that they meet their customers' Service Level Agreements (SLAs), because customers have paid for a certain level of installation or maintenance support and should get it.
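
MTTR itself is a simple calculation once trouble tickets are captured consistently; the sketch below (with invented field names) averages repair time over closed tickets.

```python
# Minimal MTTR (mean time to repair) computation over closed trouble tickets.
# Field names and data are illustrative.
from datetime import datetime

tickets = [
    {"opened": datetime(2009, 1, 5, 9, 0),  "restored": datetime(2009, 1, 5, 13, 30)},
    {"opened": datetime(2009, 1, 6, 8, 15), "restored": datetime(2009, 1, 6, 10, 45)},
]

repair_hours = [
    (t["restored"] - t["opened"]).total_seconds() / 3600 for t in tickets
]
mttr = sum(repair_hours) / len(repair_hours)
print(f"MTTR: {mttr:.1f} hours")  # (4.5 + 2.5) / 2 = 3.5 hours
```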

Another key business case objective is to rapidly deploy new (eg triple play) services and improve time-to-market by providing easy integration with new systems and services.

CSPs must integrate their existing operations and system algorithms with new technology (eg xPON, FTTx, bonded DSL). In order to quickly get a new service or technology to market, CSPs must quickly update their business processes and systems to support it. This way, they can focus on providing and maintaining the new service or technology for their customers.

By utilising a flexible and configurable workforce management system, CSPs can meet their ever-changing business needs and challenges through user-tunable reference data that enhances their flows. This allows the CSP to process a new service differently from other services and meet changing business needs and requirements. For example, a new service offering may require additional information that the workforce management system can use to uniquely route, job-type and price data and video work.

CSPs must make next generation assignment and services information readily available to all technicians, and provide them easy access to all necessary data, in order to minimise the effort needed to understand the relationships between domains (eg infrastructure, DSL, Layer 2/3 services). Also, by capturing the relationships between domains, the system can minimise truck rolls and the number of troubles by correlating root-cause problems that impact multiple domains (eg a Layer 1 outage as the root cause of Layer 2 and Layer 3 troubles).

The decisions a CSP makes about its workforce management solution will greatly impact business results. CSPs can make the right decisions by considering all aspects of workforce management operations: process, people, network, technology and leadership. It is not just about selecting a system, but about understanding the impact of the process on employees and, ultimately, delivering excellent satisfaction to customers.

Seamus Cunningham is Principal Product Manager at Telcordia.
www.telcordia.com

Next Generation Access (NGA) will dramatically increase broadband speeds for European consumers and business over the coming years. However, it also threatens to disrupt established modes of competition and raises complex issues for telecommunications regulation according to Bob House and Michael Dargue

In traditional telco access networks, the architecture of the copper network lent itself to infrastructure-based competition in the form of Local Loop Unbundling (LLU). In countries such as the UK and France, service providers invested in LLU, creating price-competitive broadband markets rich in innovation and service differentiation.

Looking forward, it is unlikely that the same degree of infrastructure-based competition will exist in an NGA world. The economics of laying fibre or deploying electronics in street cabinets do not favour multiple access networks. Furthermore, unbundling may not be technically possible in certain situations, for example where the incumbent chooses Passive Optical Networking (PON) for its fibre-to-the-home (FTTH) network.

In geographies where infrastructure-based alternatives are technically or economically unviable, service providers will be forced to rely on wholesale bitstream from the network operator to serve their end customers. Such wholesale offers have historically consisted of simple bitstream services or resale of the incumbent's retail offer, supporting little or no differentiation. NGA therefore risks eroding the competitive benefits won through LLU.

Strategically, telecommunications regulators see benefits from NGA but want to maintain a high degree of service innovation and consumer choice. The question is how to achieve this with wholesale access.

In the UK, Ofcom sees wholesale access as a necessary complement to infrastructure-based competition in NGA. Ofcom is therefore supporting the development of fit-for-purpose wholesale products. Ofcom is not attempting to specify the products directly, but has worked with industry to define a desirable set of characteristics for NGA wholesale access products: a concept it terms Active Line Access (ALA). The intention is that an ALA-compliant product would provide a service provider with a degree of control as close as possible to that of having its own network - a step change from traditional wholesale access.

ALA has five key characteristics:

  • Flexibility in selection of the aggregation or interconnect point;
  • Ability to support QoS;
  • Flexibility in the types of user-network interface and CPE that can be supported;
  • Ability to guarantee network and service security and integrity;
  • Ability to support multicast services.

In addition to the capabilities, Ofcom and the industry identified Ethernet as the most appropriate technology to realise ALA. Ethernet was chosen for its widespread adoption, support for a wide range of physical media, and its transparency to higher layer protocols.

Having agreed the characteristics of Ethernet ALA, Ofcom's next step was to understand whether there were barriers to realising the ALA concept in practice. To this end, Ofcom engaged industry consultants CSMG to develop case studies of real-world wholesale Ethernet-based access services, and to assess the extent to which they embodied the desired characteristics of ALA. The case studies were drawn from international markets and were selected to cover a range of network architectures and market segments.

COLT was included in the study to provide an example of wholesale Ethernet delivered over a copper network. Although best known for its fibre optic metro area networks, COLT has increased its network reach using Ethernet in the First Mile (EFM) over LLU. COLT's wholesale services are available across both infrastructures and include Internet Access, Ethernet Services, IP-VPN and VoIP.

Of the fibre-based examples, Optimum Lightpath has a metro ring architecture in cities on the East coast of the USA. Optimum Lightpath uses Ethernet in the access network to transport its business-focussed voice, data and video services and also to serve the wholesale service provider market.

In Canada, Telus offers wholesale Ethernet access over both its metro fibre rings and point-to-point fibre access networks. Telus uses Ethernet access to provide E-Line and E-LAN services for business customers, emulating leased lines and LANs respectively.

Although it does not have a wholesale offer, Iliad was included because it uses Ethernet to deliver retail triple-play services on its FTTH network in France. In the wholesale market, Iliad plans to offer unbundled fibre access rather than an active Ethernet service.

BBned, in the Netherlands, provided an example of an alternative operator using point-to-point fibre to serve residential and business end-users. BBned's FTTH footprint includes Amsterdam, where it operates the active layer of Amsterdam's CityNet network.

Also in the Netherlands, KPN offers a spectrum of wholesale access options including unbundled fibre and copper. Its wholesale Ethernet service is known as "Wholesale Broadband Access" (WBA) - first launched on ADSL in 2006 and extended to VDSL and FTTH in 2008.

Finally, as an example of wholesale Ethernet services on a Passive Optical Network, we included NTT's layer 2 "LAN Communications" service which is available across both its PON and point-to-point access fibre networks in Japan.

CSMG developed the case studies through a series of interviews with technical and product marketing executives from the network operators. Input was also taken from service provider customers, national regulators and vendors to provide a 360° view.

Looking at the first of the five characteristics, we found considerable flexibility in the range of interconnect and aggregation options. A range of interconnect points was available, enabling aggregation of traffic at local, regional and national levels. One operator also offered international aggregation, i.e. a single interconnect could be used to reach end-users in multiple countries.

We also found strong support for QoS, with network operators adopting one of two approaches. The first was to guarantee the bandwidth of individual access connections. The second was to classify traffic (e.g. voice, video and data) and provide guarantees for the performance of traffic within each class. Guaranteed bandwidth was popular in the business market, where end-customers were using Ethernet services as substitutes for leased lines. Class of Service was more popular in the consumer market, as it enables network capacity to be shared and hence supports lower cost services.
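
To illustrate the first approach, the sketch below implements a basic token-bucket policer of the kind commonly used to enforce a committed rate on a single access connection; the class-of-service approach would instead mark and queue traffic per class. The parameters are illustrative only.

```python
# Simple token-bucket policer: enforces a committed rate on one access connection.
# Rates and sizes are illustrative, not taken from any operator's product.

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps          # committed information rate
        self.capacity = burst_bits    # maximum burst size
        self.tokens = burst_bits
        self.last = 0.0

    def allow(self, packet_bits: float, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True                # in contract: forward
        return False                   # out of contract: drop or remark

bucket = TokenBucket(rate_bps=10_000_000, burst_bits=1_500_000)  # 10 Mbps CIR
print(bucket.allow(12_000, now=0.001))  # a 1500-byte packet within the burst: True
```
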
In terms of flexibility at the user-network interface, in all but one of the case studies the network operator installed an active device at the customer's site to present Ethernet ports towards the customer. We found it was common practice for service providers to add their own CPE, resulting in two devices in the customer's home or office. At the time of the study, KPN was unique in providing a ‘wires-only' service; however, given historic trends we expect wires-only presentation to become more common in NGA over time.

The ability to guarantee security and integrity was largely determined by the architecture adopted by the network operators and the functionality of their network equipment. The primary techniques in play were to separate customer traffic logically and lock down vulnerable communications, e.g. using VLANs, controlling broadcast traffic, and preventing user-to-user communication at Layer 2. The shared-access medium in PON introduces additional potential risks in terms of eavesdropping and denial of service, which service providers will need to consider in designing their retail propositions.

Of the five ALA characteristics, the one with least support was multicast. Only BBned and Optimum Lightpath had incorporated multicast into their wholesale offers, although the majority of network operators employed it to carry their retail services (e.g. television broadcast or video conferencing). Without access to multicast, it is unlikely that service providers would be able to offer competing retail services as the bandwidth cost of unicasting the traffic would be prohibitive.

Returning to the overall objective of the research, the case studies demonstrate that examples of most ALA characteristics can already be found in real-world wholesale Ethernet access services. The presence of these characteristics in commercially available wholesale offers gives credence to the vision of ALA-compliant services being realised in practice. The study therefore supports the view that Ethernet ALA would be a useful component of a future regulatory toolkit for NGA.

Going forward, having established the ALA concept, Ofcom is now working with industry to promote the standardisation of Ethernet ALA. Ofcom sees ALA as having European, if not global, relevance, and therefore plans to hand over the technical requirements to standards bodies as a next step. International standardisation would enable widespread adoption by network operators and in turn deliver global scale economies in ALA-compliant infrastructure. Network operators stand to benefit from attracting service providers to their networks, and for service providers ALA creates the opportunity for control and differentiation without the need to own infrastructure. Finally, for end customers, ALA promises to support a competitive and innovative market for broadband services.

Bob House and Michael Dargue are senior members of CSMG's London office.
Further information on Next Generation Access and Ethernet ALA can be found at the following websites:
www.ofcom.org.uk/telecoms/discussnga
www.csmg-global.com

A crucial element in building a wholesale VoIP business and maintaining competitive edge in a harsh business environment is the choice of equipment that forms the core of the company's operation, says Nico Bradlee

With VoIP prospects looking bright thanks to new technologies and a plentiful choice of VoIP solutions, the market presents an inviting opportunity for starting your own business. VoIP has entrenched itself in the telecommunications world, and competitive carriers are exploring the numerous ways to derive benefits from this lucrative technology.

The wholesale VoIP market used to be overwhelmed with a huge number of players from different leagues. The popularity of wholesale VoIP was easy to explain: you are your own boss, you sell a product that can almost sell itself, and it requires minimal investment both in terms of capex for equipment and in human resources.

But looking back over the past several years, we can see that harsh reality intruded and small players could no longer compete with large-scale telecommunication tycoons. Competition being the lifeblood of technological progress, it remains an essential prerequisite for the development of any market, to say nothing of VoIP. Competition is actually the driving force that enables carriers to generate new revenues, and equipment vendors to offer new automated tools for them.

Nowadays the VoIP market is undergoing a transformation that affects the scale of the businesses present in it. The number of transit operators is shrinking due to margin reduction. This presents an additional challenge for the wholesale market's newcomers and poses another reasonable question: how to join the VoIP race and survive in this hard-bitten business world? One of the crucial elements of a strategy to build a brand new wholesale VoIP business will be the right choice of the equipment at the core of the company's operations. So let's take a look at class 4 switching equipment from the leading brands and get to the bottom of how to choose a switch that will keep your network from going downhill.
Reviewed brands and products: Sansay, Nextone, MERA Systems, Audiocodes and Acme Packet.

The platform: hard or soft?
It is worth noting that the overwhelming majority of vendors use hardware platforms in their switching equipment, though there is no definitive answer as to whether a hardware or software platform is preferable, since both have their pros and cons.

A hardware platform requires no additional equipment and ships on a pre-installed server, so you don't need to source an appropriate base. All the vendors we picked out utilise hardware-based solutions, apart from MERA Systems, which uses a software platform for its switches. The advantages of a software-based switch are also notable: you can install the software on an existing server, there is no need to return to the vendor for a replacement in case of a defect, and if the carrier chooses to relocate the server there is no call for its physical replacement.

Operating System
When it comes to the operating system, there is also no right or wrong answer on which OS to use. The majority of developers use Linux, and it's quite understandable: it is Linux's universality, wide application and compatibility with servers and third-party systems that made Sansay, Nextone, MERA Systems and Audiocodes opt for Linux in their switching equipment. On the other hand, a proprietary platform can offer enhanced functionality and a competitive advantage over other market players; Acme Packet therefore uses its own OS as an application base, allowing increased performance.

Functionality
The functionality of switches varies greatly, and has taken a big step forward thanks to technological progress. As the majority of operators accustomed to H.323 have started to use SIP, all of the leading vendors support conversion between the H.323 and SIP protocols, ensuring interoperability between equipment from various vendors. Additionally, Sansay and Acme Packet support the MGCP protocol, and Acme can also work with H.248.

As for voice codec conversion, only Acme Packet and MERA Systems support it in their switching equipment. Acme Packet's functionality includes transcoding, that is, translation between wireline and wireless codecs; transrating, mediating between variations in packetisation interval (eg 10ms to 30ms); and DTMF translation. MERA Systems' softswitches ensure conversion of a wide range of codecs: G.729, G.729A, G.729AB, G.723.1, G.711 A/U, GSM FR, Speex, iLBC.
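
As a rough illustration of why transcoding support matters, the sketch below first looks for a codec common to both call legs and falls back to transcoding where the switch supports both codecs. The codec set and logic are invented for the example, not taken from any vendor's implementation.

```python
# Illustrative codec negotiation: direct match first, transcode as a fallback.
# The set of transcodable codecs is invented for this example.
SWITCH_TRANSCODABLE = {"G.729", "G.729A", "G.723.1", "G.711A", "G.711U", "GSM FR", "iLBC"}

def negotiate(ingress_codecs, egress_codecs):
    common = [c for c in ingress_codecs if c in egress_codecs]
    if common:
        return ("direct", common[0])               # no media processing needed
    for a in ingress_codecs:
        for b in egress_codecs:
            if a in SWITCH_TRANSCODABLE and b in SWITCH_TRANSCODABLE:
                return ("transcode", f"{a}->{b}")  # the switch converts the media
    return ("reject", None)                        # no way to connect the call

print(negotiate(["G.729"], ["G.711U"]))   # ('transcode', 'G.729->G.711U')
print(negotiate(["G.711A"], ["G.711A"]))  # ('direct', 'G.711A')
```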

An important feature of switching equipment is support for encryption protocols. Built-in support of TLS and IPSec is offered by Nextone, Audiocodes and Acme Packet. Support for MTLS and SRTP, and easy messaging between them, is also specified in Acme Packet equipment.

Capacity
As hundreds of vendors around the world have started to manufacture networking equipment to meet increased demand, the capacity of switches has grown to meet the requirements of different types of carriers. For instance, Nextone equipment, which is capable of handling up to 25,000CC, and Audiocodes equipment, which allows for 21,000CC, are targeted at Tier 1 and Tier 2 operators. Acme Packet, whose products are designed first and foremost for Tier 1 operators, also focuses on networks that handle at least 5,000CC, and Acme provides this performance on a single server. Sansay and MERA Systems' products represent ideal solutions for Tier 3 and Tier 4 carriers whose networks process up to 7-10K concurrent calls in their most effective configuration.

Billing
No matter how productive and scalable your switching equipment is, for an effective business you need a flexible third-party billing system to collect information about telephone calls and other services that are to be billed to the subscriber. A couple of good examples are the Cyneric and Jerasoft billing systems. Of the vendors in the above list, an all-in-one solution (one that doesn't require a separate billing system for the business to be operational) is only offered by MERA Systems. Its softswitch is a ready-to-go product with enhanced billing capabilities.

Pricing policy and target audience
Needless to say, the products considered in this article, while comparable in terms of switching functionality, are still designed for different types of carriers. While Nextone, Audiocodes and Acme Packet products deal with large amounts of traffic, MERA Systems and Sansay concentrate on solutions for small and medium-sized wholesale businesses, offering maximum functionality in their switching equipment.

To put the whole thing in a nutshell, each vendor concentrates on a different sector of the wholesale business, which explains the differences examined in this overview. It's up to carriers to make a choice and opt for the equipment that best serves their business purposes.

Nico Bradlee is a freelance business and communications journalist.

According to a recent poll, the revenue from current generation messaging services will continue to eclipse that from data services for at least the next four years - around the same time as we'll see wide scale deployment of Service Delivery Platforms. This creates something of a revenue void. Added to this, termination fees and roaming charges, where telcos are making their money today, face an uncertain future as termination-free IP networks are rolled out (if the EU has its way). New advertising and ‘content sponsorship' business models offer hope, as more third party brands are encouraged into the arena. However, in the short term telcos must rely on doing what they do best - selling telecoms services - but in a much cleverer way. Smart services, adding a little more intelligence to the call, could be the key to filling this void. But could they also be the catalyst for bringing advertising revenues to the fore? Jonathan Bell investigates

Hindsight is a wonderful thing - especially when it comes to evaluating the success or otherwise of past visionary ambitions of our industry. It only seems like yesterday that all the predictions and industry research confirmed that by this year our happy customers would be drowning in an interactive environment of data rich media services delivered direct to their handsets. More importantly, by this time, the world's telecoms service providers would have morphed into true content and entertainment companies, leveraging their ownership of customer relationships, access networks and billing systems to dominate this emerging value chain.

The reality today is rather more disappointing. Voice and messaging services continue to make up the great bulk of most mobile service providers' revenues - even as these are eroded by voice commoditisation. Other commercial entities from outside the world of traditional telecoms are actively seeking their own paths to market domination, potentially reducing operators to bit-pipe players, while everyone scrambles to gain their share of an increasingly fickle and disloyal market.

So, what is to be done? One strategy already successfully adopted by service providers in both developed and developing markets is to introduce some form of advertising supported or brand sponsored services. While business models vary, these essentially translate into customers being able to make or receive calls (and messages) in exchange for exposure to adverts or, in some cases, for various types of content such as ringtones, ringback tones and wallpapers.

For the service provider, this type of activity could surely result in lower churn, higher loyalty and much-needed additional revenues while the infrastructures and technologies that can deliver truly rich services are being developed and deployed.

Of course, this is only the first stage for ad-funded mobile usage. The next step requires a degree of personalisation. Being able to target the customer more effectively will be key when justifying larger budget requirements from advertisers. This is perhaps one reason why telecoms executives polled at the recent SDP Summit were charmed by the idea of increasingly ‘smart' voice and messaging services. And you can see why.

The ability to add an element of targeting through the use of location and presence data certainly takes us some way down the line towards true personalisation.

In addition, adding intelligence to traditional ‘dumb' voice and messaging applications also offers consumers a degree of personalised call control and, because the services are very visible, they have a clear value to the user. This further reduces revenue erosion and churn.

So far, both research and practice indicate that such ad-funded models are serious and truly viable options - if the service provider gets it right from the start.

According to findings last year by market research company Harris Interactive, 35 per cent of adult US mobile phone users would be happy to accept incentive-based adverts. Of these, more than three quarters saw the best incentives as being simply financial, in terms of refunds or free call minutes, with smaller numbers in favour of free downloads such as games or ringtones. More interesting - at least in the context of how service providers should best structure their SDP platforms - was that around 70 per cent of those interested in receiving adverts would be happy to provide personal information on their interests, likes and dislikes to their service providers if they can have a service customised to their needs.

On the practical side, we can see the success of service providers like US-based Kajeet and the UK's Blyk. Both are targeted at the youth/child end of the market and both use various forms of sponsorship and advertising. Indeed, in the case of Kajeet, parents can also control user profiles, place calling and texting restrictions and manage call balances.

This combination of research and comparative commercial success, at least so far, does highlight one positive direction that mobile service providers can consider taking to avoid the dangers of disintermediation and eroding revenues. But the real magic is in bringing together multiple facets and contexts for each user or demographic. Service providers must then target groups of users to make the advertising truly personal and relevant - and not an annoying hindrance.

To create such an environment, the SDP platform required must have certain characteristics in terms of its ability to combine both fixed and changeable information about the user - from user-defined areas of interest or tariffing plans, to a user's particular location at any given moment.
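
A minimal sketch of this combination of fixed and changeable information: static profile attributes plus live location and presence feed a simple eligibility check. All fields, segments and adverts here are invented for illustration.

```python
# Hypothetical ad-targeting sketch: combine static profile data with live context.
# All fields, segments and adverts are invented for illustration.

profile = {"user": "alice", "age_band": "18-24", "interests": {"music", "games"}}
context = {"location": "shopping_district", "presence": "available"}

ads = [
    {"ad": "gig tickets",  "needs_interest": "music", "needs_location": None},
    {"ad": "cafe voucher", "needs_interest": None,    "needs_location": "shopping_district"},
]

def eligible(ad, profile, context):
    if context["presence"] != "available":
        return False                      # don't interrupt busy users
    if ad["needs_interest"] and ad["needs_interest"] not in profile["interests"]:
        return False
    if ad["needs_location"] and ad["needs_location"] != context["location"]:
        return False
    return True

print([a["ad"] for a in ads if eligible(a, profile, context)])
# ['gig tickets', 'cafe voucher']
```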

As can be seen on any social networking site, today's youth are far more relaxed - at least for the present - about sharing personal attributes and information. Mobile service providers should be ready to exploit this to increase the stickiness of their own services, while growing their relationships with brand and content owners. If we can be smart with location and presence in the short-term, the longer term opportunities of increased levels of personalisation are much more achievable.

Of course, this requires the service provider themselves to develop and roll out services in a far more open and experimental manner than they have had to in the past. It also demands that they be ready and prepared to rapidly scale these up to mass-market offerings as opportunities emerge. And in turn, this requires a high degree of flexibility within core network infrastructure, billing and provisioning systems, and of the application itself - which brings us back to a standards-based approach.

The alternative is to be left out of this new value chain and see strategic assets - like network ownership, billing and customer identity relationships, and location information - exploited by more nimble outsiders with a better understanding of customer behaviours.

More than this, however, by utilising conventional voice and messaging services both to enable and to deliver more targeted advertising, the truly adverse impacts of voice commoditisation, and the subsequent revenue loss, may be averted - at least in the short term.

Jonathan Bell is VP Product Marketing, OpenCloud

    
