Features

While IP appears to have simplified telecoms, Christoph Kupper, Executive Vice President of Marketing at Nexus Telecom, tells Lynd Morley that the added complexity of monitoring the network - due largely to exploding data rates - has led to a new concept providing both improved performance and valuable marketing information

Nexus Telecom is, in many ways, the antithesis of the now predominant imperative in most industries - and certainly in the telecoms industry - which requires wholesale commoditisation of services; an almost exclusive focus on speed to market; and a fast response to instant gratification.

Where the ruling mantra is in danger of becoming "quantity not quality" in a headlong rush to ever greater profitability (or possibly, mere survival), Nexus Telecom calls something of a halt, focussing the spotlight on the vital importance of high quality, dependable service that not only ensures the business reputation of the provider, but also leads to happy - and therefore loyal - customers.

Based in Zurich, Nexus Telecom is a performance and service assurance specialist, providing data collection, passive monitoring and network service investigation systems.  The company's philosophy centres around the recognition that the business consequences of any of the network's elements falling over are enormous - and only made worse if the problem takes time to identify and fix.  Even in hard economic times, the investment in reliability is vital.

The depressing economic climate does not, at the moment, appear to be hitting Nexus Telecom too directly. "Despite the downturn, we had a very good year last year," comments Christoph Kupper, Executive Vice President of Marketing at Nexus Telecom. "And so far, this year, I don't see any real change in operator behaviour. There may be some investment problems while the banks remain hesitant about extending credit, but on the whole, telecom is one of the solid businesses, with a good customer base, and revenues that are holding up well."

The biggest challenge for Nexus Telecom is not so much the economy as one of perception and expectation, with some operators questioning the value and cost of OSS tools - which, relative to the total cost of the network, has increased over the years.  In the past few years the price of network infrastructure has come down by a huge amount, while network capacity has risen.  But while the topological architecture of the network is simplifying matters - everything running over big IP pipes - the network's operating complexity is vastly increasing.  So the operator sees the capital cost of the network being massively reduced, but that reduction isn't being mirrored by similarly falling costs in the support systems.  Indeed, because of the increased complexity, the costs of the support systems are going up.

Complexity is not, of course, always a comfortable environment to operate in.  Kupper sees some of the culture clash that arises whenever telecom meets IT, affecting the ways in which the operators are tackling these new complexities.

"In my experience, most telecom operators come from the telco side of the road, with a telecom heritage of everything being very detailed and specified, with very clear procedures and every aspect well defined," he says.

"Now they're entering an IP world where the approach is a bit looser, with more of a ‘lets give it a try' attitude, which is, of course, an absolute horror to most telcos."

Indeed, there may well be a danger that network technology is becoming so complex that it is now getting ahead of some CTOs and telecom engineers.

"There can be something of a ‘fear factor' for the engineers, if ever they have an issue with the network," Kupper says.  "And there are plenty of issues, given that these new switching devices can be configured in so many ways that even experienced engineers have trouble doing it right.

"Once the technical officers become fully aware of these issues, the attraction of a system such as ours, which gives them better visibility - especially independent visibility across the different network domains - is enormous.

"It only takes one moment in a CTO's life when he loses control of the network, to make our sale to him very much easier."

The sales message, however, depends on the recognition that increased complexity in the network requires more, not less, monitoring, and that tools which may be seen as desirable but not absolutely essential (after all, the really important thing is to get the actual network out there - and quickly) are, in fact, vital to business success.  Not always an easy message to get across to those whose background in engineering means they do not always think in terms of business risk.

Kupper recognises that the message is not as well established as it might be. "We're not there yet," he says.  "We still need to teach and preach quite a lot, especially because the attraction of the ‘more for less' promise of the new technology elements hides the fact that operational expenditure on the management of a network with vastly increased traffic and complexity is likely to rise."

The easiest sales are to those technical officers who have a vision, and who are looking for the tools to fulfil it.  "They want to have control of their networks," says Kupper. "They want to see their capacity, be able to localise it, and see who's affected."

And once Nexus Telecom's systems are actually installed, he stresses, no one ever questions their necessity. 

"The asset and value of these systems is hard to prove - you can't just put it on the table. It's a more complicated qualitative argument that speaks to abstract concepts of Y resulting from the possible failure of X, but with no exact mathematical way to calculate what benefits your derive from specific OSS investment."

So the tougher sales are to the guys who don't grasp these concepts, or who remain convinced that any network failure is the responsibility of the network vendors, who must therefore provide the remedy - without taking into account how long that might take, the subsequent impact on client satisfaction and, ultimately, business success.

These concepts, of course, are relevant to the full range of suppliers, from wireline and cable operators to the new mobile kids on the block.  Indeed, Kupper stresses that with the advent of true mobile data broadband availability, following the change to IP, and the introduction of flat rates to allow users to make unlimited use of the technology, the cellular operator has positioned himself as a true contender against traditional wireline and cable operators.

Kupper notes: "For years in telecommunications, voice was the data bearer that did not need monitoring - if the call didn't work, the user would hang up and redial - a clearly visible activity in terms of signalling procedure analysis.

"But with mobile broadband data, the picture has changed completely.  It is the bearer that needs analysis, because only the bearer enables information to be gleaned on the services that the mobile broadband user is accessing.  The network surveillance tools, therefore, must not only analyse the signalling procedure but also, and most importantly, the data payload.  It is in the payload that we see if, for example, Internet browsing is used, which URL is accessed, which application is used, and so forth. And it is only the payload, for which the subscriber pays!"

He points out that as a consequence of the introduction of flat rates and the availability of 3G, data rates have exploded.

"It is now barely possible to economically monitor such networks by means of traditional surveillance tools.  A new approach is needed, and that approach is what we call ‘Intelligent Network Monitoring'. At Nexus Telecom we have been working on the Intelligent Network Monitoring concept for about two years now, and have included that functionality with every release we have shipped to customers over that period.  Any vendor's monitoring systems that do not include developments incorporating the concepts of mass data processing will soon drown in the data streams of  telecom data networks."

Basically, he explains, the monitoring agents on the network must have the ability to interpret the information obtained from scanning the network ‘on the fly'.  "The network surveillance tools need a staged intelligence in order to process the vast amount of data; from capturing to processing, forwarding and storing the data, the system must, for instance, be able to summarise, aggregate and discard data while keeping the essence of subscriber information and its KPI to hand - because, at the end of the day, only the subscriber experience best describes the network performance. And this is why Nexus Telecom surveillance systems provide the means always to drill down in real-time to subscriber information via the one indicator that everyone knows - the subscriber's cell phone number."
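To make the ‘staged intelligence' idea concrete, here is a minimal sketch - in illustrative Python, with every field name and figure invented rather than drawn from Nexus Telecom's actual products - of how a probe might fold captured records into per-subscriber KPIs on the fly, discarding the raw data once the essentials have been extracted while keeping a real-time drill-down keyed by the subscriber's phone number:

from collections import defaultdict

class SubscriberKpiAggregator:
    """Toy monitoring agent: summarise, aggregate, discard."""

    def __init__(self):
        # MSISDN -> running counters; raw records are never stored
        self.kpis = defaultdict(lambda: {"bytes": 0, "sessions": 0, "failures": 0})

    def ingest(self, record):
        """Summarise one captured record, then let it go."""
        stats = self.kpis[record["msisdn"]]
        stats["bytes"] += record["payload_bytes"]
        if record["event"] == "session_start":
            stats["sessions"] += 1
        elif record["event"] == "session_fail":
            stats["failures"] += 1
        # the raw record goes out of scope here: only the aggregate survives

    def drill_down(self, msisdn):
        """Real-time lookup of one subscriber's experience."""
        return dict(self.kpis.get(msisdn, {}))

agg = SubscriberKpiAggregator()
agg.ingest({"msisdn": "41790000001", "payload_bytes": 1500, "event": "session_start"})
agg.ingest({"msisdn": "41790000001", "payload_bytes": 900, "event": "session_fail"})
print(agg.drill_down("41790000001"))  # {'bytes': 2400, 'sessions': 1, 'failures': 1}

A production system would of course distribute this aggregation across probes and processing stages; the point is simply that only the distilled subscriber view, not the raw stream, needs to be kept.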

All this monitoring and surveillance obviously plays a vital role in providing visibility into complicated, multi-faceted next generation systems behaviour, facilitating fast mitigation of current and potential network and service problems to ensure a continuous and flawless end-customer experience.  But it also supplies a wealth of information that enables operators to better develop and tailor their systems to meet their customers' needs.  In other words, a tremendously powerful marketing tool.

"Certainly,' Kupper confirms, "the systems have two broad elements - one of identifying problems and healing them, and the other a more statistical, pro-active evaluation element.  Today, if you want to invest in such a system, you need both sides.  You need the operations team to make the network as efficient as possible, and you also need marketing - the service guys who can offer innovative services based on all the information that can be amassed using such tools."

Kupper points out that drawing in other departments and disciplines may, in fact, be essential in amassing sufficient budget to cover the system.  The old days when the operations manager could simply say ‘I need this type of tool - give it to me' are long gone, and anyway their budgets, these days, are nothing like big enough to cover such systems.  Equally, however, the needs of many different disciplines and departments for the kind of information Nexus Telecom systems can provide are increasing, as the highly competitive marketplace makes responding to customer requirements and preferences absolutely vital.  Thus the systems can prove to be of enormous value to the billing guys, the revenue assurance and fraud operations, not to mention the service development teams.  "Once the system is in place," Kupper points out, "you have information on every single subscriber regarding exactly which devices and services he most uses, and therefore his current, and likely future, preferences.  And all this information is real-time."

Despite the apparent complexity of the sales message, Nexus Telecom is in buoyant mood, with good penetration in South East Asia and the Middle East, as well as Europe.  These markets vary considerably in terms of maturity of course, and Kupper points out that OSS penetration is very much a lifecycle issue.  "When the market is very new, you just push out the lines," he comments.  "As long as the growth is there - say the subscriber growth rate is bigger than ten per cent a year - you're probably not too concerned about the quality of service or of the customer experience. 

"The investment in monitoring only really registers when there are at least three networks in a country and the focus is on retaining customers - because the cost of gaining new customers is so much higher than that of hanging on to the existing ones.

"Monitoring systems enable you to re-act quickly to problems.  And that's not just about ensuring against the revenue you might lose, but also the reputation you'll lose.  And today, that's an absolutely critical factor."

The future of OSS is, of course, intrinsically linked to the future of the telcos themselves.  Kupper notes that the discussion - which has been ongoing for some years now - around whether telcos will become mere dumb pipe providers, or will arm themselves against a variety of other players with content and tailored packages, has yet to be resolved.  In the meantime, however, he is confident that Nexus Telecom is going in the right direction.

"I believe our strategy is right.  We currently have one of the best concepts of how to capture traffic and deal with broadband data.

"The challenge over the next couple of years will be the ability to deal with all the payload traffic that mobile subscribers generate.  We need to be able to provide the statistics that show which applications, services and devices subscribers are using, and where development will most benefit the customer - and, of course, ultimately the operator."

Lynd Morley is editor of European Communications

Over the past years the demand for data centre services has been experiencing a huge expansion, boosted by the growth of content-rich services such as IPTV and Web 2.0. With the increased bandwidth available, enterprises are hosting more of their applications and data in managed data centre facilities, as well as adopting the Software-as-a-Service (SaaS) model. David Noguer Bau notes that there's a long list of innovations ready to improve the overall efficiency and scalability of the data centre, but network infrastructure complexity may prevent such improvements - putting at risk emerging business models such as SaaS, OnDemand infrastructure, and more

The data centre is supposed to be the house of data - storage and applications/servers - but after a quick look at any data centre it's obvious that a key enabler is also hosted there: the network and security infrastructure.

The data centre network has become overly complex, costly, and extremely inefficient, limiting flexibility and overall scalability. Arguably, it is the single biggest hurdle that prevents businesses from fully reaping the productivity benefits offered by other innovations occurring in the data centre, including server virtualisation, storage over Ethernet, and evolution in application delivery models. Traditional architectures that have stayed unchanged for a decade or more employ excessive switching tiers, largely to work around the low-performance and low-density characteristics of the devices used in those designs. Growth in the number of users and applications is almost always accompanied by an increase in the number of "silos" of more devices - both for connectivity and for security. Adding insult to injury, these upgrades introduce new, untested operating systems to the environment. The ensuing additional capital expense, rack space, power consumption, and management overhead directly contribute to the overall complexity of maintaining data centre operations. Unfortunately, instead of containing the costs of running the data centre and reallocating the savings into the acceleration of productivity-enhancing business practices, the IT budget continues to be misappropriated into sustaining existing data centre operations.

Data centre consolidation and virtualisation trends are accelerating in an effort to optimise resources and lower cost. Consolidation, virtualisation and storage services are placing higher network performance and security demands on the network infrastructure. While server virtualisation improves server resource utilisation, it also greatly increases the amount of data traffic across the network infrastructure. Applications running in a virtualised environment require low latency, high throughput, robust QoS and high availability. Increased traffic-per-port and performance demands tax the traditional network infrastructure beyond its capabilities. Furthermore, the future standardisation of Converged Enhanced Ethernet (CEE) - which aims to integrate low-latency storage traffic - will place even greater bandwidth and performance demands on the network infrastructure.

Additionally, new application architectures, such as Service Oriented Architecture (SOA) and Web Oriented Architecture (WOA), and new services - cloud computing, desktop virtualisation, and Software as a Service (SaaS) - introduce new SLA models and traffic patterns. These heightened demands often require new platforms in the data centre, contributing to increased complexity and cost. Data centres are rapidly migrating to a high-performance network infrastructure - scalable, fast, reliable, secure and simple - to improve data centre-based productivity, reducing operational cost while lowering time to market for new data centre applications.

The way data centre networks have traditionally been designed is very rigid, based on multiple tiers of switches and not responsive to the real demands of highly distributed applications and virtualised servers. By employing a mix of virtualisation technologies in the data centre network architecture as well - such as clusters of switches with VLANs and MPLS-based advanced traffic engineering, VPN-enhanced security, QoS, VPLS, and other virtualisation services - the model becomes more dynamic. These technologies address many of the challenges introduced by server, storage and application virtualisation. For example, the Juniper Networks Virtual Chassis technology supports low-latency live migration from server to server in completely different racks within a data centre, and from server to server between data centres in a flat Layer 2 network when those data centres are within reasonably close proximity. Furthermore, Virtual Chassis combined with MPLS/VPLS allows the Layer 2 domain to extend across data centres to support live migration from server to server when data centres are distributed over significant distances. These virtualisation technologies support the low latency, throughput, QoS and HA required of server and storage virtualisation. MPLS-based virtualisation addresses these requirements with advanced traffic engineering to provide bandwidth guarantees, label switching and intelligent path selection for optimised low latency, traffic separation as a security element, and fast reroute for HA across the WAN. MPLS-based VPNs enhance security with QoS to efficiently meet application and user performance needs.
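The ‘intelligent path selection' mentioned above can be illustrated with a small, hypothetical sketch: a CSPF-style search that first prunes links unable to honour a requested bandwidth guarantee, then picks the lowest-latency route over what remains. The topology and figures below are invented, and a real MPLS control plane is of course far richer:

import heapq
from collections import defaultdict

def constrained_shortest_path(links, src, dst, required_bw):
    # links: (node_a, node_b, latency_ms, available_bw_mbps)
    graph = defaultdict(list)
    for a, b, latency, bw in links:
        if bw >= required_bw:            # CSPF-style pruning: bandwidth first
            graph[a].append((b, latency))
            graph[b].append((a, latency))
    # Dijkstra over the pruned graph, minimising latency
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, latency in graph[node]:
            if nxt not in seen:
                heapq.heappush(queue, (cost + latency, nxt, path + [nxt]))
    return None                          # no path can honour the guarantee

links = [("dc1", "core1", 2, 10000), ("core1", "dc2", 3, 10000),
         ("dc1", "core2", 1, 400), ("core2", "dc2", 1, 400)]
print(constrained_shortest_path(links, "dc1", "dc2", 1000))
# (5, ['dc1', 'core1', 'dc2']) - the shorter route is rejected for lack of bandwidth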

As we can see, adding virtualisation technologies at the network level as well as at the server and application level serves to improve efficiency and performance with greater agility while simplifying operations. For example, acquisitions and new networks can be quickly folded into the existing MPLS-based infrastructure without reconfiguring the network to avoid IP address conflicts. This approach creates a highly flexible and efficient data centre WAN.

A major trend is data centre consolidation. Many service providers are looking to reduce from tens of data centres to three or four very large ones. The architecture of each new data centre network is challenging, and collapsing layers of switches alleviates this. However, with consolidation, the large number of sub-10Gbps security appliances (FW, IDP, VPN, NAT, with the corresponding HA and load-balancing) becomes unmanageable and represents a real bottleneck. Traditionally, organisations have been forced to balance and compromise on network security versus performance. In the data centre space this trade-off is completely unacceptable, and the infrastructure must provide the robust network security desired with the performance to meet the most demanding application and user environments.

The evolution and consolidation of data centres will provide significant benefits - a goal that can be achieved by simplifying the network, collapsing tiers, and consolidating security services. This network architecture delivers operational simplicity, agility and greater efficiency to the data centre. Application and service deployments are accelerated, enabling greater productivity with less cost and complexity. The architecture addresses the needs of today's organisations as they leverage the network and applications for the success of their business.

David Noguer Bau, Service Provider Marketing EMEA, Juniper Networks
www.juniper.net

As users become increasingly intolerant of poor network quality, Simon Williams, Senior VP Product Marketing and Strategy at Redback Networks, tells Priscilla Awde that, in order to meet the huge demand for speed and efficiency, the whole industry is heading in the same direction - creating an all IP Ethernet core using MPLS to prioritise packets regardless of content

Speed, capacity, bandwidth, multimedia applications and reliable any time, anywhere availability from any device - tall orders all, but these are the major issues facing every operator whether fixed or mobile. Meeting these needs is imperative given the global telecoms environment in which providing consistently high quality service levels to all subscribers is a competitive differentiator. There is added pressure to create innovative multimedia services and deliver them to the right people, at the right time, to the right device but to do so efficiently and cost effectively.

Operators are moving into a world in which they must differentiate themselves by the speed and quality of their reactions to rapid and global changes. Networks must become faster, cheaper to run and more efficient, to serve customers increasingly intolerant of poor quality or delays. It is a world in which demand for fixed and mobile bandwidth hungry IPTV, VoD and multimedia data services is growing at exponential rates leaving operators staring at a real capacity crunch.

To help operators transform their entire networks and react faster to demand for capacity and greater flexibility, Ericsson has created a Full Service Broadband initiative which marries its considerable mobile capabilities with similar expertise in fixed broadband technologies. With the launch of its Carrier Ethernet portfolio, Ericsson is leveraging the strength of the Redback acquisition to develop packet backbone network solutions that deliver converged applications using standards based IP MPLS (Multi Protocol Label Switching), and Carrier Ethernet technologies.

Committed to creating a single end-to-end solution from network to consumer, Ericsson bought Redback Networks in 2007, thereby establishing the foundation of Ericsson IP technology but most importantly acquiring its own router and IP platform on which to build up its next generation converged solution.

In the early days of broadband deployment, subscriber information and support was centralised, the amount of bandwidth used by any individual was very low and most were happy with best effort delivery. All that changed with growth in bandwidth hungry data and video applications, internet browsing and consumer demand for multimedia access from any device. The emphasis is now on providing better service to customers and faster, more reliable, more efficient delivery. For better control, bandwidth and subscriber management plus content are moving closer to customers at the network edge.

However, capacity demand is such that legacy systems are pushed to the limit both in handling current applications, let alone future services, and guaranteeing quality of service. Existing legacy systems are inefficient, expensive to run and maintain compared to the next generation technologies that transmit all traffic over one intelligent IP network. Neither do they support the business agility or subscriber management systems that allow operators to react fast to changing markets and user expectations.

Despite tight budgets, operators must invest to deliver and ultimately to save on opex. They must reduce networking costs and simplify existing architectures and operations to make adding capacity where it is needed faster and more cost effective.

The questions are: which are the best technologies, architectures and platforms and, given the current economic climate, how can service providers transform their operations cost effectively? The answers lie in creating a single, end-to-end intelligent IP network capable of efficiently delivering all traffic regardless of content and access devices. In the new IP world, distinctions between fixed and mobile networks, voice, video and data traffic and applications are collapsing. Infonetics estimates the market for consolidating fixed and mobile networks will be worth over $14 billion by 2011 and Ericsson, with Redback's expertise, is uniquely positioned to exploit this market opportunity.

Most operators are currently transforming their operations and, as part of the solution, are considering standards-based Carrier Ethernet as the broadband-agnostic technology platform. Ethernet has expanded beyond early deployments in enterprise and Metro networks: Carrier Ethernet allows operators to guarantee end-to-end service quality across their entire network infrastructure, enforce service level agreements, manage traffic flows and, importantly, scale networks.

With roots in the IT world where it was commonly deployed in LANs, Ethernet is fast becoming the de facto standard for transport in fixed and mobile telecoms networks. Optimised for core and access networks, Carrier Ethernet supports very high speeds and is a considerably more cost effective method of connecting nodes than leased lines. Carrier Ethernet has reached the point of maturity where operators can quickly scale networks to demand; manage traffic and subscribers and enforce quality of service and reliability.
 

"For the first time in the telecoms sector we now have a single unifying technology, in the form of IP, capable of transmitting all content to any device over any network," explains Simon Williams, Senior VP Product Marketing and Strategy at Redback Networks, an Ericsson company. "The whole industry is heading in the same direction: creating an all IP Ethernet core using MPLS to prioritise packets regardless of content.
 

"In the future, all operators will want to migrate their customers to fixed/mobile convergent and full service broadband networks delivering any service to any device anytime, but there are a number of regulatory and standards issues which must be resolved. Although standards are coming together, there are still slightly different interpretations of what constitutes carrier Ethernet and discussions about specific details of how certain components will be implemented," explains Williams.

Despite debates about different deployment methods, Carrier Ethernet, MPLS-ready solutions are being integrated into current networks, and Redback has developed one future-proof box capable of working with any existing platform.

An expert in creating distributed intelligence and subscriber management systems for fixed operators, and now for mobile carriers, Redback builds solutions that are both backward and forward compatible and can support any existing platform, including ATM, Sonet, SDH or frame relay. Redback is applying its experience in broadband fixed architectures to solving the capacity, speed and delivery problems faced by mobile operators. As the amount of bandwidth per user rises, the management of mobile subscribers and data is being distributed in similar ways to what happened in the fixed sector.

Redback has developed SmartEdge routers and solutions to address packet core problems and operators' needs to deliver more bandwidth reliably. SmartEdge routers deliver data, voice or video traffic to any connected device via a single box connected to either fixed or mobile networks. Redback's solutions are designed to give operators a gradual migration path to a single converged network which is more efficient and cost effective to manage and run.

In SmartEdge networks with built-in distributed intelligence and subscriber management functionality, operators can deliver the particular quality of service, speed, bandwidth and applications appropriate to individual subscribers.

Working under the Ericsson umbrella and with access to considerable R&D budgets, Redback is expanding beyond multiservice edge equipment into creating metroE solutions, mobile backhaul and packet LAN applications. Its new SM 480 Metro Service Transport is a carrier-class platform which can be deployed in fixed and mobile backhaul and transport networks, in Metro Ethernet infrastructure, and to aggregate access traffic. Supporting fixed/mobile convergence, the SM 480 is a cost effective means of replacing legacy transport networks and migrating to IP MPLS Carrier Ethernet platforms. The system can be used to build packet-based metro and access aggregation networks using any combination of IP, Ethernet or MPLS technologies.

Needing to design and deliver innovative converged applications quickly to stay competitive, operators must build next generation networks. Despite the pressures on the bottom line, most operators see the long-term economic advantages of building a single network architecture. Moving to IP MPLS packet based transmission and carrier Ethernet creates a content and device agnostic platform over which traffic is delivered faster and over a future proof network. Operators realise the cost and efficiency benefits of running one network in which distinctions between fixed and mobile applications are eliminated.

Although true convergence of networks, applications and devices may be a few years away, service providers are deploying the necessary equipment and technologies. IP MPLS and carrier Ethernet support both operators' needs for speed, flexibility and agility and end user demand for quality of service, reliability and anywhere, anytime, any device access.
 

"Ultimately however, there should be less focus on technology and more on giving service providers and their customers the flexibility to do what they want," believes Williams. "All operators are different but all need to protect their investments as they move forward and implement the new technologies, platforms and networks. Transformation is not only about technology but is all about insurance and investment protection for operators ensuring that solutions address current and future needs."

Priscilla Awde is a freelance communications journalist

With each day, the scope and complexity of market offerings from telecommunications operators grow. It is therefore vital to present individual offers to end customers in an attractive, simple and understandable manner. Together with meeting target profits and other financial measures, this is the principal goal of the marketing department for all communication service providers, says Michal Illan

Within the OSS/BSS environment, forming clear and understandable market offerings is as important for the business as the factors described above. There is a huge difference between maintaining all key information about market offerings through various GUIs and different applications, and having it instantly at your fingertips in an organised manner. The latter option saves time and reduces the probability of human error, which makes a significant difference in both the length of time-to-market and the accuracy of the offering, ordering and charging processes experienced by the end customer.

Market offerings have the following principal aspects that are usually defined during the offer design process:

  • General idea (defining the scope of the offer)
  • Target market segment
  • Selection of applicable sales channels
  • Definition of services and their packaging
  • Definition of pricing
  • Definition of ordering specifics
  • Definition of the order fulfilment process
  • Marketing communication (from the first advertising campaign through to communication at points of sale or scripts prepared for call centre agents)

It is apparent that market offerings aren't static objects at all; on the contrary, they are very dynamic entities, and most of a communication provider's OSS/BSS departments have some stake in their success.

This leads directly to the key question: "Which environment can support a market offering and enable unified and cooperative access to it by appropriate teams during the proper phases of its lifecycle?"

If it is to exist in practice, the environment that addresses all of the above-mentioned aspects must take the form of an information system or application.

Putting Clarity into Practice
The closest match to the requirements described above is an OSS/BSS building block called Product Catalogue.

Product Catalogue is usually represented by the following three aspects:

  • A unified GUI that enables all key operations for managing a Market Offering during its lifecycle
  • Back-end business logic and a configuration repository
  • Integration with key OSS/BSS systems

In terms of integration, the functions supported by an ideal Product Catalogue also define the OSS/BSS systems it must work with. Product Catalogue should be integrated with a market segmentation system (ie some BI or analytical CRM), ordering, order fulfilment, provisioning, charging and billing, and CRM. These systems should either provide some data to Product Catalogue or use it as the master source of the information related to market offerings.

The necessity of integration in general is unquestionable; the only remaining issues are determining how the integration will be done and what the overall cost will be. Which type of integration takes place depends on a number of factors discussed below.
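As a minimal illustration of the ‘master source' idea - with hypothetical field names, not any vendor's schema - the sketch below keeps each market offering in a single catalogue record and lets each integrated system pull only the slice it needs:

from dataclasses import dataclass, field

@dataclass
class MarketOffering:
    offer_id: str
    name: str
    segment: str                       # target market segment
    services: list = field(default_factory=list)
    price_plan: dict = field(default_factory=dict)
    fulfilment_steps: list = field(default_factory=list)
    lifecycle_state: str = "draft"     # draft -> approved -> live -> retired

class ProductCatalogue:
    """Toy master repository: downstream systems read from it, never copy it."""

    def __init__(self):
        self._offers = {}

    def publish(self, offering):
        offering.lifecycle_state = "live"
        self._offers[offering.offer_id] = offering

    def view_for(self, system, offer_id):
        """Each integrated system pulls only the slice it needs."""
        o = self._offers[offer_id]
        slices = {
            "ordering": {"services": o.services, "fulfilment": o.fulfilment_steps},
            "charging": {"price_plan": o.price_plan},
            "crm": {"name": o.name, "segment": o.segment},
        }
        return slices[system]

cat = ProductCatalogue()
cat.publish(MarketOffering("OFF-1", "Family Bundle", "residential",
                           services=["voice", "broadband"],
                           price_plan={"monthly": 29.90}))
print(cat.view_for("charging", "OFF-1"))   # {'price_plan': {'monthly': 29.9}}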
 
The principal dilemma
There are three major options for positioning Product Catalogue within the OSS/BSS environment. Product Catalogue can be deployed as:

  • A standalone application
  • Part of a CRM system
  • Part of a Charging & Billing system

Product Catalogue as a Standalone Application
This option appears tempting at first because: "Who can have a better Product Catalogue than a company exclusively specialising in its development?" Unfortunately, troubles tend to surface later on, regardless of the attractiveness of the application's GUI.

When a telecommunications operator has intelligent charging and billing processes in place, an advanced standalone Product Catalogue can still produce massive headaches related to the integration and customisation side of its deployment. Generally, telecom vendors are highly unlikely to guarantee compatibility with the surrounding OSS/BSS systems, or to provide confidential pricing logic definitions (or other advanced features) to a third-party vendor. What the operator gets is either a never-ending investment in customisation without clear TCO or ROI, or multiple incompatible systems.

The key point is that all the charming features of a standalone Product Catalogue are effectively useless without the surety of seamless integration and excellent support from the surrounding OSS/BSS systems.

Product Catalogue as part of a CRM system
This is without a doubt a better option than the first choice because at least one side of the integration is guaranteed - and if ordering is part of the overall CRM system, then two sides are in the safe zone.

The only disadvantage of such an approach is that the pricing logic richness of a CRM system's Product Catalogue is quite low, if present at all. Subsequently, there is no principal gain in implementing a unified Product Catalogue as long as the definition of the price model and some additional key settings remain on the charging and billing system side. Such a setup is quite far from the ‘unified environment' described at the beginning of this article.

Product Catalogue as part of a charging and billing system
Complex pricing logic/modelling is not only the major differentiator of an operator's market offering; it is also the key to profitability in every price-sensitive market. Even in markets where consumers demand inexpensive flat-rate offers, it is still VAS offers (many using complex pricing logic) that drive profits.
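A tiny worked example shows why this logic sits naturally in charging and billing: even a modest offer can combine a flat-rate bundle, tiered overage and per-event VAS fees. All figures below are invented purely for illustration:

TIERS = [                 # (upper bound in MB, price per MB) beyond the bundle
    (500, 0.05),
    (2000, 0.03),
    (float("inf"), 0.01),
]

def monthly_charge(used_mb, bundle_mb=1000, base_fee=19.90, vas_events=0, vas_fee=0.50):
    charge = base_fee + vas_events * vas_fee
    billable = max(0, used_mb - bundle_mb)    # bundle usage is prepaid
    previous_bound = 0
    for bound, rate in TIERS:
        span = min(billable, bound) - previous_bound
        if span > 0:
            charge += span * rate
        previous_bound = bound
        if billable <= bound:
            break
    return round(charge, 2)

print(monthly_charge(used_mb=2500, vas_events=4))
# 76.90: base fee, four VAS events, then 500 MB at 0.05 and 1000 MB at 0.03

Keeping this calculation next to the rest of the rating engine, rather than duplicating it in a standalone catalogue, is the essence of the compatibility argument developed in the following paragraphs.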

Implementation on the side of charging and billing is quite often the most challenging when compared to ordering or CRM, for example. Order fulfilment can also be quite a challenge, especially when considering the example of introducing complex, fixed-mobile convergent packages for the corporate segment; however, Product Catalogue itself has no major effect on its simplification.

We can say that out-of-the box compatibility between Product Catalogue and charging and billing significantly decreases the opex of a service provider as well as markedly shortens time-to-market for the introduction of new market offerings and the modification of existing ones.

Because the overall functional richness and high flexibility in the areas of pricing and convergence are really the key features of charging and billing systems nowadays, out-of-the-box compatibility and reduced costs should facilitate the greatest gains on the service provider's side.

Business benefits
There are a variety of direct and indirect benefits linked to the implementation of Product Catalogue into the OSS/BSS environment. All of them are related to three qualities that accompany any successful introduction of Product Catalogue - clarity, accessibility and systematisation.

Clarity
Managing market offering lifecycles is supported by Product Catalogue's design, bringing all involved parties within the telecommunication operator a better understanding of related subjects, the level of their involvement and their role within the process. This decreases the level of confusion, which is usually unavoidable regardless of how well the processes are described in paper form.

Accessibility
All Market Offerings are accessible and visible within a single environment, including the history of their changes and the market offering's sub-elements. Anyone, according to their access rights, can view the sections of Product Catalogue applicable to their role.

There is no risk of discrepancies between market offering-related data in various systems, provided that the Product Catalogue repository is the master data source, as stated above. Accessibility to correct data is an important aspect of information accessibility in general.

Systematisation
Product Catalogue not only enforces a certain level of systematisation of market offering creation and maintenance processes but also stores and presents all related business entities in a systematic manner, by default taking their integrity enforced by business logic into account.

Measurable benefits
All three qualities - clarity, accessibility and systematisation - can be translated into two key terms - time and money. A successful implementation of Product Catalogue brings significant savings on the telecommunication operator's side as well as guarantees a considerable shortening of time-to-market for introducing new market offerings. If these two goals are not accomplished by implementing Product Catalogue, such a project must be considered a failure.


Michal Illan is Product Marketing Director, Sitronics Telecom Solutions
www.sitronics.com

Ensuring the effectiveness and reliability of complex next generation networks is a major test and measurement challenge.  Nico Bradlee looks for solutions

Almost without exception the world's major service providers are building flat hierarchical next generation networks (NGNs), capable of carrying voice, data and video traffic. They are creating a single core, access independent network, promising lower opex and enabling cost effective, efficient service development and delivery.

Easy on paper, but not so easy to realise the promised capex and opex savings, speedy service launches and business agility. Unlike traditional PSTNs, where equipment handles specific tasks, the IP multimedia subsystem (IMS) is a complex functional architecture in which devices receive a multitude of signals. Ensuring QoS and guaranteeing reliability in such a complex network is a test and measurement (T&M) nightmare. Top of the list of operators' priorities are equipment interoperability, protocol definitions, capacity and roaming, which the industry is working to resolve.

According to Frost & Sullivan, the global T&M equipment market earned revenues of $27.4 million in 2007, which is expected to rise to $1.2 billion in 2013. Ronald Gruia, principal analyst, Frost & Sullivan, suggests a change in thinking is needed: operators must reconsider capacity requirements and new ways of testing if they are to avoid surprises.

In the IMS environment there are exponentially more protocols and interfaces with networks and devices - legacy, fixed and wireless. Numerous functions interwork with others, and the number of signalling messages is an order of magnitude higher than in traditional networks. The situation is further complicated by a multi-vendor environment in which each function can be provided by different suppliers and, although conforming to standards, equipment may include proprietary features. The advantage is that operators can buy best-of-breed components and, providing they work together and conform to specifications, telcos can add functionality without investing in new platforms or changing the whole network architecture.

Like many new standards, IMS is somewhat fluid and open to interpretation. Although standards have been approved, they are often incomplete, are still evolving or may be ambiguous. Further, each of the different IMS standards organisations, which include 3GPP, ETSI, TISPAN and IETF, publishes regular updates. Vendors interpret standards according to the needs of their customers and may introduce new innovations which they refer to standards bodies for inclusion in future releases. "IMS standards don't define interoperability but interfaces and functions which may be misinterpreted or differently interpreted by vendors," explains Dan Teichman, Senior Product Marketing Manager, voice service assurance at Empirix.

The many IP protocols have advanced very rapidly but standards are still evolving so there is considerable flexibility and variation. "This is a new and exciting area," says Mike Erickson, Senior Product Marketing Manager at Tektronix Communications, "but it is very difficult to test and accommodate error scenarios which grow exponentially with the flexibility provided in the protocol.
 

"Rapid technology changes and variety make it difficult for people to become experts and it is no longer possible for customers to build their own T&M tools," continues Erickson. "However, new T&M systems are more intelligent, automated, easier to use and capable of testing the different types of access networks interfacing with the common core. Operators must be able to measure QOS and ensure calls can be set up end-to-end with a given quality - this facility must be built into the series of test tools used both in pre-deployment and in live networks."

IMS networks must be tested end-to-end: from the access to the core, including the myriad network elements, functions and connections/interfaces between them. While the types of tests vary little from those currently used in traditional networks, their number is exponentially higher. "Tests break down into functional tests; capacity testing to ensure network components can handle both sustained traffic levels and surges; media testing - confirming multimedia traffic is transmitted reliably through the network; and troubleshooting and 24x7 network monitoring to identify anomalies and flag up problems," says Erickson. "The difference is that in relatively closed PSTNs, four to five basic protocols are being considered compared to hundreds in more open VoIP and IMS networks."
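Capacity testing in particular reduces to a simple loop: drive call attempts at increasing rates and watch where setup success degrades. The toy harness below simulates the system under test instead of speaking real SIP, so it only illustrates the shape of such a tool; every figure is invented:

import random

def simulated_ims_core(calls_per_second, capacity_cps=800):
    """Stand-in for the system under test: success degrades past capacity."""
    overload = max(0.0, (calls_per_second - capacity_cps) / capacity_cps)
    return max(0.0, 1.0 - overload)      # fraction of calls that set up

def capacity_sweep(rates, attempts_per_rate=1000):
    results = {}
    for cps in rates:
        p_success = simulated_ims_core(cps)
        successes = sum(random.random() < p_success for _ in range(attempts_per_rate))
        results[cps] = successes / attempts_per_rate
    return results

for cps, ratio in capacity_sweep([400, 800, 1200, 1600]).items():
    print(f"{cps:>5} calls/s -> {ratio:.1%} setup success")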

No single vendor or operator has the facilities to conduct comprehensive interoperability, roaming, capacity or other tests to ensure equipment conforms to different iterations of IMS, or to test the multiple interfaces with devices, gateways and protocols typical in NGNs. The MultiService Forum, a global association of service and system providers, test equipment vendors and users, recently concluded its GMI 2008 comprehensive IMS tests of over 225 network components from 22 participating vendors. Five host labs on three continents were networked together, creating a model of the telecoms world. Roger Ward, MSF President, says: "The results showed the overall architecture is complex and the choice of implementation significantly impacts interoperability. IMS protocols are generally mature and products interoperate across service provider environments. Most of the problems encountered were related to routing and configuration rather than protocols. IMS demonstrated the ability to provide a platform for convergence of a wide range of innovative services such as IPTV."

These essentially positive results support the need for continuous testing and monitoring before and during implementation, the results of which can be fed back into vendors' test and measurement teams for product development.

"Building products to emulate IMS functions means operators can buy equipment from multiple vendors, emulate and test functions before implementation and without having to build big test labs," says Teichman. "In IMS networks, T&M is not confined to infrastructure: the huge variety of user interfaces must be tested before implementation to avoid network service outages and QOS problems. While they have to test more functional interfaces, most traditional tests are still valid: although the methodology may be the same, the complexity is higher as many more tests are required to get the same information."

Operators face scalability issues as the number of VoIP users increases. The question, suggests Tony Vo, Senior Product Manager at Spirent, is whether IMS can support thousands of users. "Test solutions must generate high loads of calls. All tests are focused around SIP so tests must emulate different applications. GMI 2008 verified the issues and companies can now develop solutions. However, from a T&M perspective, no one solution can solve all problems."

Nico Bradlee is a freelance business and communications journalist

In an era of increased competition, convergence, and complexity, workforce management has become more important than ever. Field technicians represent a large workforce, and any improvement in technician productivity or vehicle expense can show huge benefits. Likewise, the effectiveness of these technicians directly impacts the customer experience. Deft management of this workforce therefore requires sophisticated tools, says Seamus Cunningham

Today's communications service providers (CSPs) in the wireless, wireline, or satellite market are providing service activation and outage resolution to their customers - and need to continually do it better, faster, and cheaper. Further, they must do it in an environment of increasing complexity, with new and converged services and networks, and with an ever-growing base of customers. CSPs additionally face global challenges (eg soaring gasoline prices and increased concern about carbon emissions), competitive pressures (eg corporate mergers, triple play offerings, and new entrants), and technological change. To achieve their desired results with such variables impacting their businesses, CSPs must take control of their workforce operations and focus on some combination of key business case objectives including:

  • Reduce operational costs
  • Improve overall customer experience
  • Rapidly deploy new and converged services.

Operational costs for a CSP are significant, especially given the current global financial and economic situation. Consider the total wireline operations of three US Regional Bell Operating Companies (RBOCs), which include operations related to voice and high-speed internet access in the local and interexchange parts of the network:

  • There are over 82,000 outside technicians and over 21,000 inside technicians.
  • Outside technicians have approximately 144 million hours (or 18 million days) and inside technicians have 37 million hours (or 4.6 million days) of productive time a year.
  • There are over 77 million outside dispatches a year and over 96 million inside dispatches a year.
  • The loaded (including salary and benefits) annual labour cost for outside technicians is $7.6 billion (or 15 per cent of their annual cash expense). The loaded annual labour cost for inside technicians is $1.8 billion (or 4 per cent of their annual cash expense).

These are just a subset of the operational costs of a wireline CSP. Similarly, there are significant operational costs in the wireless and satellite markets. Increasing competition continues to put pressure on CSPs to reduce expenses and increase profitability. Some areas that need to be addressed are discussed below.

Technicians are the single largest expense for CSPs. Therefore, introducing labour efficiency is critical for meeting expense objectives. CSPs could increase the number of customer visits in less time by ensuring the right technician is assigned to the right job at the right time. All too often, technicians are unable to do their assigned job because they do not have the right skill set or time to complete it.

Technician productivity can additionally increase by optimising technician routes and reducing travel time and unproductive time. This has the added benefit of reducing fuel and vehicle maintenance expenses and can result in significant carbon emission savings and fuel savings.
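As a much-simplified sketch of the ‘right technician, right job, right time' principle: among technicians holding the required skill and with hours still free, pick the one with the shortest travel time. Real dispatch engines solve this as a large optimisation problem; every name and number below is invented:

def assign_jobs(jobs, technicians, travel_minutes):
    """Greedy skill-matched assignment minimising travel per job."""
    assignments = {}
    for job in jobs:
        candidates = [
            t for t in technicians
            if job["skill"] in t["skills"] and t["free_minutes"] >= job["duration"]
        ]
        if not candidates:
            assignments[job["id"]] = None       # escalate to a dispatcher
            continue
        best = min(candidates, key=lambda t: travel_minutes[(t["id"], job["site"])])
        best["free_minutes"] -= job["duration"] + travel_minutes[(best["id"], job["site"])]
        assignments[job["id"]] = best["id"]
    return assignments

techs = [
    {"id": "T1", "skills": {"dsl", "copper"}, "free_minutes": 480},
    {"id": "T2", "skills": {"fibre"}, "free_minutes": 480},
]
jobs = [{"id": "J1", "skill": "fibre", "site": "A", "duration": 90},
        {"id": "J2", "skill": "dsl", "site": "B", "duration": 60}]
travel = {("T1", "A"): 40, ("T2", "A"): 15, ("T1", "B"): 10, ("T2", "B"): 50}
print(assign_jobs(jobs, techs, travel))   # {'J1': 'T2', 'J2': 'T1'}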

A CSP can increase dispatcher productivity by automating existing dispatcher functions such as work assignments and load imbalance resolution and thereby make the dispatcher an exception handler. This way, a dispatcher can focus on the "out of norm" conditions rather than on functions that can be automated.

Consolidation of dispatch systems and processes can reduce CSP expenses and increase efficiency. Integration of dispatch systems for wireless, wireline, or satellite telecommunications operators can sequence, schedule, and track field operations activities for:

  • Service activation and service assurance work for all types of circuits and services
  • All technicians (outside, inside central/switching office, installation and repair, cable maintenance, cell tower technicians)
  • Broadband or narrowband networks
  • A complete range of technologies, products, and services, eg triple play (video, data, and voice networks), fibre (FTTx), DSL, HFC, SONET/SDH, ATM, and copper.

Maintaining separate dispatch systems or processes for different areas of business is expensive and inefficient. A single workforce management system to manage all technicians across all aspects of the company can help.

A CSP can reduce time-to-market for new products and services by streamlining its workforce management system integration with business and operations support systems (eg service fulfilment, service assurance, customer relationship management [CRM], and field access systems) and automating the flow-through of service orders and tickets. For some CSPs, this could involve integrating with multiple service activation, trouble ticketing, and CRM systems.

When providing service or outage resolution to their customers, CSPs need to ensure their customers are satisfied and that a customer's overall experience while dealing with the CSP is positive. Certainly, it is impossible to keep everyone happy all of the time; however, there are things the CSP can do to help ensure the customer experience is a positive one.

For example, CSPs can improve appointment management by providing the means for service representatives to offer valid, attainable appointments to their customers (based on actual technician availability) and then successfully meet those appointments. CSPs must also make provisions to offer narrow appointment windows to customers as well as provide automated, same-day commitment management. No one wants to wait a long time for a technician to begin with, much less wait and then have the technician show up late or not at all!

The overall customer experience can be improved by keeping the customer up-to-date and informed through increased communication. For example, keeping the customer up-to-date on a technician's estimated time of arrival at the customer premises can go a long way toward overall customer satisfaction. Also, keeping the technician well informed about the services a given customer has, so the technician is prepared to answer customer questions accurately, as well as provide instruction on how to use the services, can add to a positive customer experience.

Finally, through effective and efficient workforce monitoring and operations management, CSPs can monitor key performance metrics, such as mean time to repair (MTTR), which will help track the effect of their business changes on their service activation and network outage times. Also, CSPs need to ensure that they meet their customers' Service Level Agreements (SLAs), because the customers paid for a certain level of installation or maintenance support and should get it.

Another key business case objective is to rapidly deploy new (eg triple play) services and improve time-to-market by providing easy integration with new systems and services.

CSPs must integrate their existing operations and system algorithms with new technology (eg xPON, FTTx, Bonded DSL). In order to quickly get a new service or technology to market, CSPs must quickly update their business processes and systems to support it. This way, they can focus on providing and maintaining the new service or technology for their customers.

By utilising a flexible and configurable workforce management system, CSPs can meet their ever-changing business needs and challenges by using user-tunable reference data to enhance their flows. This allows the CSP to process a new service differently from other services and meet changing business needs and requirements. For example, a new service offering may carry additional information that the workforce management system can use to uniquely route, job-type, and price data and video work.

CSPs must make next generation assignment and services information readily available to all technicians as well as provide the technician easy access to all necessary data, in order to minimise their effort to understand the relationships between domains (eg infrastructure, DSL, Layer 2/3 services, etc.). Also, by having the relationships between domains, the system can minimise truck rolls and the number of troubles by correlating root-cause problems that impact multiple domains (eg Layer 1 outage as the root cause of Layer 2 and Layer 3 troubles).
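The cross-domain correlation described above can be sketched as follows: troubles sharing a facility are grouped, and a Layer 1 trouble absorbs the Layer 2/3 symptoms riding on it, so a single dispatch clears them all. The data model is purely illustrative:

def correlate(troubles):
    """Group troubles by shared facility; a Layer 1 trouble absorbs the rest."""
    by_facility = {}
    for t in troubles:
        by_facility.setdefault(t["facility"], []).append(t)

    dispatches = []
    for facility, group in by_facility.items():
        roots = [t for t in group if t["layer"] == 1]
        if roots:
            symptoms = [t["id"] for t in group if t["layer"] > 1]
            dispatches.append({"fix": roots[0]["id"], "clears": symptoms})
        else:
            dispatches.extend({"fix": t["id"], "clears": []} for t in group)
    return dispatches

troubles = [
    {"id": "TKT-1", "layer": 1, "facility": "fibre-span-7"},
    {"id": "TKT-2", "layer": 2, "facility": "fibre-span-7"},   # DSL sync loss
    {"id": "TKT-3", "layer": 3, "facility": "fibre-span-7"},   # VPN down
    {"id": "TKT-4", "layer": 3, "facility": "node-12"},
]
print(correlate(troubles))
# one truck roll fixes TKT-1 and clears TKT-2/TKT-3; TKT-4 is dispatched separately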

The decisions a CSP makes about its workforce management solution will greatly impact business results. CSPs can make the right decisions by considering all aspects of workforce management operations: process, people, network, technology and leadership. It is not just about selecting a system, but about understanding the impact of the process on employees, and ultimately delivering excellent satisfaction to customers.

Seamus Cunningham is Principal Product Manager at Telcordia.
www.telcordia.com

Next Generation Access (NGA) will dramatically increase broadband speeds for European consumers and businesses over the coming years. However, it also threatens to disrupt established modes of competition and raises complex issues for telecommunications regulation, according to Bob House and Michael Dargue

In traditional telco access networks, the architecture of the copper network lent itself to infrastructure-based competition in the form of Local Loop Unbundling (LLU). In countries such as the UK and France, service providers invested in LLU, creating price-competitive broadband markets rich in innovation and service differentiation.

Looking forward, it is unlikely that the same degree of infrastructure-based competition will exist in an NGA world. The economics of laying fibre or deploying electronics in street cabinets do not favour multiple access networks. Furthermore, unbundling may not be technically possible in certain situations, for example where the incumbent chooses Passive Optical Networking (PON) for its fibre-to-the-home (FTTH) network.

In geographies where infrastructure-based alternatives are technically or economically unviable, service providers will be forced to rely on wholesale bitstream from the network operator to serve their end customers. Such wholesale offers have historically consisted of simple bitstream services or resale of the incumbent's retail offer, supporting little or no differentiation. NGA therefore risks eroding the competitive benefits won through LLU.

Strategically, telecommunications regulators see benefits from NGA but want to maintain a high degree of service innovation and consumer choice. The question is how to achieve this with wholesale access.

In the UK, Ofcom sees wholesale access as a necessary complement to infrastructure-based competition in NGA. Ofcom is therefore supporting the development of fit-for-purpose wholesale products. Ofcom is not attempting to specify the products directly, but has worked with industry to define a desirable set of characteristics for NGA wholesale access products: a concept it terms Active Line Access (ALA). The intention is that an ALA-compliant product would provide a service provider with a degree of control as close as possible to that of having its own network - a step change from traditional wholesale access.

There are five key characteristics of ALA as follows:

  • Flexibility in selection of the aggregation or interconnect point;
  • Ability to support QoS;
  • Flexibility in the types of user-network interface and CPE that can be supported;
  • Ability to guarantee network and service security and integrity;
  • Ability to support multicast services.

In addition to these capabilities, Ofcom and the industry identified Ethernet as the most appropriate technology to realise ALA. Ethernet was chosen for its widespread adoption, support for a wide range of physical media, and its transparency to higher layer protocols.
Having agreed the characteristics of Ethernet ALA, Ofcom's next step was to understand whether there were barriers to realising the ALA concept in practice. To this end, Ofcom engaged industry consultants CSMG to develop case studies of real-world wholesale Ethernet-based access services, and to assess the extent to which they embodied the desired characteristics of ALA. The case studies were drawn from international markets and were selected to cover a range of network architectures and market segments.

COLT was included in the study to provide an example of wholesale Ethernet delivered over a copper network. Although best known for its fibre optic metro area networks, COLT has increased its network reach using Ethernet in the First Mile (EFM) over LLU. COLT's wholesale services are available across both infrastructures and include Internet Access, Ethernet Services, IP-VPN and VoIP.

Of the fibre-based examples, Optimum Lightpath has a metro ring architecture in cities on the East coast of the USA. Optimum Lightpath uses Ethernet in the access network to transport its business-focussed voice, data and video services and also to serve the wholesale service provider market.

In Canada, Telus offers wholesale Ethernet access over both its metro fibre rings and point-to-point fibre access networks. Telus uses Ethernet access to provide E-Line and E-LAN services for business customers, emulating leased lines and LANs respectively.
Although it has no wholesale offer, Iliad was included because it uses Ethernet to deliver retail triple-play services on its FTTH network in France. In the wholesale market, Iliad plans to offer unbundled fibre access rather than an active Ethernet service.

BBned, in the Netherlands, provided an example of an alternative operator using point-to-point fibre to serve residential and business end-users. BBned's FTTH footprint includes Amsterdam where it operates the active layer of Amsterdam's CityNet network.
Also in the Netherlands, KPN offers a spectrum of wholesale access options including unbundled fibre and copper. Its wholesale Ethernet service is known as "Wholesale Broadband Access" (WBA) - first launched on ADSL in 2006 and extended to VDSL and FTTH in 2008.

Finally, as an example of wholesale Ethernet services on a Passive Optical Network, we included NTT's layer 2 "LAN Communications" service which is available across both its PON and point-to-point access fibre networks in Japan.

CSMG developed the case studies through a series of interviews with technical and product marketing executives from the network operators. Input was also taken from service provider customers, national regulators and vendors to provide a 360° view.
Looking at the first of the five characteristics, we found considerable flexibility in the range of interconnect and aggregation options. A range of interconnect points were available, enabling aggregation of traffic at local, regional and national levels. One operator also offered international aggregation, i.e. a single interconnect could be used to reach end-users in multiple countries.

We also found strong support for QoS, with network operators adopting one of two approaches. The first was to guarantee the bandwidth of individual access connections. The second was to classify the traffic (e.g. voice, video and data) and provide performance guarantees for each class. Guaranteed bandwidth was popular in the business market, where end-customers were using Ethernet services as substitutes for leased lines. Class of Service was more popular in the consumer market, as it enables network capacity to be shared and hence supports lower-cost services.
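
The difference between the two models can be sketched as follows; the class names, ports and parameters are illustrative assumptions only.

```python
# Approach 1: guarantee the bandwidth of an individual access connection.
guaranteed_access = {"line_id": "acc-123", "guaranteed_mbps": 10}

# Approach 2: classify traffic and guarantee performance per class.
class_of_service = {
    "voice": {"priority": 0, "max_latency_ms": 20},
    "video": {"priority": 1, "max_latency_ms": 50},
    "data":  {"priority": 2, "max_latency_ms": None},  # best effort
}

def classify(dst_port: int) -> str:
    # Toy classifier: map well-known ports onto traffic classes.
    return {5060: "voice", 554: "video"}.get(dst_port, "data")

print(classify(5060), class_of_service[classify(5060)])
```
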
In terms of flexibility at the user-network interface, in all but one of the case studies the network operator installed an active device at the customer's site to present Ethernet ports towards the customer. We found it was common practice for service providers to add their own CPE, resulting in two devices in the customer's home or office. At the time of the study, KPN was unique in providing a ‘wires-only' service; however, given historic trends, we expect wires-only presentation to become more common in NGA over time.

The ability to guarantee security and integrity was largely determined by the architecture adopted by the network operators and the functionality of their network equipment. The primary techniques in play were to separate customer traffic logically and lock down vulnerable communications, e.g. using VLANs, controlling broadcast traffic, and preventing user-to-user communication at Layer 2. The shared-access medium in PON introduces additional potential risks in terms of eavesdropping and denial of service, which service providers will need to consider in designing their retail propositions.
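
As a hedged sketch, the isolation techniques listed above can be modelled as a simple frame-forwarding policy check; the policy flags and port roles are invented for illustration.

```python
# Invented policy flags and port roles, for illustration only.
POLICY = {
    "per_customer_vlan": True,   # each customer isolated in its own VLAN
    "block_user_to_user": True,  # no direct Layer 2 forwarding between users
    "limit_broadcast": True,     # contain broadcast traffic
}

def may_forward(frame, ingress_port, egress_port):
    """Return True if a frame may be forwarded under the isolation policy."""
    if POLICY["per_customer_vlan"] and frame["vlan"] != egress_port["vlan"]:
        return False
    if POLICY["block_user_to_user"] and \
            ingress_port["role"] == egress_port["role"] == "user":
        return False
    if POLICY["limit_broadcast"] and frame["dst"] == "ff:ff:ff:ff:ff:ff":
        return egress_port["role"] == "network"  # broadcasts go upstream only
    return True

# Direct user-to-user forwarding is refused:
print(may_forward({"vlan": 101, "dst": "aa:bb:cc:dd:ee:ff"},
                  {"role": "user", "vlan": 101},
                  {"role": "user", "vlan": 101}))  # False
```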

Of the five ALA characteristics, the one with least support was multicast. Only BBned and Optimum Lightpath had incorporated multicast into their wholesale offers, although the majority of network operators employed it to carry their retail services (e.g. television broadcast or video conferencing). Without access to multicast, it is unlikely that service providers would be able to offer competing retail services as the bandwidth cost of unicasting the traffic would be prohibitive.

Returning to the overall objective of the research, the case studies demonstrate that examples of most ALA characteristics can already be found in real-world wholesale Ethernet access services. The presence of these characteristics in commercially available wholesale offers gives credence to the vision of ALA-compliant services being realised in practice. The study therefore supports the view that Ethernet ALA would be a useful component of a future regulatory toolkit for NGA.

Going forward, having established the ALA concept, Ofcom is now working with industry to promote the standardisation of Ethernet ALA. Ofcom sees ALA as having European, if not global, relevance, and therefore plans to hand over the technical requirements to standards bodies as a next step. International standardisation would enable widespread adoption by network operators and, in turn, deliver global scale economies in ALA-compliant infrastructure. Network operators stand to benefit from attracting service providers to their networks, while for service providers ALA creates the opportunity for control and differentiation without the need to own infrastructure. Finally, for end customers, ALA promises to support a competitive and innovative market for broadband services.

Bob House and Michael Dargue are senior members of CSMG's London office.
Further information on Next Generation Access and Ethernet ALA can be found at the following websites:
www.ofcom.org.uk/telecoms/discussnga
www.csmg-global.com

A crucial element in building a wholesale VoIP business and maintaining competitive edge in a harsh business environment is the choice of equipment that forms the core of the company's operation, says Nico Bradlee

With VoIP prospects looking bright thanks to new technologies and a plentiful choice of VoIP solutions, starting your own business in the field is an inviting opportunity. VoIP has entrenched itself in the telecommunications world, and competitive carriers are exploring numerous ways to derive benefits from this lucrative technology.

The wholesale VoIP market used to be crowded with a huge number of players from different leagues. The popularity of wholesale VoIP was easy to explain: you are your own boss, you sell a product that can almost sell itself, and it requires minimal investment, both in terms of capex for equipment and human resources.

But looking back over the past several years, we can see that harsh reality intruded and small players could no longer compete with large-scale telecommunications tycoons. Competition being the lifeblood of technological progress, it remains an essential prerequisite for the development of any market, VoIP included. Competition is the driving force that enables carriers to generate new revenues, and equipment vendors to offer new automated tools for them.

Nowadays the VoIP market is undergoing a transformation that affects the scale of the businesses present in it. The number of transit operators is shrinking as margins fall. This presents an additional challenge for the wholesale market's newcomers and poses a reasonable question: how do you join the VoIP race and survive in this hard-bitten business world? One of the crucial elements of a strategy for building a brand new wholesale VoIP business is the right choice of the equipment lying at the core of the company's operations. So let's take a look at class 4 switching equipment from the leading brands and get to the bottom of how to choose a switch that will save your network from going downhill.
Reviewed brands and products: Acme Packet, Audiocodes, MERA Systems, Nextone and Sansay.

The platform: hard or soft?
Notably, the overwhelming majority of vendors use hardware platforms in their switching equipment, though there is no definitive answer as to whether software or hardware is preferable, since both have their pros and cons.

A hardware platform requires no additional equipment and ships pre-installed on its own server, so you don't need to source an appropriate base. All of the vendors we picked out utilise hardware-based solutions, apart from MERA Systems, which uses a software platform for its switches. The advantages of a software-based switch are also notable: you can install the software on an existing server, and there is no need to return hardware to the vendor if a component proves defective. Moreover, if the carrier chooses to relocate to a different server, no physical replacement is needed.

Operating System
When it comes to the operating system, there is likewise no right or wrong choice. The majority of developers use Linux, and understandably so: it is Linux's universality, wide application and compatibility with servers and third-party systems that made Sansay, Nextone, MERA Systems and Audiocodes opt for it in their switching equipment. On the other hand, a proprietary platform can offer enhanced functionality and a competitive advantage over other market players; Acme Packet therefore uses its own OS as an application base, allowing for increased performance.

Functionality
The functionality of switches varies greatly, and has taken a big step forward thanks to technological progress. As most operators, long used to H.323, have started to adopt SIP, all of the leading vendors support conversion between the H.323 and SIP protocols, ensuring interoperability between equipment from various vendors. Additionally, Sansay and Acme Packet support the MGCP protocol, and Acme Packet can also work with H.248.
As for voice codec conversion, only Acme Packet and MERA Systems support it in their switching equipment. Acme Packet's functionality includes transcoding (translation between wireline and wireless codecs), transrating (mediating between variations in packetisation rate, eg 10ms to 30ms) and DTMF translation. MERA Systems' softswitches ensure conversion of a wide range of codecs: G.729, G.729A, G.729AB, G.723.1, G.711 A/U, GSM FR, Speex and iLBC.
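
To see why transrating matters, consider the arithmetic for a G.729 stream (8kbps payload) carried with roughly 40 bytes of IP/UDP/RTP headers per packet - standard figures, though the sketch itself is ours:

```python
# G.729 payload is 8kbps; IP + UDP + RTP headers add ~40 bytes per packet.
CODEC_BPS = 8_000
HEADER_BITS = 40 * 8

def bandwidth_kbps(packet_interval_ms: float) -> float:
    packets_per_sec = 1000 / packet_interval_ms
    return (CODEC_BPS + packets_per_sec * HEADER_BITS) / 1000

for interval in (10, 30):
    print(f"{interval}ms packetisation: {bandwidth_kbps(interval):.1f} kbps on the wire")
# 10ms -> 40.0 kbps, 30ms -> ~18.7 kbps
```

Repackaging the same voice payload into fewer, larger packets roughly halves the bandwidth on the wire, which is exactly the saving transrating buys.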

Support for encryption protocols is another important feature of switching equipment. Built-in support for TLS and IPSec is offered by Nextone, Audiocodes and Acme Packet; Acme Packet equipment additionally specifies support for MTLS and SRTP, with easy interworking between them.

Capacity
As vendors around the world have ramped up production of networking equipment to meet rising demand, the capacity of switches has increased to meet the requirements of different types of carriers. For instance, Nextone equipment, capable of handling up to 25,000 concurrent calls (CC), and Audiocodes equipment, which allows for 21,000CC, are targeted at Tier 1 and Tier 2 operators. Acme Packet, whose products are designed first and foremost for Tier 1 operators, also focuses on networks that handle at least 5,000CC, and provides this performance on a single server. Sansay and MERA Systems' products represent ideal solutions for Tier 3 and Tier 4 carriers whose networks process up to 7,000-10,000 concurrent calls in their most effective configuration.

Billing
No matter how productive and scalable your switching equipment is, for effective business you need a flexible third-party billing system to collect information about telephone calls and other services that are to be billed to the subscriber. A couple of good examples are the Cyneric and Jerasoft billing systems. Of the vendors in the above list, only MERA Systems offers an all-in-one solution that doesn't require a separate billing system for the business to be operational: its softswitch is a ready-to-go product with enhanced billing capabilities.

Pricing policy and target audience
Needless to say, the products considered in this article, while comparable in terms of switching functionality, are still designed for different types of carriers. While Nextone, Audiocodes and Acme Packet products deal with large amounts of traffic, MERA Systems and Sansay concentrate on solutions for small and medium-sized wholesale businesses, offering maximum functionality in switching equipment.

In a nutshell, each vendor concentrates on different sectors of the wholesale business, which explains the differences examined in this overview. It's up to each carrier to make a choice and opt for the equipment that best serves its business purposes.

Nico Bradlee is a freelance business and communications journalist.

According to a recent poll, revenues from current generation messaging services will continue to eclipse those from data services for at least the next four years - around the time we can expect wide-scale deployment of Service Delivery Platforms. This creates something of a revenue void. Added to this, termination fees and roaming charges, where telcos are making their money today, face an uncertain future as termination-free IP networks are rolled out (if the EU has its way). New advertising and ‘content sponsorship' business models offer hope, as more third party brands are encouraged into the arena. However, in the short term telcos must rely on doing what they do best - selling telecoms services - but in a much cleverer way. Smart services, adding a little more intelligence to the call, could be the key to filling this void. But could they also be the catalyst for bringing advertising revenues to the fore? Jonathan Bell investigates

Hindsight is a wonderful thing - especially when it comes to evaluating the success or otherwise of our industry's past visionary ambitions. It only seems like yesterday that all the predictions and industry research assured us that by this year our happy customers would be drowning in an interactive environment of data-rich media services delivered direct to their handsets. More importantly, by this time, the world's telecoms service providers would have morphed into true content and entertainment companies, leveraging their ownership of customer relationships, access networks and billing systems to dominate this emerging value chain.

The reality today is rather more disappointing. Voice and messaging services continue to make up the great bulk of most mobile service providers' revenues - even as these are eroded by voice commoditisation. Other commercial entities from outside the world of traditional telecoms are actively seeking their own paths to market domination, potentially reducing operators to bit-pipe players, while everyone scrambles to gain their share of an increasingly fickle and disloyal market.

So, what is to be done? One strategy already successfully adopted by service providers in both developed and developing markets is to introduce some form of advertising supported or brand sponsored services. While business models vary, these essentially translate into customers being able to make or receive calls (and messages) in exchange for exposure to adverts or, in some cases, for various types of content such as ringtones, ringback tones and wallpapers.

For the service provider, this type of activity could result in lower churn, higher loyalty and much-needed additional revenues while the infrastructures and technologies that can deliver truly rich services are being developed and deployed.

Of course, this is only the first stage for ad-funded mobile usage. The next step requires a degree of personalisation. Being able to target the customer more effectively will be key to justifying larger budgets from advertisers. This is perhaps one reason why telecoms executives polled at the recent SDP Summit were charmed by the idea of increasingly ‘smart' voice and messaging services. And you can see why.

The ability to add an element of targeting through the use of location and presence data certainly takes us some way down the road to true personalisation.

Adding intelligence to traditional ‘dumb' voice and messaging applications also offers consumers a degree of personalised call control and, because the services are very visible, they have a clear value to the user. This further reduces revenue erosion and churn.
So far, both research and practice indicate that such ad-funded models are serious and truly viable options - if the service provider gets it right from the start.

According to findings last year by market research company Harris Interactive, 35 per cent of adult US mobile phone users would be happy to accept incentive-based adverts. Of these, more than three quarters saw the best incentives as being simply financial, in terms of refunds or free call minutes, with smaller numbers in favour of free downloads such as games or ringtones. More interesting - at least in the context of how service providers should best structure their SDP platforms - is that around 70 per cent of those interested in receiving adverts would be happy to provide personal information on their interests, likes and dislikes to their service providers if they can have a service customised to their needs.

On the practical side, we can see the success of service providers like US-based Kajeet and the UK's Blyk. Both are targeted at the youth/child end of the market and both use various forms of sponsorship and advertising. Indeed, in the case of Kajeet, parents can also control user profiles, place calling and texting restrictions, and manage call balances.

This combination of research and comparative commercial success, at least so far, does highlight one positive direction that mobile service providers can consider taking to avoid the dangers of disintermediation and eroding revenues. But the real magic is in bringing together multiple facets and contexts for each user or demographic. Service providers must then target groups of users to make the advertising truly personal and relevant - and not an annoying hindrance.

To create such an environment, the SDP platform required must have certain characteristics in terms of its ability to combine both fixed and changeable information about the user - from user-defined areas of interest or tariffing plans, to a user's particular location at any given moment.

As can be seen on any social networking site, today's youth are far more relaxed - at least for the present - about sharing personal attributes and information. Mobile service providers should be ready to exploit this to increase the stickiness of their own services, while growing their relationships with brand and content owners. If we can be smart with location and presence in the short-term, the longer term opportunities of increased levels of personalisation are much more achievable.

Of course, this requires service providers themselves to develop and roll out services in a far more open and experimental manner than they have done in the past. It also demands that they be ready to scale these up rapidly to mass-market offerings as opportunities emerge. And in turn, this requires a high degree of flexibility within core network infrastructure, billing and provisioning systems, and the application itself - which brings us back to a standards-based approach.

The alternative is to be left out of this new value chain and see strategic assets - like network ownership, billing and customer identity relationships and location information - be exploited by more nimble outsiders with a better understanding of customer behaviours.
However, more than this, by utilising conventional voice and messaging services both to enable and to deliver more targeted advertising, the truly adverse impacts of voice commoditisation, and the subsequent revenue loss, may be averted - at least in the short term.

Jonathan Bell is VP Product Marketing, OpenCloud

Lean times
Fixed-line telecoms services are facing a bleak 2009 and beyond, according to a new report, Western European fixed telecoms: market sizings and forecasts 2008-2014, published by Analysys Mason.

"Rapidly saturating broadband means we are entering a new phase for fixed telecoms," argues lead author and principal analyst Rupert Wood. "The structural problems it faces are only exacerbated by the current economic downturn."

The report indicates that all three of the main retail lines of business of fixed telcos face problems. Broadband service revenue is slowing to low single-digit growth, and at the same time as the sector needs to invest to differentiate itself from an increasingly mobile internet, funding will be harder to justify. New services may stabilise the average revenue per line, but this is unlikely to grow.

Legacy voice has been in trouble for years, but the effect of an economic downturn will be to make revenue decline even faster relative to mobile. Unemployment and income squeeze will accelerate households' decisions to give up fixed voice services for good.
Enterprise telecoms revenues will decline as the economic downturn continues, although the report anticipates that, unlike in the main consumer areas, this will pick up again with an economic upturn.

The report forecasts a CAGR of -5.8 per cent for the retail fixed/broadband sector as a whole between 2008 and 2014, compared with -3.3 per cent for 2007-2008. In the traditional voice sector the report forecasts that retail revenue will decline by more than 50 per cent over the period.
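
To put those headline figures in context, the arithmetic behind a compound annual growth rate is straightforward; the normalised revenue level below is ours, with only the -5.8 per cent CAGR and the 2008-2014 horizon taken from the report:

```python
cagr = -0.058   # from the report: retail fixed/broadband, 2008-2014
years = 6       # 2008 to 2014
remaining = (1 + cagr) ** years
print(f"2014 revenue = {remaining:.2f} x 2008 level "
      f"({(1 - remaining) * 100:.0f} per cent cumulative decline)")
# ~0.70x the 2008 level, i.e. roughly a 30 per cent decline for the sector
```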

"There aren't many bright spots," says Wood. "But having said that, paradoxically, more wireless services mean some very good network and wholesale service opportunities for fixed operators. Ultimately, though, fixed operators need to adapt to their gradually changing role in the converged telecoms value-chain, and focus their growth plans on monetising those non-substitutable areas of their assets: core and metro networks, IT and managed service provision. So as convergence kicks in, we should be hearing less of separate fixed-line operators, and more of integrated fixed-line operations."
www.analysysmason.com

Recycle for charity
The British Red Cross is appealing for people to recycle their old or unused mobile phones and support the work of the charity.

For every mobile phone recycled through the British Red Cross, regardless of brand, model or age, the organisation will receive three pounds sterling to help vulnerable people. "With three pounds we can provide a one-week supply of rehydration salts for over 80 children in Africa," said Mark Astarita, Head of Fundraising at the British Red Cross.

Last Christmas, British households were inundated with an estimated 11 million new mobiles, and as many old handsets ended up in cupboards and bins. "If they had been sent to a charity like the British Red Cross, they would have been turned into £33 million destined to help people in need. This would really make a difference," said Astarita. To send in an old handset and battery free of charge, contact the British Red Cross for Mobile Phone Recycling freepost envelopes at: recycle@redcross.org.uk
www.redcross.org.uk  
 
Mobile marketing
The Mobile Marketing Association (MMA) has published the fifth edition of its MMA International Journal of Mobile Marketing (IJMM). The issue touches on a number of important mobile marketing themes, including engaging consumers through the mobile phone; consumer perceptions, attitudes and behaviour; mobile search and advertising; technology and services; and network provider business strategy.
Specific articles include:

  • Mobile advertising: does location-based advertising work?
  • Mobile social networking: the brand at play in the circle of friends with mobile communities representing a strong opportunity for brands
  • University students' attitudes toward mobile political communication
  • Making search work for the mobile ecosystem: implications for operators, portals, advertisers and brands
  • Mobile phone users' behaviours: the motivation factors of the mobile phone
  • Sold on mobile advertising: effective wireless carrier mobile advertising and how to make it even more so
Published by the MMA's Academic Outreach Committee (AOC) twice annually, in June and December, the journal provides a medium for academics, students, and industry professionals from around the world to share their insights and research on how the mobile channel can be effectively used for marketing.
www.mmaglobal.com

Content losses
TM Forum and the Mobile Entertainment Forum (MEF) have launched a joint initiative to address the estimated $5 billion in annual losses experienced by content suppliers across the mobile content value chain. These losses are attributable to incorrect reporting of revenue and, according to MEF calculations, comprise as much as 25 per cent of the $18 billion mobile content services market.

A combined team of TM Forum and MEF member companies will develop and publish work focused on sales reporting metrics.  These metrics will enable service providers, content aggregators and providers to build a common understanding of the quality and quantity of services delivered, which in turn will improve the measure of revenue flows for these services across the value chain. This effort will build on existing MEF work designed to improve trust and profits across the value chain, and on TM Forum work related to business process and revenue leakage issues that reflect service provider perspectives.
Keith Willetts, chairman and CEO, TM Forum, comments: "We believe this partnership will bring major benefits to both TM Forum and MEF members, as well as accelerating cooperation between the content and service provider communities. The end goal is a win-win where the market for these sorts of services grows, losses are stemmed and profitability increases. It is critical that all the players in the value chain understand how to work together to tackle these challenges."

Creating, delivering and monetising content and digital media services are creating new demands on business models and operations. Together, the Forums will address these demands and work with their respective members to address real, bottom-line-affecting issues. The collaboration of these two organizations will ensure the solutions span the entire value chain. Over the longer term, the TM Forum and MEF will look at the bigger picture of lowering the cost of rolling out content and media services across mobile networks. The aim of this long-term view will be to stimulate the ability of different players to effectively trade together in an automated fashion and grow the overall market by enabling new joint market approaches.
www.tmforum.org

SMS still king
A new report from Portio Research focused on mobile messaging suggests that SMS will continue to be the cash cow of mobile data revenues for some time to come. Traffic volumes and revenues continue to confound predictions and are expected to keep growing throughout the global economic downturn. Indeed, the whole mobile messaging industry, worth USD 130 billion in 2008, is predicted to be worth USD 224 billion by 2013 - 60 per cent of non-voice service revenues. The report, Mobile Messaging Futures 2008-2013, ventures that nothing is likely to stop the continued growth of mobile messaging in the short term, driven by a cocktail of ubiquitous SMS, media-rich MMS, enterprise-based mobile email and youth-conscious mobile IM.

SMS remains ‘king' because there is no cheap, easy-to-use alternative that works with all phones and across all networks; it is loved the world over. Indeed, in the US market, where SMS was a comparatively slow starter, use per subscriber per month is now almost double the European average. In China the average user sends over 100 messages each month, while the Filipinos remain the leading exponents with 755 messages each month.
Portio also predict a bright future for mobile email even though Japan is the only market where consumer mobile email has surpassed the use of SMS. Email is still the most popular form of business communication and the report suggests that mobile email users worldwide will quadruple from approximately a quarter of a billion users in 2008 to over a billion users by the end of 2013.

The rising star in the mobile messaging constellation is mobile instant messaging (MIM), which is still beset by the technical problems of interoperability. Portio nevertheless predict exponential growth in mobile IM users, surging from a worldwide total of 111 million in 2008 to a massive 867 million by the close of 2013. This growth in users will be accompanied by an equally impressive five-fold increase in revenues, from approximately USD 2.5 billion in 2008 to approximately USD 12.4 billion in 2013.
www.portioresearch.com

All eyes are on Barcelona, says Michael O'Hara, as the communications industry gathers for the 2009 Mobile World Congress

In February, the mobile communications industry will again converge on Barcelona for the GSMA Mobile World Congress.  Under the banner "Think Forward", the 2009 Mobile World Congress will draw executives from the world's largest and most influential mobile operators, software companies, equipment providers, internet companies and media and entertainment organizations.

By bringing together the leaders of companies across the broad communications sector, we'll be able to gain further insight into the significant challenges presently facing our industry, and focus on how we can leverage mobility to create new opportunities, and drive productivity and prosperity going forward. To that end, the GSMA is active in a number of initiatives across the industry, centering on the three key areas of Mobile Broadband, Mobile Lifestyle and the Mobile Planet. For example, our Mobile Broadband initiatives focus on the development of a ubiquitous Mobile Broadband infrastructure that will connect the world's population to the internet. Our Mobile Lifestyle initiatives concentrate on the creation of innovative services and experiences that will get delivered on this infrastructure. And through our Mobile Planet initiatives, we'll leverage the benefits of mobile communications to help enrich and improve the lives of individuals across the developing world. 

These themes and initiatives will be reflected throughout the 2009 Mobile World Congress conference programme and exhibition. The programme will focus on issues critical to the development of the mobile communications industry and will address topical areas including the adoption of advanced mobile broadband technologies, such as Long-Term Evolution (LTE), the shift to an open mobile ecosystem, and the proliferation of mobile entertainment and advertising services.

As always, the Mobile World Congress features a "who's who" of the communications industry.  This year's keynote speakers will address the challenges presented by the global economic slowdown, and outline strategies for sustaining growth not only in the core mobile arena, but across the rapidly expanding mobile ecosystem.   Keynote speakers include Ralph de la Vega, President and CEO of AT&T Mobility and Consumer Markets; Steve Ballmer, CEO of Microsoft; Chris DeWolfe, CEO and co-founder of MySpace; Olli-Pekka Kallasvuo, President and CEO of Nokia; Simon Beresford Wylie, CEO of Nokia Siemens Networks; Paul Jacobs, President and CEO of Qualcomm; Josh Silverman, CEO of Skype; César Alierta, Executive Chairman of Telefónica;  Jon Fredrik Baksaas, President and CEO of Telenor Group; Dick Lynch, EVP and CTO of Verizon Communications; and Vittorio Colao, Chief Executive, Vodafone.

Mobile entertainment services and content are seen as key areas of growth for operators, handset and device vendors and traditional content providers.  To address these critical areas, the 2009 Mobile Backstage event will combine keynotes, sessions and discussions exploring the promise of film, music, advertising and gaming on the mobile medium. Additionally, Academy Award-winning actor Kevin Spacey will deliver a keynote speech and host the MOFILM Mobile Short Film Festival.  The MOFILM Festival is the first of its kind and highlights the increasing influence of the mobile medium on the entertainment industry, bringing together art, commerce and technology. The 2009 initiative comes on a wave of a new generation of sophisticated multimedia-enabled mobile handsets and value-added operator services, providing new opportunities to enjoy short form video.

In addition to the conference sessions, the Mobile World Congress features the world's largest exhibition for the mobile industry, showcasing mobile products and services from approximately 1,300 companies.  We'll also celebrate the innovation and achievements of the mobile industry; on Tuesday, 17 February, we'll host the Global Mobile Awards at Barcelona's National Palace overlooking the Fira, home of the Congress since 2006.
2009 is set to be a milestone year for the mobile communications industry.  While not immune to the global economic slowdown, the mobile industry will continue to grow.  Indeed, mobile services will help both small businesses and enterprises to weather the recession by helping them increase productivity and efficiency. Thanks primarily to strong demand in developing countries for voice, text and mobile internet access, and in developed countries for mobile broadband services, our industry is likely to pass the four billion connections mark in February and reach six billion by the end of 2012. The Mobile World Congress is where the decisions will be made to enable our industry to meet the demands of the future.

Michael O'Hara is the chief marketing officer for the GSM Association.
www.gsm.org

It's 8am and Lucy's mobile email has stopped working. She's nervous, out of time, and out of patience. Arriving at the office, she manages to reach a live person after what seems like an eternity on hold, only to be led through a confusing set of menus and email settings. A half-hour later, the problem is solved, but is she really happy? What was the impact on her loyalty, and on the mobile operator's operational expenses? How could this have played out differently? David Ginsburg looks at one type of technology that provides the mobile operator - for the first time - with direct over-the-air access to the phone when the subscriber calls for help, thus avoiding the error-prone and inefficient interplay between the frustrated subscriber and the frontline CSR that has been the norm since the birth of the industry

Everyone agrees: mobile network operators are facing challenges in delivering quality customer care, especially in light of the explosive growth of smartphones. Indeed, in the next few years, smartphones are expected to account for more than 75 per cent of new devices shipped. These phones, now entering the mass market, are often difficult or counterintuitive to use and expensive to support. And operators, in a rush to deliver the latest and greatest device in a brutal and unforgiving market, have less control over the stability of the software on these phones. They face a sea change from a simple world where the handset either worked or was physically broken, to a more sophisticated, more complex world where it is easy to misconfigure advanced services and settings. These factors all add up to additional support costs, service abandonment and subscriber churn. Operators have two options: either hire more frontline help, at considerable cost, or hold the line on expenses and risk reducing customer satisfaction and loyalty. So how does Mobile Device Management (MDM) offer a way out? If we look at the factors contributing to the customer care dilemma, they fall into three areas - handset recalls, handset returns due to usability, and configuration calls. MDM can address all three.

Handset recalls
Handset recalls occur when the operator, working with the handset vendor, realizes that the handset, due to a hardware or software bug, is broken in some significant way. Traditionally, the operator would issue a recall, forcing subscribers to bring their phones in to the store to be replaced or re-flashed. This results in high per-device costs and does nothing to engender subscriber satisfaction. Annual exposure amongst Tier-1 operators is upwards of $1.4 billion. With FOTA (firmware over the air), MDM can address more than $500 million of that $1.4 billion, a figure that will grow over time with increasing FOTA client penetration and MDM server rollouts. By 2013, MDM is projected to address more than 75 per cent of an expected $1.9 billion handset recall exposure. These savings, along with the positive impact on the subscriber experience, are compelling arguments in favor of FOTA. Add to these the time-to-market advantages of being able to update devices after they have left the factory, and the FOTA value proposition becomes fairly clear.

Handset returns
Handset returns occur when a subscriber just can't seem to configure the phone properly. In fact, one in seven phones in North America, for example, is returned for this very reason - and in most of these returned phones, no fault is found. The global exposure for mobile operators from returns in 2009 will be $2.5 billion. MDM can initially address almost $400 million of this, growing to more than $1 billion in potential savings by 2013, through better control of configurations - resulting in subscribers actually being able to use their phones and the shiny new billable features they carry.

Configuration issues
The biggest support challenge mobile network operators face is configuration, with more than 30 per cent of all calls being configuration related. Tier-1 operators field tens of millions of these calls every year.  Typical reasons for calls include "My phone doesn't ring anymore," or "I cannot receive SMS messages." The subscriber may leap to the conclusion that the device is broken or the issue resides on the network but in most cases the phone not ringing any more is due to it being set to vibrate or not ring. Text messages not coming in or going out are often due to things like the SMS inbox being full.

In addition to problems like these, new device and service launches create their own problems. For example, navigation services result in an entirely new set of questions, including "Does it work with my phone?" or "I've loaded it, but it is not working."  And of course, the care organization must be trained in addressing these complaints.
The ability to significantly reduce configuration call times is perhaps the greatest benefit MDM brings to the table. In fact, configuration issues alone present mobile operators with a staggering $21 billion bill each year, a figure forecast to grow rapidly with the adoption of the smartphone. But there is light at the end of the tunnel. As mentioned earlier, device management opens a real-time channel to the device, allowing the CSR to see into the device and, when needed, reach out and fix the phone. Gone are the days of walking confused and frustrated subscribers through a twisty little maze of menu choices, all alike. Instead, that frustration and wasted time can be replaced with a "wow" experience where the subscriber is surprised and delighted by how quickly and how completely his or her problem has been addressed.

The bottom line
Ultimately, MDM may save operators globally a total of $3 billion in 2009 across the three areas described above - recalls, returns and configuration calls. This will grow to $23 billion in 2013 due to increasing OMA-DM device penetration and operator familiarity with the technology. Mapping this to the typical Tier-1, an operator with 50 million subscribers will enjoy $80 million in potential savings in 2009, providing more than enough validation for its MDM investment. These numbers have recently been validated by the analyst firm Stratecast, providing the first third-party analysis of the positive impact of MDM on frontline care and customer satisfaction.

The call revisited
It's 8am and Lucy's mobile email has stopped working. She's nervous, out of time, and out of patience. Arriving at the office, she manages to reach a live person, and is greeted with a very different dialogue. While she was on hold, the system had already polled the phone for its hardware and software status and determined whether an update is recommended. The agent then asks if she'd like her email settings checked against the operator's reference settings. Of course, Lucy says yes. The settings are retrieved, compared and corrected in a matter of minutes, and Lucy is on her way. Mobile Device Management, or MDM, is one technology that makes this all possible.
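
A minimal sketch of that settings check might look as follows, with the OMA-DM exchange abstracted away; the setting names and values are invented for illustration.

```python
# Invented reference settings; a real deployment would fetch these
# per device model from the operator's configuration store.
REFERENCE_EMAIL_SETTINGS = {
    "smtp_server": "smtp.operator.example",
    "imap_server": "imap.operator.example",
    "port": 993,
    "use_ssl": True,
}

def check_and_correct(device_settings: dict) -> list:
    """Compare retrieved device settings with the reference and
    return the corrections to be pushed over the air."""
    corrections = []
    for key, expected in REFERENCE_EMAIL_SETTINGS.items():
        if device_settings.get(key) != expected:
            corrections.append((key, device_settings.get(key), expected))
            device_settings[key] = expected
    return corrections

# Lucy's phone has a stale server name and SSL switched off:
print(check_and_correct({"smtp_server": "smtp.operator.example",
                         "imap_server": "old.imap.example",
                         "port": 993, "use_ssl": False}))
```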

David Ginsburg is Vice President of Marketing and Product Management at InnoPath Software. He can be reached at dginsburg@innopath.com
www.innopath.com

The most credible revenue opportunities are focused around the metro rather than the core: residential multiplay, business managed services, next generation mobile, software as a service, etc. Service providers should now take a deep look at the metro networks they build, say David Noguer Bau and Jean-Marc Uzé, and evaluate the level of optimisation and the potential to evolve in step with the demand of new services

In recent years a large portion of the service provider infrastructure budget has gone on access and metro networks. These investments have been driven by the convergence of services onto all-IP, but unfortunately this has frequently translated into multiple networks, purpose-built for each service and application; paradoxically, it was during convergence that multiple new networks were built, producing a divergent network world.

It's now time to transform networks into true convergence, driven by cost and simplification and, more importantly, by the potential of future monetisation.
A few years back, the architects of core networks faced a similar situation: with declining demand for TDM (Time Division Multiplexing) services and the emergence of Ethernet as the standard interface, the core required a major transformation to achieve true convergence. The issue was to build a simplified network without unnecessary layers, enabling the efficient coexistence of packets and circuits. The idea of layering, and of separating "services" from "infrastructure", was justified in order to allow for a very stable network over which each service could be managed on its own, so that a particular service failure would typically affect just that service. Today, however, with the success of a foundation multi-service layer based on IP/MPLS and deployed by the IP services department, there are very few services built directly on top of the transport layer.

With the majority of revenues coming from the higher layers of the network, most service providers have decided to go for a more pragmatic solution in the core: integrating the transport and IP departments into a single group, to achieve better visibility of the layers required to build an efficient network and so simplify operations. New technologies such as PBB-TE and MPLS-TP emerged, promising future integration with transmission elements and, ultimately, an optimised model for L0 to L2 services. In practice, however, transformed core networks are built with IP/MPLS over DWDM, leveraging the advantages of the two worlds and providing the flexibility and efficiency required by revenue-generating services, while avoiding the limitations of circuits-only or basic packet functionality. Moreover, this allows the same model, architecture and operational processes to be used in the metro Ethernet as have been deployed in backbones over the last ten years.

Today's metro networks are already Ethernet centric, most of them with IP capabilities, but a large number of service providers still keep separated networks for business, residential and mobile operations. By keeping them separate, service providers are losing a valuable opportunity to capitalise on the synergies between them.

The main goals for convergence are simplification, lower operating costs and flexibility. To achieve true convergence, the service provider must look at the specific requirements of each service and application in order to integrate all of them successfully. Equally important is building greater resiliency and scalability across all services and applications. MPLS seems to provide all the ingredients for this mix.

The deployment of MPLS in metro networks is not new; however, due to the lack of scalability of some early implementations, many service providers decided to split the metro into the multiple networks they have today. MPLS now offers service providers the scalability and tools required for a converged metro, and Juniper Networks can provide them in a cost-effective way: LDP/BGP MPLS interworking, point-to-multipoint LSPs, unparalleled MAC address tables, and L2 and L3 integration.

Purpose-built networks have resulted in faster service deployments in line with the required SLAs, but also in expensive, monolithic and rigid infrastructures unable to evolve with services and new demands. With the converged metro there is a risk of recreating dedicated service delivery points at the edge of the network - bringing back most of the issues experienced in the past. A service-specific edge element can't evolve with new applications, and imposes a one-size-fits-all model that is not applicable to every service.
Service providers are seeing increasing value in deploying services in a more distributed fashion: location-based services, local advertisement insertion, and distributed caching of high-demand services (video, P2P). Still, some services should remain centralised. As services evolve, the most efficient placement of their delivery points may change in order to scale with user demand while optimising operational expenses.

The solution here should be based on adding an intelligent service plane, so that the metro nodes can run a variety of services. The service provider must be able to decide which services it wants to deliver and where each service has to be enabled in order to match the architecture that service requires. This model virtually converts each metro node into a true "Intelligent Services Edge". With this model, the service delivery point for each service can be anywhere, providing the required flexibility - ultimately translating into the expected service velocity, the agility to innovate with creative services, and the ability to retain existing subscribers and attract new ones.

A converged MPLS metro network with built-in flexible service deployment brings the service provider a significant competitive advantage, with the richest available set of tools: L2 VPNs, IP VPNs, Intrusion Detection and Prevention (IDP), broadband subscriber management, and Session Border Control (SBC).

We've seen how a uniform MPLS infrastructure provides seamless service continuity between core and metro, so that services can be placed wherever they are most effective.
If the access, metro and core were all based on different technologies, moving a service around would be considerably harder, and might involve shifting boundaries or re-architecting the entire network.

Service providers look for service ubiquity across access technologies (xDSL, FTTx, 2G, 3G, 4G), and this requires a scalable, resilient network. The Broadband Forum (BBF) is already debating "MPLS to the Access Node" for such applications.

With end-to-end MPLS, from the moment a customer packet enters the network until it exits, it experiences no breaks and no discontinuities - whether the customer is residential or business, fixed or mobile, commodity bit-pipe or deeply service-oriented, Layer 2, Layer 3 or even Layer 7. Implementing MPLS to the access node obviously provides convergence, since the network is uniform. Furthermore, it provides true service flexibility to deploy services when and where needed, as the access LSP typically becomes the access virtual link of any given service instance, hosted in the appropriate intelligent services edge. It also allows new services to be brought up quickly and easily, and moved as their requirements evolve.

Building a single MPLS network as described entails very large scaling requirements: from fewer than 1,000 nodes today to 10,000-100,000 nodes in a single all-encompassing MPLS network spanning access, metro and core. It also requires robust protocols, devices and OAM, low latency, and resiliency levels that provide 50-millisecond service restoration. The network architecture required to meet these requirements must not constrain services in any way.
MPLS technology inherits a hierarchical approach and inter-domain signalling (BGP) that make a scalable end-to-end model possible. The architecture needed to achieve the above requirements divides the network into "regions" and establishes the required connectivity within them. Simplicity is achieved through single IP connectivity for the control plane and MPLS connectivity for all customer packets.

The network exists to enable services but, unfortunately, too often the network architecture dictates which services can be offered. It should be the services that determine connectivity paradigms, quality of experience and resiliency requirements.

Building a converged metro network and adding service delivery flexibility creates a significant competitive advantage for service providers who are focused on high-performance network infrastructure. Extending MPLS from the core to the metro, and perhaps to the access nodes, provides seamless service continuity with better control of the subscriber experience, and will ultimately bring the desired monetisation of the network.

An intelligent services edge, coupled with MPLS in the access, provides the ultimate flexibility for service providers to offer any service, at the appropriate scale, while minimising the cost of managing them. Moreover, it allows service providers to innovate by creating new services, ultimately enabling the kind of non-disruptive trial-and-error approach that has, so far, generated the most profitable applications on the internet.

Services are the reason subscribers will stay on your network.

David Noguer Bau and Jean-Marc Uzé, Juniper Networks
www.juniper.net

With peak data rates of 100Mbps+ downlink and 50Mbps+ uplink already being promoted by operators - a seven-fold increase from today's 3G High Speed Packet Access (HSPA) services - Long Term Evolution (LTE) will evolve wireless networks for the first time to an all-IP domain. Mike Coward and Manish Singh look at the role that emerging technologies like femtocells and deep packet inspection (DPI) will play as voice and data networks converge

Most of the industry is united in the common demand for mobile broadband, and there are certainly enough applications and content types to use up the 100Mbps downlink data rates of LTE. But the fact remains that subscribers will always demand more for less. Even if LTE delivers a seven-fold increase in data speeds, end users certainly won't pay any more for it, let alone seven times more for that privilege. Thankfully, LTE's spectral efficiency - four times that of 3G - can deliver some cost savings, but there needs to be a great deal more optimization before operators can make significant cuts to the cost per bit.

Before signing off on their LTE strategies, carriers need to answer some fundamental questions. How should capacity be increased to meet end-user demand? Where should this capacity be built into the network? How can operators slash the cost/bit without massive capital expenditure? These are questions that require an appreciation of subscriber behaviour. For example, one insight into today's subscribers is that nearly 60 per cent of today's voice calls start and finish inside a building. So what does this mean for data usage?
We submit that operators should consider the lessons learned from 3G roll-outs; recall that widespread 3G adoption took five years longer than the industry initially predicted. This fact, combined with the sky-high cost of obtaining 3G wireless spectrum, meant that return on investment (ROI) for 3G services was also delayed by five years. Meanwhile, the 2.1GHz spectrum wasn't effective for delivering indoor coverage, causing a real customer satisfaction issue. And let's not forget the big upfront capital outlays required to build these national 3G networks, which didn't provide material cash flow until there was widespread adoption. Learning from these 3G lessons, it's clear that wireless operators need to find ways to ease into LTE deployment in a cost-effective, scalable manner that minimizes upfront investment and risk.
Fortunately, there is already a technology solution to address these basic yet important issues of increased coverage, more capacity, and reduced churn. Femtocells are small wireless base stations which sit in consumers' homes to provide a 3G air interface, thereby enabling 3G handsets to work much better indoors. By utilizing the consumer's IP broadband connection - such as DSL or cable modem - a femtocell is able to connect to the operator's 3G core network.
We believe mobile operators must leverage LTE femtocells, also known as Home eNodeBs (HeNBs), as part of their LTE network rollout strategy. By enabling carriers to build their LTE networks one household at a time, one femtocell at a time, operators can avoid huge upfront capital expenditure in building citywide and nationwide LTE networks. In other words, femtocells empower operators to augment capacity where it is needed the most - inside homes, offices, airports, etc. - while leveraging their existing 3G networks to provide widespread coverage.
Indeed, there will come a time when it will make economic sense for operators to build citywide macrocell LTE networks. Until that day, LTE femtocells offer operators the ability to expand networks in line with market demand and investment plans.
Thanks to minimal costs related to site acquisition, power, cooling, and backhaul, femtocells are the cheapest type of cell site an operator can deploy, all the while increasing capacity and driving down the cost/bit significantly. Because femtocell devices reside in consumers' homes, utilizing their existing electrical and broadband IP connections, most of the cost is passed to consumers.

It is important to remember that the 100Mbps+ downlink data rates LTE femtocells will deliver cannot be carried by existing DSL or cable modems, which currently achieve a maximum of around 7-10Mbps. However, by 2010, when we expect LTE networks to begin rolling out, FTTx is likely to provide the baseline residential backhaul infrastructure - and early FTTx adopters are likely to be early LTE adopters too.
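The arithmetic is stark: until the backhaul catches up, the slower link sets the ceiling. A one-line check, using the rates quoted above:

    # The backhaul bottleneck in one calculation, using the rates quoted above.
    lte_air_interface_mbps = 100   # LTE femtocell downlink
    dsl_backhaul_mbps = 10         # upper end of today's DSL/cable range

    # The effective rate a subscriber sees is capped by the slower link.
    effective_mbps = min(lte_air_interface_mbps, dsl_backhaul_mbps)
    print(f"Effective downlink: {effective_mbps} Mbps "
          f"({lte_air_interface_mbps / dsl_backhaul_mbps:.0f}x shortfall)")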

LTE also faces other challenges as it reaches broader market rollout. For example, the proliferation of increasingly intelligent handsets combined with high wireless bandwidth - giving subscribers network connections equal or superior to those of personal computers on wireline broadband - means LTE networks are likely to become quickly swamped with peer-to-peer (P2P) traffic and susceptible to the same kinds of aggressive network security attacks that afflict wireline networks.

Deep packet inspection (DPI) is one technology that is likely to take centre stage in ensuring LTE networks deliver the high-speed data rates that have been promised. DPI broadly refers to systems that inspect the contents of packets, normally to identify the application creating the traffic, such as Voice over IP (VoIP), P2P, e-mail, or Web page downloads. DPI systems then use this information to trigger appropriate actions such as traffic shaping, traffic management, lawful intercept, caching, and blocking. DPI has already emerged as a key technology in managing the growth of data traffic in wireline networks.
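At its simplest, application identification is pattern matching against packet payloads, followed by a policy action. The toy classifier below - its signatures and actions are invented for the example, and commercial DPI engines use far richer signatures and per-flow state - shows the basic "identify, then act" pipeline:

    # Toy DPI classifier: identify the application from packet contents, then
    # trigger an action. Signatures and actions are invented for the example.

    SIGNATURES = {
        b"BitTorrent protocol": "p2p",
        b"SIP/2.0":             "voip",
        b"GET ":                "web",
        b"MAIL FROM":           "email",
    }

    ACTIONS = {
        "p2p":   "shape",       # throttle to protect other traffic
        "voip":  "prioritize",
        "web":   "forward",
        "email": "forward",
    }

    def classify(payload: bytes) -> str:
        for signature, app in SIGNATURES.items():
            if signature in payload:
                return app
        return "unknown"

    def handle(payload: bytes) -> str:
        return ACTIONS.get(classify(payload), "forward")

    print(handle(b"\x13BitTorrent protocol"))    # -> shape
    print(handle(b"GET /index.html HTTP/1.1"))   # -> forward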
P2P blocking and traffic shaping account for the highest-profile DPI deployments in the wireline market to date. While U.S. cable provider Comcast's blocking policy received much criticism at the time, the industry has since come to accept that some shaping of subscriber traffic is required. DPI can also be used to deliver premium services (such as prioritized bandwidth) and to enforce service level agreements (SLAs). And while initial DPI deployments have been concentrated in the fixed broadband arena, mobile deployments are increasing as wireless data traffic explodes; analysts predict that mobile DPI revenues will exceed fixed DPI revenues by 2011.

In fact, DPI is arguably the most effective tool for re-taking control of the network. Used to implement network-based security, it blocks an attack in the network - closer to the source - before it ever reaches subscriber handsets, protecting thousands or even millions of subscribers at once. DPI can also control P2P traffic, throttling it back to protect more valuable (and revenue-generating) web, e-mail, or mobile video traffic.
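Throttling of this kind is typically built on a rate limiter such as a token bucket; the sketch below - parameters purely illustrative, not any vendor's implementation - caps a P2P flow at roughly 1Mbps while allowing a brief burst:

    # Token-bucket rate limiter of the kind commonly used to throttle P2P
    # traffic back (illustrative parameters only).
    import time

    class TokenBucket:
        def __init__(self, rate_bytes_per_s, burst_bytes):
            self.rate = rate_bytes_per_s
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            # Refill tokens in proportion to elapsed time, up to the burst cap.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True          # forward the packet
            return False             # drop or queue: flow exceeds its cap

    p2p_limiter = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=50_000)  # ~1Mbps
    print(p2p_limiter.allow(1500))   # first full-size packet passes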

Another interesting perspective to consider is that consumers have come to expect flat-rate plans for their home broadband connections, and this paradigm is now being replicated in the mobile broadband arena - the opposite of what wireless carriers had hoped for. Most major US carriers have already moved to a monthly flat rate for calling, short message service (SMS), and wireless data. Such flat-rate pricing leaves carriers searching for new and advanced services that can command a premium price tag and so help them recoup their investment in LTE technology.

DPI systems can provide the basis for delivering such innovative new services, helping carriers both differentiate themselves and generate additional revenue. For instance, operators might offer different bandwidth levels for different price plans, speed boosts when connecting to affiliated network sites, or application-optimized packages that prioritize gaming, VoIP, and video conferencing traffic. DPI platforms also deliver market intelligence, helping carriers see where and how data is used so they can plan new service packages or target mobile advertising.
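In practice, such differentiation reduces to a policy lookup: classify the subscriber's traffic, look up the plan, and apply its bandwidth and priority. The tiers, rates, and boosted applications below are invented purely for illustration:

    # Plan-to-policy mapping for DPI-based differentiated services
    # (tier names, rates, and boosted applications invented for illustration).

    PLANS = {
        "basic":   {"max_mbps": 2,  "boosted_apps": set()},
        "premium": {"max_mbps": 10, "boosted_apps": {"voip", "video_conf"}},
        "gamer":   {"max_mbps": 10, "boosted_apps": {"gaming", "voip"}},
    }

    def policy_for(plan: str, app: str) -> dict:
        p = PLANS[plan]
        boosted = app in p["boosted_apps"]
        return {
            "max_mbps": p["max_mbps"] * (2 if boosted else 1),  # speed boost
            "priority": "high" if boosted else "normal",
        }

    print(policy_for("gamer", "gaming"))   # {'max_mbps': 20, 'priority': 'high'}
    print(policy_for("basic", "gaming"))   # {'max_mbps': 2, 'priority': 'normal'}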

When it comes to deciding where the DPI technology should sit in a wireless network, there is some debate. We believe the optimum approach is to include DPI functionality in the LTE network nodes themselves, particularly if those nodes are based on standards-based, bladed architectures like ATCA. Combining the functionality in this way reduces the number of separate "boxes" in the network and therefore removes some of the complexity and administration required. Packet latency through the system is also improved by reducing the number of hops, which is critical in maintaining good voice call performance in an all-IP network. A bladed environment where DPI and other functions can be mixed and matched also reduces rack footprint and lowers system cooling and management costs while giving carriers the option to upgrade DPI functionality as new threats and defences emerge.
While these are still very early days for LTE, femtocells provide a compelling alternative for how operators can build out their networks: operators can launch new services and higher data rates more quickly, without the front-loaded capital expenditure normally required to build citywide and nationwide networks. Likewise, DPI provides solutions to the technical and security challenges posed by high-bandwidth LTE or 3G connections to increasingly open, intelligent, and sometimes vulnerable handsets. Together, femtocells and DPI provide carriers with a new set of business tools to increase average revenue per user (ARPU) while delivering flexibility, customer satisfaction, and return on investment.

Mike Coward is CTO, Continuous Computing, and can be contacted via: mikec@ccpu.com
Manish Singh is VP Product Line Management, Continuous Computing, and can be contacted via: manish@ccpu.com
