European Communications


Features

European Communications discusses the latest telecom trends with telco executives, analysts and topic experts via insightful analysis, Q&As and opinion pieces.

CONVERGENT CHARGING - Holding the purse strings

Kari Pulkkinen looks at how online cost control can help operators build a business case for convergent charging

" height="<% height %>" align="right" alt="CONVERGENT CHARGING - Holding the purse strings" class="articleimage" />

The uptake of converged communications has brought with it a wide range of new opportunities for service providers. Triple- and quadruple-play services, including video services as well as applications, download and other content services, are being increasingly accepted into the mainstream and demanded by business users and consumers alike. However, these new services all need to be accurately charged for and billed to the customer to ensure ongoing usage and maximised revenue. How best to achieve this is currently of major concern to operators and service providers alike. 
In addition to the concerns around accurate billing, there is the question of how to ensure that all customers receive the same level of experience – whether they are post or prepaid. Currently, prepaid customers tend to receive limited services and charging models from their providers due to concerns amongst those service providers that their billing solutions have limitations that offer a potential window for fraud. Although operators have in the past been hesitant about allowing full service offerings to prepaid subscribers, they are now looking for solutions that allow them to fully capitalise on the potential of prepaid services without suffering revenue leakage and fraud problems. One such solution, which can enable operators to offer more services to the prepaid user, is online charging. By deploying online charging solutions, operators can offer all services to all users while closing the gap on fraud and revenue leakage. Such a solution allows operators to fully capitalise on their prepaid potential and, ultimately, fulfil end-user needs with a wider service offering.
One final consideration revolves around the issue of ‘usage control’. Traditionally, usage control has been linked to the prepaid payment option. However, there is a much wider need for usage control regardless of the payment method. As an example, given the focus on children’s use of, and exposure to, such services, an increasing number of parents require this additional level of control. Cost control is an important element, as parents want to control their children’s spending. Particularly important for younger children as they get their “first mobile”, this type of cost control can educate younger users about the usage of mobile services. Online cost control helps both parents and children in these tasks.
There are a variety of service concepts that could cater to helping parents and children control spending: for example, a fixed monthly fee and, on top of it, controlled usage with a user (parent) defined limit. This type of personalised billing model ensures that parents remain confident about costs, and encourages long-term usage. For operators, the fixed monthly fee ensures at least a minimum level of revenue from customers.
In addition, it is vitally important for operators to recognise the role of online cost control in managing both fraud and credit risk, as this can have the greatest impact on their bottom line. Offering new services, particularly in emerging markets, creates new opportunities for revenue, but it also risks exposing operators to increased credit risk. As part of convergent charging, online cost control can help mitigate this risk through a hybrid approach: a customer would have a fixed limit for post-paid usage that, once exceeded, would automatically switch the payment method to a prepaid mode, with the prepaid account then funded through top-ups. 
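By way of illustration, a minimal sketch of how such a hybrid limit check might be expressed is shown below; the account model, field names and figures are assumptions for this example rather than features of any particular online charging product.

```python
# Illustrative sketch of a hybrid post-paid/prepaid spending limit.
# The account model and figures are assumptions, not taken from any
# specific online charging system.

from dataclasses import dataclass

@dataclass
class Account:
    postpaid_limit: float   # user- or operator-defined spending ceiling
    postpaid_spent: float   # usage already accrued this billing period
    prepaid_balance: float  # top-up balance used once the ceiling is hit

def authorise(account: Account, charge: float) -> str:
    """Decide how a requested charge is funded, or reject it."""
    if account.postpaid_spent + charge <= account.postpaid_limit:
        account.postpaid_spent += charge
        return "POSTPAID"        # within the agreed limit: bill later
    if account.prepaid_balance >= charge:
        account.prepaid_balance -= charge
        return "PREPAID"         # limit exceeded: draw down top-ups
    return "DENIED"              # no credit left: block and prompt a top-up

# Example: a 20-unit post-paid limit with a 5-unit prepaid top-up.
acct = Account(postpaid_limit=20.0, postpaid_spent=19.0, prepaid_balance=5.0)
print(authorise(acct, 0.5))   # POSTPAID
print(authorise(acct, 2.0))   # PREPAID (the post-paid limit would be exceeded)
print(authorise(acct, 10.0))  # DENIED
```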
While it is increasingly clear that the key to successful convergent charging lies in a unified charging infrastructure, achieving this ‘holy grail’ continues to be a major consideration for operators. The more forward thinking operators have already started to develop the business cases and service concepts around convergent charging. A significant building block in this model lies in accurate ‘online cost control’ both for users and operators alike.
As discussed, the number of new services being introduced opens operators up to increased risk. Even though the majority of users do not set out to maliciously defraud the service provider, their unfamiliarity with new services and pricing structures makes it much more likely that they will exceed anticipated costs, which can result in large bills and an unwillingness to adopt the service long term. This can be exacerbated when the user tries to use such services while travelling, as roaming fees can dramatically add to the cost. 
In each case, the result is that the customer will be surprised and shocked by the service bill. From an operator’s point of view this outcome can be the death knell for new service adoption, as the customer decides never to use the services again and the operator loses all potential future revenues associated with them. Online cost control means that the customer is able to track costs and avoid bill ‘shock’, and is, therefore, much more likely to continue using the service.
This approach benefits operators by allowing them to ensure the credit-worthiness of customers while at the same time maximising revenue streams.  Operators can also use this model to differentiate their service offering and set out truly unique propositions not easily imitated by competitors, as often happens when introducing new price plans.  At the same time, cost conscious subscribers can feel they can be in control of their spending, while benefiting from the availability of a wide range of services.
Many of these concepts are not new, but there have still not been that many online cost control implementations. This is often due to the fact that operators’ existing billing and prepaid systems have limitations when supporting these types of online cost control service concepts.  Again, one effective way to implement this capability is to deploy an online cost control solution. This approach is able to provide flexibility for operators to build their own, individualised service concepts for online cost control. This type of solution can also provide an easy extension path for additional convergent charging areas, such as online data charging for post and prepaid, as well as IP prepaid and other charging solutions and related service concepts.
It is becoming increasingly apparent that online cost control is a must if operators wish to ensure the credit-worthiness of their customers, while enabling those same customers to better control their spending. For both operators and customers, this is a key element in the successful introduction and ongoing uptake of new services and applications. Added to the recognised benefits of service innovation, online cost control goes a long way towards building a business case for convergent charging.

Kari Pulkkinen is VP, Business Development, Comptel

REVENUE ASSURANCE - An effective plug

Considering the scale of revenue losses that many telecoms operators incur, it is vital that they identify the causes, quantify their magnitude and then set about addressing these leakages in a holistic manner. Dominic Smith looks at the main causes of revenue leakage, and outlines ways in which operators can resolve these with the help of end-to-end pre-integrated business support systems

Revenue assurance continues to be a key concern for most telecoms operators. An on-the-show-floor survey carried out by Cerillion at the 3GSM World Congress in February identified it as one of the three most important business issues facing telecoms operators today, with 15 per cent of respondents acknowledging it as their most urgent concern.
This is hardly surprising when you consider the scale of the problem. Latest estimates suggest that as much as 10 per cent of total provider revenue is still being lost due to revenue leakages. In today’s competitive telecoms environment, this situation is unacceptable. And to retain competitive edge, operators need to ensure they are tackling the problem proactively.

Arguably the most important cause of revenue leakage is poor systems integration. Unfortunately, this is often a characteristic of the traditional best-of-breed approach to the implementation of business support systems. With this model, systems integrators are often tasked with implementing and integrating multiple heterogeneous systems to build a complete solution. Invariably, they encounter two key problems that make effective integration difficult.
First, they typically discover incompatibilities between the data models used in the best-of-breed systems. Synchronising data across different applications is complex because of the need to align different ways of identifying the subscriber, service and orders. However, if these mappings are not carried out properly, the operator will struggle to trace orders across the systems.
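A minimal sketch of the kind of cross-system identifier mapping this involves is shown below; the system names and identifier formats are invented purely for illustration.

```python
# Sketch of mapping one subscriber across best-of-breed systems.
# All system names and identifiers below are invented for illustration.

crm_to_billing = {"CRM-1001": "BIL-778234"}
crm_to_provisioning = {"CRM-1001": "PRV-09-AA31"}

def trace_order(crm_id: str) -> dict:
    """Resolve a CRM subscriber ID to its billing and provisioning keys.
    A missing mapping is exactly the kind of gap through which orders
    become untraceable and revenue can leak."""
    return {
        "crm": crm_id,
        "billing": crm_to_billing.get(crm_id, "UNMAPPED"),
        "provisioning": crm_to_provisioning.get(crm_id, "UNMAPPED"),
    }

print(trace_order("CRM-1001"))  # fully traceable across systems
print(trace_order("CRM-2002"))  # UNMAPPED entries flag a reconciliation gap
```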
Second, the systems integrator may not have an in-depth understanding of all the best-of-breed components. As a result, it may integrate the systems inefficiently and introduce data replication or unnecessary layers of complexity, all of which can result in holes where revenue leakage may occur.
Process problems
Poor integration typically also results in a host of process problems. It may for example lead to data entry in multiple systems or incompatible configuration between solution components. The consequence of this may be, for example, rating/prepaid charging errors - essentially applying an incorrect price to a customer record or not being able to price the record at all. These errors will result in usage that cannot be billed for and, ultimately, revenue leakage.
Incomplete or incorrect usage data is another primary cause of leakage. This problem often occurs when network switches produce erroneous information, preventing the operator from identifying the type of service used by a customer or the customer using that service. In either case, the result is an inability to bill for usage incurred.
Poorly integrated systems with no common workflow can also lead to delays in billing. Sometimes manual set-up processes for new services cause a delay of several days to occur before the operator can start invoicing the customer, inevitably resulting in a loss of revenues. In contrast, a fully automated process with flow through provisioning enables the operator to start billing for service use immediately. 
Invoicing system errors are another potential cause of revenue leakage. Traditionally, the problem is thought to be primarily one of under-billing - operators failing to invoice customers for services received. In fact, over-billing can be just as significant. This typically occurs when a service is terminated but the operator continues to bill for the service in error.
It will often result in costly customer disputes and the requirement to generate refunds or provide credit as a goodwill gesture. Valuable time and resources may be required to fix the offending process, and further revenue leakage will occur indirectly as a result of growing customer dissatisfaction and increased rates of customer churn.
Launching new products and decommissioning old ones are two other areas where a badly coordinated system can cause further revenue assurance problems. Businesses often leak money both by providing incorrect tariffs for new services and by not taking older, more costly products out of service quickly enough.

Reactive versus proactive
Putting additional systems and checks in place is largely a reactive approach to revenue assurance in a best-of-breed solution. In essence, it is a ‘sticking plaster’ approach to plugging the gaps in the system. Rather than dealing with problems at source, it focuses on putting processes in place which track where revenues are being lost and then try to correct these errors retrospectively.
As a result, problems can stay hidden for some time and their source can remain obscure. Operators may initially believe that they have billing issues or that they are suffering from credit management problems. In fact, when they carry out thorough ‘root cause analysis’, they often discover that their problem is order management related.
If the system is not proactively managed, a mistake made in this initial order process will not be discovered by the operator for a month or six weeks, when the customer receives his first bill and finds he has been placed on the wrong tariff or is being billed for a service he never received, for example. 
In contrast, the best end-to-end pre-integrated solution suites give operators the confidence that all elements within the product suite will work together in harmony. The holistic approach of these systems is clearly in line with operators’ increasing desire to address and monitor the whole lifecycle from the initial order placement right through to billing and cash collection.
These solutions also enable operators to be much more proactive. Rather than merely reacting to problems when they occur, their seamless connectivity offers a means to prevent ‘gaps’ in the system appearing in the first place. In other words, they treat the root cause of the problem rather than the symptoms.
The tight integration of these solutions helps eliminate data replication and synchronisation problems. In addition, embedded workflow and order management functionality allows front-end orders to be successfully transitioned to the back office, ensuring all services can be billed for and eliminating revenue leakage at source.
The pre-integrated nature of these systems allows key business information to be proactively tracked, detailed reports to be generated for each process, revenue leakages quickly identified and revenue losses minimised. It is hardly surprising, therefore, that ever-greater numbers of operators see end-to-end pre-integrated solution suites as a vital weapon in their ongoing battle to achieve genuine revenue assurance.

Dominic Smith is Marketing Director, Cerillion Technologies

SERVICE CREATION - A factory approach

Rapid assembly of services will be the key differentiator for telcos striving to beat out the cable, entertainment and Internet companies encroaching on their customer bases, says Brian Naughton

Telecom carriers will have to go through a significant metamorphosis as the lines blur among the telecom, entertainment, retail, and Internet domains. In hotly contested triple- and quad-play markets, carriers must become customer service providers (CSPs) capable of making the transition from me-too services to truly converged, on-demand services that differ from those offered by MSOs and non-traditional competitors.

To achieve that end, CSPs will have to work with third-party developers to create scores, if not hundreds, of niche services that leverage their substantial investments in IP networks. After all, they laid the fibre to enable voice, video and data to come together over the same connection in very short time frames. That unique ability should enable CSPs to create prodigious catalogues of converged services without disrupting the underlying architecture.
The goal should be the rapid assembly of services. To that end, a mindset change will be necessary. Carriers will have to move away from the staid and stodgy belief that service launches must take months or years, to a mindset that products can be rolled out in hours, if not minutes.
That will require CSPs to move into a manufacturing mindset, where the concepts of computer-aided design (CAD) and computer-aided manufacturing (CAM) come to fruition. The marriage of the two enables hundreds, if not thousands, of services to be rolled out in an “assembly line” fashion.
In the same way that the car manufacturing industry models components for new products in CAD systems, carriers can model the components of new products and move service “components” along an “assembly line” to CAM systems, where coding, rules and algorithms can be determined automatically.
The lifecycle management enabled by the CAD and CAM principles is now beginning to burgeon in telecom. In other words, the knowledge of bundling will be removed from existing systems and centralised in a location in which all service and product building blocks can be modelled within a “workbench” environment.
That reflects somewhat the precepts of service-oriented architecture (SOA), which promulgates the interchangeable use of building blocks among applications.
 “While SOA has been hyped for many years as a common framework for segmenting operations and coupling services, the reasons for it are far more compelling now,” says Larry Goldman, co-founder and senior analyst with OSS Observer. “The Internet has created an expectation of immediate gratification, so carriers have to figure out how to roll out services at the time of demand.”
After heavy investments in IP networks, Goldman believes operators have to concentrate on the software side of the equation. “CSPs should focus on re-use within their execution environments. That means services must be decoupled from networks for integration with business processes.”
Goldman says carriers can then begin to drive re-use – not only of common data models, but of formats, naming conventions, interfaces, and design processes across the organisation.
To galvanise the concept of ‘re-use’, CSPs must break back-office silos down into components that represent operational elements of network and IT systems, as well as product, service and resource specifications. These components can ultimately be turned into loosely coupled “building blocks” for interchangeable use across different services and products.
As carriers create a library of building blocks, SOA environments become true service delivery platforms (SDP) from which new functionality can be driven (i.e., SIP capabilities around presence, location and more advanced voice mail services that can be used in creative product bundles). By implementing common SIP servers for applications needing connectivity over IP networks, carriers can procure data from disparate sources so that billing authorisation and billing detail are consistent across the organisation.
As new services are created through increasingly agile SDPs and execution environments, CSPs will have to simultaneously orchestrate changes within OSS/BSS applications. The complexity of orchestration for dynamic services will require full automation of activation, ordering and billing processes so that fulfilment and assurance processes can seamlessly work for new service rollouts.
Within the TeleManagement Forum’s Product & Service Assembly (PSA) Initiative, an independent consortium of leading telcos and vendors has been working to develop a revolutionary IT reference architecture to satisfy the burgeoning need to standardise and simplify the way that products and services are designed, assembled and delivered. This reference architecture incorporates the CAD/CAM manufacturing approach by enabling the creation of “building blocks,” which carriers can assemble into service or product offerings.
At the heart of the IT reference architecture is an active catalogue that is a design-and-assembly environment within which service components can be defined and configured without any need for writing code. This catalogue aligns service design and creation with service execution so that product managers can decouple management of product lifecycles from OSS, BSS and network engineering.
Within the building blocks lies a rich library of components and products through which product managers and architects can define dependencies, prerequisites, exclusions and visual metaphors for service components.
“We have leveraged our deep understanding of the fulfilment process, as well as that of our customers and partners, to define components that could be used interchangeably across services and functions,” says Simon Osborne of Axiom Systems, one of the founders of the PSA Initiative, noting that Cable & Wireless, BT, TeliaSonera, Atos Origin, Huawei, and Oracle have worked to define the building blocks.
To simplify the definition and configuration of services using those building blocks, a visual and intuitive GUI has been created for product managers to view loosely coupled composites or aggregate services, as well as for IT to create, test and publish components for re-use across the organisation.
The essence of the IT reference architecture is that it has been designed with a “bilateral” top-down/bottom-up approach in mind.
 “This IT reference architecture empowers marketing professionals to define service components without having to go through IT departments, and enables IT to use pre-tested business options and variants to drive component use across the organisation,” comments Osborne.
For example, ringtone downloads, VoIP, VoD, and find-me services each require their own sets of fundamental parameters around availability, order-taking and activation. However, there inherently exists overlap in what each service requires. The active catalogue helps carriers to leverage that fact by establishing interchangeable building blocks in one catalogue that can then be rearranged to support other services as well. Rather than having to write new code to launch each new service, carriers can specify necessary attributes in reasonably basic forms so that one catalogue and order-handling system can handle many different services.
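A rough sketch of what such an attribute-driven catalogue entry could look like follows; the building-block names and fields are illustrative assumptions, not the PSA Initiative’s actual schema.

```python
# Hypothetical sketch of defining services from shared building blocks
# as declarative attributes rather than new code. Block names and fields
# are assumptions, not the PSA Initiative's published model.

BUILDING_BLOCKS = {"availability_check", "order_capture", "activation",
                   "content_download", "sip_session"}

CATALOGUE = {
    "ringtone_download": {
        "blocks": ["availability_check", "order_capture", "content_download"],
        "prerequisites": [],
    },
    "residential_voip": {
        "blocks": ["availability_check", "order_capture", "activation",
                   "sip_session"],
        "prerequisites": ["broadband_dsl"],   # must already be in place
    },
}

def validate(service: str) -> bool:
    """Check a catalogue entry only references known building blocks."""
    return all(block in BUILDING_BLOCKS
               for block in CATALOGUE[service]["blocks"])

print(all(validate(s) for s in CATALOGUE))  # True: both services assemble
```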
Simon Farrell, IT Architect, Cable & Wireless, comments: “We can define residential VoIP and the prerequisites for broadband DSL, and are able to stitch together relationships among end points to execute on fulfilment request” - demonstrating that graphical representations, such as a ‘green light’ for ‘it’s a go’ or a ‘red light’ for ‘outstanding dependencies’, enable C&W to assemble end points that must exist on the enterprise service bus (ESB).
In other words, there are distinct interfaces, order types and end points specific to any services that are to be fulfilled. Through the interface, the active catalogue provides an environment for modelling end points into an assembly landscape that defines relationships and polices exceptions or dependencies.
 “A residential home triple play service that requires a broadband and VoIP server, as well as IPTV server, will rely on rules around what third parties must be called upon to provide that hardware, and in what sequence those systems should be called upon,” explains Osborne. “That sets the stage for how data travels interface to interface as the service transitions through the lifecycle.”
While the active catalogue does not run every task, it calls the service end points that, in turn, run the processes externally. “This active catalogue provides a way of defining the end point and rules around those endpoints, so fulfilment dynamically figures out what end points to call upon,” he says.
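As a simple illustration of this dynamic end-point selection, the sketch below orders invented end points by their declared dependencies; it is not drawn from any specific active catalogue implementation.

```python
# Minimal sketch of dependency-ordered end-point calls for a triple-play
# order. The end points and their dependencies are illustrative assumptions.

from graphlib import TopologicalSorter

# Each end point lists what must already be fulfilled before it is called.
dependencies = {
    "broadband_dsl": set(),
    "voip_server": {"broadband_dsl"},
    "iptv_server": {"broadband_dsl"},
}

# Catalogue-style rules determine the call sequence dynamically.
call_sequence = list(TopologicalSorter(dependencies).static_order())
print(call_sequence)  # e.g. ['broadband_dsl', 'voip_server', 'iptv_server']
```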
As orders are fulfilled through the active catalogue, the software creates an inventory of pre-existing capabilities for end users. The software records against every instance of an order, using the same language that was modelled at service end points. Ultimately, that means CSPs end up with rules sets that are usable for up-sell and cross-sell capabilities. “If 35 per cent of customers have a certain type of access, CSPs can target them with new services that tie to that type of access,” notes Osborne.
In the long run, that ability drives versioning and lifecycle management. “If a service is to be deployed for only six months, there can be published rules stating that the service will be decommissioned in a certain time period, and warnings can be issued at the end of the period to those parties with bundled components.”
That can be particularly important among partners who are re-branding wholesale offerings, or for inter-departmental strategies at large telcos, where orchestrating processes can be complex. “Ultimately, you get a federation of catalogues with clear demarcation of where the SLAs are among different departments,” Osborne explains. With a federation of catalogues, CSPs start to create a topology through which all catalogues and associated end points can be referenced for more intelligent cross-sell and up-sell actions.
To ensure there is an accurate model of infrastructure, this revolutionary IT reference architecture has been designed to sit on top of most major network resource management systems (inventory) that serve as databases of record for carriers.
The architecture can serve as the foundation for collaboration among product managers, service and network engineers, as well as operational communities. By creating a central point for standardising multiple vendors' products, carriers can move closer to the SOA principles they strive to embrace.
As carriers continue to expose their design environment to different departments and customers, they can begin to truly “mass market” the configuration of products. That sets the stage for commonality in how components, access controls and security measures are employed across the enterprise and partner environments.
As that commonality grows, carriers can get closer to self-service in management of product and service lifecycles. Then, they can be better positioned to create value-adds in their IP services domain—especially if they can roll out sophisticated services in a matter of hours, or even minutes.

For further information about the IT reference architecture and the active catalogue, please visit www.psainitiative.org.
Brian Naughton is VP Strategy & Architecture, Axiom Systems

SERVICE QUALITY MANAGEMENT - Virtue in finding fault

Service quality management offers a critical pathway to the delivery of quality of service in developing markets, says Tony Kalcina

Accidents happen. People make mistakes. Nothing and no one is infallible. We all know this. Which is why, when we buy a product or service, what is important is not so much whether or not it has faults, but what happens after a fault occurs.
It is a well-known maxim in client service that a customer whose problem has been dealt with in an exemplary fashion is likely to be more satisfied and loyal than one who has never experienced a problem to begin with. The former knows from experience that they can rely on the provider of the service or product; the latter has no idea what might happen if things go wrong.

This principle applies as much in telecommunications as elsewhere, but with an added twist: customers want to have the certainty that problems will be dealt with effectively and efficiently before they happen.
This means service providers have to provide a high level of assurance at the contract stage, typically through a service level agreement (SLA). But there are SLAs and SLAs.
In fiercely competitive developing markets, the ability to offer and deliver on meaningful, measurable and manageable standards of service is becoming a major competitive differentiator.
Telecommunications SLAs traditionally underpin service quality management (SQM) programmes, which aim to monitor performance, pinpoint faults and prevent them from recurring.
SQM is valuable to corporate customers because, in theory, it provides analysis and verification of the performance they are paying for. And, in the event of a problem, it serves to provide a measure of the recompense they might be entitled to.
For operators in developing markets, SQM also has an important role to play in the supply chain by policing incumbent operators, for instance when competition rules allow Local Loop Unbundling (LLU) for third-party providers of DSL services.
The inclusion of an SLA in the supply chain process ensures protection for third party operators and their customers; if incumbent operators fail to undertake the LLU in the time agreed, the third party operator can often claim a rebate.
At the same time, the end customer may also be entitled to compensation for failure to deliver the requisite level of service mandated by the regulatory body.
In practice, this can be problematic to claim at an individual level, but the automated monitoring and reporting of SLA violations can be a useful input to the process of managing collective performance by the incumbent. 
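To make this concrete, the sketch below shows how such an automated violation check might look; the ten-day lead time and per-day rebate are invented figures, not contractual or regulatory values.

```python
# Illustrative sketch of automated SLA-violation detection for LLU orders.
# The agreed lead time and rebate figure are assumptions for this example.

from datetime import date, timedelta

AGREED_LEAD_TIME = timedelta(days=10)   # assumed contractual LLU window
REBATE_PER_DAY = 5.0                    # assumed rebate per day late

def llu_rebate(ordered: date, delivered: date) -> float:
    """Return the rebate owed if the incumbent missed the agreed window."""
    overrun = (delivered - ordered) - AGREED_LEAD_TIME
    return max(overrun.days, 0) * REBATE_PER_DAY

print(llu_rebate(date(2007, 3, 1), date(2007, 3, 9)))   # 0.0  (on time)
print(llu_rebate(date(2007, 3, 1), date(2007, 3, 15)))  # 20.0 (four days late)
```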
Elsewhere, it stands to reason that savvy customers will pick suppliers whose SLAs offer the highest level of financial security; in other words, those which pay out the most in the event of a problem.
This means that in order to satisfy the most demanding customers, telecommunications operators need to embrace SQM so that any faults and liabilities can be fully verified to the satisfaction of both the operator and its customers.
SQM allows operators to measure and gauge the validity of customer complaints; whilst the customer should always be put first, operators can determine the need for - and level of - compensation required for a perceived service fault. Clearly, then, there are massive benefits to be had from being seen to possess a market-leading SQM programme. But not all operators currently have one.
Currently, performance data, where it is available, often only involves some fairly basic measurements of the state of the network. In addition, delivering SQM often relies heavily on expensive manpower.
An operator will not be able to cost-effectively differentiate its service offering unless manual steps are kept to an absolute minimum and, preferably, eliminated altogether to avoid the higher cost and delays of manual processes.
Finally, many of the current low-cost diagnostic tools that are in place can only provide basic alerts to the effect that certain pieces of equipment are failing, without identifying which customers (if any) are affected, or how.
What this means in practice is that operators relying on these basic SQM tools cannot truly be said to be delivering quality of service to their customers—and risk either losing credibility or paying over the odds for SLA failures. The situation need not be thus, however.
More complex SQM tools exist. They combine service fulfilment and assurance capabilities and can be integrated with a provisioning package to automatically identify faults or dips in service and restore the services or compensate customers with additional offers or refunds.
Clarity, for example, offers a pre-integrated product and database that features 17 elements of the TeleManagement Forum’s enhanced Telecom Operations Map (eTOM) model for Operational Support Systems (OSS) in a single suite.
These systems allow operators to see the impact that network operations are having on revenue and customers’ experience from both a service fulfilment and assurance perspective.
Clarity’s OSS is network and services neutral, rapidly configurable and widely deployed, supporting an end user base of 50 million subscribers worldwide. Companies that have taken SQM seriously have reaped significant benefits.
Sri Lanka Telecom, to take an example from the developing world, has been able to clear 84 per cent of faults within hours thanks to a single OSS information store for fulfilment and assurance data, coupled with real-time correlation and integrated SQM workflow processes.
Other operators can follow this path. All that is needed is a greater awareness of the importance of SQM as a tool for achieving competitive advantage. Telecoms operators, specifically in developing markets, must realise the importance of service assurance in helping to predict, monitor and manage in real time the availability and quality of services, ensuring conformance to the business’s strategic SQM objectives.
Investing in OSS to support state-of-the-art SQM programmes is no longer a ‘nice to have’, but increasingly a vital component of strategies to attract and retain loyal residential and commercial customers, improve operational effectiveness and to accelerate the order-to-cash process. SQM may have until now been something of a minority interest for telecommunications operators. But as the battle for customers heats up in developing markets, it looks set to become a key weapon for competitive advantage.

Tony Kalcina is founder of Clarity

IMS CUSTOMER EXPERIENCE - Going to the next level

The many bells and whistles promised by IMS make it essential for operators to understand and monitor all the device-types used by customers, if they are to ensure high standards of customer experience, says Matt Herdlein

As emerging IMS platforms open the doors to real-time, interactive multimedia services, taking care of the customer experience becomes an even more critical ingredient for achieving success. Investments made to support the emerging service complexity could be wasted if customers cannot derive the intended value. Moreover, the assessment of service quality would be misleading unless the actual performance of user devices is considered as well.  As services become more sophisticated and complex, more functionality and features are migrating to user devices, thus making them an integral element in the overall service quality equation.

With 3G handsets based on open operating systems, the numbers of both device makers and third-party application vendors have skyrocketed. Handango, a leading supplier of applications for handsets and Personal Digital Assistants (PDAs), reported more than 11,000 new applications in 2005 from more than 1,200 new vendors, while “type approval” has given way to vendor certification. The GSM Suppliers Association reported that in 2006 there were 212 GSM/EDGE terminal devices available in the market from 33 vendors. Handset operating systems come from companies such as Microsoft, Symbian, and Qualcomm, among many others.
In this environment, the challenge for operators is clear. They must be able to deploy an enormous and quickly growing range of services on a host of intelligent devices, and ensure that those services operate successfully on each device. In short, operators need to augment the scope of service quality to include user device performance to better assess the actual customer experience derived from their IMS investments.
To further understand the problem, consider the following simple case: if a new service fails 90 per cent of the time on a user device that serves 10 per cent of the market, then a blended analysis across all devices will show a failure rate of only nine per cent. However, the reality is that 10 per cent of the customers are unhappy and may move to another operator or stop using the service.
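Working the arithmetic through makes the point clear; the figures below simply restate the example above.

```python
# The blended, network-wide view dilutes a severe device-specific failure.

affected_share = 0.10      # the device type serves 10% of the market
failure_on_device = 0.90   # the service fails 90% of the time on it
failure_elsewhere = 0.00   # assume it works on every other device

blended = (affected_share * failure_on_device
           + (1 - affected_share) * failure_elsewhere)
print(f"Blended failure rate: {blended:.0%}")                             # 9%
print(f"Customers effectively losing the service: {affected_share:.0%}")  # 10%
```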
MobileGuru, a UK company that sells mobile phones and accessories, compared handset performance on a UK network and found that call drop rates can vary from about two per cent for the best performing devices to nearly 10 per cent for the worst performing devices. Even then, within a single device type, there may be significant variations in performance caused by batch problems in manufacturing, user configuration errors, or software download problems.
These issues are not new. GSM operators faced the dilemma of whether to issue recalls or modify their networks in the mid-nineties, when two of the leading handset vendors were found to have compatibility issues with their networks. At that time, they had to make software changes under controlled conditions since type approval would be invalidated if the changes were done incorrectly. The recall was avoided in those cases, but smaller manufacturers did have to recall devices.

Identifying the problem
The advent of Universal Serial Bus (USB), and changes in component prices, along with the relaxation of type approval, made home-based upgrades to handset software more common. A handset's International Mobile Equipment Identity (IMEI) can be used to identify the model and even the place of manufacture, but it is no longer a reliable indicator of software build or application set.
Further complicating the matter is the fact that handset operating systems are available to suit the preferences of any manufacturer or vendor. If you like Java, try SavaJe; if you prefer Linux, then look at MontaVista; or if you are a Microsoft fan, there is a version of Windows available. The market leader, Symbian, grew out of UK PDA innovator Psion, and if you don't like the Symbian software, then Nokia supplies Series 60. Handango reports having more than 190,000 titles from 16,000 content partners supporting nine different handset operating systems.
Vodafone has reported a significant linkage between the rate of churn for residential customers and the numbers of services they use on a regular basis – the more services used, the more likely customers will stay. Of course, services will only be used if they work reliably.
The cost of unreliable service to operators can be measured not just in lost revenue but also in lost handset investment. Nokia, which supplies one in three of the world's handsets, reported that the average selling price of its handsets is US$125 (103EUR), while Sony Ericsson, which has more high end products, has an average selling price of $180 (149EUR). Some retailers in the UK are offering all Nokia handsets for free when bundled with a post-paid tariff, so the cost to the operator is around $146 (120EUR) for each handset.
The abundance of software and device options presents a number of challenges for operators. It also provides a unique opportunity for operators to better serve valuable customers. When root causes of problems can be traced to individual customers or device types, operators can develop new application versions or make changes to operating systems to correct the problems.
Consider, for example, the value to enterprise customers who, according to Yankee Group, make up 28 per cent of mobile operators' revenue. These customers can be advised on which mobile phones perform best for their services or can be given upgrades. Customer satisfaction would increase, as would retention levels for a valuable market segment.

Inspecting the packets
To monitor service quality, it has always been imperative that operators have ready access to reliable data. For voice calls, there are many options, from Call Detail Records (CDRs) to signalling probes. But for data services, and to support the move to IMS, more sophisticated, deep packet inspection probes are required.
Data service quality is determined by three main variables: network performance, device performance, and portal performance. Degradation in any of these variables will result in a poor customer experience.
Any effective analytical tools should permit early identification of trends so that solutions can be developed and customers informed before service is affected. Hotspots may occur when changes are made to services, access networks, or core networks, or when handset operating system upgrades are introduced, but these may be difficult to identify among millions of users, and operators must take care that “normal” user actions are not misinterpreted.
For example, with voice, a short call (one quickly terminated by the user) may raise no alarms but actually involve unacceptable voice quality, whereas with data, short sessions may be the result of high throughput, and long sessions may indicate problems.
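The sketch below illustrates the kind of simple heuristic this implies; the thresholds are invented, and a real analysis would use per-service baselines rather than fixed numbers.

```python
# Simplified, illustrative heuristics only; thresholds are invented.

def flag_voice_call(duration_s: float) -> bool:
    """A very short voice call may hide poor quality the caller gave up on."""
    return duration_s < 10

def flag_data_session(duration_s: float, bytes_moved: int) -> bool:
    """For data, a short session with high throughput is usually healthy;
    a long session that moved little data suggests a problem."""
    throughput = bytes_moved / max(duration_s, 1)
    return duration_s > 300 and throughput < 1_000   # >5 min at <1 kB/s

print(flag_voice_call(6))                 # True: investigate call quality
print(flag_data_session(12, 4_000_000))   # False: quick, high-throughput
print(flag_data_session(900, 200_000))    # True: long, slow session
```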
By comparing different handset models running the same service, patterns can be established. Analysts should also consider whether particular models are only available to a limited user group, such as prepaid. Traditional service quality monitoring gets a view of specific services based on consolidated data extracted from the network, and by using field probes that perform synthetic transactions depicting various services and user behaviours. Although these approaches have merits, they fail to analyse service quality from actual service transactions that customers make. In other words, they fail to capture the trends and nuances of the true customer experience.

Human behaviour
It is, therefore, essential for operators to understand and constantly monitor the “behaviour” of all the various device-types used by their customers. Specifically, it is important to understand service performance by device-type as well as by specific device configuration, to understand how customers are affected by network or device issues.
Some of the typical issues that mobile operators face on a daily basis are:
• How to choose a device/s for a new service
• During trials, how to measure device performance by type, service, and configuration
• How to view device performance to identify service bottlenecks before they can affect the service, and how to identify affected (and potentially affected) customers
• How to know if a new configuration or update is performing as expected
• How to identify devices with high support overhead.
To answer these questions, operators need to go beyond probes that make educated guesses about performance based on small, “synthetic” samples. Operators need a solution that aggregates actual device performance around the clock from every service transaction.
Back to reality
The reality is that virtually every mobile operator supports dozens of device-types and millions of active devices. That means, to manage service quality and customer experience, operators must do more than monitor their networks and operations. They have to know, at any given time, the capabilities and limitations of all of their user devices, how those devices are performing, how they will handle new service offerings, and how customers are using them.
When operators have access to this level of user device performance intelligence, the business benefits are invaluable. For example, operators can: “see” how customers are reacting to marketing campaigns and special offers; recommend the best devices for customers when orders for new or expanded services are received; respond to customer calls with a holistic understanding of the customers' experience; offer targeted promotions to individuals or groups; provide incentives to device vendors based on verifiable performance and offer more focused SLAs.
As the choices for new and more complex services continue to grow, and user devices that offer more functionality emerge, mobile operators need to understand service quality from the device perspective. Operators can expand their traditional service monitoring arsenals and realise the business benefits that can result from higher customer satisfaction.

Matt Herdlein is Executive Director, Service Management, Telcordia

AGILE SERVICE DEPLOYMENT - Riding the popular wave

Bob Drummond discusses how operators can benefit from an agile, flexible and open platform to proactively deliver dynamic services to their customer base

What do the Glastonbury music festival, the Rugby World Cup and the Oscars have in common?  They are all high profile, internationally broadcast events that draw attention from millions of fans and dominate the agendas of society, newspapers and television for the short period of their duration.

These are all events that operators could capitalise on if they had the flexibility and agility to rapidly and economically deploy innovative services on their networks, even for a short period of time. For operators on the lookout for new revenue streams or the next ‘sticky’ application, this is a golden opportunity to engage new and existing mobile subscribers by riding the wave of highly popular live events with the offer of exciting applications.
Over the recent Cricket World Cup, what cricket fan would not enjoy winning a game involving the same team and opponents on his or her mobile phone? If your team didn’t win, replay the game on your mobile and see if you could have done better! Next time you’re on your way to watch your favourite football team play, what if you could play the match on your mobile – complete with the same starting team on the pitch and on the bench, correct strip, same opponent players and the same conditions…even down to the weather conditions?
With higher return visits to the application promised through this dynamic, always-fresh approach, and premium revenues on offer, what is holding operators back from introducing services, applications or games aligned to such headline-making events? Beyond understanding the opportunities presented by such events, how do operators meet the technological challenges that ensure that customers are happy with the new services they receive? 
The telecoms industry is challenged to achieve a business model that keeps costs down, maintains innovation and responds to competition from within and outside of its own marketplace, all whilst still creating profit and new revenue streams to stay in the game. The most difficult aspect of this challenge, however, has been created over the years by the operators themselves. It is the legacy of a history of growth that has seen additional, proprietary infrastructure systems installed for each wave of evolution, resulting in vertically oriented, proprietary systems infrastructures.
Proprietary Intelligent Network (IN) systems are typically monolithic in structure, with hardware, software and applications tightly integrated and designed to operate well as a unit. As a consequence they are expensive to maintain and enhance because operators are restricted to using the services of the vendor even for minor enhancements to the system. This creates a ‘lock-in’ environment where operators become increasingly reliant on the vendor for its ability to innovate. A new service capability can take years and millions of dollars to deploy in this environment, vastly affecting the feasibility, cost and timescale of bringing new services to market. 
Furthermore, the telecoms industry has typically invested in applications and platforms as and when needed, resulting in a mix of incompatible development, deployment and operational environments. Typically, the switching and services layers of the IN will be organised vertically – rather than with integration across the rest of the infrastructure in mind – producing a complex series of silo-based architectures where the cost to develop, deploy and maintain exciting new services for all subscribers is too high. The obstruction to innovation in new multi-media, multi-access, multi-network services means that operators face difficulties in delivering the rich, converged services that their customers want and that differentiate them in a crowded marketplace.
The ability to offer new services that piggyback high profile events such as a World Cup or the Live8 music festival requires a degree of agility and flexibility that silo design and proprietary lock-in of legacy infrastructures obstructs. So, without heavy investment in a new convergent architecture, what can operators do?
The answer lies in open standards. Compared to the world of Internet and enterprise applications, developing telecoms services on the traditional proprietary IN platforms is an outdated approach that is time-consuming and expensive.  Proprietary, vertically-integrated systems need to make way for openness, modularity and portability to create an environment for cost-effective service development. 
Operators have spent a decade demanding open platforms from their suppliers, even introducing a series of open standards initiatives, such as Parlay and JAIN, to drive this agenda.
JAIN SLEE is the open Java standard that is tailored to the large-scale execution of communications services across existing and Next Generation Networks. With JAIN SLEE-compliant application servers providing an open, flexible and carrier-grade service execution platform, operators can achieve agility in service development and deployment, and also capitalise on cost leadership. Application development is no longer controlled by proprietary vendors, but is open to input from operators' own in-house developers and a competitive market of off-the-shelf application developers.
In this dynamic environment, a range of application developers can quickly and cost-effectively address market opportunities and roll out services in conjunction with events that hit their audience’s agenda. As JAIN SLEE addresses the need for a horizontal platform across the entire operator infrastructure, services can converge voice, data and video silos to provide truly innovative and compelling offerings that drive revenues and grow customer loyalty.
A live multi-media service for the Glastonbury music festival, for example, can be designed to appeal to an operator's high-spending audience of young adults. The open platform makes it flexible enough to update daily with news, weather, alerts and programme changes, as well as to offer live downloads of artist tracks, in order to provide a compelling service for users.
With the move away from inflexible legacy telecoms networks to an open environment, operators can now benefit from a wide pool of third party developers for innovative and cost effective new applications.  For the type of applications discussed at the outset of this article, an agile and flexible platform also supports the modification or reconfiguration of an application during the lifetime of the related real-world event to continue providing a compelling service for repeat users based on service take-up, user behaviour and feedback received during the event.
Operators need to fully embrace the opportunity of such dynamic service delivery, or risk being left behind by users who come to expect more from their network. For operators such as Vodafone and O2 that sponsor high-profile events around the world, the opportunities are endless for increasing sponsorship returns, exploring new revenues and generating new levels of customer loyalty using dynamic service innovation.

Bob Drummond is VP of Marketing and Professional Services at OpenCloud

TELCO ADVERTISING - Personal Focus

New technology is disrupting traditional advertising, and in its place different forms are evolving, offering very specifically targeted messages.  Lawrence Kenny and Rob van den Dam describe how advertising spend in the emerging online channels is now growing at a remarkable rate

The advent of emerging online advertising channels is making marketers lick their lips. These marketers are seeking more effective ways of optimising their expenditure and they are excited over the prospect of being able to target their ads in a highly personal way. They are spending more and more on targeted personalised advertising - at considerable cost to traditional advertising. Everyone is fighting for the new media advertising revenue. At the same time telcos have begun to realise that advertising can become an important source of revenue, an opportunity that they simply can't resist. 

Although telecom operators have little presence in advertising today, the medium represents an emerging opportunity that operators are uniquely positioned to address. They have unique assets that advertisers value. First of all, they have a large customer base. And with their authentication, authorisation and accounting controls, telcos are able to determine who the customer is and what services and products they are buying. Useful not only for controlling where the ads go to, but also for tracking advertising effectiveness.
Telcos have a direct relationship with customers. They collect vast quantities of customer data, which they can use to develop profiles of their subscribers, including demographic characteristics, personal attributes and preferences of those subscribers – and even, perhaps, their shopping habits and viewing patterns, provided the operators have the relevant analytical tools and capabilities. They can combine these customer insights with their ability to identify where individual users are based and offer highly targeted, localised promotions. Moreover, many operators have already developed solid relationships with local advertisers through their directory businesses.
Telcos are also well placed to enable the advertising experience practically anywhere, on any device and at any time. They can, for instance, manage the delivery of ads across the mobile phone, PC and TV-set; over fixed, wireless and other networks. What's more, they also provide a direct interactive response channel for the customers, and a feedback loop to advertisers allowing them to track advertising performance.
As telcos move into media - an industry that has historically been part-funded through advertising - they will find that relying on subscriptions and pay-per-view models is unsustainable in a world where consumers do not expect to pay for all content. Content is expensive to generate and offer to consumers, and advertising provides a means to offer richer content at a more reasonable cost. Many telcos are therefore experimenting with opt-in advertising plans to fund content. Perhaps this is the most significant benefit, as it allows consumers access to richer content and media. Advertising may also provide consumers with access to content they were previously unaware of. A number of operators are already taking steps toward adding advertising on IPTV and cell phones.

IPTV advertising
The big advertising revenue still comes from television. But the traditional TV advertising model is becoming increasingly unsustainable. With the shift from analogue to digital broadcasting, the number of TV channels has multiplied, and audiences are becoming much more fragmented. This reduces the efficacy of an approach that relies on centrally scheduled programmes to deliver real-time advertising to a large, undifferentiated audience, and uses ratings to estimate the size of that audience. The result is low effectiveness, as advertisers need to pay for a large audience even if they only reach a small target group, which makes TV ads too expensive.
IPTV could provide the answer. IPTV presents the opportunity to combine the powerful brand-building effect of conventional TV-quality advertising with the strengths of online; the ability to target specific audiences and allow customers to easily pursue their interest in a product, even to the point of purchase.
IPTV is an advertiser's dream. With IPTV, telcos have the ability to control where the ads go to - targeted at large groups, small groups or even individual television sets within a single household. 
The ads can be fine-tuned to the people within a household most likely to be watching at a certain time. When watching IPTV, users will be able to freeze the programming in order to interact with any advertising that attracts their attention, submit their details for further information on a brand or, in some cases, make an online purchase. And IPTV provides the means to measure precisely how many people have seen a particular advertisement. Payment models can be geared to the number of actual viewers watching, the number of “red button” presses, or perhaps a percentage of the resulting sales.
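As a purely illustrative sketch, the calculation below combines the three payment models just mentioned; every rate is an assumption rather than a figure from any operator or advertiser.

```python
# Purely illustrative rates for the payment models mentioned above:
# per verified viewer, per "red button" interaction, and a revenue share.

def campaign_charge(viewers: int, red_button_presses: int,
                    attributed_sales: float) -> float:
    PER_VIEWER = 0.002       # charge per confirmed viewing (assumed)
    PER_INTERACTION = 0.05   # charge per red-button response (assumed)
    REVENUE_SHARE = 0.03     # share of attributed sales (assumed)
    return (viewers * PER_VIEWER
            + red_button_presses * PER_INTERACTION
            + attributed_sales * REVENUE_SHARE)

# 200,000 confirmed viewers, 1,500 interactions, 40,000 in tracked sales.
print(campaign_charge(200_000, 1_500, 40_000.0))  # 1675.0
```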
With IPTV the ways in which ads can be personalised are limitless. Different ads can be generated once one ad has been shown a specific number of times. It gives advertisers the benefit that their ads won't annoy irrelevant audiences, or be shown too often and alienate their customers. IPTV also opens new opportunities to diversify ad formats. Ads can be placed when the set-top box boots up, on information screens, as a screensaver, as buffer when a movie loads, or dynamically in the video streams. The facility to 'telescope' out an advertisement could be possible using a click-through function for the consumer. There is also the possibility of search and recommendation, perhaps in partnership with an Internet search engine such as Google.
IPTV could provide a gateway to Internet advertising for sectors traditionally reluctant to embrace the medium. And IPTV will attract local companies who would otherwise not have considered TV advertising as an option. Telecom Austria has already explored ultra-local TV-advertising in the village of Engerwitzdorf and found it especially attracted local companies for advertising.
In Europe, the French IPTV market is leading the pack in targeted advertising trials, but IPTV providers in other European countries are also experimenting with advertising. Examples are Tiscali TV (formerly known as Homechoice) running a dedicated Honda channel in the UK, and Telecom Austria. BT is talking to both brands and agencies about offering (Vision) IPTV advertising. In the US, Verizon is currently deploying the technical tools that will allow it to insert local ads into its programming. On that foundation, the telco plans to introduce more targeted and interactive ads in its FIOS IPTV service. Though advanced ad deployments are still a ways off, AT&T (with its U-verse IPTV service) also likes the promise of an ad play that combines mobile phones, television and the Internet.

Mobile advertising
Mobile advertising represents another unexploited opportunity for telecom operators. It is one that telcos are particularly well positioned to capture, since they control what is delivered to the device and are the only companies entitled to know the location of their subscribers, information that advertisers would love to use to target customers. The mobile phone is the most personal consumer device we own, and one that most people carry with them 24 hours a day. It gives advertisers the opportunity to present highly targeted, time-sensitive information that is of interest to the user. With nearly three billion mobile phone users in the world, it is clear that mobile advertising represents a huge opportunity. Informa Telecoms & Media predicts that worldwide spend on mobile advertising will be worth $11.35 billion in 2011.
Advertising on mobile devices can take many forms, including banners, sponsored video content and messages sent to users, but telcos and advertisers still need to determine what works best in different circumstances. Advertising techniques cannot simply be copied from the Internet. The screens and devices are smaller; the exposure time tolerated by the user is likely to be less; too many click-throughs will annoy users; and in many cases, operators must be able to identify the device type to render content appropriately.
Even more so than with Internet advertising, mobile advertising must be relevant, interesting to the audience and, especially, not overbearing in quantity. In fact, mobile advertising should be a combination of search, location and presence, and recommendation functions, based on a deep understanding of the consumer's passions, hobbies, purchases, past click-patterns and the like.
Mobile advertising has grown rapidly in parts of Asia: in Japan, for example, NTT DoCoMo has been running small banner ads on its mobile portals for more than five years. Elsewhere, mobile operators have moved cautiously in adding advertising to mobile phones, for fear of alienating subscribers and increasing churn. But there have been a number of initiatives.
In the summer of 2006, Virgin Mobile USA introduced a programme called Sugar Mama, which compensates its phone users with free calling minutes for watching commercials, reading advertiser text messages and taking surveys from brands. In its first seven months, the Sugar Mama campaign awarded three million minutes to about 250,000 registered customers. Virgin Mobile recently announced that it will use JumpTap's search-based advertising platform to offer ads that are highly targeted and relevant to its users. Companies such as Verizon, Sprint and Cingular are now also beginning to test and roll out advertising on mobile phone screens.
In Europe, EMI Music and T-Mobile joined forces at the end of 2006 to pilot ad-supported mobile videos in Britain. Ad-funding company Amobee has recently launched a commercial advertising trial with Orange in France, with companies such as Coca-Cola and Saab signed up for the trial. Orange customers interested in playing games will be offered them for free, or at a reduced rate, if they first agree to watch an advert. Mobile operator 3UK announced in April the launch of a service, supported by personalised advertising, that provides free content to its users. Vodafone and Yahoo! also aim to launch a mobile advertising business in the first half of this year.
However, media brands such as Fox News, USA Today and The New York Times are now also joining the game by providing advertising via their mobile websites, which are accessed directly through a mobile browser and not through a mobile operator's menu. And they are not the only parties that think there will be big business for them down the road. Internet players Google and Yahoo! have already started to include advertising in their mobile search and portal properties. Yahoo! has even launched a mobile advertising platform in 19 countries across Europe, Asia and the Americas, instantly enabling advertisers to reach consumers around the globe on their mobile phones. Advertisers already signed up include the Hilton Hotel group, Pepsi and Singapore Airlines. And then there is Nokia, also jumping onto the mobile advertising bandwagon, by announcing two mobile advertising services designed for targeted campaigns on the handsets.
Highly targeted, addressable advertising can increase advertising revenue per viewer significantly, while making the viewing experience more personalised and better received. Several studies have confirmed that subscribers are more likely to respond favourably to advertisements when the topic is of interest to them. This type of advertising, however, raises the issue of privacy. Legislation in both Europe and the US is intended to ensure that user-specific data is not used for any purpose other than providing the telecommunications service itself. 'Opting in' may well be the route to take, and could prove popular with consumers: they allow their user-specific data to be used and, in return, receive more relevant ads and special offers.
Many parties, from marketers to big media companies, handset makers, Internet players and telecom operators, hope to get a piece of the pie. But operators hold the demographic, transactional, behavioural and location data necessary to deliver marketing and advertising that meets the consumer's need for relevance. Operators are now at the point where they should exploit these unique advantages to secure their part of the pie.

Lawrence Kenny is Global Telecommunications Industry Leader for IBM Global Business Services.  Rob van den Dam is European Telecommunications Leader for the IBM Institute for Business Value

VDSL - A light touch

With ADSL2+ technology now being pushed to its absolute limits, carriers are talking about the next generation of broadband, VDSL and VDSL2, offering speeds of up to 200 Mbit/s over relatively short line lengths from the DSLAM. Jorg Franzke explains how the speed benefits of VDSL and VDSL2 can be rolled out to most urban and city customers without breaking the telco bank

It's been an eventful decade across Europe as former state incumbents, cable companies and virtual telcos have all, seemingly en masse, jumped on the broadband bandwagon and rolled out an increasing range of high-speed services to their customers.
Most experts agree that, even with ADSL2+ offering customers downstream data speeds of up to 24 Mbit/s, appetites for even faster services are still growing, with some cable companies already talking about offering 100 Mbit/s as standard.

The only problem with this new generation of very high-speed broadband services is that it relies on VDSL and VDSL2 technology. Whilst ADSL2+ can happily support copper line lengths of two or more kilometres, VDSL achieves its maximum rate of around 52 Mbit/s only within about 300 metres of the DSLAM (digital subscriber line access multiplexer). With VDSL2 (ITU-T G.993.2) technology, carriers are even talking about rates of up to 200 Mbit/s.
But VDSL2 performance deteriorates quickly, from a theoretical maximum of 250 Mbit/s at zero metres from the DSLAM to 100 Mbit/s at 500 metres and 50 Mbit/s at 1.0 kilometre. As a result of these line-length limitations, very few customers would be within the coverage range of VDSL2 DSLAMs installed at the central exchange. So most local loop carriers are discussing moving the active electronics, including the DSLAMs, out of their central offices and into larger versions of the roadside cabinets that form an integral part of the street furniture we see every day.
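As a rough illustration of how sharply reach limits VDSL2, the sketch below linearly interpolates between the three rate/distance points quoted above. The linear model (and its flattening beyond one kilometre) is an assumption for illustration only, not vendor performance data.

```python
# Illustrative only: estimate achievable VDSL2 rate from loop length by
# linear interpolation between the figures quoted in the article
# (250 Mbit/s at 0 m, 100 Mbit/s at 500 m, 50 Mbit/s at 1,000 m).
POINTS = [(0, 250), (500, 100), (1000, 50)]  # (distance in metres, Mbit/s)

def estimated_rate_mbps(distance_m: float) -> float:
    if distance_m <= POINTS[0][0]:
        return POINTS[0][1]
    for (d1, r1), (d2, r2) in zip(POINTS, POINTS[1:]):
        if distance_m <= d2:
            # Linear interpolation between the two surrounding quoted points.
            return r1 + (r2 - r1) * (distance_m - d1) / (d2 - d1)
    return POINTS[-1][1]  # beyond the last quoted point, assume it flattens

if __name__ == "__main__":
    for d in (100, 300, 750):
        print(f"{d} m -> ~{estimated_rate_mbps(d):.0f} Mbit/s")
```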
A major issue is that the move out of the central office also de-centralises the main distribution frame, where connections have to be moved to initiate new services such as ADSL and VDSL. Each time a customer requests a change of service, jumper wires have to be moved - fairly easy and efficient in a warm, dry, clean centralised environment, but an operational issue once the connections have to be made in the cold and rain. To give the reader an idea of the scale involved, a network the size of BT's in the UK would require around 65,000 of these externally deployed active electronics cabinets; a network the size of Germany's T-Com would require around 100,000.
In theory, the incumbent telcos could employ teams of roving engineers to maintain and provision the cabinets in much the same way as central offices are serviced at the moment, but the costs associated with the necessary engineering 'truck rolls' are anathema on both the financial and ecological fronts. Even one visit per fortnight, at say €50 per technician visit, would clock up costs of around €130 million per annum on a 100,000-cabinet network.
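A minimal sketch of the arithmetic behind that figure, using only the numbers quoted in this article (100,000 cabinets, €50 per visit, fortnightly visits) and, for comparison, the one or two planned visits per year described later for a minimum touch network; applying the same €50 per-visit cost to both cases is an assumption.

```python
# Illustrative cost comparison using the figures quoted in the article:
# 100,000 cabinets, EUR 50 per technician visit, fortnightly visits (26/year)
# versus the one or two planned visits per year of a "minimum touch" network.
CABINETS = 100_000
COST_PER_VISIT_EUR = 50

def annual_cost(visits_per_year: int) -> int:
    return CABINETS * COST_PER_VISIT_EUR * visits_per_year

fortnightly = annual_cost(26)    # ~EUR 130 million, as stated in the text
minimum_touch = annual_cost(2)   # ~EUR 10 million at two visits per year
print(f"Fortnightly visits:   EUR {fortnightly:,} per annum")
print(f"Minimum touch (2/yr): EUR {minimum_touch:,} per annum")
```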
Consequently, any carrier electing to stay with the status quo and implement manual re-jumpering at the thousands upon thousands of active-equipment roadside cabinets will be forced to reduce costs by making only scheduled visits. The corollary is that each cabinet may only figure in the schedule once a fortnight, meaning that the time to provision each customer will become much longer than is currently the case.
New approach
A markedly different technique is needed, and newly developed automatic cross-connects (ACX) can now be used to replace manual distribution/jumpering frames in the remote cabinets, saving carriers significant sums on the operational expenditure (OpEx) front.
With an automated ACX solution, not only are there no delays waiting for an appropriate technician truck roll, but the control of the ACX can be integrated directly into the carrier's operations support system. Using this approach, the service connection follows on automatically from the customer's order within an hour or two, rather than the customer facing a wait of several days, as is normally the case with a central office today, or several weeks in the manual re-jumpering scenario described above.
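A minimal sketch of how such OSS-driven provisioning might look. The order fields, the `AcxController` interface and the port names are hypothetical placeholders for illustration, not ADC KRONE's actual API.

```python
# Hypothetical sketch: an OSS order triggers an automatic cross-connect in the
# roadside cabinet, so no truck roll is needed to jumper the new service.
from dataclasses import dataclass

@dataclass
class ServiceOrder:
    customer_line: str   # copper pair serving the customer
    service: str         # e.g. "VDSL2"
    cabinet_id: str

class AcxController:
    """Stand-in for the carrier's management interface to a cabinet ACX."""
    def connect(self, cabinet_id: str, line: str, dslam_port: str) -> None:
        # In reality this would be an OSS southbound command to the cabinet.
        print(f"[{cabinet_id}] cross-connect {line} -> DSLAM port {dslam_port}")

def provision(order: ServiceOrder, acx: AcxController, free_ports: list[str]) -> str:
    port = free_ports.pop(0)                  # pick an unused DSLAM port
    acx.connect(order.cabinet_id, order.customer_line, port)
    return port                               # record the mapping in inventory

provision(ServiceOrder("pair-0172", "VDSL2", "cab-042"), AcxController(), ["slot3/port7"])
```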
Many carriers and manufacturers alike are chasing the holy grail of 'zero touch' for their networks. We have been pioneering a rather less utopian approach based on practicality and best return on investment.
The aim of a zero touch network would, of course, be that the field technician never needs to visit the remote site. Which is all very well until you take into account that active equipment, its power supplies and its air-conditioning all go wrong from time to time. So occasional technician visits are inevitable.
Zero touch systems have other drawbacks, not the least of which is that the purchase costs can be substantial, reducing the installation's return on investment. In the case of automated cross-connects, zero touch would need a non-blocking switching matrix, which is very expensive, and current non-blocking technology simply isn't up to the job of transmitting 100 Mbit/s signals.
A third issue with zero touch systems is that the cabinet needs to be equipped with a large degree of reserve DSLAM and splitter capacity, and likewise power supplies and air-conditioning, seriously increasing the capital expenditure required.
Our theory is simple: what happens if we introduce a minimum number of technician truck rolls to the mix, creating a 'minimum touch', not zero touch, active electronics-based local loop?
This is where the financials begin to get interesting, as a minimum touch network is far more financially viable. It requires significantly lower capital expenditure with very similar operating costs. Less spare capacity is needed, as it can be added when demand dictates. Likewise, a much less expensive semi-blocking ACX can be used, while still supporting the full frequency range for 100 Mbit/s service delivery.
A good minimum touch system has the advantage of automating the provisioning and re-provisioning of lines without incurring the high capital costs of a zero touch system, or attenuating the signal levels required for effective VDSL and VDSL2 transmissions.
Well before the ACX system reaches saturation, it can signal its status to the central exchange, allowing engineers to make a planned site visit, install additional capacity if needed, and hardwire connections already switched through to VDSL, freeing up the switch ports to be used again for the next six or twelve months.
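A minimal sketch of the kind of utilisation alert this implies; the 80 per cent threshold and the reporting shape are assumptions for illustration.

```python
# Hypothetical sketch: cabinet ACX units report port utilisation so the network
# operations centre can plan a site visit well before the matrix saturates.
ALERT_THRESHOLD = 0.8  # assumed: flag cabinets at 80% of switched-port capacity

def cabinets_needing_visit(utilisation: dict[str, float]) -> list[str]:
    """Return cabinet IDs whose ACX utilisation has crossed the threshold."""
    return [cab for cab, level in utilisation.items() if level >= ALERT_THRESHOLD]

# Example: only cab-042 would be scheduled for a planned capacity visit.
print(cabinets_needing_visit({"cab-041": 0.55, "cab-042": 0.83, "cab-043": 0.30}))
```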
The result of this approach is good scalability, lower cost per line and a reduced space requirement. And all without affecting those all-important customer satisfaction levels.
Using a minimum touch ACX approach means that only one or two maintenance visits each year are required for each cabinet, with remote monitoring shouldering the responsibility of maximising network up-time.
In the event that something like a DSLAM card fails, the ACX's ability to connect 'any-to-any' can be employed to ensure that customers are only minimally affected. Depending on the severity of the failure, the cabinet's active technology can be remotely reconfigured to maintain service for the customers affected, and the network operations centre can schedule a truck roll when it suits the operator. This makes for a more cost-effective maintenance strategy.
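A minimal sketch of that any-to-any reconfiguration; the port naming and the assumption of spare ports on healthy cards are illustrative, not taken from a real ACX.

```python
# Hypothetical sketch: when a DSLAM card fails, remap the affected customer
# lines through the ACX onto spare ports on healthy cards, pending a truck roll.
def remap_failed_card(line_to_port: dict[str, str], failed_card: str,
                      spare_ports: list[str]) -> dict[str, str]:
    remapped = dict(line_to_port)
    for line, port in line_to_port.items():
        if port.startswith(failed_card) and spare_ports:
            remapped[line] = spare_ports.pop(0)   # move line to a working port
    return remapped

mapping = {"pair-0172": "card2/port1", "pair-0173": "card2/port2", "pair-0180": "card1/port5"}
print(remap_failed_card(mapping, "card2", ["card3/port1", "card3/port2"]))
```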
ACX technologies
In a survey of switching technologies for remote automated cross-connect devices used in next generation carrier networks, research company Venture Development Corporation (VDC) considered a number of options. It rejected robotic switching as error-prone, expensive and short-lived, and found that solid-state/electronic switches have electrical parameters that make them unsuitable for the high bandwidth requirements of xDSL services such as VDSL2. VDC also noted that a specific 'electromagnetic' variation of the MEMS relay may become a suitable technology, but this is currently only in testing for ACX applications and, as yet, has no field track record.
VDC concluded: “We believe the electromagnetic relay is acceptable technology because of its proven reliability, ruggedness and minimal transmission impairment.” It did not judge any other technology to be currently acceptable. This, together with its minimal power requirements, is why we have chosen to develop our own ACX product range around the tried and tested electromagnetic relay.
Obviously, whether to implement ACX or to manage the service provision process manually is a matter for individual carriers. The choice of technology is critical from the perspective of reliability, minimal power consumption and the ability to handle the very high frequencies needed for VDSL2, but far more important in this rapidly changing telecoms world is the need for rapid return on investment.
It is our contention that zero touch is a step too far and that, in the world of everyday engineering issues, minimum touch networks and minimum touch ACX are the way to minimise costs.

Jorg Franzke is ACX product manager for ADC KRONE, and can be contacted via tel: +49 308453-2498
www.adckrone.com

LEAD INTERVIEW - An Open Approach

End-to-end transaction data is increasingly being recognised as the not-so-secret sauce required for full-flavoured telco transformation. If so, it should be treated with the reverence it deserves, Thomas Sutter, CEO of data collection and correlation specialist Nexus Telecom, tells Ian Scales

Nexus Telecom is a performance and service assurance specialist in the telecom OSS field. It is privately held, based in Switzerland and was founded in 1994. With 120 employees and about US$30 million turnover, Nexus Telecom can fairly be described as a 'niche' player within a niche telecom market. However, heavyweights amongst its 200 plus customers include Vodafone, T-Mobile and Deutsche Telekom.

It does most of its business in Europe and has found its greatest success in the mobile market. The core of its offer to telcos involves a range of network monitoring probes and service and revenue assurance applications, which telcos can use to plan network capacity, identify performance trends and problems and to verify service levels. Essentially, says CEO, Thomas Sutter, Nexus Telecom gathers event data from the network - from low-level network stats, right up to layer 7 applications transactions - verifies, correlates and aggregates it and generally makes it digestible for both its own applications and those delivered by other vendors.  What's changing, though, is the importance of such end-to-end transaction data. 
Nexus Telecom is proud of its 'open source approach' to the data it extracts from its customers' networks and feels strongly that telcos must demand similar openness from all their suppliers if the OSS/BSS field is to develop properly.  Instead of allowing proprietary approaches to data collection and use at network, service and business levels respectively, Sutter says the industry must support an architecture with a central transaction record repository capable of being easily interrogated by the growing number of business and technical applications that demand access.  It's an idea whose time may have come.  According to Sutter, telcos are increasingly grasping the idea that data collection, correlation and aggregation is not just an activity that will help you tweak the network, it's about using data to control the business. The term 'transformation' is being increasingly used in telecom.
As currently understood it usually means applying new thinking and new technology in equal measure: not just to do what you already do slightly better or cheaper, but to completely rethink the corporate approach and direction, and maybe even the business model itself. 
There is a growing conviction that telco transformation through the use of detailed end-to-end transaction data to understand and interact with specific customers has moved from interesting concept to urgent requirement as new competitors, such as Google and eBay, enter the telecom market, as it were, pre-transformed. Born and bred on the Internet, their sophisticated use of network and applications data to inform and drive customer interaction is not some new technique, cleverly adopted and incorporated, but is completely integral to the way they understand and implement their business activities. If they are to survive and prosper, telcos have to catch up and value data in a similar way.  Sutter says some are, but some are still grappling with the concepts. 
"Today I can talk to customers who believe that if they adopt converged networks with IP backbones, then the only thing they need do to stay ahead in the business is to build enough bandwidth into the core of the network, believing that as long as they have enough bandwidth everything will be OK."
This misses the point in a number of ways, claims Sutter. 
"Just because the IP architecture is simple doesn't mean that the applications and supply chain we have to run over it are simple  - in fact it's rather the other way about.  The 'simple' network requires that the supporting service layers have to be more complex because they have to do more work." 
And in an increasingly complex telco business environment, where players are engaged with a growing number of partners to deliver services and content, understanding how events ripple across networks and applications is crucial.
"The thing about this business is not just about what you're doing in your own network - it's about what the other guy is doing with his. We are beginning to talk about our supply chains. In fact the services are generating millions of them every day because supply chains happen automatically when a service, let's say a voice call over an IP network, gets initiated, established, delivered and then released again. These supply chains are highly complex and you need to make sure all the events have been properly recorded and that your customer services are working as they should. That's the first thing, but there's much more than that.  Telcos need to harness network data - I call them 'transactions' - to develop their businesses."
Sutter thinks the telecom industry still has a long way to go to understand how important end-to-end transaction data will be.
"Take banking. Nobody in that industry has any doubt that they should know every single detail on any part of a transaction. In telecoms we've so far been happy to derive statistics rather than transaction records. Statistics that tell us if services are up and running or if customers are generally happy. We are still thinking about how much we need to know, so we are at the very beginning of this process."
So end-to-end transaction data is important and will grow in importance.  How does Nexus Telecom see itself developing with the market?
"When you look at what vendors deliver from their equipment domains it becomes obvious that they are not delivering the right sort of information. They tend to deliver a lot of event data in the form of alarms and they deliver performance data - layer 1 to layer 4 - all on a statistical basis.  This tells you what's happening so you can plan network capacity and so on.  But these systems never, ever go to layer 7 and tell you about transaction details - we can. 
"Nexus Telecom uses passive probes (which just listen to traffic rather than engage interactively with network elements) which we can deploy independently of any vendor and sidestep interoperability problems.  Our job is to just listen so all we need is for the equipment provider to implement the protocols in compliance with the given standards."
So, given that telcos are recognising the need to gather and store this data, what's the future OSS transaction record architecture going to look like?
"I think people are starting to understand it's important that we only collect the data once and then store it in an open way so that different departments and organisations can access it at the granularity and over the time intervals they require, and in real (or close to real) time. So that means that our  approach and the language we use must change. Where today we conceptualise data operating at specific layers - network, service and business - I can see us developing an architecture which envisages all network data as a single collection which can be used selectively by applications operating at any or all of those three layers.  So we will, instead, define layers to help us organise the transaction record lifecycle. I envisage a collection layer orchestrating transaction collection, correlation and aggregation.  Then we could have a storage layer, and finally some sort of presentation layer so that data can be assembled in an appropriate format for its different constituencies  - the  marketing  people, billing people, management guys, network operation guys and so on, each of which have their own particular requirements towards being in control of the service delivery chain. Here you might start to talk about OSS/BSS Convergence."
Does he see his company going 'up the stack' to tackle some of these applications in the future?
"It is more important to have open interfaces around this layering.  We think our role at Nexus Telecom is to capture, correlate, aggregate and pre-process data and then stream or transfer it in the right granularity and resolution to any other open system."
Sutter thinks the supplier market is already evolving in a way that makes sense for this model.
"If you look at the market today you see there are a lot of companies - HP, Telcordia, Agilent and Arantech, just to name a few - who are developing all sorts of tools to do with customer experience or service quality data warehouses.  We're complementary since these players don't want to be involved in talking to network elements, capturing data or being in direct connection with the network.  Their role is to provide customised information such as specific service-based KPIs (key performance indicators) to a very precise set of users, and they just want a data source for that."
So what needs to be developed to support this sort of role split between suppliers? An open architecture for the exchange of data between systems is fundamental, says Sutter. In the past, he says, the ability of each vendor to control the data generated by his own applications was seen as fundamental to his own business model and was jealously guarded. Part of this could be attributed to the old-fashioned instinct to 'lock in' customers. 
"They had to ask the original vendor to build another release and another release just to get access to their own data," he says. But it was also natural caution.  "You would come along and ask, 'Hey guys, can you give me access to your database?', the response would be 'Woah, don't touch my database.  If you do then I can't guarantee performance and reliability.' This was the problem for all of us and that's why we have to get this open architecture. If the industry accepts the idea of open data repositories as a principle, immediately all the vendors of performance management systems, for instance, will have to cut their products into two pieces.  One piece will collect the data, correlate and aggregate it, the second will run the application and the presentation to the user.  At the split they must put in a standard interface supporting standards such as JMS, XML or SNMP. That way they expose an open interface at the join so that data may be stored in an open data to the repository as well as exchanged with their own application. When telcos demand this architecture, the game changes. Operators will begin to buy separate best in class products for collecting the data and presenting it and this will be a good thing for the entire industry.  After all, why should I prevent my customer having the full benefit of the data I collect for him just because I'm not as good in the presentation and applications layer as I am in the collection layer? If an operator is not happy with a specific reporting application on service quality and wants to replace it, why should he always loose the whole data collection and repository for that application at the same time?"
With the OSS industry both developing and consolidating, does Nexus Telecom see itself being bought out by a larger OSS/BSS player looking for a missing piece in its product portfolio?
"Nexus Telecom is a private company so we think long-term and we grow at between 10 and 20 per cent each year, investing what we earn. In this industry, when you are focusing on a specialisation such as we are, the business can be very volatile and, on a quarter-by-quarter basis, it sometimes doesn't look good from a stock market perspective."
But if a public company came along and offered a large amount of money? "Well, I'm not sure. The thing is that our way of treating customers, our long-term thinking and our stability would be lost if we were snapped up by a large vendor. Our customers tend to say things like  'I know you won't come through my door and tell me that someone somewhere in the US has decided to buy this and sell that and therefore we have to change strategy.' Having said that, every company is for sale for the right price, but it would have to be a good price."
So where can Nexus Telecom go from here?  Is there an opportunity to apply the data collection and correlation expertise to sectors outside telecom, for instance?
"Well, the best place to go is just next door and for us that's the enterprise network. The thing is, enterprise networks are increasingly being outsourced to outsourcing companies, which then complete the circle and essentially become operators. So again we're seeing some more convergence and any requirement for capturing, correlating and aggregating of transactions on the network infrastructure is a potential market for us. In the end I think everything will come together: there will be networks and operators of networks and they will need transaction monitoring.  But at the moment we're busy dealing with the transition to IP - we have to master the technology there first.”

Ian Scales is a freelance communications journalist.