European Communications


Features

European Communications discusses the latest telecom trends with telco executives, analysts and topic experts via insightful analysis, Q&As and opinion pieces.

3G TAKE-UP - Pulling the usage trigger

For 3G to be a success, Alon Barnea explains, users need to be motivated to use it and be given an easier way to adopt the technology

" height="<% height %>" align="right" alt="3G TAKE-UP - Pulling the usage trigger" class="articleimage" />

The introduction of 3G and video calls was not met with the fanfare the industry expected. Even now, with over 100 million subscribers and the number growing rapidly, 3G users still represent a rather small share of the two billion mobile subscribers worldwide, and video usage within this video-enabled community is still regarded as a disappointment. Point-to-point video calls are evidently not a big enough draw to encourage people to jump on the 3G bandwagon, and with an estimated one in ten mobile phone users actually owning a 3G phone, it seems unlikely that person-to-person video calling will become the phenomenon that SMS has. Though most share the view that video will become mainstream and a major revenue source, the question remains: what will make video communications a success?

The UMU factor
Much has been said about the limiting factor of 3G video: people’s reluctance to accept “intrusive surprise” video calls. That’s where the User Motivated Usage (UMU) factor comes in. The UMU factor relates to video applications in which an event is generated, at a given moment, that motivates users to make (rather than receive) a video call. It is the key to raising the level of video usage over mobile devices because it removes the “surprise call” element and the absolute necessity to be seen.
For 3G to be a success, users need to be motivated to use it and be given an easier way to adopt the technology; rather than having to wait for their friends to catch on too. With the UMU factor, 3G can be used by anybody today for an exciting experience that is independent of the 3G availability of other participants.
Naturally, “traditional” video communication is happening now. Take a group of female friends, for example, getting ready for a night out together. The advent of 3G mobile to PC communication opens up a new avenue for these women to get their friends’ opinions on how they look in a particular outfit. Using their 3G phones, or webcams on their PCs, the group of women can ‘meet’ in their own online community and compare clothes from their separate homes before meeting later in the evening.
Another example is that of a businessman who is travelling. While travelling through France on the TGV, he can still take part in a face-to-face briefing with a client based in Scotland, using his 3G mobile phone to connect to his client’s PC, while conferencing in his partner, who is sitting at her desk in Brussels.
But wouldn’t it be appealing to those avid sports fans to see and hear the Most Valuable Player right after a major basketball game?  Members of the team’s fan club can call to see and hear what the MVP has to say, and maybe even be selected by a moderator to be seen by all of the viewers in the fan club community to ask a question.  At the same time, someone sitting in their living room watching the game on their new HDTV can join the live video session as well because their cable STB is also a video-enabled client. It could be just your luck that you are stuck at work, but you can enjoy this live from your desktop PC.

The secrets of success
For 3G to be a success, operators and service providers need to answer the following questions:
1.    Have we secured a strong enough trigger/interest for usage?
Without a reason to use 3G, why should users start paying out extra to make video calls on their 3G phones? Users need to be given something to inspire them to pick up their 3G phones and make a video call. No matter what the lifestyle, users need to know that 3G can benefit them; that it is there for everyone.
2.    Is there a specific context/timing for when a service will be used (here and now)?
A 3G mobile device is smaller, offers lower quality and is more expensive than other media (PC, TV, STB, etc.), but it is the only truly mobile device and it is always available. An event or community that triggers use at a specific time, or a context that motivates the user to join at that moment, creates the need and the reason to use a mobile device. If the service is timed to coincide with the end of a sporting event, and targeted at viewers who are mobile at that time, then the usage trigger exists. When a service is geared to people on the move, the user motivation is created.
3.    What is the guaranteed success and completion rate?
Successful services must have high completion rates. The way to guarantee this is to deploy a service that is not dependent on a high ratio of other 3G enabled handsets – for example where participants and content can also originate from the IP. In this manner, those who do own 3G phones will be ensured a successful service, with a 100 per cent completion rate.  Furthermore, a converged environment, including video-enabled PCs, expands the boundaries of the relevant communities, and increases the levels of participation thus increasing adoption rates.
The more limited or complicated a medium is, the more necessary it becomes to correctly understand the above factors to ensure usage and success. In the context of video and the mobile handset, the success factors are certainly challenging, and given the low usage rates experienced by the industry today, it can safely be said that the answers indicated above still need to be integrated into the 3G services provided by operators.

Pulling the trigger (of usage)
Users need to be shown and delivered true benefits of what can be done with 3G and what advantages it has. What video over mobile has to offer, besides a small screen and limited quality, is unprecedented mobility and availability; no matter where you are and what you’re doing (within reason, of course) you can always see your friends and family, make that meeting, and enjoy video content. By taking advantage of the mobility and availability of 3G and considering the success factors, an interesting and promising future for mobile video can be seen.
If the UMU factor is put into effect, then the usage possibilities are endless. Even now, some of these possibilities have become reality. 
Imagine how many such UMU “events” we are missing every day! 
The options for 3G are infinite – imagine how much mobile video traffic can and should be generated when influenced by the UMU factor. Consumers just need a little push in the right direction and before you know it they will be saying: “I’m here, I’m interested and I’m ready to pay! And I will use the best device and access method that is available to me at any given moment.”

Alon Barnea is General Manager of RADVISION’s mobile business

OSS/J - In Perfect Harmony

Since joining forces last year, OSS/J and the TM Forum are proving that by combining their strengths, not only the OSS community, but the communications industry as a whole, has much to gain.  Doug Strombom takes a look over the past twelve months

" height="<% height %>" align="right" alt="OSS/J - In Perfect Harmony" class="articleimage" />

The OSS through Java Initiative’s (OSS/J) decision in early 2006 to join with the TeleManagement Forum (TM Forum) appears to have been a good one.  OSS/J is a rising star within the TM Forum, making a strong contribution to the technical programme there, and increasing its influence on the TM Forum’s New Generation Operations Systems and Software (NGOSS) standards-making efforts.
In January 2006, when OSS/J first discussed joining the TM Forum at OSS/J’s face-to-face meeting in Dusseldorf, Germany, there was some trepidation that the group’s strong focus on standards-making might be diluted within the much larger organisation.  But the following day when the idea of merging OSS/J into TM Forum was presented to the telecommunications service providers present at the OSS/J Service Provider Roundtable, it was greeted with acclaim.  The move would put to rest the concern by service providers that there are too many different standards and standards-making organizations within the telecommunications industry.  By removing the uncertainty factor of having multiple competing standards, the service providers agreed that it was to everyone’s benefit to widely adopt a single OSS integration standard.
The path to the standardisation of OSS interfaces has not been smooth.  With hundreds of telecommunications service providers worldwide, not to mention hundreds of OSS vendors and system integrators, reaching agreement on common standards can be a real challenge.  The proverbial chicken-and-egg problem is often cited, with service providers agreeing to adopt open standards only when sufficient OSS vendors support them, and OSS vendors agreeing to provide open standards only when sufficient service providers agree on which standards they require.  Because there are so many players in telecommunications, it is much more difficult to agree on standards than in more concentrated and vertically-integrated industries like the automotive industry.
That’s why industry standards bodies like TM Forum are so important, and it helps that the TM Forum has plenty of prestige within the telecommunications industry.  Its members include approximately 600 telecommunications service providers, OSS vendors and system integrators.  When the TM Forum says “this is the standard that our members want to adopt,” it is a very significant statement. 
The TM Forum can justly claim that its choice of standards is impartial and in the best interests of the whole telecommunications industry.  It is an open group, with a Board that is elected by its corporate members, with councils representing service providers, vendors and system integrators, respectively.  That Board appoints a Technical Committee to sort through industry best practices and make the final determination on standards issues.  Within the TM Forum, standards-making programmes like OSS/J perform their work at the behest of the Technical Committee.  The Technical Committee’s overarching goal is to define NGOSS, into which OSS/J fits neatly as an implementation-oriented interface standard.  These open governance mechanisms of the TM Forum are helping the OSS industry find a unified voice in favour of standardisation.
An immediate result of OSS/J uniting with the TM Forum was an upsurge in OSS/J membership. With OSS/J now under the auspices of the TM Forum, more industry participants were assured about the impartiality of OSS/J. Major OSS players like HP and integrators like TCS (Tata Consultancy Services) added their considerable industry weight and technical resources to the development and maintenance of OSS/J APIs. OSS/J development is performed under the open Java Community Process (JCP). Each API project is led by a Spec Lead drawn from the industry, and participation in the project is open to other companies that can contribute their requirements and technical support. In the parlance of the JCP, each API project is called a “JSR” (Java Specification Request). HP took on the new OSS/J Fault Management API. TCS began to participate by constructing Reference Implementations (RIs) for many OSS/J APIs.
Membership in the OSS/J Programme at TM Forum is open to new members who are willing to make a technical contribution to the development of OSS/J APIs.  The TM Forum assigned the job of negotiating technical contributions and following up on those promises throughout the year to a dedicated Technical Programme Manager.  This important job went to Antonio Plutino, who has successfully managed OSS/J deliverables over the years.  Of course, one does not have to be an OSS/J member in order to contribute to API standards: many individuals and companies contribute to JSRs at the invitation of the Spec Leads.
A second major impact of moving OSS/J into the TM Forum relates to the professionalism and governance of TM Forum’s standards-making process. OSS/J subjects itself to a formal JCP process because the JCP is a tried-and-true development process with built-in checks and balances. Because the JCP is an open process involving key experts from the industry, the quality of inputs to the JSRs is very high. And the review steps that are inherent in the JCP process help to ensure that many reviewers validate the approach taken to define interfaces and the quality of the resulting specifications. This additional governance has breathed fresh air into the TM Forum’s standardisation process, as the TM Forum itself has begun to adopt governance that can stand up to ever greater scrutiny.
OSS/J helped introduce advanced techniques for producing interface specifications, such as the use of a common information model and model-driven tooling. All of the new OSS/J APIs have been specified using the Core Business Entities (CBE) from the TM Forum’s Shared Information/Data (SID) model. This helps to ensure compatibility between APIs developed according to the OSS/J standard. In addition, OSS/J interfaces are built using Tigerstripe Workbench, a model-driven tool developed by Tigerstripe, an OSS/J member. This software allows JSRs to design an abstract specification of an interface, and then to generate specific code in XML, Java and WSDL, in order to support the different deployment profiles required by OSS/J. The use of a common model and model-driven tools has greatly reduced development time and improved the quality of OSS/J APIs. One JSR reported a 70 per cent reduction in specification effort through the use of the Tigerstripe tools.
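To make the idea of a SID-aligned, coarse-grained Java interface concrete, the following sketch shows roughly what such a contract could look like. It is purely illustrative: the type and method names are assumptions made for this article and are not taken from the actual OSS/J specifications or from Tigerstripe-generated output.

// Hypothetical sketch of a coarse-grained, SID-aligned Java interface in the
// spirit of an OSS/J API; names and signatures are illustrative only.
import java.util.List;

/** Value object modelled on a SID-style Core Business Entity. */
class TroubleTicketValue {
    String ticketKey;        // key assigned by the managed system
    String affectedService;  // service instance the ticket relates to
    String state;            // e.g. QUEUED, OPEN, CLOSED
    String description;
}

/** Session-style facade: one coarse-grained entry point per business operation. */
interface TroubleTicketSession {
    /** Create a ticket and return the key assigned by the target OSS. */
    String createTroubleTicket(TroubleTicketValue ticket);

    /** Retrieve all tickets raised against a given service instance. */
    List<TroubleTicketValue> getTicketsByService(String affectedService);

    /** Close a ticket, recording the resolution text. */
    void closeTroubleTicket(String ticketKey, String resolution);
}

Because the entity definitions come from one shared model, interfaces generated from it for the XML, Java and WSDL profiles remain consistent with one another – the compatibility benefit described above.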
A third impact has been on creating user-focused standards, as opposed to purely technical specifications. OSS/J and the JCP have high standards for defining interfaces. In addition to the interface specification, there must also be a use case or ‘Reference Implementation’ (RI) and a testing framework or ‘Technology Compatibility Kit’ (TCK). This allows an uninitiated integrator to see how the interface was intended to be used in a real-world scenario, and to test the compatibility of his or her application with the open standard. The TM Forum now requires these useful tools to be delivered with all of its interface standards.
OSS/J helped the TM Forum craft the PROSSPERO™ programme, which certifies open standards as being ready for market adoption. PROSSPERO-ready interfaces package everything that an implementer needs for OSS or BSS interoperability, including interface specifications, testing frameworks, guidebooks, online developer support, access to reference implementations, plus educational, marketing and developers’ tools. PROSSPERO interfaces must meet criteria for market adoption and have documented use cases. The idea behind PROSSPERO is to make it even easier for telecommunications companies to adopt open standard interfaces by setting high criteria for market readiness.
Now that OSS/J is well entrenched within the TM Forum, the organisation has shifted into high gear.  Most of the OSS/J APIs will be upgraded and delivered as a ‘Summer Release’ in August 2007.  The APIs that are planned to be released then are:
•    Common API (which underpins all OSS/J APIs)
•    Fault Management API
•    Order Management API
•    Trouble Ticket API
•    Inventory API
New APIs that are slated to be released before the end of 2007 are:
•    Pricing API
•    Discovery API
Meanwhile, OSS/J has an open call to fellow members of the TM Forum to contribute resources and expertise to these and other specification efforts.
Going forward, more exciting news is likely from the OSS/J and TM Forum.  One thread is the growing movement to harmonise all standards-making effort within TM Forum.  At TeleManagement World held in Nice, France, on 20-24th May, 2007, the Harmony Catalyst demonstrated a unified approach to integration that incorporated OSS/J and MTOSI standards.  This work demonstrated that OSS/J and MTOSI, two of the most popular standards from the TM Forum, are compatible with each other.  The TM Forum Technical Committee is underscoring the need for a single standard, and a Harmony Architecture team is taking up the challenge to define the common guidelines for TM Forum standards.  In addition, the TM Forum is reaching out to other standards bodies to use its PROSSPERO programme to promote other valuable standards in the market place.
The TM Forum has the right scope and clout to address the need for OSS integration standards. Never before has an organisation with a global perspective like the TM Forum – with reach into wireless, broadband, IP, billing and content – stood so firmly behind a unified standard for OSS integration. The focus that TM Forum is bringing to standardisation is unprecedented, and in combination with the rigour of the OSS/J standards-making process, the impact is sure to be felt far and wide in the telecommunications industry. We may finally have the answer to the question asked by service providers, vendors and integrators alike: which OSS integration standard should we use? There’s a growing consensus behind a harmonised OSS interface standard from the TM Forum.
Information about OSS/J can be found at www.tmforum.org/ossj or by contacting Antonio Plutino.
Doug Strombom is a Steering Committee Member of the TM Forum’s OSS/J Programme and CEO of Tigerstripe, Inc.

ADDED VALUE CRM - Dreaming the impossible dream?

Is holding on to customers for life a real possibility for
operators? Alastair Hanlon believes that a new approach to CRM will provide the answer

" height="<% height %>" align="right" alt="ADDED VALUE CRM - Dreaming the impossible dream?" class="articleimage" />

Given a choice between winning new customers or holding on to the ones they have, any operator worth its salt would plump for ‘both’. It’s a reasonable choice but the fact is that many operators have been less successful at tackling the perennial problem of churn than at luring in new customers with attractive but costly offers. Of course, one person’s churn is another’s new sale.
For many, the solution has been to make heavy investments in IT, CRM systems in particular, in the hope that these will build longer term customer relationships. So far, this has not turned out quite as hoped, partly because the much-vaunted CRM and back office systems have tended to operate in silos and not as a seamless facility that provides a full picture of the customer relationship across all aspects of the business. This is not a deliberate policy, simply a result of rapid expansion and the need to add new systems to support new services. The information usually exists; it is just not readily accessible.
It becomes even more difficult in this era of convergence, as operators add new services.
Unfortunately, a customer trying to contact an operator can be forgiven for wondering whether they are dealing with one company or several. They soon find that call centre agents, their first point of contact, rarely have all the billing, service, offer and helpdesk information at their fingertips, as they might expect.
Things can be just as frustrating for the call centre agents themselves. In order to build a complete picture of a particular customer’s relationship with the company they have to switch between different CRM and back office applications, often resorting to handwritten notes to relate what they find in one with the information they collect from the others. It is inefficient, time consuming and highly frustrating.
If an agent is dealing with someone who is ready to churn, the question is what scope they have to make new offers or set up different deals, in order to retain the customer. Without a complete picture and firm policies in place to guide them, agents can end up giving less valuable customers more than they are worth and neglecting those with the greater long-term value. The customer with greater long-term revenue potential could decide to leave simply because the phone queues are clogged up, e-mail responses are too slow, service levels are poor and offers are unattractive.

Ending the silo culture
It is time for a strategic approach that makes the best use of technology, provides all the data needed and equips organisations to identify the highest potential customers and make the decisions needed to keep them on-board for the long haul. 
This has to be supported by technology that goes further than most do at present. Existing CRM systems are just not capable of identifying the true lifetime value of individual customers and turning that into action every time there is a customer contact. Through integrating CRM and back-office systems, upgrading processes and reviewing business rules and procedures, a whole new world of opportunities is opening up.
This will provide a complete view both of the services available and of the customers. This makes agents very much more effective because they are able to see into all systems at the same time and no longer have to ferret for information from different sources.  Business policies also become more consistent. This means that customers get the same information and level of service across all contact channels, whether they are interacting over automated self-care or with a call centre agent.
By integrating and making accessible information on customer history and services subscribed to, and having it delivered in a clear and transparent manner, it will be possible to raise the bar in customer service and focus on strategic areas that can transform the business performance.

Revenues for life
A comprehensive route to solving these problems can be found in an approach called Lifetime Value Optimisation (LTVO). This is a process developed by strategy consultants McKinsey, and it moves well beyond traditional CRM.
LTVO can have direct impact on revenues and profitability by addressing the core issues of customer relationships – perceptions, loyalty and churn.
McKinsey has shown that LTVO can generate dramatic increases in EBITDA (earnings before interest, taxes, depreciation and amortisation). An incremental rise of between three and five per cent in EBITDA is possible and, depending on the size of the business, this can translate into hundreds of millions of dollars per year.
For LTVO to succeed service providers must be able to capture and respond to real-time events in the areas of customer care, billing and service delivery. The focus should be on four key areas, each of which directly influences the customer relationship, namely: customer satisfaction, customer retention, increased usage of existing services and take up of new services.
Rather than just reacting to problems when they come up, automated systems based on LTVO can make agents more efficient and allow them to be more proactive when dealing with customer queries.  Automation can be made more effective by tailoring voice or web self-care to the customers’ needs in real time. With fewer incoming calls, call centre agents are freed up to focus on new sales and on serving high value customers.  Not only does this reduce the cost of care and help raise revenues, this unified approach also has the potential to improve the overall customer experience and so increase customer satisfaction and loyalty.
With real-time data and proper micro-market segmentation, service providers are in a position to offer an immediate response to situations as they arise. These can be in such areas as billing queries, response to changing usage patterns or solving problems in real-time. Most importantly they can focus on customers with the highest potential lifetime value and give them targeted attention and high quality service.
Greater convergence means that a complete picture of individual customers, the services they use and their past behaviour and future potential is more essential than ever.
As well as increasing customer satisfaction and loyalty, the operator is better placed to take the initiative by making relevant and attractive offers that will increase each customer’s overall value to its business.

Real-time interaction
Underpinning all this is the principle that every time someone gets a bill, makes a payment, uses a service, makes a call or downloads some content, there is an opportunity to improve the effectiveness of those interactions.
By applying the LTVO approach operators can capture real time events in the customer care, billing, and service delivery environments, evaluate policies related to those events, and carry through real-time actions related to those policies.
Ultimately it is all about giving individual customers the level of service they warrant. Those with high lifetime potential are treated differently from those with lower potential but no one is left feeling neglected or unwanted. The system will identify incoming calls from high value customers and route them to an agent, while a lower value customer might be transferred to an automated self-care system.
This approach produces individual solutions for individual customers, personalised to their needs, habits and tastes, and it does this proactively and in real-time. So, if a customer starts downloading music or ring tones to a mobile, the system might suggest an offer – a good-value subscription offer or a two-for-one option. Similarly, another customer who is about to buy their third ‘pay-per-view’ movie in a week might be offered a particular movie package, possibly a free movie as a reward for buying ‘now’.
Taking advantage of these ‘warm’ sales opportunities might be done by e-mail, phone or text but, above all, it happens at precisely the moment when the customer is focused on a particular aspect of the service – right when they are about to make a purchase. Rather than seeing this as ‘hard sell’, they are more likely to view it as a response to a real need.
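A minimal sketch of that event-to-policy-to-action loop is shown below. It assumes a pre-computed lifetime-value score per subscriber, and the event names and thresholds are invented for the example; it is not Convergys’ implementation, simply an illustration of the pattern described above.

// Illustrative sketch of the LTVO loop: capture a real-time event, evaluate a
// policy against the subscriber's lifetime value, return an action to trigger.
import java.util.Map;

class UsageEvent {
    String subscriberId;
    String eventType;    // e.g. "INBOUND_CALL", "PAY_PER_VIEW_PURCHASE"
    int countThisWeek;   // running total maintained by the mediation layer
}

class LtvoPolicyEngine {
    private final Map<String, Double> lifetimeValue; // pre-computed per subscriber

    LtvoPolicyEngine(Map<String, Double> lifetimeValue) {
        this.lifetimeValue = lifetimeValue;
    }

    /** Evaluate a real-time event and return the action to trigger, if any. */
    String evaluate(UsageEvent event) {
        double ltv = lifetimeValue.getOrDefault(event.subscriberId, 0.0);

        // High-value subscribers calling in are routed to an agent;
        // lower-value subscribers go to automated self-care.
        if (event.eventType.equals("INBOUND_CALL")) {
            return ltv > 500 ? "ROUTE_TO_AGENT" : "ROUTE_TO_SELF_CARE";
        }

        // A third pay-per-view purchase in a week triggers a package offer.
        if (event.eventType.equals("PAY_PER_VIEW_PURCHASE") && event.countThisWeek >= 3) {
            return "OFFER_MOVIE_PACKAGE";
        }
        return "NO_ACTION";
    }
}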
Proactive troubleshooting
LTVO is not restricted to expanding sales opportunities; it is just as effective in solving customer problems, particularly in anticipating and addressing problems before the user even asks for help. If a customer changes their usage patterns, or those patterns suggest that they are having a problem of some kind, help can be offered even before it is requested.
This kind of proactive response to an issue flagged up by the integrated system, in whatever form it takes, can surprise and impress customers who have come to expect slow, ‘after the event’ reactions.
This will not only help to resolve the customers’ problems, it can also make a strong impression on them and build their trust in, and loyalty to, the operator.
The types of events that might warrant an immediate, pro-active response include: someone who appears to be struggling on a self-care web site; a customer using a service for the first time or showing signs of becoming a regular user; when a fault occurs like a dropped call, failed download or device failure; when bills are being paid or get left unpaid.
The more you know about your customers’ behaviour and priorities, the more you can do to strengthen your relationship with them.
The main benefit of LTVO processes comes from better policy enforcement and improved treatment of inbound contacts, while between 10 and 15 per cent of the benefit will result from outbound actions triggered by the real-time data provided through the system.
The holistic approach contrasts strikingly with the traditional tendency to deal with problems piecemeal. At Convergys we see this new approach providing an effective solution to even the toughest sales challenges and, most importantly, one that will be reflected in greatly improved profitability.

Alastair Hanlon is Director, Innovation Strategy, Convergys Corporation, EMEA, and can be contacted via tel: +44 1223 705000
www.convergys.com

CUSTOMER INTELLIGENCE MANAGEMENT - Mining for gold

Who knows more about their customers, a mobile phone operator or Google? The answer, you would think, should be straightforward… but you may be surprised, says Adrian Kelly

" height="<% height %>" align="right" alt="CUSTOMER INTELLIGENCE MANAGEMENT - Mining for gold" class="articleimage" />

The significant advantage that mobile phone operators have over other industries, and that includes the Channel 4s, Skys and even Googles of this world, is the vast volume of customer data they accumulate from a consumer’s daily interaction with the most personal of devices, the mobile phone.  Clearly the operators are sitting on a customer information goldmine. At the moment however, operators are simply not using this wealth of information and as a result are missing an incredible opportunity.
It is an opportunity upon which they will be looking to capitalise over the next 12-18 months as marketing continues to be a key battleground for service providers looking to avoid becoming a bit-pipe. Under threat from many quarters, including media companies and Internet brands, operators’ marketing initiatives have to become two-pronged. Acquisition marketing remains a battle of the brands, where expensive sponsorship and clever pricing are essential to stand out in the increasingly crowded marketplace. Cross- and up-selling, however, along with retention marketing, are much more of a fine art. Marketing to existing customers requires a ‘mass-personalisation’ approach, based on deep customer knowledge.
Operators’ retention tactics (offering an incentive to stay the moment a subscriber requests their PAC number) are well known among subscribers. However, the aim must be to offer the appropriate and relevant incentive in anticipation of a customer’s natural churn cycle, or to encourage them to adopt new services when the time is right for that individual customer – not waiting until it is more costly or potentially too late to keep them. Today, marketing departments are often restricted by a lack of up-to-date information about current subscriber behaviour, as their hands are tied by a dependence on technical teams to extract the information they need to target campaigns, and to assess their success rate. The upshot is that marketing teams are left unable to react quickly and accurately to opportunities and events.
Numerous service providers are finding that established techniques of segmentation based on demographics do not create the depth and accuracy of knowledge required. The latest generation of Customer Intelligence Management solutions offers a whole new level of depth, accuracy and speed of knowledge acquisition for service provider marketing departments, allowing them to truly capitalise on the customer data currently sitting unused within the operator’s network. Segmentation is performed on service usage data, and so represents subscribers’ actual behaviour rather than behaviour assumed from demographics, and is updated daily, direct to the desktop. Customer Intelligence Management is already proving to be a compelling prospect for operators hoping that it will give them a unique advantage over their Internet-based challengers.
Cross and up selling focused marketing should, by its nature, be easier than acquisition marketing.  You are talking to a captive audience.  One that you know, has already bought into the brand proposition, and is probably reasonably happy with the service.  To use a business analogy it’s a little like walking into a sales meeting where you already know the people you are going to see.  Compare it to the acquisition scenario, which is much more akin to the cold call, and you should be in for an easier ride.  However, if your preparatory information is out of date, or you don’t research the motivations and preferences of the people you are meeting, you will not be able to take advantage of the situation. In fact, if your research is so poor that you are making offers that are completely irrelevant, you may even damage your existing relationship.
Service providers can now have access to incredibly detailed behavioural information. With data services continuously on the rise, operators now know when someone sends an MMS, what type of multimedia it was, who it went to, what application-to-person services are used (horoscopes, TV show information), and what TV shows they interact with through voting and content applications.  As the mobile Internet is becoming an increasingly real phenomenon, service providers also have access to much more web browsing information than a search engine can record – wherever the consumer goes online using their mobile or PDA, the operator has a click by click record of their behaviour.
Effective Customer Intelligence Management logs and analyses service usage patterns and mobile browsing habits as they occur – presenting them to marketers in an easily actionable format.  Such a revolutionary approach will put operators in the unique position of not only understanding the habits, behaviour and interests of the user, but also their wider social circle - and crucially, being able to act on them.
Operators have tended to segment their customer base down to around ten profiles (such as heavy talkers, texters, business data users).  Limiting to so few, mainly demographic based groups of subscribers tends to overlook less mainstream usage trends and character traits of users, and becomes increasingly restrictive as operators look to offer more niche services and move into the content and media markets.  With effective real time analysis, a ten-segment model will become a thing of the past, as media industry modelling with up to 100 segments (as used by the BBC for example) becomes a real possibility, and not a management nightmare.  More precise segmentation is the passport to a ‘mass personalisation’ approach, with individuals’ brand loyalty strengthening when marketing is more personally relevant.  From a product planning perspective there is also further opportunity for service providers to tailor new services to suit ever-evolving communities.
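As an illustration of the behaviour-based segmentation described above, the sketch below tags subscribers with fine-grained segments derived from daily usage counters rather than demographic profiles. The counter names and thresholds are assumptions made for this example, not Acision’s product logic.

// Illustrative only: rule-based behavioural segmentation on daily usage counters.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class SubscriberUsage {
    Map<String, Integer> dailyCounters; // e.g. "mms_sent", "tv_votes", "mobile_web_hits"

    SubscriberUsage(Map<String, Integer> dailyCounters) {
        this.dailyCounters = dailyCounters;
    }
}

class BehaviouralSegmenter {
    /** Tag a subscriber with every segment whose usage rule matches. */
    List<String> segment(SubscriberUsage u) {
        List<String> segments = new ArrayList<>();
        if (u.dailyCounters.getOrDefault("mms_sent", 0) > 5) segments.add("heavy_mms_sharer");
        if (u.dailyCounters.getOrDefault("tv_votes", 0) > 0) segments.add("tv_interaction");
        if (u.dailyCounters.getOrDefault("mobile_web_hits", 0) > 20) segments.add("mobile_web_regular");
        if (segments.isEmpty()) segments.add("low_engagement");
        return segments;
    }
}

Because each rule is evaluated independently, adding a new niche segment is a matter of adding one more rule, which is how a model can grow from ten segments towards a hundred without becoming a management nightmare.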
The next twelve to eighteen months will present two major challenges for service providers: increasing market competition and continued hesitancy among consumers to adopt new and unproven services. The key to success on both fronts will be a service provider’s ability to understand its customers: their motivations and preferences. Only by knowing their audience will they be able to offer effectively personalised services and, in doing so, stay one step ahead of the market.

Adrian Kelly is head of Customer Intelligence Management for Acision

CONVERGENT CHARGING - Holding the purse strings

Kari Pulkkinen looks at how online cost control can help operators build a business case for convergent charging

" height="<% height %>" align="right" alt="CONVERGENT CHARGING - Holding the purse strings" class="articleimage" />

The uptake of converged communications has brought with it a wide range of new opportunities for service providers. Triple- and quadruple-play services, including video services as well as applications, download and other content services, are being increasingly accepted into the mainstream and demanded by business users and consumers alike. However, these new services all need to be accurately charged for and billed to the customer to ensure ongoing usage and maximised revenue. How best to achieve this is currently of major concern to operators and service providers alike.
In addition to the concerns around accurate billing is the question of how to ensure that all customers receive the same level of experience – whether they are post or prepaid. Currently, prepaid customers tend to receive limited services and charging models from their providers, owing to concerns amongst those service providers that their billing solutions have limitations that offer a potential window for fraud. Although in the past operators have been hesitant about allowing full service offerings to prepaid subscribers, they are now looking for solutions that allow them to fully capitalise on the potential of prepaid services without suffering revenue leakage and fraud problems. One such solution, which can enable operators to offer more services to the prepaid user, is online charging. By deploying online charging solutions, operators can offer all services to all users while closing the gap on fraud and revenue leakage. Such a solution allows operators to fully capitalise on their prepaid potential and, ultimately, fulfil end-user needs with a wider service offering.
One final consideration revolves around the issue of ‘usage control’. Traditionally, usage control has been linked to the prepaid payment option. However, there is a much wider need for usage control regardless of the payment method. As an example, given the focus on children’s use of and exposure to such services, an increasing number of parents require this additional level of control. Cost control is an important element, as parents want to control their children’s spending. Particularly important when younger children get their “first mobile”, this type of cost control can also educate new users about the usage of mobile services. Online cost control helps both parents and children in these tasks.
There are a variety of service concepts that could cater to helping parents and children control spending, for example, a fixed monthly fee and, on top of it, controlled usage with a user (parent) defined limit. This type of personalised billing model ensures that parents remain confident about costs, and encourages long term usage. For operators, the fixed monthly fee ensures at least the minimum revenue from customers.
In addition, it is vitally important for operators to recognise the role of online cost control in managing both fraud and credit risk, as this can have the greatest impact on their bottom line. Offering new services, particularly those in emerging markets, is creating new opportunities for revenue, but it also risks exposing operators to increased credit risk. As part of convergent charging, online cost control can help mitigate risk through a hybrid approach. A customer would have a fixed limit for post-paid usage that, once exceeded, would automatically switch the payment method to a prepaid mode. The use of pre-paid account mode is enabled through top-up.
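The hybrid model can be pictured with a short sketch: a post-paid allowance that, once exhausted, automatically switches the account to prepaid mode until a top-up arrives. The field names and the authorisation flow are assumptions made for illustration, not a description of any particular charging product.

// Hedged sketch of hybrid online cost control: post-paid up to a limit,
// then automatic switch to prepaid mode until the account is topped up.
class HybridAccount {
    double postpaidLimit;    // parent- or operator-defined monthly allowance
    double postpaidUsed;     // charges accrued against the allowance
    double prepaidBalance;   // balance available once the allowance is exhausted
    boolean prepaidMode;

    HybridAccount(double postpaidLimit, double prepaidBalance) {
        this.postpaidLimit = postpaidLimit;
        this.prepaidBalance = prepaidBalance;
    }

    /** Authorise a charge online, before the service is delivered. */
    boolean authorise(double charge) {
        if (!prepaidMode && postpaidUsed + charge <= postpaidLimit) {
            postpaidUsed += charge;
            return true;
        }
        prepaidMode = true;              // allowance exhausted: switch payment mode
        if (prepaidBalance >= charge) {
            prepaidBalance -= charge;
            return true;
        }
        return false;                    // decline until the account is topped up
    }

    /** A top-up restores spending capacity while in prepaid mode. */
    void topUp(double amount) {
        prepaidBalance += amount;
    }
}

Because every charge is authorised online before delivery, neither the subscriber nor the operator is exposed to an open-ended bill, which is the cost-control and credit-risk point made above.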
While it is increasingly clear that the key to successful convergent charging lies in a unified charging infrastructure, achieving this ‘holy grail’ continues to be a major consideration for operators. The more forward thinking operators have already started to develop the business cases and service concepts around convergent charging. A significant building block in this model lies in accurate ‘online cost control’ both for users and operators alike.
As discussed, the number of new services being introduced opens the operator up to increased risk. Even though the majority of users do not set out to maliciously defraud the service provider, their unfamiliarity with new services and pricing structures means that it is much more likely that they will exceed anticipated costs, which can result in unexpectedly large bills and a consequent unwillingness to adopt the service long term. This can be exacerbated when the user is trying to utilise such services while travelling, as the roaming fees can dramatically add to the cost.
In each case, the result is that the customer will be surprised and shocked by the service bill. From an operator’s point of view this outcome can be the death knell for new service adoption, as the customer decides never to use them again and the operator loses all potential future revenues associated with those particular services. Online cost control means that the customer is able to track costs and avoid bill ‘shock’, and is, therefore, much more likely to continue using the service.
This approach benefits operators by allowing them to ensure the credit-worthiness of customers while at the same time maximising revenue streams.  Operators can also use this model to differentiate their service offering and set out truly unique propositions not easily imitated by competitors, as often happens when introducing new price plans.  At the same time, cost conscious subscribers can feel they can be in control of their spending, while benefiting from the availability of a wide range of services.
Many of these concepts are not new, but there have still not been that many online cost control implementations. This is often due to the fact that operators’ existing billing and prepaid systems have limitations when supporting these types of online cost control service concepts.  Again, one effective way to implement this capability is to deploy an online cost control solution. This approach is able to provide flexibility for operators to build their own, individualised service concepts for online cost control. This type of solution can also provide an easy extension path for additional convergent charging areas, such as online data charging for post and prepaid, as well as IP prepaid and other charging solutions and related service concepts.
It is becoming increasingly apparent that online cost control is a must if operators wish to ensure the credit-worthiness of their customers, while enabling those same customers to better control their spending.  For both operators and customers, this is a key element to the successful introduction and ongoing uptake of new services and applications. Added to the recognised benefits of service innovation, online cost control goes a long way towards building a business case for convergent charging.

Kari Pulkkinen is VP, Business Development, Comptel

REVENUE ASSURANCE - An effective plug

Considering the scale of revenue losses that many telecoms operators incur, it is vital that they identify the causes, quantify their magnitude and then set about addressing these leakages in a holistic manner. Dominic Smith looks at the main causes of revenue leakage, and outlines ways in which operators can resolve these with the help of end-to-end pre-integrated business support systems

Revenue assurance continues to be a key concern for most telecoms operators. An on-the-show-floor survey carried out by Cerillion at the 3GSM World Congress in February identified it as one of the three most important business issues facing telecoms operators today, with 15 per cent of respondents acknowledging it as their most urgent concern.
This is hardly surprising when you consider the scale of the problem. Latest estimates suggest that as much as 10 per cent of total provider revenue is still being lost due to revenue leakages. In today’s competitive telecoms environment, this situation is unacceptable. And to retain competitive edge, operators need to ensure they are tackling the problem proactively.

Arguably the most important cause of revenue leakage is poor systems integration. Unfortunately, this is often a characteristic of the traditional best-of-breed approach to the implementation of business support systems. With this model, systems integrators are often tasked with implementing and integrating multiple heterogeneous systems to build a complete solution. Invariably, they encounter two key problems that make effective integration difficult.
First, they typically discover incompatibilities between the data models used in the best-of-breed systems. Synchronising data across different applications is complex because of the need to align different ways of identifying the subscriber, service and orders. However, if these mappings are not carried out properly, the operator will struggle to trace orders across the systems.
Second, the systems integrator may not have an in-depth understanding of all the best-of-breed components. As a result, it may integrate the systems inefficiently and introduce data replication or unnecessary layers of complexity, all of which can result in holes where revenue leakage may occur.
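The identifier-alignment problem can be pictured with a small sketch: a single cross-reference that maps one canonical subscriber identity to the native identifiers used by each best-of-breed system, so that an order can be traced end to end. The structure and names below are illustrative assumptions, not a description of any particular integration product.

// Illustrative sketch: cross-referencing native identifiers across CRM,
// provisioning and billing so an order remains traceable end to end.
import java.util.HashMap;
import java.util.Map;

class IdentityMap {
    // canonical subscriber id -> per-system native identifiers
    private final Map<String, Map<String, String>> xref = new HashMap<>();

    void register(String canonicalId, String systemName, String nativeId) {
        xref.computeIfAbsent(canonicalId, k -> new HashMap<>()).put(systemName, nativeId);
    }

    /** Resolve the identifier used by a given system, or null if unmapped. */
    String resolve(String canonicalId, String systemName) {
        Map<String, String> ids = xref.get(canonicalId);
        return ids == null ? null : ids.get(systemName);
    }

    /** An order cannot be traced end-to-end if any system is missing a mapping. */
    boolean traceable(String canonicalId, String... systems) {
        for (String s : systems) {
            if (resolve(canonicalId, s) == null) return false;
        }
        return true;
    }
}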
Process problems
Poor integration typically also results in a host of process problems. It may for example lead to data entry in multiple systems or incompatible configuration between solution components. The consequence of this may be, for example, rating/prepaid charging errors - essentially applying an incorrect price to a customer record or not being able to price the record at all. These errors will result in usage that cannot be billed for and, ultimately, revenue leakage.
Incomplete or incorrect usage data is another primary cause of leakage. This problem often occurs when network switches produce erroneous information that prevents the operator from identifying the type of service used by a customer, or the customer using that service. In either case, the result is an inability to bill for the usage incurred.
Poorly integrated systems with no common workflow can also lead to delays in billing. Sometimes manual set-up processes for new services cause a delay of several days to occur before the operator can start invoicing the customer, inevitably resulting in a loss of revenues. In contrast, a fully automated process with flow through provisioning enables the operator to start billing for service use immediately. 
Invoicing system errors are another potential cause of revenue leakage. Traditionally, the problem is thought to be primarily one of under-billing - operators failing to invoice customers for services received. In fact, over-billing can be just as significant. This typically occurs when a service is terminated but the operator continues to bill for the service in error.
It will often result in costly customer disputes and the requirement to generate refunds or provide credit as a goodwill gesture. Valuable time and resource may be required to fix the offending process, and further revenue leakage will occur indirectly as a result of growing customer dissatisfaction and increased rates of customer churn.
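A simple reconciliation check illustrates how billing for terminated services can be caught: compare each billed charge against the recorded termination date of the service it relates to. The record shapes below are assumptions made for the sake of the example.

// Illustrative over-billing check: flag charges dated after service termination.
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class BilledCharge {
    String serviceId;
    LocalDate chargeDate;

    BilledCharge(String serviceId, LocalDate chargeDate) {
        this.serviceId = serviceId;
        this.chargeDate = chargeDate;
    }
}

class OverBillingCheck {
    /** Return charges raised after the service's recorded termination date. */
    List<BilledCharge> chargesAfterTermination(List<BilledCharge> charges,
                                               Map<String, LocalDate> terminationDates) {
        List<BilledCharge> suspect = new ArrayList<>();
        for (BilledCharge c : charges) {
            LocalDate terminated = terminationDates.get(c.serviceId);
            if (terminated != null && c.chargeDate.isAfter(terminated)) {
                suspect.add(c);
            }
        }
        return suspect;
    }
}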
Launching new products and decommissioning old ones are two other areas where a badly coordinated system can cause further revenue assurance problems. Businesses often leak money both by providing incorrect tariffs for new services and by not taking older, more costly products out of service quickly enough.

Reactive versus proactive
Putting additional systems and checks in place is largely a reactive approach to revenue assurance in a best-of-breed solution. In essence, it is a ‘sticking plaster’ approach to plugging the gaps in the system. Rather than dealing with problems at source, it focuses on putting processes in place which track where revenues are being lost and then try to correct these errors retrospectively.
As a result, problems can stay hidden for some time and their source can remain obscure. Operators may initially believe that they have billing issues or that they are suffering from credit management problems. In fact, when they carry out thorough ‘root cause analysis’, they often discover that their problem is order management related.
If the system is not proactively managed, a mistake made in this initial order process will not be discovered by the operator for a month or six weeks, when the customer receives his first bill and finds he has been placed on the wrong tariff or is being billed for a service he never received, for example. 
In contrast, the best end-to-end pre-integrated solution suites give operators the confidence that all elements within the product suite will work together in harmony. The holistic approach of these systems is clearly in line with operators’ increasing desire to address and monitor the whole lifecycle from the initial order placement right through to billing and cash collection.
These solutions also enable operators to be much more proactive. Rather than merely reacting to problems when they occur, their seamless connectivity offers a means to prevent ‘gaps’ in the system appearing in the first place. In other words, they treat the root cause of the problem rather than the symptoms.
The tight integration of these solutions helps eliminate data replication and synchronisation problems. In addition, embedded workflow and order management functionality allows front-end orders to be successfully transitioned to the back office, ensuring all services can be billed for and eliminating revenue leakage at source.
The pre-integrated nature of these systems allows key business information to be proactively tracked, detailed reports to be generated for each process, revenue leakages to be quickly identified and revenue losses to be minimised. It is hardly surprising, therefore, that ever-greater numbers of operators see end-to-end pre-integrated solution suites as a vital weapon in their ongoing battle to achieve genuine revenue assurance.

Dominic Smith is Marketing Director, Cerillion Technologies

SERVICE CREATION - A factory approach

Rapid assembly of services will be the key differentiator for telcos striving to beat out cable, entertainment and Internet companies encroaching on their customer bases says Brian Naughton

Telecom carriers will have to go through a significant metamorphosis as the lines blur among telecom, entertainment, retail, and Internet domains. In hotly contested triple- and quad-play markets, carriers must become customer service providers (CSPs) capable of making the transition from me-too services to truly converged, on-demand services that differ from those offered by MSOs and non-traditional competitors.

To achieve that end, CSPs will have to work with third-party developers to create scores, if not hundreds, of niche services that leverage their substantial investments in IP networks. After all, they laid the fibre to enable voice, video and data to come together over the same connection in very short time frames. That unique ability should enable CSPs to create prodigious catalogues of converged services without disrupting the underlying architecture.
The goal should be the rapid assembly of services. To that end, a mindset change will be necessary. Carriers will have to move away from the staid and stodgy belief that service launches must take months or years, to a mindset that products can be rolled out in hours, if not minutes.
That will require CSPs to move into a manufacturing mindset, where the concepts of computer-aided design (CAD) and computer-aided manufacturing (CAM) come to fruition. The marriage of the two enables hundreds, if not thousands, of services to be rolled out in an “assembly line” fashion.
In the same way that the car manufacturing industry illustrates components for new products in CAD systems, carriers can illustrate the components of new products and move service “components” along an “assembly line” to CAM systems, where coding, rules and algorithms can be determined automatically.
The lifecycle management enabled by the CAD and CAM principles is now beginning to burgeon in telecom. In other words, the knowledge of bundling will be removed from existing systems and centralised in a location in which all service and product building blocks can be modelled within a “workbench” environment.
That reflects somewhat the precepts of service-oriented architecture (SOA), which promulgates the interchangeable use of building blocks among applications.
 “While SOA has been hyped for many years as a common framework for segmenting operations and coupling services, the reasons for it are far more compelling now,” says Larry Goldman, co-founder and senior analyst with OSS Observer. “The Internet has created an expectation of immediate gratification, so carriers have to figure out how to roll out services at the time of demand.”
After heavy investments in IP networks, Goldman believes operators have to concentrate on the software side of the equation. “CSPs should focus on re-use within their execution environments. That means services must be decoupled from networks for integration with business processes.”
Goldman says carriers can then begin to drive re-use – not only of common data models, but of formats, naming conventions, interfaces, and design processes across the organisation.
To galvanise the concept of ‘re-use’, CSPs must break back-office silos down into components that represent operational elements of network and IT systems, as well as product, service and resource specifications. These components can ultimately be turned into loosely coupled “building blocks” for interchangeable use across different services and products.
As carriers create a library of building blocks, SOA environments become true service delivery platforms (SDP) from which new functionality can be driven (i.e., SIP capabilities around presence, location and more advanced voice mail services that can be used in creative product bundles). By implementing common SIP servers for applications needing connectivity over IP networks, carriers can procure data from disparate sources so that billing authorisation and billing detail are consistent across the organisation.
As new services are created through increasingly agile SDPs and execution environments, CSPs will have to simultaneously orchestrate changes within OSS/BSS applications. The complexity of orchestration for dynamic services will require full automation of activation, ordering and billing processes so that fulfilment and assurance processes can seamlessly work for new service rollouts.
Within the TeleManagement Forum’s Product & Service Assembly (PSA) Initiative, an independent consortium of leading telcos and vendors has been working to develop a revolutionary IT reference architecture to satisfy the burgeoning need to standardise and simplify the way that products and services are designed, assembled and delivered. This reference architecture incorporates the CAD/CAM manufacturing approach by enabling the creation of “building blocks,” which carriers can assemble into service or product offerings.
At the heart of the IT reference architecture is an active catalogue that is a design-and-assembly environment within which service components can be defined and configured without any need for writing code. This catalogue aligns service design and creation with service execution so that product managers can decouple management of product lifecycles from OSS, BSS and network engineering.
Within the building blocks lies a rich library of components and products through which product managers and architects can drive dependencies, prerequisites, exclusions and visual metaphors about service components.
“We have leveraged our deep understanding of the fulfilment process, as well as that of our customers and partners, to define components that could be used interchangeably across services and functions,” says Simon Osborne of Axiom Systems, one of the founders of the PSA Initiative, noting that Cable & Wireless, BT, TeliaSonera, Atos Origin, Huawei, and Oracle have worked to define the building blocks.
To simplify the definition and configuration of services using those building blocks, a visual and intuitive GUI has been created for product managers to view loosely coupled composites or aggregate services, as well as for IT to create, test and publish components for re-use across the organisation.
The essence of the IT reference architecture is that it has been designed with a “bilateral” top-down/bottom-up approach in mind.
 “This IT reference architecture empowers marketing professionals to define service components without having to go through IT departments, and enables IT to use pre-tested business options and variants to drive component use across the organisation,” comments Osborne.
For example, ringtone downloads, VoIP, VoD, and find-me services each require their own sets of fundamental parameters around availability, order-taking and activation. However, there inherently exists overlap in what each service requires. The active catalogue helps carriers to leverage that fact by establishing interchangeable building blocks in one catalogue that can then be rearranged to support other services as well. Rather than having to write new code to launch each new service, carriers can specify necessary attributes in reasonably basic forms so that one catalogue and order-handling system can handle many different services.
Simon Farrell, IT Architect, Cable & Wireless comments: “We can define residential VoIP and the prerequisites for broadband DSL, and are able to stitch together relationships among end points to execute on fulfilment request” - demonstrating that graphical representations, such as a ‘green light’ for ‘it’s a go’ or a ‘red light’ for ‘outstanding dependencies’, enable C&W to assemble end-points that must exist on the enterprise service bus (ESB).
In other words, there are distinct interfaces, order types and end points specific to any services that are to be fulfilled. Through the interface, the active catalogue provides an environment for modelling end points into an assembly landscape that defines relationships and polices exceptions or dependencies.
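As a rough illustration of that green/red dependency check, the sketch below models a service's prerequisite end points and reports whether anything is still outstanding. The end-point names and rules are invented for the example.

```python
# Hypothetical end points currently registered on the enterprise service bus (ESB).
available_end_points = {"dsl_provisioning", "voip_platform"}

# Prerequisites modelled in the catalogue: residential VoIP depends on DSL provisioning.
prerequisites = {
    "residential_voip": {"dsl_provisioning", "voip_platform"},
}

def readiness(service: str) -> str:
    """Return 'green' when every modelled prerequisite end point exists,
    otherwise 'red' with the outstanding dependencies listed."""
    missing = prerequisites[service] - available_end_points
    if not missing:
        return "green"
    return "red (missing: " + ", ".join(sorted(missing)) + ")"

print(readiness("residential_voip"))  # green once both end points are registered
```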
 “A residential home triple play service that requires a broadband and VoIP server, as well as an IPTV server, will rely on rules around what third parties must be called upon to provide that hardware, and in what sequence those systems should be called upon,” explains Osborne. “That sets the stage for how data travels interface to interface as the service transitions through the lifecycle.”
While the active catalogue does not run every task, it calls the service end points that, in turn, run the processes externally. “This active catalogue provides a way of defining the end point and rules around those endpoints, so fulfilment dynamically figures out what end points to call upon,” he says.
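A hedged sketch of that sequencing idea follows. The end-point names and the ordering rules are invented; the point is that fulfilment walks a modelled sequence and calls the external end points, rather than hard-coding each service.

```python
# Ordered activation rules per service: which external end points to call, and in what order.
activation_sequence = {
    "triple_play": ["broadband_provisioning", "voip_platform", "iptv_headend"],
    "residential_voip": ["broadband_provisioning", "voip_platform"],
}

def call_end_point(name: str, order_id: str) -> None:
    # Placeholder for the external call a real system would make over the ESB.
    print(f"order {order_id}: calling {name}")

def fulfil(service: str, order_id: str) -> None:
    """The catalogue does not run each task itself; it resolves the end
    points for the service and invokes them in the modelled sequence."""
    for end_point in activation_sequence[service]:
        call_end_point(end_point, order_id)

fulfil("triple_play", "ORD-1001")
```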
As orders are fulfilled through the active catalogue, the software creates an inventory of pre-existing capabilities for end users. The software records against every instance of an order, using the same language that was modelled at service end points. Ultimately, that means CSPs end up with rule sets that are usable for up-sell and cross-sell capabilities. “If 35 per cent of customers have a certain type of access, CSPs can target them with new services that tie to that type of access,” notes Osborne.
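A simplified sketch of the resulting inventory query is shown below; the access types, customers and percentages are purely illustrative.

```python
from collections import Counter

# Inventory built up as orders are fulfilled: one record per customer order,
# expressed in the same terms that were modelled at the service end points.
fulfilled_orders = [
    {"customer": "c1", "access": "fibre"},
    {"customer": "c2", "access": "dsl"},
    {"customer": "c3", "access": "fibre"},
]

access_counts = Counter(order["access"] for order in fulfilled_orders)
share = access_counts["fibre"] / len(fulfilled_orders)
print(f"{share:.0%} of customers have fibre access")

# Target that segment with a new service that depends on fibre access.
targets = [o["customer"] for o in fulfilled_orders if o["access"] == "fibre"]
print("cross-sell candidates:", targets)
```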
In the long run, that ability drives versioning and lifecycle management. “If a service is to be deployed for only six months, there can be published rules stating that the service will be decommissioned in a certain time period, and warnings can be issued at the end of the period to those parties with bundled components.”
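One way such a published lifecycle rule might be expressed, purely as an illustration with invented dates and windows:

```python
from datetime import date, timedelta

# Illustrative lifecycle rule published alongside the catalogue entry.
launch_date = date(2007, 1, 1)
decommission_date = launch_date + timedelta(days=182)  # roughly six months
warning_window = timedelta(days=30)

def lifecycle_status(today: date) -> str:
    """Report whether the service is active, nearing decommission, or withdrawn."""
    if today >= decommission_date:
        return "decommissioned"
    if today >= decommission_date - warning_window:
        return "warn parties with bundled components: decommission imminent"
    return "active"

print(lifecycle_status(date(2007, 6, 15)))
```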
That can be particularly important among partners who are re-branding wholesale offerings, or for inter-departmental strategies at large telcos, where orchestrating processes can be complex. “Ultimately, you get a federation of catalogues with clear demarcation of where the SLAs are among different departments,” Osborne explains. With a federation of catalogues, CSPs start to create a topology through which all catalogues and associated end points can be referenced for more intelligent cross-sell and up-sell actions.
To ensure there is an accurate model of infrastructure, this revolutionary IT reference architecture has been designed to sit on top of most major network resource management systems (inventory) that serve as databases of record for carriers.
The architecture can serve as the foundation for collaboration among product managers, service and network engineers, as well as operational communities. By creating a central point for standardising multiple vendors' products, carriers can move closer to the SOA principles they strive to embrace.
As carriers continue to expose their design environment to different departments and customers, they can begin to truly “mass market” the configuration of products. That sets the stage for commonality in how components, access controls and security measures are employed across the enterprise and partner environments.
As that commonality grows, carriers can get closer to self-service in management of product and service lifecycles. Then, they can be better positioned to create value-adds in their IP services domain—especially if they can roll out sophisticated services in a matter of hours, or even minutes.

For further information about the IT reference architecture and the active catalogue, please visit www.psainitiative.org
Brian Naughton is VP Strategy & Architecture, Axiom Systems

SERVICE QUALITY MANAGEMENT - Virtue in finding fault

Service quality management offers a critical pathway to the delivery of quality of service in developing markets, says Tony Kalcina

Accidents happen. People make mistakes. Nothing and no one is infallible. We all know this. Which is why, when we buy a product or service, what matters is not so much whether it has faults, but what happens after a fault occurs.
It is a well-known maxim in client service that a customer whose problem has been dealt with in an exemplary fashion is likely to be more satisfied and loyal than one who has never experienced a problem to begin with. The former knows from experience that they can rely on the provider of the service or product; the latter has no idea what might happen if things go wrong.

This principle applies as much in telecommunications as elsewhere, but with an added twist: customers want to have the certainty that problems will be dealt with effectively and efficiently before they happen.
This means service providers have to provide a high level of assurance at the contract stage, typically through a service level agreement (SLA). But not all SLAs are created equal.
In fiercely competitive developing markets, the ability to offer and deliver on meaningful, measurable and manageable standards of service is becoming a major competitive differentiator.
Telecommunications SLAs traditionally underpin service quality management (SQM) programmes, which aim to monitor performance, pinpoint faults and prevent them from recurring.
SQM is valuable to corporate customers because, in theory, it provides analysis and verification of the performance they are paying for. And, in the event of a problem, it serves to provide a measure of the recompense they might be entitled to.
For operators in developing markets, SQM also has an important role to play in the supply chain by policing incumbent operators, for instance when competition rules allow Local Loop Unbundling (LLU) for third-party providers of DSL services.
The inclusion of an SLA in the supply chain process ensures protection for third party operators and their customers; if incumbent operators fail to undertake the LLU in the time agreed, the third party operator can often claim a rebate.
At the same time, the end customer may also be entitled to compensation for failure to deliver the requisite level of service mandated by the regulatory body.
In practice, this can be problematic to claim at an individual level, but the automated monitoring and reporting of SLA violations can be a useful input to the process of managing collective performance by the incumbent. 
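A hedged sketch of how such automated SLA monitoring might work is shown below. The 10-day completion window, the rebate figure and the order records are invented for the example, and calendar days stand in for working days.

```python
from datetime import date

# Illustrative SLA: the incumbent must complete each LLU order within an agreed window.
SLA_DAYS = 10           # hypothetical agreed completion window
REBATE_PER_BREACH = 25  # hypothetical rebate, in euros

llu_orders = [
    {"id": "LLU-1", "requested": date(2007, 3, 1), "completed": date(2007, 3, 8)},
    {"id": "LLU-2", "requested": date(2007, 3, 1), "completed": date(2007, 3, 20)},
]

# Flag every order whose completion exceeded the agreed window.
breaches = [o for o in llu_orders
            if (o["completed"] - o["requested"]).days > SLA_DAYS]

print(f"{len(breaches)} SLA violation(s); "
      f"rebate claim: {len(breaches) * REBATE_PER_BREACH} EUR")
```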
Elsewhere, it stands to reason that savvy customers will pick suppliers whose SLAs offer the highest level of financial security; in other words, those which pay out the most in the event of a problem.
This means that in order to satisfy the most demanding customers, telecommunications operators need to embrace SQM so that any faults and liabilities can be fully verified to the satisfaction of both the operator and its customers.
SQM allows operators to measure and gauge the validity of customer complaints; whilst the customer should always be put first, operators can determine the need for - and level of - compensation required for a perceived service fault. Clearly, then, there are massive benefits to be had from being seen to possess a market-leading SQM programme. But not all operators currently have one.
Currently, performance data, where it is available, often only involves some fairly basic measurements of the state of the network. In addition, delivering SQM often relies heavily on expensive manpower.
An operator will not be able to cost-effectively differentiate its service offering unless manual steps are kept to an absolute minimum and, preferably, eliminated altogether to avoid the higher cost and delays of manual processes.
Finally, many of the current low-cost diagnostic tools that are in place can only provide basic alerts to the effect that certain pieces of equipment are failing, without identifying which customers (if any) are affected, or how.
What this means in practice is that operators relying on these basic SQM tools cannot truly be said to be delivering quality of service to their customers—and risk either losing credibility or paying over the odds for SLA failures. The situation need not be thus, however.
More complex SQM tools exist. They combine service fulfilment and assurance capabilities and can be integrated with a provisioning package to automatically identify faults or dips in service and restore the services or compensate customers with additional offers or refunds.
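As an illustration of that closed loop, the sketch below flags services whose measured availability dips below a target and triggers a restoration and compensation workflow. The threshold, readings and actions are invented; in a real deployment the handler would call the provisioning and billing systems rather than print messages.

```python
# Hypothetical per-service availability readings fed from assurance probes.
readings = {"residential_voip": 0.999, "iptv": 0.962}
TARGET_AVAILABILITY = 0.995  # illustrative SLA threshold

def handle_dip(service: str, measured: float) -> None:
    """Closed loop: attempt automated restoration first, then compensate."""
    print(f"{service}: availability {measured:.3f} below target")
    print(f"{service}: triggering automated restoration workflow")
    print(f"{service}: raising goodwill credit for affected customers")

for service, measured in readings.items():
    if measured < TARGET_AVAILABILITY:
        handle_dip(service, measured)
```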
Clarity, for example, offers a pre-integrated product and database that features 17 elements of the TeleManagement Forum’s enhanced Telecom Operations Map (eTOM) model for Operational Support Systems (OSS) in a single suite.
These systems allow operators to see the impact that network operations are having on revenue and customers’ experience from both a service fulfilment and assurance perspective.
Clarity’s OSS is network and services neutral, rapidly configurable and widely deployed, supporting an end user base of 50 million subscribers worldwide. Companies that have taken SQM seriously have reaped significant benefits.
Sri Lanka Telecom, to take an example from the developing world, has been able to clear 84 per cent of faults within hours thanks to a single OSS information store for fulfilment and assurance data, coupled with real-time correlation and integrated SQM workflow processes.
Other operators can follow this path. All that is needed is a greater awareness of the importance of SQM as a tool for achieving competitive advantage. Telecoms operators, specifically in developing markets, must realise the importance of service assurance in helping to predict, monitor and manage in real time the availability and quality of services, ensuring conformance to the business’s strategic SQM objectives.
Investing in OSS to support state-of-the-art SQM programmes is no longer a ‘nice to have’, but increasingly a vital component of strategies to attract and retain loyal residential and commercial customers, improve operational effectiveness and to accelerate the order-to-cash process. SQM may have until now been something of a minority interest for telecommunications operators. But as the battle for customers heats up in developing markets, it looks set to become a key weapon for competitive advantage.

Tony Kalcina is founder of Clarity

IMS CUSTOMER EXPERIENCE - Going to the next level

The many bells and whistles promised by IMS make it essential for operators to understand and monitor all the device-types used by customers, if they are to ensure high standards of customer experience, says Matt Herdlein

As emerging IMS platforms open the doors to real-time, interactive multimedia services, taking care of the customer experience becomes an even more critical ingredient for achieving success. Investments made to support the emerging service complexity could be wasted if customers cannot derive the intended value. Moreover, the assessment of service quality would be misleading unless the actual performance of user devices is considered as well.  As services become more sophisticated and complex, more functionality and features are migrating to user devices, thus making them an integral element in the overall service quality equation.

With 3G handsets based on open operating systems, the numbers of both device makers and third-party application vendors have skyrocketed. Handango, a leading supplier of applications for handsets and Personal Digital Assistants (PDAs), reported more than 11,000 new applications in 2005 from more than 1,200 new vendors, and formal “type approval” has given way to vendor certification. The GSM Suppliers Association reported that in 2006 there were 212 GSM/EDGE terminal devices available in the market from 33 vendors. Handset operating systems come from companies such as Microsoft, Symbian, and Qualcomm, among many others.
In this environment, the challenge for operators is clear. They must be able to deploy an enormous and quickly growing range of services on a host of intelligent devices, and ensure that those services operate successfully on each device. In short, operators need to augment the scope of service quality to include user device performance to better assess the actual customer experience derived from their IMS investments.
To further understand the problem, consider the following simple case: if a new service fails 90 per cent of the time on a user device that serves 10 per cent of the market, an aggregate analysis of the service will show only a nine per cent failure rate. However, the reality is that the 10 per cent of customers using that device are unhappy and may move to another operator or stop using the service.
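As a quick check on that arithmetic, the sketch below uses made-up device shares and failure rates to show how a blended figure hides a device-specific problem:

```python
# Hypothetical device mix: market share and per-device failure rate.
device_stats = {
    "device_a": {"share": 0.10, "failure_rate": 0.90},  # the problem device
    "device_b": {"share": 0.90, "failure_rate": 0.00},  # healthy devices
}

# Aggregate (blended) failure rate across the whole customer base.
aggregate = sum(d["share"] * d["failure_rate"] for d in device_stats.values())
print(f"Aggregate failure rate: {aggregate:.0%}")  # 9%

# The per-device view exposes the real problem.
for name, d in device_stats.items():
    print(f"{name}: {d['failure_rate']:.0%} of attempts fail, "
          f"affecting {d['share']:.0%} of customers")
```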
MobileGuru, a UK company that sells mobile phones and accessories, compared handset performance on a UK network and found that call drop rates can vary from about two per cent for the best performing devices to nearly 10 per cent for the worst performing devices. Even then, within a single device type, there may be significant variations in performance caused by batch problems in manufacturing, user configuration errors, or software download problems.
These issues are not new. GSM operators faced the dilemma of whether to issue recalls or modify their networks in the mid-nineties, when two of the leading handset vendors were found to have compatibility issues with their networks. At that time, they had to make software changes under controlled conditions since type approval would be invalidated if the changes were done incorrectly. The recall was avoided in those cases, but smaller manufacturers did have to recall devices.

Identifying the problem
The advent of Universal Serial Bus (USB), and changes in component prices, along with the relaxation of type approval, made home-based upgrades to handset software more common. A handset's International Mobile Equipment Identity (IMEI) can be used to identify the model and even the place of manufacture, but it is no longer a reliable indicator of software build or application set.
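For illustration, the Type Allocation Code (the first eight digits of the IMEI) can be mapped to a device model through a lookup table, but nothing in the IMEI reveals which software build or applications the handset is actually running. The TAC values and model names below are placeholders, not real allocations.

```python
# Hypothetical TAC lookup: the first 8 digits of an IMEI identify the model,
# but not the software build or installed applications.
TAC_TO_MODEL = {
    "35123456": "ExampleCo Model A",
    "35876543": "ExampleCo Model B",
}

def model_from_imei(imei: str) -> str:
    tac = imei[:8]
    return TAC_TO_MODEL.get(tac, "unknown model")

print(model_from_imei("351234567890123"))  # ExampleCo Model A
```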
Further complicating the matter is the fact that handset operating systems are available to suit the preferences of any manufacturer or vendor. If you like Java, try SavaJe; if you prefer Linux, then look at MontaVista; or if you are a Microsoft fan, there is a version of Windows available. The market leader, Symbian, grew out of UK PDA innovator Psion, and if you don't like the Symbian software, then Nokia supplies Series 60. Handango reports having more than 190,000 titles from 16,000 content partners supporting nine different handset operating systems.
Vodafone has reported a significant linkage between the rate of churn for residential customers and the numbers of services they use on a regular basis – the more services used, the more likely customers will stay. Of course, services will only be used if they work reliably.
The cost of unreliable service to operators can be measured not just in lost revenue but also in lost handset investment. Nokia, which supplies one in three of the world's handsets, reported that the average selling price of its handsets is US$125 (103EUR), while Sony Ericsson, which has more high end products, has an average selling price of $180 (149EUR). Some retailers in the UK are offering all Nokia handsets for free when bundled with a post-paid tariff, so the cost to the operator is around $146 (120EUR) for each handset.
The abundance of software and device options presents a number of challenges for operators. It also provides a unique opportunity for operators to better serve valuable customers. When root causes of problems can be traced to individual customers or device types, operators can develop new application versions or make changes to operating systems to correct the problems.
Consider, for example, the value to enterprise customers who, according to Yankee Group, make up 28 per cent of mobile operators' revenue. These customers can be advised on which mobile phones perform best for their services or can be given upgrades. Customer satisfaction would increase, as would retention levels for a valuable market segment.

Inspecting the packets
To monitor service quality, it has always been imperative that operators have ready access to reliable data. For voice calls, there are many options, from Call Detail Records (CDRs) to signalling probes. But for data services, and to support the move to IMS, more sophisticated, deep packet inspection probes are required.
Data service quality is determined by three main variables: network performance, device performance, and portal performance. Degradation in any of these variables will result in a poor customer experience.
Any effective analytical tools should permit early identification of trends so that solutions can be developed and customers informed before service is affected. Hotspots may occur when changes are made to services, access networks, or core networks, or when handset operating system upgrades are introduced, but these may be difficult to identify among millions of users, and operators must take care that “normal” user actions are not misinterpreted.
For example, with voice, a short call (one quickly terminated by the user) may raise no alarms but actually involve unacceptable voice quality, whereas with data, short sessions may be the result of high throughput, and long sessions may indicate problems.
By comparing different handset models running the same service, patterns can be established. Analysts should also consider whether particular models are only available to a limited user group, such as prepaid. Traditional service quality monitoring gets a view of specific services based on consolidated data extracted from the network, and by using field probes that perform synthetic transactions depicting various services and user behaviours. Although these approaches have merits, they fail to analyse service quality from actual service transactions that customers make. In other words, they fail to capture the trends and nuances of the true customer experience.
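A rough sketch of that kind of per-device comparison from actual transaction records is shown below. The session records, the throughput figure and the thresholds are invented for the example; it simply applies the heuristic that, for data, a long session moving little traffic is more suspicious than a short one.

```python
from collections import defaultdict

# Illustrative data-session records extracted via deep packet inspection.
sessions = [
    {"device": "model_x", "duration_s": 12,  "bytes": 4_000_000},
    {"device": "model_x", "duration_s": 900, "bytes": 300_000},
    {"device": "model_y", "duration_s": 15,  "bytes": 3_500_000},
]

def looks_problematic(s: dict) -> bool:
    """For data, a short session with high throughput is usually fine;
    a long session moving little data may indicate a problem."""
    throughput = s["bytes"] / max(s["duration_s"], 1)
    return s["duration_s"] > 600 and throughput < 10_000  # invented thresholds

problem_rate = defaultdict(lambda: [0, 0])
for s in sessions:
    problem_rate[s["device"]][0] += looks_problematic(s)
    problem_rate[s["device"]][1] += 1

for device, (flagged, total) in problem_rate.items():
    print(f"{device}: {flagged}/{total} sessions flagged")
```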

Human behaviour
It is, therefore, essential for operators to understand and constantly monitor the “behaviour” of all the various device-types used by their customers. Specifically, it is important to understand service performance by device-type as well as by specific device configuration, to understand how customers are affected by network or device issues.
Some of the typical issues that mobile operators face on a daily basis are:
• How to choose a device (or devices) for a new service
• During trials, how to measure device performance by type, service, and configuration
• How to view device performance to identify service bottlenecks before they can affect the service, and how to identify affected (and potentially affected) customers
• How to know if a new configuration or update is performing as expected
• How to identify devices with high support overhead.
To answer these questions, operators need to go beyond probes that make educated guesses about performance based on small, “synthetic” samples. Operators need a solution that aggregates actual device performance around the clock from every service transaction.
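A minimal sketch of that kind of continuous, per-device aggregation follows; the device names, transaction counts and the deviation threshold are invented. It flags any device type whose failure rate stands well above the fleet baseline.

```python
# Failure counts aggregated from every actual service transaction, by device type.
transactions = {
    "model_x": {"attempts": 20_000, "failures": 300},
    "model_y": {"attempts": 18_000, "failures": 1_900},
    "model_z": {"attempts": 22_000, "failures": 280},
}

fleet_failures = sum(t["failures"] for t in transactions.values())
fleet_attempts = sum(t["attempts"] for t in transactions.values())
fleet_rate = fleet_failures / fleet_attempts

# Flag device types whose failure rate is well above the fleet baseline.
for device, t in transactions.items():
    rate = t["failures"] / t["attempts"]
    if rate > 2 * fleet_rate:  # invented threshold
        print(f"{device}: {rate:.1%} failure rate vs fleet {fleet_rate:.1%} - investigate")
```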
Back to reality
The reality is that virtually every mobile operator supports dozens of device-types and millions of active devices. That means, to manage service quality and customer experience, operators must do more than monitor their networks and operations. They have to know, at any given time, the capabilities and limitations of all of their user devices, how those devices are performing, how they will handle new service offerings, and how customers are using them.
When operators have access to this level of user device performance intelligence, the business benefits are invaluable. For example, operators can: “see” how customers are reacting to marketing campaigns and special offers; recommend the best devices for customers when orders for new or expanded services are received; respond to customer calls with a holistic understanding of the customers' experience; offer targeted promotions to individuals or groups; provide incentives to device vendors based on verifiable performance; and offer more focused SLAs.
As the choices for new and more complex services continue to grow, and user devices that offer more functionality emerge, mobile operators need to understand service quality from the device perspective. Operators can expand their traditional service monitoring arsenals and realise the business benefits that can result from higher customer satisfaction.

Matt Herdlein is Executive Director, Service Management, Telcordia