Features

David Knox examines the potential of GPS mobile phones, and how real-time charging and control systems can help the innovation successfully map the future of the mobile phone business as a new marketing device for consumers

GPS ON MOBILE PHONES - Marketing maps

This is the year of the GPS revolution – not since the launch of 3G phones has a technical achievement stirred this much industry excitement. Every major mobile maker around the world is scrambling to become the first to launch the most efficient and user-friendly satellite navigation system for its new handsets. No longer confined to the dashboard of your car, GPS technology will be available in many new handsets and will not only tell us where we are, but also give us tips on where to dine, shop, or see a film.
The concept of GPS navigation software for phones has been around for a while and has made some significant progress in the US market, where it is currently used as an enhanced emergency system that enables emergency operators to work out the location of someone calling from a mobile phone to help them out of trouble.
In Europe, however, the success has been minimal. Despite its general availability, mobile GPS has never hit the mainstream jackpot – primarily because of fussy, user-unfriendly gadget requirements (i.e. a separate GPS module) and GPS's unsuitability for pedestrians trying to get from point A to point B without a car.
This is all about to change with a range of innovative new handsets, complete with integrated GPS receivers, offering a useful and more relevant piece of navigational technology for any user. Leading the pack of new phones is Nokia's N95, which has already gone on sale in the UK and other markets. Boasting excellent, computer-style graphics, the handset can be used as a full-blown navigation device, whether in the owner's vehicle or on foot.
GPS also promises greater customisation, enabling mobile users to combine calendar and contact functionality with navigation. For example, the user can tell the device to navigate them to their next appointment, which may be a friend's birthday party, and also to navigate via a shop or outlet selling whatever they may need to pick up en route.
This level of mapping sophistication opens many doors for both the consumer and the mobile operators. The GPS phone can help users make lifestyle choices by not only telling people where they are going but, with the help of advertising campaigns, letting them know what can be enjoyed along the way. Imagine turning on a GPS phone outside Bond Street tube station and trying to find the best route to the nearest park. The phone will not only be able to tell you how to get to Green Park but also inform you of relevant special offers at the Fenwicks department store – which you need to pass to get to the park.
Indeed, mobile GPS will not just be about helping users get to a location, but will also present an opportunity to enhance their lifestyles through location-based marketing. This raises the question of whether customers will be willing to accept mobile marketing with their GPS handsets, as well as the privacy issues that go along with it.
Blogging and GPS
The blogging phenomenon suggests that many customers are ready for a customised approach to marketing and social networking. In Japan, for example, car company Honda has already introduced a GPS device that allows drivers to make comments on points of interest along their route. Whether it is providing a review of a restaurant or a description of a museum exhibit, the navigation system offers a social-networking opportunity that allows drivers to make information available to other GPS users in real-time.
What has emerged in the Internet world is indisputable evidence that users rely heavily upon the word of like-minded individuals when making choices on which restaurant to eat at, which hotel to stay at and so on. Combining navigation functionality with instant access to reviews and tips along the way will provide the mobile user with a truly mobile Internet experience.
Mobile GPS has the potential to fine-tune this method of networking by allowing each subscriber to specify their interests and subsequently receive customised itineraries, targeted advertisements and other useful information based on their destination – without the need for them to trawl the Internet to find what they are looking for. This could be the latest new restaurant located close to the theatre which the GPS system is helping the mobile user find on the map, or the nearest toyshop on the way to attending the birthday party of a friend's child. Combining mobile GPS and marketing is not merely a possibility but an inevitability, as people become more engaged with the technology and want to enjoy as many new experiences as possible.
So how will mobile GPS be transformed into a profitable, commercial success? Convincing customers to purchase GPS handsets, which are still relatively expensive, is the first step. The second is the delivery of the aforementioned marketing tools. A crucial element in launching a successful ad campaign is effective consumer profiling, which has already been touched on. Simply put, in order for any mobile operator to make money out of location-based services, it must be able to earn revenue from value-added services such as mobile marketing and targeted information provision, as there will not be a charge for the mapping service itself.
Real-time charging and control applications can help operators conduct effective advertising campaigns through their ability to store and access user profile information, and then use that information in combination with real-time location data to deliver relevant adverts to the mobile device.
Information about the brand tastes and interests of the mobile user can be gathered by the operator before the user agrees to subscribe to advertising. All this information can then be stored in the network and be accessible to the charging and control solution to ensure that advertising is targeting the right audience and will appear whenever a customer uses the GPS device to map out their journey. So, for example, if a mobile user is identified as a football fan and has turned on their GPS phone to find directions to the stadium where a match is being held, then the charging and control solution could access previously stored profile information in real time and automatically check whether anything near the stadium would appeal to them. If there is a restaurant near the stadium offering a two-for-one lunch deal, then an advert could appear letting the subscriber know about the offer.
A charging and control device can also identify that the user has viewed the advert and subsequently check whether it has been acted upon. For example, if the user wants to avail himself of the offer, he requests a promotional code by clicking a link on the advert, and then uses this code to validate the offer in the establishment. This potential method of monitoring not only helps to track the success of a particular campaign but also to determine the revenue share generated as a result. The charging and control device would be able to provide the operator with details of how many and which subscribers viewed the advert, and how many of those who received the message actually responded to the promotion by going to the restaurant. Most importantly, the charging and control device could calculate the revenue share from the campaign based on the agreement between the operator and the advertiser.
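As a rough illustration of the flow just described, the sketch below shows how a charging and control component might combine a stored subscriber profile with a real-time destination to select an advert, then log views and redemptions for revenue-share reporting. It is a minimal sketch only: the class and method names (SubscriberProfile, Offer, selectOffer and so on) and the figures are invented for the example and do not represent any particular vendor's product.

import java.util.*;

// Minimal sketch of a charging-and-control advertising flow.
// All class, method and campaign names are hypothetical.
public class AdCampaignSketch {

    record SubscriberProfile(String msisdn, Set<String> interests, boolean optedInToAds) {}
    record Offer(String id, String description, String interest,
                 String location, double revenueSharePct) {}

    // Campaigns agreed between the operator and advertisers.
    static final List<Offer> CAMPAIGNS = List.of(
            new Offer("LUNCH-241", "Two-for-one lunch near the stadium",
                      "football", "stadium", 0.15));

    // Views and redemptions, logged for campaign tracking and revenue-share reporting.
    static final Map<String, List<String>> VIEWS = new HashMap<>();
    static final Map<String, List<String>> REDEMPTIONS = new HashMap<>();

    // Called when the subscriber asks the GPS phone for directions to a destination.
    static Optional<Offer> selectOffer(SubscriberProfile profile, String destination) {
        if (!profile.optedInToAds()) return Optional.empty();
        return CAMPAIGNS.stream()
                .filter(o -> o.location().equals(destination))
                .filter(o -> profile.interests().contains(o.interest()))
                .findFirst();
    }

    static void recordView(Offer offer, SubscriberProfile profile) {
        VIEWS.computeIfAbsent(offer.id(), k -> new ArrayList<>()).add(profile.msisdn());
    }

    static void recordRedemption(Offer offer, SubscriberProfile profile) {
        REDEMPTIONS.computeIfAbsent(offer.id(), k -> new ArrayList<>()).add(profile.msisdn());
    }

    // Operator's share of the revenue generated by redeemed offers in a campaign.
    static double revenueShare(Offer offer, double valuePerRedemption) {
        return REDEMPTIONS.getOrDefault(offer.id(), List.of()).size()
                * valuePerRedemption * offer.revenueSharePct();
    }

    public static void main(String[] args) {
        var fan = new SubscriberProfile("447700900123", Set.of("football", "dining"), true);
        selectOffer(fan, "stadium").ifPresent(offer -> {
            recordView(offer, fan);         // advert delivered and viewed on the handset
            recordRedemption(offer, fan);   // promotional code used in the restaurant
            System.out.printf("%s redeemed; operator share: %.2f%n",
                    offer.description(), revenueShare(offer, 20.0));
        });
    }
}

In a live deployment the profile, campaign and redemption data would sit in the network and the charging system rather than in memory, but the decision flow would be broadly similar.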
Handsets with GPS technology are ideally placed not only to enable navigation and true location based services, but also to combine this dynamic information with social networking and targeted marketing services.
The next step is creating the commercial models necessary to make GPS handsets a hit in the marketplace. So far the concept has had very few critics and everyone seems to be taking a 'wait and see' approach when determining the success of the pocket-size navigator. What is certain, however, is that the demand for social networking, and the marketing it enables, has already been proven by the blogging generation. Whether it will win over mobile customers at the same phenomenal pace is anybody's guess, but perhaps the combination of GPS navigation capabilities and real-time access to targeted advertising and networking will finally deliver on the promise of true location-based services.

David Knox is Product Marketing Director at VoluBill
www.volubill.com


For 3G to be a success, Alon Barnea explains, users need to be motivated to use it and be given an easier way to adopt the technology

3G TAKE-UP - Pulling the usage trigger

The introduction of 3G and video calls was not met with the fanfare response the industry expected. Even now, with over 100 million subscribers (and a rapidly growing number at that), 3G mobile users still represent a rather small part of the overall two billion mobile subscribers worldwide, and video usage within this video-enabled community is still deemed a disappointment. Point-to-point video calls are evidently not a big enough draw to encourage people to jump on the 3G bandwagon, and with an estimated one in ten mobile phone users actually owning a 3G phone, it seems unlikely that person-to-person video calling will be the phenomenon that SMS has become. Though most share the notion that video will become mainstream and a major revenue source, we still need to address the question: what will make video communications a success?

The UMU factor
Much has been said about the limiting factor of 3G video, stemming from people’s reluctance to accept “intrusive surprise” video calls. That’s where the User Motivated Usage (UMU) factor comes in. The UMU factor relates to video applications where an entity is generated, at a given moment, to motivate users to make (rather than receive) a video call. It is the key to elevating the level of video usage over mobile devices by taking out the “surprise call” element and the absolute necessity to be seen.
For 3G to be a success, users need to be motivated to use it and be given an easier way to adopt the technology; rather than having to wait for their friends to catch on too. With the UMU factor, 3G can be used by anybody today for an exciting experience that is independent of the 3G availability of other participants.
Naturally, “traditional” video communication is happening now. Take a group of female friends, for example, getting ready for a night out together. The advent of 3G mobile to PC communication opens up a new avenue for these women to get their friends’ opinions on how they look in a particular outfit. Using their 3G phones, or webcams on their PCs, the group of women can ‘meet’ in their own online community and compare clothes from their separate homes before meeting later in the evening.
Another example is that of a travelling businessman. While on the TGV through France, he can still take part in a face-to-face briefing with a client based in Scotland, using his 3G mobile phone to call his client’s PC, while conferencing in his partner, who is sitting at her desk in Brussels.
But wouldn’t it be appealing to those avid sports fans to see and hear the Most Valuable Player right after a major basketball game?  Members of the team’s fan club can call to see and hear what the MVP has to say, and maybe even be selected by a moderator to be seen by all of the viewers in the fan club community to ask a question.  At the same time, someone sitting in their living room watching the game on their new HDTV can join the live video session as well because their cable STB is also a video-enabled client. It could be just your luck that you are stuck at work, but you can enjoy this live from your desktop PC.

The secrets of success
For 3G to be a success, operators and service providers need to answer the following questions:
1.    Have we secured a strong enough trigger/interest for usage?
Without a reason to use 3G, why should users start paying out extra to make video calls on their 3G phones? Users need to be given something to inspire them to pick up their 3G phones and make a video call. No matter what the lifestyle, users need to know that 3G can benefit them; that it is there for everyone.
2.    Is there a specific context/timing for when a service will be used (here and now)?
A 3G mobile device is smaller, of lower quality and more expensive than any other medium (PC, TV, STB etc.), but it is the only truly mobile device and is always available. An event or community that triggers a use at a specific time, or a context that motivates the user to join at that moment, will create the need and reason to use a mobile device. If the service is timed with the end of a sporting event, and targeted to the viewers who are at that time mobile, then the usage trigger exists. When a service is geared for people on the move, then the user motivation is created.
3.    What is the guaranteed success and completion rate?
Successful services must have high completion rates. The way to guarantee this is to deploy a service that is not dependent on a high ratio of other 3G enabled handsets – for example where participants and content can also originate from the IP. In this manner, those who do own 3G phones will be ensured a successful service, with a 100 per cent completion rate.  Furthermore, a converged environment, including video-enabled PCs, expands the boundaries of the relevant communities, and increases the levels of participation thus increasing adoption rates.
The more limited or complicated a medium is, the more necessary it becomes to correctly understand the above factors to ensure usage and success. In the context of video and the mobile handset, the success factors are definitely challenging, and given the low usage rates experienced by the industry today, it can safely be said that the answers indicated above still need to be integrated into the 3G services provided by the operators.

Pulling the trigger (of usage)
Users need to be shown, and delivered, the true benefits of what can be done with 3G and the advantages it offers. What video over mobile has to offer, besides a small screen and limited quality, is unprecedented mobility and availability; no matter where you are and what you’re doing (within reason, of course) you can always see your friends and family, make that meeting, and enjoy video content. By taking advantage of the mobility and availability of 3G and considering the success factors, an interesting and promising future for mobile video can be seen.
If the UMU factor is put into effect, then the usage possibilities are endless. Even now, some of these possibilities have become reality. 
Imagine how many such UMU “events” we are missing every day! 
The options for 3G are infinite – imagine how much mobile video traffic can and should be generated when influenced by the UMU factor. Consumers just need a little push in the right direction and before you know it they will be saying: “I’m here, I’m interested and I’m ready to pay! And I will use the best device and access method that is available to me at any given moment.”

Alon Barnea is General Manager of RADVISION’s mobile business

Since joining forces last year, OSS/J and the TM Forum are proving that by combining their strengths, not only the OSS community, but the communications industry as a whole, has much to gain.  Doug Strombom takes a look back over the past twelve months

OSS/J - In Perfect Harmony

The OSS through Java Initiative’s (OSS/J) decision in early 2006 to join with the TeleManagement Forum (TM Forum) appears to have been a good one.  OSS/J is a rising star within the TM Forum, making a strong contribution to the technical programme there, and increasing its influence on the TM Forum’s New Generation Operations Systems and Software (NGOSS) standards-making efforts.
In January 2006, when OSS/J first discussed joining the TM Forum at OSS/J’s face-to-face meeting in Dusseldorf, Germany, there was some trepidation that the group’s strong focus on standards-making might be diluted within the much larger organisation.  But the following day, when the idea of merging OSS/J into the TM Forum was presented to the telecommunications service providers present at the OSS/J Service Provider Roundtable, it was greeted with acclaim.  The move would put to rest the concern among service providers that there are too many different standards and standards-making organisations within the telecommunications industry.  By removing the uncertainty factor of having multiple competing standards, the service providers agreed that it was to everyone’s benefit to widely adopt a single OSS integration standard.
The path to the standardisation of OSS interfaces has not been smooth.  With hundreds of telecommunications service providers worldwide, not to mention hundreds of OSS vendors and system integrators, reaching agreement on common standards can be a real challenge.  The proverbial chicken-and-egg problem is often cited, with service providers agreeing to adopt open standards only when sufficient OSS vendors support them, and OSS vendors agreeing to provide open standards only when sufficient service providers agree on which standards they require.  Because there are so many players in telecommunications, it is much more difficult to agree on standards than in more concentrated and vertically-integrated industries like the automotive industry.
That’s why industry standards bodies like TM Forum are so important, and it helps that the TM Forum has plenty of prestige within the telecommunications industry.  Its members include approximately 600 telecommunications service providers, OSS vendors and system integrators.  When the TM Forum says “this is the standard that our members want to adopt,” it is a very significant statement. 
The TM Forum can justly claim that its choice of standards is impartial and in the best interests of the whole telecommunications industry.  It is an open group, with a Board that is elected by its corporate members, with councils representing service providers, vendors and system integrators, respectively.  That Board appoints a Technical Committee to sort through industry best practices and make the final determination on standards issues.  Within the TM Forum, standards-making programmes like OSS/J perform their work at the behest of the Technical Committee.  The Technical Committee’s overarching goal is to define NGOSS, into which OSS/J fits neatly as an implementation-oriented interface standard.  These open governance mechanisms of the TM Forum are helping the OSS industry find a unified voice in favour of standardisation.
An immediate result of OSS/J uniting with the TM Forum was an upsurge in OSS/J membership.  With OSS/J now under the auspices of the TM Forum, more industry participants were assured of the impartiality of OSS/J.  Major OSS players like HP and integrators like TCS (Tata Consultancy Services) added their considerable industry weight and technical resources to the development and maintenance of OSS/J APIs.  OSS/J development is performed under the open Java Community Process (JCP).  Each API project is led by a Spec Lead from within the industry, and participation in the project is open to other companies, which can contribute their requirements and technical support.  In the parlance of the JCP, each API project is called a “JSR” (Java Specification Request).  HP took on the new OSS/J Fault Management API.  TCS began to participate by constructing Reference Implementations (RIs) for many OSS/J APIs.
Membership in the OSS/J Programme at TM Forum is open to new members who are willing to make a technical contribution to the development of OSS/J APIs.  The TM Forum assigned the job of negotiating technical contributions and following up on those promises throughout the year to a dedicated Technical Programme Manager.  This important job went to Antonio Plutino, who has successfully managed OSS/J deliverables over the years.  Of course, one does not have to be an OSS/J member in order to contribute to API standards: many individuals and companies contribute to JSRs at the invitation of the Spec Leads.
A second major impact of moving OSS/J into the TM Forum relates to the professionalism and governance of the TM Forum’s standards-making process.  OSS/J subjects itself to a formal JCP process because the JCP is a tried-and-true development process with built-in checks and balances.  Because the JCP is an open process involving key experts from the industry, the quality of inputs to the JSRs is very high.  And the review steps that are inherent in the JCP process help to ensure that many reviewers validate the approach taken to define interfaces and the quality of the resulting specifications.  This additional governance has breathed fresh air into the TM Forum’s standardisation process, as the TM Forum itself has begun to adopt governance that can stand up to ever greater scrutiny.
OSS/J helped introduce advanced techniques for producing interface specifications, such as the use of a common information model and model-driven tooling.  All of the new OSS/J APIs have been specified using the Core Business Entities (CBE) from the TM Forum’s Shared Information/Data (SID) model.  This helps to ensure compatibility between APIs developed according to the OSS/J standard.  In addition, OSS/J interfaces are built using Tigerstripe Workbench, a model-driven tool developed by Tigerstripe, an OSS/J member.  This software allows JSR teams to design an abstract specification of an interface, and then to generate specific code in XML, Java and WSDL, in order to support the different deployment profiles required by OSS/J. The use of a common model and model-driven tools has greatly reduced development time and improved the quality of OSS/J APIs.  One JSR reported a 70 per cent reduction in specification effort through the use of the Tigerstripe tools.
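By way of illustration only, the sketch below shows the general shape of a Java interface that such model-driven tooling might generate from an abstract, SID-aligned specification. The entity and operation names are invented for the example and do not reproduce the actual OSS/J APIs.

import java.util.List;

// Illustrative only: the general shape of a generated, SID-aligned Java interface.
// Entity and operation names are invented and do not reproduce the real OSS/J APIs.
public interface TroubleTicketFacade {

    // A core business entity, modelled once and reused across generated APIs.
    record TroubleTicketValue(String id, String subscriberId,
                              String description, String status) {}

    String createTroubleTicket(TroubleTicketValue ticket);

    TroubleTicketValue getTroubleTicket(String ticketId);

    List<TroubleTicketValue> getTroubleTicketsByStatus(String status);

    void closeTroubleTicket(String ticketId, String resolutionNote);
}

The point of generating such interfaces from a common model is that every API names the same underlying entities in the same way, which is precisely the compatibility benefit described above.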
A third impact has been on creating user-focused standards, as opposed to purely technical specifications.  OSS/J and the JCP have high standards for defining interfaces.  In addition to the interface specification, there must also be a use case or 'Reference Implementation' (RI) and a testing framework or 'Technology Compatibility Kit' (TCK).  This allows an uninitiated integrator to see how the interface was intended to be used in a real-world scenario, and to test the compatibility of his or her application with the open standard.  The TM Forum now requires these useful tools to be delivered with all of its interface standards.
OSS/J helped the TM Forum craft the PROSSPERO™ programme, which certifies open standards as being ready for market adoption.  PROSSPERO-ready interfaces package everything that an implementer needs for OSS or BSS interoperability, including interface specifications, testing frameworks, guidebooks, online developer support, access to reference implementations, plus educational, marketing and developers' tools. PROSSPERO interfaces must meet criteria of market adoption and have documented use cases.   The idea behind PROSSPERO is to make it even easier for telecommunications companies to adopt open standard interfaces by setting high criteria for market readiness.
Now that OSS/J is well entrenched within the TM Forum, the organisation has shifted into high gear.  Most of the OSS/J APIs will be upgraded and delivered as a ‘Summer Release’ in August 2007.  The APIs that are planned to be released then are:
•    Common API (which underpins all OSS/J APIs)
•    Fault Management API
•    Order Management API
•    Trouble Ticket API
•    Inventory API
New APIs that are slated to be released before the end of 2007 are:
•    Pricing API
•    Discovery API
Meanwhile, OSS/J has an open call to fellow members of the TM Forum to contribute resources and expertise to these and other specification efforts.
Going forward, more exciting news is likely from OSS/J and the TM Forum.  One thread is the growing movement to harmonise all standards-making efforts within the TM Forum.  At TeleManagement World, held in Nice, France, on 20-24 May 2007, the Harmony Catalyst demonstrated a unified approach to integration that incorporated the OSS/J and MTOSI standards.  This work demonstrated that OSS/J and MTOSI, two of the most popular standards from the TM Forum, are compatible with each other.  The TM Forum Technical Committee is underscoring the need for a single standard, and a Harmony Architecture team is taking up the challenge to define the common guidelines for TM Forum standards.  In addition, the TM Forum is reaching out to other standards bodies to use its PROSSPERO programme to promote other valuable standards in the marketplace.
The TM Forum has the right scope and clout to address the need for OSS integration standards.  Never before has an organisation with a global perspective like the TM Forum – with reach into wireless, broadband, IP, billing and content – stood so firmly behind a unified standard for OSS integration.  The focus that the TM Forum is bringing to standardisation is unprecedented, and in combination with the rigour of the OSS/J standards-making process, the impact is sure to be felt far and wide in the telecommunications industry.  We may finally have the answer to the question asked by service providers, vendors and integrators alike: which OSS integration standard should we use? There is a growing consensus behind a harmonised OSS interface standard from the TM Forum.
Information about OSS/J can be found at www.tmforum.org/ossj or by contacting Antonio Plutino at aplutino@tmforum.org
Doug Strombom is a Steering Committee Member of the TM Forum’s OSS/J Programme and CEO of Tigerstripe, Inc.

Is holding on to customers for life a real possibility for operators? Alastair Hanlon believes that a new approach to CRM will provide the answer

ADDED VALUE CRM - Dreaming the impossible dream?

Given a choice between winning new customers or holding on to the ones they have, any operator worth its salt would plump for ‘both’. It’s a reasonable choice but the fact is that many operators have been less successful at tackling the perennial problem of churn than at luring in new customers with attractive but costly offers. Of course, one person’s churn is another’s new sale.
For many, the solution has been to make heavy investments in IT, CRM systems in particular, in the hope that these will build longer term customer relationships. So far, this has not turned out quite as hoped, partly because the much-vaunted CRM and back office systems have tended to operate in silos and not as a seamless facility that provides a full picture of the customer relationship across all aspects of the business. This is not a deliberate policy, simply a result of rapid expansion and the need to add new systems to support new services. The information usually exists; it is just not readily accessible.
It becomes even more difficult in this era of convergence, as operators add new services.
Unfortunately, a customer trying to contact an operator can be forgiven for wondering whether they are dealing with one company or several. They soon find that call centre agents, their first point of contact, rarely have all the billing, service, offer and helpdesk information at their fingertips, as they might expect.
Things can be just as frustrating for the call centre agents themselves. In order to build a complete picture of a particular customer’s relationship with the company they have to switch between different CRM and back office applications, often resorting to handwritten notes to relate what they find in one with the information they collect from the others. It is inefficient, time consuming and highly frustrating.
If an agent is dealing with someone who is ready to churn, the question is what scope they have to make new offers or set up different deals, in order to retain the customer. Without a complete picture and firm policies in place to guide them, agents can end up giving less valuable customers more than they are worth and neglecting those with the greater long-term value. The customer with greater long-term revenue potential could decide to leave simply because the phone queues are clogged up, e-mail responses are too slow, service levels are poor and offers are unattractive.

Ending the silo culture
It is time for a strategic approach that makes the best use of technology, provides all the data needed and equips organisations to identify the highest potential customers and make the decisions needed to keep them on-board for the long haul. 
This has to be supported by technology that goes further than most do at present. Existing CRM systems are just not capable of identifying the true lifetime value of individual customers and turning that into action every time there is a customer contact. Through integrating CRM and back-office systems, upgrading processes and reviewing business rules and procedures, a whole new world of opportunities is opening up.
This will provide a complete view both of the services available and of the customers. This makes agents very much more effective because they are able to see into all systems at the same time and no longer have to ferret for information from different sources.  Business policies also become more consistent. This means that customers get the same information and level of service across all contact channels, whether they are interacting over automated self-care or with a call centre agent.
By integrating and making accessible information on customer history and services subscribed to, and having it delivered in a clear and transparent manner, it will be possible to raise the bar in customer service and focus on strategic areas that can transform the business performance.

Revenues for life
A comprehensive route to solving these problems can be found in an approach called Lifetime Value Optimisation (LTVO). This is a process developed by strategy consultancy McKinsey, which moves well beyond traditional CRM.
LTVO can have direct impact on revenues and profitability by addressing the core issues of customer relationships – perceptions, loyalty and churn.
McKinsey has shown that LTVO can generate dramatic increases in EBITDA (earnings before interest, taxes, depreciation and amortisation). An incremental rise of between three and five per cent in EBITDA is possible and, depending on the size of the business, this can translate into hundreds of millions of dollars per year.
For LTVO to succeed, service providers must be able to capture and respond to real-time events in the areas of customer care, billing and service delivery. The focus should be on four key areas, each of which directly influences the customer relationship, namely: customer satisfaction, customer retention, increased usage of existing services and take-up of new services.
Rather than just reacting to problems when they come up, automated systems based on LTVO can make agents more efficient and allow them to be more proactive when dealing with customer queries.  Automation can be made more effective by tailoring voice or web self-care to the customers’ needs in real time. With fewer incoming calls, call centre agents are freed up to focus on new sales and on serving high value customers.  Not only does this reduce the cost of care and help raise revenues, this unified approach also has the potential to improve the overall customer experience and so increase customer satisfaction and loyalty.
With real-time data and proper micro-market segmentation, service providers are in a position to offer an immediate response to situations as they arise. These can be in such areas as billing queries, response to changing usage patterns or solving problems in real-time. Most importantly they can focus on customers with the highest potential lifetime value and give them targeted attention and high quality service.
Greater convergence means that a complete picture of individual customers, the services they use and their past behaviour and future potential is more essential than ever.
As well as increasing customer satisfaction and loyalty, the operator is better placed to take the initiative by making relevant and attractive offers that will increase each customer’s overall value to its business.

Real-time interaction
Underpinning all this is the principle that every time someone gets a bill, makes a payment, uses a service, makes a call or downloads some content, there is an opportunity to improve the effectiveness of those interactions.
By applying the LTVO approach operators can capture real time events in the customer care, billing, and service delivery environments, evaluate policies related to those events, and carry through real-time actions related to those policies.
Ultimately it is all about giving individual customers the level of service they warrant. Those with high lifetime potential are treated differently from those with lower potential but no one is left feeling neglected or unwanted. The system will identify incoming calls from high value customers and route them to an agent, while a lower value customer might be transferred to an automated self-care system.
This approach produces individual solutions for individual customers, personalised to their needs, habits and tastes, and it does this proactively and in real time. So, if a customer starts downloading music or ring tones to a mobile, the system might suggest a good-value subscription or a two-for-one option. Similarly, another customer who is about to buy their third 'pay-per-view' movie in a week might be offered a particular movie package, possibly with a free movie as a reward for buying 'now'.
Taking advantage of these ‘warm’ sales opportunities might be done by e-mail, phone or text but, above all, it happens at precisely the moment when the customer is focused on a particular aspect of the service – right when they are about to make a purchase. Rather than seeing this as ‘hard sell’, they are more likely to view it as a response to a real need.
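A minimal sketch of how such event-driven policies might be expressed is shown below. All names and thresholds (CustomerEvent, the 1,000 lifetime-value cut-off, the pay-per-view rule) are hypothetical and are not drawn from any specific Convergys product.

import java.util.List;
import java.util.Optional;

// Minimal sketch of an event -> policy -> action loop.
// All names and thresholds are hypothetical.
public class LtvoPolicySketch {

    record Customer(String id, double lifetimeValue, int payPerViewThisWeek) {}
    record CustomerEvent(Customer customer, String type) {}

    interface Policy {
        Optional<String> evaluate(CustomerEvent event);
    }

    // Route incoming calls by lifetime value: agents for high-value customers,
    // automated self-care for the rest.
    static final Policy CALL_ROUTING = event -> {
        if (!event.type().equals("INCOMING_CALL")) return Optional.empty();
        return Optional.of(event.customer().lifetimeValue() > 1000
                ? "route to agent" : "route to automated self-care");
    };

    // Offer a movie package when a customer is about to buy a third pay-per-view this week.
    static final Policy MOVIE_UPSELL = event -> {
        if (event.type().equals("PAY_PER_VIEW_PURCHASE")
                && event.customer().payPerViewThisWeek() >= 2) {
            return Optional.of("offer movie package with one free title");
        }
        return Optional.empty();
    };

    static final List<Policy> POLICIES = List.of(CALL_ROUTING, MOVIE_UPSELL);

    // Every billing, care or service-delivery event is evaluated against every policy.
    static void handle(CustomerEvent event) {
        POLICIES.forEach(policy -> policy.evaluate(event)
                .ifPresent(action -> System.out.println(event.type() + " -> " + action)));
    }

    public static void main(String[] args) {
        var filmFan = new Customer("C42", 1800.0, 2);
        handle(new CustomerEvent(filmFan, "INCOMING_CALL"));
        handle(new CustomerEvent(filmFan, "PAY_PER_VIEW_PURCHASE"));
    }
}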
Proactive troubleshooting
LTVO is not restricted to expanding sales opportunities; it is just as effective in solving customer problems, particularly in anticipating and addressing problems before the user even asks for help. If a customer changes their usage patterns, or those patterns suggest that they are having a problem of some kind, help can be offered even before it is requested.
This kind of proactive response to an issue flagged up by the integrated system, in whatever form it takes, can surprise and impress customers who have come to expect slow, 'after the event' reactions.
This will not only help to resolve the customers’ problems, it can also make a strong impression on them and build their trust in, and loyalty to, the operator.
The types of events that might warrant an immediate, pro-active response include: someone who appears to be struggling on a self-care web site; a customer using a service for the first time or showing signs of becoming a regular user; when a fault occurs like a dropped call, failed download or device failure; when bills are being paid or get left unpaid.
The more you know about your customers’ behaviour and priorities, the more you can do to strengthen your relationship with them.
The main benefit of LTVO processes comes from better policy enforcement and improved treatment of inbound contacts, while between 10 and 15 per cent of the benefit will result from outbound actions triggered by the real-time data being provided through the system.
The holistic approach contrasts strikingly with the traditional tendency to deal with problems piecemeal. At Convergys we see this new approach providing an effective solution to even the toughest sales challenges and, most importantly, one that will be reflected in greatly improved profitability.

Alastair Hanlon is Director, Innovation Strategy, Convergys Corporation, EMEA, and can be contacted via tel: +44 1223 705000
www.convergys.com

Who knows more about their customers, a mobile phone operator or Google? The answer, you would think, should be straightforward… but you may be surprised says Adrian Kelly

CUSTOMER INTELLIGENCE MANAGEMENT - Mining for gold

The significant advantage that mobile phone operators have over other industries, and that includes the Channel 4s, Skys and even Googles of this world, is the vast volume of customer data they accumulate from a consumer’s daily interaction with the most personal of devices, the mobile phone.  Clearly the operators are sitting on a customer information goldmine. At the moment however, operators are simply not using this wealth of information and as a result are missing an incredible opportunity.
It is an opportunity upon which they will be looking to capitalise over the next 12-18 months as marketing continues to be a key battleground for service providers looking to avoid becoming a bit-pipe.  Under threat from many quarters, including media companies and Internet brands, operators’ marketing initiatives have to become two-pronged.  Acquisition marketing remains a battle of the brands, where expensive sponsorship and clever pricing are essential to stand out in the increasingly crowded marketplace.  Cross- and up-selling, however, as well as retention marketing, is much more of a fine art.  Marketing to existing customers requires a ‘mass-personalisation’ approach, based on deep customer knowledge.
Operators’ retention tactics (offering an incentive to stay the moment a subscriber requests their PAC number) are well known among subscribers.  However, the aim must be to offer the appropriate and relevant incentive in anticipation of a customer’s natural churn cycle, or to encourage them to adopt new services when the time is right for that individual customer – not waiting until it is more costly or potentially too late to keep them. Today, marketing departments are often restricted by a lack of up-to-date information about current subscriber behaviour, as their hands are tied by a dependence on technical teams to extract the information they need to target campaigns and to assess their success rate.  The upshot is that marketing teams are left unable to react quickly and accurately to opportunities and events.
Numerous service providers are finding that established techniques of segmentation based on demographics do not create the depth and accuracy of knowledge required.  The latest generation of Customer Intelligence Management solutions offers a whole new level of depth, accuracy and speed of knowledge acquisition for service provider marketing departments, allowing them to truly capitalise on the customer data currently sitting unused within the operator’s network.  Segmentation is performed on service usage data, and so represents subscribers’ actual behaviour rather than behaviour assumed from demographics, and is updated daily, direct to the desktop. Customer Intelligence Management is already proving to be a compelling prospect for operators hoping that it will give them a unique advantage over their Internet-based challengers.
Cross- and up-selling focused marketing should, by its nature, be easier than acquisition marketing.  You are talking to a captive audience: one that you know, that has already bought into the brand proposition, and that is probably reasonably happy with the service.  To use a business analogy, it’s a little like walking into a sales meeting where you already know the people you are going to see.  Compare it to the acquisition scenario, which is much more akin to the cold call, and you should be in for an easier ride.  However, if your preparatory information is out of date, or you don’t research the motivations and preferences of the people you are meeting, you will not be able to take advantage of the situation. In fact, if your research is so poor that you are making offers that are completely irrelevant, you may even damage your existing relationship.
Service providers can now have access to incredibly detailed behavioural information. With data services continuously on the rise, operators now know when someone sends an MMS, what type of multimedia it was, who it went to, what application-to-person services are used (horoscopes, TV show information), and what TV shows they interact with through voting and content applications.  As the mobile Internet is becoming an increasingly real phenomenon, service providers also have access to much more web browsing information than a search engine can record – wherever the consumer goes online using their mobile or PDA, the operator has a click by click record of their behaviour.
Effective Customer Intelligence Management logs and analyses service usage patterns and mobile browsing habits as they occur – presenting them to marketers in an easily actionable format.  Such a revolutionary approach will put operators in the unique position of not only understanding the habits, behaviour and interests of the user, but also their wider social circle - and crucially, being able to act on them.
Operators have tended to segment their customer base down to around ten profiles (such as heavy talkers, texters, business data users).  Limiting themselves to so few, mainly demographic-based groups of subscribers tends to overlook less mainstream usage trends and character traits, and becomes increasingly restrictive as operators look to offer more niche services and move into the content and media markets.  With effective real-time analysis, a ten-segment model will become a thing of the past, as media-industry modelling with up to 100 segments (as used by the BBC, for example) becomes a real possibility, and not a management nightmare.  More precise segmentation is the passport to a ‘mass personalisation’ approach, with individuals’ brand loyalty strengthening when marketing is more personally relevant.  From a product planning perspective there is also further opportunity for service providers to tailor new services to suit ever-evolving communities.
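As a simple illustration of segmentation driven by usage rather than demographics, the sketch below tags subscribers from daily usage counters. The segment names and thresholds are invented for the example; a real Customer Intelligence Management system would derive far richer segments from the full usage record.

import java.util.ArrayList;
import java.util.List;

// Sketch: assigning behavioural segments from daily usage counters.
// Segment names and thresholds are invented for the example.
public class UsageSegmentationSketch {

    record DailyUsage(String subscriberId, int voiceMinutes, int smsSent,
                      int mmsSent, int mobileWebClicks, int tvVotes) {}

    static List<String> segments(DailyUsage usage) {
        List<String> tags = new ArrayList<>();
        if (usage.voiceMinutes() > 60)    tags.add("heavy talker");
        if (usage.smsSent() > 30)         tags.add("texter");
        if (usage.mmsSent() > 5)          tags.add("multimedia sharer");
        if (usage.mobileWebClicks() > 50) tags.add("mobile web browser");
        if (usage.tvVotes() > 0)          tags.add("interactive TV fan");
        return tags.isEmpty() ? List.of("light user") : tags;
    }

    public static void main(String[] args) {
        // Re-run daily against the latest usage records and pushed to the marketer's desktop.
        var today = new DailyUsage("S1001", 12, 45, 8, 120, 2);
        System.out.println(today.subscriberId() + " -> " + segments(today));
    }
}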
The next twelve to eighteen months will present two major challenges for service providers: increasing market competition and continued hesitancy among consumers to adopt new and unproven services.  The key to success on both fronts will be a service provider’s ability to understand its customers – their motivations and preferences.   Only by knowing their audience will they be able to offer effectively personalised services and, in doing so, stay one step ahead of the market.

Adrian Kelly is head of Customer Intelligence Management for Acision

Kari Pulkkinen looks at how online cost control can help operators build a business case for convergent charging

CONVERGENT CHARGING - Holding the purse strings

The uptake of converged communications has brought with it a wide range of new opportunities for service providers. Triple- and quadruple-play services, including video services as well as applications, download and other content services, are being increasingly accepted into the mainstream and demanded by business users and consumers alike. However, these new services all need to be accurately charged for and billed to the customer to ensure ongoing usage and maximised revenue. How best to achieve this is currently of major concern to operators and service providers alike.
In addition to the concerns around accurate billing is the question of how to ensure that all customers receive the same level of experience – whether they are post or prepaid. Currently, prepaid customers tend to receive limited services and charging models from their providers due to concerns amongst those service providers that their billing solutions have limitations that offer a potential window for fraud.  Although in the past operators have been hesitant about allowing full service offerings to prepaid subscribers, they are now looking for solutions that allow them to capitalise fully on the potential of prepaid services without suffering revenue leakage and fraud problems.  One such solution, which can enable operators to offer more services to the prepaid user, is online charging.  By deploying online charging solutions, operators can offer all services to all users while closing the gap on fraud and revenue leakage.  Such a solution allows operators to capitalise fully on their prepaid potential and, ultimately, fulfil end-user needs with a wider service offering.
One final consideration revolves around the issue of 'usage control'. Traditionally, usage control has been linked to the prepaid payment option. However, there is a much wider need for usage control regardless of the payment method. As an example, given the focus on children's use of, and exposure to, such services, an increasing number of parents require this additional level of control. Cost control is an important element, as parents want to control their children's spending. Particularly important for younger children, as they get their “first mobile”, this type of cost control can educate younger users about the usage of mobile services. Online cost control helps both parents and children in these tasks.
There are a variety of service concepts that could cater to helping parents and children control spending – for example, a fixed monthly fee and, on top of it, controlled usage with a user (parent) defined limit. This type of personalised billing model ensures that parents remain confident about costs, and encourages long-term usage. For operators, the fixed monthly fee ensures at least a minimum revenue from customers.
In addition, it is vitally important for operators to recognise the role of online cost control in managing both fraud and credit risk, as this can have the greatest impact on their bottom line.  Offering new services, particularly those in emerging markets, is creating new opportunities for revenue, but it also risks exposing operators to increased credit risk.  As part of convergent charging, online cost control can help mitigate that risk through a hybrid approach.  A customer would have a fixed limit for post-paid usage that, once exceeded, would automatically switch the payment method to a prepaid mode. Use of the prepaid account mode would then be enabled through top-ups.
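A minimal sketch of that hybrid model, with hypothetical account fields and amounts rather than any specific charging product, might look like this:

// Sketch of a hybrid cost-control account: post-paid up to a fixed limit,
// then an automatic switch to prepaid mode. All names and amounts are hypothetical.
public class HybridAccountSketch {

    private final double postpaidLimit;   // e.g. a parent-defined monthly spending limit
    private double postpaidSpent = 0.0;
    private double prepaidBalance = 0.0;
    private boolean prepaidMode = false;

    HybridAccountSketch(double postpaidLimit) { this.postpaidLimit = postpaidLimit; }

    void topUp(double amount) { prepaidBalance += amount; }

    // Authorise a charge online, before the service is delivered.
    boolean authorise(double charge) {
        if (!prepaidMode && postpaidSpent + charge <= postpaidLimit) {
            postpaidSpent += charge;
            return true;
        }
        prepaidMode = true;                 // limit reached: switch payment method
        if (prepaidBalance >= charge) {
            prepaidBalance -= charge;
            return true;
        }
        return false;                       // denied until the next top-up
    }

    public static void main(String[] args) {
        var account = new HybridAccountSketch(10.0);
        System.out.println(account.authorise(8.0));   // true: within the post-paid limit
        System.out.println(account.authorise(5.0));   // false: switched to prepaid, no balance
        account.topUp(10.0);
        System.out.println(account.authorise(5.0));   // true: paid from the prepaid balance
    }
}

Because each charge is authorised online before the service is delivered, spending can never overrun the agreed limit, which is the essence of the credit-risk protection described above.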
While it is increasingly clear that the key to successful convergent charging lies in a unified charging infrastructure, achieving this ‘holy grail’ continues to be a major consideration for operators. The more forward thinking operators have already started to develop the business cases and service concepts around convergent charging. A significant building block in this model lies in accurate ‘online cost control’ both for users and operators alike.
As discussed, the number of new services being introduced opens the operator up to increased risk. Even though the majority of users do not set out to maliciously defraud the service provider, their unfamiliarity with new services and pricing structures means that it is much more likely that they will exceed anticipated costs, which can result in large bills and a resulting unwillingness to adopt the service long term. This can be exacerbated when the user is trying to use such services while travelling, as roaming fees can dramatically add to the cost.
In each case, the result is that the customer will be surprised and shocked by the service bill. From an operator's point of view this outcome can be the death knell for new service adoption, as the customer decides never to use the services again and the operator loses all potential future revenues associated with them. Online cost control means that the customer is able to track costs and avoid bill 'shock', and is, therefore, much more likely to continue using the service.
This approach benefits operators by allowing them to ensure the credit-worthiness of customers while at the same time maximising revenue streams.  Operators can also use this model to differentiate their service offering and set out truly unique propositions not easily imitated by competitors, as often happens when introducing new price plans.  At the same time, cost-conscious subscribers can feel in control of their spending, while benefiting from the availability of a wide range of services.
Many of these concepts are not new, but there have still not been that many online cost control implementations. This is often due to the fact that operators’ existing billing and prepaid systems have limitations when supporting these types of online cost control service concepts.  Again, one effective way to implement this capability is to deploy an online cost control solution. This approach is able to provide flexibility for operators to build their own, individualised service concepts for online cost control. This type of solution can also provide an easy extension path for additional convergent charging areas, such as online data charging for post and prepaid, as well as IP prepaid and other charging solutions and related service concepts.
It is becoming increasingly apparent that online cost control is a must if operators wish to ensure the credit-worthiness of their customers, while enabling those same customers to better control their spending.  For both operators and customers, this is a key element in the successful introduction and ongoing uptake of new services and applications. Added to the recognised benefits of service innovation, online cost control goes a long way towards building a business case for convergent charging.

Kari Pulkkinen is VP, Business Development, Comptel

Considering the scale of revenue losses that many telecoms operators incur, it is vital that they identify the causes, quantify their magnitude and then set about addressing these leakages in a holistic manner. Dominic Smith looks at the main causes of revenue leakage, and outlines ways in which operators can resolve these with the help of end-to-end pre-integrated business support systems

Revenue assurance continues to be a key concern for most telecoms operators. An on-the-show-floor survey carried out by Cerillion at the 3GSM World Congress in February identified it as one of the three most important business issues facing telecoms operators today, with 15 per cent of respondents acknowledging it as their most urgent concern.
This is hardly surprising when you consider the scale of the problem. Latest estimates suggest that as much as 10 per cent of total provider revenue is still being lost due to revenue leakages. In today’s competitive telecoms environment, this situation is unacceptable. And to retain competitive edge, operators need to ensure they are tackling the problem proactively.

Arguably the most important cause of revenue leakage is poor systems integration. Unfortunately, this is often a characteristic of the traditional best-of-breed approach to the implementation of business support systems. With this model, systems integrators are often tasked with implementing and integrating multiple heterogeneous systems to build a complete solution. Invariably, they encounter two key problems that make effective integration difficult.
First, they typically discover incompatibilities between the data models used in the best-of-breed systems. Synchronising data across different applications is complex because of the need to align different ways of identifying the subscriber, service and orders. However, if these mappings are not carried out properly, the operator will struggle to trace orders across the systems.
Second, the systems integrator may not have an in-depth understanding of all the best-of-breed components. As a result, it may integrate the systems inefficiently and introduce data replication or unnecessary layers of complexity, all of which can result in holes where revenue leakage may occur.
Process problems
Poor integration typically also results in a host of process problems. It may for example lead to data entry in multiple systems or incompatible configuration between solution components. The consequence of this may be, for example, rating/prepaid charging errors - essentially applying an incorrect price to a customer record or not being able to price the record at all. These errors will result in usage that cannot be billed for and, ultimately, revenue leakage.
Incomplete or incorrect usage data is another primary cause of leakage. This problem often occurs when network switches produce erroneous information, preventing the operator from identifying the type of service used by a customer, or the customer using that service. In either case, the result is an inability to bill for usage incurred.
Poorly integrated systems with no common workflow can also lead to delays in billing. Sometimes manual set-up processes for new services cause a delay of several days before the operator can start invoicing the customer, inevitably resulting in a loss of revenues. In contrast, a fully automated process with flow-through provisioning enables the operator to start billing for service use immediately.
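To illustrate the contrast, here is a small sketch of flow-through provisioning in which the billing record is opened the moment network activation completes, with no manual hand-off. The class and method names are hypothetical.

import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

// Sketch of flow-through provisioning: billing starts the moment activation completes.
// Class and method names are hypothetical.
public class FlowThroughProvisioningSketch {

    record BillingRecord(String orderId, String service, LocalDate billableFrom) {}

    static final List<BillingRecord> BILLING_LEDGER = new ArrayList<>();

    static void activate(String orderId, String service) {
        provisionOnNetwork(orderId, service);
        // No manual set-up step: the billable-from date is the activation date itself.
        BILLING_LEDGER.add(new BillingRecord(orderId, service, LocalDate.now()));
    }

    static void provisionOnNetwork(String orderId, String service) {
        System.out.println("Provisioned " + service + " for order " + orderId);
    }

    public static void main(String[] args) {
        activate("ORD-1001", "broadband-20M");
        BILLING_LEDGER.forEach(entry ->
                System.out.println("Billing " + entry.service() + " from " + entry.billableFrom()));
    }
}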
Invoicing system errors are another potential cause of revenue leakage. Traditionally, the problem is thought to be primarily one of under-billing - operators failing to invoice customers for services received. In fact, over-billing can be just as significant. This typically occurs when a service is terminated but the operator continues to bill for the service in error.
It will often result in costly customer disputes and the requirement to generate refunds or provide credit as a goodwill gesture. Valuable time and resource may be required to fix the offending process, and further revenue leakage will occur indirectly as a result of growing customer dissatisfaction and increased rates of customer churn.
Launching new products and decommissioning old ones are two other areas where a badly coordinated system can cause further revenue assurance problems. Businesses often leak money both by providing incorrect tariffs for new services and by not taking older, more costly products out of service quickly enough.

Reactive versus proactive
Putting additional systems and checks in place is largely a reactive approach to revenue assurance in a best-of-breed solution. In essence, it is a ‘sticking plaster’ approach to plugging the gaps in the system. Rather than dealing with problems at source, it focuses on putting processes in place which track where revenues are being lost and then try to correct these errors retrospectively.
As a result, problems can stay hidden for some time and their source can remain obscure. Operators may initially believe that they have billing issues or that they are suffering from credit management problems. In fact, when they carry out thorough ‘root cause analysis’, they often discover that their problem is order management related.
If the system is not proactively managed, a mistake made in this initial order process will not be discovered by the operator for a month or six weeks, when the customer receives his first bill and finds he has been placed on the wrong tariff or is being billed for a service he never received, for example. 
In contrast, the best end-to-end pre-integrated solution suites give operators the confidence that all elements within the product suite will work together in harmony. The holistic approach of these systems is clearly in line with operators’ increasing desire to address and monitor the whole lifecycle from the initial order placement right through to billing and cash collection.
These solutions also enable operators to be much more proactive. Rather than merely reacting to problems when they occur, their seamless connectivity offers a means to prevent ‘gaps’ in the system appearing in the first place. In other words, they treat the root cause of the problem rather than the symptoms.
The tight integration of these solutions helps eliminate data replication and synchronisation problems. In addition, embedded workflow and order management functionality allows front-end orders to be successfully transitioned to the back office, ensuring all services can be billed for and eliminating revenue leakage at source.
The pre-integrated nature of these systems allows key business information to be proactively tracked, detailed reports to be generated for each process, revenue leakages quickly identified and revenue losses minimised. It is hardly surprising, therefore, that ever-greater numbers of operators see end-to-end pre-integrated solution suites as a vital weapon in their ongoing battle to achieve genuine revenue assurance.

Dominic Smith is Marketing Director, Cerillion Technologies

Rapid assembly of services will be the key differentiator for telcos striving to beat out cable, entertainment and Internet companies encroaching on their customer bases, says Brian Naughton

Telecom carriers will have to go through a significant metamorphosis as the lines blur among telecom, entertainment, retail, and Internet domains. In hotly contested triple- and quad-play markets, carriers must become customer service providers (CSPs) capable of making the transition from me-too services to truly converged, on-demand services that differ from those offered by MSOs and non-traditional competitors.

To achieve that end, CSPs will have to work with third-party developers to create scores, if not hundreds, of niche services that leverage their substantial investments in IP networks. After all, they laid the fibre to enable voice, video and data to come together over the same connection in very short time frames. That unique ability should enable CSPs to create prodigious catalogues of converged services without disrupting the underlying architecture.
The goal should be the rapid assembly of services. To that end, a mindset change will be necessary. Carriers will have to move away from the staid and stodgy belief that service launches must take months or years, to a mindset that products can be rolled out in hours, if not minutes.
That will require CSPs to move into a manufacturing mindset, where the concepts of computer-aided design (CAD) and computer-aided manufacturing (CAM) come to fruition. The marriage of the two enables hundreds, if not thousands, of services to be rolled out in an “assembly line” fashion.
In the same way that car manufacturers model the components of new products in CAD systems, carriers can model the components of new products and move service “components” along an “assembly line” to CAM systems, where coding, rules and algorithms can be determined automatically.
The lifecycle management enabled by the CAD and CAM principles is now beginning to burgeon in telecom. In other words, the knowledge of bundling will be removed from existing systems and centralised in a location in which all service and product building blocks can be modelled within a “workbench” environment.
That reflects somewhat the precepts of service-oriented architecture (SOA), which promotes the interchangeable use of building blocks among applications.
 “While SOA has been hyped for many years as a common framework for segmenting operations and coupling services, the reasons for it are far more compelling now,” says Larry Goldman, co-founder and senior analyst with OSS Observer. “The Internet has created an expectation of immediate gratification, so carriers have to figure out how to roll out services at the time of demand.”
After heavy investments in IP networks, Goldman believes operators have to concentrate on the software side of the equation. “CSPs should focus on re-use within their execution environments. That means services must be decoupled from networks for integration with business processes.”
Goldman says carriers can then begin to drive re-use – not only of common data models, but of formats, naming conventions, interfaces, and design processes across the organisation.
To galvanise the concept of ‘re-use’, CSPs must break back-office silos down into components that represent operational elements of network and IT systems, as well as product, service and resource specifications. These components can ultimately be turned into loosely coupled “building blocks” for interchangeable use across different services and products.
As carriers create a library of building blocks, SOA environments become true service delivery platforms (SDP) from which new functionality can be driven (i.e., SIP capabilities around presence, location and more advanced voice mail services that can be used in creative product bundles). By implementing common SIP servers for applications needing connectivity over IP networks, carriers can procure data from disparate sources so that billing authorisation and billing detail are consistent across the organisation.
As new services are created through increasingly agile SDPs and execution environments, CSPs will have to simultaneously orchestrate changes within OSS/BSS applications. The complexity of orchestration for dynamic services will require full automation of activation, ordering and billing processes so that fulfilment and assurance processes can seamlessly work for new service rollouts.
Within the TeleManagement Forum’s Product & Service Assembly (PSA) Initiative, an independent consortium of leading telcos and vendors has been working to develop a revolutionary IT reference architecture to satisfy the burgeoning need to standardise and simplify the way that products and services are designed, assembled and delivered. This reference architecture incorporates the CAD/CAM manufacturing approach by enabling the creation of “building blocks,” which carriers can assemble into service or product offerings.
At the heart of the IT reference architecture is an active catalogue that is a design-and-assembly environment within which service components can be defined and configured without any need for writing code. This catalogue aligns service design and creation with service execution so that product managers can decouple management of product lifecycles from OSS, BSS and network engineering.
Within the building blocks lies a rich library of components and products through which product managers and architects can define dependencies, prerequisites, exclusions and visual metaphors for service components.
“We have leveraged our deep understanding of the fulfilment process, as well as that of our customers and partners, to define components that could be used interchangeably across services and functions,” says Simon Osborne of Axiom Systems, one of the founders of the PSA Initiative, noting that Cable & Wireless, BT, TeliaSonera, Atos Origin, Huawei, and Oracle have worked to define the building blocks.
To simplify the definition and configuration of services using those building blocks, a visual and intuitive GUI has been created for product managers to view loosely coupled composites or aggregate services, as well as for IT to create, test and publish components for re-use across the organisation.
The essence of the IT reference architecture is that it has been designed with a “bilateral” top-down/bottom-up approach in mind.
 “This IT reference architecture empowers marketing professionals to define service components without having to go through IT departments, and enables IT to use pre-tested business options and variants to drive component use across the organisation,” comments Osborne.
For example, ringtone downloads, VoIP, VoD, and find-me services each require their own sets of fundamental parameters around availability, order-taking and activation. However, there inherently exists overlap in what each service requires. The active catalogue helps carriers to leverage that fact by establishing interchangeable building blocks in one catalogue that can then be rearranged to support other services as well. Rather than having to write new code to launch each new service, carriers can specify necessary attributes in reasonably basic forms so that one catalogue and order-handling system can handle many different services.
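To illustrate the principle (and only the principle: the block and service names below are hypothetical and are not taken from the PSA reference architecture), a minimal sketch shows how one catalogue of reusable building blocks can be assembled, by reference, into several different services:
```python
# Illustrative sketch: a single catalogue of reusable "building blocks" from which
# services are assembled by reference rather than by writing new code per service.

building_blocks = {
    "availability_check": {"endpoint": "inventory",    "params": ["address"]},
    "order_capture":      {"endpoint": "crm",          "params": ["customer_id", "product_code"]},
    "activation":         {"endpoint": "provisioning", "params": ["service_id"]},
}

services = {
    "ringtone_download": ["order_capture", "activation"],
    "residential_voip":  ["availability_check", "order_capture", "activation"],
    "vod":               ["availability_check", "order_capture", "activation"],
}

def assemble(service_name):
    """Return the ordered blocks (and the endpoints they call) that a service reuses."""
    return [(block, building_blocks[block]["endpoint"]) for block in services[service_name]]

print(assemble("residential_voip"))
```
The point of the sketch is that adding a fourth service is a matter of listing which existing blocks it reuses, not of writing new code.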
Simon Farrell, IT Architect at Cable & Wireless, comments: “We can define residential VoIP and the prerequisites for broadband DSL, and are able to stitch together relationships among end points to execute on a fulfilment request.” Graphical representations, such as a ‘green light’ for ‘it’s a go’ or a ‘red light’ for ‘outstanding dependencies’, enable C&W to assemble the end points that must exist on the enterprise service bus (ESB).
In other words, there are distinct interfaces, order types and end points specific to any services that are to be fulfilled. Through the interface, the active catalogue provides an environment for modelling end points into an assembly landscape that defines relationships and polices exceptions or dependencies.
 “A residential home triple play service that requires a broadband and VoIP server, as well as IPTV server, will rely on rules around what third parties must be called upon to provide that hardware, and in what sequence those systems should be called upon,” explains Osborne. “That sets the stage for how data travels interface to interface as the service transitions through the lifecycle.”
While the active catalogue does not run every task, it calls the service end points that, in turn, run the processes externally. “This active catalogue provides a way of defining the end point and rules around those endpoints, so fulfilment dynamically figures out what end points to call upon,” he says.
As orders are fulfilled through the active catalogue, the software creates an inventory of pre-existing capabilities for end users. The software records against every instance of an order, using the same language that was modelled at service end points. Ultimately, that means CSPs end up with rules sets that are usable for up-sell and cross-sell capabilities. “If 35 per cent of customers have a certain type of access, CSPs can target them with new services that tie to that type of access,” notes Osborne.
In the long run, that ability drives versioning and lifecycle management. “If a service is to be deployed for only six months, there can be published rules stating that the service will be decommissioned in a certain time period, and warnings can be issued at the end of the period to those parties with bundled components.”
That can be particularly important among partners who are re-branding wholesale offerings, or for inter-departmental strategies at large telcos, where orchestrating processes can be complex. “Ultimately, you get a federation of catalogues with clear demarcation of where the SLAs are among different departments,” Osborne explains. With a federation of catalogues, CSPs start to create a topology through which all catalogues and associated end points can be referenced for more intelligent cross-sell and up-sell actions.
To ensure there is an accurate model of infrastructure, this revolutionary IT reference architecture has been designed to sit on top of most major network resource management systems (inventory) that serve as databases of record for carriers.
The architecture can serve as the foundation for collaboration among product managers, service and network engineers, as well as operational communities. By creating a central point for standardising multiple vendors' products, carriers can move closer to the SOA principles they strive to embrace.
As carriers continue to expose their design environment to different departments and customers, they can begin to truly “mass market” the configuration of products. That sets the stage for commonality in how components, access controls and security measures are employed across the enterprise and partner environments.
As that commonality grows, carriers can get closer to self-service in management of product and service lifecycles. Then, they can be better positioned to create value-adds in their IP services domain—especially if they can roll out sophisticated services in a matter of hours, or even minutes.

For further information about the IT reference architecture and the active catalogue, please visit www.psainitiative.org  or e-mail info@psainitiative.org.
Brian Naughton is VP Strategy & Architecture, Axiom Systems

Service quality management offers a critical pathway to the delivery of quality of service in developing markets, says Tony Kalcina

Accidents happen. People make mistakes. Nothing and no one is infallible. We all know this. Which is why, when we buy a product or service, what is important is not so much whether or not it has faults, but what happens after a fault occurs.
It is a well-known maxim in client service that a customer whose problem has been dealt with in an exemplary fashion is likely to be more satisfied and loyal than one who has never experienced a problem to begin with. The former knows from experience that they can rely on the provider of the service or product; the latter has no idea what might happen if things go wrong.

This principle applies as much in telecommunications as elsewhere, but with an added twist: customers want to have the certainty that problems will be dealt with effectively and efficiently before they happen.
This means service providers have to provide a high level of assurance at the contract stage, typically through a service level agreement (SLA). But there are SLAs and SLAs.
In fiercely competitive developing markets, the ability to offer and deliver on meaningful, measurable and manageable standards of service is becoming a major competitive differentiator.
Telecommunications SLAs traditionally underpin service quality management (SQM) programmes, which aim to monitor performance, pinpoint faults and prevent them from recurring.
SQM is valuable to corporate customers because, in theory, it provides analysis and verification of the performance they are paying for. And, in the event of a problem, it serves to provide a measure of the recompense they might be entitled to.
For operators in developing markets, SQM also has an important role to play in the supply chain by policing incumbent operators, for instance when competition rules allow Local Loop Unbundling (LLU) for third-party providers of DSL services.
The inclusion of an SLA in the supply chain process ensures protection for third party operators and their customers; if incumbent operators fail to undertake the LLU in the time agreed, the third party operator can often claim a rebate.
At the same time, the end customer may also be entitled to compensation for failure to deliver the requisite level of service mandated by the regulatory body.
In practice, this can be problematic to claim at an individual level, but the automated monitoring and reporting of SLA violations can be a useful input to the process of managing collective performance by the incumbent. 
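As a purely illustrative sketch, assuming a hypothetical five-day provisioning window and a notional rebate per late order, automated SLA checking of this kind might look like the following:
```python
# Hedged sketch: automated detection of LLU provisioning SLA breaches and the rebate
# they might trigger. The 5-day window and rebate amount are hypothetical; real values
# come from the regulator or the wholesale contract.

from datetime import date

SLA_DAYS = 5            # hypothetical agreed provisioning window (calendar days)
REBATE_PER_BREACH = 20  # hypothetical rebate (currency units) per late order

orders = [
    {"id": "LLU-001", "ordered": date(2007, 5, 1), "completed": date(2007, 5, 4)},
    {"id": "LLU-002", "ordered": date(2007, 5, 1), "completed": date(2007, 5, 10)},
]

breaches = [o for o in orders if (o["completed"] - o["ordered"]).days > SLA_DAYS]
print("SLA breaches:", [o["id"] for o in breaches])      # ['LLU-002']
print("Rebate claim:", len(breaches) * REBATE_PER_BREACH) # 20
```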
Elsewhere, it stands to reason that savvy customers will pick suppliers whose SLAs offer the highest level of financial security; in other words, those which pay out the most in the event of a problem.
This means that in order to satisfy the most demanding customers, telecommunications operators need to embrace SQM so that any faults and liabilities can be fully verified to the satisfaction of both the operator and its customers.
SQM allows operators to measure and gauge the validity of customer complaints; whilst the customer should always be put first, operators can determine the need for - and level of - compensation required for a perceived service fault. Clearly, then, there are massive benefits to be had from being seen to possess a market-leading SQM programme. But not all operators currently have one.
Currently, performance data, where it is available, often only involves some fairly basic measurements of the state of the network. In addition, delivering SQM often relies heavily on expensive manpower.
An operator will not be able to cost-effectively differentiate its service offering unless manual steps are kept to an absolute minimum and, preferably, eliminated altogether to avoid the higher cost and delays of manual processes.
Finally, many of the current low-cost diagnostic tools that are in place can only provide basic alerts to the effect that certain pieces of equipment are failing, without identifying which customers (if any) are affected, or how.
What this means in practice is that operators relying on these basic SQM tools cannot truly be said to be delivering quality of service to their customers—and risk either losing credibility or paying over the odds for SLA failures. The situation need not be thus, however.
More complex SQM tools exist. They combine service fulfilment and assurance capabilities and can be integrated with a provisioning package to automatically identify faults or dips in service and restore the services or compensate customers with additional offers or refunds.
Clarity, for example, offers a pre-integrated product and database that features 17 elements of the TeleManagement Forum’s enhanced Telecom Operations Map (eTOM) model of Operational Support Systems (OSS) in a single suite.
These systems allow operators to see the impact that network operations are having on revenue and customers’ experience from both a service fulfilment and assurance perspective.
Clarity’s OSS is network and services neutral, rapidly configurable and widely deployed, supporting an end user base of 50 million subscribers worldwide. Companies that have taken SQM seriously have reaped significant benefits.
Sri Lanka Telecom, to take an example from the developing world, has been able to clear 84 per cent of faults within hours thanks to a single OSS information store for fulfilment and assurance data, coupled with real-time correlation and integrated SQM workflow processes.
Other operators can follow this path. All that is needed is a greater awareness of the importance of SQM as a tool for achieving competitive advantage. Telecoms operators, specifically in developing markets, must realise the importance of service assurance in helping to predict, monitor and manage in real time the availability and quality of services, ensuring conformance to the business’s strategic SQM objectives.
Investing in OSS to support state-of-the-art SQM programmes is no longer a ‘nice to have’, but increasingly a vital component of strategies to attract and retain loyal residential and commercial customers, improve operational effectiveness and to accelerate the order-to-cash process. SQM may have until now been something of a minority interest for telecommunications operators. But as the battle for customers heats up in developing markets, it looks set to become a key weapon for competitive advantage.

Tony Kalcina is founder of Clarity

The many bells and whistles promised by IMS make it essential for operators to understand and monitor all the device-types used by customers, if they are to ensure high standards of customer experience, says Matt Herdlein

As emerging IMS platforms open the doors to real-time, interactive multimedia services, taking care of the customer experience becomes an even more critical ingredient for achieving success. Investments made to support the emerging service complexity could be wasted if customers cannot derive the intended value. Moreover, the assessment of service quality would be misleading unless the actual performance of user devices is considered as well.  As services become more sophisticated and complex, more functionality and features are migrating to user devices, thus making them an integral element in the overall service quality equation.

With 3G handsets based on open operating systems, the numbers of both device makers and third-party application vendors have skyrocketed. Handango, a leading supplier of applications for handsets and Personal Digital Assistants (PDAs), reported more than 11,000 new applications in 2005 from more than 1,200 new vendors, while “type approval” has given way to vendor certification. The GSM Suppliers Association reported that in 2006 there were 212 GSM/EDGE terminal devices available in the market from 33 vendors. Handset operating systems come from companies such as Microsoft, Symbian, and Qualcomm, among many others.
In this environment, the challenge for operators is clear. They must be able to deploy an enormous and quickly growing range of services on a host of intelligent devices, and ensure that those services operate successfully on each device. In short, operators need to augment the scope of service quality to include user device performance to better assess the actual customer experience derived from their IMS investments.
To further understand the problem, consider the following simple case: if a new service fails 90 per cent of the time on a user device that serves 10 per cent of the market, an analysis of the service as a whole will show a failure rate of only nine per cent. The reality, however, is that 10 per cent of the customers find the service barely usable and may move to another operator or stop using the service.
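The arithmetic can be made explicit. The sketch below simply works through the figures quoted above, assuming the service behaves normally on every other device-type:
```python
# Worked version of the example above: a service-level average dilutes a device-specific fault.
device_share   = 0.10  # the affected device-type serves 10 per cent of the market
device_failure = 0.90  # the service fails 90 per cent of the time on that device-type
other_failure  = 0.00  # assume the service works on all other device-types

service_level_rate = device_share * device_failure + (1 - device_share) * other_failure
print(f"Service-level failure rate: {service_level_rate:.0%}")               # 9% - looks tolerable
print(f"Failure rate seen by the affected customers: {device_failure:.0%}")  # 90% - unusable
```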
MobileGuru, a UK company that sells mobile phones and accessories, compared handset performance on a UK network and found that call drop rates can vary from about two per cent for the best performing devices to nearly 10 per cent for the worst performing devices. Even then, within a single device type, there may be significant variations in performance caused by batch problems in manufacturing, user configuration errors, or software download problems.
These issues are not new. GSM operators faced the dilemma of whether to issue recalls or modify their networks in the mid-nineties, when two of the leading handset vendors were found to have compatibility issues with their networks. At that time, they had to make software changes under controlled conditions since type approval would be invalidated if the changes were done incorrectly. The recall was avoided in those cases, but smaller manufacturers did have to recall devices.

Identifying the problem
The advent of Universal Serial Bus (USB), and changes in component prices, along with the relaxation of type approval, made home-based upgrades to handset software more common. A handset's International Mobile Equipment Identity (IMEI) can be used to identify the model and even the place of manufacture, but it is no longer a reliable indicator of software build or application set.
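The point about the IMEI can be made concrete with a small sketch. This is illustrative only: the 15-digit number below is made up for the example, the split into an eight-digit type allocation code and a six-digit serial follows the current numbering convention (the assembly-site information came from the older, separate allocation code), and the final digit is a Luhn check digit.
```python
# Illustrative sketch: splitting an IMEI into its allocation code, serial number and
# check digit, and verifying the Luhn check. The IMEI string is an example value only.

def split_imei(imei: str):
    digits = [int(d) for d in imei if d.isdigit()]
    assert len(digits) == 15, "a full IMEI has 15 digits"
    tac, serial, check = imei[:8], imei[8:14], imei[14]
    # The Luhn sum over the first 14 digits should reproduce the 15th
    total = 0
    for i, d in enumerate(digits[:14]):
        d = d * 2 if i % 2 else d        # double every second digit
        total += d - 9 if d > 9 else d
    valid = (10 - total % 10) % 10 == digits[14]
    return tac, serial, check, valid

print(split_imei("490154203237518"))   # ('49015420', '323751', '8', True)
```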
Further complicating the matter is the fact that handset operating systems are available to suit the preferences of any manufacturer or vendor. If you like Java, try Savaje; if you prefer Linux, then look at MontaVista; or if you are a Microsoft fan, there is a version of Windows available. The market leader, Symbian, grew out of UK PDA innovator Psion, and if you don't like the Symbian software, then Nokia supplies Series 60. Handango reports having more than 190,000 titles from 16,000 content partners supporting nine different handset operating systems.
Vodafone has reported a significant linkage between the rate of churn for residential customers and the numbers of services they use on a regular basis – the more services used, the more likely customers will stay. Of course, services will only be used if they work reliably.
The cost of unreliable service to operators can be measured not just in lost revenue but also in lost handset investment. Nokia, which supplies one in three of the world's handsets, reported that the average selling price of its handsets is US$125 (103EUR), while Sony Ericsson, which has more high end products, has an average selling price of $180 (149EUR). Some retailers in the UK are offering all Nokia handsets for free when bundled with a post-paid tariff, so the cost to the operator is around $146 (120EUR) for each handset.
The abundance of software and device options presents a number of challenges for operators. It also provides a unique opportunity for operators to better serve valuable customers. When root causes of problems can be traced to individual customers or device types, operators can develop new application versions or make changes to operating systems to correct the problems.
Consider, for example, the value to enterprise customers who, according to Yankee Group, make up 28 per cent of mobile operators' revenue. These customers can be advised on which mobile phones perform best for their services or can be given upgrades. Customer satisfaction would increase, as would retention levels for a valuable market segment.

Inspecting the packets
To monitor service quality, it has always been imperative that operators have ready access to reliable data. For voice calls, there are many options, from Call Detail Records (CDRs) to signalling probes. But for data services, and to support the move to IMS, more sophisticated, deep packet inspection probes are required.
Data service quality is determined by three main variables: network performance, device performance, and portal performance. Degradation in any of these variables will result in a poor customer experience.
Any effective analytical tools should permit early identification of trends so that solutions can be developed and customers informed before service is affected. Hotspots may occur when changes are made to services, access networks, or core networks, or when handset operating system upgrades are introduced, but these may be difficult to identify among millions of users, and operators must take care that “normal” user actions are not misinterpreted.
For example, with voice, a short call (one quickly terminated by the user) may raise no alarms but actually involve unacceptable voice quality, whereas with data, short sessions may be the result of high throughput, and long sessions may indicate problems.
By comparing different handset models running the same service, patterns can be established. Analysts should also consider whether particular models are only available to a limited user group, such as prepaid. Traditional service quality monitoring gets a view of specific services based on consolidated data extracted from the network, and by using field probes that perform synthetic transactions depicting various services and user behaviours. Although these approaches have merits, they fail to analyse service quality from actual service transactions that customers make. In other words, they fail to capture the trends and nuances of the true customer experience.

Human behaviour
It is, therefore, essential for operators to understand and constantly monitor the “behaviour” of all the various device-types used by their customers. Specifically, it is important to understand service performance by device-type as well as by specific device configuration, to understand how customers are affected by network or device issues.
Some of the typical issues that mobile operators face on a daily basis are:
• How to choose a device/s for a new service
• During trials, how to measure device performance by type, service, and configuration
• How to view device performance to identify service bottlenecks before they can affect the service, and how to identify affected (and potentially affected) customers
• How to know if a new configuration or update is performing as expected
• How to identify devices with high support overhead.
To answer these questions, operators need to go beyond probes that make educated guesses about performance based on small, “synthetic” samples. Operators need a solution that aggregates actual device performance around the clock from every service transaction.
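A minimal sketch, using hypothetical device names and record fields, illustrates the kind of per-device aggregation of real transactions being described here, as opposed to sampling with synthetic probes:
```python
# Illustrative sketch only: aggregating actual transaction records by device-type.
# Device names, fields and the 90 per cent threshold are hypothetical.

from collections import defaultdict

transactions = [
    {"device_type": "device_A", "service": "video_dl", "success": True},
    {"device_type": "device_A", "service": "video_dl", "success": True},
    {"device_type": "device_B", "service": "video_dl", "success": False},
    {"device_type": "device_B", "service": "video_dl", "success": True},
]

totals = defaultdict(lambda: [0, 0])             # device_type -> [successes, attempts]
for t in transactions:
    totals[t["device_type"]][1] += 1
    totals[t["device_type"]][0] += t["success"]

for device, (ok, attempts) in totals.items():
    rate = ok / attempts
    flag = "  <- investigate" if rate < 0.9 else ""
    print(f"{device}: {rate:.0%} success over {attempts} transactions{flag}")
```
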
Back to reality
The reality is that virtually every mobile operator supports dozens of device-types and millions of active devices. That means, to manage service quality and customer experience, operators must do more than monitor their networks and operations. They have to know, at any given time, the capabilities and limitations of all of their user devices, how those devices are performing, how they will handle new service offerings, and how customers are using them.
When operators have access to this level of user device performance intelligence, the business benefits are invaluable. For example, operators can: “see” how customers are reacting to marketing campaigns and special offers; recommend the best devices for customers when orders for new or expanded services are received; respond to customer calls with a holistic understanding of the customers' experience; offer targeted promotions to individuals or groups; provide incentives to device vendors based on verifiable performance and offer more focused SLAs.
As the choices for new and more complex services continue to grow, and user devices that offer more functionality emerge, mobile operators need to understand service quality from the device perspective. Operators can then expand their traditional service monitoring arsenals and realise the business benefits that result from higher customer satisfaction.

Matt Herdlein is Executive Director, Service Management, Telcordia

Bob Drummond discusses how operators can benefit from an agile, flexible and open platform to proactively deliver dynamic services to their customer base

What do the Glastonbury music festival, the Rugby World Cup and the Oscars have in common?  They are all high profile, internationally broadcast events that draw attention from millions of fans and dominate the agendas of society, newspapers and television for the short period of their duration.

These are all events that operators could capitalise on if they had the flexibility and agility to rapidly and economically deploy innovative services on their networks, even for a short period of time. For operators on the lookout for new revenue streams or the next ‘sticky’ application, this is a golden opportunity to engage new and existing mobile subscribers by riding the wave of highly popular live events with the offer of exciting applications.
Over the recent Cricket World Cup, what cricket fan would not enjoy winning a game involving the same team and opponents on his or her mobile phone? If your team didn’t win, replay the game on your mobile and see if you could have done better! Next time you’re on your way to watch your favourite football team play, what if you could play the match on your mobile – complete with the same starting team on the pitch and on the bench, correct strip, same opponent players and the same conditions…even down to the weather conditions?
With higher return visits to the application promised through this dynamic, always-fresh approach, and premium revenues on offer, what is holding operators back from introducing services, applications or games aligned to such headline-making events? Beyond understanding the opportunities presented by such events, how do operators meet the technological challenges that ensure that customers are happy with the new services they receive? 
The telecoms industry is challenged to achieve a business model that keeps costs down, maintains innovation and responds to competition from within and outside of its own marketplace. All of this whilst still creating profit and new revenue streams to stay in the game. The most difficult aspect of this challenge, however, has been created over the years by the operators themselves. It is the legacy of a history of growth that has seen additional, proprietary infrastructure systems installed for each wave of evolution and has resulted in vertically oriented, proprietary system infrastructures.
Proprietary Intelligent Network (IN) systems are typically monolithic in structure, with hardware, software and applications tightly integrated and designed to operate well as a unit. As a consequence they are expensive to maintain and enhance because operators are restricted to using the services of the vendor even for minor enhancements to the system. This creates a ‘lock-in’ environment where operators become increasingly reliant on the vendor for its ability to innovate. A new service capability can take years and millions of dollars to deploy in this environment, vastly affecting the feasibility, cost and timescale of bringing new services to market. 
Furthermore, the telecoms industry has typically invested in applications and platforms as and when needed, resulting in a mix of incompatible development, deployment and operational environments. Typically, the switching and services layers of the IN will be organised vertically – rather than with integration across the rest of the infrastructure in mind – producing a complex series of silo-based architectures where the cost to develop, deploy and maintain exciting new services for all subscribers is too high. The obstruction to innovation in new multi-media, multi-access, multi-network services means that operators face difficulties in delivering the rich, converged services that their customers want and that differentiate them in a crowded marketplace.
The ability to offer new services that piggyback high profile events such as a World Cup or the Live8 music festival requires a degree of agility and flexibility that silo design and proprietary lock-in of legacy infrastructures obstructs. So, without heavy investment in a new convergent architecture, what can operators do?
The answer lies in open standards. Compared to the world of Internet and enterprise applications, developing telecoms services on the traditional proprietary IN platforms is an outdated approach that is time-consuming and expensive.  Proprietary, vertically-integrated systems need to make way for openness, modularity and portability to create an environment for cost-effective service development. 
Operators have spent a decade demanding open platforms from their suppliers, even introducing a series of open standards initiatives, such as Parlay and JAIN, to drive this agenda.
JAIN SLEE is the open Java standard that is tailored to the large-scale execution of communications services across existing and Next Generation Networks. With JAIN SLEE-compliant application servers providing an open, flexible and carrier-grade service execution platform, operators can achieve agility in service development and deployment, and also capitalise on cost leadership. Application development is no longer controlled by the proprietary vendors, but open to input from operators' own in-house development teams and a competitive market of off-the-shelf application developers.
In this dynamic environment, a range of application developers can quickly and cost-effectively address market opportunities and roll out services in conjunction with events that hit their audience’s agenda. As JAIN SLEE addresses the need for a horizontal platform across the entire operator infrastructure, services can converge voice, data and video silos to provide truly innovative and compelling offerings that drive revenues and grow customer loyalty.
A live multi-media service for the Glastonbury music festival, for example, can be designed to appeal to an operator's high-spending audience of young adults. The open platform makes it flexible enough to update daily with news, weather, alerts and programme changes, as well as offer live downloads of artist tracks, in order to provide a compelling service for users.
With the move away from inflexible legacy telecoms networks to an open environment, operators can now benefit from a wide pool of third party developers for innovative and cost effective new applications.  For the type of applications discussed at the outset of this article, an agile and flexible platform also supports the modification or reconfiguration of an application during the lifetime of the related real-world event to continue providing a compelling service for repeat users based on service take-up, user behaviour and feedback received during the event.
Operators need to fully embrace the opportunity of such dynamic service delivery, or risk being left behind by users who come to expect more from their network. For operators such as Vodafone and O2 that sponsor high-profile events around the world, the opportunities are endless for increasing sponsorship returns, exploring new revenues and generating new levels of customer loyalty using dynamic service innovation.

Bob Drummond is VP of Marketing and Professional Services at OpenCloud

New technology is disrupting traditional advertising, and in its place different forms are evolving, offering very specifically targeted messages.  Lawrence Kenny and Rob van den Dam describe how advertising spend in the emerging online channels is now growing at a remarkable rate

The advent of emerging online advertising channels is making marketers lick their lips. These marketers are seeking more effective ways of optimising their expenditure and they are excited over the prospect of being able to target their ads in a highly personal way. They are spending more and more on targeted personalised advertising - at considerable cost to traditional advertising. Everyone is fighting for the new media advertising revenue. At the same time telcos have begun to realise that advertising can become an important source of revenue, an opportunity that they simply can't resist. 

Although telecom operators have little presence in advertising today, the medium represents an emerging opportunity that operators are uniquely positioned to address. They have unique assets that advertisers value. First of all, they have a large customer base. And with their authentication, authorisation and accounting controls, telcos are able to determine who the customer is and what services and products they are buying. This is useful not only for controlling where the ads go, but also for tracking advertising effectiveness.
Telcos have a direct relationship with customers. They collect vast quantities of customer data, which they can use to develop profiles of their subscribers, including demographic characteristics, personal attributes and preferences of those subscribers – and even, perhaps, their shopping habits and viewing patterns, provided the operators have the relevant analytical tools and capabilities. They can combine these customer insights with their ability to identify where individual users are based and offer highly targeted, localised promotions. Moreover, many operators have already developed solid relationships with local advertisers through their directory businesses.
Telcos are also well placed to enable the advertising experience practically anywhere, on any device and at any time. They can, for instance, manage the delivery of ads across the mobile phone, PC and TV-set; over fixed, wireless and other networks. What's more, they also provide a direct interactive response channel for the customers, and a feedback loop to advertisers allowing them to track advertising performance.
As telcos move into media - an industry that has historically been part-funded through advertising - they will find that relying on subscriptions and pay-per-view models is unsustainable in a world where consumers do not expect to pay for all content. Content is expensive to generate and offer to consumers, and advertising provides a means to offer richer content at a more reasonable cost. Many telcos are therefore experimenting with opt-in advertising plans to fund content. Perhaps this is the most significant benefit, as it allows consumer access to richer content and media. Advertising may also provide consumers with access to content they previously were unaware of. A number of operators are already taking steps toward adding advertising on IPTV and cell phones.

IPTV advertising
The big advertising revenue still comes from television. But the traditional TV advertising model is becoming increasingly unsustainable. With the shift from analogue to digital broadcasting, the number of TV channels has multiplied, and audiences are becoming much more fragmented. This reduces the efficacy of an approach that relies on centrally scheduled programmes to deliver real-time advertising to a large, undifferentiated audience, and on ratings to estimate the size of that audience. The result is low effectiveness: advertisers have to pay for a large audience even when they are only trying to reach a small target group, which makes TV ads expensive.
IPTV could provide the answer. IPTV presents the opportunity to combine the powerful brand-building effect of conventional TV-quality advertising with the strengths of online; the ability to target specific audiences and allow customers to easily pursue their interest in a product, even to the point of purchase.
IPTV is an advertiser's dream. With IPTV, telcos have the ability to control where the ads go to - targeted at large groups, small groups or even individual television sets within a single household. 
The ads can be fine-tuned to the people within a household most likely to be watching at a certain time. When watching IPTV, users will be able to freeze the programming in order to interact with any advertising that attracts their attention, submit their details for further information on a brand or in some cases make an online purchase. And IPTV provides the means to measure precisely how many people have seen a particular advertisement. Payment models can be geared to actual viewers watching, the number of “red button” presses, or perhaps a percentage of sales.
With IPTV the ways in which ads can be personalised are limitless. Different ads can be generated once one ad has been shown a specific number of times. It gives advertisers the benefit that their ads won't annoy irrelevant audiences, or be shown too often and alienate their customers. IPTV also opens new opportunities to diversify ad formats. Ads can be placed when the set-top box boots up, on information screens, as a screensaver, as a buffer when a movie loads, or dynamically in the video streams. The facility to 'telescope' out an advertisement could be possible using a click-through function for the consumer. There is also the possibility of search and recommendation, perhaps in partnership with an Internet search engine such as Google.
IPTV could provide a gateway to Internet advertising for sectors traditionally reluctant to embrace the medium. And IPTV will attract local companies who would otherwise not have considered TV advertising as an option. Telecom Austria has already explored ultra-local TV advertising in the village of Engerwitzdorf and found that it particularly attracted local companies.
In Europe, the French IPTV market is leading the pack in targeted advertising trials, but IPTV providers in other European countries are also experimenting with advertising. Examples are Tiscali TV (formerly known as Homechoice) running a dedicated Honda channel in the UK, and Telecom Austria. BT is talking to both brands and agencies about offering (Vision) IPTV advertising. In the US, Verizon is currently deploying the technical tools that will allow it to insert local ads into its programming. On that foundation, the telco plans to introduce more targeted and interactive ads in its FIOS IPTV service. Though advanced ad deployments are still a ways off, AT&T (with its U-verse IPTV service) also likes the promise of an ad play that combines mobile phones, television and the Internet.

Mobile advertising
Mobile advertising represents another unexploited opportunity for telecom operators. It is one that telcos are particularly well-positioned to capture since they have control over what is delivered to the device and are the only companies that have the right to know the location of their subscribers, information that advertisers would love to use to target customers. The mobile phone is the most personal consumer device we own, and one that most people carry with them 24 hours a day. It affords advertisers an opportunity to present very targeted and time-sensitive information that is of interest to the user. With nearly three billion cell phone users in the world, it's clear that mobile advertising represents a huge opportunity. Informa Telecom & Media predicts that worldwide spend on mobile advertising will reach $11.35 billion in 2011.
Advertising on mobile devices can take many forms, including banners, sponsored video content and messages sent to users, but telcos and advertisers still need to determine what works best in different circumstances. Advertising techniques cannot simply be copied from the Internet. The screens and devices are smaller; the exposure time tolerated by the user is likely to be less; too many click-throughs will annoy users; and in many cases, operators must be able to identify the device type to render content appropriately.
Even more so than with Internet advertising, mobile advertising must be relevant, interesting to the audience and, especially, not overbearing in quantity. In fact, mobile advertising should be a combination of search, location and presence, and recommendation functions, based on a deep understanding of the consumer's passions, hobbies, purchases, past click-patterns and the like.
Outside Asia, where mobile advertising has grown rapidly in markets like Japan where NTT Docomo has been running small banner ads on its mobile portals for more than five years, the mobile operators have moved cautiously in adding advertising on cell phones for fear of alienating subscribers and increasing churn by doing so. But there have been a number of initiatives.
In the summer of 2006, Virgin Mobile USA introduced a programme called Sugar Mama, which compensates its phone users with free calling minutes for watching commercials, reading advertiser text messages and taking surveys from brands. In its first seven months, the Sugar Mama campaign awarded 3 million minutes to about 250,000 registered customers. Virgin Mobile recently announced that it will use JumpTap's search-based advertising platform to offer ads that are highly targeted and relevant for its users. Companies such as Verizon, Sprint and Cingular are now also beginning to test and roll out advertising on mobile phone screens.
In Europe, EMI Music and T-Mobile joined forces at the end of 2006 to pilot ad-supported mobile videos in Britain. Ad-funding company Amobee has recently launched a commercial advertising trial with Orange in France, with such companies as Coca-Cola and Saab having signed up for the trial. Orange customers interested in playing games will be offered them for free, or at a reduced rate, if they first agree to watch an advert. Mobile operator 3UK announced the launch of a service in April supported by personalised advertising to provide free content for its users. Vodafone and Yahoo! also aim to launch a mobile advertising business in the first half of this year.
However, media brands such as Fox News, USA Today and The New York Times are now also joining the game by providing advertising via their mobile websites, which are accessed directly through a mobile browser and not through a mobile operator's menu. And they are not the only parties that think there will be big business for them down the road. Internet players Google and Yahoo! have already started to include advertising in their mobile search and portal properties. Yahoo! has even launched a mobile advertising platform in 19 countries across Europe, Asia and the Americas, instantly enabling advertisers to reach consumers around the globe on their mobile phones. Advertisers already signed up include the Hilton Hotel group, Pepsi and Singapore Airlines. And then there is Nokia, also jumping onto the mobile advertising bandwagon, by announcing two mobile advertising services designed for targeted campaigns on the handsets.
Highly targeted and addressable advertising will increase advertising revenue per viewer significantly, while the viewer experience becomes more personalised and better received. Several studies have confirmed that subscribers are more likely to respond favourably to advertisements if the topic is of interest to them. This type of advertising, however, raises the issue of privacy. There are acts in both Europe and the US to ensure that user-specific data is not used for any purpose other than for providing the telecommunications service itself. “Opting in” may well be seen as the route to go, and could prove popular with consumers by giving them increasingly relevant ads. Here consumers allow their “user-specific” data to be used, in return for being included in special offers.
Many parties, from marketers to big media companies, to handset makers, to Internet players, to telecom operators, hope to get a piece of the pie. But operators have the demographic, transactional, behavioural and location data necessary to deliver marketing and advertising that meets the consumer need for relevant advertising. Operators are now at the point where they should exploit their unique technical advantages to secure their part of the pie.

Lawrence Kenny is Global Telecommunications Industry Leader for IBM Global Business Services.  Rob van den Dam is European Telecommunications Leader for the IBM Institute for Business Value

With ADSL2+ technology now being pushed to its absolute limits, carriers are talking about the next generation of broadband, VDSL and VDSL2, with speeds of up to 200 Mbits/s on relatively short line lengths from the DSLAM. Jorg Franzke explains how it is possible to roll out the speed benefits of VDSL and VDSL2 to most urban and city customers, without breaking the telco bank

It’s been an eventful decade across Europe as former state incumbents, cable companies and virtual telcos have all, seemingly en-masse, jumped on the broadband telecoms wagon and rolled out an increasing range of high-speed broadband services to their customers.
Most experts agree that, even with ADSL2+ offering customers access to up to 24 Mbits/s downstream data speeds, customers’ appetites for even faster speed services are still increasing, with some cable companies already talking about offering 100 Mbits/s as standard.

The only problem with this new generation of very high-speed broadband services is that they rely on VDSL and VDSL2 technology. Whilst ADSL2+ can happily support copper line lengths of two or more kilometres, the maximum available rates are achieved with VDSL at a maximum range of just 300 metres from the DSLAM (digital subscriber line access multiplexer), which gives around 52 Mbits/s. When we move on up to VDSL2 (ITU G.993.2) technology, carriers are even talking about rates of up to 200 Mbits/s.
But VDSL2 deteriorates quickly from a theoretical maximum of 250 Mbits/s at zero metres from the DSLAM to 100 Mbits/s at 500 metres, and 50 Mbits/s at 1.0 kilometre. As a result of these line length limitations, very few customers will be within the coverage range of VDSL2 DSLAMs installed at the central exchange. So, most local loop carriers are discussing moving the active electronics, including the DSLAMs, out of their central offices and into larger versions of the roadside cabinets that form an integral part of the street furniture we see every day.
A major issue is that, with the move out of the central office comes the de-centralisation of the main distribution frame where connections have to be moved to initiate new services such as ADSL and VDSL.  Each time a customer requests a change of service, jumper wires have to be moved - fairly easy and efficient in a warm, dry clean centralised environment – but once the connections have to be made in the cold and rain it becomes an operational issue.  To give the reader an idea of the massive scale involved, however, a network the size of BT in the UK would require around 65,000 of these externally deployed active electronics cabinets. A network the size of Germany’s T-Com would require around 100,000 such cabinets.
In theory, the incumbent telcos could employ teams of roving engineers to maintain and provision the cabinets in much the same way as central offices are serviced at the moment, but the costs associated with the necessary engineering ‘truck rolls’ are prohibitive on both financial and ecological fronts. Even one visit per fortnight, at say €50 per technician visit, would clock up costs of €130 million per annum on a 100,000-cabinet network.
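The figure is straightforward to reproduce. The following sketch simply restates the calculation using the numbers quoted above:
```python
# The arithmetic behind the figure above: fortnightly visits to every cabinet.
cabinets        = 100_000   # a T-Com-sized network, as quoted above
cost_per_visit  = 50        # EUR per technician visit, as quoted above
visits_per_year = 26        # one visit per fortnight

annual_cost = cabinets * cost_per_visit * visits_per_year
print(f"EUR {annual_cost:,} per annum")   # EUR 130,000,000
```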
Consequently, any carrier electing to stay with the status quo and implement manual re-jumpering at the thousands upon thousands of active equipment roadside cabinets, will be forced to reduce their costs by making only scheduled visits. The corollary of this is that each cabinet may only figure in the schedule once every fortnight, meaning that the time-to-provision each customer will become much longer than currently is the case.
New approach
A markedly different technique is needed and newly developed automatic cross-connects  (ACX) can now be used to replace manual distribution/jumpering frames in the remote cabinets and so save carriers significant sums of money on the operational expenditure (OpEx) front.
With an automated ACX solution, not only are there no delays in waiting for an appropriate technician truck roll, but also the control of the ACX can be integrated directly into the carrier’s operations support system. Using this approach allows the service connection to follow on automatically from the customer's order, within an hour or two, rather than the customer facing a wait of several days, as is currently normally the case with a central office, or several weeks in the case of the manual re-jumpering scenario described above.
Many carriers and manufacturers alike are chasing a holy grail of Zero Touch for their networks. We have been pioneering a more pragmatic approach based on practicality and best return on investment.
The aim of a Zero Touch network would, of course, be that the field technician never needs to visit the remote site. That is all very well until you take into account that active equipment, its power supplies and its air-conditioning go wrong from time to time. So occasional technician visits are inevitable.
Zero touch systems have other drawbacks, not the least of which is the fact that the purchase costs can be substantial, so reducing the installation's return on investment. In the case of automated cross connects, Zero-touch would need a non-blocking switching matrix which is very expensive, and current non-blocking technology simply isn’t up to the job of transmitting 100Mbit/s signals.
The third issue with zero touch systems is the fact that the cabinet needs to be equipped with a large degree of reserve DSLAM and splitter capacity, ditto power supplies and air-conditioning, so seriously increasing the levels of capital expenditure required.
Our theory is simple. What happens if we introduce a minimum number of technician truck rolls to the mix, creating a ‘minimum touch’, not zero touch, active electronics-based local loop?
This is where the financials begin to get interesting, as a minimum touch network is far more financially viable. It requires significantly lower levels of capital expenditure with very similar levels of operating costs. Less spare capacity is needed as this can be added when demand dictates. Likewise, a much less expensive semi-blocking ACX can be used – with the full frequency range for 100 Mbits/s service delivery.
A good minimum touch system has the advantage of automating the provisioning and re-provisioning of lines without incurring the high capital costs of a zero touch system, or attenuating the signal levels required for effective VDSL and VDSL2 transmissions.
Well before the ACX system reaches saturation levels, it can signal its status to the central exchange, allowing engineers to make a planned site visit, install additional capacity if needed and hardwire connections already switched through to VDSL freeing up the switch ports to be used again for the next six or twelve months.
The result of this approach is good scalability, lower cost per line and a reduced space requirement. And all without affecting those all-important customer satisfaction levels.
Using a minimum touch ACX approach means that only one or two maintenance visits each year are required for each cabinet, with remote monitoring shouldering the responsibility of maximising network up-time.
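Using the same per-visit cost assumed earlier, the contrast with the fortnightly scenario is stark; the figures below are indicative only:
```python
# Hedged comparison using the same assumed EUR 50 per visit: one or two planned
# visits per cabinet per year instead of a fortnightly schedule.
cabinets       = 100_000
cost_per_visit = 50   # EUR, as assumed earlier

for visits_per_year in (1, 2):
    print(f"{visits_per_year} visit(s)/year: EUR {cabinets * cost_per_visit * visits_per_year:,}")
# EUR 5,000,000 to 10,000,000 per annum, versus EUR 130,000,000 for fortnightly visits
```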
In the event that something like a DSLAM card fails, the ability of ACX to connect ‘any-to-any’ can be employed to ensure that customers are only minimally affected by any technical problems. Depending on the severity of the failure, the cabinet's active technology can be remotely reconfigured to maintain service for the customers affected and the network operations centre can schedule a truck roll when it suits the operator. This makes for a more cost-effective maintenance strategy.
ACX technologies
In a survey of switching technology for remote automated cross connect devices used in next generation carrier networks, research company Venture Development Corporation (VDC) considered a number of technologies, but rejected robotic and solid-state/electronic switching, the former being error prone, expensive and with poor life expectancy, whilst solid-state/electronic switches have electrical parameters that make them unsuitable for the high bandwidth requirements of xDSL services like VDSL2. VDC also noted that a very specific `electromagnetic’ variation of the MEMS relay may become a suitable technology, but this is currently only in testing as regards ACX applications and, as yet, has no field application track record.
VDC concluded: “We believe the electromagnetic relay is acceptable technology because of its proven reliability, ruggedness and minimal transmission impairment.”  It did not judge any other technology to be currently acceptable. This, and the fact that the power requirements are so minimal, are the reasons we chose to develop our own ACX product range around the tried and tested electromagnetic relay.
Obviously whether or not to implement ACX or to manage the service provision process manually is a matter for individual carriers. The choice of technology is critical from the perspective of reliability, minimal power consumption and the ability to handle the very high frequencies needed for VDSL2, but far more important in this rapidly changing telecoms word is the need for rapid return on investment.
It is our contention that Zero Touch is a step too far and that in the world of every day  engineering issues, Minimum Touch networks and minimum touch ACX are the way to minimised costs.

Jorg Franzke is ACX product manager for ADC KRONE, and can be contacted via tel: +49 308453-2498; e-mail: jörg.franzke@adckrone.com
www.adckrone.com

End-to-end transaction data is increasingly being recognised as the not-so-secret sauce required for full-flavoured telco transformation. If so, it should be treated with the reverence it deserves, Thomas Sutter, CEO of data collection and correlation specialist Nexus Telecom, tells Ian Scales

Nexus Telecom is a performance and service assurance specialist in the telecom OSS field. It is privately held, based in Switzerland and was founded in 1994. With 120 employees and about US$30 million turnover, Nexus Telecom can fairly be described as a 'niche' player within a niche telecom market. However, heavyweights amongst its 200 plus customers include Vodafone, T-Mobile and Deutsche Telekom.

It does most of its business in Europe and has found its greatest success in the mobile market. The core of its offer to telcos involves a range of network monitoring probes and service and revenue assurance applications, which telcos can use to plan network capacity, identify performance trends and problems and to verify service levels. Essentially, says CEO Thomas Sutter, Nexus Telecom gathers event data from the network - from low-level network stats right up to layer 7 applications transactions - verifies, correlates and aggregates it, and generally makes it digestible for both its own applications and those delivered by other vendors. What's changing, though, is the importance of such end-to-end transaction data.
Nexus Telecom is proud of its 'open source approach' to the data it extracts from its customers' networks and feels strongly that telcos must demand similar openness from all their suppliers if the OSS/BSS field is to develop properly.  Instead of allowing proprietary approaches to data collection and use at network, service and business levels respectively, Sutter says the industry must support an architecture with a central transaction record repository capable of being easily interrogated by the growing number of business and technical applications that demand access.  It's an idea whose time may have come.  According to Sutter, telcos are increasingly grasping the idea that data collection, correlation and aggregation is not just an activity that will help you tweak the network, it's about using data to control the business. The term 'transformation' is being increasingly used in telecom.
As currently understood it usually means applying new thinking and new technology in equal measure: not just to do what you already do slightly better or cheaper, but to completely rethink the corporate approach and direction, and maybe even the business model itself. 
There is a growing conviction that telco transformation through the use of detailed end-to-end transaction data to understand and interact with specific customers has moved from interesting concept to urgent requirement as new competitors, such as Google and eBay, enter the telecom market, as it were, pre-transformed. Born and bred on the Internet, their sophisticated use of network and applications data to inform and drive customer interaction is not some new technique, cleverly adopted and incorporated, but is completely integral to the way they understand and implement their business activities. If they are to survive and prosper, telcos have to catch up and value data in a similar way.  Sutter says some are, but some are still grappling with the concepts. 
"Today I can talk to customers who believe that if they adopt converged networks with IP backbones, then the only thing they need do to stay ahead in the business is to build enough bandwidth into the core of the network, believing that as long as they have enough bandwidth everything will be OK."
This misses the point in a number of ways, claims Sutter. 
"Just because the IP architecture is simple doesn't mean that the applications and supply chain we have to run over it are simple  - in fact it's rather the other way about.  The 'simple' network requires that the supporting service layers have to be more complex because they have to do more work." 
And in an increasingly complex telco business environment, where players are engaged with a growing number of partners to deliver services and content, understanding how events ripple across networks and applications is crucial.
"The thing about this business is not just about what you're doing in your own network - it's about what the other guy is doing with his. We are beginning to talk about our supply chains. In fact the services are generating millions of them every day because supply chains happen automatically when a service, let's say a voice call over an IP network, gets initiated, established, delivered and then released again. These supply chains are highly complex and you need to make sure all the events have been properly recorded and that your customer services are working as they should. That's the first thing, but there's much more than that.  Telcos need to harness network data - I call them 'transactions' - to develop their businesses."
Sutter thinks the telecom industry still has a long way to go to understand how important end-to-end transaction data will be.
"Take banking. Nobody in that industry has any doubt that they should know every single detail on any part of a transaction. In telecoms we've so far been happy to derive statistics rather than transaction records. Statistics that tell us if services are up and running or if customers are generally happy. We are still thinking about how much we need to know, so we are at the very beginning of this process."
So end-to-end transaction data is important and will grow in importance.  How does Nexus Telecom see itself developing with the market?
"When you look at what vendors deliver from their equipment domains it becomes obvious that they are not delivering the right sort of information. They tend to deliver a lot of event data in the form of alarms and they deliver performance data - layer 1 to layer 4 - all on a statistical basis.  This tells you what's happening so you can plan network capacity and so on.  But these systems never, ever go to layer 7 and tell you about transaction details - we can. 
"Nexus Telecom uses passive probes (which just listen to traffic rather than engage interactively with network elements) which we can deploy independently of any vendor and sidestep interoperability problems.  Our job is to just listen so all we need is for the equipment provider to implement the protocols in compliance with the given standards."
So given that telcos are recognising the need to gather and store this data, what's the future OSS transaction record architecture going to look like?
"I think people are starting to understand it's important that we only collect the data once and then store it in an open way so that different departments and organisations can access it at the granularity and over the time intervals they require, and in real (or close to real) time. So that means that our  approach and the language we use must change. Where today we conceptualise data operating at specific layers - network, service and business - I can see us developing an architecture which envisages all network data as a single collection which can be used selectively by applications operating at any or all of those three layers.  So we will, instead, define layers to help us organise the transaction record lifecycle. I envisage a collection layer orchestrating transaction collection, correlation and aggregation.  Then we could have a storage layer, and finally some sort of presentation layer so that data can be assembled in an appropriate format for its different constituencies  - the  marketing  people, billing people, management guys, network operation guys and so on, each of which have their own particular requirements towards being in control of the service delivery chain. Here you might start to talk about OSS/BSS Convergence."
Does he see his company going 'up the stack' to tackle some of these applications in the future?
"It is more important to have open interfaces around this layering.  We think our role at Nexus Telecom is to capture, correlate, aggregate and pre-process data and then stream or transfer it in the right granularity and resolution to any other open system."
Sutter thinks the supplier market is already evolving in a way that makes sense for this model.
"If you look at the market today you see there are a lot of companies - HP, Telcordia, Agilent and Arantech, just to name a few - who are developing all sorts of tools to do with customer experience or service quality data warehouses.  We're complementary since these players don't want to be involved in talking to network elements, capturing data or being in direct connection with the network.  Their role is to provide customised information such as specific service-based KPIs (key performance indicators) to a very precise set of users, and they just want a data source for that."
So what needs to be developed to support this sort of role split between suppliers? An open architecture for the exchange of data between systems is fundamental, says Sutter. In the past, he says, the ability of each vendor to control the data generated by his own applications was seen as fundamental to his own business model and was jealously guarded. Part of this could be attributed to the old-fashioned instinct to 'lock in' customers. 
"They had to ask the original vendor to build another release and another release just to get access to their own data," he says. But it was also natural caution.  "You would come along and ask, 'Hey guys, can you give me access to your database?', the response would be 'Woah, don't touch my database.  If you do then I can't guarantee performance and reliability.' This was the problem for all of us and that's why we have to get this open architecture. If the industry accepts the idea of open data repositories as a principle, immediately all the vendors of performance management systems, for instance, will have to cut their products into two pieces.  One piece will collect the data, correlate and aggregate it, the second will run the application and the presentation to the user.  At the split they must put in a standard interface supporting standards such as JMS, XML or SNMP. That way they expose an open interface at the join so that data may be stored in an open data to the repository as well as exchanged with their own application. When telcos demand this architecture, the game changes. Operators will begin to buy separate best in class products for collecting the data and presenting it and this will be a good thing for the entire industry.  After all, why should I prevent my customer having the full benefit of the data I collect for him just because I'm not as good in the presentation and applications layer as I am in the collection layer? If an operator is not happy with a specific reporting application on service quality and wants to replace it, why should he always loose the whole data collection and repository for that application at the same time?"
With the OSS industry both developing and consolidating, does Nexus Telecom see itself being bought out by a larger OSS/BSS player looking for a missing piece in its product portfolio?
"Nexus Telecom is a private company so we think long-term and we grow at between 10 and 20 per cent each year, investing what we earn. In this industry, when you are focusing on a specialisation such as we are, the business can be very volatile and, on a quarter-by-quarter basis, it sometimes doesn't look good from a stock market perspective."
But if a public company came along and offered a large amount of money? "Well, I'm not sure. The thing is that our way of treating customers, our long-term thinking and our stability would be lost if we were snapped up by a large vendor. Our customers tend to say things like  'I know you won't come through my door and tell me that someone somewhere in the US has decided to buy this and sell that and therefore we have to change strategy.' Having said that, every company is for sale for the right price, but it would have to be a good price."
So where can Nexus Telecom go from here?  Is there an opportunity to apply the data collection and correlation expertise to sectors outside telecom, for instance?
"Well, the best place to go is just next door and for us that's the enterprise network. The thing is, enterprise networks are increasingly being outsourced to outsourcing companies, which then complete the circle and essentially become operators. So again we're seeing some more convergence and any requirement for capturing, correlating and aggregating of transactions on the network infrastructure is a potential market for us. In the end I think everything will come together: there will be networks and operators of networks and they will need transaction monitoring.  But at the moment we're busy dealing with the transition to IP - we have to master the technology there first.”

Ian Scales is a freelance communications journalist.

    
