Customer experience will only improve when customers are viewed as individuals, not account numbers, says Giovanni Pellegrini

In the business environment waste and inefficiency are quite rightly abhorred. Conversely, efforts are poured into increasing productivity and efficiency. Recent attempts in the field are ever more focused on achieving this objective via improved customer satisfaction and strengthened customer loyalty.

In the highly fragmented media environment, however, consumers are empowered to switch allegiance with the ease of a mouse click. Suppliers are becoming ever more aware that any action that is perceived as a slight by the consumer, such as the still widespread phenomenon of misaddressed mail, will be met with defection.

Recent research by Pitney Bowes Group 1 Software indicates that increasingly fierce competition has not been met by the implementation of successful retention strategies.
"The Dynamics of Defection" report found in fact that customer churn is on the rise throughout Europe having reached almost 19 per cent across key consumer industries in 2007. The mobile telecoms industry was found to be particularly affected by defections with just over one in five consumers (21 per cent) switching mobile telecoms provider in continental Europe in 2007.  Consequently European mobile telecoms providers are urgently shifting their focus from acquisition to customer retention.

In order to increase loyalty and improve retention, the telecoms industry needs to depart from an impersonal 'account' approach to campaign management - where elements of the communication cycle are handled remotely and/or disparately - and opt instead to create a two-way, business-to-individual-back-to-business closed-loop process.

Organisations have invested large amounts into implementing complex Customer Relationship Management (CRM) systems, but general opinion holds that they have failed to take off due to a lack of change in corporate mentality: customer experience cannot improve if customers are still viewed as account numbers and not individuals.

The set of activities that lies at the heart of CRM under the heading "customer communications" has in the past been largely overlooked. These activities span everything from data management and address data quality to personalised document generation, electronic bill presentment and payment (EBPP), document management and even call centre operations.

To ensure these customer communications fulfil their purpose and truly engage the customer, it is necessary to integrate them with the appropriate business processes they connect with, however disparate they may appear. The analysis drawn from these integrated information streams equips businesses to reach out to their customers intelligently. This integration of key business processes and their related information streams into CRM defines and drives Customer Communications Management (CCM).
While CRM is fundamentally customer facing and outwardly focused, CCM, by capturing the external customer information and linking it to internal business processes such as those drawn from the marketing, sales and other departments, can create a comprehensive Single Customer View (SCV).

There are seven equally viable points of entry to a comprehensive enterprise CCM solution. This solution should at all times be fully scalable and entirely compatible with line of business legacy systems.

Data access and integration
The first step to more effective customer communications is gaining an all-round view of the customer as an individual.  To this end the CCM data access and integration tools give instant, seamless access to customer information wherever in the business it is stored.
Companies can then consolidate and integrate this data across the systems in which it resides to finally obtain an SCV. These tools also give marketers the ability to generate business intelligence reports, marketing campaign analyses, customer segmentations and audits.  Most importantly, however, managers are empowered to make more timely and informed business decisions.
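As a rough illustration of the consolidation step, the sketch below merges per-system customer records into a single view keyed on a shared customer ID. The record layout and field names are invented for the example and do not reflect any particular CCM product:

```python
def build_scv(sources):
    """Merge per-system customer records into one view per customer.

    `sources` maps a system name (e.g. 'billing', 'crm') to a list of
    records; each record is assumed to carry a shared 'customer_id' key.
    """
    scv = {}
    for system, records in sources.items():
        for record in records:
            cid = record["customer_id"]
            view = scv.setdefault(cid, {"customer_id": cid, "systems": []})
            view["systems"].append(system)
            for field, value in record.items():
                # Keep the first non-empty value seen for each field.
                if field != "customer_id" and value and field not in view:
                    view[field] = value
    return scv

sources = {
    "crm":     [{"customer_id": "C42", "name": "A. Rossi", "segment": "gold"}],
    "billing": [{"customer_id": "C42", "name": "A. Rossi", "balance": 12.50}],
}
view = build_scv(sources)["C42"]
print(view["segment"], view["balance"])  # gold 12.5
```

A real deployment would of course resolve field conflicts with survivorship rules rather than simply keeping the first value seen, but the shape of the problem is the same.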

Data manipulation
Data manipulation tools perform address cleansing and mail coding tasks to avoid duplication and reduce print and mail costs, ensure prompt delivery and increase response rates.
More sophisticated data manipulation tools are able to target offers based on specific business geographies and create customer profiles defined by household demographics.  As a result, companies can accurately predict response rates for a range of offers and identify up-sell and cross-sell opportunities on the fly.
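A minimal sketch of the cleansing and de-duplication idea, assuming a shared record layout; the normalisation rules here are deliberately simplistic stand-ins for what a commercial data-quality engine applies:

```python
import re

# Unify common street-type abbreviations (illustrative subset only).
ABBREVIATIONS = {"street": "st", "road": "rd", "avenue": "ave"}

def normalise(address):
    """Lower-case, strip punctuation, and unify common abbreviations."""
    tokens = re.sub(r"[^\w\s]", "", address.lower()).split()
    return " ".join(ABBREVIATIONS.get(t, t) for t in tokens)

def dedupe(records):
    """Keep one record per normalised (name, address) pair."""
    seen = set()
    unique = []
    for rec in records:
        key = (rec["name"].lower(), normalise(rec["address"]))
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

mailing = [
    {"name": "J. Smith", "address": "12 High Street"},
    {"name": "J. Smith", "address": "12 High St."},   # duplicate
]
print(len(dedupe(mailing)))  # 1
```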

Document creation
Document creation tools provide a single, easy-to-use approach to creating one-to-one, multi-channel communications for both high-volume and interactive letter production. These tools are designed to create a wide range of documents efficiently: contracts, complex bills, insurance policies, bank account statements, and even packages containing multiple documents such as travel booklets.

Document creation tools can also help speed the document development process: once created, business rules, templates, text and other content can be re-used across applications and multiple delivery channels, and distributed via the web, SMS, fax, e-mail and print.

Production / Distribution
Production/distribution tools streamline both high-volume and on-demand production of all forms of customer communication.  In addition, they allow users to proof and distribute documents over the web prior to giving final authorisation.

Data vault
The data vault places all customer data into a single, secure yet accessible electronic environment.  The vault needs to be able to integrate both print and digital files at a much lower cost than expensive PDF- and HTML-based solutions.  The modular architecture of a CCM data vault should allow call centre, customer self-service and EBPP applications to be deployed rapidly.

Customer & call centre support
With the availability of a centralised CCM data vault, call centres can drastically reduce call handling and conflict resolution times by instantly retrieving exact replicas of all customer documents.

Customer support can also be extended to provide 24/7 web self-service for the individual customer's account, enabling customers to retrieve information and make payments online autonomously.

Replenishment
Replenishment tools create closed transaction loops by providing automated updates and connecting all communications back to the related business processes.  For instance, they can link to accounts receivable for round-trip processing, mine data from dynamic documents and continuously refine business intelligence.  With replenishment tools it is possible to reduce remittance processing errors and costs and generate accurate, time-sensitive financial reporting.
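The closed-loop idea can be sketched as a simple reconciliation pass that matches remittances back to open invoices and routes mismatches to an exception queue. All record shapes and names here are hypothetical:

```python
def reconcile(invoices, payments):
    """Return (matched, exceptions) by invoice reference and amount."""
    open_invoices = {inv["ref"]: inv["amount"] for inv in invoices}
    matched, exceptions = [], []
    for pay in payments:
        expected = open_invoices.get(pay["ref"])
        if expected == pay["amount"]:
            matched.append(pay["ref"])
            del open_invoices[pay["ref"]]  # close the loop on this invoice
        else:
            exceptions.append(pay["ref"])  # wrong amount or unknown reference
    return matched, exceptions

invoices = [{"ref": "INV-1", "amount": 100.0}, {"ref": "INV-2", "amount": 55.0}]
payments = [{"ref": "INV-1", "amount": 100.0}, {"ref": "INV-9", "amount": 10.0}]
print(reconcile(invoices, payments))  # (['INV-1'], ['INV-9'])
```

Feeding the exception list back into the originating business process, rather than leaving it in a print-and-mail silo, is precisely the round-trip behaviour the article describes.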

Giovanni Pellegrini is Sales Director Southern Europe, Pitney Bowes Group 1 Software

The wide variety of technology formats that promise consumers access to premium content any time, any place, anywhere is putting conditional access systems under the spotlight.  Lynd Morley takes a look

The application of conditional access (CA) used to be fairly simple to grasp: the protection of content - most commonly sent to digital television systems via cable or satellite - requiring certain criteria to be met before granting access to said content.  OK, so install the Sky box near your TV; insert the smart card; and you're away. 

But the increasing digitisation of content and the plethora of new distribution methodologies and business models are rather complicating the picture.  Consumers can now access content via a wide range of devices, and at a time and place of their choice.  And while content distributors and service providers are clearly expanding the boundaries and enhancing the reach of their services to the consumer, such premium content is, inevitably, increasingly exposed to the risks of piracy and theft.  Indeed, the increasing prevalence of broadband networks and the ease with which digital media can be cloned and distributed, combine great opportunity and great risk for the owners of premium content, raising security to a high priority.

Doubtless that is one of the reasons that market analysts remain fairly bullish about the conditional access sector, which is set to generate revenues approaching $1.4 billion this year, according to ABI Research, who also note that telcos will be taking an increasingly large slice of the pie at the expense of cable and satellite industries.  Not that cable and satellite players are about to disappear, but industry analyst Zippy Aima notes: "The options now offered by new deployments of mobile and IP TV - including interactive and on-demand content, time-shifting and place-shifting - are generating a buzz that drives demand for their premium content to a wider audience."

Certainly conditional access players are addressing a fast-changing, challenging market, where all participants - whether content providers, distributors or security solutions vendors - continue to jockey for position and battle for market share, all with an eye to the technology changes and developments that will impact their success.  Indeed, the marketplace has recently seen not only hard competitive selling, but also the unedifying spectacle of court action over one solutions vendor's alleged cracking of a rival's smart card encryption - the results allegedly passed on to pirates - in order to gain competitive advantage.  This may raise the question of who should actually be described as a ‘pirate'.
For the most part, however, CA solutions vendors, including such leading lights as Irdeto, Viaccess, Conax, Nagravision, and Verimatrix take a more conventional competitive stance in arguing the advantages of their particular approach or products. And faced with a fast developing, and clearly highly competitive market, they are also finding different ways to present competitive advantage.  Irdeto, for example, is now offering what it describes as a full range of Content and Business Model Protection.  Having established its name in content security, with more than 400 CA and Digital Rights Management (DRM) customers worldwide, the company recently acquired business support systems specialist IBS Interprit, set-top box (STB) solutions provider Idway and software and data centre security firm Cloakware.  The idea is to enable operators and broadcasters to launch and grow their digital TV businesses profitably and securely, and while each acquired product will continue to be sold individually under the Irdeto brand, they will also be sold bundled as an end-to-end solution called Irdeto SmartStart for digital TV operators.

"We believe that this approach gives us a lot of strength in being able to offer clients the option of an end-to-end solution," explains Christopher Schouten, Director Global Product Marketing with Irdeto.  "We've seen a real demand for this type of solution in the marketplace, particularly in developing economies such as India, where the SmartStart concept is very attractive to companies whose - often home-grown - legacy systems are beginning to buckle under the demands of providing multi-play services."

Conax is also building a solid customer base in many of the emerging economies, having established a presence in India, China and Brazil back in 2003.  Celebrating the five-year anniversary of its presence in India on June 27th of this year, the company announced that it had deployed a total of five million smart cards to the Indian market during that time. But Conax's philosophy is to concentrate on its security products - its core competency - as Geir Bjorndal, COO and Sales & Marketing Director explains.  "We are very focussed on our security products, and I believe that having that focus means we can be very reactive to market requirements."

Certainly, the company has a number of security laurels - if not to rest on, then at least to point to.  It developed one of the first pay-TV smart cards in the world in 1990, and by 2006 Conax CA was in operation in over 180 installations in more than 60 countries around the world.  The latest figures show an expansion into over 70 countries globally.
Bjorndal believes that broadcast solutions are still the ‘bread and butter' of CA offerings, noting: "These linear solutions are going to be needed for some time to come.  However," he adds, "developing technologies, and changes to infrastructure and distribution mean we have to stay awake!"   And to that end, he adds: "We are actively building relations both with new and existing customers wishing to upgrade to IPTV, and with major integration partners, to deliver total solutions."  Conax is also, according to Bjorndal, closely following the development within Mobile TV and is maintaining close contact with several partners offering solutions in the area.

Irdeto's Christopher Schouten agrees that changing markets require a sharp awareness of the need for different solutions. "Our focus on software-only solutions, for instance, is certainly increasing," he comments, "but it's really a matter of ‘horses for courses' - we need to ensure that the security that is provided for any environment is appropriate to that environment."

The software route has proved pretty successful for the comparatively ‘new kid on the block' Verimatrix, whose software-based content security offering has brought plaudits from the likes of the Multimedia Research Group (MRG), which ranked the company as a ‘global leader of IPTV content security' in its bi-annual IPTV Market Leader Report.  Stephen Christian, Verimatrix VP Marketing, notes: "The whole notion of security in pay-TV systems has been very limited in scope for a long while, and very firmly centred around the notion that it's the smart card that represents the secure capability.

"We're coming from the background of a different kind of distribution system - not cable and satellite, but IPTV.  And what we're seeing is that the general principles we've established for security in the IPTV world are actually going to be the norm for future distribution systems of all types - mobile video, satellite video, cable video and so forth.  Everything is heading towards IP technologies, and we need security regimes that are built on IP foundations."

For any CA system to succeed in the marketplace, the exceptionally powerful studios - providers of the all-important content - need to be comfortable with the solution.  Verimatrix has been careful to ensure that its brand is known to the studios.  "We've never had a pay-TV operator refused content because of our software-based security regime," Christian explains. "We're pro-active with the studios - making sure the relevant decision makers inside studios and broadcast companies are well aware of what we can bring to the party, and how we're able to protect their interests.  So licensing deals go as smoothly for us as for the legacy players."

Keeping the studios informed - and aware of your brand - is at least as important as the technical merits of any particular solution according to Christian.
He goes on to point out that there's no tougher testing environment for ensuring robust security than the Internet, where every solution must run a gauntlet of professional and underground hackers on a continuous basis.  "That's why software-based IP security technologies have emerged as the gold standard for securing everything from web-based banking and financial transactions to high-value video in broadband and IPTV service applications," he comments.  "Clearly, standards-based, high integrity security can be applied to media just as much as to, say, banking transactions.  All the dominos are in place to make this happen - chip sets in set-top boxes are that much more powerful; TCP/IP protocols are widely available; broadband access is increasingly pervasive.  All the constituent components are in place - let's take advantage of it, and make this leap forward."

NBC/Digital Rapids
NBC has selected Digital Rapids to provide media encoding, transcoding and streaming systems for the network's Internet coverage of the 2008 Olympic Games from Beijing. Digital Rapids' DRC-Stream encoding and streaming solutions will enable NBC Olympics' live and on-demand online coverage. Some 2,200 hours of video will be streamed live on the Internet at NBCOlympics.com, primarily encoded from video feeds into web-friendly streams through the DRC-Stream systems. Streams will be encoded in the VC-1 compression format for a viewing experience powered by Microsoft Silverlight technology. The encoded live streams will also be archived for viewers to watch on-demand. Digital Rapids Transcode Manager will be used to convert affiliate-provided content between compression and file formats for US domestic distribution.

"We're thrilled to continue our relationship with NBC by supplying our solutions for coverage of this year's paramount event, the Beijing Olympics," says Brick Eksten, President of Digital Rapids. "The nearly unlimited scope of Internet-based video lends itself perfectly to coverage of an event of this scale, and our solutions are renowned for bringing video to the web with exceptional quality and reliability. We're pleased that NBC has again placed their trust in our technology and expertise for their ground-breaking online coverage."

Rab Mukraj, Director of Digital Media Delivery at NBC Universal, adds: "Delivering an unparalleled online experience is a vital component of our unprecedented multi-platform coverage of the Beijing Olympics.  The Digital Rapids encoding systems will enable an outstanding viewing experience for our online audience through superior encoded video quality and robust reliability, while providing us the workflow efficiencies needed for coverage of this magnitude."

Digital Rapids' DRC-Stream encoding solutions combine powerful hardware for video and audio capture and pre-processing with the intuitive Stream software interface, delivering reliable, high-quality, multi-format media encoding and streaming for professional applications such as high-end Internet TV and IPTV. The advanced, hardware-based video processing features enable superior quality and the most efficient use of bandwidth in the compressed result. Digital Rapids Transcode Manager provides automated, distributed transcoding with centralized management and exceptional load balancing intelligence for high-volume, multi-format workflows, increasing production volume while reducing operational costs. Details: www.digital-rapids.com

The MTN Group is the leading provider of communication services in Africa and the Middle East with over 61m subscribers.
Innovation is paramount to MTN's brand values.  With today's consumers searching for new, exciting and interactive ways to communicate, MTN was quick to recognise the opportunity for brand differentiation by launching an enhanced mobile messaging service.
In December 2007, MTN deployed a new mobile IM (MIM) service in South Africa, called ‘noknok', powered by Colibria.
The market:

  • MTN is the second largest operator in South Africa, with a 36 per cent market share and 14.8m subscribers
  • Mobile phone penetration is over 80 per cent; Internet penetration, however, is only around 10 per cent - making this an ideal market for an enhanced mobile messaging service
  • The market is technically challenging as many of the mobile phones in circulation are not recent models
  • Third-party MIM services are already available, making this a well-educated yet highly competitive market
The service:
Compatible with a wide range of handsets at launch, noknok is a feature-rich MIM service that offers a truly community-based mobile experience.
Operator benefits:
  • A complementary revenue-generating service alongside voice and existing messaging technologies
  • An enabling technology that enhances the functionality and usability of existing applications and services
  • The technical infrastructure is modular, therefore new revenue-generating services and applications can easily be introduced
User benefits:
  • Simple to download and install ensuring the user experience is intuitive and compelling
  • Users can impulsively share experiences with friends and groups at the click of a button
  • Fully interoperable so users can chat with friends on either the MTN network or the Vodacom network in South Africa
  • Users can add anyone to their contact list. Those who don't have noknok on their mobile will receive messages as either an SMS or an MMS. This supports SMS continuity as advocated by the GSMA's PIM Initiative
  • Incorporates Presence, enabling users to see their contacts' availability, status picture, status text and mood details
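The SMS fall-back rule described above can be sketched as a simple capability check at delivery time; the function and field names are invented for illustration:

```python
def deliver(message, recipient, registry):
    """Route a message by the recipient's client capability.

    `registry` maps user names to capability records; unknown users
    are treated as having no MIM client.
    """
    if registry.get(recipient, {}).get("has_im_client"):
        return ("IM", message)
    # Recipient has no MIM client: degrade gracefully to SMS.
    return ("SMS", message)

registry = {
    "alice": {"has_im_client": True},
    "bob":   {"has_im_client": False},
}
print(deliver("hi", "alice", registry)[0])  # IM
print(deliver("hi", "bob", registry)[0])    # SMS
print(deliver("hi", "carol", registry)[0])  # SMS (unknown user)
```

The real noknok service would additionally choose between SMS and MMS and consult live Presence data, but the graceful-degradation principle is the same.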

Noknok launched with a range of clients, including a PC client, a WAP client and MIM clients for Symbian and Java handsets.  In addition, a Java ‘Lite' client has been produced, specifically targeted at basic and low-cost handsets.

Noknok is about much more than just delivering messages instantly - noknok is also about establishing identities and promoting personalities.  The next evolution of noknok will include content bots and non-P2P chat services to further grow and enhance the user experience. 
Details: www.colibria.com

Back in 2004 Turkcell realised that mobile messaging was due to become a mass-market, globally ubiquitous service, and recognised that the time was right to launch its own Mobile Instant Messaging (IM) solution. Turkcell turned to NeuStar for the technology needed to launch a service that was uniquely Turkcell's, and not just an extension of other fixed service providers' offerings.

Turkcell needed not only to make the most of this new market opportunity, but also to consider its existing customers and the potential threat the new offering might pose to its SMS revenues. The service had to demonstrate clear customer benefits, with functionality such as Presence at the heart of the user experience.

In February 2005, TurkcellMessenger was launched to both prepaid and postpaid customers. The service could be used through a mobile application downloaded to the handset, or via PC, web and WAP clients.

TurkcellMessenger allowed subscribers to communicate in context for the first time, with a Presence-enabled contacts list detailing their buddies' status (online/away/busy etc), and enabled them to control their own experience by setting their own status.
Two years into the service, TurkcellMessenger's chat rooms had become one of its most popular features, generating almost 50 per cent of total IM traffic.

"We launched Turkcell Messenger on a flat-rate monthly charge with a pay-per-message alternative and unlimited data usage for both, and used viral marketing techniques to promote the service," says Leylim Erenel, Product Manager. "Our chat rooms have become one of the most popular features, so we will look to extend these social network-type applications for our customers."

Turkcell enjoyed an increase in the ARPU of users who subscribed to TurkcellMessenger. More surprisingly, helping to debunk the myth that mobile IM cannibalises text usage, Turkcell also saw a rise in the SMS usage of IM users. A recent analysis carried out by Turkcell showed that the SMS traffic created by users who subscribed to TurkcellMessenger at the beginning of Q4 2006 actually increased by 5.8 per cent during that quarter.

On the back of this success, Turkcell extended the service this year and launched Turkcell Windows Live Messenger, allowing subscribers to log on to Windows Live Messenger on their mobile through a download application.  In the first three months of deployment 1.1 million subscribers signed up to the service, following an effective marketing campaign. The service produced record-breaking statistics with subscribers logging in 15 million times and exchanging 800 million messages in just three months.
Details: www.neustar.biz

When Kireeti Kompella and David Noguer Bau ask the service provider community about the future of transport networks, there is general agreement that the future is in Ethernet. So what are the wider implications of this position?

Driven by the reduced cost per bit, Ethernet is becoming the standard interface in the telecommunications industry; we can find Ethernet ports from DSLAMs to mobile base stations. At the same time, Ethernet VPNs are gaining popularity to provide connectivity between enterprise branches.
This change in the industry is driving the requirement for an efficient transport model. The limitations in extending Ethernet into the MAN and WAN are well known (scalability, resiliency, lack of OAM...), so its growing importance is pushing for optimized transport mechanisms:

  • T-MPLS: A ‘profile' of MPLS that meets transport requirements (only)
  • PBB-TE: Purpose: to make Ethernet carrier-grade (or transport-grade)
  • MSP: Multiservice Provisioning Platform that adds functionality to SDH nodes (an attempt to extend the life of SDH)
  • MPLS: A true Multiservice transport (IP + Ethernet + legacy)

However, before jumping to quick fixes for Ethernet limitations, let's look at a brief history of the transition from TDM-centric networks to packet-centric networks; hopefully, in doing so, we will gain better perspective on why things are the way they are, and what really needs to be changed.

A bit of history
Fine-grained Time Division Multiplexing (TDM) networks were designed primarily for voice services and adapted reasonably successfully for leased line circuits as data requirements became more important.
A decade ago, with the incipient demand for data services, the network was still able to accommodate them.
The transport requirements for TDM were clear, making SDH a magic layer providing the required features for data:

  • Frequency synchronization
  • Deep Channelization: down to DS0
  • Framing
  • Integrated OAM model
  • Redundancy with Fast Restoration (around 50ms)
  • Traffic Engineering for path and capacity management

So the combined SDH + DWDM model was emerging as a universal transport, common to all services and mainly voice- and circuit-centric. The transport department was in charge of providing the right requirements (bandwidth, resiliency, framing...) and all the services ran across the top. We'll define the separation between the two departments, Services and Transport, as ‘the Purple Line'.

This model is still implemented in most service provider organizations today; the point here, however, is to get a sense of the value of TDM networks, what the issues are, and how this should evolve given the growing dominance of Ethernet.

The next generation
The massive demand for best-effort Internet services, the migration of voice services to IP, the rapid replacement of leased lines by Ethernet and IP VPNs, and the growing importance of IPTV are all challenging this model. The requirements for the transport layer are new, and Ethernet appears well positioned.

This transition towards Ethernet consequently forces re-allocation of the missing functions: OAM, Traffic Engineering, Synchronization and Fast Restoration should move into the new ‘magic layer'. Today, the industry is struggling to find the best technology to fulfill the magic layer requirements, placing technologies such as T-MPLS, PBB-TE and MSP at the heart of the debate, all designed to complete and optimize the transport of Ethernet.

The ‘Purple Line' made sense 20 years ago, when several independent services rode over the transport network. The Purple Line drew a demarcation between ‘infrastructure' and ‘services'. A particular service failure would typically affect just that service while an outage in the infrastructure would affect all services. Keeping infrastructure separate enabled a very stable network, over which each service could be managed on its own.

Today, with the NGN (Next Generation Network) model, there is essentially just a single ‘traditional service' over transport, namely IP/MPLS. Replacing SDH with an enhanced Ethernet technology is not going to change it.  All the real services will still be sitting at a higher layer. Since IP/MPLS carries all the services, it must have the same stability and resilience as the ‘infrastructure' below the Purple Line. The natural consequence of this is that IP/MPLS must be part of the transport infrastructure, i.e., the Purple Line must be redrawn ....

Placing the line
Where should the new Purple Line be placed? In other words, is ‘IP/MPLS' really a service? Having a transport-specialized MPLS and keeping IP as part of the services would separate IP and MPLS into different departments, therefore negating the tight synergy between IP and MPLS.

The right model is having IP/MPLS as part of the transport side of the Purple Line and all the real applications and control services sitting on top of it. This model shows a good partition between infrastructure and services maintaining the synergy between MPLS and IP. Also note that we can now finally fill in the "magic layer": a thin layer of Ethernet (for framing) and G.709 (for optical OAM/FEC).

This model is the only way for networks to take a giant step forward and become packet-centric rather than optimized for TDM circuits. Keeping IP/MPLS separated from transport introduces inefficiencies and duplications as two different departments have to deal with the same issues: resiliency, traffic engineering, capacity. This integration will also help equipment vendors to find new synergies between IP/MPLS and optical transport.  As we begin the process of moving the Purple Line, a long list of opportunities for improving the overall network will arise.

Moving the Purple Line is not at all easy, as 20 years is a long time for habits and attitudes to take hold. This particular future has consequences for many groups: vendors, service providers, regulators, unions.  How quickly and effectively these groups respond to the challenge will determine how fast we can move to the new paradigm of packet-centric networks.

New platforms have to be built to meet the new requirements. New architectures and new management paradigms are needed to best use these new platforms. New regulations may be needed to say which platforms can be deployed, where and how. The labour force may need to be reorganized to address the new opportunities.

The Purple Line served a very useful purpose, but has become stagnant over time, and now finds itself out of place.  However, the idea of separating "services" and "infrastructure" is still valid and should be preserved. Redrawing the Purple Line must be the first priority in designing a packet-centric Next Generation Network in order to truly optimize it for cost and efficiency within the new communication paradigms (point-to-point, any-to-any, multicast ...) and this may be challenging for many.

In this new context, the way packet and optical switches are built, deployed and managed has to be rethought. The good news is the validation from both the packet and the transport worlds: IP control and data plane infrastructure is effective, robust, future-proof, service-enabling and scalable.

Leaders will define the future, followers will live in it.

Kireeti Kompella is Distinguished Engineer and Fellow at Juniper Networks, and David Noguer Bau is Head of Carrier Ethernet and Multiplay Marketing for EMEA at Juniper Networks

The 1990s brought us Business Process Re-engineering.  Now the talk is all about Business Transformation. Hugh Roberts contrasts and compares the approaches - and the results

‘BPR' (business process re-engineering) was the hot topic of the 1990s when it came to change management. This time around, we've moved from BPR to BPM, but our new hot button is business transformation. (Still so new, it doesn't yet have a proper abbreviation!) Ostensibly, there isn't much difference between the two approaches, but the closer one looks, the clearer the differences in business prioritisation and market drivers become.
The one thing that has remained the same, however, is the high failure rate of transformation projects, often accompanied by the regular repopulation of executives at board level. After all, someone has to carry the can for all of those apparently ‘unfit-for-purpose' systems...

Towards the end of the last century as software capabilities improved, technology was first and foremost positioned as a means of enabling automation - seen as a key element in cost management programmes aimed at downsizing personnel and overcoming the restrictive work practices endemic in formerly monopolistic incumbent telcos. As a consequence of reduced staff numbers, existing hierarchies crumbled in an almost fetishistic rush to delayer organisations and establish everyone still employed in the organisation from top to bottom as a ‘Process Owner'. Similarly, whilst lip service was paid to the establishment of customer-centricity at all levels of the business, the real focus of culture change was to identify means of managing and motivating staff in a working environment that had regressed from one of high stability to one of high volatility, and where job satisfaction and staff churn were moving in opposite and unhelpful directions.

One might think that the new entrant operators would have been protected from the worst ravages of BPR, but of course - as demanded by their shareholders - skilled and experienced labour was required, and where better to get it from than the large and now freely
available pool of ex-incumbent employees? To quote Brendan Logan, who heads up Patni's Telecommunications Consulting and Advisory practice: "It took a new entrant about three years to create the same levels of inefficiency and dysfunctionality in its operations environment that it used to take the PTTs eighty years to achieve."

In these new flatter organisations consisting of tens, hundreds, and in some cases thousands of Process Owners, the real problem was that very few people knew how the process that they owned actually fitted into the overall value generation mechanisms of the business, or how their processes related to those in other business units. However, they did at least know that they owned them. In the 21st century version of BPR - not least because of Sox 404, plus initiatives such as the eTOM - knowledge of process flows and inter-relationships has become significantly better. Unfortunately, the emerging convergence ecosystem has required us to maintain a much more fluid view of process ownership, so rather than declining, turf wars and inter-departmental politics are on the increase as we attempt to transform our organisations into lean, mean and agile enterprises.
Make no mistake; change is here to stay. Time-to-market constraints are typically no longer determined by technology development cycles (IMS and related notwithstanding!), so the strategic planning process must remain in a state of flux.

There are any number of obvious fiscal and housekeeping challenges in the finance domain raised by business transformation, amongst them the management of CapEx and OpEx, the sweating of legacy assets, the maintenance of good governance, corporate security and so on. One way or another, all of these are centred on risk management, which will undoubtedly supplant business transformation in the foreseeable future as the next ‘unifying' business focus of choice. In addressing these issues, the communications industry is probably no better or worse than most other industries. However, there are quite a number of ‘telco-specific' challenges posed by business transformation, most of which are a direct reflection of our uniquely intimate relationship with interconnected technologies and our relative lack of competitive and regulatory maturity on a global scale.
Here are four of the more insidious that we now need to face up to.

1. Recognising that ‘best practice' is, although useful to be aware of, an outmoded concept to use as a guiding principle. As the reality of globe-spanning operations and ownership bites, it is quite clear that local cultural, political, regulatory and socio-demographic factors will continue to maintain high levels of market diversity, and however mature we become as an industry this isn't going to change any time soon. Clearly, these factors must be respected. Although at the network layer and up into the bottom end of our OSS - anywhere, in fact, that functional activities could and should be totally transparent to the end user - standardisation is a given; in the BSS domain we will continue to waste an awful lot of money implementing applications and platforms that turn out not to be ‘fit for purpose' under local operating conditions. We have to acknowledge that ‘best fit' is going to be far more critical to our profitability and competitiveness, and that the determination of this may lead to conflicts with group directives and economies of scale.

2. Almost everything to do with ‘the customer'.
Telecoms must be the only industry on the planet that can't agree on a single and unifying definition of what a customer is. Quite apart from the competitive sensitivity of maintaining definitions that maximise our apparent market penetration levels, it remains common for many of the sub-systems within a single network operator's operating environment to maintain different customer data models. The challenge of developing and maintaining a single view of the customer is therefore quite daunting; never mind the challenges of doing so on an international basis or of extending the reach of telecoms into new market areas under the auspices of convergence. However much we believe we are being truly customer-centric... we're not. On a path well trodden by every other industry, CRM, CEM and BI are all steps in the right direction, but that's all they are: steps. We do, however, have some remarkable capabilities with the capture and management of high volumes of usage data. Once we learn to co-operate rather than compete with our ‘other customers' - the other players in the value chain - with regard to customer ownership, we may be able to fully leverage these skills to our advantage.

3. What to do about the information architecture.
Every business unit and function feeds off the central data backbone that we often (and somewhat erroneously) call ‘billing'. Somehow we need to find a way to take the politics out of the movement of data as we monetise the process of turning information into knowledge. Moreover, as the range and complexity of the services we offer increases, so does the number of relevant sources of knowledge about the customer's experience and perceptions of quality. We can no longer rely on the network to provide metrics that determine the value of the customer value proposition, nor can we rely on the traditional parameters we have used in the past to determine the value of the customers' attention and actions to our third party supply chain partners. We need to embrace the new methodologies entering the industry alongside the ‘X-factor' players - the network has the ability to generate knowledge but in the new generation it is certainly neither the owner nor the arbiter of ultimate (and bankable) truth.

4. Determining who your friends are.
Perhaps the greatest challenge of business transformation is that it doesn't lend itself to ‘projectisation'. Whilst elements of a transformation programme can be instigated and undertaken as projects, the reality is that transformation must be treated holistically if meaningful and lasting success is to be achieved. The impact must be felt on the systems, processes and skills deployed across the entire organisation. Unfortunately - and however much the pressures for rapid time-to-market wish to direct otherwise - the timescales required for the transformation of these three key elements of business operations are not synchronous. It is hopelessly unrealistic to expect new business processes and staff re-skilling to be in place at the point where platforms and applications have been upgraded, and vice versa. As a consequence, and even if the ‘plug and play' of COTS products were a credible reality, traditional RFP-based methodologies for the selection of products, vendors and integrators are almost certain to lead to failure. The selection of supply-side partners has now become the most critical of all transformation issues, and we have as yet no established framework in place for determining how to proceed. We need to learn how to ‘buy into' ongoing and flexible framework relationships with our suppliers for mutual - not exploitative - long term benefit.

In keeping with green operations, much of the slideware generated by BPR remains recyclable in the current climate of business transformation. But this doesn't mean that - even if it was meaningful the first time around - we should allow ourselves the luxury of feeling we've ‘been there before'. In the 90's we were competing for customer revenues by delivering a range of familiar services, albeit exploiting new technologies to deliver approximately the same services better, faster and cheaper, and we were competing between ourselves. At the moment, even the most basic of business questions remain open-ended: who we are competing with; what we are selling; who we are selling it to; and what it might be worth to them. The only answer that remains constant is the need for change, in response to change.

Hugh Roberts is Senior Strategist for Patni Telecoms Consulting and can be contacted via: hugh@hughroberts.com

Everyone agrees that backhaul is expensive, but is there an ideal one-size-fits-all solution, asks Lance Hiley

The headlines are clear for everyone to see: backhaul is one of the biggest issues and expenses facing mobile operators today. There isn't much consensus within the industry on what to do about it, but one thing that everyone does agree on is that the cost of backhaul represents 30 per cent of the capital and operational expenditure of the average operator each year. This could represent nearly $20 billion this year, and the figure has grown over the last few years as more data is consumed. Indeed, figures from Yankee Group indicate that transmission costs as part of operational expenditure (opex) in 2G networks can be as little as 10 to 20 per cent, but rise to 30 to 40 per cent in existing 3G networks. Global expenditure is predicted to reach $23 billion by 2013.

Carrying data is clearly expensive, and unless this cost is brought down, it will continue to increase, with the problem exacerbated as mobile networks are built and upgraded to support new mobile data services and standards such as HSPA, WiMAX and LTE. If operators are to roll out the next generation of data services and, importantly, realise significant profits, both opex and capex need to be reduced - doing ‘more of the same' is no longer an option.
To solve the backhaul issues facing operators around the world and equip them for the future, however, we need to recognise that the majority of operators will already have legacies of leased-line and point-to-point backhaul infrastructure in place. As such, we cannot simply recommend discarding the past and beginning with a clean slate. Particularly in Western Europe, with the predominance of point-to-point microwave links connecting cellular base stations, recommending that each link is replaced is simply not a realistic option for operators.

Achieving higher backhaul capacity is not just a matter of adding bandwidth; it also involves increasing the efficiency of traffic handling. As the industry evolves to a full packet environment, microwave must be able to support Ethernet IP protocols in addition to legacy 2/3G interfaces such as time division multiplexing (TDM)/Asynchronous Transfer Mode (ATM) and Synchronous Optical Networking (SONET)/Synchronous Digital Hierarchy (SDH). If we look briefly at the options available to operators for backhaul - leased line, point-to-point, point-to-multipoint - we begin to see that only two technologies effectively extend the benefits of IP/Ethernet principles to the network edge - which is where it needs to be to get the necessary bandwidth at reduced opex.









[Table: Yankee Group comparison of backhaul options (leased line, fibre and microwave) by deployment speed, cost and operator control. The original layout has not survived; legible fragments indicate that leased capacity is rented from a third party with distance-dependent cost, fibre is slow to deploy with costs that diminish over time, and microwave is fast to deploy, subject to civil works, with costs unrelated to link distance and users retaining total control.]
The table above is a useful comparison prepared by Yankee Group of the different backhaul technologies and approaches available. It is clear that there are several trade-offs to be understood when deciding on a backhaul strategy.

Leased lines and fibre tend to be seen as a panacea for the industry, but clearly there are disadvantages. Older leased-line technologies such as T1 and E1 cannot be dimensioned easily to cope with the unpredictable traffic demands of mobile data networks, and a network planner has to make quality-of-service decisions such as dimensioning a link for the peak or mean traffic coming from a particular cell site. The ratio of peak to mean traffic coming from a cell site can be as much as 10:1. Designing for the mean will result in customers being limited in performance (and experience) at busy periods. Designing for the peak will result in underutilised resources for most of the operating day - a waste of operating capital.
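The dimensioning trade-off can be made concrete with a little arithmetic. The sketch below is illustrative only: the traffic figures are hypothetical assumptions, with just the 10:1 peak-to-mean ratio taken from the discussion above.

```python
# Illustrative only: traffic figures are hypothetical, not operator data.
PEAK_TO_MEAN = 10          # peak-to-mean ratio cited for a typical cell site
mean_mbps = 5.0            # assumed mean traffic from one cell site (Mb/s)
peak_mbps = mean_mbps * PEAK_TO_MEAN

# Option 1: dimension the leased line for the mean.
# At busy periods the link is overloaded by the peak/mean ratio.
overload_factor = peak_mbps / mean_mbps     # offered traffic vs capacity

# Option 2: dimension the leased line for the peak.
# Average utilisation over the operating day is only mean/peak.
avg_utilisation = mean_mbps / peak_mbps     # fraction of capacity in use

print(f"Dimension for mean: {overload_factor:.0f}x overload at the busy peak")
print(f"Dimension for peak: {avg_utilisation:.0%} average utilisation")
```

Either way the operator loses: customers are throttled at busy hour, or capacity sits idle most of the day.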

Fibre reduces some of the issues of leased lines in that the acuteness of designing for the peak and mean does not present itself (unless an operator is paying for the fibre on a Mb/s basis) but the cost is higher and in many cases the wait longer. Fibre is well suited for very high traffic cells in a dense urban environment where fibre access is likely to be good. Outside of this environment, other options, such as microwave, seem to be a better choice.
There are two other factors to be considered when choosing between leased lines, fibre and microwave - cost and reliability. As the name implies, leased lines are an ongoing cost (lease) as well as an asset that an operator may not own or control. This is an important consideration when assessing the strategic aspects of building a backhaul network. Leased lines are great - provided that an operator trusts the independence and business model of their supplier. Leasing capacity from a competitor is always a risk - regardless of the strength of the local regulator - if there is one!

Reliability of leased lines is not a topic that we hear about often but it needs to be considered. Availability of lines in Europe tends to be very good - especially in markets where, by and large, they are buried underground. However in markets like North America, where many of the lines are still strung between poles, reliability can be an issue and data integrity may be compromised.

Globally, an increasing percentage of new backhaul investment is in microwave. The business case for microwave rests on ease of deployment, and greater range, performance and flexibility. With zero dependence on renting or leasing wired lines, both overall expenditure and running costs are reduced, and systems can be installed quickly and are not prone to cable cuts, increasing overall reliability. Already, according to Yankee Group, microwave globally represents 50 per cent of all backhaul, and outside of North America microwave penetration is more than 60 per cent.

There are two flavours of microwave: point-to-point microwave and point-to-multipoint microwave. Point-to-point (PTP) is best suited for longer-range links, rural areas and short, very high capacity links. PTP microwave links generally require a spectrum license for each link and are designed to provide a fixed-capacity bandwidth link. PTP microwave operates over a range of frequencies - some of which are affected by atmospheric conditions. To deal with this, they sometimes employ adaptive modulation to step down the capacity for a short period of time whilst an atmospheric event - like a snow storm - passes through the region where the microwave links are operating.
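Adaptive modulation is essentially a lookup from current link quality to the fastest scheme the link can sustain. The sketch below is a hypothetical illustration; the margin thresholds, modulation schemes and relative capacities are assumptions for demonstration, not figures from any vendor's product.

```python
# Hypothetical adaptive-modulation sketch: thresholds and schemes are
# illustrative assumptions, not taken from any real radio's specification.
def select_modulation(link_margin_db):
    """Pick the highest-rate scheme the current link margin supports."""
    # (minimum margin in dB, scheme, relative capacity)
    table = [
        (18, "256QAM", 8),
        (12, "64QAM", 6),
        (6,  "16QAM", 4),
        (0,  "QPSK",  2),
    ]
    for min_margin, scheme, capacity in table:
        if link_margin_db >= min_margin:
            return scheme, capacity
    return "link down", 0

clear_sky = select_modulation(20)   # normal conditions: full capacity
snow_fade = select_modulation(7)    # a fade eats the margin: stepped down
```

Here a snow storm that cuts the link margin from 20 dB to 7 dB halves the carried capacity until the fade passes, rather than dropping the link entirely.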

Because they are point-to-point links, and operate at a fixed frequency and capacity, PTP microwave links are very much like leased lines in that an operator has to design its network to provision for peak loads; as such, an operator may find itself spending capex on spectrum for links that are only operating at 10 per cent of their capacity most of the time. This is frequently called the ‘fat-pipe' approach to backhaul but clearly, it is a poor use of valuable spectrum resources.

Point-to-multipoint microwave uses a different architecture to address the backhaul issue. Rather than point-to-point links, point-to-multipoint backhaul architectures bring a number of cell site links back to a single aggregation point or hub. Immediately, this reduces the number of radios and antennae, making the network less expensive to build. Furthermore, because the spectrum for the system is licensed across a number of radios, the resource is in effect shared - making the utilisation of the spectrum more efficient. PMP microwave systems also lend themselves to a more IP-like approach to packet data management.
Beginning with an already impressive raw data rate of over 150Mbps gross throughput in a sector, PMP solutions utilise data optimisation and statistical multiplexing, together with advanced on-air bandwidth control and interference management, to provide an ‘efficiency gain factor' of up to 4x.
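The gain from sharing a sector comes largely from statistical multiplexing: because the cell sites behind one hub rarely peak at the same moment, the sector only needs to carry the peak of their combined traffic, not the sum of their individual peaks. The minimal simulation below uses an entirely hypothetical bursty traffic model to show the effect; the site count, traffic levels and burst probability are assumptions, not measurements.

```python
import random

random.seed(42)

SITES = 8          # cell sites sharing one PMP sector (assumed)
SAMPLES = 10_000   # traffic snapshots to simulate

# Hypothetical per-site traffic: mostly idle, occasionally at peak (Mb/s).
def site_traffic():
    return 10.0 if random.random() < 0.1 else 1.0

# PTP-style provisioning: every link sized for its own peak.
sum_of_peaks = SITES * 10.0

# Shared-sector provisioning: size for the worst combined load observed.
peak_of_sum = max(sum(site_traffic() for _ in range(SITES))
                  for _ in range(SAMPLES))

gain = sum_of_peaks / peak_of_sum
print(f"Statistical multiplexing gain ≈ {gain:.1f}x")
```

The size of the gain depends entirely on how bursty and how independent the per-site traffic is; the quoted figure of up to 4x also includes data optimisation and interference management, which this toy model ignores.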

The question is how to ensure a smooth migration to new backhaul networks that reduce costs and improve customer experience. Operators need to invest in a backhaul solution that takes into account the realities of their current network infrastructure as well as the vision of their future network. With its inherently traffic-neutral, flexible, innovative architecture, PMP microwave will in most cases fit the bill.

Not only is it easier to deploy than other backhaul technologies, it also offers increased capacity at a much lower ‘cost per bit', with cost estimates for a typical Western European operator suggesting savings of up to 44 per cent of capex and in excess of 58 per cent of opex compared with point-to-point links.

Clearly there is no ‘one size fits all' for every operator, but when weighed up against competing options, microwave - and in particular PMP microwave - offers a compelling argument based on the four main metrics which matter to operators: capacity, quality of service, capex and opex.

Lance Hiley is VP Market Strategy, Cambridge Broadband Networks, and can be contacted via tel: +44 1223 703000; e-mail: LHiley@cbnl.com


There's a stark dynamic framing the telecoms Operations Support Systems (OSS) market. Until recently networks were expensive, while the price tags for the OSS systems used to assure the services running across them were, by comparison, puny. Today that's all changed - not because OSS systems have become significantly more costly, but because network components are a fraction of the capital cost they were 15 years ago. The result is an apparent cost disparity that may be causing some operators to swallow hard and think about putting off their OSS investments, Thomas Sutter, CEO of Nexus Telecom, tells Ian Scales. That would be a huge mistake, he says, because next generation networks actually need more OSS handholding than their predecessors, not less

Naturally, Thomas has an interest. Nexus Telecom specializes in data collection, passive monitoring and network and service investigation systems and, while Nexus Telecom's own sales are still on a healthy upswing (the company is growing in double figures), he's growing increasingly alarmed at some of the questions and observations he's hearing back from the market. "There is a whole raft of issues that need exploring around the introduction of IP and what that can and can't do," he says. "And we need to understand those issues in the light of the fundamental dynamics of computer technology. I think what's happening in our little area of OSS is the same as what tends to happen right across the high technology field. As the underlying hardware becomes ten times more powerful and ten times as cheap, it changes the points of difference and value within competing product sets." If you go back and look at the PC market, says Thomas, as you got more powerful hardware, the computers became cheaper but more standard and the real value and product differentiation was, and still is, to be found in the software. "And if you look at the way the PC system itself has changed, you see that when microcomputers were still fairly primitive in the early 1980s all the processor power and memory tended to be dedicated to the actual application task  - you know, adding up figures in a spreadsheet, or shuffling words about in a word processor. But as PC power grew, the excess processing cycles were put to work at the real system bottleneck: the user interface. Today my instincts tell me that 90 per cent of the PC's energy is spent on generating the graphical user interface.  Well I think it's very similar in our field. In other words, the network infrastructure has become hugely more efficient and cost effective and that's enabled the industry to concentrate on the software. And the industry's equivalent of the user interface, from the telco point of view at least, is arguably the OSS. 
"You could even argue that the relative rise in the cost of OSS is a sign that the telecoms market as a whole is maturing." That makes sense, but if that's the case what are these other issues that make the transformation to IP and commodity network hardware so problematical from an OSS point of view?
"There's a big problem over perceptions and expectations. As the networks transform and we go to 'everything over IP', the scene starts to look different and people start to doubt whether the current or old concepts of service assurance are still valid. "So for example, people come to our booth and ask, 'Do you think passive probe monitoring is still needed?  Or even, is it still feasible?  Can it still do the job?' After all, as the number of interfaces decrease in this large but simplified network, if you plug into an interface you're not going to detect immediately any direct relationships between different network elements doing a telecom job like before, all you'll see is a huge IP pipe with one stream of IP packets including traffic from many different network elements and what good is that? "And following on from that perception, many customers hope that the new, big bandwidth networks are somehow self-healing and that they are in less danger of getting into trouble. Well they aren't.  If anything, while the topological architecture of the network is simplifying things (big IP pipes with everything running over them), the network's operating complexity is actually increasing." As Thomas explains, whenever a new technology comes along it seems in its initial phases to have solved all the problems associated with the last, but it's also inevitably created new inefficiencies. "If you take the concept of using IP as a transport layer for everything, then the single network element of the equation does have the effect of making the network simpler and more converged and cost effective. But the by-product of that is that the network elements tend to be highly specialized engines for passing through the data  - no single network element has to care about the network-wide service." So instead of a top-down, authoritarian hierarchy that controls network functions, you effectively end up with 'networking by committee'. 
And as anyone who has served on a committee knows, there is always a huge, time-consuming flow of information between committee members before anything gets decided.  So a 'flat' IP communications network requires an avalanche of communications in the form of signaling messages if all the distributed functions are to co-ordinate their activities. But does that really make a huge difference; just how much extra complexity is there? "Let's take LTE [Long Term Evolution], the next generation of wireless technology after 3G. On the surface it naturally looks simpler because everything goes over IP. But guess what? When you look under the bonnet at the signaling it's actually much more complicated for the voice application than anything we've had before. "We thought it had reached a remarkable level of complexity when GSM was introduced. Back then, to establish a call we needed about 11 or 12 standard signaling messages, which we thought was scary. Then, when we went into GPRS, the number of messages required to set up a session was close to 50.  When we went to 3G the number of messages for a handover increased to around 100 to set up a standard call. Now we run 3GPP Release 4 networks (over IP) where in certain cases you need several hundred signaling messages (standard circuit switching signaling protocol) to perform handovers or other functions; and these messages are flowing between many different logical network element types or different logical network functions. "So yes of course, when you plug in with passive monitoring you're probably looking at a single IP flow and it all looks very simple, but when you drill down and look at the actual signaling and try to work out who is talking to who, it becomes a nightmare. Maybe you want to try to draw a picture to show all this with arrows - well, it's going to be a very complex picture with hundreds of signaling messages flying about for every call established. 
"And if you think that sort of complexity isn't going to give you problems: one of my customers - before he had one of our solutions, I hasten to add - took three weeks using a protocol analyzer to compile a flow chart of signaling events across his network. You simply can't operate like that - literally. And by the way, keep in mind that even after GSM networks became very mature, all the major operators went into SS7 passive monitoring to finally get the last 20 per cent of network optimization and health keeping done. So if this was needed in the very mature environment of GSM, what is the reason for doubting it for less mature but far more complex new technologies?"
Underpinning a lot of the questions about OSS from operators is the cost disparity between the OSS and the network it serves, says Thomas. "Today our customers are buying new packet switched network infrastructure and to build a big network today you're probably talking about 10 to 20 million dollars. Ten or 15 years ago they were talking about 300 to 400 million, so in ten years the price of network infrastructure has come down by a huge amount while network capacity has actually risen. That's an extraordinary change. 
"But here's the big problem from our point of view. Ten years ago when you spent $200 million on the network you might spend $3 million on passive probe monitoring. Today it's $10 million on the network and $3 million on the passive probing solution. Today, also, the IP networks are being introduced into a hybrid, multiple-technology network environment, so during this transition the service assurance solution is getting even more complex. "So our customers are saying, ‘Hey! Today we have to pay a third of the entire network budget on service assurance and the management is asking me: what the hell's going on? How can it be that just to get some quality I need to invest a third of the money into service assurance?' "You can see why those sorts of conversations are at the root of all the doubts about whether they'll now need the OSS - they're asking: ‘why isn't there a magic vendor who can deliver me a self-healing network so that I don't have to spend all this money?'" Competitive pressures don't help either. "Today, time-to-market must be fast and done at low cost," says Thomas, "so if I'm a shareholder in a network equipment manufacturing company and they have the technology to do the job of delivering a communication service from one end to the other, I want them to go out to the market. I don't want them to say, 'OK, we now have the basic functionality but please don't make us go to the market; first can we build self-healing capabilities, or built-in service assurance functionality, or built-in end-to-end service monitoring systems - then go to the market?' This won't happen." The great thing about the 'simple' IP network was the way it has commoditized the underlying hardware costs, says Thomas. "As I've illustrated, the 'cost' of this simplicity is that the complexity has been moved on rather than eliminated - it now resides in the signaling chatter generated by the ad hoc 'committees' of elements formed to run the flat, non-hierarchical IP network. 
"From the network operator's point of view there's an expectation problem: the capital cost of the network itself is being vastly reduced, but that reduction isn't being mirrored by similar cost reductions in the support systems.  If anything, because of the increased complexity the costs of the support systems are going up. "And it's always been difficult to sell service assurance because it's not strictly quantitative. The guy investing in the network elements has an easy job getting the money - he tells the board if there's no network element there's no calls and there's no money. But with service assurance much more complicated qualitative arguments must be deployed. You've got to say, 'If we don't do this, the probability is that 'x' number of customers may be lost. And there is still no exact mathematical way to calculate what benefits you derive from a lot of OSS investment."
The problem, says Thomas, is as it's always been. That is, that building the cloud of network elements - the raw capability, if you like - is always the priority, and what you do about ensuring there's a way of fixing the network when something goes wrong is always secondary. "When you buy, you buy on functionality. And to be fair it's the same with us when we're developing our own products. We ask ourselves, what should we build first? Should we build new functionality for our product or should we concentrate on availability, stability, ease of installation and configuration? If I do too much of the second I'll have fewer features to sell and I'll lose the competitive battle. "The OSS guy within the operator's organization knows that there's still a big requirement for investment, but for the people in the layer above it's very difficult to decide - especially when they've been sold the dream of the less complex architecture. It's understandable that they ask: ‘why does it need all this investment in service assurance systems when it was supposed to be a complexity-buster?'" So on each new iteration of technology, even though they've been here before, service providers have a glimmer of hope that ‘this time' the technology will look after itself. We need to look back at our history within telecoms and take on board what actually happens.

What OSS technologies will be in demand by carriers in the coming year and beyond? Clarissa Jacobson believes that the way investors and industry experts answer this question influences decisions about venture capital investment and mergers and acquisitions

In this world of advancing technology where wrong bets on the future can result in major failures, telecoms executives and investors are constantly on the watch for what might be termed the "Next Big Thing."  Since our industry is so replete with acronyms, we shorten this to "NBT."  In this article we explore current thinking about NBTs via a survey we conducted of a number of top executives and venture capitalists who are intimately familiar with the telecom/OSS space.  We also review some recent merger and acquisition trends.
Both the survey and the M&A review point to a tidal shift in carrier requirements.  The emphasis during the last five to ten years has been on rolling out new services and adapting systems to IP architecture.  While this continues to be important, our study indicates that carriers are increasingly turning their focus to making existing systems more cost-efficient and more responsive to customers, and to doing so within a real-time environment.  Sophisticated OSS applications that elegantly address these needs are the OSS world's NBTs.
Respondents to the survey, asked about what was most likely to capture venture capital investment, cited several areas over and over again: business and network intelligence, customer self-care and product lifecycle management (PLM).  One respondent articulated it very clearly:  "Any technology that supports automation of customer processes and reduced administration will garner VC interest.  Carrier focus is definitely on operating expense reduction."

Companies that can demonstrate good return on investment and a lower overall total cost of operation for their solution set are ideal, according to Nick Stanley, VP Networks of Brilliant Cities, a designer, builder and operator of regional broadband telecom networks.
The survey results came from a wide range of OSS business owners, executives, board members, consultants, venture capitalists and carrier executives.  The majority were from North America and Europe, with a smattering from ANZO, Asia and Africa.

From 1999 to 2005 the major trend was adapting to the explosion of new services and deploying IP platforms.  Companies hustled to deliver the vast array of new products, and software that could guarantee seamless capabilities was all the buzz.  Fast forward to today: now that most carriers have next-gen service capabilities in place, their primary concern returns to managing costs and improving customer experience.  At the risk of oversimplifying what the future has to offer, applications that help increase ARPU, reduce churn and lower costs are where the deals will happen - and are happening.  Several big acquisitions in the past year confirm this.

In September 2007 Cisco Systems (NASDAQ:CSCO) bought web business intelligence and analytic reporting company Latigent, whose product distils call centre data into reports to improve customer service and analyse customer behaviour.   "By acquiring Latigent, Cisco is signalling a commitment to increase the value of customer investments in our customer interaction solutions, by providing appealing, robust and dynamic tools to enable increased visibility and efficiency, resulting in improved customer experiences," says Laurent Philonenko, Vice President and General Manager of the Customer Contact Business Unit, Cisco.

One of the biggest deals announced in 2007 and closed in 2008 was SAP's (NYSE:SAP) acquisition of Business Objects for 4.8 billion Euros.   Business Objects, a French company, makes software that helps companies analyse data to detect market trends.  SAP had been comfortable making smaller, targeted acquisitions, but in an effort to compete with Oracle - which has been aggressively acquiring business application companies over the past three years - SAP took the leap at the end of October with its decision to acquire Business Objects.  Oracle has spent more than $20 billion on companies offering software that manages human resources, supply chains and customer relations, and previously acquired SAP's competitor, Hyperion.
At practically the same time as the SAP activity, NetScout Systems (NASDAQ:NTCT) announced its intent to acquire data mining and network analysis company Network General for $213 million, closing the acquisition on January 14, 2008.  NetScout said the combined company would focus on reducing Mean Time to Resolution for enterprises, wireless providers and government agencies.  NetScout President and CEO Anil Singhal said: "Today, we are bringing together two established companies with complementary technologies to form a new, stronger organisation that will have the scale, technology and mindshare to meet some of the greatest challenges associated with virtualisation, convergence, SOA and highly distributed network-centric operations."   By integrating the two companies, NetScout expects to achieve numerous cost synergies and $30 million in expense reductions.

In December of 2007, Motricity announced the completion of its acquisition of InfoSpace's Mobile Services Business.   A provider of mobile content services and solutions, Motricity acquired the InfoSpace unit for $135 million.  It served to expand their customer base and offer a full range of services with an end-to-end platform. Ryan Wuerch, Chairman and CEO of Motricity says: "Perhaps the biggest differentiator of the combined company is that we offer unmatched insight into the mobile consumer.  This insight is invaluable for our partners."

Finally - but by no means the last of the business intelligence deals we expect to see in the coming year - was Nokia Siemens Networks' announcement that it will buy Apertio for 140 million Euros, a deal expected to close in May.  Apertio is a provider of mobile subscriber data platforms and applications.  Key to the acquisition for Nokia Siemens Networks is that Apertio will give it the added edge to help customers simplify their networks and manage their subscriber data.
Jurgen Walter, head of Nokia Siemens Networks, notes: "The race is on to deliver seamless and highly targeted services to end-users across various access devices and this requires a unified approach to subscriber data.  Enabling access to this information in real-time means you can profile subscribers and deliver new services and advertising appropriately." 
Paul Magelli, Apertio CEO puts into a nutshell exactly the reason business intelligence deals have been so prevalent: "With Internet services, communications services, and entertainment services now converging, operators must simplify their networks and focus on subscriber intelligence to stay competitive." 

One area that several of the survey respondents mentioned, but that did not show up in the merger and acquisition deals of recent months, was customer self-care.   No longer does this mean a simple web portal where customers can review a bill or get information.  The next generation of self-care automates the entire add/move/change workflow, from customer entry through to provisioning and activation.  Providing customers with the ability to help themselves is extremely beneficial to a carrier's business, as it reduces operational costs and improves customer experience.    With churn rates averaging 1-3 per cent monthly, and the typical carrier spending $300-$600 to gain one new customer, it is obvious why this is a hot topic.
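A rough, purely illustrative calculation shows why those figures alarm carriers. The churn rate, acquisition cost and subscriber base below are hypothetical values chosen from within the ranges quoted above:

```python
# Hypothetical figures within the ranges quoted above:
# 2% monthly churn, $450 to acquire each replacement customer.
monthly_churn = 0.02
acquisition_cost = 450.0
subscribers = 1_000_000

# Compounding monthly retention over twelve months gives the annual churn rate.
annual_retention = (1 - monthly_churn) ** 12
annual_churn = 1 - annual_retention          # ~21.5%

customers_lost = subscribers * annual_churn
replacement_spend = customers_lost * acquisition_cost

print(f"Annual churn: {annual_churn:.1%}")
print(f"Customers lost per year: {customers_lost:,.0f}")
print(f"Spend just to stand still: ${replacement_spend:,.0f}")
```

Even at the modest end of the quoted ranges, a million-subscriber carrier would spend the better part of $100 million a year simply replacing defectors - which is why self-care's combination of lower cost and better experience is so attractive.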

Lane Nordquist, President of Information Solutions, a subsidiary of diversified communications company HickoryTech Corporation, states: "Customer self-care through the web or mobile devices is becoming increasingly pervasive as customers and prospective customers take advantage of their ability to execute consumer choices without interference.  Any technology that seamlessly links customer self-care to automated provisioning of services should attract venture capital investment."

One can only deduce that the capital hasn't been put forth because the technology to make customer self-care seamless is not quite there yet.

The number one area cited by survey respondents can be lumped together under what today is loosely referred to as Product Lifecycle Management (PLM).  Conceptually, PLM allows a carrier to build, introduce and deliver services and consumer choices much faster.  This requires managing and coordinating many divergent systems and databases.  A true PLM system integrates network and business intelligence with back office functionality.
A few survey responses suggested that emerging WiMAX and mobile communities - and the OSS software for managing them - are an up-and-coming NBT.  With high-profile deals involving companies like Facebook and MySpace, some see this area as ripe for investment as online communities extend into the mobile and wireline environment.

Carriers face fickle customers with increasing demands and little patience.  Competition is cut-throat, and the carriers that are able to streamline costs while delivering a better customer experience are the companies that will succeed in the years to come.

Clarissa Jacobson is Director, Research and Administration with Peter A. Sokoloff & Co - an investment banking firm that specialises in mergers and acquisitions of companies in the Telecom and Security industries. She can be contacted via: cjacobson@sokoloffco.com

Dante Iacovoni discusses how operators can benefit from the lessons of Web 2.0 success

The phenomenal success of Web 2.0 companies is there for all to see. They have, in a relatively short space of time, gone from a new and emerging technology to a worldwide phenomenon, led by companies like MySpace, Facebook and Google.  The uptake and popularity of these companies and services has been based almost universally on personal, free-of-charge content, subsidised by the very effective gathering of advertising revenues.

The primary key to this success lies in the information that Web 2.0 companies hold on their consumers, and the way in which they use this information for their benefit.  Each company has a detailed level of insight into the behaviour of the people using its service, making it desirable to advertisers.  But what can the telco world learn from this?

Telco operators are, for the most part, used to basing their revenues on subscriptions and usage fees.  As a result, telco portfolios are often very similar and it can be difficult for consumers to differentiate between them. In today's environment, consumers switch service providers simply according to which operator is offering the best price for what is likely to be a very similar service. To go beyond this fleeting loyalty and really build a relationship with the customer, operators will first and foremost need to offer distinct and compelling services beyond the triple or even quad-play bundles that are becoming the norm in some markets.
Although triple and quad play have initially succeeded in reducing churn, telcos will soon find that they need to provide more distinctive services to maintain their customers' loyalty.
Competition amongst telcos ultimately comes down to who "owns" the consumer. To stand the test of this competition, operators will need to learn to understand their end users better, to incentivise them, and to generate loyalty that goes beyond "call minutes". This will enable them to differentiate themselves amidst such fierce competition and to gain a deeper understanding of the users they're servicing. Once they know what services their users want - considering not only their subscribers but each person in the household who uses the service - they will be able to identify the most effective ways to monetise them.
Distinct and compelling services will be the primary catalyst in acquiring and keeping customers. One of the strongest weapons in the fight to develop these services will be trust, which is essential for a user to provide the kind of information that is needed to create tailored and personalised services.  Operators have a distinct advantage in that they already occupy a position of trust with their users, but they have yet to leverage the full potential of these existing relationships and convert them to advertising revenues. In the meantime savvy Internet companies have used their insight into consumer behaviour to leverage ad revenues and grow - in some cases exponentially - as a result. Their distinct and compelling services are the key to their success and offering them free of charge has helped them to create a sizeable user base. Operators as yet have not begun to leverage this kind of opportunity - despite the fact that they are in a position of trust with their users and could do so with relative ease.

For operators who learn not to categorise users by network access there are even more advantages to be gained. They will be able to target advertising based on consolidated user behaviour and then reach the user with messages based not only on their interests but also on their location. The best advertising can also be that which users do not think of as advertising - take Google search as an example.

But the links between the two worlds can extend beyond simply learning from the 2.0 success stories. As an operator you are providing access not only to your own services but also to those existing in "the Internet cloud" - the likes of Joost, Facebook and so forth. Today these exist entirely independently of each other, but in the future there may be value in finding synergies between the two, and perhaps even striking agreements between an operator and the individual 2.0 companies. There are, for example, many opportunities for shared revenue; it is just a question of working out the right format.

The opportunities for telephony, triple- and quad-play will eventually be pushed to the limit. All consumers see is that they are getting essentially the same service they would get anywhere - nothing revolutionary or overly exciting. And they have a point: many of today's IPTV services are a simple carbon copy of cable.

The key now is to upscale the value of the broadband network and leverage the opportunities it offers. To do this, operators will need to build intelligent service nodes into the home that service-connect the end users. New functionality will need to be added to both set-top boxes (STBs) and home gateways (HGWs) to meet this demand: customer premises equipment (CPE) needs to be fully upgradable, providing new functionality in line with new service offerings as well as advances in technology and standards.

To date operators have been very technology focused, concentrating on improving the way they do business rather than re-evaluating their business models. There is a need for them to focus less on technology and more on services than they have before, and by doing so, to acquire revenue from more sources than they do today: call minutes alone are no longer enough.

One viable opportunity lies in third party cooperation and in helping third parties to develop their own applications. Some operators like France Telecom and BT are already starting to push this and will eventually open up a broader portfolio of services as a result.

There is a real opportunity for operators to capitalise on the lessons of the digital boom, but to do so successfully they will need to broaden their view of who their customers are and to understand that they are individuals with individual wants. It is about moving from Average Revenue Per User (ARPU) to Gross Revenue Per User (GRPU). By making this effort they will be able not only to enhance their own service portfolios, but also to sell on to advertisers who are interested in gaining that knowledge, creating a strong and sustainable revenue stream and, vitally, gaining a long-term foothold in the "ownership" of the customer.
We are on the threshold of changing times.  The future is about new services and applications as well as new business models. For operators, the possibilities to develop their business in new directions are huge, and if they can acquire revenues from advertisers, they will be able to offer new services at a lower price.

They have a chance to understand and build a relationship with the customer and through that to develop a power position; without this, they will have a clear risk of ending up as a bit pipe for 2.0 companies.  Get to know your consumer and you can create compelling services that you know will appeal to them; and once you've achieved that goal, you can leverage your knowledge to create valid and sustainable revenue streams. 

Dante Iacovoni is Marketing Director, Tilgin

The growth in demand for data services is great news for the industry in general, but it does change the dynamics of the market. Doug Dorrat outlines the implications for mobile operators in an environment dominated by the need for unique customer profiling

It's been talked about in markets around the world for a long time - the shift from a voice-focused to a data-focused telecommunications market - but there are only about 18 months left before traditionalist telcos hit a critical point.

Why? Because across the world voice revenues are predicted to drop so steeply that conventional voice services can no longer be the ‘bread and butter' revenue source that all operators have enjoyed.

And, there are clever CEOs of content-focused virtual operators such as Finland's Blyk that are changing the rules of the way both data and voice services are marketed and provided.
Blyk, for instance, is offering free airtime to the vital 16-24 year-old market segment, who are prepared to be advertised to - Blyk makes its revenue from the advertisers. The advertisers reach a highly targeted audience, and the kids are phoning, texting and connecting to the Internet from their mobile devices free of charge.

Everyone's happy - and the message to telcos could not be clearer: understand that your old market is disappearing and get to grips with your new market in fine detail.
The old market is gone because there is no longer a mass market to which you can supply bulk access and charge for the time customers use on your network.  As the Blyk example shows, the generation leaving school now expects to connect free of charge. And, because of push technologies, the mass market is turning into millions of individuals, each of whom wants access to connectivity in a unique way - and that has massive revenue-bearing potential for telcos.

But, to tap into that potential you need business intelligence (BI) - and you need it at the right level.

By the right level, I mean the kind of industrial strength BI solution that not only enables you to link appropriately to your audience individually, but also enables data service and content providers to link to you and your audience.

To achieve that you need to be able to collect, analyse, report on, and share terabytes of data - because you need to track the behaviour of each of your millions of customers in extreme detail. When do they use your services, what are their work demands on your network, what are their lifestyle preferences, what kind of advertisers are going to want to reach which of your customers? And so on.
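As a toy sketch of the kind of per-customer aggregation this implies - the record fields, service categories and thresholds here are all invented for illustration, and a real carrier would be processing terabytes of call and data records, not four tuples:

```python
from collections import defaultdict

# Hypothetical usage records: (customer_id, service, hour_of_day, megabytes)
records = [
    ("c1001", "video", 20, 120.0),
    ("c1001", "email", 9, 1.5),
    ("c1002", "video", 21, 300.0),
    ("c1001", "video", 21, 90.0),
]

# Build a simple behavioural profile per customer: total volume per
# service, plus the hours of day in which the customer is active.
profiles = defaultdict(lambda: {"usage_mb": defaultdict(float),
                                "active_hours": set()})
for cust, service, hour, mb in records:
    profiles[cust]["usage_mb"][service] += mb
    profiles[cust]["active_hours"].add(hour)

# A profile like this is what lets an advertiser target, say,
# heavy evening video users.
evening_video_users = [
    c for c, p in profiles.items()
    if p["usage_mb"]["video"] > 100 and any(h >= 18 for h in p["active_hours"])
]
print(evening_video_users)  # ['c1001', 'c1002']
```

The point is not the code but the granularity: every question in the paragraph above - when, what, for whom - becomes a query over profiles built this way.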

Take the example of someone flying into, say, Heathrow airport and wanting a taxi to take him to his meeting in town. Location-based services will automatically advertise taxi services to him, and the operator provides an accessible, easy-to-use application to order the taxi, send his chosen company his destination details, get a quote, and book and pay for the trip on the device. In-built GPS then guides the taxi directly to the customer.
The operator wins through advertising revenue, a share of the transaction and the fee charged to the taxi firms for hosting the application. With this model the operator makes significantly more than from a voice call ordering a taxi. How about that for boosting ARPU?  And if your organisation can't provide him with that capability, he'll go to one that can.
Which brings us to the question of churn and customer loyalty - both of which are dependent entirely on your ability to differentiate yourself in the market. Right now, using the old bulk network approach, there's no way you can differentiate yourself. The telecoms market is very nearly saturated.

Most of the people who want telecoms access have it in some form or another. Also, the tariffs for individual telco services are confusing for the average consumer, so you can't build loyalty by offering the cheapest service (if that is what the customer is looking for). Besides, you already know that you're running a very expensive infrastructure for a largely unprofitable group of users. Or do you?

Conventional telco wisdom has it that only a tiny pocket of users is profitable. In theory, they're the ones who use your network a lot to make lengthy calls. Conventional wisdom also says that, on average, each new user you take on needs a year to 18 months to become profitable once you consider all the elements of cost the customer individually incurs. So, in effect, you're funding your customers.
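That payback period is simple arithmetic. The figures below are hypothetical, but with an acquisition cost and a monthly margin in plausible ranges the "year to 18 months" follows directly:

```python
# Hypothetical per-customer economics (illustrative numbers only):
acquisition_cost = 300.0   # handset subsidy, commission, marketing
monthly_margin = 20.0      # revenue minus cost-to-serve, per month

# Months before the customer has 'repaid' their own acquisition.
payback_months = acquisition_cost / monthly_margin
print(f"Payback period: {payback_months:.0f} months")  # 15 months
```

Anything that shortens the payback (lower cost-to-serve) or lengthens the customer's tenure (lower churn) moves that break-even point - which is exactly where BI earns its keep.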

This problem has prompted operators to put Customer Value and Profitability at the very top of their strategic agendas as a means of maintaining competitiveness, maintaining loyalty and finding new ways of growing their businesses. In fact, Bain & Co recently found that an increase of one per cent in customer loyalty can improve profitability by 25 per cent.
That's not only a difficult way to make money; it's also a risky model considering the new competitive threats.

BI is now proving that the customers you used to think were unprofitable are actually the ones you need most. Consider a housewife who makes very few calls from her mobile device but receives a large number from her friends and family. In other words, she costs you very little but brings in significant revenue from other networks. She is one of thousands like her that you should be marketing to - in terms of content offerings.
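The mechanism behind that example can be made concrete. The rates and volumes below are invented for illustration, but the principle - incoming calls earn the home network an interconnect termination fee - is real:

```python
# Invented, illustrative figures for one low-usage customer per month.
outgoing_minutes = 30           # she rarely calls out
incoming_minutes = 400          # friends and family call her
retail_margin_per_min = 0.02    # margin on her own outgoing calls
termination_fee_per_min = 0.10  # what other networks pay to terminate calls
cost_to_serve = 5.0             # billing, support, network allocation

monthly_profit = (outgoing_minutes * retail_margin_per_min
                  + incoming_minutes * termination_fee_per_min
                  - cost_to_serve)
print(f"Monthly profit: {monthly_profit:.2f}")
```

Judged on outgoing usage alone she looks like a loss-maker; include the interconnect side and she is solidly profitable. Only customer-level BI that captures both directions of traffic reveals this.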

But unless you have BI, you can't know which of your customers are profitable and in which ways. Without effective customer-level BI, you're essentially running your business on a hunch.

Certainly, you're not going to be able to compete in what is an entirely new market that has absolutely nothing to do with paying for calls. Your shareholders should be very anxious - the market is not going to be tolerant of outdated services for more than about another 18 months.

The process of implementing the right kind of BI, however, is going to take you at least that long - if you start planning now. Remember, it's not just a question of installing the solution. You have to adjust your business to use the information it gives you. Information not just about your customers but about the cost of servicing and marketing to those customers. You're going to need to build a profitability model based on how you choose to differentiate yourself.

It takes time, but it's not particularly difficult. British Telecom, for instance, saw the writing on the wall in the late nineties and, this year, will make more money from providing services other than traditional telephony to consumers.

Doug Dorrat is Microstrategy Director Industry Services - Telecoms and Media, Europe, Middle East and Africa

Ajit Jaokar examines the synergies between Mobile Web 2.0 and IMS, defining the terms, and exploring how these two concepts complement each other

At first glance, Mobile Web 2.0 and IMS have no synergies. After all, they operate at different layers of the stack - Mobile Web 2.0 is at the Web/services layer, and IMS is a networking paradigm.

However, market forces have conspired to bring these two ideas together because many IMS services can be implemented on the web (often for free). In a nutshell, the telecoms industry cannot ignore the web. It must instead think of how it can add value to the web and identify elements that can be uniquely implemented in the core of the network (and not the edge).

Web 2.0 and Mobile Web 2.0
Since Mobile Web 2.0 extends the meme of Web 2.0, it is necessary to understand Web 2.0 before we explore Mobile Web 2.0. In spite of all the hype, the distinguishing characteristic of Web 2.0 lies in its use of the web as a platform. If we extend this idea to mobile devices, then at a minimum a Mobile Web 2.0 service must use the web as a backbone.
On first impressions, Mobile Web 2.0 is simple enough.  However, its implications are profound, as we shall see below.

The first implication is that the web is the dominant player, not telecoms. This is not a comforting thought for many in the telecoms industry. Yet we, as users, accept these ideas. Even young people today are spending more time on the web and less on mobile devices (for instance with applications such as Facebook). In addition, the web is 'free' - which immediately adds to suspicion on the telecoms side.
Secondly - in a Mobile Web 2.0 scenario, the device and the service become more important than the network itself. This is a natural by-product of the intelligence shifting to the edge of the network.

In addition, we have the 'deep blue sea problem'. If we end up capturing content from a phone and uploading it on the 'deep blue sea' of sites like Flickr - then the unique mobile advantage is lost (ie once the content is on the web, it can be treated as any other piece of content). Hence there is a need to consider the question of 'uniqueness of mobile' when it comes to interacting with the web.

It is against this backdrop that we explore IMS - that is, we are exploring what IMS can add to a service that can be uniquely performed by the network.

IMS brings IP (Internet Protocol) to the telecoms world. A complete definition of IMS is outside the scope of this article - however the Wikipedia entry on IMS gives a good introduction.  IP traditionally implies dumb pipes and smart nodes (aka net neutrality principles - all packets are created equal and intelligence shifts to the edge of the network). However, although IMS is IP based, it is philosophically opposite to the principles of net neutrality since it seeks to make the network intelligent.
On one hand, thinking of IMS applications is a bit like thinking of 3G applications: every application will be a 3G application, but in most cases the bearer does not matter. Consequently, if you flip this argument, an 'IMS application' needs to be an application that makes use of the (bearer) telecoms network itself.

So can such applications be possible?
In theory - yes.
In itself, making the network intelligent is not such a big issue. Consider delay-tolerant networks, which are used in military and space applications. There, all packets are not created equal, especially when operating in hostile environments.
The real question is - are all packets created commercially equal?
Hence the question spans more than the technical remit and is directly tied to business models. It can be reframed as: will people pay for applications with differential charging and differential QoS?

If such applications can be found, and if they add value uniquely from the network core, then they would be 'IMS' applications in the true sense of the word (otherwise they are likely to be implemented at the web/application layer itself, and are likely to be free).
Nor can the context within which IMS operates be ignored. The Internet and the web are dominant, and they offer alternatives to most IMS applications. The Internet and the web are global, and they are free. That does not help the case for IMS applications.
So, IMS applications must:
a) uniquely leverage the network;
b) be chargeable to the end user - for an operator, and let's face it, IMS is mainly driven by operators; and
c) take the Internet into account - an IMS application that competes head-on with a free Internet equivalent will not work.
One key observation: the web is global, while IMS is national at best - and in most cases sub-national in coverage (more than one operator within a country). Also, end-to-end IMS connectivity issues are still not solved, and that hampers many IMS applications.
IMS applications

Is there an example of an IMS application?
Consider the case of 'mobile multimedia Twitter'. Twitter is a popular microblogging service and, according to Wikipedia:
"Twitter is a free social networking and micro-blogging service that allows users to send ‘updates' (or ‘tweets'- text-based posts, up to 140 characters long) to the Twitter website, via short message service, instant messaging, or a third-party application such as Twitterrific. Updates are displayed on the user's profile page and instantly delivered to other users who have signed up to receive them. The sender can restrict delivery to those in his or her circle of friends (delivery to everyone is the default). Users can receive updates via the Twitter website, instant messaging, SMS, RSS, e-mail or through an application."
The idea of a media rich twitter is not new and, indeed, there are some services already in existence, and, of course, Twitter itself is already 'mobile' in the sense that you can get updates via SMS.

However, taking the idea of video Twitter to mobile devices would be a complex proposition, and would need optimisation of the network (hence an IMS application).
The idea of mobile video twitter could combine a number of different ideas - most of which we know already:
a) Twitter itself, ie short updates
b) Video
c) Maybe presence
d) Maybe location
e) Maybe push to talk
f) Client side optimisation

However, most importantly, it will need the mobile network to be optimised. Push to Talk (PTT) has been around for a long time, its biggest proponent being Nextel. However, PTT has not taken off in most places in the world, partly because it needs the network to be optimised - and in most places you end up delivering voice over a non-optimised GPRS network, which is not really feasible from a performance and user-experience standpoint, as we can see from the experience of Orange, which attempted to launch PTT back in 2004 without much success.

However, the networks themselves have come a long way since then, and indeed one of the most common questions we hear today is 'Where are the IMS applications?' - which translates to 'Where are the applications that can uniquely use the network?' The service will need client-side optimisation as well as network-side optimisation if it is to be truly useful and friction-free for the end user. From an end-user perspective, we can view it almost as 'video push to talk'. I have been sceptical of the idea of end-to-end (person-to-person) IMS, and I don't think person-to-person mobile video Twitter will work (yet). However, a web-terminated service can certainly work.

Interestingly, it is one of the very few services I have seen where an operator can have a competitive advantage over a similar web application, because the service needs both device-side and network-side optimisation.

Many IMS services can be implemented by Web 2.0 (often for free). However, as we have seen above - not all IMS services can be implemented by Web 2.0. To identify truly unique IMS services, it is necessary to leverage those tasks that can be uniquely performed by the network.

Ajit Jaokar is the founder and CEO of the publishing company futuretext. He believes in a pragmatic but open mobile data industry - a vision he fosters through his blog OpenGardens. Ajit is the co-author of the book 'Mobile Web 2.0'. He chairs Oxford University's Next Generation Mobile Applications Panel (ForumOxford) and conducts courses on Web 2.0 and User Generated Content at Oxford University. He is currently doing a PhD on Identity and Reputation systems at UCL in London

Telecoms providers challenged with the need to transform their technology to meet next generation service requirements are looking to IT benchmarking for the roadmap ahead, says Paul Michaels

A recent study by Ovum entitled IT Governance for Telcos reports that: "IT for the telecoms vertical is currently going through an exciting period of change as telecoms operators gear up for the long haul of business transformation - from a traditional vertically-integrated telco into a competitive service provider based on a next-generation, all-IP network."

Yes, the European communications industry has entered cyberspace and the future is advancing at warp velocity. Whatever else the next ten years may bring, one thing is certain: telecoms providers will continue to face fierce competition, especially from new players ‘born' in the Internet era, and will be forced to cope with unrelenting pressures to deliver services ever faster, better, cheaper.  According to Ovum: "Transformation into an ‘infotainment' or ICT company - to name just two examples - requires intelligent, responsive infrastructures and running costs that are more in line with today's competitive business environment."

To be among this new breed of telecoms provider, organisations need access to enabling technology that can drive next generation IP networks, content and value-added customer support services. However, technology itself is only part of the equation - to be fully optimised it must be supported by a progressive corporate culture.  To be in the vanguard of the next generation communications industry, organisations must be committed to reducing operational costs, making continual performance improvements and bringing to market new services along with best practice customer support.

Transforming current telecoms technology and operational support systems (OSS) is no trivial task, especially when faced with the need to juggle such opposing pressures as cost reduction on the one hand and investment in new services on the other. So where should one start? Any journey towards change must begin with a clear picture of one's current situation - a task that is rarely straightforward, particularly for large organisations burdened with multiple, often duplicated, legacy systems and broadly dispersed infrastructures. Yet without this initial clarity, many fundamental business decisions cannot be made.

Consider, for example, the question of whether it is more cost efficient to support, say, the customer billing service or the enterprise desktop environment through the in-house ICT department, or instead turn these applications over to an outsourcing provider. This issue can only be addressed effectively when management has a full set of detailed, current baseline data, such as costs and key performance indicators (KPIs), on all relevant IT components and OSS methodologies. Without this type of granular metric, it is very difficult for management to evaluate trends over time in cost management and/or performance levels. And it is virtually impossible to make an accurate 'apples-to-apples' comparison between in-house and outsourcing costs.
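The point about granular metrics can be made concrete with a small sketch (illustrative only - the figures, the 'cost per supported desktop' unit and the function names are invented, not drawn from the article): an 'apples-to-apples' comparison works only when both sourcing options are expressed in the same unit.

```python
def unit_cost(total_annual_cost, units_supported):
    """Fully loaded annual cost per unit (e.g. per supported desktop)."""
    return total_annual_cost / units_supported

# Hypothetical baseline data for the enterprise desktop environment.
in_house = unit_cost(total_annual_cost=4_200_000, units_supported=6_000)    # 700.0
outsourced = unit_cost(total_annual_cost=3_900_000, units_supported=6_000)  # 650.0

saving_pct = (in_house - outsourced) / in_house * 100
print(f"in-house: {in_house:.0f}/desktop, outsourced: {outsourced:.0f}/desktop, "
      f"saving: {saving_pct:.1f}%")
```

The comparison is only as trustworthy as the 'fully loaded' numerator: if the in-house figure omits, say, floor space or management overhead while the provider's quote includes them, the two unit costs are no longer comparable.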

Because of this increased appetite for business information, benchmarking - both in the back office and at the customer-facing end of the operation - has become an increasingly popular way to achieve best practice and thereby win competitive edge. Whether it is analysing the cost, quality and performance measurements of IP networking infrastructures, client-server and help desk support, or making cost-vs-quality comparisons between supporting fixed line, 3G and global m-banking services, benchmarking parameters are potentially vast.

Benchmarking provides the analytical data upon which management and business consultants base their advice. Its aim is to measure an organisation's own operational methodologies, pricing structures, service levels, technology infrastructures and customer service levels, and compare these both with the competition (peer group) and against best practice within the industry as a whole. Whether analysing IT, service quality or any other element of the business process, benchmarking has in the past been viewed as a somewhat mundane back-office activity. These days it is coming into the boardroom as telecoms leaders realise that without these metrics, it is hard to see where they stand in a fast-moving industry, or what they must do to stay ahead of the curve.
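A minimal sketch of the peer-group comparison described above (all figures and the metric itself are invented for illustration): positioning one operator's KPI within a set of comparable peer observations.

```python
def percentile_rank(value, peers):
    """Share of peer observations at or below `value`, as a 0-100 score."""
    at_or_below = sum(1 for p in peers if p <= value)
    return 100.0 * at_or_below / len(peers)

# Hypothetical cost per help-desk ticket across a peer group (lower is better).
peer_costs = [18.0, 21.5, 19.2, 24.0, 17.5, 22.8, 20.1]
our_cost = 19.2

rank = percentile_rank(our_cost, peer_costs)
print(f"Our cost sits at roughly the {rank:.0f}th percentile of the peer group")
```

For a cost metric a low percentile is good news; for a quality metric the reading inverts, which is one reason benchmark reports state the direction of each KPI explicitly.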

Generating a set of cost and performance metrics that provides the launching platform for transformational change is not always easily achieved from inside the organisation, for several reasons. Stakeholders do not always feel incentivised to upset the status quo. And even where there is enterprise-wide buy-in (as in the majority of cases), it can still be difficult to achieve the objectivity needed to assess one's own strengths and weaknesses, or to obtain a 360-degree view of the operation. It is like the blind men attempting to describe an elephant, each one focused on a different part of its body: to one it is a tree trunk, to another a sail flapping in the wind, to a third a swinging rope; not one of them is able to perceive the complete entity. In the same way, a company looking to benchmark itself may see the wisdom of employing outside help to gain an impartial view of its situation.

It is frequently easier for an external consultant to sidestep a company's internal politics and enlist staff participation. Perhaps most importantly, because consultants tend to work with many companies in the same industry, they offer a unique level of access to comparative peer group metrics on cost, productivity, service quality, and so on. These specialists also have 'insider' data on service pricing for local, near-shore and off-shore outsourcing providers, and a wealth of other independent market information resources. They often also act as intermediaries in the negotiation of service provider contracts, helping to clarify the deliverables and make cost structures more transparent.

Many established organisations - and telecoms providers are no exception - suffer from an accretion of legacy hardware, applications, databases, desktop and network systems glued together with complex links that need ever greater levels of maintenance to function. This situation is further compounded by a variety of disjointed workflow methodologies that impede a company's end-to-end efficiency. 

Identifying and benchmarking those load points in the system that are causing higher than necessary costs and reducing performance can result in significant savings, and lead to streamlined workflows that mean a more nimble service to customers. This is equally true whether a service is run in-house or outsourced. Often external provider costs are inflated because they are forced to support clients' overly complex systems. These costs are then passed on to the customer, often without the causes for the surcharge being made clear. This alone can account for much misunderstanding between clients and their outsourcing partners.

Organisations with an eye to transformational change are beginning to take a broad-based view of benchmarking. Instead of viewing benchmarking as a one-off, crisis-driven expense, it is increasingly being implemented as a strategic tool for generating key business intelligence data. 

In this broader role, benchmarking moves beyond cost-only considerations to examine, among other things, the balance between a technology or a service's cost and quality, or cost and performance. As anyone knows from the high street, the lowest cost does not necessarily represent the best value. The value of a particular system, whether it is a sales or finance system or a corporate tool such as e-mail, is arrived at by looking at the balance between running cost and service quality, complexity and productivity. Being intelligent and responsive to the future, and leveraging the disruptive technologies that are driving change, depends upon access to good business metrics. The more forensic organisations become, introducing IT cost and KPI measurement as part of their 'good housekeeping' procedures and regularly comparing the quality of their service levels against best practice models, the better placed the telcos of tomorrow will be to 'benchmark their way to success'.
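One simple way to express the cost-vs-quality balance the article describes is a weighted score over normalised metrics. This is purely an illustrative sketch, not a method from the article: the weights, figures and platform names are invented, and real benchmark models weigh many more dimensions (complexity, productivity and so on).

```python
def value_score(cost, quality, max_cost, weights=(0.4, 0.6)):
    """Higher is better. `quality` is 0..1; cost is normalised against
    the most expensive option considered. Weights are an assumption."""
    cheapness = 1.0 - cost / max_cost  # 1.0 = free, 0.0 = most expensive
    w_cost, w_quality = weights
    return w_cost * cheapness + w_quality * quality

# Two hypothetical e-mail platforms: cheap but flaky vs pricier but reliable.
budget = value_score(cost=100_000, quality=0.70, max_cost=200_000)   # 0.62
premium = value_score(cost=160_000, quality=0.95, max_cost=200_000)  # 0.65
```

With quality weighted more heavily than cost, the dearer option wins here, which is precisely the high-street point: lowest cost is not automatically best value, and the chosen weights are themselves a management judgement the benchmark should make explicit.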

Paul Michaels is Director of Consulting at Metri Measurement Consulting, and can be contacted via paul.michaels@metri-mc.com


