Telcos ignore data protection at their peril, says Lynd Morley


In the wake of the recent fiasco of a UK Government department managing to lose the personal details of 25 million individuals - described as the country's worst ever data protection breach - it is worth noting that there have, for some time now, been many warning voices about the fragile state of data protection and the risks attendant on data loss, not least of which is the massive potential for identity theft and fraud.
But the subjects of data protection, security, privacy and identity management just don't stand a chance of being "sexy" in the fast moving, competitive, high-octane industry that telecoms now believes itself to be. I was astonished, at a recent conference, when discussing the use of personal information gathered about network users for marketing purposes, to be told by a pretty senior telecoms player that "customers don't really care about what we do with their data".

Yet there are those, within the industry, who are clearly concerned - and have been for some time.  Witness, for instance, a report earlier this year from law firm Linklaters' Technology, Media & Telecommunications Group.  Entitled Data Protected, it stresses that in an EU market of half a billion people, it is increasingly important that businesses address compliance with data protection legislation in a systematic way.

"There is a risk that any such compliance programme will take its impetus from the more exotic and media-friendly issues such as the passenger name records spat between the EU and US, and the dispute over the disclosure of banking payments to the US Department of the Treasury," notes Christopher Millard, Partner at Linklaters.  "However, in reality, it is no more likely that the EU's 27 national data protection regulators will make any serious attempt to close down the global banking system than it was that they would try to stop planes flying across the Atlantic and, although ad hoc enforcement action by individual regulators can't be ruled out, the only practical solution is for a deal at an inter-governmental level."

Millard goes on to comment that, with the above in mind, organisations should concentrate on the issues they can control and really should be doing something about - if they aren't already buttoned down.  Among his suggestions for attention is the need to get people to take information security seriously.  Millard explains that, by forcing organisations to send warning notices to individuals who might be at risk of identity theft following security breaches, US State legislators appear to have stolen a march on the EU, with its often bureaucratic approach to regulation.  The European Commission has consulted on whether breach notification rules should be introduced in the EU, starting with telcos and ISPs.

Millard further suggests that priority might be placed on "stopping people doing stupid things with e-mail."  He notes that despite all the publicity surrounding ‘smoking gun' e-mails, many organisations still seem to have a cavalier attitude to e-mail and, worse still, instant messaging.  Many don't bother to deploy appropriate policies, training, software tools or disciplinary procedures.

However, judging by the furore following the UK Government's lapse, there are signs that enterprises and their customers are becoming more aware of the issues, and more concerned about privacy and identity theft.  The Enterprise Privacy Group, for example, has noted growing interest in the concept of ‘Information Brokers' to help users control their personal data.  As customers do gain understanding of the issues, it would be an unwise company that didn't address privacy and identity concerns.


Mobile TV is engendering strongly held - and opposing - positions on the chances of its successful adoption.  Dr Windsor Holden examines the detail behind the posturing

Much of the literature that has been written about mobile TV falls roughly into one of two camps. In the blue corner are the evangelists for whom it is the killer app to which the populace will subscribe in their millions; weaving and ducking in the red corner are the nay-sayers, amongst whom the general consensus appears to be that they would not touch mobile TV, no Sirree, not even with a barge-pole, and anyone that does will be turned into a pillar of salt, or at the least see their enterprise go belly-up due to lack of customers.
Now then. Mobile TV clearly represents a tremendous opportunity for the various participants in the value chain. People will pay a premium for mobile TV. So far so good. Now for the caveats. It will generate significant revenues, if... (You should sit back and make yourself comfortable at this point, for there are quite a few "ifs".) If it is properly marketed; if prices of handsets and services are not prohibitive; if operators are allowed a free hand to pick their standard of choice; if it is delivered via a cost-effective but robust solution; if the service can be received nationwide; if the service quality is acceptable at high speed; if service providers can obtain the mobile rights to key content; if prime spectrum is made available; if chipsets are fitted in mass-market handsets. And so on.
Read on, and we'll go through a few of those "ifs" in a little more detail...

Regulatory constraints
One of the most pressing concerns for adherents of the various standards is that regulatory bodies will mandate one of their competitors, effectively ending their interest in a given market. Chinese regulator SARFT moved down this path in 2006 when it mandated STiMi as the de jure standard in the country, but a similar (if not yet so emphatic) stance has been taken by the European Commission (EC). The Commission report, Strengthening the Internal Market for Mobile TV, argued that DVB-H should be introduced as a common standard for the following reasons:

  • DVB-H is the most widely-used standard in Europe and is also becoming popular worldwide; and,
  • DVB-H is fully backwards compatible with DVB-T, a core advantage as DVB-T is used for terrestrial digital transmission everywhere in Europe, and network operators have experience in building and operating DVB-T networks.

While the EC's report makes some valid points, most notably over a single standard delivering economies of scale, the effective imposition of a standard both flies in the face of policy elsewhere (the EC calls, for example, for a "level playing field" in terms of content obligations for mobile TV) and hamstrings players who feel that MediaFLO or another standard might be better suited to their particular circumstances.

Spectrum availability
The availability of spectrum is a key factor in determining the success of mobile TV. This is especially true in the case of DVB-H, which plans to operate in the UHF spectrum band currently occupied (depending on the country/region) by analogue TV.  However, the availability of UHF spectrum varies widely across Europe: this is partly a reflection of the relative popularity of analogue terrestrial services in the member states. Thus, in the Netherlands and Luxembourg, for example, where less than two per cent of customers used terrestrial TV services, relatively little disruption was caused to consumers when analogue signals were switched off. By contrast, the UK has traditionally relied on terrestrial broadcast TV services, with comparatively low adoption of cable, and even though the country has the highest level of digitalization in the EU - 80.5 per cent of households were digital at the end of March 2007 - that still left more than 4.6m households dependent upon analogue terrestrial transmissions for their television.

A consultation paper released by Ofcom in December 2006 on the Digital Dividend Review (DDR) suggested that L-Band spectrum (scheduled to be auctioned in the UK by the end of 2007) might be suitable for mobile TV. However, UHF spectrum is perceived as ideal for DVB-H because signals can travel comparatively long distances (thus minimising the required number of repeaters), and can penetrate walls without noticeable degradation. By contrast, signals in the L-Band (1452-1492MHz) travel shorter distances (more repeaters, therefore a more expensive network) and are less effective at penetrating buildings. Furthermore, if the UK unilaterally decided to utilise L-Band spectrum, this would immediately put it out of kilter with neighbouring countries committed to using UHF spectrum for DVB-H. Thus, in the UK at least, there appears to be an impasse: Ofcom will not release the optimal spectrum; L-Band is perceived as too expensive.
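The propagation argument can be made concrete with the standard free-space path-loss formula. The sketch below is purely illustrative: the 700MHz and 1470MHz carriers are assumed as representative UHF and L-Band frequencies, and real network planning would also have to model building penetration and terrain.

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB for a distance in km and frequency in MHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Representative carriers (assumed): ~700 MHz for UHF, ~1470 MHz for L-Band
for f_mhz in (700, 1470):
    print(f"{f_mhz} MHz over 10 km: {fspl_db(10, f_mhz):.1f} dB")

# The L-Band signal arrives roughly 6.4 dB weaker over the same distance,
# so each transmitter covers a smaller radius - hence more repeater sites.
```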

Network costs
The start-up costs for a mobile TV network are not insignificant, regardless of the broadcast standard employed. In the first instance, operators may be obliged to pay a fee for the spectrum over which the service will operate. Secondly, there are the network infrastructure (construction and implementation) costs. Thirdly, the backhaul costs of distributing the mobile TV content to the broadcast stations will be substantial. The cost of spectrum will vary significantly by market and by the relative desirability of the spectrum in question. In the US, Crown Castle acquired nationwide spectrum in the L-Band for $13m; the UHF (channel 55) spectrum acquired by MediaFLO cost $38m. In the UK, Arqiva recently acquired four lots of 4MHz spectrum at 400MHz for a total of £1.5m ($3.0m).
However, these costs are minimal when compared with the network rollout costs. The La3 (Hutchison Italia) DVB-H network cost around $280m, while Mediaset's DVB-H network cost around $320m. The MediaFLO network in the US was estimated to cost in the region of $800m, while Modeo would have been obliged to spend in excess of $1bn. Costs also vary depending upon the spectrum utilized: as noted above, the different propagation attributes of the spectrum bands mean that L-Band services require a greater number of terrestrial repeaters.  Even for relatively small countries, certain topographical characteristics can ramp up costs.

The sheer scale of these rollout costs makes it imperative for operators to share infrastructure. Let us assume a single network rollout in a medium-sized country (e.g. France), costing around $400m (DVB-H UHF spectrum) or $600m (L-Band). In either case, it will take an operator five years until cumulative service revenues exceed these rollout costs alone (regardless of other expenses such as backhaul costs, programming costs, etc). The construction of two sets of competing infrastructure would therefore be uneconomic, suggesting that the best model would be for competing service operators to share network infrastructure to minimize costs.
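To see where a five-year figure of that kind comes from, a back-of-envelope payback model helps. In the sketch below, only the $400m and $600m rollout costs come from the text; the subscriber ramp (linear growth to two million subscribers over six years) and the $12 monthly ARPU are hypothetical assumptions chosen purely for illustration.

```python
def months_to_payback(capex_usd: float, arpu_usd: float, subs_by_month: list) -> int:
    """Month in which cumulative service revenue first exceeds rollout capex
    (ignoring backhaul, programming and other operating costs)."""
    cumulative = 0.0
    for month, subs in enumerate(subs_by_month, start=1):
        cumulative += subs * arpu_usd
        if cumulative >= capex_usd:
            return month
    raise ValueError("not paid back within the modelled horizon")

# Hypothetical ramp: linear growth to 2m subscribers over six years
ramp = [2_000_000 * m / 72 for m in range(1, 73)]
print(months_to_payback(400e6, 12.0, ramp))  # UHF build    -> ~49 months
print(months_to_payback(600e6, 12.0, ramp))  # L-Band build -> ~60 months (5 years)
```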

Network coverage
Clearly the costs outlined above are dependent upon the breadth and intensity of network coverage involved. One of the key factors that can ramp up the cost is providing an acceptable quality of in-building coverage. This has been an issue for most cellular networks and, with higher 3G frequencies and greater degradation of the signal through thick walls and glass, an increasing number of cellular operators and building owners are resorting to in-building solutions such as distributed antenna systems, pico cells and repeaters. The same issue of indoor coverage applies to broadcast mobile TV technologies like DVB-H, DAB/DMB, ISDB-T and MediaFLO. Although it is too early to say how severe the problem will be, or which technology will eventually face an indoor coverage issue, we believe that the success of broadcast mobile TV is very much dependent on the user experience. As the various trials of mobile TV have illustrated, the majority of mobile TV viewing is expected to happen at the office, at home or while in a bus or train. This means that the user will at most times be inside an enclosed space (whether stationary or at speed), which makes indoor coverage very important from a network design point of view.

However, the comparative analysis of mobile TV standards is very much a game of claim and counter-claim, with the adherents of each particular standard citing research that purports to show that their technology is superior to that of the competition. Regardless of the veracity of these various claims, the fact remains that mobile TV will not achieve its full potential unless the selected technology offers an acceptable audio and video quality over the majority of a country's territory and to subscribers watching its service within buildings and moving vehicles. Ultimately, customers are paying for the universal availability of a mobile service; the premium they will pay is to receive that service anywhere, anytime. This is a problem that has yet to be fully addressed by 3G networks, where streamed services are regularly interrupted when users move out of areas with good coverage. It is an issue that must be addressed at the outset when establishing a dedicated mobile TV network. To put it another way: customers of a DTH or cable pay TV service would not renew their subscriptions if such services regularly and repeatedly broke down.

Service pricing
Key to the mass adoption of mobile TV is finding an acceptable price point for services: too high and users will be disinclined to subscribe; too low and service providers run the risk of failing to extract maximum value from their service. In January 2007, the Mobile DTV Alliance released a white paper claiming that:
"When prospective subscribers are asked if they are willing to pay $20/month for the service, before experiencing the service first-hand, only 10 per cent on average respond positively. However, once holding phones in their hands with real, live, high-quality broadcast services available, this number should change, and more than 50 per cent will be willing to pay for the service, as worldwide commercial trials (Italy, Finland, UK) have shown. What will the mobile broadcast TV revenue of US operators be if 25 per cent of their subscribers are willing to pay $20/month? With close to 200m US subscribers, the annual revenue will be $12bn. Even if we assume that 50 per cent of that revenue will go directly to the content owners, and the entire investment in infrastructure (estimated at anywhere between $500m and $2bn) is to be amortized over one year - there is plenty of profit to share."

This argument is flawed because it places too much faith in comparatively small survey groups and because, in reality, adoption of such technologies is almost always much lower when the reality (I must now pay $20) takes over from the hypothetical (I will pay $20 in the future). It is true that, in Italy, a monthly subscription to the service retails at €19 ($24.77), which on the surface would suggest that the Mobile DTV Alliance case holds water. However, this price also includes 1GB of bundled data, and - most pertinently - more customers are tempted by offers which bundle mobile TV with low-cost telephony, or else (having already made the investment to buy a broadcast TV-enabled set in the first instance) are purchasing three-month subscriptions, at which point the effective monthly spend falls to €9.67 ($12.61). Bearing in mind that there are already some casual (daily/weekly) users of the service in the market, and taking 3 Italia's own figures, the effective spend attributable to broadcast mobile TV is around €8.30 ($10.80) per user per month.

Certainly, within Western Europe at least, customers seem unlikely to baulk at a monthly subscription of €8-10 ($10.43-13.04), provided that the service offers good coverage and service quality. Furthermore, operators seeking to retain customers could emulate 3 Italia and bundle mobile TV in as part of a wider content/data bundle, although one caveat is that in so doing they should ensure that they do not undervalue the mobile TV element of the bundle (particularly given that in most cases they will have been required to invest significant capital in any mobile TV venture).

Given the high cost of handsets, it is likely that most initial customers will be postpaid, with operators in a position to offer further incentives for customers to renew subscriptions by offering the high-end handsets either free of charge or (more likely in the first instance) at a significant discount as part of the subscription package. In addition, operators should make mobile TV available on a one-off basis to encourage revenues from casual users of their postpaid services, who might just want to watch occasional sporting events or might be tempted to "dip their toes in the water" with one or two viewings before signing up to a monthly subscription. In such cases it makes sense to offer a varied tariff, with certain programmes (i.e. the aforementioned sporting events) priced at a premium to news and soap operas.

Conversely, for mobile prepaid users, it makes sense for operators to seek to emulate postpaid subscriptions by offering subscription-type tariffs, with customers paying in advance (a la 3 Italia) for access to the service for 24 hours, a week, or a month.
One additional factor to bear in mind is that for simulcasts of pay TV services, the premium charged to existing subscribers of those services (i.e. those who already receive them via DTH, cable or DTT) will need to be lower than that charged to new customers: i.e., the premium they pay will be a mobility premium rather than a content premium. Thus, customers who already pay $50-80 per month for a Comcast, Viacom or BSkyB package will be reluctant to pay an additional $15 per month purely for the benefit of receiving that content over a mobile.

However, operators and service providers should instead see this as an opportunity, ultimately allowing them to bundle services as part of a triple or quad play package by locking customers in to a pay TV package across different media. Thus, for example, in the UK Sky Mobile offers existing Sky TV subscribers the opportunity to purchase themed bundles of Sky channels at £5 ($10) per bundle, thereby reinforcing its relationship with those customers but within the mobile environment. While not currently offered to non-Sky TV subscribers, it is possible that such bundles might be offered in the future, although at a considerable premium.

The above "ifs" are not insignificant, and there are others, equally problematic, which are, unfortunately, not going to disappear overnight. That said, if and when operators and service providers have addressed those "ifs", have ticked them off on their "to do" lists, then the opportunity afforded by the small screen of mobile TV appears large indeed.

Dr Windsor Holden is Principal Analyst with Juniper Research, and can be contacted via: windsor.holden@juniperresearch.com

Today, most major carriers and service providers have well-advanced strategies to deploy all-IP Next Generation Networks. But how does one secure the delivery of a high quality user experience over IP infrastructures? As with legacy networks and services, this will be an issue for tomorrow's converged networks too.
Few things in life are trivial, and providing high service quality to end users across a converged network is certainly not one of them. Although IP will in many cases simplify the operation, the convergence in the access layer and across services adds real complexity. Ever since the launch of UMTS, wireless operators have struggled with inter-technology handovers between their GSM and UMTS radio access networks. Once extended into a fixed-mobile converged (FMC) environment, handovers need to be facilitated between cellular accesses and both trusted and non-trusted local WiFi and WiMAX accesses, to enable mobility and Voice Call Continuity.
One of the enablers for FMC is the IP Multimedia Subsystem, IMS. Destined to provide a wide set of seamless services through service delivery platforms and application servers, IMS will be an important means for service expansion and hence revenue. Yet even though IMS is widely accepted, few believe it will be the only service environment. This means that operators need to manage multiple service domains on top of their multiple access networks and their core backbone network. Added to this, most operators will continue using their legacy infrastructure for the foreseeable future.
Much of the richness in the services of tomorrow comes from the continued evolution of user terminals. Open Java and SIP smartphones enable a completely new set of services, and form the very foundation for operators' future revenue growth. The increased complexity of a service architecture with terminal-based client applications may, however, prove an unpleasant surprise for operators. Most users who have attempted to use WAP and GPRS services have probably experienced problems - problems often due to trivial profile or parameter settings, but still fatal for the end user, and fatal to the success of the service. Open terminals can easily turn into a true nightmare for operators trying to guarantee seamless operation and roaming.
Service quality is probably one of the most important sources of differentiation. Having end-to-end visibility across the NGN network and services will be paramount, not only for understanding the customers' experience, but also for efficiently managing increasingly complex value chains, in which application and service partners play an ever more important role - rendering SLA management a must. Only through proper monitoring will operators be able to tell what their customers experience, and whether they get through loud and clear!

7 layers
More and more industry segments are beginning to integrate wireless modules, such as GSM/GPRS, UMTS/HSDPA, W-LAN, Bluetooth etc, into their products. By doing this they are increasing the attractiveness and usability of their products considerably. The range of businesses for which wireless technologies can bring considerable improvements is extremely wide. The continuously growing Health Care Sector, the Traffic and Automotive Sector or the Metering Sector are only a few examples of industries that benefit from the integration of wireless modules into their products.
However, for manufacturers from outside the telecommunications sector, integrating wireless modules is quite a challenge. First of all, they have to keep an eye on the interaction between the various modules and the other hardware and software components within their products. Secondly, interoperability between products from different manufacturers is a must if products are going to be a success on the global market.
But that is not all. Not only do manufacturers have to fulfill their own quality standards, they also have to fulfill country-specific regulatory and type approval demands, which increase considerably when integrating wireless modules. On top of this, module integrators have to make sure their products meet the requirements of qualification and certification regimes such as the Bluetooth SIG, the GSMA, PTCRB etc. Depending on the way wireless modules have been integrated, and possibly altered during the development process, it may be necessary to go through a fairly complex, time-consuming test and certification process. Having a partner who thoroughly understands the ins and outs of these processes can be a great help.
7 layers is one of the world's leading test and service centres for the wireless industry and supports some of the largest mobile phone manufacturers with testing and certification of their products. In addition to this they have a thorough understanding of a large number of successful modules, chip-sets and reference designs that have been tested in their laboratories. This is the experience manufacturers from the non-telecommunications sector can build on when tackling highly interesting but demanding new business fields by integrating wireless modules into their products.

Service providers are actively looking to interactive, personalised IPTV services to differentiate their triple and quad play solutions.  As always, the search for top-line revenue growth has to be balanced with a firm eye on bottom-line expenses.  Although most operators understand that IPTV middleware is critical to delivering a superior quality of experience, few are aware of the economic impact of this choice.  Recently, Espial, a leading IPTV middleware provider, released a white paper studying the economics of middleware.  It considered several cost areas in its analysis, including set-top boxes (STBs); IPTV headend and ecosystem components; IPTV middleware; service innovation; and, finally, network infrastructure. The conclusion?  IPTV middleware significantly impacts overall deployment costs on a $600M investment.
The fictitious operator in the white paper builds to one million subscribers over five years.  The projected savings from judicious middleware selection were 33 per cent, or US$195M. Let's quickly explore the impact of middleware selection on two cost areas.
First, set-top boxes (STBs) are a major cost line item - up to 50 per cent of overall spending - and these costs vary considerably depending on the STB features, truck roll costs and the STB lifespan.  For example, an SD STB costs in the order of $90-150, while an HD unit with DVR will run around $450.  Well-architected middleware - middleware with an efficient code base and data architecture - reduces memory and CPU requirements, extends STB lifespan and reduces truck rolls, which can substantially impact total cost of ownership (TCO).
Second, middleware can affect IPTV headend ecosystem costs, including procuring and integrating video systems with operations/business support systems (OSS/BSS).  This covers the equipment to receive, encode, store and distribute the IPTV service to the set-top box, as well as OSS/BSS systems such as billing.  Wise selection of middleware can reduce this cost area by between 10 and 40 per cent. These savings are attributable to three areas: an open integration environment, multi-domain management and a scalable architecture. An open environment dramatically lowers system integration costs across the entire IPTV headend ecosystem.  A scalable architecture supports linear growth and ideally can support 100,000+ subscribers per application server. Finally, a multi-domain capability will support separate regional channel line-ups, UI skins and applications.  This allows a single system to serve the needs of many communities, which avoids duplicative spending.
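Taken at face value, the white paper's headline figures are easy to sanity-check. The split below uses only the numbers quoted above; treating "up to 50 per cent" as the STB share of this particular deployment is an assumption.

```python
total_capex = 600e6   # the $600M investment in the study
savings     = 195e6   # the projected saving from middleware selection
print(f"{savings / total_capex:.1%} of total capex")   # -> 32.5%, i.e. the "33 per cent"

stb_share = 0.50      # STBs: "up to 50 per cent of overall spending" (assumed here)
print(f"${total_capex * stb_share / 1e6:.0f}M potentially tied up in set-top boxes")
```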
To wrap up - prudent middleware selection can dramatically affect an IPTV service TCO.
Kirk Edwardson, Director of Product Marketing

In the first of a regular column for European Communications, Ian Scales looks at why the debates surrounding network neutrality produce so much more passion in the US than in Europe

Just when we thought the ‘Internet neutrality' debate had finally exhausted itself, it reared up unexpectedly in late summer. US cable company Comcast's dabs were apparently found on some disrupted Bittorrent (a popular P2P application) file exchanges and then someone extracted a  ‘confession' from a slightly confused executive to the effect that the back-room boys might have been doing some traffic-shaping.  Bah! spat the blogosphere. This was clearly a deliberate attempt to degrade a competitor (Bittorrent is effectively another way to distribute video, the cable companies' core offering) and a foretaste of things to come unless there is Internet neutrality regulation. The ensuing argument was predictably measured and sober. The ‘Nazis' were implicated:  "First they came for the Bittorrent users," intoned one blogger, "but I said nothing because I didn't use Bittorrent, then they came for the... (and so on)."

There's been lots of that sort of thing from the US but in Europe, while there have been raised voices, there's never been the same fury and hyperbole around Internet neutrality issues. Cultural and political differences play a big part - Internet neutrality is often linked to freedom of speech in the US and there's a long tradition of grassroots, anti-monopoly sentiment (absent in Europe). But even so there's a noticeable difference in atmosphere and I think telecom competition (or the lack of it) is at the bottom of it. In Europe, with glaring exceptions, competition seems to be at least going in the right direction, especially in the UK. In the US, after decades of liberalization and competition, the general opinion is that the market is in reverse gear: there is much talk of the old monopolies re-establishing themselves and a growing feeling of revolution betrayed.  Ever since early 2006, when the then AT&T chairman and chief executive Ed Whitacre started talking about the need for extra payments from Microsoft or Google for high quality content delivery, the more excitable end of the pro-Internet neutrality brigade has been on red alert - the Comcast incident looked like the first signs of a shakedown.   As to the Internet neutrality argument itself, it's notoriously difficult to pin down. Some of it goes around in circles and some of it is just stupid. And if the ‘pro' brigade are capable of a little hyperbole, the ‘antis' are even worse, blatantly misrepresenting the whole concept of neutrality as a communist plot and garnishing their arguments with predictions of an imminent Internet collapse and/or the drying up of network investment (both of which have always been about to happen since about 1993, but never do).

On the other side of the argument, Internet history is actually littered with players shelling out to gain a performance advantage for their applications or users, and on each occasion there has been a grumbling chorus. The introduction of ‘private' peering - where, instead of exchanging traffic at delay-prone public peering points, players simply peered ‘privately' - was one such. It was the same with content delivery networks that offered ‘better than best effort'. Wedges with thin ends were grimly forecast.  What's different now is that the backstop of broadband ISP choice is felt to be lacking in the US - many users claim they have just one possible provider, two at most, and therefore market forces alone aren't enough to keep the big ISPs honest. And they probably have a point.  In the UK, BT's decision to rearrange itself into retail and wholesale arms with the establishment of Openreach has helped foster an atmosphere of grown-up retail competition. From a truly awful total of a few tens of thousands of lines just three years ago, BT's competitors have now unbundled well over 3 million lines. Rightly or wrongly that's generated the perception of real broadband choice, and there seems to be a lack of angst amongst UK Internet users as a result. If my broadband ISP starts to exhibit Nazi impulses, I can probably go to another (unless I'm somewhere really remote).  So for the time being, and thanks mostly to a clued-up Dutchman, the UK seems to have cracked the regulatory conundrum.  Now we'll see if Viv Reding can sell the concept to the rest of Europe, and maybe even the US?

Ian Scales is a freelance communications journalist, editor and author.
He can be contacted via: i.scales@ntlworld.com

Mike Hawkes examines the aspects of mobile phone security that seem to be hidden from plain sight

A few weeks ago I was talking with some very well-versed individuals from a highly respected anti-virus and security firm about mobile security, and was intrigued to note that most of the conversation was around virus protection for mobile phones rather than simple data protection.  Perhaps this isn't surprising, bearing in mind the consumer's innate fear of computer viruses and how they can steal your life away.  These security companies make serious money combating the destructive and criminal activities of malware distributors, and it makes sense to apply this knowledge directly to mobile phones.
There is, however, a rather important aspect of mobile communication security that is missed in this level of conversation: that of actual data security.

Viruses have become known as the means of stealing personal data from individuals, data which is then sold on for all sorts of fraudulent purposes.  From the hackers' perspective, viruses have become a necessity on computers because of the relatively trustworthy nature of PKI for secure Internet communications, and because of the need to access PCs remotely through ‘invisible' Trojan-horse applications.

Mobiles are a different matter.  As m-commerce takes off, an increasing number of services invite businesses and consumers to send and receive sensitive information on the mobile phone. And this trend is only going to grow. 

Yet measures to tackle security issues that work for PCs cannot be directly applied to mobile phones.  Firstly, mobile phones get lost a lot more often than PCs or even laptops.  It is reported that as many as 10,000 phones are left in the back of taxis in London alone each month. What of all the other taxis in other cities, the buses, trains, bars and, of course, those phones that are physically stolen?  Anything sensitive left in the inbox or sent items can be readily extracted from the phone.

Data on a mobile phone does not necessarily need to be sensitive for it to be of value to a non-owner either.  Increasingly, items of value are being sent to the mobile, often in barcode format over MMS. There are a number of security risks around this too. For example, with no audit trails, fraudsters can claim not to have received the message and repudiate the payment. Tickets can be bought on stolen credit cards and forwarded for cash.
Possibly more important than the issue of data on handsets is that of data interception.  Why?  Primarily because the radio link used in mobile phone communications is inherently insecure. To quote a US security expert: "If it has an antenna, it is not secure, period." Additionally, many telecoms businesses are not really aware of what this insecurity entails, let alone of the risks to customers.

It is true that cell interception remains a low-level threat while the pickings are poor, but as there is growth in localised concentrations of personal data being sent by phone, the incentive for fraudsters to begin cell interception increases. 

A recent example of this can be seen in Westminster, London, where the City Council invites drivers to send their credit card and other personal details via SMS to pay for parking. Other councils around the country are likely to follow suit and introduce similar schemes, creating more honey pots for fraudsters. As cell interception technology is readily and cheaply available on the black market - one can even find DIY instructions on the Internet - cell interception poses a real threat to mobile users.

So, there are two clear dimensions of risk here; data that can be taken off the device itself, and data that can be intercepted over the air.  Most interestingly, neither of these risks to personal data is even slightly related to the propagation of viruses between handsets. So where is the opportunity? 

By integrating tools that make phone content available only to the owner of the phone - through on-handset encryption activated by a PIN code, for example - data on a lost or stolen phone becomes unusable to anyone else.  Combined with secure over-the-air encryption, the nature of mobile phone communication, particularly SMS and MMS, has the potential to change dramatically.
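As a purely illustrative sketch of the on-handset idea (the library, key-derivation parameters and flow below are assumptions, not Broca's actual design), content can be encrypted under a key stretched from the owner's PIN, so that data lifted from a lost or stolen handset is unreadable without it:

```python
import base64
import hashlib
import os

from cryptography.fernet import Fernet  # assumes the 'cryptography' package

def key_from_pin(pin: str, salt: bytes) -> bytes:
    # Stretch the short PIN into a 32-byte key; the high iteration count
    # slows brute-force guessing of typical 4-6 digit PINs.
    raw = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 200_000)
    return base64.urlsafe_b64encode(raw)

salt = os.urandom(16)                       # stored on the handset with the data
inbox_item = b"MMS ticket barcode payload"  # anything sensitive in inbox/sent items
token = Fernet(key_from_pin("4921", salt)).encrypt(inbox_item)

# Without the PIN the stored token is opaque; with it, the content returns:
assert Fernet(key_from_pin("4921", salt)).decrypt(token) == inbox_item
```

A real deployment would also need retry limits and secure key storage, since a short PIN on its own offers limited entropy.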

Mike Hawkes is CTO of Broca Communications

In a commercial world of increasingly numerous and, some would say, meaningless acronyms, Software as a Service (SaaS) stands out as offering real value to a wide range of businesses, says Jerona Noonan

As the world becomes more virtual, so communications technologies enabling collaboration can make the difference between success and failure.  And, in turn, deploying such tools in the best manner possible is critical in ensuring their full value enterprise-wide.
In supporting this, the best Software as a Service (SaaS) solutions can lay fair claim to being the most reliable and cost-effective delivery model currently available, offering a broad range of benefits to both the enterprise and individual users.  They are quick and easy to deploy, and you only pay for what you use; in addition, the IT department does not have to manage capacity, performance or maintenance, and it is easy to extend use to partners, suppliers, customers and other users outside the organisation's firewall.

In its latest report, Leveraging the Value of Software as a Service: Key Benefits & Best Practices, industry analyst Frost & Sullivan also points to other benefits ensuring a substantial return on investment.

Users can avoid getting locked into a single vendor or solution, for example; they are able to test applications without the upfront commitment to a long-term implementation and, as a result of being located remotely, SaaS solutions aid disaster recovery planning in the event of business interruptions. 

More for less
Today, the SaaS concept has assumed a high media profile as a cost-effective way for large enterprises to meet the broader business imperative for IT investment to achieve ‘more for less'.  As organisations face growing commercial, regulatory and environmental pressures in ever more global markets, there is an unprecedented need for IT to drive greater internal efficiencies and better customer service, within a reduced carbon footprint.  Yet the budgets available to achieve this remain consistently tight, putting pressure on solutions providers to be more creative, both in terms of the supporting technologies and their implementation and ongoing management. In response, across all areas of IT, vendors are developing SaaS-based solutions designed to achieve lower-cost service delivery.  And, as ever in a market where the latest technology ‘buzzword' appears to offer highly attractive returns, their ability to deliver the benefits both promised and desired is likely to vary significantly, depending on the quality of the individual vendor's offering and the particular sector they are looking to support.

Cost-effective deployment
Having said that, by common consent analysts point to multi-media conferencing as an area of especially strong potential for SaaS. 

The reasons are not hard to find.  It is, for example, cost-prohibitive for a large enterprise to build out its infrastructure to support a truly scalable web conferencing solution, including the appropriate network, applications and resources.   

As an online service, there is no upfront hardware or software investment to be made, allowing the enterprise to focus investment on its core competencies and not on applications and technologies requiring continuous expenditure and resources to properly maintain.  In most cases, it requires no more than a simple Windows-based application to be installed on each user's desktop. 

Companies realise that this is not an application that they can easily deploy and support themselves.  Large enterprises could not possibly keep up with the technology and changes in applications, as well as the overall support required to make this an effective and useful capability across the company.

Not surprisingly, in light of this, Frost & Sullivan predicts ‘robust growth' for such applications as audio, video and web conferencing and collaboration.  In the case of web conferencing in particular, it forecasts that services will account for almost 70 per cent of the total market by 2011.
In today's virtual commercial world, adopting and deploying the right communications technologies has become mission-critical for businesses both large and small.  Yet the SaaS concept has, in various guises, been around for more than 20 years.  For example, the hosted services model - as a cost-efficient alternative to in-house implementation - has formed the basis of Genesys Conferencing's own market proposition since the 1980s.
Having said that, there is a clear distinction to be made here with application service providers (ASPs), in earlier years a well-recognised means of delivering hosted applications.  The fundamental difference is that, unlike the more recent SaaS delivery mechanism, ASPs typically adopted the traditional client-server model, managing the servers without making any modifications to the delivery system. 

Scaling up and managing multiple customer accounts was also problematic as, in effect, ASPs had to add more infrastructure each time they signed a new client.  SaaS vendors, by contrast, rely on net-native applications, which significantly improves performance. 
Further, scalability and performance are greatly improved because dedicated servers are not required for each client site: this not only improves vendor profitability but enables some of the savings to be passed on to customers in the form of lower prices.  
In short, the SaaS model is now both more reliable and more cost-effective in providing robust support for virtual organisations.

For an enterprise business evaluating whether or not a hosted - and, more specifically, a SaaS - solution is most appropriate to their needs, a number of issues should be considered, both with regard to the solution and the choice of vendor.

Once again, the Frost & Sullivan report identifies a wide range of business situations where a SaaS solution makes sound commercial sense, both financially and operationally. 
It will be especially valuable if the organisation is growing, or where employees need to be able to collaborate with one another and with others outside the organisation.  A SaaS-based approach will also be appropriate if the goal is eventually to offer a complete suite of integrated applications over time, as dictated by need, or if the company wants to avoid costly up-front capital expenditure and the ongoing maintenance charges associated with it.
Selecting the most appropriate SaaS pricing model with the right scalability - both up and down - is of course essential.  Yet there are a number of further considerations to be made in choosing the right vendor.  Secure access to the provider's network and software applications is paramount, as is guaranteed 24/7 performance - especially critical in the context of real-time communications applications.

Services and support must ensure on-going availability to end-users and the ability to integrate with other existing applications within an existing infrastructure is also important.  And finally, not only must the communications application be simple to deploy but also easy and intuitive to use: for it is only by encouraging maximum adoption and continuing use throughout the organisation, that the full financial savings and other operational and environmental benefits will be realised.    

Broadly-based benefits
The advantages of SaaS, though wide-ranging, commonly have financial implications, both in terms of cashflow and outright cost savings.  For example, unlike on-premise solutions, SaaS is quick and easy to deploy across the enterprise and, as a hosted solution, requires no server hardware.

As SaaS is usually charged on a per user basis, companies can scale their conferencing or collaboration applications to support multiple users whenever necessary, without having to pay for every user within the organisation.  The service can also be scaled easily and quickly to more users - irrespective of location and also beyond the firewall - as the need to participate expands across and beyond the enterprise. 

This is especially attractive, as few employees need permanent availability, yet many will need access at least some of the time.  The result is that costs are kept under control while, at the same time, ready participation in virtual meetings and other activities is ensured whenever it is required.
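A toy comparison shows why per-user charging keeps costs under control. Every number below is invented purely for illustration and does not come from the Frost & Sullivan report.

```python
employees       = 5_000   # total staff (hypothetical)
active_users    = 600     # staff who actually conference in a given month (hypothetical)
saas_per_user   = 15.0    # $/active user/month under a SaaS model (hypothetical)
onprem_per_seat = 6.0     # $/employee/month for an enterprise-wide licence (hypothetical)

saas_monthly   = active_users * saas_per_user    # pay only for who participates
onprem_monthly = employees * onprem_per_seat     # pay for everyone, used or not
print(f"SaaS: ${saas_monthly:,.0f}/month vs licensed estate: ${onprem_monthly:,.0f}/month")
# -> SaaS: $9,000/month vs licensed estate: $30,000/month
```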

As SaaS applications include all upgrades and updates, the end-user organisation and its staff will always benefit from the latest software as soon as it becomes available.  Set-up and deployment across the enterprise is almost instantaneous: the business does not have to concern itself with setting up accounts, communicating them to each end user and then training users, as this too is taken care of within the SaaS environment.
The result of delivering web conferencing via a SaaS model is that the service is available at any time, anywhere in the world.  All that is required is Internet access and a telephone - indeed, by adopting the VoIP option, not even a telephone is necessary. 
In the context of the latest multi-media communications and collaboration solutions, the term ‘SaaS' significantly underplays and undervalues the enterprise-wide impact of some vendors' service offerings.

In contrast to those who provide the software application only, Genesys Conferencing, for example, goes further, by offering 24/7 support, protection against technology obsolescence, full set-up, implementation and deployment capability and comprehensive reliability, performance and security. 

And, where the provider authors its own software, the customer will also benefit from the flexibility of a solution that can be customised to meet their own particular needs.  The result of adopting a service solution at this deeper platform level is that the customer will be assured of a future-proof yet resilient unified communications solution that will be cost-effective both to install and maintain.

The result?  As an alternative delivery mechanism, "SaaS is changing the way companies buy and deploy software and for good reason," believes Frost & Sullivan.  "Most SaaS providers charge per user or usage, allowing customers to stay flexible with their applications purchases, scaling up or down as needed and offering applications outside the organisation, as appropriate.  This model also encourages usage within the organisation, boosting productivity and the technology's RoI."

As today's enterprise businesses face greater pressures than ever, the breadth of financial, operational and service benefits which SaaS delivery offers enable companies in all sectors to remain both productive and competitive - the essential basis of both survival and profitable growth.

Jerona Noonan is Sales Director, Genesys Conferencing

The announcement earlier this year that the MediaFLO standard has been adopted by a major US telecoms operator for mobile TV services has been greeted with disappointment by some European industry commentators and proponents of DVB-H. However, in a wider context the announcement can be seen as a boost for the whole mobile TV market and should pave the way for more widespread roll-outs of the mobile TV technologies in general. John Maguire discusses how regional, commercial and regulatory issues will influence the adoption of different standards and how open and proprietary standards can successfully co-exist

There is a fascinating battle for supremacy being played out across the globe as different mobile TV standards, both open and proprietary, vie for acceptance and market share. Currently there are three key players in this embryonic market. Representing the proprietary solution we have Qualcomm's MediaFLO end-to-end system, which has launched in the United States. Representing the open standards are ISDB-T in Japan and DVB-H in Europe, both of which have fully rolled-out, live services.

The key advantage that a proprietary system has in an early market is that it allows a single company or joint venture to deliver a single point, turnkey solution to a telco or broadcaster. In the case of MediaFLO, Qualcomm presented a compelling argument by offering a solution that was extremely quick and easy to deploy using existing infrastructure wherever possible. For the US operators that have adopted MediaFLO the opportunity to steal a march on the opposition by investing in an ‘off the peg' solution was too good to miss.

The disadvantage of the open standards approach is that it is fundamentally ecosystem-based: it is a collaborative approach with certain inherent drawbacks, and at first sight the end result does not appear too appealing. There is a steeper learning curve with an open standard, with more elements to plug together, and the need to drive interoperability and manage that ecosystem in order to find and develop the best-in-class suppliers across the value chain. However, organisations are increasingly willing to invest in a standards-driven process because of its greater benefits over a proprietary solution in the long run. Open standards tend to be supported by a group of organisations; there are more companies involved, and consequently there is a bigger ecosystem, which allows participants to drive down the price and to drive up the level of competition.

Another benefit of the open standards approach is that the cost of developing the Intellectual Property (IP) can be assessed and a single IP value fixed by the standards body, for instance MPEG LA. As the cost per handset for using a particular standard is set by an independent body, operators have clarity of cost prior to entering the market and can be confident that there will be little cost variation due to changes within that standard.
The role of government support for mobile TV is another interesting area. In countries where governments currently support either analogue or digital free-to-air TV, there will soon be a watershed when the so-called ‘digital dividend' becomes available and spectrum and frequency allocation is agreed for services such as mobile TV. Governments in countries such as the UK and Ireland have a mandate and, arguably, a social responsibility to provide free-to-air content broadcast to traditional TVs. The question that must be addressed very soon is: how does mobile TV fit into this free-to-air model, and should state-funded broadcasters such as the BBC and RTE be expected to extend their services to include hand-held devices?
If the answer is ‘yes' then a government is going to find it far easier politically to adopt an open standards based solution - which, by definition, can be serviced by any company with suitable knowledge and experience - rather than a proprietary solution. In fact, in Europe the first steps towards government acceptance of an open standard for mobile TV (in this case DVB-H) have been led by Viviane Reding, the European Commissioner responsible for Information Society and Media, who has been extremely vocal about the need for Europe to work together and accept the open standard approach.

Another benefit of governmental support for open standards is that by deploying networks using compatible standards across multiple regions - and, in the longer term, multiple countries - there are not only benefits through economies of scale, but also the opportunity to develop roaming capabilities.  For mobile TV, deployment of a proprietary standard would not allow this and would lead to more market fragmentation than an open standards approach could potentially deliver. For example, if MediaFLO or a similar proprietary standard gained a foothold in Europe, it might be successful in one or two countries, but there would be little chance of pan-European organic growth. On the other hand, an open standard such as DVB-H has a far higher chance of widespread success in Europe as there are far fewer barriers to its adoption. In fact, the rollout of DVB-H services is likely to cause a snowball effect in opening up new business models and extra revenue channels for roaming.  Roaming is not as important for broadcast mobile TV as it is for multicast or unicast TV, but an open standards approach is typically the desired strategy. Hence all the major mobile operators are looking for well-defined interfaces, not proprietary standards, so that they can choose best in class.

However, in the face of all the logic indicating that open standards are the way forward, the success of MediaFLO in the US demonstrates exactly what a clever business case Qualcomm have developed.  Qualcomm put together an end-to-end system designed specifically for the US market and, in order to demonstrate the suitability of the concept, went as far as sourcing and licensing the content. The company was then able to approach operators and offer a complete working solution that could be rolled out in a very short period of time. However, once MediaFLO had been demonstrated ‘in the field', Qualcomm took it to the US standardisation body with the aim of having it adopted - because even Qualcomm recognises that the only way to drive the business globally is to have it as a worldwide standard.
However, it is arguably the size of the market that matters far more than which standard is being used, certainly in the short term, and consequently it is interesting to consider the levels of penetration that the commercially available mobile TV networks have achieved. When the Japanese government took the decision to launch an ISDB-T service, it first secured a mandate to proceed and positive endorsement of the standard from all of the mobile operators. By securing buy-in from all the major players in the Japanese mobile TV market to contribute to the standard and to develop devices based on it, the government ensured that after one year of operation the service was available to 70 per cent of the Japanese population. Of these potential users, 10 per cent were actually using the mobile TV service. In the case of 3 Italia with its DVB-H (H3G) network, after one year of operation 800,000 of the available 8 million subscribers were using the mobile TV service. So, interestingly, in both markets the open standards reached 10 per cent of the available population. In the USA, despite Qualcomm's best attempts to make mobile TV accessible, the proprietary MediaFLO networks have reportedly shipped fewer than 1 million devices after about a year of operation, which is well down in percentage terms compared with the comparable Italian and Japanese networks. The USA is a very big, very fragmented market, but not surprisingly there is an ongoing debate as to whether the slow take-up of mobile TV can be attributed to geography, the standard, or perhaps external economic factors.

Another, more divisive question that is being asked is: can mobile TV be a compelling enough offering to drive volumes?  The answer is certainly ‘yes', but with a number of minor caveats. Evidence from the many mobile TV trials across the world indicates that users will adopt mobile TV, but they expect high quality audio and video as a given. Users are also becoming more sophisticated thanks to the ever increasing range of content delivered in both linear and non-linear formats to PC and television screens.  To be successful, a mobile TV service will need to offer a range of high quality content delivered as linear TV, video-on-demand TV, and even audio and video downloads, not just broadcast TV on a smaller screen. The success of the video iPod and the iPhone demonstrates the value of downloadable content, and Nokia is currently rolling out YouTube mobile in a similar vein. The technology to allow unicast, multicast and broadcast delivery is already available and is still improving. Each iteration of improvement will provide mobile operators and device manufacturers with the tools to make mobile TV more desirable to the market. However, in order to reach ubiquity, there needs to be a large degree of interoperability and certainly a collaborative approach throughout the sector. Some form of open standards approach would certainly make this process of growth and integration far quicker, easier and more successful. However, in order to ensure that the proposed standard is developed and used to the benefit of the whole sector, it would be beneficial to have strong, impartial stewardship. Industry bodies such as the OMA and the BMCO Forum are taking up that mantle, driving open standards forward and helping to ensure that the right tool sets are developed to deploy a system and provide clarity of cost, benefits and issues for potential users. Providing clear business guidance to potential mobile TV operators - through the availability of information regarding capex, opex and potential revenues - will help bring new players into the sector, and help the whole industry in the longer term by increasing volume.

The other issue that must be considered is that all the discussion about standards is currently focussed on the generic mobile TV service. Of course, once that open platform is available, there is significant scope for operators to develop and deliver proprietary services based on those open standards. This second-stage development will give network operators and device manufacturers the opportunity to innovate, build their individual business cases and differentiate their offerings once the basics are in place.

In essence, the most important issue for any company involved in the mobile TV market is to ensure that its business and technology planning is focussed on the medium to long term, and on the interoperability that will certainly be required to make mobile TV a global success. There may be a temptation to roll out a proprietary solution in order to kick-start a service and provide improved short term revenue, but as the market develops, the benefit of having 10 per cent of potential subscribers locked in might switch to become the problem of having 90 per cent of potential subscribers locked out. While the size of the market is intrinsically more important than the standards being used, the paradox is that choosing an open standard will help the market develop to its full potential, while use of a proprietary standard could lead to fragmentation. Given the choice, I believe that open standards will provide the best opportunities for the mobile TV sector, as they drive and promote interoperability and best in class solutions across the value chain. Open standards also drive competition, which fundamentally drives economies of scale and enables device manufacturers to build at a scale that proprietary systems do not allow. Either way, it is important to take a pragmatic view at this stage of mobile TV's development: as new services are rolled out, more handsets will become available and the potential viewing population will increase. Long may this continue!

John Maguire is General Manager Consumer Mobile at Silicon & Software Systems (S3)

As the deadline for EU Data Regulation compliance passes, and many member states opt to postpone, Ross Brewer warns telcos not to underestimate the amount of work involved in addressing the regulations

The original deadline for the European Union Data Regulations - September 2007 - has been and gone.  Instead of telecommunications companies across Europe retaining data to support the crime fighting efforts of regional security forces, most member states, including the UK, have chosen to postpone their application.  It is expected that the laws in the large member states will start to be implemented from the beginning of 2008.  If this is the case, large enterprises will need to have solutions in place around the middle of 2008. 
While extending the deadline may buy additional time in which telecommunications companies can get their data retention house in order, the reality is that too many organisations are dragging their heels in addressing the regulation and have no appreciation of the sheer amount of work needed to ensure compliance.  Industry estimates put achieving Data Directive compliance at anything up to 18 months.  With this in mind, if organisations are to be fully compliant in time for the new deadline, they can't delay a moment longer.

The European Union (EU) formally adopted Directive 2006/24/EC on 15 March 2006.  The directive relates to the retention of data generated or processed in connection with the provision of publicly available electronic communications services or of public communications networks.  In other words, communications providers need to retain - for a period of between six months and two years - all data that will allow relevant local authorities to trace and source communications in order to investigate, detect and prosecute serious crime. 
The directive - which applies to fixed telephony, mobile telephony, Internet access, Internet mail and Internet telephony - covers every single aspect of a communication including its source; destination; date, time and duration; the type of communication; the type of communication device and the location of the mobile communication.
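To make the scope concrete, these categories map naturally onto a simple record type. The Python sketch below is purely illustrative - the Directive defines categories of data to retain, not a schema, so all field names here are assumptions:

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class RetainedRecord:
        source: str                    # originating number or user ID
        destination: str               # dialled number or recipient ID
        start: datetime                # date and time of the communication
        duration_seconds: int          # call or session duration
        comm_type: str                 # e.g. 'mobile voice', 'internet e-mail'
        device_type: str               # type of communication device used
        cell_location: Optional[str] = None  # location, mobile communications only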

In putting compliance off until the last minute, organisations risk facing additional pressure as they try to fast-track compliance within the imposed deadline - if they can achieve compliance at all. There is also the added risk that the crime fighting abilities of regional security forces will be detrimentally affected, since the EU Data Regulations have been designed to help the security services in the fight against crime. 
With over 41 billion text messages alone sent last year, not to mention the millions of calls made every minute of every day, telcos will face enormous challenges.  These don't just relate to storing the data securely, but also to being able to locate and retrieve the data as quickly as possible, should an investigation require it, so as not to hamper proceedings. 
The amount of audit and logging information required by the Data Directive has the potential to overload the storage capabilities of even the largest and most technically savvy organisations. But in an ever more competitive industry, what can telcos do to put a necessary compliance solution in place that will not detrimentally impact the day-to-day operation of the business in any way?  

The good news is that the information that needs to be retained under the Data Directive already exists within the organisation in the form of log data.  This log data provides a fingerprint of every action that occurs across the enterprise and, importantly, across a telco's network.  However, as it is generated at a rate of millions of bytes per second, being able to capture it and provide forensic reports across the enterprise in a timely fashion, as mandated in Article 8 of the Data Directive, will be a challenge. 

As such, when looking for a solution to manage this problem, organisations need to find a product that is built for scalability and can run a single search across all devices in the organisation, minimising the effort of locating the data in the first place. 
This is where log management solutions can step in, providing a means of searching log data and producing reports.
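As an illustration of that single-search requirement, the sketch below pools log lines from many devices into one indexed store so that a single query spans the whole estate. It is a toy model using assumed names, not any vendor's API:

    from collections import defaultdict

    class LogStore:
        def __init__(self):
            self.entries = []                  # (device, timestamp, log line)
            self.index = defaultdict(list)     # search term -> entry positions

        def ingest(self, device, timestamp, line):
            pos = len(self.entries)
            self.entries.append((device, timestamp, line))
            for term in line.lower().split():
                self.index[term].append(pos)   # index once, search many times

        def search(self, term):
            """One query, answered across every device that ever logged."""
            return [self.entries[i] for i in self.index[term.lower()]]

    store = LogStore()
    store.ingest("msc01", "2007-12-01T10:00:00", "Call setup from subscriber failed")
    store.ingest("smsc02", "2007-12-01T10:00:05", "Message delivery failed retry=3")
    print(store.search("failed"))              # hits from both devices in one pass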

Steps to compliance
Installing an off-the-shelf compliance solution a couple of weeks before the deadline will not be sufficient.  Any solution will need to be tailored to meet the specific challenges of each organisation and go through a rigorous testing procedure to ensure that it is robust enough and capable of storing and retrieving information within the recommended guidelines.
Getting started with any enterprise-wide strategy for compliance requires an understanding of the requirements particular to each industry and business. Policies should then be put in place for collecting, alerting, reporting on, storing, searching and sharing data from all systems, applications and network elements.  This creates a closed-loop process that governs the life-cycle of enterprise data and ensures the compliance programme is successful.

It sounds an obvious first step, but without taking time to clearly understand the specifics of the EU Data Retention Directive, there is a risk that some controls or requirements may fall through the net, with serious implications further down the line.  For example, the minimum requirements of the Directive include that the solution must be enterprise-scalable and distributed; fault tolerant, ensuring no loss of data; able to prove that the logs are immutable, so that they can be used in a court of law; and able to produce forensic reports across the enterprise in a timely fashion.

Once the specifics of the Directive have been clearly understood, the next step is to put in place the IT controls and frameworks to help govern compliance tasks and keep the business on track for complying with the mandate.  

Goals should then be defined, and the key tasks for successful compliance identified and agreed.  Specific tasks relating to each goal can then be set.  Once these tasks are complete, the configuration of network elements, systems and applications can be addressed. 
Alerting mechanisms and scheduled reporting will advise IT personnel when any part of the solution isn't complying with the policies.  Early reporting of any problems will ensure that they can be addressed in a timely fashion with minimal impact on the rest of the operation.  Alerts and schedules can also demonstrate compliance to auditors.

Alerting and reporting on logs must be substantiated with immutable log archives.  It is therefore critical to store logs centrally with a long-term archival solution that preserves the integrity of the data, as required by the EU Data Directive.  Immutable logs require time stamps, digital signatures, encryption and other precautions to prevent tampering - both during transit of the data from the logging device to the storage device and during archival - if they are to stand up as evidence in any legal proceedings.
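To show how such tamper evidence can work in principle, the sketch below timestamps each record, chains it to its predecessor by a hash and signs it, so that any later alteration breaks verification. The class, the key handling and the record layout are illustrative assumptions, not a description of any particular archival product:

    import hashlib, hmac, json, time

    SIGNING_KEY = b"replace-with-a-managed-secret"   # assumes proper key management

    class LogArchive:
        def __init__(self):
            self.records = []
            self.last_digest = b"\x00" * 32          # genesis value for the chain

        def append(self, event):
            record = {"timestamp": time.time(),      # time stamp for evidence
                      "event": event,
                      "prev": self.last_digest.hex()}  # chain to previous record
            payload = json.dumps(record, sort_keys=True).encode()
            record["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
            self.last_digest = hashlib.sha256(payload).digest()
            self.records.append(record)

        def verify(self):
            """Re-walk the chain; a tampered record breaks digest or signature."""
            prev = b"\x00" * 32
            for record in self.records:
                body = {k: v for k, v in record.items() if k != "sig"}
                if body["prev"] != prev.hex():
                    return False
                payload = json.dumps(body, sort_keys=True).encode()
                expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
                if not hmac.compare_digest(record["sig"], expected):
                    return False
                prev = hashlib.sha256(payload).digest()
            return True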

It's easy to view the EU Data Directive as yet another piece of Brussels bureaucracy but, unlike many other regulations, this Directive isn't about preventing or identifying financial irregularities within big business.  Instead, the EU Data Directive has been designed to help the ever more complex fight against crime - at an individual country, European and global level.
Telcos shouldn't feel daunted by the compliance task that lies ahead of them; after all, they already hold all of the data required by the EU Directive.  Instead of viewing compliance as an isolated IT project, they should look upon it as a business issue that requires a cross-functional approach, involving people, processes and technology across the enterprise.  Taking the steps necessary to understand, define and implement the appropriate IT controls and frameworks for the business will simplify compliance and reduce the costs and resources involved in completing compliance-related tasks in line with the Directive's deadline.

Ross Brewer is Vice President and Managing Director, EMEA, LogLogic

The GSMA's Mobile World Congress 2008 runs from 11th to 14th February in Barcelona, aiming - once again - to prove its value as the leading event in the mobile industry

With the growing convergence of media, communications, entertainment and information very much on the minds of all those involved in the currently separate industries (each, in the words of Keith Willetts, TM Forum Chairman, in the process of eyeing up the other's lunch), the GSMA's Mobile World Congress is clearly intending to hit all the right buttons, with all the right buzzwords. 

Noting that mobile is now about much more than simple voice calls, being a social and economic phenomenon that provides new channels and opportunities for information, entertainment and the Internet generation, the GSMA has declared the four major themes of the conference to be ubiquitous networks, the over-the-top service challenge, mobile's social and economic value, and the mobile digital content revolution, and thus clearly aims to ensure that Mobile World Congress 2008 safeguards its reputation as the largest and most significant global mobile event.  The organisers further declare: "The structure of the upcoming Mobile World Congress is being developed to reflect the ongoing changes in the mobile value chain.  This programme identifies the risks that will be taken and highlights the rewards that can be reaped in bringing these changes to all geographic markets and mobile services."

Under this umbrella, the event will tackle a variety of the issues currently dominating telco thinking and concerns.  The convergence of fixed and mobile communications, for instance, may well have a certain natural evolution, but is it inevitable, real or even desirable?  The MWC session will look to unearth what FMC means for leading operators and define the role it will play in their business development.

While the ‘Society on the Move' session will explore the way in which mobile has integrated into modern culture, recognising the growing importance of mobile across society, the commercial aspects of the market are very much to the fore.  Within the Mobile Entertainment Summit, for example, one session will address the proposition that mobile advertising is essential to unlock the potential of mobile entertainment: some believe that advertisers will jump at the chance to spend billions in the mobile arena, while others believe that users will simply turn off.  The session aims to put the opportunity into context and uncover what it will take to deliver on the mobile advertising promise.  Under the same Summit, the conference will also address the view that the youth demographic is the darling of the mobile world.  Quick to take up new services and open to new technologies, the youthful user is every operator's target - but is this reputation justified?

On the technology front, the conference will look at such areas as HSPA, under the title ‘The story continues with HSUPA and HSPA Evolved', asking, with HSDPA already proving a huge commercial success on a global basis, what the evolutionary path for this mobile broadband technology is.  The session will aim to provide operator insight into deploying HSUPA and the move to HSPA Evolved.  Staying in the technology arena, ‘LTE - defining the future' recognises that while standards are quietly being finalised, speculation and rumour pervade.  The session will provide insight into the reality of LTE and its prospects against alternative future mobile broadband technologies.

Noting that mobile penetration has reached over 100 per cent in 38 countries to date, and many more are high in the 90s, the focus of the keynote session on building ubiquitous networks centres on the recognition that while there may be a mobile phone in everyone's pocket, mobile isn't everywhere and the challenge is to take high speed, high capacity mobile services everywhere.  It will examine the development and rollout of ever-more capable networks and their convergence with fixed, to deliver ubiquitous services.
And while MWC recognises that the mobile industry is unrivalled in its technical excellence (well, up to a point, Lord Copper), it notes that the industry all too often forgets that the success of its achievements rests on the experience the user gets when he first switches on.  The ‘It's the user experience stupid!' session highlights the importance of putting the user first and the developments that can set consumer imagination racing.

The highly topical subjects of VoIP and Mobile TV must also, of course, have a slot at the conference.  In the case of the former, the ‘VoIP - coming ready or not!' session works on the basis that VoIP has the promise to revolutionise the cost base of voice communications, but will it, and how will VoIP impact the business of mobile operators?  In the case of Mobile TV, it is noted that the application has had a stuttering start, but that, as it finally begins to reach commercial deployment, the race to find the killer business model is replacing the technology debate.  The ‘Mobile TV - taking it to the masses' session looks at the realities of rollout and the business impact of technology decisions.

The above, of course, is a mere snapshot of all that Mobile World Congress 2008 aims to be.  The event also offers both visual and hands-on back up to theories and discussions in the shape of the many products and services on display at the concurrent exhibition.  And, most importantly, it is the king and queen of networking opportunities - of the business and social kind, of course.

A new breed of service providers are helping to reverse the fortunes of some of Europe's most influential mobile companies - by helping them manage their customer lifecycles far more effectively, explains Georgia Hannias

Traditional marketing is no longer a viable communications tool in the mobile telecoms world.  Glossy adverts in the papers, mass-marketing direct mail and telemarketing are not winning customers' loyalty - especially when every operator seems to be offering the same thing.  In this confusing climate of choice, even a great product or service isn't making the real difference to business growth.  What customers are looking for is an operator who can understand their preferences and ultimately tailor their services to give them what they want, when they want it.

Understanding the needs of the customer is what Customer Lifecycle Management (CLM) is all about. This communication trend focuses on managing customer behaviour by finding a way to create a bond between the operator and the customer throughout the customer's lifecycle. Once a one-to-one communication is created, consumer behaviour can be monitored, so that companies know exactly what their customers want and can subsequently provide a tailored service to meet individual needs.  This is usually the first step forward in successfully delivering targeted services and, most importantly, building a strong, long term relationship between the service provider and the customer.
 "Customer lifecycle management has a future in mobile telephony - especially in Western Europe where there is a blurring of fixed and mobile services," explains Rob Bamforth, Principal Analyst at Quocirca. "This convergence is eating away at revenue generated by traditional services such as voice calls. To find new ways of earning income and to keep customers happy, operators have to differentiate themselves from their competitors- and to do so in a way that makes a subscriber feel valued and not just a number. CLM can help achieve this by enabling an operator to target an individual person."

Marketing departments are all too aware of this demand for more personalised services and how customer lifecycle management can help achieve this goal. More importantly, they can usually produce the creative ideas that are needed to develop the campaigns that will engage customers. The main problem, however, is that the internal complexities involved in executing new programmes usually prevent them from becoming a reality. IT departments tend to place more emphasis on the OSS and BSS systems and the call centre side of a business, which means that marketing's requirements almost always take a back seat internally.

Consequently, campaigns that should take only weeks to complete and launch can take months, and many compromises are usually made to get a programme out the door - leaving marketing departments frustrated by the fact that they haven't quite delivered the programme they wanted.

Fortunately, there is a new way of simplifying the execution and enhancing the flexibility of personalised marketing campaigns known as Marketing on Demand. Simply put, Marketing on Demand is a software-as-a-service model that removes the internal complexities involved in launching new campaigns.  It enables marketing departments to be more focused on the programmes they want to deliver, leaving the technical execution to a team of external specialists. 

With the availability of marketing on demand solutions, marketing departments can implement customer retention programmes quickly and easily without the need for a long-drawn-out IT project.  This brings a whole new dimension to the way marketing departments can control and execute their marketing campaigns.  It provides the flexibility to capitalise on existing customer data, regardless of where it is stored within the organisation, as it is fed to a hosted solution capable of automating all outbound and inbound interactions in real time.  This enables operators to manage customers on a one-to-one basis so that a relevant and consistent communication journey begins and continues throughout the customer's lifecycle.   Best of all, marketing on demand means that any number of programmes can be launched in one go, allowing operators to execute and manage a combination of programmes without delay or restriction. 

The software-as-a-service (SaaS) delivery model is already driving major growth in the customer relationship management applications market. According to analyst firm Gartner, mobile operators and other enterprises continued to invest in front-office applications last year, with worldwide CRM software revenue exceeding $7.4 billion (£3.7 billion) in 2007, up 14 per cent from $6.5 billion (£3.2 billion) in 2006.

By year-end 2007, SaaS represented more than $1 billion in CRM software revenue. "The sustained performance of major on-demand solutions providers is driving the growth in the SaaS segment," says Sharon Mertz, Gartner research director.  Mertz also explained this method of delivery was likely to become the dominant one in this market by 2011, as companies updated existing business automation systems to ensure they could meet expected targets for business and revenue growth.

"Marketing on demand deployment models are helping communications providers use their CRM systems as a strategic tool to gain competitive advantage, maximise loyalty, reduce churn and increase ARPU throughout the customer lifecycle," says Mikko Hietanen, CEO of Agillic.  "Customer Lifecycle Management solutions take the CRM SaaS model one step further, by providing communications service providers with an application that enables real-time, multi-channel, cross channel dialogues driven by the customer.  What this helps to achieve is the delivery of a true and consistent understanding of how an individual interacts with their service provider."

Software systems like those offered by Agillic also help communications service providers and their marketing agencies to work even more efficiently, as they are programmed to automatically execute the business rules required to reach customers across all touch points, from printed direct mail to digital media and customer services.

"Agillic is one of the first companies I have heard of that focuses exclusively on customer lifecycle management," says Joel Cooper, Senior Analyst, Europe at Pyramid Research. "In highly saturated telecoms markets, what Agillic offers is the kind of thing that operators would be advised to adopt. As customer growth dries up, customer lifecycle management is a good starting point in terms of better understanding exactly what the customer wants."
Agillic has many years of experience of deploying marketing on demand business models for its customers.   The general principle is that marketing on demand must keep the co-ordination and collaboration of any project focused on the business value rather than the technology. Achieving this requires the assurance that promotional campaigns are set up correctly with the right messages - which is usually the biggest and most costly challenge for most mobile operators today.

Communication service providers can choose to implement a choice of best practice marketing concepts that are already successfully deployed and generating positive results.  Consequently, marketing does not have to reinvent the wheel but can instead benefit from a quick route to market by simply adapting proven programmes with their own branding and business rules.   As programmes develop, the marketing department can easily monitor their positive and negative effects and make immediate alterations in real time, so that no time is lost to code changes. They also benefit from having access to advisers who can guide marketing on the programmes that best match their most pressing business objectives.

"Most operators already possess existing data about customer preferences and usage patterns, which can be uploaded to CLM solutions to execute meaningful, customised and timely marketing programmes across all customer touch points," says Mikko Hietanen.  "More importantly, these communications are only received through channels that the customer has agreed to, which keeps them happy and strengthens relationships even more. This is something that CRM could never provide, as the human dimension does not exist."

Georgia Hannias

Operators are increasingly turning to picocells to improve network quality and control infrastructure costs says Chris Cox

Operators have been struggling with coverage problems since mobile communications were invented.  But massive recent investment in 3G licenses and infrastructure has changed the capex and opex cost equation, affecting the performance of the companies that sell equipment to operators. As this article was being written (in Autumn 2007), for example, Ericsson, Nokia Siemens Networks and Motorola made a series of anaemic financial announcements and all three blamed a slowdown in operator infrastructure spending.
While much of the heat is generated by the spiralling cost of implementing 3G, operators are also increasingly cautious about their 2G commitments.

To cope with sustained customer demand for GSM services, operators are looking for ways to keep further investment in additional expensive GSM infrastructure under control. Every operator is actively looking for smarter, more cost-effective methods to upgrade existing networks and yet still deliver additional coverage and capacity to their customers, exactly where it's needed.

Introducing picocells
Many are increasingly turning to picocells.  Picocells are very small base stations that deliver a mobile signal in areas of high mobile traffic (such as in offices or busy public locations) or poor network coverage (such as indoors) and use an existing broadband connection for backhaul.  They are proving to be very much more cost-effective than the alternative of upgrading the macro network.

Operators originally deployed picocells as a "band aid" for coverage black spots. However, as the technology has matured, operators are increasingly seeing picocells as an important mechanism for improving their macro network capacity and performance. And, perhaps most interestingly, they're beginning to be used in one of the most competitive battlegrounds of all: picocells enable operators to use the promise of great service quality to attract (and win) lucrative business customers from incumbent rivals. 

Every network has black spots where coverage is marginal or non-existent. In areas with marginal coverage, service quality inside buildings can drop off sharply, resulting in dropped calls, ‘network busy' signals, slow data rates and poor voice quality.
The first major application of picocells was to address this issue of signal quality in buildings.
For enterprises, coverage in the workplace is a major driver of dissatisfaction with operators - this is a consistent finding in all global regions. And where coverage is an issue, it's the dominant issue and is a key reason for churn (along with price and handset choice). Critically, coverage is often the primary differentiator between operators - and the decisive factor in awarding contracts.

The traditional solution to in-building coverage problems has been the repeater. But today, planners aren't so quick to turn to repeaters to fill black spots or penetrate buildings.
Repeaters are to network planning what the rock was to the caveman: simple and ubiquitous, but not the most sophisticated solution around. Until picocells arrived, however, network planners generally accepted them, because the alternatives always looked more expensive and more difficult to deploy.

While repeaters extend coverage, they drain capacity from the cell in which they operate. Picocells, by contrast, add both coverage and capacity, as well as the ability to enhance data rates. In addition, repeaters can distort the macro cell, causing interference and handover issues and creating severe radio planning problems. Picocells integrate seamlessly into the macro network.

Repeaters can be difficult and time consuming to install and they're also problematic to manage (they don't offer automated fault reporting, for instance). Picocells can be installed in a few hours and offer integrated fault and performance monitoring.
At the beginning, some operators with difficult coverage challenges shied away from picocells because they felt uncomfortable with the need to use IP for backhaul - an unfamiliar protocol for network planners used to more established ways of doing things. As IP has become ubiquitous, however, that resistance has melted away.

Providing good service means always having sufficient capacity available. But avoiding ‘network busy' errors in commercial centres and large cities is becoming more difficult as usage levels increase, driven by competitive voice tariffs and attractive new data services.
Subscribers are spending more time on the network doing new, more bandwidth-intensive things. That may affect the quality of service operators provide to premium customers, like BlackBerry and other PDA users, who expect to be able to access services whenever they want. Providing the right level of capacity is tough in densely populated areas and it's limited by the spectrum available to an operator.

Simply adding new macro cells - even if they're micro base stations - is expensive and time consuming. And public opposition to the introduction of more and more radio masts is increasing around the world as well, even if good sites can be found.
An operator's lack of capacity is not only a churn driver but also a brake on new service uptake.

Picocells offer the possibility of highly targeted deployment, rapid rollout and limited impact on the macro network in terms of interference and the requirement for network planning. Each base station is inexpensive and a single base station controller can handle a hundred base stations.

ip.access recently undertook some business case research (see The Case for Picocells: Operator Business Cases at www.ipaccess.com). In one scenario, where the goal was to offload 60 per cent of the indoor mobile usage of 7,000 users over a two square mile area, using picocells was 53 per cent cheaper than upgrading the macro network (each of those users was estimated to use 800 voice minutes and 5MB of data per month).
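The traffic side of that scenario is easy to reproduce. The short calculation below simply quantifies the load the picocell layer would have to absorb under the stated assumptions; the 53 per cent saving is ip.access's own result and is not derived here:

    users = 7000
    offload_share = 0.60              # 60 per cent of indoor usage offloaded
    voice_minutes_per_user = 800      # per month
    data_mb_per_user = 5              # per month

    offloaded_voice = users * voice_minutes_per_user * offload_share
    offloaded_data = users * data_mb_per_user * offload_share

    print(f"Offloaded voice: {offloaded_voice:,.0f} minutes per month")  # 3,360,000
    print(f"Offloaded data: {offloaded_data:,.0f} MB per month")         # 21,000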
Enterprise market capture
As picocells become more and more ubiquitous as the solution to difficult coverage and capacity challenges, more far-sighted operators are beginning to see how to use the technology to attract and retain lucrative business customers.
In Sweden, for example, Spring Mobil is the fourth GSM operator, having won a licence in 2003. It aims to replace fixed telephony in the office using nanoGSM and has recruited over 500 enterprise customers, replacing fixed lines.
Spring can provide fast, low-cost coverage and capacity where enterprise customers need it most. It can sell more minutes while supporting its best customers with the most modern services. The solution reduces churn and drives traffic from fixed lines to mobile networks.
Picocells are an emerging GSM infrastructure play for many operators. They are being used to help operators:

  • Differentiate from commodity networks
  • Increase revenues from voice and data
  • Offer competitive in-office tariffs
  • Decrease churn

Operators have spotted that, with picocells, they can sell new services while improving macro cell performance, without the need to over-spend on infrastructure, because picocells change the approach to one of ‘Pinpoint Provision' - adding coverage exactly where it is needed.
Picocells are a proven, end-to-end solution carrying billions of minutes of traffic every year, in dozens of operators around the world.
The future looks bright for picocells.

Chris Cox is Marketing Manager, ip.access

Roaming fraud is now of such concern to operators that the GSM Association has developed a new set of fraud detection standards for its operator members. Eugene Bergen Henegouwen details the development, and urges operators to implement compliant solutions today

Some roaming fraud is a fact of life for many mobile operators and, at its worst, it has the potential to cause great harm to the bottom line. This type of illegal behaviour seems to affect operators most acutely where the borders of rich countries meet those of poorer ones. In one form of fraud, SIM cards are cloned by well-organised and technically savvy criminals, who then sell them to consumers who most likely have no knowledge that the cards they're using are illegal. Eventually, the cards are deactivated by the operator, but by then the fraud has been in place for some time and money has been lost.

The criminals who participate in this type of fraud are successful because fraud detection and resolution is hindered by the length of time it can take for a visited operator to notify the home operator of the roaming activity that has been taking place. Currently, data records that track subscribers' activities as they roam are exchanged with the home operator within about 36 hours. This leaves a big window of opportunity for criminals who often target their activity during weekends and holiday times when operators are most vulnerable to fraud.
Roaming fraud - when it does hit - hits hard. It is the equivalent of leaving for a weekend trip and returning to find the whole house flooded even though you only had a dripping tap when you left. Any loopholes are potentially exploited by criminals who will strike decisively and "flash flood" what they can in a short timeframe because they know the extent of the capabilities - and limitations - of the current reporting standard. Clearly, now is the time to form a more watertight seal to combat this fraud.

Roaming fraud has become such a concern for operators that the GSM Association (GSMA) has developed and mandated a new set of fraud detection standards for its operator members. The Near Real Time Roaming Data Exchange (NRTRDE) initiative has been developed in specific response to International Revenue Share Fraud (IRSF), which thrives on the clone-and-sell technique.

The new NRTRDE standard will dramatically reduce the record exchange time and make fraud easier to spot and stamp out. It's not that operators don't have systems in place to fight the risk of roaming fraud themselves, but the current High Usage Report (HUR) system is outmoded. NRTRDE has been developed in its place as the next generation of roaming fraud protection, in an attempt to cut fraudsters out of the picture and restore the level of security that operators need to operate successfully and - most importantly - profitably.
According to a GSMA survey of 37 operators some months ago, roaming fraud losses affect networks of all sizes and in all regions. In one instance, a single operator is reported to have suffered losses of €11.1 million in just under two years, further evidence that IRSF has grown into a costly concern. This may not sound like a huge figure, given that operator turnover can often run to the multimillions or even the billions. But when you take into account that it is all a loss from the profit line and not the turnover line - and can never be invoiced - the percentage grows to an uncomfortable level.

In the view of many analysts, this form of fraud has grown to the point where it is not merely a minor irritant. Martina Kurth, principal research analyst for Gartner Group, recently acknowledged that "roaming fraud is a very real and present danger for operators the world over, and it is impacting their bottom lines. Any initiative that enables operators to minimize roaming fraud is, therefore, a strategic business issue on which operators must act if there is not to be further erosion into their profitability."

The GSMA NRTRDE initiative aims to replace HUR to keep operators ahead of the growing sophistication employed by the fraudsters. With NRTRDE, the visited network is required to forward Call Data Records (CDRs) to the subscriber's home operator within four hours of the call end time. If the visited operator is unable to get this information to the home operator in time, the visited operator is held liable for any fraud associated with those calls, and so there is a greater degree of motivation for all parties to make the measure successful.
This approach is particularly important in garnering the support of the whole industry because, without a shift in responsibility to the visited operator, the motivation to adopt the new standard would be limited. NRTRDE makes operators more accountable for the behaviour of visiting subscribers, and where anti-fraud protection was once hard to enforce, operators are now much more motivated to work together.
Once the home operator has received the CDR, it can detect fraud via a fraud management system. The record format for exchanging these near real-time records is defined in the GSMA's Transferred Account Data Interchange Group (TADIG) standards document TD.35. Syniverse has played the lead role with the GSMA in the development of the standards which support NRTRDE, serving both as chair of TADIG and as the official author and editor of TD.35, the fundamental building block of anti-roaming-fraud resolution.
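The liability rule at the heart of NRTRDE can be stated in a few lines. The sketch below is illustrative only - real TD.35 records carry far more detail - but it captures the four-hour test described above:

    from datetime import datetime, timedelta

    NRTRDE_DEADLINE = timedelta(hours=4)

    def liable_party(call_end, delivered_at):
        """Return which operator bears the fraud risk for this record."""
        if delivered_at - call_end <= NRTRDE_DEADLINE:
            return "home operator"      # record arrived in time; home FMS can act
        return "visited operator"       # late delivery shifts the liability

    call_end = datetime(2008, 2, 11, 10, 0)
    print(liable_party(call_end, call_end + timedelta(hours=5)))  # visited operator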

The adoption of NRTRDE is expected to reduce the incidence of roaming fraud by up to 90 per cent because of the closing of the roaming fraud window currently open on operator networks that use HUR. Just as importantly, NRTRDE offers operators a far more accurate and timely view of how their networks are performing against fraud.
Rather than waiting until October 2008, the date at which GSMA members are required to implement the new standard, both the GSMA and Syniverse are urging operators to implement compliant solutions today. We believe that as the adoption process gains momentum, operators who continue to rely on HUR will eventually become disadvantaged and may be prone to more attacks.

Syniverse has been taking part in GSMA trials to ensure that our solution, as well as the solutions of other providers, has the interoperability needed for a global industry. All of the trial work undertaken thus far ensures operators can begin rolling out solutions ahead of the deadline and, in many cases, form a watertight boundary that excludes fraudsters and enables the operator to realise the cost benefits sooner rather than later.

The GSMA's October 2008 NRTRDE implementation target date is drawing closer by the day, and adoption is gaining momentum in the industry. Moreover, with the cost of roaming dropping in Europe for subscribers, it's reasonable to expect that the amount of roaming traffic will increase, making fraud patterns harder to spot and heightening the risk for those relying on HUR. In partnership with the GSMA, we believe the time to start forming a more watertight seal against sophisticated fraud is clearly now, as the industry moves to protect not only the operators' revenues but also the future of roaming for subscribers worldwide.

Eugene Bergen Henegouwen is Executive Vice President, EMEA, for Syniverse Technologies

WiMAX is finally making its way into the mainstream telecoms market as WiMAX operators around the world begin to roll out their services. Previous obstacles to launching their ventures, such as obtaining the necessary spectrum licences, deploying mobile WiMAX infrastructure or selecting the right vendor, have either been overcome or are about to be resolved. Finally, WiMAX is no longer a hyped-up term being tossed around the telecoms industry: it is now a reality, and operators are testing their WiMAX capabilities in real-life environments, with paying subscribers, on loaded networks. David Knox assesses the different types of entrant in the WiMAX market, and reveals the many charging challenges the new technology faces and the ways in which operators can overcome them

According to a recent report from the Gartner Group, revenue from sales of WiMAX equipment will grow to more than $6.2 billion by 2011, and global connections will reach 85 million in the same year. The current scramble to roll out WiMAX around the world provides plenty of evidence to back up this forecast, as operators get ready to launch the next big thing in global communications.  Best of all, WiMAX is being embraced by both developed and developing nations around the world - creating endless business opportunities for vastly different economies.

Paving the way for WiMAX's entry into the mainstream is a new breed of dedicated WiMAX service providers that are successfully targeting high density, high usage metropolitan areas in various parts of the world, as well as rural communities with no access to fixed line broadband or 3G.

In the developing world, telecoms provider Wateen Pakistan has successfully deployed its WiMAX network in 17 major cities across Pakistan, including Islamabad, Karachi and Lahore. The success of the project demonstrates how the cost effectiveness and speed of deployment offered by WiMAX is allowing competitive carriers to quickly build a wireless broadband network. It also shows how a developing economy can immediately embrace a new, innovative next-generation technology, and smoothly deploy a cutting edge communications infrastructure.

Closer to home, UK companies like The Cloud, Europe's largest Wi-Fi operator, are making positive inroads into the WiMAX space. The Cloud has become the first to offer the service in the financial district known as the City of London. Heralded as Europe's most advanced WiMAX rollout, the venture has given 350,000 workers and thousands of visitors the chance to get broadband access anywhere within "the square mile".

The success of this project has led to more successes for The Cloud, including its hotspot access deal with McDonald's, which recently rolled out free high speed wireless Internet access across almost 1,200 restaurants in the UK, making it the UK's biggest provider of free wireless Internet access. The Cloud also signed a major new deal with the BBC, which became the first UK broadcaster to have all its online content made available for free via Wi-Fi. This latest venture enables the public to access all bbc.co.uk content for free through the UK's largest network of hotspots, operated by The Cloud.  The 7,500 hotspots are located at various locations across the UK, including McDonald's, Coffee Republic, BAA airports (Heathrow, Gatwick and Stansted) as well as a number of outdoor locations including Canary Wharf and the City of London.  

Dedicated WiMAX providers are not the only entrants in this burgeoning market. Established fixed line telecommunications providers with no mobile arm - such as the UK's BT - are also trying to get into the WiMAX space, to complement their fixed line broadband offerings and to compete with the likes of 3G for high-speed data access. Established mobile service providers are also trying to muscle their way into the market, especially in regions where there is strong demand for high speed data but where 3G is not a feasible option, due to higher infrastructure deployment costs or geographical difficulties. A good example is the Caribbean's leading GSM operator Digicel, which recently rolled out WiMAX in the Cayman Islands. The existing broadband offerings in the country rely heavily on fixed lines, making them expensive and limited in choice for many consumers and businesses. Digicel therefore used WiMAX as an opportunity to create competition by offering a better solution to consumers at a lower cost. 

Each of these types of WiMAX market entrant will have many challenges to overcome before achieving profitability - and each will also have different requirements for WiMAX charging, depending on its infrastructure and the BSS/OSS systems it already has in place.

One of the main challenges presented by WiMAX in terms of charging will be performing user authentication in real time for home and roaming subscribers. Another will be enabling subscribers to roam on other WiMAX networks while still using the same authentication mechanism and balance information from their home network. In other words, operators will need to find a way to offer customers just one account with their "home" WiMAX provider, without requiring them to worry about multiple sign-ons or topping up their balance with multiple providers.
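By way of illustration, the sketch below captures the single-account idea: the visited network holds no credentials or balance of its own, and defers to the subscriber's home records in real time. Everything here - the identity format and the in-memory 'home AAA' table - is an assumption made for the example; real deployments would exchange such requests between AAA servers over protocols such as RADIUS or Diameter:

    # Home network's subscriber records (illustrative stand-in for a home AAA)
    HOME_AAA = {"user@homewimax.example": {"secret": "s3cret", "balance_mb": 512}}

    def authenticate_roamer(identity, secret):
        # The realm after '@' tells the visited network where to send the request
        home_record = HOME_AAA.get(identity)
        if home_record is None or home_record["secret"] != secret:
            return False                       # unknown subscriber or bad credentials
        return home_record["balance_mb"] > 0   # the home balance gates the session

    print(authenticate_roamer("user@homewimax.example", "s3cret"))  # True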

Having real-time access to customer, pricing and product information that may be stored "off-board" in existing legacy systems will be another essential requirement for WiMAX providers, so that they have a real-time view of the customer and can charge for and control the service in real time, providing a positive user experience.
Being able to enforce post-paid credit limits in real time, in order to reduce the window for fraud and exposure to bad debt, will also be crucial for WiMAX charging. This is a very important issue to address, since fraud and bad debt are now considered the largest areas of revenue leakage for telecoms operators. According to a new survey published by UK research firm Analysys, average losses resulting from all types of fraud, including external fraud, internal fraud and fraud by other operators, have grown from 2.9 per cent of operators' total revenue last year to 4.5 per cent this year, and this is expected to increase with the rise in popularity of data and Internet services. WiMAX providers must therefore find a way to protect themselves from this kind of revenue leakage if they want to manage short term as well as long term profitability.

The flexibility to offer pre-paid charging will be another challenge for WiMAX. Initially, many WiMAX users will be corporate clients, so they will expect to be on post-paid deals, but they will also need the reassurance of not having to worry about exceeding spending limits. A real-time convergent charging solution can significantly enhance the flexibility of pricing plans and allow users to be switched automatically from a post-paid to a pre-paid payment mechanism if these user-definable spending limits are exceeded. Instead of being an impediment to new service rollouts, the right charging solution will be able to drive change through the fast launch of new pricing strategies - regardless of what type of WiMAX contract a user is on.
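A minimal sketch of that convergent behaviour follows, assuming a very simple account model: charges are applied in real time against a post-paid spending limit, and crossing the limit flips the account onto pre-paid rules rather than refusing service outright:

    class Account:
        def __init__(self, spend_limit, prepaid_balance=0.0):
            self.mode = "post-paid"
            self.spend = 0.0
            self.spend_limit = spend_limit        # user-definable spending limit
            self.prepaid_balance = prepaid_balance

        def charge(self, amount):
            """Apply a real-time charge; return False if it must be refused."""
            if self.mode == "post-paid":
                if self.spend + amount <= self.spend_limit:
                    self.spend += amount
                    return True
                self.mode = "pre-paid"            # limit hit: switch payment mechanism
            if self.prepaid_balance >= amount:
                self.prepaid_balance -= amount
                return True
            return False                          # no credit left in either mode

    acct = Account(spend_limit=50.0, prepaid_balance=10.0)
    print(acct.charge(45.0))   # True - within the post-paid limit
    print(acct.charge(10.0))   # True - limit breached, served from pre-paid balance
    print(acct.mode)           # 'pre-paid'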

Fortunately for WiMAX service providers, the market already offers the technology to ease these rating and charging challenges.  Among the rating and charging specialists in this area is VoluBill, which offers a comprehensive range of WiMAX solution capabilities - from straightforward WiMAX charging to a complete WiMAX solution including integrated customer care, web self-care, billing and voucher management.
We also offer flexible deployment options, making it possible to start with a limited scope and functional footprint and to expand the solution as business requirements demand.

All eyes are on WiMAX as it brings truly portable high-speed data access to the mainstream public. According to a recent report from Informa, mobile broadband services will generate revenues of more than US$400 billion globally by 2012, giving WiMAX the potential to be one of the most profitable innovations in telecoms history.

The next big step WiMAX providers must take is to invest in solutions that offer flexible charging and control capabilities. This will ensure that operators can maximise their financial and business potential while bringing the customer a service that is not only enjoyable but affordable as well.

David Knox is Product Marketing Director at VoluBill

Opting for a managed services solution provides telcos with enhanced agility in a highly competitive marketplace claims Dominic Smith

Being able to adapt to market pressures and respond quickly is key to success in today's highly competitive telecoms landscape. And yet many telcos are weighed down by the sheer size and complexity of their technological and service infrastructures.
In the mobile domain, operators' portfolios typically include 2G GSM, SMS, MMS, GPRS, 3G and HSPA basic services, not to mention the range of value-added services, content and applications accessible on top. In addition, in all market sectors, operators have to maintain and bill for a broad array of legacy services. And they need to be able to tailor their offerings to meet the specific needs of a wide variety of market segments - from large enterprises to individual consumers.

Today, some operators are making the mistake of trying to be "all things to all people". They are looking to provide customers with a complete portfolio of converged services including broadband, mobile and fixed communications solutions. The problem is that, by doing so, it becomes increasingly difficult for these operators to meet the needs of all of their customers. 

To compete effectively, they need to be agile, able to focus on customer requirements and efficiently deliver the solutions that their customers will actually benefit from. However, with the often onerous requirement to manage and maintain an intricate network of products, services and applications, agility can seem a highly elusive quality.

In this context, it is hardly surprising that telecoms operators are increasingly interested in exploring the possibility of outsourcing their CRM and billing systems to third party solutions providers and, by so doing, freeing themselves up to focus on their core business. 

Steady market growth
Cambridge-based research firm Analysys expects the Western European market for the outsourcing of technology and customer services by telecoms operators to show six per cent annual growth between 2005 and 2010, rising from €5.9 billion to €8.0 billion. Our own experience at Cerillion indicates that operators' appetite to outsource business support systems for customer management, order management and billing is on the increase.

Cerillion's show-floor survey, carried out at Barcelona's 3GSM World Congress in February, found that 50 per cent of respondents thought operators were more open-minded about outsourced billing than a year before. Just 15 per cent said they were less so.
To underline this positive mood, major new contracts are regularly reported in the media. In recent times, one of the most notable was the March 2007 announcement by IBM Global Services that it had won a 10-year deal with Indian operator, Idea Cellular. Under the terms of the contract, IBM is helping to handle services like billing, revenue assurance, credit collection and subscriber management.

A diverse market
One of the most important advantages of the managed service approach for CRM and billing systems is that it can benefit a wide range of operators, working on a broad array of projects. An operator undergoing a large-scale business transformation project, for example, may benefit from a managed service approach to ensure it remains competitive and retains sufficient agility to be able to launch new products and services for the project duration. 
A telco looking to establish itself in an emerging market may seek to put a managed service into operation while it is focused on bringing new people on board and training them up, before ultimately transitioning to an in-house managed solution. 

Alternatively, an operator may take a long-term strategic decision not to manage its own CRM and billing systems but to hand that role over to a provider with expertise in the field, leaving the operator itself free to focus on delivering a high quality customer experience. Again, the ultimate goal is enhanced business agility.      

Putting the customer first
This focus on the customer is important. After all, it is customers who will ultimately have to pay to allow operators the luxury of owning and managing their own business support systems. It is often overlooked, but perhaps the single most important benefit operators can achieve from outsourcing their systems is the cost saving that can be passed on to customers.

When purchasing systems, operators typically incur significant upfront capital expenses before they begin to reap benefits. With a managed service model, the entry barrier is much lower. While the operator still has operational costs to take into account, those costs will usually be lower and more predictable than with a traditional licensed implementation.
There is also a risk that telcos who manage their billing and CRM systems in-house end up concentrating more on the technology than on their customers. Although the situation has undoubtedly improved over recent years, the telecoms industry still has an unfortunate tendency to focus more on system functionality than on real business drivers. Too many misguided decisions have been made by IT directors intent on purchasing the latest state-of-the-art systems rather than investing in a planned strategy of business improvement and enhanced customer service.

In a competitive market, operators looking to achieve enhanced agility should always put the customer first. While acknowledging that technology is important, Professor Robert East, expert in customer behaviour at Kingston Business School, comments: "Customer-facing technology needs to become more sophisticated to deal with recurrent issues more quickly and solve problems more efficiently. Businesses need to focus on technology that actually delivers satisfaction to people."

Reaping the rewards
But it is not only customers who have to foot the bill for operators indulging in the luxury of managing their own systems. On top of the obvious capex and opex charges, operators may be missing out on a range of other business improvements that the managed service model can offer.

One key benefit they could achieve by migrating to a managed service model is the ability to commit their technology provider to a service level agreement (SLA), with agreed turnaround times for implementing new products and resolving incidents, for example.    
Such contracts formalise the way that billing and CRM systems are run and, by so doing, enable operators to gauge how quickly system changes and additions can be implemented. Again, this provides them with enhanced control over their systems environment and the ability to react more quickly to external pressures.  In contrast, the IT department within a large telco will typically have no specific SLAs in place with any other part of the organisation, and often no fixed review process either.

Another important advantage of the managed service approach is that it supports improved time to market for new services. This is because the managed service provider, typically with the benefit of extensive experience of a broad range of different customer installations, will usually understand the procedures and processes around those systems much more clearly than the operator does.

Operators working in emerging markets can also benefit by gaining direct access to scarce skilled resources, rather than facing the headache of trying to recruit people locally with the requisite skills. Telco start-ups in all regions can often benefit in a similar way.

Positive prospects
The future for outsourcing of CRM and billing systems is looking increasingly positive. As Simon Sherrington, author of a recent Analysys report on outsourcing, points out: "Outsourcing has become an important weapon in a telecoms operator's strategic arsenal. An effective and well-managed outsourcing scheme can deliver flexibility, reduce time to market for new services, and help to deliver profit growth for shareholders."
There is also clear evidence that outsourcing can help operators to achieve significant cost savings. However, in today's highly competitive telecoms environment, the most important benefit of a managed service approach is the enhanced agility it brings, freeing telcos to focus on their core business of delivering a high-quality service to their customers.

Dominic Smith, Marketing Director, Cerillion Technologies

