Hugh Roberts sets the scene for BIMS 2007

Whether you prefer to call them ‘silos’ or ‘stovepipes’, the history of telecommunications has been dominated by the development of vertical towers. Within the traditional telecoms operations environment, every new service created a new conduit from network to subscriber, creating new systems and processes from customer order to trouble ticket via provisioning, billing and termination along the way.

The first wave of activity in the 1990s (BPR initiatives notwithstanding) focused on systems consolidation. Incumbents (and ex-incumbents) with 200+ billing systems realised that to compete they needed to cut costs and IT headcount, and – more to the point – could potentially achieve a more consistent 'one-touch' view of their customers at the same time.
Since the Millennium, the second wave of functional consolidation has had to face up to the challenges of systems convergence head on. In addition to the natural desire to simplify operations by integrating disparate business units and functions, the demand for innovative content and eCommerce services to create new revenue streams has complicated both business and operational environments and relationships.
We are now entering the third wave of industry consolidation, where the convergence of market sectors no longer respects traditional boundaries. In the light of reduced margins for traditional services, fixed wire, mobile, cable, satellite and third party brands have all sought to extend their virtual or physical service footprint to achieve ubiquitous market presence and maximise the leverage of their customer ownership. At the same time, new technology platforms have evolved, creating competition from unexpected quarters.
M&A activity is creating – where regulators will allow it – larger operators and vendors that own or provide a greater proportion of the communications infrastructure. Where direct ownership is impossible, virtual service extensions – through MVNO strategies or otherwise – are now presenting customers with full multi-play offerings. However, this apparently inexorable evolution has masked some critical aspects of the underlying market reality:
1. Corporate competition between technology platforms at the network layer, each with its own mechanisms for customer access, is not sustainable. Increasingly, customers don't want it, and neither do regulators.
2. The 'bitpipe' business model based on multi-channel infrastructure ownership will be very profitable, but as a high-volume, low-margin business will only be sustainable if it can remove the need to financially support a consumer-facing brand.
3. Multi-play offerings, by their very nature, change customers' perceptions and expectations of the interaction between the branded elements of their overall communications (and entertainment) environment. This in turn will lead to changes in the fundamental relationships between service providers, third party content and financial brands, and the suppliers of their hardware and software.
All of these factors, plus the attentions of EC Commissioner Viviane Reding, make structural separation inevitable. It is simply not realistic for network infrastructure owners and communications service providers to continue as integrated entities. The question therefore arises… to which tier in the new horizontally defined 'divergent' environment will the various elements of operational and business functionality need to migrate?
Silos clearly won't work, but then neither will the anticipated tight coupling of the OSS and BSS layers, particularly when considering the integration of new development areas such as customer 'attention data'. This is the debate we need to be engaged in – how do we correlate, filter and share all of the data we can now generate? The repositioned 'telcos' (and their business models) that turn out to be successful will be those that actively contribute to a value chain that can collectively achieve effective information management.

Hugh Roberts will be co-presenting the 'Keys To Establishing A Successful MVNO/E Business' Seminar with colleagues from Logan Orviss International on June 5th, as well as running the free 'What's New In Billing?' Tutorial.

You can visit the Billing & Information Management Systems Exhibition (7th – 9th June 2007) free of charge if you register online at www.iir-billingsystems.com/exh.

Professor William Webb argues that it is possible to predict the next 20 years of wireless with reasonable certainty.  Given that, writing in 2000, his predictions on the development of wireless communications in 2005 proved to be pretty accurate, he may well have a point.

Many take the view that the pace of technological change is becoming ever faster. This is seen as particularly so in the worlds of computers and communications. If this were so, it would perhaps be rather foolish to predict what the world of wireless might look like 10 and 20 years hence.

For example, in 2000 I made my first prediction of the future, published in a book called The Future of Wireless Communications. Based on a mix of deduction and contributions, it predicted what the world would be like in 2005, 2010, 2015 and 2020. When we reached 2005, I analysed how accurate my predictions had been. They were almost exactly right. The only areas of difference were in the particular standards that would 'win' – for example in 2000 it was unclear whether Bluetooth, WiFi or cellular picocells would dominate in the home, although it was clear that wireless home networks would increasingly emerge. Interestingly, in overall terms I predicted little change of substance between 2000 and 2005, and that is exactly what transpired.
In 2006 I repeated the exercise, taking a long hard look at the current industry position and making some fresh predictions for 2011, 2016, 2021 and 2026. These predictions are detailed in Wireless Communications: The Future, published in early 2007.
Some of my conclusions were contrary to current thinking and in particular to the view that technology is changing ever more quickly. For example, I came to the view that there would not be a completely new '4th generation' of cellular as 3G reaches the limits of what is possible in a radio channel; that fixed wireless access would not succeed – even with the advent of WiMax technology; and that W-LAN in the home will provide the basis for convergence between home and cellular systems, and not cellular 'femtocells'. I considered industry structure and drew the conclusion that the current vertically integrated approach, where operators own networks and provide customer-facing services, was not sustainable in the long term, but would persist for many years and, in doing so, would slow convergence.
I took a look at adoption rates; at average monthly spend on telecommunications and its change over time; and at the process whereby new concepts become assimilated into society. This led to the conclusion that services take between five and ten years to be adopted, even if the service is 'perfect' and that spending on communications can only grow slowly.
Finally I studied the theory of wireless communications and the many physical and empirical laws that have been used to predict future developments, noting that some, like Moore's Law, are likely to continue to hold true for decades, whereas others, such as Gilder's Law, are already inaccurate and cannot be used as a basis for prediction. I examined a whole raft of emerging technologies such as cognitive radio, smart or MIMO antennas, fibre radio and much more. I concluded that none of these advances was likely to make a significant difference; instead, steady growth in wireless capacity will be driven by ever-smaller cells.

Major growth areas will include ever-enhanced handsets, home wireless networks, intelligent software and service provisioning. Areas where change will be beneficial but very slow in coming will be battery capacity and the ability to cheaply backhaul high-capacity small base stations.
Compared to the predictions I made in 2000 there is much alignment. Where there is a difference, it is almost always a delay in 2006 predictions compared to the 2000 predictions, showing how slowly many things develop in the world of wireless and how easy it is to over-predict change.
In summary, I believe that it is possible to predict the next 20 years of wireless with reasonable certainty. For the user, the next 20 years will see very substantial but steady change. Users will come to rely on their handset as a single device to manage their communications and, indeed, much of their life. It will truly become a “remote control on life”, with massively enhanced capabilities including huge storage, advanced methods of user interaction such as speech recognition and many in-built tools such as cameras, music players, etc. Users will cease to differentiate between different communications channels and instead see the world as one large communications network, able to provide them with whatever content they need wherever they are. Users will also no longer see broadcasting and communications as separate and indeed, the concept of broadcasting will change dramatically to one of content provision that is sought out by users – more like the publishing model of today. Users will perceive their lives becoming more convenient with wireless systems automating a range of tasks, providing new capabilities and altering travel according to conditions.
Achieving all of this will require little in the way of change for wireless technology – and indeed no further significant advances in wireless technology are expected. However, there will be substantial progress in the intelligent systems that use context to configure devices appropriately, control interaction with the handset, and control home and office networks in a simple, yet intelligent manner.
Overall, the future is marked by an initial period of stability as 3G and broadband networks are built out followed by the emergence of new services. Beyond this I expect the underlying wireless communications infrastructure to become a slow-changing utility similar, for example, to railways or, increasingly, the core Internet infrastructure, but with substantial excitement and growth around the services provided on top of this wireless platform.

Professor William Webb is Head of Research and Development at Ofcom.  He is a Fellow of the Royal Academy of Engineering and a member of the Board of Trustees of the IET.

Declining ARPU, decreasing EBITDA and increasing churn rates on the one hand, and ever-increasing subscriber acquisition costs on the other, are forcing telecommunications industry players in saturated markets to find new ways of increasing minutes of use and airtime.  Martin Löffler looks at survival strategies

Mobile network operators are increasingly challenged by market developments. Facing the threats to which they need to respond, new attitudes, and openness to new business models will be critical to survival.

MVNOS - A new lease of life?

Communities such as lifestyle, special interest groups, fan groups, health, sports, music, etc. could be ideal targets. Singly or in combination they could support a viable MVNO business. However, a mindset shift is required to separate the business models necessary for communities of interest - typically service provider-led and driven by 'pushed' services - from those required for affinity groups, where the groups themselves determine key issues such as membership, security and profiling.  In both cases the opportunities to drive up data revenues for the MVNO (and therefore also the underlying MNO) are strong – in Japan there are already examples of mobile data ARPUs above €25.
Organisations, including Logan Orviss, are closely following the market conditions for MVNOs. eMarketer, for instance, recently said that 'MVNOs will become increasingly important for both wireless operators and brands', and indeed, the MVNO business is expected to account for almost $30 billion in overall revenue by 2010 (ARC Group).
Logan Orviss International's assessment examines how MVNO business models are evolving, including the expanded use of revenue sharing models with the operators, versus today's cost-plus pricing approach.
As mobile markets in all European countries are reaching, or have already reached, saturation, the further appearance of potentially successful MVNOs has added to the competitive pressures on the existing mobile operators. Moreover, competition is arising from new and unexpected sources. In this fiercely competitive landscape operators have coped by differentiating through their business models, with some adopting a walled garden approach, and others utilising a dynamic, open garden approach that requires constantly monitoring demand amongst their subscriber base and constantly offering new services.
Despite the fierce competition, MVNOs have proven their ability to enter these markets, and still have the capability to increase overall market revenues from basic services like voice, SMS and PSMS. With increasing market share, MVNOs can reduce their unit costs and evolve into a high-ARPU business. However, not all MVNO business models will survive.
MVNO models have usually been seen as relevant only to developed markets where penetration rates are high and customer retention through loyalty (coupled with overall ARPU growth) is the critical success factor.  MVNOs allow the MNO to (a) access revenues from market niches that could potentially conflict with the core brand positioning and/or tariffing regime, (b) avoid the cost and risk of going after these niches itself, and (c) potentially increase its market share – even over and above that allowed by the regulator – by extracting revenues from an MVNO whose ownership is maintained at 'arm's length' from the MNO. Although the returns are lower, the MNO's revenues from MVNOs are relatively risk free, and under-utilised network capacity is exploited with minimum outlay.
At first glance, this wouldn't appear to be applicable in high growth markets where the current penetration rates are low and the MVNO model is seen as being 'more trouble than it is worth'.
However, in high growth markets, shareholders and boards are continually pushing for an increase in the number and proportion of postpaid subscribers (ie converting prepaid into postpaid subscribers) as a way of increasing ARPUs. (Example fact: No network in Nigeria claims to have more than 50,000 postpaid subscribers.) From the shareholder perspective, a secondary but equally critical factor is that with postpaid (contract based) revenues the potential to project revenue streams forward (under increasingly onerous audit rules) is much greater. This can be of high importance for the CEO/CFO, allowing them to present a stable, auditable and profitable business growth/development plan.
In predominantly prepaid markets there is little point in trying to create a wholesale migration to postpaid - it won't work and is not, at this point, remotely competitor proof. But why shouldn't operators build a wholesale platform that allows them to easily connect MVNOs to grow their overall number of subscribers? In many developing countries the banking system is still weak and people don't trust direct debit, which is an important factor in postpaid environments. An alternative strategy is to redevelop the brands and sub-brands in a way that separates them from the payment mechanism, but not from potential ARPU stratification.
Brand management expertise deployed in many developing markets is becoming increasingly sophisticated. Currently, global best practice in telecoms markets is, quite possibly, to be found in Pakistan, not in Europe. One key factor is the cost of advertising, and hence the cost of supporting multiple sub-brands, which is much lower in developing markets. Mobile channels also allow direct interaction with the customer. Customer experience management has now moved far beyond basic SMS interaction and into the domains of balance enquiry and AoC (Advice of Charge) marketing, which, if the MIS is set up correctly, is highly cost effective and very good at generating service usage growth.
This approach – based on marketing smoke and mirrors rather than hard technology – benefits from a degree of systems convergence, but does not necessarily require it.
The CMOs and CIOs in high growth markets tasked with enabling large-scale prepaid-to-postpaid conversion are struggling. Many have 95 per cent prepaid customer bases; survey and focus group responses suggest that this isn't going to change in the near future, and that if it does it may well negatively impact their overall growth rates. Further, whilst their postpaid ARPUs are higher, the margins achieved are not necessarily any larger, as the overall cost of customer management can be much greater. Many shareholders – and even some board members – of companies active in these markets are unaware that in some cases up to 40 per cent of their postpaid subscribers have very low ARPUs. These 'vanity' customers maintain postpaid accounts purely for the social prestige value, and therefore don't generate significant service revenues. Indeed, in many cases these customers may be retained on negative margins, but can't be churned off without creating negative pressure on the 'elite' postpaid brand positioning. The development of 'hybrid' accounts – very effective in other respects – does not help much with this problem.
Brands and sub-brands, therefore, need to be aligned to lifestyles and bundled service offerings of which the payment channel selection is no longer the defining context, but becomes an option.
A growing concept in prepaid is service-led 'customer self-locking'.  Rather than locking a customer in to a contract to ensure their 'loyalty', the customer is encouraged (by a suitable marketing inducement) to subscribe to a single or bundled service subscription over a longer period of time. For example: a customer who might want a one month subscription to a text alert or paypal service is offered three free months if he pre-subscribes for six months, giving him service access for nine months. If the service is new to the customer, a one month 'try before you buy' offer could be added in to the mix, refundable against the free extra three months offer. The customer is now effectively committing himself to the relationship with the CSP – and any brand manager will tell you that this is far more powerful in customer experience management terms than a contract could ever be. Customers in high growth markets can be clever, and many often own multiple SIMs which they 'juggle' to minimise the price they have to pay (or at least their expectation of the price they have to pay!) and the specific services – for example international calling or fund remittance to family members 'back home' - that they frequently use. Leaving aside SIM-locking practices in some countries… anything such as a self-locking programme that keeps the SIM in the phone for longer is good news as it will attract both inbound termination revenues and promote serendipitous additional service usage.
The strategies outlined above potentially offer a greater degree of customer retention, but they don't substantively assist the CFO in providing auditable forward projected revenues, which are an important factor in increasing shareholder confidence. One route to achieving this could be to adopt an MVNO model based on either third party brand ownership, or as a JV through a discretely branded subsidiary. Termination clauses placed in the business agreement between network and MVNO could be designed to enhance and stabilise forward revenue projections. The entertainment industry offers many good examples of how this approach can be exploited. However, the same methodology has been far less successful in the commercial aviation environment.
Whilst the per subscriber margins from MVNO-sourced customers will obviously be lower for the MNO than from a 'direct' customer, the cannibalisation effect will be minimal in the high growth environment. However, assuring revenues on this basis may provide an effective means to appease shareholders on revenue continuity. This just might prove to be important, not least as they come to terms with the fact that mass prepaid to postpaid migration isn't going to happen and that their expectations have been based on the erroneous assumption that customer behaviour in high growth markets will eventually normalise with the behaviour patterns encountered in the UK and Germany.

Martin Löffler  is Managing Director, Logan Orviss International Deutschland

The battery-life of mobile devices is no longer a barrier to video content wherever you go. André Pagnac explains

Less than a decade ago, the advertising slogan for a leading software company was ‘where do you want to go today?’.  It meant this figuratively, of course, with the implication that by visiting any site in the world, you were travelling there yourself.

MOBILE VIDEO - Video on the run

The revolution in mobile technology now means that the literal opposite is true.  You can go wherever you wish and still be connected to whatever you wish to access via the Internet.  In terms of mobility, we have moved beyond the concept of the office in your pocket; this year is about taking your entire house wherever you go.  In addition to business applications such as e-mail and web, the latest development is of course video content on mobile devices.
Until very recently, however, there have been barriers for consumers in accepting some new technology.  One major challenge for many customers is the size of files they have to download and the capacity these consume.  This is exacerbated by having to pay for content by the kilobyte.
Another obstacle is the battery drain of applications such as video streaming.  MPEG4 ASP is a chief culprit on both counts if it is to maintain DVD-quality playback at 24 frames per second (fps).
To demonstrate this in real terms, take a modern mobile phone handset using MPEG4 ASP to play video content.  After just 90 minutes, it would run completely out of juice.  That would be before the user had made a single call, sent a text or checked their diary.

Quality v consumption
The debate continues to rage about whether or not mobile video content has a future.  This, however, is not what the debate should be.  It is already clear that it does indeed have a bright future.
For example, there is already an influx of LCD colour screens offering improved quality and resolution, lower power consumption and lower prices.  Meanwhile, network bandwidth continues to increase, with no real limit on high-speed wireless networks at home, work or in public locations.
Yet although consumers are already used to IT, their incomes are rising more slowly than IT is advancing.  However, they do have money to spend, and the trend is to create their own content as well as to download that of others.  Whether for their own personal communication, entertainment, upload to other sites or even 'citizen journalism', users of mobile devices are undoubtedly taking well to mobile video.
Consequently, they are sophisticated and discerning about user experience, available content and usage and in their choice of products.  The debate should therefore be on how manufacturers can create the best user experience. 
This means that the major business model challenge is for network operators as they strive to achieve further uptake of the potentially prosperous mobile video services.  How can they reduce power consumption and file size of video content without compromising on quality?

Codec wars
Original video source files in uncompressed form are usually very big and hard to manage, which is why compression is necessary for efficient storage and transmission of video.  The key, therefore, is to select the right codec.
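To put 'very big' in rough numbers, the following back-of-envelope sketch (the resolution and sampling figures are illustrative assumptions, not taken from this article) works out the raw bitrate of a handset-sized video stream at the 24fps frame rate discussed here:

```python
# Back-of-envelope: raw (uncompressed) video bitrate for a 2007-era handset.
# The resolution and sampling figures below are illustrative assumptions.
width, height = 320, 240      # QVGA screen, common on handsets of the time
fps = 24                      # the 'DVD-quality' frame rate cited above
bits_per_pixel = 12           # raw YUV 4:2:0 sampling

bitrate_bps = width * height * fps * bits_per_pixel
print(f"Raw bitrate: {bitrate_bps / 1e6:.1f} Mbit/s")    # ~22.1 Mbit/s

# Storage needed for a 90-minute film at that rate:
film_bytes = bitrate_bps * 90 * 60 / 8
print(f"90-minute film: {film_bytes / 1e9:.0f} GB raw")  # ~15 GB
```

Even a modest compression ratio of a few hundred to one brings that 90-minute film down to tens of megabytes, which is why codec efficiency, rather than screen or network hardware, is the deciding factor.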
Most mobile phones today use ARM-based CPUs operating at about 100MHz to 400MHz.  In such environments, most video codecs struggle to deliver smooth, DVD-quality video: video compression and decompression are highly computation-intensive processes, with vast numbers of calculations performed on a tremendous amount of data every second.
The current de facto standard, MPEG2, is now becoming antiquated, so network providers and content creators alike are seeking the next generation in compression technology.
We have already discussed the disadvantages of MPEG4, which, owing to its name, had initially been considered the natural successor.  One of the latest market entrants, Mobiclip, offers an alternative on all counts, including price, capacity and power consumption.
For example, compared with Mobiclip, MPEG4 files need to be 30 per cent bigger for the same picture and sound quality and so consume more memory on a mobile device.  Running content in MPEG4 also requires four times as much battery power as it would to run the same clip at the same quality using Mobiclip.
Of course, reducing power consumption and capacity requirements must not come at the expense of user experience.  So, where MPEG4 would allow a user to experience mobile content for just 90 minutes, Mobiclip would allow the same mobile device to play seven hours of video, at even higher quality.
This means that, where someone might not even get to the end of their film using MPEG4, they could view two movies using Mobiclip and still be able to make calls and send texts.
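The playback-time claims above can be sanity-checked with simple arithmetic, using only the figures quoted in this article (a 90-minute full charge on MPEG4 ASP, and roughly four times the battery draw of Mobiclip); any extra time beyond six hours would have to come from savings other than decoding alone:

```python
# Sanity check on the quoted playback times, using the article's own figures.
mpeg4_minutes = 90   # full charge exhausted after 90 minutes of MPEG4 ASP
power_ratio = 4      # MPEG4 ASP said to draw ~4x the power of Mobiclip

mobiclip_minutes = mpeg4_minutes * power_ratio
print(f"Implied Mobiclip playback: {mobiclip_minutes / 60:.0f} hours")  # 6 hours
```

Decode power alone therefore accounts for roughly six of the seven hours claimed, so the figures are broadly consistent.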
So we see that the technology is there.  Demand is there.  So the debate is how to make creating and using mobile content as enjoyable an experience as possible.  That was why it was invented and that is why consumers want it.
Consequently, Ovum predicts that 2007 will see the real start of the mobile-TV era.  In the next 12 months, we will see mobile phones fulfil their potential as 'video iPods' and become the 'third medium' of entertainment after TV and cinema. 
For this to happen and for the market to reach its tipping point, manufacturers must ensure they select the right technologies to make the experience for users as positive as it was always intended to be.  That is, users can go anywhere and still have access to all the information and content they want.

André Pagnac is CEO of Actimagine and can be contacted via: contact@actimagine.com

European Communications recommends the shows that count in the period from March to June 2007

DCE + Data Centres Europe    London    21-23 March    www.datacentres.com
Telecoms Risk Management    Prague    26-27 March    www.vibevents.com
Managed Services & Outsourcing    Monaco    26-29 March    www.iir-events.com
C5 World Forum    Milan    26-29 March    www.iec.org
The Next Wave of MVNOs    Madrid    27-28 March    www.jacobfleming.com
CTIA Wireless    Orlando    27-29 March    www.ctiawireless.com
NG Network Interconnection    Brussels    28-29 March    www.vibevents.com
Telecoms Fraud & Risk    London    26-29 March    www.iir-events.com
Telco 2.0    London    27-29 March    www.telco2.net
Caspian Telecoms    Istanbul    11-12 April    www.ite-exhibitions.com
Future Technologies    Oxford    13 April    www.conted.ox.ac.uk/cpd
Fixed-Mobile Convergence     Amsterdam    16-18 April    www.marcusevans.com
Mobile Roaming Strategies    Barcelona    16-19 April    www.iir-events.com
WIMA 2007    Monaco    18-20 April    www.wima.mc
Managing B2B Experience    Vienna    19-20 April    www.jacobfleming.com
Mobile TV    Berlin    23-25 April    www.iqpc.com
ISP & Broadband Forum    London    23-26 April    www.iir-events.com
Service Provisioning    Prague    23-26 April    www.iir-events.com
IMS 2.0 World Forum    Monte Carlo    24-26 April    www.informatm.com
Infosecurity Europe    London    24-26 April    www.infosec.co.uk
Billing & OSS World    Chicago    25-27 April    www.telestrategies.com
Triple-Play and IPTV Forum    Barcelona    25-27 April    www.marcusevans.com
IPTV Opportunities    Milan    26-27 April    www.jacobfleming.com
WiMAX Business Strategies    Prague    8-10 May    www.iir-events.com
HSDPA/HSUPA    Rome    14-15 May    www.jacobfleming.com
SVIAZ/Expo Comm Moscow    Moscow    14-18 May    www.expocomm.com
Revenue Assurance     London    15-16 May    www.vibevents.com
IPTV World Forum Eastern Europe    Prague    15-16 May    www.iptv-easterneurope.com
Handset World    Amsterdam    15-17 May    www.informatm.com
Mobile Device Management    Amsterdam    17-18 May    www.jacobfleming.com
TM World/Excellence Awards     Nice    20-24 May/24 May    www.tmforum.org
MVNO Business Strategies    Barcelona    21-23 May    www.marcusevans.com
ANGA Cable    Cologne    22-24 May    www.angacable.de
WiMAX World Europe    Vienna    29-31 May    www.trendsmedia.com
Kitel    Almaty    29 May-1 June    www.ite-exhibitions.com
Youth Marketing in Telecoms    Lisbon    31 May – 1 June    www.jacobfleming.com
BIMS     London    4-8 June    www.iir-events.com
Carrier Ethernet Services    Rome    11-13 June    www.marcusevans.com
VON Europe    Stockholm    11-14 June    www.pulver.com
Capacity CEE 2007    Prague    18-19 June    www.telcap.co.uk
NXTcomm    Chicago    18-21 June    www.nxtcommshow.com
CommunicAsia 2007    Singapore    19-22 June    www.communicasia.com

As commercial mobile TV and IPTV services are rolled out around the world, it is becoming clear that telecom operators hold some winning cards: in particular, the ability to deliver truly interactive and personalised TV services without the need to invest in brand new networks. Per Nordlöf and Anders Kälvemark fill in the picture

As TV enters fixed and mobile telecom networks, user behaviour is evolving from a ‘lean back’ approach to a ‘lean forward’ one in which consumers want greater control over what to watch, how to watch, and when to watch it. Consumers are getting used to personalising, controlling and interacting with content, services and brands. At the same time, there is greater availability of high-capacity fixed and mobile broadband connections, and content is increasingly available in digital format – making it easier to store and distribute over fixed and mobile IP-based networks.

MOBILE TV - A unique revenue opportunity

The generations that are growing up with sites like YouTube, Google Video and other user-generated content sites will want to do more than just watch traditional 'linear' broadcast TV. In the USA, for example, 20 per cent of TV viewers now surf the web at the same time as they watch TV.
TV of the future will be about giving users access to their favourite programmes and content, wherever they are, whenever they want and whatever device they are using.
As TV moves from traditional one-way (broadcast) distribution towards digital two-way networks, it is undergoing a fundamental change. The addition of a return channel makes the TV experience more interactive and personal; less passive and more active.

Mobile TV over cellular sets the pace
The good news for telecom operators is that mobile and IPTV services can already benefit from the built-in return channel. They enable users to enjoy interactive, personalised TV services wherever they go – whether mobile TV on the go or IPTV in the home.
Out of the 120 mobile TV services that have launched worldwide to date, more than 100 are distributed over existing cellular networks. And there are already more than 10 million regular viewers of mobile TV in developed markets – around four per cent of mobile subscribers.
A recent study conducted by Ericsson ConsumerLab among more than 700 mobile TV users in six countries (France, Italy, Japan, Korea, the UK and the USA) shows that there is strong and growing interest in the services. These people claim to watch an average of around 100 minutes of TV on their mobiles every week. Around two-fifths of those surveyed say they watch mobile TV every day, with viewing situations spread fairly evenly between commuting, during breaks, waiting for someone while out, at home and anywhere when there is a big event on (such as a major sporting event). The most popular time for mobile TV viewing is between 6pm and 10pm – today's prime time for normal TV.
More than half of consumers in the study pay a monthly subscription for mobile TV, or have it as a part of their monthly subscription, and around one-third do not pay anything for mobile TV. On average, consumers pay 14 euros per month for mobile TV, although there were large variations between markets. Interestingly, the payment option does not seem to have any effect on usage – perhaps indicating that users are more willing to pay for worthwhile content.
The study found that mobile users value different types of mobile TV content in different situations. Commuters looking to fill some time want 'light' content that allows for frequent disturbances – to enable them to get on and off public transport, negotiate crowds of people, listen to station announcements, and so on. Such disturbances should not cause the user to miss too much plot or content.
People typically tend to relax at certain times of the day, and mobile TV content should be designed to fit into this pattern of behaviour. For example, the Ericsson ConsumerLab study found that people will watch weightier content early on in the day – on the way to work, for example – when they may want to check the day's business and technology news. This is also true of people who commute long distances and so have time to concentrate on more demanding content.
The enthusiasm of those taking part in the study is echoed in real-world trials of interactive mobile TV services conducted by Ericsson and Norwegian state broadcaster NRK since 2005. Two-fifths of those who downloaded the mobile TV client used it every day – around four times a day on average. One of the most interesting findings is that average session times for users who have access to interactive features are typically double those of viewers watching standard programming.
One new aspect of the NRK–Ericsson collaboration is the world's first trial of interactive, personalised advertising, started in December 2006. The advertisements are interactive, customised to ensure their relevance to individual users, and tailored to the user's age, gender, location and personal interests. Without any marketing, around 200 consumers downloaded an easy-to-use client that enables them to receive personalised mobile TV advertising in return for free TV shows, radio channels and other content on their mobiles (users still pay for the connection). As at January 2007, the trial has shown that clicking on ads increases average session time from around two-and-a-half minutes to over six minutes, and the average click-through rate is 16 per cent. The most popular driver of click-throughs is the offer of free ringtones.
The results of such studies and trials are encouraging, but the key question now is: what is the best way forward for operators to meet the demands of a mobile TV mass market?

More interactive, more personal
Mobile TV is much more than traditional TV delivered to a small screen. In addition to being able to view content wherever they go – with a combination of linear TV, on-demand TV and Podcast TV – users have access to easy-to-use programme guides, fast channel switching, interactivity and personalisation features.
Users have greater freedom in where and when to watch. They can use personalisation features to get a notification when their favourite team has scored and then get the video clip and goal summary pushed to their device. They can interact with friends and other communities, for example through commenting on or voting for their favourite contestant in a TV talent show. Mobile TV clients are available as downloadable client software, which offers consumers a convenient way to access the service, change programmes using the keypads or a menu, and get a programme overview through an electronic programme guide.
For operators, interactivity opens up new revenue stream possibilities, for example through premium SMS when voting.
Personalisation is valued by users. With mobile TV delivered by cellular networks, consumers can personalise their own TV programme portfolio according to their preferences. Based on their video-on-demand consumption history, a recommendation engine can offer a tailored programme schedule. Content can also be provided based on location or TV advertising region.
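The recommendation step described above can be sketched in outline. The code below is an illustrative toy, not Ericsson's engine: it simply weights the on-demand catalogue by how often the user has watched each genre (all names and data are invented):

```python
from collections import Counter

def recommend_schedule(viewing_history, catalogue, top_n=3):
    """Rank catalogue items by how often the user has watched each genre.

    viewing_history: list of genres the user has watched.
    catalogue: list of (title, genre) pairs available on demand.
    """
    genre_weight = Counter(viewing_history)
    # Sort available programmes by the user's affinity for their genre;
    # Python's sort is stable, so equally-weighted items keep catalogue order.
    ranked = sorted(catalogue, key=lambda item: genre_weight[item[1]],
                    reverse=True)
    return [title for title, _genre in ranked[:top_n]]

history = ["news", "football", "news", "drama", "news"]
catalogue = [("Morning Briefing", "news"), ("Cup Final Replay", "football"),
             ("Soap Omnibus", "drama"), ("Tech Update", "news")]
print(recommend_schedule(history, catalogue))
```

A production engine would of course blend many more signals (location, time of day, advertising region), but the principle is the same: consumption history drives the tailored schedule.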
Ericsson ConsumerLab's study found consumers have clear views about where and when advertising is acceptable. Ideally, ads should be placed before or, preferably, after a programme – not included as a break in the middle. The length of the ads also needs to be balanced: users have a limited amount of time available when watching mobile TV, and they do not want this time eaten into by long ads. The length of the ad must be in proportion to the length of the programme. After a weather forecast, for example, users would be happy to see a short sponsor's message, while during a longer TV serial episode they would accept a slightly longer ad before, and maybe one after.
What is clear is that a well-functioning and robust service, with a great choice of content, is a must for continued mobile TV usage. The study found that features that enable users to take control of their mobile TV viewing would be very popular. For example, the ability to pause, play, rewind, fast forward and record content would be much appreciated. Consumers are also interested in making their mobile TV watching more flexible, and would like to be able to use the mobile phone as a portable media controller.

Converged vision
Mobile TV services via cellular network open up new business models and revenue streams for the operator – for example, through targeted advertising and add-on sales. The vision is one of networked media delivered over three different screen types – TV, computer and mobile phone – enabled by IP Multimedia Subsystem (IMS). 
To enable such seamless, converged service offerings for a mass market, removal of usage restrictions is key. If we want to enable any TV to interconnect with any mobile device or any computer – to be able to view content in the most appropriate way at any given time (TV screen in the living-room; mobile phone when waiting for a bus) – we need to have industry-wide agreement on how these different devices will share content.
Mobile TV and IPTV solutions must meet very high requirements on scalability and performance. The telecoms industry is used to talking about five nines availability: TV services must meet at least the same requirements – just imagine the consequences of large events, perhaps sponsored by major international brands, being interrupted by technical difficulties.
The telecoms industry, with its well-established commitment to global standards and high quality of service, is well placed to address these requirements. Interactive, personalised TV should be part and parcel of fixed–mobile convergence and full-service broadband, underpinned by IMS and other open international standards.
Mobile cellular networks already have, by default, both down- and up-link communication abilities in the network, and so are ready to offer interactive, personalised mobile TV services. More to the point, existing mobile networks already have more than 2.5 billion customers and global coverage in place, and their capacity is being given a tremendous boost through High Speed Packet Access (HSPA) and, in the future, Long Term Evolution (LTE).
Today, mobile TV services are delivered over cellular networks using unicast streaming technology. Data packets are transmitted from a single source to a single destination, for example from a content server to a mobile device.
There is more than enough capacity in 3G networks to scale up for mass-market mobile TV services, especially if the operator has deployed HSPA. HSPA provides several capacity increase steps, enabling more users to be served with a greater diversity and higher quality of mobile TV content. LTE moves mobile capacity up to another level: Ericsson recently demonstrated speeds of 144Mbit/s in a live network using LTE.
Multimedia Broadcast Multicast Service (MBMS) will enable broadcasting over 3G networks by allowing traffic channels to be shared by all users that are simultaneously watching the same programme in the same area. MBMS complements HSPA to support higher loads in dense areas and ensure efficient network utilisation (as shown in Figure 1).
Figure 1. Mobile cellular networks can meet mobile TV capacity needs today and tomorrow

By using a combination of unicast and broadcast, network capacity and investments can be optimised. Broadcast bearers can be used for the most popular programmes, and an unlimited number of additional programmes and on-demand content can continue to be delivered efficiently using unicast. In the combined unicast–broadcast scenario, the user will not notice any difference in how content is delivered: there is a single user interface (TV client) in the terminal to access all content. This combination of unicast and broadcast provides the best way to serve both personalisation and the mass market.
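The unicast–broadcast trade-off can be sketched as a simple bearer-selection rule. The threshold below is purely illustrative (the real break-even point depends on radio configuration and is not a 3GPP figure):

```python
# Hypothetical sketch: pick a broadcast (MBMS) bearer for a channel in a cell
# once enough viewers are watching it simultaneously; otherwise serve each
# viewer with an individual unicast stream.

BROADCAST_THRESHOLD = 8  # illustrative break-even point, not a standardised value

def select_bearer(concurrent_viewers: int) -> str:
    if concurrent_viewers >= BROADCAST_THRESHOLD:
        return "broadcast"   # one shared MBMS bearer serves everyone
    return "unicast"         # dedicated streams remain cheaper below break-even

def radio_channels_used(concurrent_viewers: int) -> int:
    # Unicast consumes one traffic channel per viewer; broadcast consumes one
    # channel regardless of audience size.
    if select_bearer(concurrent_viewers) == "broadcast":
        return 1
    return concurrent_viewers

for viewers in (1, 5, 8, 200):
    print(viewers, select_bearer(viewers), radio_channels_used(viewers))
```

The point of the sketch is the asymmetry: a popular live event costs the network one shared channel per cell, while the long tail of on-demand content scales per viewer, which is why the two delivery modes complement each other.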
One glimpse of how converged TV services will work is being provided by a joint project between Endemol, Ericsson Netherlands and Triple-IT. The companies are creating a service that enables subscribers to interact with TV shows – for example, by sending in live news reports or comments from their mobile phones – in real-time, even from overseas.
The opportunity for telecom operators to create unique interactive, personalised mobile TV services is there – all that's needed now is for the right technology choices to be made.

Per Nordlöf is Product Strategy Director at Ericsson, and Anders Kälvemark is Senior Consultant at Ericsson ConsumerLab

Two associations, TIA and USTelecom, have joined together to create a single, much-needed industry venue, say NXTcomm organisers

NXTcomm will bring together the unique strengths of the two trade associations - TIA's representation of supplier and technology companies, and USTelecom's representation of the world's leading communications and entertainment companies.


The event will showcase the leading telecommunications companies in the new converged ICET industry that are merging communications and entertainment via innovations fueled by broadband deployment.
"NXTcomm will be about doing business - about advancing an industry of central importance to the modern information economy, and showcasing what's next across the colliding worlds of information, communications and entertainment," says USTelecom President and CEO Walter B. McCormick, Jr.
The show will bring together top executives from every segment of the global industry to exhibit, explore new business opportunities and buy the latest technologies driving the converged communications and entertainment industry. Showcasing hundreds of new and innovative products, the NXTcomm exhibit floor will reflect the dramatic changes in the sector. The show's programming will feature keynotes from the top CEOs in the communications, entertainment and technology sectors. From enterprise users and service providers to technology suppliers and content providers - the forces that drive communication and the solutions to harness it converge here. With 20,000+ attendees and 450+ exhibitors, NXTcomm aims to be at the center of the expanding ICET universe.
The event's keynote speakers include AT&T Chairman and CEO Ed Whitacre; Bell Canada President and CEO Michael Sabia; Cisco Chairman and CEO John Chambers; GE Vice Chairman and Executive Officer and NBC Universal Chairman Bob Wright; Motorola Chairman and CEO Ed Zander; Verizon Communications Chairman and CEO Ivan Seidenberg; and Federal Communications Commission Chairman Kevin Martin.
More speakers will be announced in the coming months.
 “These CEOs are directly responsible for charting the course for the converging communications, entertainment and information industries,” says Wayne Crawford, Executive Director of NXTcomm.
The show will also feature a broad range of educational programming.

International Attendees
International attendees who plan to come as buyers qualify for special benefits, including:
•    The International Trade Center (ITC), exclusively for the use of international buyers
•    US Department of Commerce assistance in finding U.S. suppliers
•    Interpreters (in the International Trade Center only)
•    Transportation between official hotels and McCormick Place

NXTcomm 2007 - Conferences: June 18 – 21, 2007; Exhibits: June 19 - 21, 2007
McCormick Place, 2301 S. Lake Shore Drive, Chicago, IL, USA www.nxtcommshow.com

Network Resilience promotes peace of mind for businesses of any shape or size, according to Piers Daniell

With businesses becoming increasingly reliant on telecommunications, it is surprising how many companies rely on a single voice and Internet solution without a backup in place. Consolidation has been a buzzword in the IT industry for the past few years, but a consolidated communications network creates a big risk for business in the form of a huge single point of failure.

CONSOLIDATION - Another string to your bow

Historically, network service levels have been managed by availability, but today poor performance can be as damaging as, or more damaging than, no service at all. To remain competitive, networked application performance must meet the needs of the business and its end-users. It is now widely accepted that service provision and receipt should be governed by an agreement – the Service Level Agreement (SLA). This is essential to define the parameters of the service, for the benefit of both the provider and the recipient, and it is crucial in setting the expectations of all parties as to what service can be relied upon and what the results of failure will be. Of course, once the expectations and the agreement have been set, it is essential that the terms are adhered to by all parties. This is not always easy to achieve.
With BT currently suffering widespread and well-documented resource issues it is regularly failing to meet its SLAs.  This can mean that businesses are putting their future at risk by losing their Internet or voice services for, sometimes, extended periods of time.  It is critical that they evaluate the costs to the business of such service losses and seek to protect those services to a relevant and appropriate level.  Obviously, if the business relies totally on the availability of its telecoms infrastructure then greater weight will be given to this equipment and the steps taken to protect those services will be rigorous. 
However, few businesses have, in reality, looked in detail at the response they would get from their ISP should the worst happen. And they need to ask the crucial question: "what compensation will we actually receive?" No telecom provider would recompense the true loss of business, and the majority would only look to reimburse the customer for a percentage of their monthly service rental. Even for the smallest of companies, this equates to a pitiful amount compared with the business lost when e-mail, Internet or voice services fail.
The answer may well lie in a philosophy that has saved many a shrewd businessman from disaster – the idea that spreading the risk reduces the exposure. In this instance, this means putting additional measures in place to limit the impact of a single source problem, and companies should consider this to protect against service outages.  By purchasing, for example, a smaller Internet connection from a different provider, they protect against a technical problem with the main connection.  For larger businesses this is a common solution but for the smaller companies the cost implications or the requirement for additional hardware and network configuration demands can often prevent them from making this provision.  This is where the cost equation comes into play and should be carefully considered before further infrastructure investment is declined. 
Outages can be caused by so many factors that no matter how reliable your connectivity, single points of failure are just too risky and need to be eliminated. Increased resilience and backup will already be in place on the ISP's core network. However, that is only a small part of the overall solution when an Internet connection is provided. The copper in the ground for the service line is most likely owned and operated by BT, which runs to a local telephone exchange where voice and Internet traffic is routed to the major data centres across the country. This 'last mile' suffers from a multitude of potential problems that could take out the vital link for a business and its customers.  And these problems may be as simple as road workers cutting through cables as they dig up a road to lay new pipes.  Although this is an easy mistake to make and one that occurs on a fairly regular basis, it can cause immense problems and take a good deal of time to rectify.  The simplicity of the mistake is no consolation to the companies that are affected by such an outage. 
Then there are the more technology driven problems.  For example, LLU SDSL providers are reporting an increasing problem with users being unceremoniously disconnected by BT engineers unfamiliar with the technology. SDSL operates over a data-only circuit with a true symmetrical dataflow. However this means the copper cannot be used for voice, as is the case with ADSL, and so there is no dial tone on the line. The dial tone is used by engineers to tell if a line has been properly connected and is in use. Without a dial tone engineers might accidentally reuse the copper when cabling a new circuit in the local exchange.
Aside from careless engineers, major outages can cause longer periods of downtime no matter how much care is taken or engineering resource is available. From rats chewing through cables and letting water onto the copper, to lightning strikes or power surges, faults are difficult to troubleshoot and time-consuming to fix. Businesses should therefore ensure proper procedures are in place and alternative connectivity has been sourced to maintain communications.
Network resilience and connection redundancy are essential. At the very least, businesses should look to purchase two different kinds of Internet connection from their supplier so that, should one line or technology have an issue, the business will be able to continue to operate using the other. By taking this secondary service from the same ISP, it is also possible to request that both the primary and secondary circuits use the same network settings, making it easier to switch connections because no further network configuration is required. There are also systems on the market from companies such as Cisco that have the capability to include two routers in one case, making the switch-over from primary to secondary lines pretty much seamless. Many customers who do consider backup stick with the industry standard of ISDN; however, with per-second billing the service can prove costly if heavily used. ISDN also offers only 64Kb/s of data transfer – a very small amount compared with the 20,000Kb/s offered by some ISPs' ADSL services. But with the advent of DSL technology in the UK there has been considerable investment in the technology, and businesses now have a number of options when considering backup.
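The primary/secondary switch-over described above can be sketched as a polling loop with a little hysteresis, so that traffic does not flap back to a briefly recovered line. This is an abstract illustration, not vendor code; in practice the decision would live in router policy rather than an application script:

```python
def failover_decisions(samples, stable_required=3):
    """Walk a sequence of (primary_up, secondary_up) health samples and
    decide the active circuit at each poll. To avoid flapping, traffic only
    returns to the primary after it has tested healthy for
    `stable_required` consecutive polls.
    """
    active = "primary"
    healthy_streak = 0
    decisions = []
    for primary_up, secondary_up in samples:
        # Track how long the primary has been continuously healthy.
        healthy_streak = healthy_streak + 1 if primary_up else 0
        if active != "primary" and healthy_streak >= stable_required:
            active = "primary"
        # React to the circuit we are currently using going down.
        if active == "primary" and not primary_up:
            active = "secondary" if secondary_up else "outage"
        elif active == "secondary" and not secondary_up:
            active = "primary" if primary_up else "outage"
        elif active == "outage" and not primary_up and secondary_up:
            active = "secondary"
        decisions.append(active)
    return decisions

samples = [(True, True), (False, True), (True, True),
           (True, True), (True, True), (True, True)]
print(failover_decisions(samples))
# The primary fails at poll 2 and recovers at poll 3, but traffic only
# moves back after three consecutive healthy polls.
```

Because (as noted above) both circuits can share the same network settings, "switching" here amounts to moving the default route from one gateway to the other.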
SDSL, which offers symmetrical data connections, has proved popular as a primary connection with the SME market, but also as a secondary backup to a larger company's leased line service. Moving down the scale, ADSL offers great backup for businesses invested in SDSL and can be used in the aforementioned Cisco routers. Backing up an ADSL circuit is best achieved by choosing a second ADSL circuit, with two further provisions. First, ensure that the second ADSL line is activated over a totally different phone line within the building: a normal phone cable carries three phone lines, so a fault in one cable can affect several lines. Second, try to get the second line activated over a different ADSL technology, which will use a different part of the exchange. Over the past few years telecom carriers have been installing their own ADSL technology in BT's exchanges – a process known as Local Loop Unbundling (LLU) – providing alternative connectivity options.
Although the concept of bonding or aggregation has been with us for some years now, it is only recently that some service providers have developed state-of-the-art aggregation technology which makes it possible to provide customers with a connection bonded using multiple lines from different providers and even using different technologies. The potential of such a service is huge as it increases the level of resilience many fold. This can be presented to the consumer in a single hand-off, avoiding a lot of the downsides of a simple backup connection, as there is no need to use different hardware, reconfigure the internal network or miss out on the extra bandwidth as the service is completely aggregated together creating one virtual service at all times.
Looking at voice protection, standard services such as call answering can prove invaluable when all office communications are severed, ensuring client enquiries continue to be dealt with. Other solutions, especially those based on new technologies such as VoIP, empower businesses to forward calls straight to other landline phones or mobiles in the event of losing connectivity.
Whichever way businesses choose to protect themselves, the message is clear: with the Internet and telecommunications becoming key facilitators in day-to-day business activities across a wide range of industries, forward planning and backup solutions are essential.

Piers Daniell is Director of Fluidata

If you think telecom-media-Internet convergence means a few years’ turbulence and the captain switching on the seatbelt sign to make sure nobody gets injured, think again. Keith Willetts notes that it's clear that all three sectors are going to fuse, but also thinks that telecom can play an important role in the rebirth

The imminence of so-called ‘convergence’ has been a hot topic in communications and networking for at least 15 years, probably longer. ‘Will telecom converge with broadcast media?’ we were always being asked.  ‘Will fixed and mobile telephony come together in a single service?’.  And so on. 

Well not any more. Convergence is no longer just something rumored to be around the corner. Like that other imminent happening, climate change, it's already here and is making itself felt.
With the telecom, media and Internet sectors now so clearly banging up against each other it's not surprising that this year's TeleManagement World in Nice (20-24 May) is putting convergence and its opportunities front and center.
With an overall theme of Managing Telecom-Media-Internet Convergence:  Leadership Strategies for the World of Communications, Information & Entertainment, we kick off on Day One with an executive summit 'The Big Bang: Telecom-Media-Internet Collision & Rebirth'.
That's what we think we're looking at now.  Not a gentle coming together of sectors where one group of businesses makes a few incursions into another group's traditional territory, but a full-scale fusion and a subsequent re-birth a little further down the track.
EBay's purchase of Skype is an obvious example of the process in action, but there are plenty more. We see traditional telcos trying to turn themselves into Communication and Entertainment Service Providers by adding IPTV to their service offerings and looking to buy and deliver the content themselves. We see cable operators and even satellite broadcasters also selling telephony; mobile operators selling broadband; and just about every other crossover permutation you can think of.
The most urgent question being asked by people working in all these sectors is, naturally enough, who is going to win out?  Will traditional telecom service providers be killed off by voice over the Internet on one side and mobile on the other?  Will the emerging online publishing industry, enabled by the rise of the Internet, knock out traditional paper-based publishing?
There may be relative winners and losers over time, but a more realistic scenario is that we are entering a technical convergence phase where different types of player will partner to move content over different networks, share revenue, ensure security and so on. 
We think that's where the TM Forum is going to make an invaluable contribution. Up to now our mission has been to provide a framework to enable different pieces in the telco back office jigsaw puzzle to fit together in an efficient, easy-to-integrate way. So we developed NGOSS (New Generation Operations Systems and Software), which could be thought of as a blueprint for the internal convergence of the fiefdoms and information silos that have developed within our service provider members.
Now we're facing the same sort of convergence problem one level up, between different sorts of companies. Because we think we know what we're doing – after all, we've had the practice – we want the TM Forum to be a vehicle for bringing together the new converged telecom-media-Internet environment. After all, we're not just seeing a fusion of technologies but a fusion of different competencies, and one of the characteristics of the telecom sector, as exemplified in the work of the TM Forum, is a certain methodical way of stitching together complex, overlapping technologies – it's what we've been doing for about 100 years and we're good at it.
So come to Nice in May and hear more about how we're going to be an important midwife at the rebirth of the new Telecom-Media-Internet sector. 

Keith Willetts is Chairman, TM Forum

Ever-increasing sophistication among cyber criminals is putting pressure on organisations that must also meet the requirements of compliance legislation. Peter Woollacott looks at ways to fight back

Cybercrime has come a long way since 1988, when the Morris worm - considered by many to be the first piece of malware and certainly the first to gain widespread media attention - hit the Internet. The worm, a self-replicating program, was written by a graduate student at Cornell University, Robert Tappan Morris, and launched on November 2, 1988.

CYBERCRIME - Safety first

The replicating and 'clogging' concept of the worm, which effectively brought the Internet to a grinding halt, has since been copied many times, although no-one could have foreseen the developments in the world of malware and cyber-attacks that would ensue in the years to come.
If we fast-forward just over 18 years to January, 2007, we see the Nordea Bank in Sweden reporting a loss of $1.1 million to Russian organised criminals over a three-month period, with a key logging Trojan at the heart of the scam.
According to BBC news reports, the bank lost its money in relatively small amounts over the three months, with debits spread across the accounts of around 250 business and consumer (retail) customers.  Reports note that the Russian criminals developed their own custom Trojan, which was sent to the bank's customers disguised as an anti-spam application.  Because the Trojan was custom-made and only sent to a small number of Internet users, it fell below the radar of conventional IT security software.
Once the bank's customers downloaded the application, they were infected by a modified version of the haxdoor.ki Trojan, which triggered key logging when users accessed their Nordea bank accounts online. These details were then relayed to a group of servers in Russia, where an automated routine started siphoning money from the customers' accounts.
The bank has borne the costs of reimbursement to all the affected customers and is seeking ways of preventing further attacks of this nature.
Unfortunately for customers worldwide, this type of low-value, multi-account fraud is extremely difficult to counter, unless the bank concerned has both heuristic and holistic IT security technologies to protect its IT resources.
The Nordea bank incident illustrates the fact that modern cybercrime has "come of age", driven at least in part by the arrival of organised criminals using sophisticated techniques to extract significant amounts of money from organisations both large and small without detection.
Most modern organisations have installed multi-vector security technology, including perimeter security systems, to protect their IT resources against almost every conceivable form of external attack, whether it is an e-mail-borne virus, hybrid malware, or a sustained brute force attack on their EFTPOS/financial systems.
But this is only part of the security equation. Today there is also the very real issue of internal attacks, originated by anyone from a disgruntled employee to a WiFi-wielding cracker who gained access to the company's internal network using a wireless backdoor, courtesy of a new Centrino-driven notebook sitting on the marketing director's desk.
Employing user privilege-based control systems on the IT network, as well as installing event monitoring/response technology that can block any unauthorised and/or unusual activity on the IT resource, can protect against loss through internal attacks of this type - as well as sophisticated hybrid attacks from the Russian criminals involved in the Nordea Bank scam.
Unfortunately for hard-pressed IT managers the world over, some of the best IT monitoring/control systems can be a relatively expensive option to install and operate, meaning that a compromise between security and cost might seem the order of the day. This could prove to be a false economy and impact good governance.
Modern legislation, like the Sarbanes-Oxley Act in the US, the Companies Act in the UK and other equivalent laws around the world, imposes a duty of care on senior officers within organisations to install an auditable IT security system that protects against all known and unknown security threats that might impact their organisation.
Perhaps worse, these new laws do not take account of the fact that hacker techniques - as clearly illustrated in the Nordea Bank attack - are becoming more sophisticated and specifically designed to evade existing detection methodologies.
Much of the forensic accounting and data auditing software seen in the last decade is, in fact, now significantly out of date against a backdrop of the increasing levels of authorised misuse, unwitting internal participation and fraud that are starting to appear in many major organisations. Authorised misuse is a grey area that many IT security managers overlook at their peril. If, for example, an office worker starts downloading the entire company customer base, it may be that a legitimate back-up is in progress, or it might be the beginnings of a major contravention of local data protection legislation. But which is it?
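The "which is it?" question is usually answered by comparing current activity against a behavioural baseline. A minimal sketch, assuming per-user daily transfer volumes are already being collected (the data, field names and the three-sigma threshold are invented for illustration, not Tier 3's product):

```python
from statistics import mean, pstdev

def flag_unusual_transfers(history_mb, today_mb, sigma=3.0):
    """Flag a user's transfer volume for today if it exceeds the historical
    mean by more than `sigma` standard deviations - e.g. an office worker
    suddenly exporting the entire customer database.
    """
    baseline = mean(history_mb)
    spread = pstdev(history_mb) or 1.0  # avoid zero spread on flat history
    return today_mb > baseline + sigma * spread

# Typical daily transfer volumes (MB) for one user over recent weeks.
history = [40, 55, 38, 61, 47, 52, 44]
print(flag_unusual_transfers(history, 49))    # a normal day
print(flag_unusual_transfers(history, 900))   # bulk export: raise an alert
```

A flagged event does not prove wrongdoing; it simply routes the activity to a human (or a pre-defined lock-down policy) for the legitimate-backup-versus-breach judgement the paragraph above describes.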
An effective monitoring system capable of alerting IT management staff to such an event and taking pre-defined lock-down action, as appropriate, goes a long way towards protecting against loss, keeping the auditors at bay, and, perhaps more importantly, keeping the management on the right side of the law.
This is because a failure to address such increasingly prevalent internal security matters is a breach of a growing body of compliance legislation, such as Sarbanes-Oxley in the US and the Companies Act in the UK.
All is not lost, however, as a new generation of monitoring systems, capable of using real-time heuristic and holistic analysis techniques - alongside more conventional auditing and IT security software - can help IT managers meet the demands of increasingly complex risk environments set against increasingly draconian compliance legislation.
Our observation here at Tier 3, where we have developed a behavioural intelligence approach to IT resource protection and control, is that an increasing number of major organisations around the world that do business with US counterparts are now adhering to the provisions of the Sarbanes-Oxley Act. This leads us to conclude that most US companies will soon include Sarbanes-Oxley or similar compliance requirements in their commercial trading terms and conditions with other parties.
Improved governance is good business practice and so even those non-US organisations not forced into a 'comply or die' situation with international legislation will, we believe, find it advantageous to move to this best practice approach on IT security.  For this reason, organisations should consider moving from a point-solution based IT security system to an integrated approach, with multi-faceted security technology installed, at all technology levels, across the organisation under the control of a fully automated and auditable database-driven ICT Threat Management system.
In addition to this, if an organisation takes steps to perform a continually updated research and risk analysis on its IT systems and resources, then it is well on its way to ensuring relevant regulatory compliance - as well as protecting against organised criminal gangs using customised Trojans to extract money from the organisation's bank accounts.

Peter Woollacott is CEO, Tier 3

If operators are to build profitable content-based service businesses, they will need to address unacceptably high levels of avoidable revenue loss, says Geoff Ibbett

Year on year telecom operators lose about 12 per cent of their revenue to avoidable leakages. Clearly, operators have a great opportunity to show immediate improvement in their top and bottom line performance if they can successfully tackle this revenue leakage.

REVENUE MANAGEMENT - Stopping the leaks

And there is some good news too. Revenue maximisation programmes managed by CFOs deliver best results in containing revenue leakage. Undoubtedly, telecom operators can show dramatically improved results if they implement an enterprise wide revenue management programme effectively managed by, and reporting to, the CFO.
Unfortunately, over the years, operators have deployed BSS/OSS systems with an eye on the immediate needs of the business without necessarily analysing the impact on existing systems within the chain. This has often had the effect of fragmenting the operations chain into seemingly impregnable silos. An executive can access a lot of data but very little actionable information.
In addition, telecom operators are stepping into the exciting world of content-driven services. These new services will help telecom operators combat the problem of rapidly commoditising voice-based services that suffer from high rate of subscriber churn and falling ARPUs.
These next-generation services will have an even more complex revenue distribution and settlement chain associated with them, involving partners and resellers. Telecom operators will find themselves cast as trusted partners for product delivery and related payment receipt. This new role will bring the impact of revenue leakage sharply into focus. In the conventional voice-based services environment, operators could treat revenue leakage as opportunity loss. In the content-driven service environment, however, operators will suffer real loss, because an operator is liable to pay the content provider even if it does not, or cannot, collect matching payment from the subscribers.
The greatest challenge for a telecom operator is to establish a strategic framework that fosters sustained profitable growth. This is easier said than done: the industry is fiercely competitive, demands rapid response to ever-changing business and technology environments, and offers little leeway to experiment, let alone make mistakes. This is where the next generation of revenue management platforms comes in.
Revenue management in its broader context though, is much more than just assuring revenue, reducing fraud and managing credit risk.  It should provide a mechanism for actively managing the performance of an operator's business. 
Of course it should monitor, control and ensure revenue integrity within all of the various revenue chains, but also provide the ability to manage the cost base associated with service delivery to allow profitability and product margin to be managed rather than just revenues alone.  This is because not all revenue is good revenue; at least if it costs an operator more to deliver the service than is received in receipts from its customers.  Often this information is simply not available to the business manager.
But the holy grail of revenue management is to provide a single, consolidated, real-time view of the overall performance of the business, supporting business managers in day-to-day decisions that directly impact the performance of their business.
Next generation revenue management moves beyond just managing leakages; it needs to address profitability and even track subscriber behaviour so that the assets of the business are put to optimal use.
One of the biggest hurdles to overcome in achieving this is in bringing information together, from the traditionally separate systems that exist today, and providing a visual representation of this information from a business perspective.
The concept of the Revenue Operations Centre (ROC) is to do just that, presenting the information in a manner that enables issues affecting business performance to be easily identified, investigated, diagnosed and corrected.
Modelled on the Network Operations Centre (NOC), it is intended to provide an equivalent view to the financial community of the operational effectiveness of a telecom operator's revenue network, as the NOC itself does for network operations.
A Revenue Operations Centre, though, is much more than just another dashboard; it should be underpinned by an integrated suite of revenue management solutions, providing multiple levels of drilldown to support the day to day activities of different levels of business management.
To support the goal of assessing and quantifying business performance and revenue integrity, the Revenue Operations Centre also needs to provide comparative analysis of revenue operations.  The full power of the ROC can be realised when business performance can be tracked at key stages of revenue realisation.
Six such stages have been identified for monitoring by a ROC:
•    Forecast Revenue, based on revenue targets usually derived from the company's business plan
•    Predicted Revenue, based on revenue projections for the current subscriber base together with estimated ARPU and AMPU
•    Expected Revenue, based on the provision of service within the network and service usage recorded within the network
•    Invoiced Revenue, based on actual billed revenues
•    Collected Revenue, based on the revenue actually received by the company
•    Reported Revenue, based on how those collected revenues are reported in the accounts and summarised in the company's annual report
In an ideal world all of these revenue stages should give the same value, but of course they never do. For example, the difference between Expected Revenue and Invoiced Revenue can be accounted for by revenue assurance losses and internal fraud, and the difference between Invoiced Revenue and Collected Revenue can be accounted for by external fraud and bad debt.
By comparing these key revenue perspectives, the operational effectiveness of a business can be determined and, by combining information from a telecom operator's various monitoring systems, gaps between the revenue stages can be quantified, enabling the business to understand whether any gaps remain unexplained.
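The six-stage comparison can be illustrated with a small sketch. The figures below are invented for illustration; the point is simply that each adjacent pair of stages is compared and the gap quantified so it can be explained or investigated.

```python
# Illustrative ROC-style gap analysis across the six revenue stages.
# All monetary figures (in millions) are invented for the example.
stages = [
    ("Forecast",  100.0),   # from the business plan
    ("Predicted",  97.0),   # current subscriber base x estimated ARPU/AMPU
    ("Expected",   95.0),   # service usage actually recorded in the network
    ("Invoiced",   92.5),   # gap vs Expected: revenue assurance loss, internal fraud
    ("Collected",  89.0),   # gap vs Invoiced: external fraud, bad debt
    ("Reported",   89.0),   # as summarised in the annual report
]

def stage_gaps(stages):
    """Return the gap between each adjacent pair of revenue stages."""
    return [
        (name_a, name_b, round(value_a - value_b, 2))
        for (name_a, value_a), (name_b, value_b) in zip(stages, stages[1:])
    ]

for a, b, gap in stage_gaps(stages):
    # Any gap that cannot be accounted for is a candidate leakage to investigate.
    print(f"{a:>9} -> {b:<9} gap: {gap:.1f}m")
```

In this example the cumulative gap from Forecast to Reported is 11.0m; the ROC's job is to attribute each slice of that gap to a known cause, leaving only the genuinely unexplained portion for investigation.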
The process of investigating these gaps will reveal hitherto unknown issues, such as revenue leakages, stranded and under-utilised assets, inflated operating costs and inefficient systems and processes, amongst other things.
It is the Revenue Operations Centre that will become a key business solution, enabling a business to manage its four levers of profitability effectively: price, cost of service delivery, product portfolio and customer targeting. Those businesses that achieve this will be able to maximise profit growth within an increasingly competitive and complex industry.

Geoff Ibbett is Director, Product Management, Subex Azure

European Communications presents its regular round-up of the latest developments in the world of telecommunications

ITU goes West
ITU Secretary-General Hamadoun I. Touré recently conferred with some of the leading lights of Silicon Valley, aiming to cement ties with the private sector and promote the use of state-of-the-art ICT to bridge the digital divide.

Among the participants were executives from communications, hardware, Internet, software and venture capital firms, including Intel, Cisco Systems, Nokia Siemens Networks, Hewlett Packard, Google, IBM Venture Capital Group, Visa International, Microsoft, as well as Stanford University and the University of California, Berkeley.
Speaking at the opening of the "UN Meets Silicon Valley" event, Dr Touré focused on three main trends that appear to be influencing the ICT industry: innovation and cybersecurity; changing business models; and the development of new markets. "Innovation is a key source of new products, added value and fresh growth in revenues," Dr Touré told participants. "I want to challenge you to think beyond the borders of Silicon Valley, beyond even the borders of the United States, to the emerging markets in the rest of the world."
He said that closing the digital divide should not be seen as charity, but as a sound business model attractive to industry.
Describing the ITU as a unique intergovernmental organisation, which also has strong relations with business, Dr Touré added: "The Union has a noble mission: to provide access to the benefits of ICT to all the world's inhabitants.  To achieve that goal, we need to work in partnership with governments, the private sector and civil society, and to exploit the dynamism of regions like Silicon Valley."
A road map to connect the unconnected by 2015 was set out by the World Summit on the Information Society, organised by the ITU in 2003 and 2005. With world leaders recognising the potential of ICT as an enabler for development, Dr Touré said the moment is ripe to harness the culture of innovation and competition in Silicon Valley to connect the world. The ITU has been charged with building the infrastructure required and ensuring security in cyberspace, as well as bringing together all stakeholders to meet the goals of the Summit.
Details: www.itu.int

Entertaining  potential
The mobile entertainment market is set for a new era of rapid growth as 3G environments become more commonplace, applications built for mobile predominate, and more users in the mass market exploit the mobile phone as a multifunctional communications and entertainment device, says Juniper Research.
The value of the mobile entertainment market, including music, games, TV, sports and infotainment, gambling and adult content is forecast to increase from $17.3 billion in 2006 to nearly $77 billion by 2011, driven by mobile TV, video rich applications and a buoyant Asian market. This is rapid growth, but for the potential to be realised, there are still a number of barriers to be overcome.
Principal author of the Juniper Research Mobile Entertainment Series, Bruce Gibson, comments: “The face of mobile entertainment is expected to change significantly over the next five years as next generation mobile services continue to be rolled out around the globe and take-up steadily increases. As 3G services become commonplace, sophisticated mobile entertainment products and services can reach the mass market and provide the sort of anywhere/anytime entertainment that has been predicted for some time, but not really delivered.” However, he adds a note of caution: “Whilst the potential to generate dramatically increased revenues is certainly there, many uncertainties affecting sections of the market still exist and could put a brake on growth - the development of legislative environments for mobile gambling and adult content, and the success of broadcast mobile TV trials currently underway or planned, are just two examples.”
 Dramatic changes in service delivery are forecast, but some aspects of market structure will not change. The Asia Pacific region currently provides the largest market for Mobile Entertainment services and contributes over 40 per cent of global revenues. Despite more rapid growth in North America and in developing markets, the Asia Pacific region is forecast to retain its leadership through to 2011, when it will still contribute 37 per cent of global revenues.
Details: www.juniperresearch.com.

Internet freedom
The Internet industry must do more to fight governments' attempts to repress Internet users around the world, Amnesty International UK Campaigns Director Tim Hancock noted at the Internet Services Providers' Association (ISPA) annual awards ceremony.
“The Internet has revolutionised free speech and gives a voice to millions. But we must be on our guard against those who want to limit access to information and take that free speech away,” he said.
“The Internet is the new front in the battle between those who want to speak out, and those who want to stop them. Businesses whose operations impact on freedom of speech bear no less responsibility for upholding human rights standards than other industries.”
He went on to stress that web users and service providers alike have a responsibility to keep alive the things that have made the Internet great - its democracy, its freedom and the way it gives people access to knowledge and the opportunity to participate and be heard.
Over 60,000 people have joined Amnesty International's irrepressible.info campaign, highlighting the repression of Internet users around the world, and the collusion of major Internet companies with governments such as China to restrict access to information over the Internet.
The human rights organisation recently announced that it was joining multi-stakeholder discussions with companies including Google, Microsoft and Yahoo!, together with other NGOs, experts and investors, to establish principles for  safeguarding human rights on the Internet.
Details: http://amnesty.org

EC gets it right
The socioeconomic profitability of the eCall system, proposed by the European Commission, has been independently verified by a new research report from the analyst firm Berg Insight.
The eCall system is intended to automatically initiate an emergency call to 112 from a vehicle and transmit satellite positioning data to the operator in the event of a road accident. By reducing the reaction time of the emergency services, the system is expected to save thousands of lives annually when fully implemented. Exactly how many lives would actually be saved is, however, the subject of debate between proponents and sceptics who believe the cost exceeds the benefits. According to the findings of the Berg study, there will be a net socioeconomic benefit for the EU if road fatalities and severe injuries are reduced by 3 per cent or more.
“The eCall project is based on the well-known Golden Hour principle of accident medicine, which holds that the chance of surviving a severe injury decreases from 26 per cent to 5 per cent in the first hour,” explains Tobias Ryberg, Senior Analyst, Berg Insight. “Literally, every minute counts when it comes to saving lives, not to mention preventing severe injuries which are a heavy burden on public finances.”
Berg Insight estimates eCall could save 1,400–2,800 lives and prevent 8,600–17,100 severe injuries annually in the EU when fully implemented. Long-term savings would be in the range of €5–10 billion, whereas the long-term cost is projected at €4 billion. Ryberg believes that segments of the automotive industry exaggerate the cost of integrating an eCall device in every new vehicle, as would be required for the system to work.
“Worldwide production of mobile phones now exceeds 1 billion units, and in five years a majority of those will have integrated GPS,” he says. “I am convinced that the cost of producing another 15 million units - without displays, digital cameras and music playback capabilities - will be marginal once the automotive purchasing departments have done their job.”
Details: www.berginsight.com

Future competitive differentiation lies in the quality of the customer relationship and the ability to meet individual expectations.  Mikko Hietanen explains the importance of providing a personalised customer experience to secure loyalty and increase lifetime values

We are living in a world of high churn rates, but should we sit back and simply accept it? A main contributing factor to this phenomenon is that users are expressing growing dissatisfaction with the quality of service delivery and customer care they receive from their communication service providers. They, quite rightly, expect high levels of service and support tailored to their own requirements, but are all too often disappointed and unimpressed with the way it is provided.


Operators are struggling to deliver a significant improvement in the customer experience. The use of analytics, better segmentation and outbound campaigns has overcome some of these issues, but it is clearly not enough. There is a definite disconnect between marketing's ambition to build lifetime relationships and the lack of co-ordination and connectivity between the customer-facing systems designed to achieve this goal.
Creating an improved customer experience requires less time and effort than communication service providers may think. The emphasis is to stay in tune with the customer and develop marketing plans to address them on a one-to-one basis by utilising and enhancing existing IT investments.  The pursuit of this essential business requirement is known as Customer Lifecycle Management (CLM), and is fast becoming the single most efficient method of retaining profitable customers.

Unlocking customer data
All service providers share one key asset: customer data. Ensuring every piece of that data is accessible and delivers its full value is the foundation on which to build an improved customer experience. Having the capability to build in-depth profiles from all historical and contextual data, and to continually add to them as more interactions take place, is the way to really get to know your customers on an individual basis.
However, collating and co-ordinating this data presents its own set of challenges. Access is often hampered because the many different customer-facing systems are incompatible, leaving vital data locked away in separate systems such as e-mail, direct mail, SMS, IVR, web portals, CRM and campaign management tools. To be effective, every single piece of data needs to be unlocked and integrated to work together as a comprehensive unit.
Opening up this data is like opening up Pandora's box.  Enriched profiles can be constructed as you start to monitor exactly how each customer interacts with you, why and when.
For example, a customer may be in dispute with customer services over a recent bill. It is important during this period that the customer is not contacted with other offers until the situation has been resolved, but with a lack of co-ordination between systems this is hard to prevent. If the very same customer meets the criteria of a segmented group for a new service, a campaign management tool will automatically include them in the campaign, oblivious to the fact that the timing is not right. A non-response from an unhappy customer will then automatically trigger a reminder for a service they may have no interest in, and before too long the customer feels frustrated, increasing the probability of churn.
This scenario can be avoided if all inbound and outbound campaign data is collated and integrated from one system.
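A co-ordination rule of the kind described above can be sketched in a few lines: before a campaign tool includes a customer, it consults a single shared view of open interactions and suppresses contact while a dispute is unresolved. All names and categories here are illustrative assumptions, not any vendor's actual API.

```python
# Minimal sketch (names assumed) of a campaign-suppression rule driven by a
# single shared view of each customer's open interactions.
open_interactions = {
    "cust-42": ["billing_dispute"],   # in dispute with customer services
    "cust-77": [],                    # no open issues
}

def eligible_for_campaign(customer_id: str) -> bool:
    # Suppress any outbound offer while a dispute or complaint is open,
    # regardless of which channel or tool originated the campaign.
    blocking = {"billing_dispute", "open_complaint"}
    return not blocking.intersection(open_interactions.get(customer_id, []))

segment = ["cust-42", "cust-77"]      # customers matching the campaign criteria
targets = [c for c in segment if eligible_for_campaign(c)]
print(targets)  # cust-42 is held back until the dispute is resolved
```

The design point is that the rule lives in one place and is consulted by every outbound system, rather than each campaign tool applying (or forgetting) its own version of it.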

Personal attention
Adding the personal touch makes every customer feel special and delivers a fantastic brand experience.  Initiating truly personalised dialogues and responding in context enriches each and every interaction.  With the level of customer data available there is no need to simply push offers to segmented groups via campaign management tools. 
The customer can initiate the start point of any dialogue by approaching their provider with a specific need.  This need can then be addressed by positioning offers or other marketing-driven content in the context of the interaction.  In some cases the customer may trigger an additional sale opportunity or possibly an educational tip regarding a new service.  The real difference here is the communication is personalised to the user's own relationship and the exposure of the message is driven by the customer behaviour. 
To personalise transactions you need to understand what is needed from each system, and what each system needs to know and when, for it to play its part in fulfilling customer requests. By evaluating the responses you can benefit from knowing where a customer is in their lifecycle. Value-risk assessments can be made in real time, determining the potential risk of churn and deciding on the right incentive for that particular customer to take action.
It is key that marketing has the ability to design and control the rules to steer the dialogues in the required business direction so they can plan, create, monitor and manage the dialogues and associated initiatives with little reliance on the IT department. 
To achieve the best results, the personalised approach has to be consistent across all available communication points. Operators offer a wide choice of communication methods to give their customers the utmost convenience, but it is a far from seamless experience. It is all well and good to offer a personalised service within the confines of one communication medium, but if a customer chooses to adopt more than one method there is normally a disconnect in the service received. A customer is oblivious to the technical challenges and, quite rightly, expects the same dialogue to continue whichever medium is chosen.
For example, when a customer receives an SMS with an incentive and a call to action this needs to be automatically reflected on the web page.  The content needs to perfectly reflect the offer without the need to search different pages to locate it.   If at the same time, the customer decides to contact the call centre, the customer service representative should be provided with information pertaining to the specific offer introduced and how the customer responded.  This information will allow the CSR to confidently reinforce the offer based on factual information and increase the probability of up and cross sales.
This can only be achieved if all customer interactions are integrated and co-ordinated across all the channels.  The result - continuous and relevant dialogues.
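The cross-channel continuity described above boils down to every touch point reading and writing one shared dialogue record, so the web page and the CSR both see the offer sent by SMS and how the customer responded. This is a hypothetical sketch; the function and record names are illustrative.

```python
# Hypothetical sketch of cross-channel dialogue continuity: a single shared
# store of interaction events that every channel reads and writes.
dialogues: dict[str, list] = {}

def record_event(customer_id: str, channel: str, event: str) -> None:
    # Any channel (SMS gateway, web portal, IVR, call centre desktop)
    # appends to the same dialogue history.
    dialogues.setdefault(customer_id, []).append((channel, event))

def current_context(customer_id: str) -> list:
    # Any channel calls this to continue the same dialogue
    # rather than starting a new, disconnected one.
    return dialogues.get(customer_id, [])

record_event("cust-42", "sms", "offer: handset upgrade")
record_event("cust-42", "web", "clicked offer link")
# The CSR taking cust-42's call sees the full cross-channel history:
print(current_context("cust-42"))
```

In practice this store would sit behind the existing CRM, portal and campaign systems, which is precisely the orchestration role the article assigns to CLM.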
Personalised marketing campaigns and initiatives will often consist of hundreds of different incentives aligned to the business strategy. With multiple offers and incentives going out to customers simultaneously, successfully fulfilling these offers is important to the overall customer experience: handset upgrades, redemption of cinema tickets or discount vouchers, for example. Delivering these items, organising a demonstration of how to use them, sending user guides, and even providing the correct set of additional services such as insurance can prove to be a logistical nightmare.
A problem at any point in the fulfilment process triggers an immediate negative perception with the customer.  The end result is customer apathy, manifesting as a continuing strong resistance to offers and the take-up of new services.

Customer Lifecycle Management
CLM is a new and unique approach that focuses on all the crucial steps required to develop strong customer relationships.  From one central system it manages and co-ordinates every piece of customer data, across all the communication touch points, personalising the content of campaigns and fulfilling all associated initiatives.   
With CLM there is no need to change or stop using the existing stack of IT systems.  It works in a co-ordinated way orchestrating existing systems in real time, and accessing the data already stored.   It works alongside CRM, IVR, web and mobile portals, campaign management systems and data warehouses, orchestrating these systems according to pre-defined business rules.  Utilising existing systems, CLM is a fast and low risk implementation that needs minimal resources to get immediate business benefits.   It comes with proven, predefined business processes, all the necessary applications, management and integration tools and a complete set of communication gateways.   
Customer Lifecycle Management is the answer to enhancing the customer experience and achieving the ultimate segment of one. Nurturing each and every relationship makes customers feel special, resulting in high levels of trust and increased loyalty. Isn't it time for you to embrace the power of personalisation to capture your customers' attention?

Mikko Hietanen is CEO of Agillic   www.agillic.com


Product success or disaster ultimately comes down to how compelling the experience is to the end user. Some of the most significant successes in the ICT industry, like mobile services and more recently iTunes and the iPod, combine innovative consumer electronics with the value of being connected to a network. Is IPTV the next service to fuel growth for service providers? Is the offer compelling enough to drive migration from satellite and cable? Per Lembre takes a look

The drive to unify video entertainment, voice, broadband and mobile has already had significant market implications. Recent M&A activities like Tele2 and Versatel, BSkyB and Easynet, Telenor and Bredbandsbolaget, were all motivated by gaining access to broader customer bases and leveraging a wider set of services to attract and retain those customers. In parallel, new technologies emerge to increase capacity and provide greater functionality in support of a converged service offer. The rationale is to share resources between services, simplify operations and improve the end-user experience.

Whilst there are many advantages in delivering multiplay services, service providers still need to look carefully at their video offer itself and consider how differentiating, and thus how successful, it may be. Video over broadband is finally growing rapidly in Western Europe. Point Topic reported almost 3 million paying IPTV subscribers worldwide as of June 2006, with half of those users resident in Europe. This is in line with some projections, but lower than many forecast just two years ago.
The European market for IPTV is fragmented to the extent that it may even be misleading to say it is one market. Rather, every country is a market in its own right, with its own specific attributes. There are several factors that service providers will need to look into when deciding their IPTV strategy. What is the broadband penetration and what does bandwidth competition look like? Are pay-TV services already popular? What platforms do people use to receive their TV signals? The introduction of digital terrestrial TV in countries like Germany, Norway and Sweden forces long-time terrestrial users to change from analogue to a digital solution. This technology shift constitutes a window of opportunity for IPTV broadband providers; however, the window is rapidly closing as people invest in digital set-top boxes to continue using their legacy antenna solution.

Content not enough
Some of the early IPTV pioneers, like Fastweb in Italy, have successfully secured exclusive content, in particular rights to the national football leagues. By carefully selecting high value content, service providers may drive initial penetration levels for IPTV in a similar fashion to how cable and satellite providers attracted subscribers to their pay-TV content some 20 years ago. The challenge here of course is that content rights are already distributed in all developed countries, so what content can possibly be out there that is attractive enough to drive mass adoption of IPTV?
Maybe that is the wrong question to ask. Over time, most premium content will tend to be available on all distribution platforms, simply because content owners will make more money that way. Instead, let's look into the unique capabilities in IPTV. What can the platform provide that traditional broadcasting can't?
First, IP networks are far better suited to deliver unicast traffic, sending data from one source to an individual consumer. This is perfect for distribution of video on demand (VOD), and to allow for a more personal user experience. Adams Media Research recently forecasted consumer spending on video download at $4bn by 2011.
Second, it allows for greater measurability when compared to broadcast technologies. IP networks can provide information about what the users want, when they want it, and what additional services they may be interested in. This has great implications for the multi-billion dollar advertising market. Targeted advertisements represent two to ten times the value of broadcast advertisements, and when the big brands start to push innovative advertising on IPTV platforms to gain interactivity and better measurement, the advertising market will embark on a new journey.
Third, and probably most important to the consumer, IP networks allow users to take part themselves. Few consumers use the service provider's home page as their starting page on the Internet, so why would they go to a single service provider portal for all video content? The concept of active users, exploring and even producing and sharing content with others, actually plays to the traditional strength of service providers: personal communication. Let's embrace it.

Consumers or producers?
User Generated Content (UGC) was one of the hottest trends of 2006 and gained a lot of business interest when Google acquired YouTube for $1.65 billion in October. Building strong communities and allowing users to produce, share, view and contribute to content creation has already made an impact on the media industry.
UGC is another example of how different innovations together form a critical mass to allow a new service to succeed. UGC wouldn't have been possible without video consumer electronics that you can carry around in your pocket. Nor would it have taken off without inexpensive PC-based publishing software. Or broadband and community portals like Break, YouTube and national news portals allowing upload of video content from citizen journalists.
When Internet users in the UK, Germany and France were asked whether they had shared any video content over the Internet, on average 8.7 per cent claimed they had, with French users scoring highest at 11.7 per cent. This corresponds to almost 3 million broadband users sharing videos over the Internet. Given the early phase of UGC, this is a very significant number. Subscribers to IPTV services may not only want to look for the hottest releases from Hollywood; they may want to take part in some of the production itself.
Studying consuming behaviours of video content on the Internet, UGC came out as the most attractive type of content with 47.1 per cent of viewers (1).
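The survey arithmetic above can be sanity-checked with a quick back-of-envelope calculation. Note that the combined UK/Germany/France broadband base used here is an assumption inferred from the article's own figures (3 million sharers at 8.7 per cent implies roughly 34.5 million broadband users); it is not a number stated in the article.

```python
# Back-of-envelope check of the survey figures.
# ASSUMPTION: the combined UK/Germany/France broadband base (~34.5M)
# is inferred from the article's numbers, not quoted in it.
broadband_users = 34_500_000   # assumed combined broadband base
share_rate = 0.087             # 8.7% claim to have shared video online

video_sharers = broadband_users * share_rate
print(f"Estimated video sharers: {video_sharers / 1e6:.1f} million")
# Consistent with the article's "almost 3 million" figure.
```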

Telco TV providers have a unique opportunity to blend UGC with broadcast content. Service providers can potentially play a significant role in adding capabilities such as encoding quality levels for UGC suitable for large screens, infrastructure for micro payments, and the concept of 'family channels', allowing users to broadcast themselves. As the IPTV market unfolds, these capabilities help differentiate IPTV against legacy TV distribution platforms.

Understanding consumer preferences
The European IPTV market is still in its infancy and it is hard to foresee how it will evolve over the next few years. Broadcasters have started to make a limited set of content available for online streaming. Peer-to-peer technologies are evolving from file sharing and voice applications into distribution platforms for television and on-demand streaming media. To add to the complexity, illegal distribution of TV channels over the Internet puts greater pressure on the protection of content rights.
The secret lies in understanding consumer preferences. Over time, consumers tend to get what they want. The early video-over-broadband market indicates that consumers are moving beyond passive viewing of broadcast TV content. They participate themselves: they vote, they produce and share, they put an alternative ending to the latest story online, and they brutally rank what they see. IPTV may put an end to zapping, it may bring a far more personal entertainment experience, and it may swing the advertising market around. To succeed, IPTV providers need to break away from me-too services and leverage the inherent personal nature of IP networking.

(1) UGC and news preferred over sports when users are asked what video content they currently download and watch on the Internet.
Source: Juniper consumer survey, Nov 2006

Per Lembre is Head of Multiplay Marketing, Juniper Networks EMEA, per@juniper.net

