From its roots as a broadcast technology conference and exhibition, IBC has evolved to become a leading event focussed on the creation, management and delivery of content for the entertainment industry. Ian Volans takes a look at what will be on offer at the show this year 


Like telecommunications, the broadcast television sector is living through a period of rapid change.  Audiences are fragmenting as the handful of national channels that originally broadcast on analogue migrate to digital terrestrial transmitters, and accommodate new services.  At the same time, high definition is increasing the quality expectations of viewers and changing production techniques.
Even in countries where multi-channel satellite pay-TV and cable add to the competitive mix, broadband is challenging the status quo with the introduction of a new distribution channel for television and video entertainment.  And with virtually every European adolescent and adult carrying a mobile, the concept of watching television on the “fourth screen” is beginning to gain traction, if more slowly than the mobile industry would like. 
In the face of this changing landscape, the IBC 2007 conference and exhibition continues to advance, reflecting new technologies and commercial realities. 

Broadcasting by broadband
Mobile has been a recurring theme in IBC’s conference in recent years.  In 2007, the growing importance of IPTV and the distribution of video content over the Internet will be reflected in the opening theme day of the conference, Broadcasting by Broadband on Thursday 6th September. 
After several false dawns, broadband now provides a delivery channel that offers an alternative to the traditional television and radio broadcast business model. Broadcasting by Broadband will explore how broadband providers, which are more closely aligned with telcos than with broadcasters, will affect the broadcast landscape.
Regulators, equipment manufacturers, service providers and content owners are all stakeholders in this developing new world of interactive multimedia, but the rules are not yet fully understood. Each technology has its proponents, but they are more focussed on competition than co-operation. End users – consumers – want the content they choose, at the quality they want, in the place they want, at the time they want it.
The opening theme day for IBC2007 will begin with a jargon-free technical description and analysis of DSL, WiFi, WiMax, Powerline, ultra wide band, digital terrestrial, digital satellite and mobile TV to provide a comprehensive understanding of the technologies, their capabilities and their role in a business plan.
A business environment session will look at regulation and finance; at the implications of telcos and ISPs being successful in challenging for spectrum released as terrestrial services go digital; and how the regulation of content developed for broadcast may be applied to broadband delivery.  The day will conclude with case studies from organisations already providing a mix of services, including broadcast radio and television, over various broadband-enabled platforms.
The latest developments in mobile TV and video consumption will be a central strand in the Digital Lifestyles - media to your home or on the move conference theme day on Saturday 8th September.
As the communications and media industries converge, opportunities to serve the digital home expand. This is just part of a broader trend towards a digital lifestyle, characterised by media on the move, digital delivery of media to the home and accessible media storage within the home.
Whether it is broadcast, webcast or user-generated, all content is increasingly contributing to the growth phenomenon of social networking. This presents new challenges to the traditional business models of broadcasters and advertisers. The interplay of the four screens – cinema, television, computer and mobile - demands cross-platform media solutions.
The Digital Lifestyle theme day brings together case studies and guidance that address the options for repurposing content for different networks, consumption environments and storage; the DRM challenges of cross-platform delivery; the potential impact of the one billion mobile phones shipped in 2006 - and again in 2007 - on media capture and delivery; and the growth of a possible fifth screen – in-car navigation devices.
As well as exploring new media opportunities and challenges within the conference, mobile and IPTV technologies also feature strongly in the exhibition. 
In 2005, a dedicated Mobile Zone was created within the IBC exhibition to provide an opportunity for application developers, content providers and technology companies to showcase their capabilities at the centre of the broadcast industry's leading international event. It doubled in size in 2006, and will be bigger again in 2007. 
Mobile Zone exhibitors are diverse, drawn from across the ecosystem rapidly growing up around mobile TV and video.  This year’s exhibitors range from designers and turnkey suppliers of end-to-end mobile TV broadcast networks, such as ENENSYS Technologies and LARCAN, to weComm, the company that developed the Sky Anytime on Mobile solution enabling users of 120 different mobile devices to access the UK satellite broadcaster’s content.  Frontier Silicon expects to use IBC to unveil a multi-standard RF and baseband system-on-a-chip that will be vital to delivering economies of scale in mobile TV handsets while the addressable market remains fragmented across the broadcast standards deployed in different countries.
Qualcomm will be present in the Mobile Zone for the third year running.  While deployment of broadcast mobile TV is stalled in much of Europe pending the release of digital dividend spectrum, Qualcomm is progressively rolling out its MediaFLO network across the US.  At a mobile TV conference in March, Jeff Brown, Head of Global Strategy and Development for Qualcomm, cited forecasts from Wall Street analysts Bernstein Research suggesting that MediaFLO could become the world’s largest single multi-channel pay-TV platform within five years.  By the time of IBC in September, Qualcomm may be in a position to provide a progress report on its US venture.
New in 2007, the IPTV Zone will provide an opportunity to explore the technologies and developments that are allowing broadband providers to compete with traditional broadcast distribution. Exhibitors in the inaugural IPTV Zone encompass big broadcast names such as Grass Valley; global technology players like Texas Instruments; middleware specialists such as MHP software solutions provider Osmosys; and HD set-top-box specialists ADB and Vidanti.
Some Zone exhibitors have relevance across both mobile and IPTV.  Snell & Wilcox, renowned for its image processing, conversion and compression technologies, is adapting its expertise to improve image quality or increase channel capacity across wireless, IPTV and Internet delivery platforms.  For broadcasters or carriers who need to repurpose content for multiple distribution methods, the company’s iCR automated content repurposing workstation can simultaneously create separate outputs optimised for IPTV and mobile TV.
Conceived to complement the peer-reviewed IBC Conference, a programme of Business Briefings provides an opportunity for companies exhibiting in the two Zones to share their experiences with any IBC delegate, visitor or exhibitor.  Each day of the free-admission Business Briefings begins with a presentation from an independent analyst on the current state-of-play to provide context for the Briefings that follow.
M:Metrics, a pioneer in the study of consumer consumption of multimedia content on mobile devices, will introduce the Mobile Business Briefings, while Decipher, one of Europe’s brightest new digital media consultancies, will introduce the IPTV Business Briefings.
IBC 2007, RAI Amsterdam: Conference 6-10 September; Exhibition 7-11 September.  More information: www.ibc.org
Ian Volans is Mobile Consultant to IBC

As the Mobile Digital TV (MDTV) market begins to ramp up, numerous new broadcast standards and technologies have emerged, creating a highly fragmented market.  Whether mobile TV will become a reality is no longer in question, but the issues of ‘how’ and ‘where’ remain to be seen. Will Europe eventually emerge as a unified mobile TV zone, or will it be split among several different technologies? Alon Ironi takes a look

MOBILE TV STANDARDS - Setting the standard

There are currently more than ten competing broadcast MDTV technologies worldwide, including those under development: DVB-T, DVB-H, DVB-SH, MediaFLO, DAB, T-DMB, DAB-IP, TMMB, ISDB-T, CMMB and DMBT. These are deployed over multiple spectrum bands – VHF, UHF, at least two slices of L-band and, for satellite broadcasting, S-band.
The implication of this multi-standard reality is that operators and handset manufacturers require maximum flexibility.  In the traditional cellular communications market, operators are in charge of the entire operation from A to Z – the network, the infrastructure and everything required to deliver services to the consumer. The MDTV market operates differently. Cellular operators do not build the networks and infrastructure but instead follow a TV service provider model, in which broadcasters such as T-Systems, Swisscom Broadcast and Mediaset install the mobile TV network, including infrastructure, towers and content aggregation. Operators then buy the service from the TV service provider and bring it to the end user as a commercial service.
Even though each operator has a preferred standard, they need the flexibility to work with two or more MDTV service providers to create competition and get a better deal. Phone manufacturers also want the ability to sell their phones in multiple countries, or in countries where there is more than one MDTV standard – requiring a multi-standard phone supported by a multi-standard receiver chip.  For handset manufacturers, multi-standard chips present an opportunity to streamline hardware and software design while reducing the overall design cycle and R&D costs. With a multi-standard chip, manufacturers can invest in one platform, which can then yield several commercial models for different standards.
How Europe’s fragmented MDTV market evolves over the next few years will therefore ultimately shape the strategies of local operators and handset manufacturers.  By examining current and recent European MDTV rollouts, conclusions can be drawn about the direction Europe is moving in vis-à-vis unification of MDTV broadcasting standards.

European MDTV overview
Telecom Italia’s introduction of commercial MDTV services, together with the birth of free-to-air DTV within Europe, firmly kick-started Europe’s MDTV efforts in 2006.
The DVB-H standard made a moderately successful debut in Italy during the 2006 FIFA World Cup, attracting over half a million users within nine months.  Telecom Italia currently offers DVB-H mobile TV services in over 2500 towns and cities across Italy and is aiming for over 1 million subscribers by the end of 2008.  H3 is also competing for customers in Italy, again offering MDTV over the DVB-H network. Three leading handset manufacturers – LG, Samsung and ZTE – are enabling end users in Italy to receive the DVB-H signal, and a range of new consumer handheld devices is emerging in Italy, already integrated with the relevant DVB-H receiver chips to support local MDTV.   However, outside Italy, countries across Europe are dabbling with various other standards.
British Telecom developed a new broadcasting standard called DAB-IP, a derivative of the DAB family.  However, its commercial rollout in the UK with Virgin Mobile in October 2006 turned out to be a failure, with limited reception and poor picture quality leaving disgruntled consumers.  But the UK was already exploring other opportunities, having trialled DVB-H with O2 in early 2006 and teamed up with Qualcomm and BSkyB to trial MediaFLO across certain UK towns in the latter half of the year.   Results were positive across the board: 85 per cent of the 375 DVB-H trial users were satisfied with the service (which provided around 16 channels), and 72 per cent indicated a willingness to start paying for it. The MediaFLO trial also achieved promising results, showing superior channel-switching time to DVB-H.
Germany presents a distinctive picture within the European framework.  With around 82 per cent nationwide DAB coverage, Germany saw T-DMB commercial rollout begin in time for the FIFA World Cup in June 2006 through operators Debitel, Mobilcom, Vodafone, E-Plus and O2, and handset provider Samsung.   Although user adoption was disappointing (only 3000 handsets sold), a second stage is planned that will use additional channels and more devices to entice users.  Like a few other European countries with strong DAB radio deployment – the UK, Belgium, Norway, Switzerland and Denmark among them – Germany’s existing DAB infrastructure makes it easier, and cheaper, to deploy T-DMB or DAB-IP.  Despite this, Germany is still poised to become a DVB-H country, with leading German operator T-Systems also opting to trial DVB-H during the 2006 FIFA World Cup.  With strong indoor and outdoor coverage and a wide selection of channels and devices from BenQ, Sagem, Nokia and Motorola, the trial was a great success.  A consortium led by Vodafone plans to roll out a commercial DVB-H network by the end of 2007, meeting consumer demand ahead of Euro 2008.  But it isn’t just football matches that are shaping the MDTV market across Germany and Europe. 
Other countries following suit with DVB-H include Austria and Finland. Finland has had DVB-H since the end of 2006 but still lacks content and operator support, while Austria enjoys state financing and a free service offering from 3Austria and Mobilkom.
Additional European countries expected to commercially deploy DVB-H services in 2007, indicating the beginning of DVB-H domination in Europe, include the Czech Republic, France, the Netherlands, Portugal, Spain, Switzerland, Ukraine and Russia. 

Europe – the case for DVB-H
The expected evolution of Europe into a DVB-H zone is a conclusion that is currently shaped by a combination of technological, economic and political influences.
In March 2007, EU Information Society Commissioner Viviane Reding filed a recommendation for a single uniform MDTV standard across the EU that favourably backed DVB-H.   A new EU law is also being developed that will allow broadcasters to operate under their own national law and sell mobile TV broadcasts throughout the union, again strengthening the case for a single unified standard across the EU.
On the technological front, analogue switch-off across Europe over the next decade has driven DVB-T deployment into homes in more than 20 countries on the continent, but until recently power consumption and reception quality did not permit mobile usage.  However, the availability of advanced receiver chips, and the debut (in Q1 2007) of the world's first DVB-T mobile phone from Taiwanese maker Gigabyte, have led manufacturers to view DVB-T as a practical, effective MDTV technology. This has not won support from mobile operators, since most DVB-T content today is free of charge. Technically speaking, nothing prevents subscriber-based DVB-T services, so this is one viable direction for MDTV in Europe.  However, the extensive deployment of DVB-T networks actually supports the case for DVB-H in the future, as DVB-H can operate on existing DVB-T networks.  Given the current DVB-H trials and the existing DVB-T infrastructure, the natural evolution in Europe is most likely towards DVB-H.
Although DVB-H can operate on existing DVB-T networks, the infrastructure must be significantly expanded for mobile devices, costing a few hundred million dollars for a sizeable country like Germany or France – a negative factor for broadcasters and operators.   It is also worth noting that there are additional challenges for the adoption of DVB-H – and of mobile TV across Europe as a whole.  These include regulation, protection of content owners’ IP rights, aggregation of content at low prices and consumer preferences.  Surprisingly, recent polls have found that mobile TV end users enjoy watching mobile TV at home. This calls for deep indoor coverage, which might increase the cost of the broadcasting infrastructure. The challenge partially falls on terminal and component makers: high sensitivity in the antennae and receiver chips, and careful terminal design, may reduce the required network density and thus the infrastructure expense.
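The point about receiver sensitivity and network density can be illustrated with a back-of-envelope coverage calculation. The sketch below is purely illustrative – the simple log-distance path-loss model, the path-loss exponent and the 3 dB figure are assumptions, not numbers from this article – but it shows why a few dB of sensitivity at the terminal can translate into a meaningfully cheaper network.

```python
def coverage_radius_gain(sensitivity_gain_db, path_loss_exponent=3.5):
    """Factor by which the maximum cell radius grows when receiver
    sensitivity improves by `sensitivity_gain_db` dB, under a simple
    log-distance path-loss model (loss proportional to d^n)."""
    return 10 ** (sensitivity_gain_db / (10 * path_loss_exponent))

def site_count_factor(sensitivity_gain_db, path_loss_exponent=3.5):
    """Relative number of transmitter sites needed to cover the same
    area: site count scales with 1 / radius^2."""
    return 1 / coverage_radius_gain(sensitivity_gain_db,
                                    path_loss_exponent) ** 2

# e.g. a 3 dB sensitivity improvement in a typical urban
# environment (assumed path-loss exponent n = 3.5):
radius_factor = coverage_radius_gain(3.0)  # radius grows ~22%
site_factor = site_count_factor(3.0)       # roughly a third fewer sites
```

Under these assumptions, 3 dB of extra sensitivity stretches each cell's radius by about a fifth and cuts the number of sites required by roughly a third – exactly the kind of saving terminal and component makers are being asked to deliver.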
Despite competition and market challenges, the political and technological factors still weigh in favour of DVB-H.  However, with a range of additional standards and alternative MDTV broadcasting infrastructure in places such as the UK and Germany, what advantages does DVB-H offer?
DVB-H offers benefits such as extra error correction, allowing reception in poor conditions, and a time-slicing mechanism that reduces power consumption.  In effect this means consumers can watch mobile TV on portable devices for longer between charges than other standards allow.  In addition, DVB-H offers strong spectral efficiency, supporting up to 15 channels, including radio and data channels. For full nationwide MDTV schemes, DVB-H is complemented by DVB-SH (still under development), which can cover out-of-town zones directly via satellite, reducing infrastructure overheads.  The only other standard comparable in performance to DVB-H is Qualcomm’s MediaFLO, which offers better channel-switching time but inferior power consumption.
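Time slicing works by transmitting each service in short, high-rate bursts, so the receiver's RF front end only needs to be awake for the bursts carrying its chosen channel. The sketch below uses illustrative bitrates (assumptions, not DVB-H requirements) to show why the idealised power saving is simply one minus the receiver's duty cycle.

```python
def receiver_duty_cycle(service_bitrate_kbps, burst_bitrate_kbps):
    """In time-sliced broadcasting, a service with average bitrate S
    carried in bursts at multiplex rate B occupies the air (and keeps
    the tuner on) for roughly S/B of the time."""
    return service_bitrate_kbps / burst_bitrate_kbps

def power_saving(service_bitrate_kbps, burst_bitrate_kbps):
    """Idealised front-end power saving (ignores tuner wake-up and
    synchronisation overhead)."""
    return 1 - receiver_duty_cycle(service_bitrate_kbps,
                                   burst_bitrate_kbps)

# e.g. a 384 kbps video service carried in 8 Mbps bursts:
saving = power_saving(384, 8000)  # ~95% front-end power saving
```

Real-world savings are lower because the tuner must wake slightly early to resynchronise before each burst, but the principle is why time slicing extends viewing time on battery.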
Thus the anticipated regional segmentation points towards Europe becoming a DVB-H zone, supported by the factors above.  Economies of scale strengthen this argument, as the rising costs of running multiple MDTV standards within the EU would be eliminated by a single uniform standard.  DAB-IP and T-DMB may yet prevail in the UK and Germany respectively, but ultimately we are likely to witness urban and rural consolidation around DVB-H, the highest-performing standard on the market.

Alon Ironi is CEO of Siano Mobile Silicon, and can be contacted via e-mail: siano@siano-ms.com ;
tel:  +972-9-8656993

Mobile television is one of the most discussed topics in the telecoms industry today and destined to provide new opportunities for many organisations within the sector. The technology promises to provide a brand new platform for media companies that want to extend their content into the mobile space, and new revenue for operators faced with falling voice ARPU. Kamil Grajski discusses the issues in developing mobile television, and provides an overview of regulatory, technical and economic challenges in bringing this opportunity to market

FLO TECHNOLOGY - It's all about the business

The catalyst for this new market is a thriving mobile communications sector. Third generation (3G) mobile technology is taking off, with new subscriptions now outstripping second generation (2G) technology in many European markets. This, in turn, has generated a rapid increase in consumer expectations of services and functionality from the mobile phone. Meanwhile, content owners and broadcasters see an exciting opportunity to extend their brands into the mobile space, reaching new consumers and pushing content to a far-reaching platform. Yet there are challenges faced by those developing the technology in delivering a high-quality product, making the business model work, and standardising.
The opportunity is born of the inherent popularity of television – almost everyone in European markets owns a set. Moreover, around 1bn mobile handsets will be sold globally in the next 12 months, which illustrates the huge potential for mobile TV to become a truly mass-market technology.
Mobile operators, however, need to deliver mobile television without the prohibitive costs associated with transporting data over 3G networks, where unicast delivery is inherently ‘success-limited’ for the mass market: the more viewers, the higher the cost.  Broadcast overlay technologies such as FLO – and competing standards like DVB-H, ISDB-T and DMB – support the delivery of high-quality streaming live video and audio to the mass market at a low cost per user per bit. FLO, for example, enables, in a typical 8MHz UHF European channel, more than 30 QVGA linear video channels, 10 high-quality audio channels and hundreds of minutes per day of ‘Clipcast’ (file-based) short-format cached content.
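The capacity claim can be sanity-checked with simple arithmetic. The per-stream bitrates and the usable multiplex capacity below are hypothetical round numbers, not figures from the FLO specification, but they show how 30 video and 10 audio channels can plausibly fit in one 8MHz channel with headroom left for Clipcast data.

```python
def multiplex_budget_kbps(video_channels, video_kbps,
                          audio_channels, audio_kbps):
    """Aggregate bitrate a broadcast multiplex must carry for the
    given mix of linear video and audio channels."""
    return video_channels * video_kbps + audio_channels * audio_kbps

# Illustrative assumptions: ~300 kbps per QVGA video stream,
# ~48 kbps per audio channel, and a hypothetical ~11,000 kbps of
# usable payload capacity in an 8 MHz channel.
USABLE_CAPACITY_KBPS = 11_000

needed = multiplex_budget_kbps(30, 300, 10, 48)  # 9,480 kbps
headroom = USABLE_CAPACITY_KBPS - needed          # left for Clipcast
```

Even with these rough figures, the linear channel line-up consumes under 10 Mbps, leaving spare capacity that a file-based service like Clipcast can drip-feed to handsets throughout the day.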

Mobile TV business model
DVB-H, DMB and ISDB-T also envision an overlay network for multimedia broadcasting services, but the FLO advantage lies, in part, in support for advanced multiplexing schemes that promise greater coverage with fewer transmitters and a broader service package. The real market advantage is that the proven pay-TV model can be followed closely, with a ‘large channel’ package made up of a base-level service plus premium channels that earn the broadcaster a higher margin on their content. For a network operator, lower capex and more capacity mean a larger channel package to offer customers, generating a higher rate of return. As a financial model, this is the difference between profitability and loss.
Technology is only one part of this picture – one of the key debates in the past year has been the establishment of a sustainable business model for mobile TV. In Korea, for example, more than two million devices were sold on the T-DMB free-to-air platform during 2006. While raw user numbers point to a successful deployment, advertising revenues from T-DMB broadcasts are below initial forecasts. Moreover, network build-out costs for T-DMB have been a drain on the resources of free-to-air broadcasters and handset OEMs, with return on investment now in doubt.
There have been other issues with deployments too. In Finland, home of one of the major proponents of DVB-H technology, Nokia, the promised live commercial mobile TV service is yet to materialise. Issues with rights for content have stalled a fully commercial service, with further delays likely. The problem: a free-to-air business model that disincentivises too many of the key players and hinders the development of innovative and compelling services.
Meanwhile, in Japan the deployment of One-Seg, launched in April 2006, has encountered a number of teething problems. The service, which is based on ISDB-T technology, is currently offered to consumers on a free-to-air basis with a mobile contract. The free-to-air nature of the service initially led consumers to sign up, but to cancel the phone contract, thus obtaining TV on their phones for free. If mobile TV is to be the revenue generator that many operators anticipate it to be, then such anomalies must be ironed out.
There is, however, a positive case study in the United States where Verizon Wireless launched its pay TV V-CAST mobile TV service on March 1, 2007 in approximately 25 selected markets.  The service, priced at $15 and initially offering eight live channels of content, is operated on the MediaFLO USA broadcast network. FLO Forum member MediaFLO USA promises to create the world’s largest TV market. This is an argument that was underscored by the announcement in March 2007 that the country’s largest operator – AT&T Wireless – will deploy a MediaFLO-based mobile TV service in late 2007. The incentives created by a paid-for, premium content mobile TV service means that all members of the value chain – technology providers, network operators, content owners and broadcasters – stand to benefit.
Technology licensing remains a crucial question to be answered in the coming months. In some cases licensing terms for mobile television technologies remain unclear. Indeed, a number of industry players have questioned how much they will be called upon to pay DVB-H essential patent holders for the technology. There is yet to be a definitive proposal and the exact terms of the negotiations are not known, which has caused a number of major broadcasting players in Europe to express concern about the risk of “patent ambush” – where technology is deployed and the cost is only known at a later date.  It is this uncertainty surrounding licensing for mobile television that could delay the commercial implementation of networks in Europe.
FLO, on the other hand, is licensed under a broad-based licensing program that enables the development, manufacture and sale of FLO-enabled handsets.  The program is designed to encourage existing CDMA chip licensees to develop and market CDMA/FLO multi-mode chips without increase in the standard royalty rate for CDMA-based handsets. CDMA includes CDMA2000 and/or WCDMA/UMTS.

Standardisation and spectrum
Standardisation is another important consideration for industry. The FLO Forum, which now boasts 80 members from all parts of the mobile TV value chain, has moved rapidly towards global technology standardisation. In August 2006 the Air Interface Specification was published by the Telecommunications Industry Association (TIA) as TIA-1099. There then followed TIA-1102, 1103, 1104 and 1120 which cover minimum performance standards for transmitters and handsets, and the FLO transport layer.  In parallel, the ITU-R Study Group 6, in a recently approved New Recommendation relating to broadcast multimedia services and applications, included FLO as a referenced technology with the designation ITU-R Multimedia System M.  Last, ETSI has undertaken initial efforts in the area of FLO standardization with the recent approval of a New Work Item in the ETSI Broadcast Committee.
Standardisation of mobile television technology is important for global and European markets, in particular, because it drives down both component and development costs, speeds up time to market for devices and ensures that carriers’ requirements are obtained first-hand. Standardisation also ensures interoperability and lowers operational costs for FLO-related products and services.
The nationwide 700MHz spectrum footprint acquired by Qualcomm at FCC Auction 49 in 2003 in the United States ultimately led to the launch of a full commercial FLO-based mobile TV service in the US in 2007. In Europe the picture is different, and the availability of spectrum will play a pivotal role in the rollout of services. A number of regulators across the continent are predicting analogue switchover as late as 2012 – and only then freeing up UHF spectrum for new broadcast services. However, in certain markets there appears to be a realistic path to commercialisation over the next 18 months. There are, for example, L-band spectrum auctions planned in the UK, and other European territories may follow suit in the next year. It is the harmonisation of UHF spectrum that will prove to be the catalyst for widespread deployment of mobile television technologies.
Yet there are technical and commercial challenges to be met by all those engaged in the business of broadcast mobile television services. While the many competing technologies are now at either commercial or pre-commercial stages of deployment, market forces will select the winners. The prevailing view is that there will not be a single dominant global standard for mobile broadcast.  Each global region will present a unique combination of regulatory, technology, business and legislative conditions.  Today, FLO Forum member companies are planning accordingly.  Indeed, many FLO Forum member companies that have previously announced support for other mobile broadcast technologies have publicly announced support for FLO technology as well.
2007 and beyond is a critical period for mobile television – one in which commercial deployments will grow and mass-market subscription services offered over the first truly large-scale nationwide network in the United States will be closely scrutinised. The market is set to absorb copious amounts of data on how consumers use mobile television, what content is most compelling and what further challenges lie ahead. One thing is certain – mobile television will remain at the forefront of debate in our industry for the foreseeable future.

Dr. Kamil A. Grajski is President of the FLO Forum, and can be contacted via e-mail: kgrajski@floforum.org

Eric Lemarechal looks at how the barriers to mobile application development can be broken down

It's becoming increasingly obvious that it's possible to reach both employees and customers through a feature-rich communications device that they carry around all day: the mobile phone. Yet there are major obstacles to the development of mobile apps, since the market is heavily fragmented. The facilities a handset can offer vary enormously, not just between mobile phones made by different manufacturers but within each manufacturer's own model range.
In the USA, for example, there's a degree of uniformity provided by the fact that Qualcomm's Brew has established itself as the premier development environment. In Europe, however, there's only one common factor - Java. Virtually all handsets – from the very basic, prepay entry level phones right up to the very latest smartphones – are capable of running the mobile version of Java, J2ME. Hence the ability to build mobile-aware applications using Java has enormous potential.
Wouldn't it be great if you could write just the one piece of code and run it on exactly the sort of mobile phones that everybody already possesses? That's exactly what vendors of conversion engines aim to provide. These packages enable fast and efficient porting of Java apps to hundreds of different mobile phones.
The objective with any conversion engine is to take the pain out of ensuring that an application will run on an ordinary handset in exactly the way its author expects. The conversion process doesn't just cover obvious features like keypad mapping; it also covers sound, graphics and connectivity – such as HTTP, Bluetooth, SMS and NFC (Near Field Communication).
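Conceptually, a conversion engine keeps a capability profile for each supported handset and uses those profiles to adapt one codebase into a build per device. The sketch below is a hypothetical illustration – the device names, profile fields and API sets are invented, not taken from any real conversion engine – of how profiles determine which handsets a given application can be ported to.

```python
# Hypothetical handset capability profiles: a real conversion engine
# maintains hundreds of these, covering keypad layout, screen size
# and the connectivity APIs each device actually exposes.
HANDSET_PROFILES = {
    "basic-prepay": {"keypad": "ITU-T", "screen": (128, 128),
                     "apis": {"http"}},
    "mid-range":    {"keypad": "ITU-T", "screen": (240, 320),
                     "apis": {"http", "bluetooth", "sms"}},
    "smartphone":   {"keypad": "qwerty", "screen": (320, 240),
                     "apis": {"http", "bluetooth", "sms", "nfc"}},
}

def build_variants(required_apis):
    """Return the handsets a given application can target: one
    codebase fans out into one ported build per compatible device."""
    return sorted(name for name, profile in HANDSET_PROFILES.items()
                  if required_apis <= profile["apis"])

# An NFC ticketing app only fits handsets exposing an NFC API,
# whereas a plain HTTP app can target the whole supported base:
nfc_targets = build_variants({"http", "nfc"})
web_targets = build_variants({"http"})
```

A real engine would also rewrite keypad handling, rescale graphics and swap sound APIs according to each profile; the point of the sketch is only the profile-driven fan-out from one codebase to many device builds.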
The first sector to pick up on the advantages of conversion engines was the mobile games industry, while those in mobile marketing have quickly appreciated the potential offered by application conversion. However, many believe that this technology is just as relevant to ordinary, everyday business enterprises as it is to companies with an existing mobile focus.  It's particularly suitable for rolling out CRM style applications, for example. By harnessing the power of the mobile phone, corporations can reach out directly to their customers with their applications, not solely to their own workforces.
Companies connected with transportation are among the obvious potential customers. A good example of a transport-orientated application is one that utilises a mobile phone's NFC capability, enabling handset owners to purchase a train or bus ticket simply by touching the phone against a ticket barrier. More impressively, the conversion process could be employed by a car rental firm to create an application that enables customers to open the hire car's doors via the mobile phone. That same app could then load a planned route into the phone's built-in mapping and navigation software.
Mobile apps don't have to look dull, either. Using conversion engines, it is possible to create the kind of 'feature-rich' applications that companies are accustomed to building on the web. One capability that has until now been missing from mobile applications is the ability to mimic Adobe's Flash environment. The problem is that few handsets have sufficient processing power and available memory to run Adobe's FlashLite offering, which is produced specifically for mobile phones. One solution is a facility called Flashlike Forms, which lets designers use pull-down menus, text fields and radio buttons, all of which can be mixed with animated graphics. Once the user makes a particular selection via the keypad, the menus are correctly populated with text. The chief benefit of Flashlike Forms is that it can run on very basic mobile phones – not merely on powerful smartphones.
The most popular way of distributing mobile apps is via an OTA (Over-The-Air) download. The drawback is that such apps are restricted to a maximum of around 1 MB. An alternative method of rolling out a feature-rich application to low level handsets is therefore to take advantage of a product from SIM card specialist manufacturer, Gemalto. This company produces the Multimedia SIM card, which can offer up to 1 Gigabyte of storage. The advantage here is that high resolution images, for example, can be stored on the SIM card along with the Java code that enables the handset to read the data. The Multimedia SIM card could then be swapped from phone to phone and still function regardless of the type of handset into which it has been inserted. Loading applications and data onto a SIM card also opens up the possibility of distributing applications via a broader range of outlets. Instead of requiring customers to download an application themselves over the Internet, apps could be loaded onto the SIM via existing mobile phone sales outlets.
Many IT departments remain wary of venturing outside a very tightly controlled software development environment, which is heavily dominated by Microsoft. The reality is, however, that there are very few opportunities to create a 'consistent Windows-based experience' in the mobile world. It's been estimated that less than one per cent of existing handsets run Windows - let alone support the very latest version, Windows Mobile 6.0. By contrast, taking the Java approach enables an IT department to roll out an application to the kind of phones which employees and customers already own. Given that conversion engines offer support for hundreds of phones, there isn't even any real necessity to carry out an audit to discover what make and model of phones users already possess.
Companies have already learnt from the WAP experience and appreciate that mobile applications have to work perfectly when they're rolled out. If they don't, there probably won't be a second chance. The typical customer for a conversion engine won't select the entire base of supported phones but will pick a subset of, say, around 220 handsets. Of those, normally it's only necessary to thoroughly test some 60 to 70 handsets to ensure the application works exactly as intended. There's flexibility, too: it's quite common for users to customise parts of the conversion process – by selecting a specific level of graphics resolution to run on a particular mobile phone, for example.
While producers of mobile phone games formed the first wave of conversion engine users, suppliers of financial services will almost certainly form the next wave of mobile apps adopters. As an example, mobile phone apps will provide a highly convenient way for banks to inform their customers that they have sufficient credit to make a purchase when they're standing at the point of sale. Previously, consumers had to return home to their PCs to discover if they had sufficient credit, or were forced to accept the finance package being offered by the sales outlet itself.
As a final point, more and more mobile handsets are now capable of accessing the conventional web – not just WAP sites. Mobile apps can tap into this capability, too. A Java app can be designed to send a request to a web server to download an image or text. This effectively means that mobile phones can tap into the many web-based applications which corporates have already spent time and energy building.
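The mechanism described above, a handset app pulling text or images from an existing web application over plain HTTP, can be sketched as follows. This is an illustrative sketch in Python rather than phone-side Java; the host, path and parameter names are hypothetical, and the network call is pluggable so the logic can be exercised without a live server.

```python
from urllib.parse import urlencode

def fetch_resource(host, path, params=None, opener=None):
    """Build a simple HTTP GET and return the response body as text."""
    query = ("?" + urlencode(params)) if params else ""
    url = "http://%s%s%s" % (host, path, query)
    if opener is None:
        # Default to a real network call; tests can inject a fake opener.
        from urllib.request import urlopen
        opener = urlopen
    with opener(url) as resp:
        return resp.read().decode("utf-8")
```

A phone-side Java client would do the same thing with its platform's HTTP connection class; the point is simply that the mobile app becomes one more client of the web applications the business already runs.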
For those businesses whose IT departments are already over-stretched, there's also good news. Conversion engine vendors are rapidly building relationships with established systems integrators who can readily take on the necessary development work rather than requiring the work to be done in-house.

Eric Lemarechal is co-founder of Mobile Distillery

The mobile marketing industry is growing at a rapid rate with new innovations and business models being developed and deployed at an increasing pace. This makes for a dynamic and fast-moving industry, says Laura Marriott, but also one that needs to adhere to a common set of best practice guidelines in order to grow responsibly and protect the consumer experience

MOBILE MARKETING - Playing by the rules

To ensure that as the mobile marketing industry develops, consumers not only have a positive mobile experience but are also treated fairly by all in the value chain, the Mobile Marketing Association (MMA) and the mobile industry have developed and adhere to the MMA’s Consumer Best Practices (CBP) Guidelines. Best practices also help to create simplicity and commonality for the industry and enable all players to operate according to a common set of criteria - ultimately, growing the mobile platform as a new business channel.
The CBP guidelines are published every six months and highlight important areas with regard to cross-carrier mobile content services, to help ensure a sustainable mobile channel. The guidelines have been integrated into carrier and aggregator contractual agreements with brands and content providers and, as such, are adhered to by all players in the ecosystem.
The guidelines are built upon the MMA’s Code of Conduct for Mobile Marketing, which was first ratified by the MMA board in 2003. The code of conduct itself is organised around six main themes:
•    Choice. The consumer must “opt-in” to a mobile marketing programme
•    Control. Consumers must be allowed to easily terminate or “opt-out” of a programme
•    Customisation. Any data supplied by the consumer must be used to personalise content (e.g. restricting communications to those categories specifically requested by the consumer)
•    Consideration. The consumer must receive or be offered something in return for receiving the communication (product and service enhancements, entry into competitions etc)
•    Constraint. The marketer must effectively manage and limit mobile messaging programmes to a reasonable number - defaulted to a maximum of two new campaigns per week - unless the consumer opts in for further information
•    Confidentiality. Commitment to not sharing consumer information with non-affiliated third parties (unless given permission to do so by the consumer)

Best practice in action
The following demonstrates how the code of conduct principles are used in the ‘real world’, by showing examples of shortcode programmes as defined in the best practice guidelines. The overarching guideline is that the consumer is in control of their interaction with the programme.
There are basically two kinds of shortcode programme: standard rate SMS and premium rate SMS, the former requiring single opt-in and the latter requiring double opt-in. Regardless of type, the goal is to ensure that the consumer opt-in is clearly communicated to the subscriber, along with the obligations they will incur by participating in the programme.
For standard rate programmes, subscribers should indicate their willingness to participate in a programme and receive messages from the programme as follows:
 - Subscriber sends a Mobile Originated (MO) message to the shortcode.
 - Programme responds with pertinent phone, programme, and contact information via a Web/WAP/handset application-based form. This opt-in applies only to the specific programme a consumer is subscribed to and should not be used as a blanket approval to promote other programmes, products, and services. However, after the subscriber has been given the complete details about the opt-in scope, the subscriber may specifically agree via their handset to receive other messages (this would be referred to as a double opt-in).
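A minimal sketch of that single opt-in flow, with the opt-in deliberately scoped to one programme. The programme details, phone numbers and reply wording here are illustrative assumptions, not the exact text mandated by the guidelines.

```python
class ShortcodeProgramme:
    """One programme (keyword) running on a shortcode."""

    def __init__(self, keyword, description, sponsor, help_contact):
        self.keyword = keyword
        self.description = description
        self.sponsor = sponsor
        self.help_contact = help_contact
        self.opted_in = set()  # opt-in is scoped to this programme only

    def handle_mo(self, msisdn, text):
        """Process a Mobile Originated (MO) message sent to the shortcode."""
        word = text.strip().upper()
        if word == self.keyword:
            self.opted_in.add(msisdn)
            # The reply must carry programme, contact and opt-out information.
            return ("%s: you are subscribed to %s. Help: %s. "
                    "Reply STOP to opt out." %
                    (self.sponsor, self.description, self.help_contact))
        if word == "STOP":
            self.opted_in.discard(msisdn)  # Control: easy termination
            return "You have been unsubscribed from %s." % self.description
        return None  # not a recognised keyword for this programme
```

Because the opt-in set belongs to one programme object, subscribing to this keyword never grants blanket approval for other programmes on the same shortcode.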
The following table is an example of a standard rate mobile marketing campaign for “The Sandwich Shop Health Alerts.”
By contrast, premium subscribers must positively acknowledge the acceptance of a premium charge before premium charges are applied to their account. This is why the first time a subscriber participates in any premium programme, they will be asked to confirm their participation and accept the rate charges, hence the “double opt-in.”
This requirement should apply the first time a subscriber participates in a specific programme on a specific shortcode. Separate programmes, even if they are offered on the same shortcode, will require a separate opt-in and the content provider/aggregator is responsible for tracking the programme opt-in information by subscriber.
There are two acceptable mechanisms for opt-in activity: web-based and handset-based. In all instances, however, the subscriber must take affirmative action to signify acceptance of the programme criteria. Within the double opt-in flow, the following information (at a minimum) must be provided to the subscriber:
•    Identity of programme sponsor: Defined as the organisation that markets the programme.
•    Contact details for the programme sponsor: Either a toll-free number or a website address.
•    Short description of the programme: e.g. “Fun Stuff”; “Premium Chat”.
•    Pricing terms for the programme: e.g. $0.99 per mobile originated message; $3.99 per month; whether standard messaging charges apply in addition to premium charges.
•    Notice that the charge will be billed on the subscriber’s postpaid phone bill or deducted from their prepaid balance.
•    Opt-out information
The following table is an example of a one-time premium weather message (transactional programme):
The following table is an example of charges the next time the same subscriber tries the same programme:
Many consumers prefer to provision and interact with SMS programmes from the Internet. If the second opt-in is from the Internet, the content provider must positively confirm that the authorised subscriber is acknowledging the opt-in. This can be done using a web-based PIN or phone MO message. This message must also include programme pricing and terms, and opt-out information. In addition, the content provider should use this channel to provide more detailed information about the programme. Regardless of the web opt-in details, the goal is that the entire terms of the offer must be clear to the subscriber throughout the process.
The following table is an example of a subscription programme with web sign-up:
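The web-initiated double opt-in described above can be sketched like this; the PIN mechanics are an illustrative assumption about one common implementation, not prescribed wording from the guidelines.

```python
import random

class DoubleOptIn:
    """Web sign-up (first opt-in) confirmed from the handset by PIN (second)."""

    def __init__(self, rng=None):
        self.rng = rng or random.Random()
        self.pending = {}       # msisdn -> PIN delivered by SMS
        self.confirmed = set()  # subscribers with a completed double opt-in

    def web_signup(self, msisdn):
        """First opt-in on the web form: issue a PIN and SMS it to the handset.
        That SMS must also state pricing, terms and opt-out information."""
        pin = "%04d" % self.rng.randrange(10000)
        self.pending[msisdn] = pin
        return pin  # in reality this goes out as an SMS, not a return value

    def confirm(self, msisdn, pin):
        """Second opt-in: the subscriber proves control of the handset."""
        if self.pending.get(msisdn) == pin:
            del self.pending[msisdn]
            self.confirmed.add(msisdn)
            return True
        return False
```

Only after `confirm` succeeds may premium charges be applied to the subscriber's account, satisfying the positive-acknowledgement requirement.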
It is important for subscribers to understand and be in control of their participation in shortcode programmes, and programme information should be easy to interpret. Regardless of manner of entry for a subscriber, help messaging commands, phone numbers, URLs, and e-mail addresses should result in the subscriber receiving help with their issue. Dead ends that do not result in the ability for subscribers to resolve their issues are not acceptable.
If the shortcode has multiple programmes (keywords) on the same code, the application should respond in one of two ways:
•    If the subscriber has opted in to only one programme, the application should supply the information for the programme the subscriber is opted-in to.
•    If the subscriber is opted-in to multiple programmes, the application should present a multiple-choice question asking the subscriber what programme they would like help on.
These messages should not result in premium charges to the subscriber’s bill and should be available to anyone who requests help information from the shortcode via SMS.
To help subscribers understand their participation, each programme should respond with the programme details listed earlier (contact details, programme description, opt-out information, pricing terms etc) when the subscriber sends the keyword HELP to the programme shortcode.
Should there be multiple programmes running on the shortcode, the subscriber can be directed to a Web site, WAP site, SMS quiz session, or toll-free number that provides an enhanced customer care experience, as long as basic information about the programme is in the help reply message.
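The HELP-routing rules above might look like this in practice; the reply texts and programme data are illustrative assumptions.

```python
def help_reply(msisdn, programmes):
    """Route a HELP message sent to a shared shortcode.
    programmes: list of dicts with 'name', 'details' and 'opted_in' (a set)."""
    mine = [p for p in programmes if msisdn in p["opted_in"]]
    if len(mine) == 1:
        # Opted in to exactly one programme: reply with its details directly.
        return mine[0]["details"]
    if len(mine) > 1:
        # Opted in to several: ask which programme help is needed for.
        choices = ", ".join("%d=%s" % (i + 1, p["name"])
                            for i, p in enumerate(mine))
        return "Which programme do you need help with? Reply " + choices
    # Not opted in to anything: still reply, free of premium charges.
    return "For help, call customer care or visit the programme website."
```

Every branch returns something, reflecting the rule that HELP requests must never dead-end.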

Building on best practice
The above is just a sample of what is covered in the MMA’s Consumer Best Practices guidelines. Best practices are a first step in building an industry and reflect its sustainability and maturity. The industry must monitor and enforce these best practices to ensure the success and integrity of the mobile content business. Most carriers and aggregators today perform a version of their own monitoring, with enforcement at the discretion of the carrier. Collectively, the industry will soon launch a new industry-wide monitoring initiative.
Best practices help ensure a level playing field and consistent consumer expectations in all mobile data services. Best practices are important not only to grow the industry but also to ensure a positive consumer experience. Understanding and adhering to industry best practices, therefore, are key to everyone's success. So make sure you understand the rules we play by!
Laura Marriott is president of the Mobile Marketing Association (MMA)

David Knox examines the potential of GPS mobile phones, and how real-time charging and control systems can help this innovation successfully map the future of the mobile phone as a new marketing device for consumers

GPS ON MOBILE PHONES - Marketing maps

This is the year of the GPS revolution, a technical achievement that hasn't stirred this much industry excitement since the launch of 3G phones. Every major mobile maker around the world is scrambling to become the first to launch the most efficient and user-friendly satellite navigation system for their new handsets. No longer confined to the dashboard of your car, GPS technology will be available in many new handsets and will not only tell us where we are, but also give us tips on where to dine, shop, or see a film.
The concept of GPS navigation software for phones has been around for a while and has made some significant progress in the US market, where it is currently used as an enhanced emergency system that enables emergency operators to work out the location of someone calling from a mobile phone to help them out of trouble.
In Europe, however, the success has been minimal. Despite its general availability, mobile GPS has never hit the mainstream jackpot – primarily because of fussy, user-unfriendly gadget requirements (i.e. a separate GPS module) and GPS's unsuitability for pedestrians trying to get from point A to point B without a car.
This is all about to change with a range of innovative new handsets, complete with integrated GPS receivers, offering a useful and more relevant piece of navigational technology for any user. Leading the pack of new phones is Nokia's N95, which has already gone on sale in the UK and other markets. Boasting excellent, computer-style graphics, owners will be able to use the handset as a full-blown navigation device, whether in their vehicle or on foot.
GPS also promises greater customised service as well, enabling mobile users to combine calendar and contact functionality with navigation. For example, the user can tell the device to navigate them to their next appointment, which may be a friend's birthday party, and also navigate via a shop or outlet selling whatever they may need to pick up en route.
This level of mapping sophistication opens many doors for both the consumer and the mobile operators. The GPS phone can help users make lifestyle choices by not only telling people where they are going but, with the help of advertising campaigns, letting them know what can be enjoyed along the way. Imagine turning on a GPS phone outside Bond Street tube station and trying to find the best route to the nearest park. The phone will not only be able to tell you how to get to Green Park but also inform you of relevant special offers at the Fenwicks department store – which you need to pass to get to the park.
Indeed, mobile GPS will not just be about helping users get to a location, but will also present an opportunity to enhance their lifestyles through location-based marketing. This raises the question of whether customers will be willing to accept mobile marketing with their GPS handsets, as well as the privacy issues that go along with them.
Blogging and GPS
The blogging phenomenon suggests that many customers are ready for a customised approach to marketing and social networking. In Japan, for example, car company Honda has already introduced a GPS device that allows drivers to make comments on points of interest along their route. Whether it is providing a review of a restaurant or a description of a museum exhibit, the navigation system offers a social-networking opportunity that allows drivers to make information available to other GPS users in real-time.
What has emerged in the Internet world is indisputable evidence that users rely heavily upon the word of like-minded individuals when making choices on which restaurant to eat at, which hotel to stay at and so on. Combining navigation functionality with instant access to reviews and tips along the way will provide the mobile user with a truly mobile Internet experience.
Mobile GPS has the potential to fine-tune this method of networking, by allowing each subscriber to specify their interests and subsequently receive customised itineraries, targeted advertisements and other useful information based on their destination – without the need for them to trawl the Internet to find what they are looking for. This could be the latest new restaurant located close to the theatre which the GPS system is helping the mobile user find on the map, or the nearest toyshop on the way to attending the birthday of a friend's child. Combining mobile GPS and marketing is not merely a possibility but inevitable as people become more engaged with the technology and want to enjoy as many new experiences as possible.
So how will mobile GPS be transformed into a profitable, commercial success? Convincing customers to purchase GPS handsets, which are still relatively expensive, is the first step. The second is the delivery of the aforementioned marketing tools. A crucial element of launching a successful ad campaign is effective consumer profiling, which has already been touched on. Simply put, in order for any mobile operator to make money out of location-based services, they must be able to earn revenue from value-added services such as mobile marketing and targeted information provision, as there will not be a charge for the mapping service itself.
Real time charging and control applications can help operators conduct effective advertising campaigns through their ability to store and access user profile information, and then use that information in combination with real-time location data to deliver relevant adverts to the mobile device.
Information about the brand tastes and interests of the mobile user can be gathered by the operator before the user agrees to subscribe to advertising. All this information can then be stored in the network and be accessible to the charging and control solution to ensure that advertising is targeting the right audience and will appear whenever a customer uses the GPS device to map out their journey. So, for example, if one mobile user is identified as a football fan and has turned on their GPS phone to find directions to the stadium where a match is being held, then the charging and control solution could access previously stored profile information in real time and automatically check to see whether anything near the stadium would appeal to the mobile user. If a restaurant near the stadium is offering a two-for-one lunch deal, then an advert could appear letting the subscriber know about the offer.
A charging and control device can also identify that the user has viewed the advert and subsequently check whether it has been acted upon. For example, if the user wants to avail himself of the offer, he requests a promotional code by clicking a link on the advert, and then uses this code to validate the offer in the establishment. This method of monitoring not only helps to track the success of a particular campaign but also to determine the revenue share generated as a result. The charging and control device would be able to provide the operator with details of how many and which subscribers viewed the advert, and how many actually responded to the promotion. Most importantly, the charging and control device could calculate the revenue share from the campaign based on the agreement between the operator and the advertiser.
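The targeting, redemption-tracking and revenue-share steps described above can be sketched as follows. The data model, the 30 per cent revenue share and the promo-code mechanics are all illustrative assumptions; a real charging and control platform would work against live network and billing data.

```python
class ChargingControl:
    """Toy model of profile-based ad targeting with redemption tracking."""

    def __init__(self, revenue_share=0.30):
        self.revenue_share = revenue_share  # operator's assumed cut
        self.profiles = {}   # msisdn -> set of declared interests
        self.offers = []     # dicts: location, interest, text, code
        self.viewed = set()  # (msisdn, code) pairs: advert was delivered
        self.redeemed = set()

    def select_advert(self, msisdn, destination):
        """Match stored profile interests against offers near the destination."""
        interests = self.profiles.get(msisdn, set())
        for offer in self.offers:
            if offer["location"] == destination and offer["interest"] in interests:
                self.viewed.add((msisdn, offer["code"]))
                return offer["text"]
        return None  # nothing relevant: no advert is pushed

    def redeem(self, msisdn, code):
        """Validate a promotional code presented in the establishment."""
        if (msisdn, code) in self.viewed:
            self.redeemed.add((msisdn, code))
            return True
        return False

    def campaign_revenue(self, code, value_per_redemption):
        """Operator's share of campaign revenue under the assumed agreement."""
        redemptions = sum(1 for _, c in self.redeemed if c == code)
        return round(redemptions * value_per_redemption * self.revenue_share, 2)
```

Tracking views and redemptions separately is what lets the operator report both advert reach and actual response rates back to the advertiser.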
Handsets with GPS technology are ideally placed not only to enable navigation and true location based services, but also to combine this dynamic information with social networking and targeted marketing services.
The next step is creating the commercial models necessary to make GPS handsets a hit in the marketplace. So far the concept has had very few critics and everyone seems to be taking a 'wait and see' approach when determining the success of the pocket-size navigator. What is certain, however, is that the demand for social networking and the marketing it enables has already been proven by the blogging generation. Whether it will win mobile customers at the same phenomenal pace is anybody's guess, but perhaps the combination of GPS navigation capabilities and real-time access to targeted advertising and networking will finally deliver upon the promise of true location based services.

David Knox is Product Marketing Director at VoluBill

For 3G to be a success, Alon Barnea explains, users need to be motivated to use it and be given an easier way to adopt the technology

3G TAKE-UP - Pulling the usage trigger

The introduction of 3G and video calls was not met with the fanfare response that was expected by the industry. Even now, with over 100 million subscribers (and that number rapidly growing), 3G mobile users still represent a rather small part of the overall two billion mobile subscribers worldwide, and video usage within this video-enabled community is still deemed a disappointment. Point-to-point video calls are evidently not a big enough draw to encourage people to jump on the 3G bandwagon, and with an estimated one in ten mobile phone users actually owning a 3G phone, it seems unlikely that person to person video calling will be the phenomenon that SMS has become. Though most share the notion that video will become mainstream and a major revenue source, the question remains: what will make video communications a success?

The UMU factor
Much has been said about the limiting factor of 3G video, stemming from peoples’ reluctance to accept “intrusive surprise” video calls. That’s where the User Motivated Usage (UMU) factor comes in. The UMU factor relates to video applications where an entity is generated, at a given moment, to motivate users to make (rather than receive) a video call. It is the key to elevating the level of video usage over mobile devices, by taking out the “surprise call” element and the absolute necessity to be seen.
For 3G to be a success, users need to be motivated to use it and be given an easier way to adopt the technology; rather than having to wait for their friends to catch on too. With the UMU factor, 3G can be used by anybody today for an exciting experience that is independent of the 3G availability of other participants.
Naturally, “traditional” video communication is happening now. Take a group of female friends, for example, getting ready for a night out together. The advent of 3G mobile to PC communication opens up a new avenue for these women to get their friends’ opinions on how they look in a particular outfit. Using their 3G phones, or webcams on their PCs, the group of women can ‘meet’ in their own online community and compare clothes from their separate homes before meeting later in the evening.
Another example is that of a businessman travelling through France on the TGV: he can still take part in a face-to-face briefing with a client based in Scotland, using his 3G mobile phone to connect to his client’s PC, while conferencing in his partner, who is sitting at her desk in Brussels.
But wouldn’t it be appealing to those avid sports fans to see and hear the Most Valuable Player right after a major basketball game?  Members of the team’s fan club can call to see and hear what the MVP has to say, and maybe even be selected by a moderator to be seen by all of the viewers in the fan club community to ask a question.  At the same time, someone sitting in their living room watching the game on their new HDTV can join the live video session as well because their cable STB is also a video-enabled client. It could be just your luck that you are stuck at work, but you can enjoy this live from your desktop PC.

The secrets of success
For 3G to be a success, operators and service providers need to answer the following questions:
1.    Have we secured a strong enough trigger/interest for usage?
Without a reason to use 3G, why should users start paying out extra to make video calls on their 3G phones? Users need to be given something to inspire them to pick up their 3G phones and make a video call. No matter what the lifestyle, users need to know that 3G can benefit them; that it is there for everyone.
2.    Is there a specific context/timing for when a service will be used (here and now)?
A 3G mobile device is always smaller, lower in quality and more expensive than other media (PC, TV, STB etc.), but it is the only truly mobile device and is always available. An event or community that triggers use at a specific time, or a context that motivates the user to join at that moment, will create the need and reason to use a mobile device. If the service is timed with the end of a sporting event, and targeted at the viewers who are at that time mobile, then the usage trigger exists. When a service is geared for people on the move, then the user motivation is created.
3.    What is the guaranteed success and completion rate?
Successful services must have high completion rates. The way to guarantee this is to deploy a service that is not dependent on a high ratio of other 3G enabled handsets – for example where participants and content can also originate from the IP. In this manner, those who do own 3G phones will be ensured a successful service, with a 100 per cent completion rate.  Furthermore, a converged environment, including video-enabled PCs, expands the boundaries of the relevant communities, and increases the levels of participation thus increasing adoption rates.
The more limited or complicated a medium is, the more necessary it becomes to correctly understand the above factors to ensure usage and success. In the context of video and the mobile handset, the success factors are definitely challenging, and given the low usage rates experienced by the industry today, it can safely be said that the answers indicated above still need to be integrated into the 3G services provided by operators.

Pulling the trigger (of usage)
Users need to be shown and delivered true benefits of what can be done with 3G and what advantages it has. What video over mobile has to offer, besides a small screen and limited quality, is unprecedented mobility and availability; no matter where you are and what you’re doing (within reason, of course) you can always see your friends and family, make that meeting, and enjoy video content. By taking advantage of the mobility and availability of 3G and considering the success factors, an interesting and promising future for mobile video can be seen.
If the UMU factor is put into effect, then the usage possibilities are endless. Even now, some of these possibilities have become reality. 
Imagine how many such UMU “events” we are missing every day! 
The options for 3G are infinite – imagine how much mobile video traffic can and should be generated when influenced by the UMU factor. Consumers just need a little push in the right direction and before you know it they will be saying: “I’m here, I’m interested and I’m ready to pay! And I will use the best device and access method that is available to me at any given moment.”

Alon Barnea is General Manager of RADVISION’s mobile business

Since joining forces last year, OSS/J and the TM Forum are proving that by combining their strengths, not only the OSS community, but the communications industry as a whole, has much to gain. Doug Strombom looks back over the past twelve months

OSS/J - In Perfect Harmony

The OSS through Java Initiative’s (OSS/J) decision in early 2006 to join with the TeleManagement Forum (TM Forum) appears to have been a good one.  OSS/J is a rising star within the TM Forum, making a strong contribution to the technical programme there, and increasing its influence on the TM Forum’s New Generation Operations Systems and Software (NGOSS) standards-making efforts.
In January 2006, when OSS/J first discussed joining the TM Forum at OSS/J’s face-to-face meeting in Dusseldorf, Germany, there was some trepidation that the group’s strong focus on standards-making might be diluted within the much larger organisation. But the following day, when the idea of merging OSS/J into TM Forum was presented to the telecommunications service providers present at the OSS/J Service Provider Roundtable, it was greeted with acclaim. The move would put to rest the concern by service providers that there are too many different standards and standards-making organisations within the telecommunications industry. By removing the uncertainty factor of having multiple competing standards, the service providers agreed that it was to everyone’s benefit to widely adopt a single OSS integration standard.
The path to the standardisation of OSS interfaces has not been smooth.  With hundreds of telecommunications service providers worldwide, not to mention hundreds of OSS vendors and system integrators, reaching agreement on common standards can be a real challenge.  The proverbial chicken-and-egg problem is often cited, with service providers agreeing to adopt open standards only when sufficient OSS vendors support them, and OSS vendors agreeing to provide open standards only when sufficient service providers agree on which standards they require.  Because there are so many players in telecommunications, it is much more difficult to agree on standards than in more concentrated and vertically-integrated industries like the automotive industry.
That’s why industry standards bodies like TM Forum are so important, and it helps that the TM Forum has plenty of prestige within the telecommunications industry.  Its members include approximately 600 telecommunications service providers, OSS vendors and system integrators.  When the TM Forum says “this is the standard that our members want to adopt,” it is a very significant statement. 
The TM Forum can justly claim that its choice of standards is impartial and in the best interests of the whole telecommunications industry.  It is an open group, with a Board that is elected by its corporate members, with councils representing service providers, vendors and system integrators, respectively.  That Board appoints a Technical Committee to sort through industry best practices and make the final determination on standards issues.  Within the TM Forum, standards-making programmes like OSS/J perform their work at the behest of the Technical Committee.  The Technical Committee’s overarching goal is to define NGOSS, into which OSS/J fits neatly as an implementation-oriented interface standard.  These open governance mechanisms of the TM Forum are helping the OSS industry find a unified voice in favour of standardisation.
An immediate result of OSS/J uniting with the TM Forum was an upsurge in OSS/J membership. With OSS/J now under the auspices of the TM Forum, more industry participants were assured about the impartiality of OSS/J. Major OSS players like HP and integrators like TCS (Tata Consultancy Services) added their considerable industry weight and technical resources to the development and maintenance of OSS/J APIs. OSS/J development is performed under the open Java Community Process (JCP). Each API project is led by a Spec Lead drawn from industry, and participation in the project is open to other companies, which can contribute their requirements and technical support. In the parlance of the JCP, each API project is called a “JSR” (Java Specification Request). HP took on the new OSS/J Fault Management API. TCS began to participate by constructing Reference Implementations (RIs) for many OSS/J APIs.
Membership in the OSS/J Programme at TM Forum is open to new members who are willing to make a technical contribution to the development of OSS/J APIs.  The TM Forum assigned the job of negotiating technical contributions and following up on those promises throughout the year to a dedicated Technical Programme Manager.  This important job went to Antonio Plutino, who has successfully managed OSS/J deliverables over the years.  Of course, one does not have to be an OSS/J member in order to contribute to API standards: many individuals and companies contribute to JSRs at the invitation of the Spec Leads.
A second major impact of moving OSS/J into the TM Forum relates to the professionalism and governance of the TM Forum’s standards-making process.  OSS/J subjects itself to the formal JCP process because the JCP is a tried-and-true development process with built-in checks and balances.  Because the JCP is an open process involving key experts from the industry, the quality of inputs to the JSRs is very high.  And the review steps inherent in the JCP help to ensure that many reviewers validate both the approach taken to define interfaces and the quality of the resulting specifications.  This additional governance has breathed fresh air into the TM Forum’s standardisation process, as the TM Forum itself has begun to adopt governance that can stand up to ever greater scrutiny.
OSS/J helped introduce advanced techniques for producing interface specifications, such as the use of a common information model and model-driven tooling.  All of the new OSS/J APIs have been specified using the Core Business Entities (CBE) from the TM Forum’s Shared Information/Data (SID) model.  This helps to ensure compatibility between APIs developed according to the OSS/J standard.  In addition, OSS/J interfaces are built using Tigerstripe Workbench, a model-driven tool developed by Tigerstripe, an OSS/J member.  This software allows a JSR team to design an abstract specification of an interface and then generate specific code in XML, Java and WSDL to support the different deployment profiles required by OSS/J.  The use of a common model and model-driven tools has greatly shortened development times and improved the quality of OSS/J APIs.  One JSR reported a 70 per cent reduction in specification effort through the use of the Tigerstripe tools.
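The model-driven idea can be illustrated with a toy generator: a single abstract entity description is rendered into both a Java-style interface and an XML schema fragment, much as one model feeds several deployment profiles. The entity name, fields and output shapes below are invented for illustration and are not taken from the actual SID/CBE model or from Tigerstripe Workbench.

```python
# Toy model-driven generator: one abstract entity model, two generated
# artefacts. Entity and fields are hypothetical, not real SID/CBE content.

def generate_java_interface(entity, fields):
    """Render the abstract model as a Java-style value interface stub."""
    lines = [f"public interface {entity}Value {{"]
    for name, java_type in fields:
        lines.append(f"    {java_type} get{name.capitalize()}();")
    lines.append("}")
    return "\n".join(lines)

def generate_xml_schema(entity, fields):
    """Render the same abstract model as an XML schema fragment."""
    lines = [f'<xs:complexType name="{entity}">', "  <xs:sequence>"]
    for name, _ in fields:
        lines.append(f'    <xs:element name="{name}" type="xs:string"/>')
    lines += ["  </xs:sequence>", "</xs:complexType>"]
    return "\n".join(lines)

# One model, two renderings - the essence of the model-driven approach.
model_name = "TroubleTicket"
model_fields = [("status", "String"), ("priority", "int")]
java_src = generate_java_interface(model_name, model_fields)
xml_src = generate_xml_schema(model_name, model_fields)
```

The point is that the specification lives once, in the abstract model, and every concrete profile is derived from it, which is where the reported reductions in specification effort come from.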
A third impact has been on creating user-focused standards, as opposed to purely technical specifications.  OSS/J and the JCP have high standards for defining interfaces.  In addition to the interface specification, there must also be a working example, or ‘Reference Implementation’ (RI), and a testing framework, or ‘Technology Compatibility Kit’ (TCK).  This allows an uninitiated integrator to see how the interface was intended to be used in a real-world scenario, and to test the compatibility of his or her application with the open standard.  The TM Forum now requires these useful tools to be delivered with all of its interface standards.
OSS/J helped the TM Forum craft the PROSSPERO™ programme, which certifies open standards as being ready for market adoption.  PROSSPERO-ready interfaces package everything that an implementer needs for OSS or BSS interoperability, including interface specifications, testing frameworks, guidebooks, online developer support, access to reference implementations, and educational, marketing and developer tools.  PROSSPERO interfaces must meet criteria for market adoption and have documented use cases.  The idea behind PROSSPERO is to make it even easier for telecommunications companies to adopt open standard interfaces by setting high criteria for market readiness.
Now that OSS/J is well entrenched within the TM Forum, the organisation has shifted into high gear.  Most of the OSS/J APIs will be upgraded and delivered as a ‘Summer Release’ in August 2007.  The APIs planned for that release are:
•    Common API (which underpins all OSS/J APIs)
•    Fault Management API
•    Order Management API
•    Trouble Ticket API
•    Inventory API
New APIs that are slated to be released before the end of 2007 are:
•    Pricing API
•    Discovery API
Meanwhile, OSS/J has an open call to fellow members of the TM Forum to contribute resources and expertise to these and other specification efforts.
Going forward, more exciting news is likely from OSS/J and the TM Forum.  One thread is the growing movement to harmonise all standards-making efforts within the TM Forum.  At TeleManagement World, held in Nice, France, on 20-24 May 2007, the Harmony Catalyst demonstrated a unified approach to integration that incorporated OSS/J and MTOSI standards.  This work demonstrated that OSS/J and MTOSI, two of the most popular standards from the TM Forum, are compatible with each other.  The TM Forum Technical Committee is underscoring the need for a single standard, and a Harmony Architecture team is taking up the challenge of defining common guidelines for TM Forum standards.  In addition, the TM Forum is reaching out to other standards bodies to use its PROSSPERO programme to promote other valuable standards in the marketplace.
The TM Forum has the right scope and clout to address the need for OSS integration standards.  Never before has an organisation with a global perspective like the TM Forum’s – with reach into wireless, broadband, IP, billing and content – stood so firmly behind a unified standard for OSS integration.  The focus that the TM Forum is bringing to standardisation is unprecedented and, in combination with the rigour of the OSS/J standards-making process, the impact is sure to be felt far and wide in the telecommunications industry.  We may finally have the answer to the question asked by service providers, vendors and integrators alike: which OSS integration standard should we use?  There’s a growing consensus behind a harmonised OSS interface standard from the TM Forum.
Information about OSS/J can be found at www.tmforum.org/ossj or by contacting Antonio Plutino at aplutino@tmforum.org
Doug Strombom is a Steering Committee Member of the TM Forum’s OSS/J Programme and CEO of Tigerstripe, Inc.

Is holding on to customers for life a real possibility for operators? Alastair Hanlon believes that a new approach to CRM will provide the answer

ADDED VALUE CRM - Dreaming the impossible dream?

Given a choice between winning new customers or holding on to the ones they have, any operator worth its salt would plump for ‘both’. It’s a reasonable choice but the fact is that many operators have been less successful at tackling the perennial problem of churn than at luring in new customers with attractive but costly offers. Of course, one person’s churn is another’s new sale.
For many, the solution has been to make heavy investments in IT, CRM systems in particular, in the hope that these will build longer term customer relationships. So far, this has not turned out quite as hoped, partly because the much-vaunted CRM and back office systems have tended to operate in silos and not as a seamless facility that provides a full picture of the customer relationship across all aspects of the business. This is not a deliberate policy, simply a result of rapid expansion and the need to add new systems to support new services. The information usually exists; it is just not readily accessible.
It becomes even more difficult in this era of convergence, as operators add new services.
Unfortunately, a customer trying to contact an operator can be forgiven for wondering whether they are dealing with one company or several. They soon find that call centre agents, their first point of contact, rarely have all the billing, service, offer and helpdesk information at their fingertips, as they might expect.
Things can be just as frustrating for the call centre agents themselves. In order to build a complete picture of a particular customer’s relationship with the company they have to switch between different CRM and back office applications, often resorting to handwritten notes to relate what they find in one with the information they collect from the others. It is inefficient, time consuming and highly frustrating.
If an agent is dealing with someone who is ready to churn, the question is what scope they have to make new offers or set up different deals, in order to retain the customer. Without a complete picture and firm policies in place to guide them, agents can end up giving less valuable customers more than they are worth and neglecting those with the greater long-term value. The customer with greater long-term revenue potential could decide to leave simply because the phone queues are clogged up, e-mail responses are too slow, service levels are poor and offers are unattractive.

Ending the silo culture
It is time for a strategic approach that makes the best use of technology, provides all the data needed and equips organisations to identify the highest potential customers and make the decisions needed to keep them on-board for the long haul. 
This has to be supported by technology that goes further than most do at present. Existing CRM systems are simply not capable of identifying the true lifetime value of individual customers and turning it into action every time there is a customer contact. By integrating CRM and back-office systems, upgrading processes and reviewing business rules and procedures, a whole new world of opportunities opens up.
This will provide a complete view both of the services available and of the customers. Agents become much more effective because they can see into all systems at the same time and no longer have to ferret out information from different sources. Business policies also become more consistent, which means that customers get the same information and level of service across all contact channels, whether they are interacting through automated self-care or with a call centre agent.
By integrating and making accessible information on customer history and services subscribed to, and having it delivered in a clear and transparent manner, it will be possible to raise the bar in customer service and focus on strategic areas that can transform the business performance.

Revenues for life
A comprehensive route to solving these problems can be found in an approach called Lifetime Value Optimisation (LTVO). This is a process developed by the strategy consultancy McKinsey that moves well beyond traditional CRM.
LTVO can have direct impact on revenues and profitability by addressing the core issues of customer relationships – perceptions, loyalty and churn.
McKinsey has shown that LTVO can generate dramatic increases in EBITDA (earnings before interest, taxes, depreciation and amortisation). An incremental rise of between three and five per cent in EBITDA is possible and, depending on the size of the business, this can translate into hundreds of millions of dollars per year.
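The scale of that claim can be checked with simple arithmetic. The 3-5 per cent range is McKinsey's figure quoted above; the baseline EBITDA used here is a purely hypothetical example, not a real operator's number.

```python
# Purely hypothetical arithmetic: the 3-5 per cent uplift is the range quoted
# above; the baseline EBITDA figure is invented to show the order of magnitude.

baseline_ebitda = 5_000_000_000  # assumed annual EBITDA of $5bn

low_uplift = baseline_ebitda * 3 // 100   # 3 per cent incremental EBITDA
high_uplift = baseline_ebitda * 5 // 100  # 5 per cent incremental EBITDA

# For an operator of this size the uplift works out at $150m-$250m a year,
# i.e. "hundreds of millions of dollars".
```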
For LTVO to succeed service providers must be able to capture and respond to real-time events in the areas of customer care, billing and service delivery. The focus should be on four key areas, each of which directly influences the customer relationship, namely: customer satisfaction, customer retention, increased usage of existing services and take up of new services.
Rather than just reacting to problems when they come up, automated systems based on LTVO can make agents more efficient and allow them to be more proactive when dealing with customer queries.  Automation can be made more effective by tailoring voice or web self-care to the customers’ needs in real time. With fewer incoming calls, call centre agents are freed up to focus on new sales and on serving high value customers.  Not only does this reduce the cost of care and help raise revenues, this unified approach also has the potential to improve the overall customer experience and so increase customer satisfaction and loyalty.
With real-time data and proper micro-market segmentation, service providers are in a position to offer an immediate response to situations as they arise. These can be in such areas as billing queries, response to changing usage patterns or solving problems in real-time. Most importantly they can focus on customers with the highest potential lifetime value and give them targeted attention and high quality service.
Greater convergence means that a complete picture of individual customers, the services they use and their past behaviour and future potential is more essential than ever.
As well as increasing customer satisfaction and loyalty, the operator is better placed to take the initiative by making relevant and attractive offers that will increase each customer’s overall value to its business.

Real-time interaction
Underpinning all this is the principle that every time someone gets a bill, makes a payment, uses a service, makes a call or downloads some content, there is an opportunity to improve the effectiveness of those interactions.
By applying the LTVO approach operators can capture real time events in the customer care, billing, and service delivery environments, evaluate policies related to those events, and carry through real-time actions related to those policies.
Ultimately it is all about giving individual customers the level of service they warrant. Those with high lifetime potential are treated differently from those with lower potential but no one is left feeling neglected or unwanted. The system will identify incoming calls from high value customers and route them to an agent, while a lower value customer might be transferred to an automated self-care system.
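The routing rule described here can be sketched in a few lines. The threshold and the notion of a single lifetime-value score are simplifying assumptions for illustration, not part of any specific LTVO product.

```python
# Minimal sketch of value-based call routing: high-value callers go to a
# live agent, lower-value callers to automated self-care. The threshold and
# the single lifetime-value score are illustrative assumptions.

HIGH_VALUE_THRESHOLD = 1_000  # assumed lifetime-value cut-off

def route_call(lifetime_value):
    """Return the contact channel for an incoming call."""
    if lifetime_value >= HIGH_VALUE_THRESHOLD:
        return "agent"
    return "self_care"
```

A real deployment would score customers from many signals rather than one number; the point is only that, once the score exists, the routing decision itself can be a simple real-time policy lookup.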
This approach produces individual solutions for individual customers, personalised to their needs, habits and tastes, and it does this proactively and in real-time. So, if a customer starts downloading music or ring tones to a mobile, the system might suggest an offer – a good-value subscription offer or a two-for-one option. Similarly, another customer who is about to buy their third ‘pay-per-view’ movie in a week might be offered a particular movie package, possibly a free movie as a reward for buying ‘now’.
Taking advantage of these ‘warm’ sales opportunities might be done by e-mail, phone or text but, above all, it happens at precisely the moment when the customer is focused on a particular aspect of the service – right when they are about to make a purchase. Rather than seeing this as ‘hard sell’, they are more likely to view it as a response to a real need.

Proactive troubleshooting
LTVO is not restricted to expanding sales opportunities; it is just as effective in solving customer problems, particularly in anticipating and addressing them before the user even asks for help. If a customer changes their usage patterns, or their behaviour suggests that they are having a problem of some kind, help can be offered even before it is requested.
This kind of proactive response to an issue flagged up by the integrated system, in whatever form it takes, can surprise and impress customers who have come to expect slow, ‘after the event’ reactions.
This will not only help to resolve the customers’ problems, it can also make a strong impression on them and build their trust in, and loyalty to, the operator.
The types of events that might warrant an immediate, pro-active response include: someone who appears to be struggling on a self-care web site; a customer using a service for the first time or showing signs of becoming a regular user; when a fault occurs like a dropped call, failed download or device failure; when bills are being paid or get left unpaid.
The more you know about your customers’ behaviour and priorities, the more you can do to strengthen your relationship with them.
The main benefit of LTVO processes comes from better policy enforcement and improved treatment of inbound contacts, while between 10 and 15 per cent of the benefit results from outbound actions triggered by the real-time data provided through the system.
The holistic approach contrasts strikingly with the traditional tendency to deal with problems piecemeal. At Convergys we see this new approach providing an effective solution to even the toughest sales challenges and, most importantly, one that will be reflected in greatly improved profitability.

Alastair Hanlon is Director, Innovation Strategy, Convergys Corporation, EMEA, and can be contacted via tel: +44 1223 705000

Who knows more about their customers, a mobile phone operator or Google? The answer, you would think, should be straightforward… but you may be surprised says Adrian Kelly


The significant advantage that mobile phone operators have over other industries – and that includes the Channel 4s, Skys and even Googles of this world – is the vast volume of customer data they accumulate from a consumer’s daily interaction with the most personal of devices, the mobile phone.  Clearly, operators are sitting on a customer information goldmine. At the moment, however, operators are simply not using this wealth of information and, as a result, are missing an incredible opportunity.
It is an opportunity on which they will be looking to capitalise over the next 12-18 months, as marketing continues to be a key battleground for service providers looking to avoid becoming a bit-pipe.  Under threat from many quarters, including media companies and Internet brands, operators’ marketing initiatives have to become two-pronged.  Acquisition marketing remains a battle of the brands, where expensive sponsorship and clever pricing are essential to stand out in an increasingly crowded marketplace.  Cross- and up-selling, however, along with retention marketing, are much more of a fine art.  Marketing to existing customers requires a ‘mass-personalisation’ approach, based on deep customer knowledge.
Operators’ retention tactics (offering an incentive to stay the moment a subscriber requests their PAC number) are well known among subscribers.  However, the aim must be to offer an appropriate and relevant incentive in anticipation of a customer’s natural churn cycle, or to encourage them to adopt new services when the time is right for that individual customer – not to wait until it is more costly, or potentially too late, to keep them. Today, marketing departments are often restricted by a lack of up-to-date information about current subscriber behaviour, their hands tied by a dependence on technical teams to extract the information they need to target campaigns and to assess their success rate.  The upshot is that marketing teams are left unable to react quickly and accurately to opportunities and events.
Numerous service providers are finding that established techniques of segmentation based on demographics do not create the depth and accuracy of knowledge required.  The latest generation of Customer Intelligence Management solutions offer a whole new level of depth, accuracy and speed of knowledge acquisition for service provider marketing departments, allowing them to truly capitalise on the customer data currently sitting unused within the operator’s network.  Segmentation is performed on service usage data, and so represents their actual behaviour, rather than assumed behaviour from demographics, and is updated daily direct to the desktop. Customer Intelligence Management is already proving to be a compelling prospect for operators hoping that it will give them a unique advantage over their Internet-based challengers.
Marketing focused on cross- and up-selling should, by its nature, be easier than acquisition marketing.  You are talking to a captive audience: one that you know, that has already bought into the brand proposition, and that is probably reasonably happy with the service.  To use a business analogy, it is a little like walking into a sales meeting where you already know the people you are going to see.  Compare it to the acquisition scenario, which is much more akin to the cold call, and you should be in for an easier ride.  However, if your preparatory information is out of date, or you do not research the motivations and preferences of the people you are meeting, you will not be able to take advantage of the situation. In fact, if your research is so poor that you are making offers that are completely irrelevant, you may even damage your existing relationship.
Service providers can now have access to incredibly detailed behavioural information. With data services continuously on the rise, operators now know when someone sends an MMS, what type of multimedia it was, who it went to, what application-to-person services are used (horoscopes, TV show information), and what TV shows they interact with through voting and content applications.  As the mobile Internet is becoming an increasingly real phenomenon, service providers also have access to much more web browsing information than a search engine can record – wherever the consumer goes online using their mobile or PDA, the operator has a click by click record of their behaviour.
Effective Customer Intelligence Management logs and analyses service usage patterns and mobile browsing habits as they occur – presenting them to marketers in an easily actionable format.  Such a revolutionary approach will put operators in the unique position of not only understanding the habits, behaviour and interests of the user, but also their wider social circle - and crucially, being able to act on them.
Operators have tended to segment their customer base down to around ten profiles (such as heavy talkers, texters and business data users).  Limiting themselves to so few, mainly demographic-based groups of subscribers tends to overlook less mainstream usage trends and character traits, and becomes increasingly restrictive as operators look to offer more niche services and move into the content and media markets.  With effective real-time analysis, the ten-segment model will become a thing of the past, as media-industry modelling with up to 100 segments (as used by the BBC, for example) becomes a real possibility rather than a management nightmare.  More precise segmentation is the passport to a ‘mass-personalisation’ approach, with individuals’ brand loyalty strengthening when marketing is more personally relevant.  From a product planning perspective, there is also further opportunity for service providers to tailor new services to suit ever-evolving communities.
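A behaviour-based segmentation pass might look something like the sketch below, which assigns subscribers to segments from usage counters rather than demographics. The segment names, counters and thresholds are invented for illustration; a production system would use far richer data and many more segments.

```python
# Toy behavioural segmentation: subscribers are mapped to segments from
# daily usage counters rather than demographic profiles. Segment names and
# thresholds are invented for illustration only.

def segment(usage):
    """Map a subscriber's daily usage counters to a behavioural segment."""
    if usage.get("sms", 0) + usage.get("mms", 0) > 20:
        return "heavy messager"
    if usage.get("browsing_mins", 0) > 30:
        return "mobile web user"
    if usage.get("voice_mins", 0) > 60:
        return "heavy talker"
    return "light user"

# Re-run daily against fresh usage data, so segments track actual behaviour.
daily_usage = {
    "subscriber_a": {"sms": 35, "voice_mins": 10},
    "subscriber_b": {"browsing_mins": 45},
    "subscriber_c": {"voice_mins": 90},
}
segments = {sub: segment(u) for sub, u in daily_usage.items()}
```

Because the rules run over actual usage records and can be re-evaluated every day, adding a new niche segment is a matter of adding a rule, not commissioning a new demographic study.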
The next twelve to eighteen months will present two major challenges for service providers: increasing market competition and continued hesitancy among consumers to adopt new and unproven services.  The key to success on both fronts will be a service provider’s ability to understand its customers: their motivations and preferences.  Only by knowing their audience will they be able to offer effectively personalised services and, in doing so, stay one step ahead of the market.

Adrian Kelly is head of Customer Intelligence Management for Acision

Kari Pulkkinen looks at how online cost control can help operators build a business case for convergent charging

CONVERGENT CHARGING - Holding the purse strings

The uptake of converged communications has brought with it a wide range of new opportunities for service providers. Triple- and quadruple-play services, including video as well as applications, downloads and other content services, are increasingly being accepted into the mainstream and demanded by business users and consumers alike. However, these new services all need to be accurately charged for and billed to the customer to ensure ongoing usage and maximised revenue. How best to achieve this is currently of major concern to operators and service providers alike.
In addition to the concerns around accurate billing is the question of how to ensure that all customers receive the same level of experience – whether they are postpaid or prepaid. Currently, prepaid customers tend to receive limited services and charging models from their providers, because those providers worry that their billing solutions have limitations that offer a potential window for fraud.  Although in the past operators have been hesitant about allowing full service offerings to prepaid subscribers, they are now looking for solutions that allow them to capitalise fully on the potential of prepaid services without suffering revenue leakage and fraud.  One such solution, which can enable operators to offer more services to the prepaid user, is online charging.  By deploying online charging solutions, operators can offer all services to all users while closing the gap on fraud and revenue leakage, capitalising fully on their prepaid potential and, ultimately, fulfilling end-user needs with a wider service offering.
One final consideration revolves around the issue of ‘usage control’. Traditionally, usage control has been linked to the prepaid payment option. However, there is a much wider need for usage control regardless of the payment method. For example, given the focus on children’s use of, and exposure to, such services, an increasing number of parents require this additional level of control. Cost control is an important element, as parents want to control their children’s spending. Particularly important for younger children getting their “first mobile”, this type of cost control can also educate younger users about the use of mobile services. Online cost control helps both parents and children in these tasks.
There are a variety of service concepts that could help parents and children control spending – for example, a fixed monthly fee and, on top of it, controlled usage with a user (parent) defined limit. This type of personalised billing model ensures that parents remain confident about costs, and encourages long-term usage. For operators, the fixed monthly fee ensures at least a minimum revenue from each customer.
In addition, it is vitally important for operators to recognise the role of online cost control in managing both fraud and credit risk, as these can have the greatest impact on their bottom line.  Offering new services, particularly in emerging markets, is creating new opportunities for revenue, but it also risks exposing operators to increased credit risk.  As part of convergent charging, online cost control can help mitigate this risk through a hybrid approach: a customer has a fixed limit for post-paid usage and, once it is exceeded, the payment method automatically switches to a prepaid mode, which is then funded through top-ups.
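The hybrid model described above can be sketched as a simple account state machine; the limit, amounts and blocking behaviour below are illustrative assumptions rather than any vendor's actual implementation.

```python
# Sketch of the hybrid post-paid/prepaid model: usage is billed post-paid up
# to a fixed limit, after which the account switches to prepaid mode and
# further usage draws on topped-up credit. All figures are invented.

class HybridAccount:
    def __init__(self, postpaid_limit):
        self.postpaid_limit = postpaid_limit
        self.postpaid_used = 0.0
        self.prepaid_balance = 0.0

    @property
    def mode(self):
        """Current payment mode, derived from accumulated post-paid usage."""
        if self.postpaid_used < self.postpaid_limit:
            return "postpaid"
        return "prepaid"

    def top_up(self, amount):
        """Add prepaid credit to the account."""
        self.prepaid_balance += amount

    def charge(self, amount):
        """Charge a usage event; returns False if the service is blocked."""
        if self.mode == "postpaid":
            self.postpaid_used += amount
            return True
        if self.prepaid_balance >= amount:
            self.prepaid_balance -= amount
            return True
        return False  # credit exhausted: blocked until the next top-up
```

In this sketch a charge that crosses the limit is still billed post-paid; once the limit has been passed, further usage draws on prepaid credit and is refused when the credit runs out, which is exactly the point at which credit exposure stops growing.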
While it is increasingly clear that the key to successful convergent charging lies in a unified charging infrastructure, achieving this ‘holy grail’ continues to be a major consideration for operators. The more forward thinking operators have already started to develop the business cases and service concepts around convergent charging. A significant building block in this model lies in accurate ‘online cost control’ both for users and operators alike.
As discussed, the number of new services being introduced opens operators up to increased risk. Even though the majority of users do not set out to defraud the service provider, their unfamiliarity with new services and pricing structures makes it much more likely that they will exceed anticipated costs, resulting in unexpectedly large bills and an unwillingness to adopt the service long term. This can be exacerbated when the user is trying to use such services while travelling, as roaming fees can add dramatically to the cost.
In each case, the result is that the customer will be surprised and shocked by the service bill. From an operator’s point of view this outcome can be the death knell for new service adoption, as the customer decides never to use the services again and the operator loses all potential future revenues associated with them. Online cost control means that customers are able to track costs and avoid bill ‘shock’, and are therefore much more likely to continue using the service.
This approach benefits operators by allowing them to ensure the credit-worthiness of customers while at the same time maximising revenue streams.  Operators can also use this model to differentiate their service offering and set out truly unique propositions not easily imitated by competitors, as often happens when introducing new price plans.  At the same time, cost conscious subscribers can feel they can be in control of their spending, while benefiting from the availability of a wide range of services.
Many of these concepts are not new, but there have still not been that many online cost control implementations. This is often due to the fact that operators’ existing billing and prepaid systems have limitations when supporting these types of online cost control service concepts.  Again, one effective way to implement this capability is to deploy an online cost control solution. This approach is able to provide flexibility for operators to build their own, individualised service concepts for online cost control. This type of solution can also provide an easy extension path for additional convergent charging areas, such as online data charging for post and prepaid, as well as IP prepaid and other charging solutions and related service concepts.
It is becoming increasingly apparent that online cost control is a must if operators wish to ensure the credit-worthiness of their customers, while enabling those same customers to better control their spending.  For both operators and customers, this is a key element in the successful introduction and ongoing uptake of new services and applications. Added to the recognised benefits of service innovation, online cost control goes a long way towards building a business case for convergent charging.

Kari Pulkkinen is VP, Business Development, Comptel

Considering the scale of revenue losses that many telecoms operators incur, it is vital that they identify the causes, quantify their magnitude and then set about addressing these leakages in a holistic manner. Dominic Smith looks at the main causes of revenue leakage, and outlines ways in which operators can resolve these with the help of end-to-end pre-integrated business support systems

Revenue assurance continues to be a key concern for most telecoms operators. An on-the-show-floor survey carried out by Cerillion at the 3GSM World Congress in February identified it as one of the three most important business issues facing telecoms operators today, with 15 per cent of respondents acknowledging it as their most urgent concern.
This is hardly surprising when you consider the scale of the problem. Latest estimates suggest that as much as 10 per cent of total provider revenue is still being lost due to revenue leakages. In today’s competitive telecoms environment, this situation is unacceptable. And to retain competitive edge, operators need to ensure they are tackling the problem proactively.

Arguably the most important cause of revenue leakage is poor systems integration. Unfortunately, this is often a characteristic of the traditional best-of-breed approach to the implementation of business support systems. With this model, systems integrators are often tasked with implementing and integrating multiple heterogeneous systems to build a complete solution. Invariably, they encounter two key problems that make effective integration difficult.
First, they typically discover incompatibilities between the data models used in the best-of-breed systems. Synchronising data across different applications is complex because of the need to align different ways of identifying the subscriber, service and orders. However, if these mappings are not carried out properly, the operator will struggle to trace orders across the systems.
Second, the systems integrator may not have an in-depth understanding of all the best-of-breed components. As a result, it may integrate the systems inefficiently and introduce data replication or unnecessary layers of complexity, all of which can result in holes where revenue leakage may occur.
Process problems
Poor integration typically also results in a host of process problems. It may for example lead to data entry in multiple systems or incompatible configuration between solution components. The consequence of this may be, for example, rating/prepaid charging errors - essentially applying an incorrect price to a customer record or not being able to price the record at all. These errors will result in usage that cannot be billed for and, ultimately, revenue leakage.
Incomplete or incorrect usage data is another primary cause of leakage. This problem often occurs when network switches produce erroneous information that prevents the operator from identifying the type of service used or the customer who used it. In either case, the result is an inability to bill for the usage incurred.
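The reconciliation step this implies can be sketched in a few lines. The following is a minimal, hypothetical illustration (the field names and reference sets are invented, not any vendor's schema) of flagging usage records that cannot be billed because the subscriber or service type cannot be matched against reference data:

```python
# Hypothetical sketch: flag usage records that would leak revenue because
# the subscriber or the service type cannot be identified.

KNOWN_SUBSCRIBERS = {"S1001", "S1002"}        # from the customer database
RATEABLE_SERVICES = {"voice", "sms", "data"}  # service types with a tariff

def find_unbillable(cdrs):
    """Return (record, reason) pairs for usage that cannot be billed."""
    leaks = []
    for cdr in cdrs:
        if cdr.get("subscriber") not in KNOWN_SUBSCRIBERS:
            leaks.append((cdr, "unknown subscriber"))
        elif cdr.get("service") not in RATEABLE_SERVICES:
            leaks.append((cdr, "unrateable service type"))
    return leaks

cdrs = [
    {"subscriber": "S1001", "service": "voice", "units": 120},
    {"subscriber": "S9999", "service": "voice", "units": 60},  # no such customer
    {"subscriber": "S1002", "service": "mms",   "units": 3},   # no tariff mapped
]
for cdr, reason in find_unbillable(cdrs):
    print(cdr["subscriber"], reason)
```

In a pre-integrated suite this check is implicit, because usage, customer and tariff data share one data model; in a loosely integrated best-of-breed estate it has to be run explicitly, after the fact.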
Poorly integrated systems with no common workflow can also lead to delays in billing. Sometimes manual set-up processes for new services cause a delay of several days to occur before the operator can start invoicing the customer, inevitably resulting in a loss of revenues. In contrast, a fully automated process with flow through provisioning enables the operator to start billing for service use immediately. 
Invoicing system errors are another potential cause of revenue leakage. Traditionally, the problem is thought to be primarily one of under-billing - operators failing to invoice customers for services received. In fact, over-billing can be just as significant. This typically occurs when a service is terminated but the operator continues to bill for the service in error.
It will often result in costly customer disputes and the requirement to generate refunds or provide credit as a goodwill gesture. Valuable time and resource may be required to fix the offending process, and further revenue leakage will occur indirectly as a result of growing customer dissatisfaction and increased rates of customer churn.
Launching new products and decommissioning old ones are two other areas where a badly coordinated system can cause further revenue assurance problems. Businesses often leak money both by providing incorrect tariffs for new services and by not taking older, more costly products out of service quickly enough.

Reactive versus proactive
Putting additional systems and checks in place is largely a reactive approach to revenue assurance in a best-of-breed solution. In essence, it is a ‘sticking plaster’ approach to plugging the gaps in the system. Rather than dealing with problems at source, it focuses on putting processes in place which track where revenues are being lost and then try to correct these errors retrospectively.
As a result, problems can stay hidden for some time and their source can remain obscure. Operators may initially believe that they have billing issues or that they are suffering from credit management problems. In fact, when they carry out thorough ‘root cause analysis’, they often discover that their problem is order management related.
If the system is not proactively managed, a mistake made in this initial order process may not be discovered for a month or six weeks, when the customer receives their first bill and finds, for example, that they have been placed on the wrong tariff or are being billed for a service they never received.
In contrast, the best end-to-end pre-integrated solution suites give operators the confidence that all elements within the product suite will work together in harmony. The holistic approach of these systems is clearly in line with operators’ increasing desire to address and monitor the whole lifecycle from the initial order placement right through to billing and cash collection.
These solutions also enable operators to be much more proactive. Rather than merely reacting to problems when they occur, their seamless connectivity offers a means to prevent ‘gaps’ in the system appearing in the first place. In other words, they treat the root cause of the problem rather than the symptoms.
The tight integration of these solutions helps eliminate data replication and synchronisation problems. In addition, embedded workflow and order management functionality allows front-end orders to be successfully transitioned to the back office, ensuring all services can be billed for and eliminating revenue leakage at source.
The pre-integrated nature of these systems allows key business information to be proactively tracked, detailed reports to be generated for each process, revenue leakages quickly identified and revenue losses minimised. It is hardly surprising, therefore, that ever-greater numbers of operators see end-to-end pre-integrated solution suites as a vital weapon in their ongoing battle to achieve genuine revenue assurance.

Dominic Smith is Marketing Director, Cerillion Technologies

Rapid assembly of services will be the key differentiator for telcos striving to beat out the cable, entertainment and Internet companies encroaching on their customer bases, says Brian Naughton

Telecom carriers will have to go through a significant metamorphosis as the lines blur among the telecom, entertainment, retail, and Internet domains. In hotly contested triple- and quad-play markets, carriers must become customer service providers (CSPs) capable of making the transition from me-too services to truly converged, on-demand services that differ from those offered by MSOs and non-traditional competitors.

To achieve that end, CSPs will have to work with third-party developers to create scores, if not hundreds, of niche services that leverage their substantial investments in IP networks. After all, they laid the fibre to enable voice, video and data to come together over the same connection in very short time frames. That unique ability should enable CSPs to create prodigious catalogues of converged services without disrupting the underlying architecture.
The goal should be the rapid assembly of services. To that end, a mindset change will be necessary. Carriers will have to move away from the staid and stodgy belief that service launches must take months or years, to a mindset that products can be rolled out in hours, if not minutes.
That will require CSPs to move into a manufacturing mindset, where the concepts of computer-aided design (CAD) and computer-aided manufacturing (CAM) come to fruition. The marriage of the two enables hundreds, if not thousands, of services to be rolled out in an “assembly line” fashion.
In the same way that the car manufacturing industry illustrates components for new products in CAD systems, carriers can illustrate the components of new products and move service “components” along an “assembly line” to CAM systems, where coding, rules and algorithms can be determined automatically.
The lifecycle management enabled by the CAD and CAM principles is now beginning to burgeon in telecom. In other words, the knowledge of bundling will be removed from existing systems and centralised in a location in which all service and product building blocks can be modelled within a “workbench” environment.
That reflects somewhat the precepts of service-oriented architecture (SOA), which promulgates the interchangeable use of building blocks among applications.
 “While SOA has been hyped for many years as a common framework for segmenting operations and coupling services, the reasons for it are far more compelling now,” says Larry Goldman, co-founder and senior analyst with OSS Observer. “The Internet has created an expectation of immediate gratification, so carriers have to figure out how to roll out services at the time of demand.”
After heavy investments in IP networks, Goldman believes operators have to concentrate on the software side of the equation. “CSPs should focus on re-use within their execution environments. That means services must be decoupled from networks for integration with business processes.”
Goldman says carriers can then begin to drive re-use –not only of common data models, but of formats, naming conventions, interfaces, and design processes across the organisation.
To galvanise the concept of ‘re-use’, CSPs must break back-office silos down into components that represent operational elements of network and IT systems, as well as product, service and resource specifications. These components can ultimately be turned into loosely coupled “building blocks” for interchangeable use across different services and products.
As carriers create a library of building blocks, SOA environments become true service delivery platforms (SDP) from which new functionality can be driven (i.e., SIP capabilities around presence, location and more advanced voice mail services that can be used in creative product bundles). By implementing common SIP servers for applications needing connectivity over IP networks, carriers can procure data from disparate sources so that billing authorisation and billing detail are consistent across the organisation.
As new services are created through increasingly agile SDPs and execution environments, CSPs will have to simultaneously orchestrate changes within OSS/BSS applications. The complexity of orchestration for dynamic services will require full automation of activation, ordering and billing processes so that fulfilment and assurance processes can seamlessly work for new service rollouts.
Within the TeleManagement Forum’s Product & Service Assembly (PSA) Initiative, an independent consortium of leading telcos and vendors has been working to develop a revolutionary IT reference architecture to satisfy the burgeoning need to standardise and simplify the way that products and services are designed, assembled and delivered. This reference architecture incorporates the CAD/CAM manufacturing approach by enabling the creation of “building blocks,” which carriers can assemble into service or product offerings.
At the heart of the IT reference architecture is an active catalogue that is a design-and-assembly environment within which service components can be defined and configured without any need for writing code. This catalogue aligns service design and creation with service execution so that product managers can decouple management of product lifecycles from OSS, BSS and network engineering.
Within the building blocks lies a rich library of components and products through which product managers and architects can drive dependencies, prerequisites, exclusions and visual metaphors about service components.
“We have leveraged our deep understanding of the fulfilment process, as well as that of our customers and partners, to define components that could be used interchangeably across services and functions,” says Simon Osborne of Axiom Systems, one of the founders of the PSA Initiative, noting that Cable & Wireless, BT, TeliaSonera, Atos Origin, Huawei, and Oracle have worked to define the building blocks.
To simplify the definition and configuration of services using those building blocks, a visual and intuitive GUI has been created for product managers to view loosely coupled composites or aggregate services, as well as for IT to create, test and publish components for re-use across the organisation.
The essence of the IT reference architecture is that it has been designed with a “bilateral” top-down/bottom-up approach in mind.
 “This IT reference architecture empowers marketing professionals to define service components without having to go through IT departments, and enables IT to use pre-tested business options and variants to drive component use across the organisation,” comments Osborne.
For example, ringtone downloads, VoIP, VoD, and find-me services each require their own sets of fundamental parameters around availability, order-taking and activation. However, there inherently exists overlap in what each service requires. The active catalogue helps carriers to leverage that fact by establishing interchangeable building blocks in one catalogue that can then be rearranged to support other services as well. Rather than having to write new code to launch each new service, carriers can specify necessary attributes in reasonably basic forms so that one catalogue and order-handling system can handle many different services.
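The re-use idea in that example can be sketched very simply. The catalogue entries, service definitions and names below are invented for illustration (this is not the PSA architecture itself), but they show how one set of building blocks can be rearranged to support several services:

```python
# Hypothetical sketch of an "active catalogue": services are assembled
# from shared, reusable building blocks rather than coded individually.

CATALOGUE = {
    "availability_check": {"params": ["region"]},
    "order_capture":      {"params": ["customer_id"]},
    "activation":         {"params": ["endpoint"]},
    "content_delivery":   {"params": ["format"]},
}

# Each service is just a selection of catalogue blocks.
SERVICES = {
    "voip":     ["availability_check", "order_capture", "activation"],
    "vod":      ["availability_check", "order_capture", "activation",
                 "content_delivery"],
    "ringtone": ["order_capture", "content_delivery"],
}

def shared_blocks(a, b):
    """Building blocks two services have in common (candidates for re-use)."""
    return sorted(set(SERVICES[a]) & set(SERVICES[b]))

print(shared_blocks("voip", "vod"))
```

Launching VoD after VoIP here means adding one new block, `content_delivery`, rather than building four; that is the overlap the active catalogue is designed to exploit.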
Simon Farrell, IT Architect at Cable & Wireless, comments: “We can define residential VoIP and the prerequisites for broadband DSL, and are able to stitch together relationships among end points to execute on a fulfilment request.” Graphical representations, such as a ‘green light’ for ‘it’s a go’ or a ‘red light’ for ‘outstanding dependencies’, enable C&W to assemble the end points that must exist on the enterprise service bus (ESB).
In other words, there are distinct interfaces, order types and end points specific to any services that are to be fulfilled. Through the interface, the active catalogue provides an environment for modelling end points into an assembly landscape that defines relationships and polices exceptions or dependencies.
“A residential home triple play service that requires a broadband and VoIP server, as well as an IPTV server, will rely on rules around what third parties must be called upon to provide that hardware, and in what sequence those systems should be called upon,” explains Osborne. “That sets the stage for how data travels interface to interface as the service transitions through the lifecycle.”
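The sequencing rules Osborne describes amount to dependency resolution. A minimal sketch, using Python's standard topological sorter and invented end-point names (the real rules would live in the catalogue, not in code like this):

```python
# Hypothetical sketch: deriving the call sequence for third-party end
# points from declared dependencies, e.g. VoIP and IPTV servers in a
# triple-play order both require broadband DSL to be in place first.
from graphlib import TopologicalSorter

# Each key depends on the end points in its set.
dependencies = {
    "voip_server": {"broadband_dsl"},
    "iptv_server": {"broadband_dsl"},
}

order = list(TopologicalSorter(dependencies).static_order())
print(order)  # broadband_dsl is provisioned before the servers that need it
```

Expressing the sequence as data rather than hard-coded process logic is what lets fulfilment "dynamically figure out what end points to call upon", as the next paragraph puts it.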
While the active catalogue does not run every task, it calls the service end points that, in turn, run the processes externally. “This active catalogue provides a way of defining the end point and rules around those endpoints, so fulfilment dynamically figures out what end points to call upon,” he says.
As orders are fulfilled through the active catalogue, the software creates an inventory of pre-existing capabilities for end users. The software records against every instance of an order, using the same language that was modelled at service end points. Ultimately, that means CSPs end up with rules sets that are usable for up-sell and cross-sell capabilities. “If 35 per cent of customers have a certain type of access, CSPs can target them with new services that tie to that type of access,” notes Osborne.
In the long run, that ability drives versioning and lifecycle management. “If a service is to be deployed for only six months, there can be published rules stating that the service will be decommissioned in a certain time period, and warnings can be issued at the end of the period to those parties with bundled components.”
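A published decommissioning rule of that kind is easy to picture as catalogue data. The following sketch is purely illustrative (service name, parties and notice period are invented):

```python
# Hypothetical sketch: a catalogue entry carries its own decommissioning
# rule, and warnings are issued to parties that bundle the component as
# its end-of-life date approaches.
from datetime import date, timedelta

service = {
    "name": "summer_promo_bundle",
    "decommission_date": date(2007, 12, 31),
    "bundled_by": ["wholesale_partner_a", "retail_dept"],
}

def warnings_due(svc, today, notice_days=30):
    """Who should be warned that a bundled component is being retired?"""
    if today >= svc["decommission_date"] - timedelta(days=notice_days):
        return [(party, svc["name"]) for party in svc["bundled_by"]]
    return []

print(warnings_due(service, date(2007, 12, 15)))
```

Because the rule travels with the catalogue entry, every department or partner referencing the component sees the same lifecycle state.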
That can be particularly important among partners who are re-branding wholesale offerings, or for inter-departmental strategies at large telcos, where orchestrating processes can be complex. “Ultimately, you get a federation of catalogues with clear demarcation of where the SLAs are among different departments,” Osborne explains. With a federation of catalogues, CSPs start to create a topology through which all catalogues and associated end points can be referenced for more intelligent cross-sell and up-sell actions.
To ensure there is an accurate model of infrastructure, this revolutionary IT reference architecture has been designed to sit on top of most major network resource management systems (inventory) that serve as databases of record for carriers.
The architecture can serve as the foundation for collaboration among product managers, service and network engineers, as well as operational communities. By creating a central point for standardising multiple vendors' products, carriers can move closer to the SOA principles they strive to embrace.
As carriers continue to expose their design environment to different departments and customers, they can begin to truly “mass market” the configuration of products. That sets the stage for commonality in how components, access controls and security measures are employed across the enterprise and partner environments.
As that commonality grows, carriers can get closer to self-service in management of product and service lifecycles. Then, they can be better positioned to create value-adds in their IP services domain—especially if they can roll out sophisticated services in a matter of hours, or even minutes.

For further information about the IT reference architecture and the active catalogue, please visit www.psainitiative.org  or e-mail info@psainitiative.org.
Brian Naughton is VP Strategy & Architecture, Axiom Systems

Service quality management offers a critical pathway to the delivery of quality of service in developing markets, says Tony Kalcina

Accidents happen. People make mistakes. Nothing and no one is infallible. We all know this. Which is why, when we buy a product or service, what matters is not so much whether it has faults, but what happens after a fault occurs.
It is a well-known maxim in client service that a customer whose problem has been dealt with in an exemplary fashion is likely to be more satisfied and loyal than one who has never experienced a problem to begin with. The former knows from experience that they can rely on the provider of the service or product; the latter has no idea what might happen if things go wrong.

This principle applies as much in telecommunications as elsewhere, but with an added twist: customers want to have the certainty that problems will be dealt with effectively and efficiently before they happen.
This means service providers have to provide a high level of assurance at the contract stage, typically through a service level agreement (SLA). But there are SLAs and SLAs.
In fiercely competitive developing markets, the ability to offer and deliver on meaningful, measurable and manageable standards of service is becoming a major competitive differentiator.
Telecommunications SLAs traditionally underpin service quality management (SQM) programmes, which aim to monitor performance, pinpoint faults and prevent them from recurring.
SQM is valuable to corporate customers because, in theory, it provides analysis and verification of the performance they are paying for. And, in the event of a problem, it serves to provide a measure of the recompense they might be entitled to.
For operators in developing markets, SQM also has an important role to play in the supply chain by policing incumbent operators, for instance when competition rules allow Local Loop Unbundling (LLU) for third-party providers of DSL services.
The inclusion of an SLA in the supply chain process ensures protection for third party operators and their customers; if incumbent operators fail to undertake the LLU in the time agreed, the third party operator can often claim a rebate.
At the same time, the end customer may also be entitled to compensation for failure to deliver the requisite level of service mandated by the regulatory body.
In practice, this can be problematic to claim at an individual level, but the automated monitoring and reporting of SLA violations can be a useful input to the process of managing collective performance by the incumbent. 
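The automated monitoring described above reduces, at its simplest, to comparing completion times against the agreed lead time and totting up the rebates due. A minimal sketch with invented figures (the SLA lead time and rebate rate here are illustrative, not taken from any real LLU agreement):

```python
# Hypothetical sketch: compute rebates a third-party operator can claim
# when the incumbent completes LLU orders outside the agreed lead time.

SLA_DAYS = 5           # agreed unbundling lead time (illustrative)
REBATE_PER_DAY = 10.0  # rebate per day late (illustrative)

def llu_rebates(orders):
    """Sum rebates over (day_placed, day_completed) order pairs."""
    total = 0.0
    for placed_day, completed_day in orders:
        days_late = (completed_day - placed_day) - SLA_DAYS
        if days_late > 0:
            total += days_late * REBATE_PER_DAY
    return total

# One on-time order, one 3 days late, one 2 days late.
print(llu_rebates([(0, 4), (0, 8), (10, 17)]))
```

Run across every order automatically, this kind of tally is what turns individually impractical claims into a usable input for managing the incumbent's collective performance.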
Elsewhere, it stands to reason that savvy customers will pick suppliers whose SLAs offer the highest level of financial security; in other words, those which pay out the most in the event of a problem.
This means that in order to satisfy the most demanding customers, telecommunications operators need to embrace SQM so that any faults and liabilities can be fully verified to the satisfaction of both the operator and its customers.
SQM allows operators to measure and gauge the validity of customer complaints; whilst the customer should always be put first, operators can determine the need for - and level of - compensation required for a perceived service fault. Clearly, then, there are massive benefits to be had from being seen to possess a market-leading SQM programme. But not all operators currently have one.
Currently, performance data, where it is available, often only involves some fairly basic measurements of the state of the network. In addition, delivering SQM often relies heavily on expensive manpower.
An operator will not be able to cost-effectively differentiate its service offering unless manual steps are kept to an absolute minimum and, preferably, eliminated altogether to avoid the higher cost and delays of manual processes.
Finally, many of the current low-cost diagnostic tools that are in place can only provide basic alerts to the effect that certain pieces of equipment are failing, without identifying which customers (if any) are affected, or how.
What this means in practice is that operators relying on these basic SQM tools cannot truly be said to be delivering quality of service to their customers—and risk either losing credibility or paying over the odds for SLA failures. The situation need not be thus, however.
More complex SQM tools exist. They combine service fulfilment and assurance capabilities and can be integrated with a provisioning package to automatically identify faults or dips in service and restore the services or compensate customers with additional offers or refunds.
Clarity, for example, offers a pre-integrated product and database that features the 17 TeleManagement Forum enhanced Telecom Operations Map (eTOM) model elements of Operational Support Systems (OSS) in a single suite.
These systems allow operators to see the impact that network operations are having on revenue and customers’ experience from both a service fulfilment and assurance perspective.
Clarity’s OSS is network and services neutral, rapidly configurable and widely deployed, supporting an end user base of 50 million subscribers worldwide. Companies that have taken SQM seriously have reaped significant benefits.
Sri Lanka Telecom, to take an example from the developing world, has been able to clear 84 per cent of faults within hours thanks to a single OSS information store for fulfilment and assurance data, coupled with real-time correlation and integrated SQM workflow processes.
Other operators can follow this path. All that is needed is a greater awareness of the importance of SQM as a tool for achieving competitive advantage. Telecoms operators, specifically in developing markets, must realise the importance of service assurance in helping to predict, monitor and manage in real time the availability and quality of services, ensuring conformance to the business’s strategic SQM objectives.
Investing in OSS to support state-of-the-art SQM programmes is no longer a ‘nice to have’, but increasingly a vital component of strategies to attract and retain loyal residential and commercial customers, improve operational effectiveness and to accelerate the order-to-cash process. SQM may have until now been something of a minority interest for telecommunications operators. But as the battle for customers heats up in developing markets, it looks set to become a key weapon for competitive advantage.

Tony Kalcina is founder of Clarity

