Features

With communications technology now filling every conceivable area of our daily lives, Rob House reckons that – for the sanity of all – responsible management of communications should be placed firmly on the agenda

Communications are everywhere. Internet, e-mail, mobile and fixed phones, text and video messaging, voicemail... we have made the technology all-pervasive. But is it invasive? We've all been irritated by information overload, stressed by multiple messages, and exasperated by the behaviour of other users.
Working in the communications business, we seem to be the worst culprits of all. How many of us check e-mail in an almost addictive way? How many of us can't – just can't – switch off the mobile? How many meetings have dragged on and on because of interruptions to make or take calls? How many train journeys are made unnecessarily noisy by the caricature person on the mobile phone? And was it you?
How complicated is it for someone to turn off, or even just turn down, their ring tone? Or to speak softly? We are now all technology-competent. We use whatever we're sold, or whatever we're told to use. And, largely, we'd be lost without it. But it is also essential to know what is socially acceptable when using technology – that is, if you are to avoid the dangers of being treated like an outcast. Human relationships are still vital to success – and sometimes we tend to forget this. We promote the technology and devices upon which people's personal and professional lives are becoming increasingly dependent, but are we guilty of ignoring the protocols of technology etiquette?
We are a mobile society: business professionals and technicians tote wireless phones; mobiles have become fashion statements for teenagers; phones ring in meetings and loud conversations are conducted on public transport. The use of data is different, being less intrusive. But the ease with which e-mails can be sent to PCs and PDAs contributes to information overload.
We now receive 64 times more information than we did 25 years ago and growth continues: in fact, the curve is exponential. So, Siemens Communications in the UK set out to find out how effectively this communications revolution was being managed. The company commissioned a study into the etiquette of business communications in the digital age, which was undertaken by Surrey Social and Market Research (SSMR) at the University of Surrey, Guildford, England, with additional analysis by the University's Digital World Research Centre.
Seeking to determine attitudes and opinions with regard to acceptable communications practices, the research looked into the way in which today's business communications are affecting workers' attitudes, performance and interaction – and concluded that too much technology can make you SAD.
The new SAD factor
A few years ago, researchers in Scandinavia identified a condition known as Seasonal Affective Disorder (SAD), caused by prolonged periods of working without much natural light.
The Siemens study reveals a new form of SAD that can affect workers all year round – especially those of us exposed to communications technology. Mismanagement of communications tools can be a root cause of workplace Stress, Anger and Distraction (SAD). The research findings clearly demonstrate that an over-reliance on communications is becoming a friction point in offices, causing stress which is affecting personal relationships both at work and at home.
Technology has given us a myriad of ways of communicating. We have desktop and mobile phones as well as PDAs and notebook PCs that can be used as softphones. We use the Internet, intranets and extranets, as well as public and private networks, both wireline and wireless. It is hard to imagine a communications landscape more fragmented than the one we currently endure. With all the new technology, Siemens has estimated that trying to get in touch with a colleague can waste an average of 30 minutes each day for each knowledge worker, which equates to a UK national wage bill of some £22bn per annum.
The research identified an underlying demand for the better management of availability and for integrated communications systems. And, ironically, it highlighted the fact that many office workers resent the interruptions that communications cause to meetings and workflow, yet at the same time demand almost instant contact when trying to reach colleagues.
So, how do we cope without drowning in the digital mire? Well, a bit of old-fashioned courtesy is a good start, coupled with implementing a system that controls your communications, rather than letting them control you.
Working with the University teams, Siemens has devised Eight Simple Rules of business etiquette to act as employer and employee guidelines to working behaviour.
1. Have your mobile off or on silent in meetings: The research showed that only 11 per cent of business users think it acceptable to have a mobile on during a meeting.
2. Change your mobile voicemail to request text for urgent messages: Texting is generally thought to be too informal for business use and implies that you cannot be bothered to speak to someone. However, its use by request or prior arrangement for messaging has some potential.
3. Turn your device screens off when holding meetings in your office: 74 per cent of respondents felt it unacceptable to read e-mail during an office meeting.
4. If you are expecting an urgent call, apologise and warn others in advance: The research was clear that interrupting a meeting to take a call should only be undertaken with prior warning and for urgent matters.
5. The person you are talking to deserves your full attention: 11 per cent of respondents felt that an emergency was the only acceptable use of a mobile phone during a face-to-face meeting or discussion.
6. Hold private calls in private places: Clearly some calls – business and personal – are inappropriate for public places.
7. Break out of e-mail jail – talk to your colleagues: In many cases e-mail is becoming the easy option and is being overused – overuse actually reduces its effectiveness.
8. Technology is not power – it doesn't signify your importance: Mobile phones and other personal devices do not confer status on their owners.
Interestingly, only slightly more than 50 per cent of respondents felt that it was inappropriate to use any form of IT equipment in a meeting or when talking to a colleague. Subject matter, location and relationships were all factors in determining how someone behaves in a meeting – with relationships being perhaps the most critical. Meetings now cover a wide range of discussions and many are informal and relaxed – on these occasions, interruptions are more acceptable if they are sufficiently important.
The University's research supports Siemens' own findings concerning availability management, a key feature in OpenScape – the company's presence-based application that utilises Microsoft's Live Communication Server infrastructure. Availability management applications like this allow users to flag to their colleagues their degree of availability and their preferred method of contact – integrating voice, e-mail, mobile, voicemail and text messaging systems to maximise contactability while minimising intrusion. In short, availability management applications deliver the benefits of managed communications suggested by the research – such as eliminating the risk of missing an authorised, important interruption, which traditional behavioural approaches cannot filter – without the overhead of manual intervention.
Surely it is time that technology went back to the future and gave us the equivalent of everybody being in the same location at the same time and being able to communicate in real time? We need to enable better, less stressful ways of communicating and collaborating, starting with voice-data convergence and instant messaging (IM). IM is an application that uses icons to show the 'presence' – on-line or off-line – of nominated colleagues ('buddies' in Internet parlance). Note also that IM is actually a misnomer since it's used to communicate in real-time; e-mail is a messaging medium.
IP phones are, in effect, data devices that are visible to the network. What availability applications achieve is to develop and integrate IM applications that use icons to indicate presence and availability. This means that you do not waste time calling parties who are busy and you spend more time communicating and less time messaging.
IM programs also allow users to display additional availability information alongside the presence icon. Availability denotes a person's willingness to communicate, and it is based on preferences and policies – i.e. it is managed at both the individual and corporate levels.
Who, what, when and where
Managed availability therefore solves the 'who, what, when and where' of communications. Presence and availability management tools put users firmly in charge of their communication devices. People can even leave their phone on during an important meeting confident in the knowledge that it will only ring in a genuine emergency. Why? Because users define their own rules that help them decrease interruption while simultaneously increasing their availability.
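To make the rule-driven idea concrete, here is a minimal sketch in Python. It is purely illustrative – the function names, statuses and fields below are invented for this example and are not the OpenScape API or any real product's interface:

```python
from dataclasses import dataclass

@dataclass
class Call:
    caller: str
    urgent: bool = False

def should_ring(call: Call, status: str, vip_list: set) -> bool:
    """Apply user-defined availability rules to an incoming call.
    (Illustrative only - the statuses and rules are invented.)"""
    if status == "available":
        return True
    if status == "in_meeting":
        # Only calls flagged urgent from designated contacts interrupt
        return call.urgent and call.caller in vip_list
    return False  # e.g. "do_not_disturb": everything goes to voicemail

vips = {"boss@example.com"}
# In a meeting, the phone stays silent unless a designated contact flags it urgent
print(should_ring(Call("colleague@example.com"), "in_meeting", vips))        # False
print(should_ring(Call("boss@example.com", urgent=True), "in_meeting", vips))  # True
```

The point of the sketch is the division of labour: the user states a policy once, and the system applies it to every call, so the phone can stay on in a meeting without becoming an interruption risk.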
Presence and availability management are powerful communications parameters that minimise telephone tag, boosting personal productivity while reducing stress. They also allow one colleague to talk to another immediately when there is a need to react to an event or an urgent issue.
Introducing real-time managed communications is the quickest way of cheering up the SAD office worker. The trick is to ensure that you control the technology, rather than it controlling you.

Rob House is Head of Collaboration and Integrity Solutions for Siemens Communications and can be contacted via tel: +44 1908 855 000; e-mail: rob.house@siemens.com  www.siemenscomms.co.uk

Maria Martinez, Corporate Vice President, Communications Sector, Microsoft Corporation, tells Alun Lewis why she believes the software giant is uniquely placed to help drive the telecommunications sector

Transformation seems to be the name of the game in telecommunications these days. While the financial discipline of the last few years has to remain, there are a host of new challenges on the horizon to deal with – from VoIP to the addition of content services to traditional service portfolios. European Communications' Alun Lewis caught up with one of the architects of this diverse new world at the TeleManagement Forum's recent conference in Long Beach, California, meeting Maria Martinez, Corporate Vice President, Communications Sector, Microsoft Corporation.

AL: Maria, your keynote speech focused very much on some of the challenges facing us as an industry. Can you summarise these as you see them?
MM: For a start, there's a tremendous amount of FUD – fear, uncertainty, and doubt – out there. We all know that convergence is changing the industry in a big way and that it's a big challenge to plan where a business should go in this new environment. However, figuring out which technology will take you there can sometimes be even more difficult. According to Gartner, the worldwide revenue for telecommunications services and equipment is only expected to grow 3.9 per cent by 2007 – down from 9.7 per cent in 2003. What's most interesting is the role that software is playing in this growth. Software will grow at a rate of nearly 10 per cent by 2007. So, if software is clearly a key growth area that is contributing to the overall industry improvement, it's surely more a question of "how can we use software to create new services and revenue?"
AL: And what's Microsoft's strategy in this space?
MM: In the final analysis, convergence will only truly come to life through software-powered service networks. Microsoft's committed to the telecommunications space and we've made three big commitments that will help companies survive, thrive, and win in this newly converged marketplace.
Firstly, we're simplifying relationships across the board between application developers, systems integrators, content owners, distribution channels, service providers, and end users. Microsoft has important technologies and significant partnerships in all of these areas, with over six million developers and thousands of partners – now delivering mission-critical applications, some of the world's largest databases, content to mobile phones, OSS/BSS systems and, recently, large billing applications.
Turning such a highly complex communications services value chain into a profitable business requires dealing very efficiently with many issues such as service aggregation, business process automation, business-to-business transactions and end-to-end quality of service management. The group I lead at Microsoft – called the Communications Sector Group – is today bringing all of Microsoft's internal assets together with industry partnerships in a co-ordinated way.
Secondly, we're facilitating the deployment of those rapid and cost-effective breakaway applications and services that will define success in rapidly changing markets. A great example of this is how BT joined forces with Microsoft to launch a one-stop-shop IT and broadband solution, including Hosted Exchange – an innovative first for small businesses in the UK. We've also helped AT&T Wireless differentiate their mobile data services through a portal that improves the end user experience of provisioning a mobile phone. Another example is our recently announced IP Television solution that's in trial now with several service providers around the world.
The solutions we are building around collaboration, hosted services, media and entertainment are designed to get service providers to market quickly, with the minimum investment, in a way that they can rapidly monetise. But this new set of services demanded by the market creates unprecedented challenges for the integration of these platforms with existing OSS/BSS environments – which is where I see the TeleManagement Forum's New Generation OSS (NGOSS) effort playing a key role.
Finally, we're providing our service provider customers with a multi-services platform that offers the best opportunity to extend and differentiate new service offerings – while also reducing overall costs. Our biggest contribution in this area is our commitment to Web Services and driving standardisation through our joint efforts with IBM, Vodafone and others. Web services are the universal connectivity layer that will solve the many interoperability challenges.
AL: Obviously one of the biggest shifts underway is towards universal IP and an end to the old circuit switched environment. How's Microsoft interpreting that change?
MM: Today, IP, mobile cellular and broadband have emerged and are all pretty much ubiquitous. IP is now the leading protocol and forms the basis for most of the new services out there, whether through data, voice or video, or the integration of all of them. Microsoft strongly supports standards like TCP/IP, IPv6, HTTP, XML and SOAP, as well as the development and standardisation of key technologies like Windows Media and Digital Rights Management.
In transport, where IP is the basis for most of the new services, security and quality of service are vitally important. But the introduction of IP is forcing a change to a new business model in which service providers are no longer paid by distance.
Revenue must instead be made up in services and content. Here, developers using Microsoft platforms are driving solutions that include ring tones, music, video-on-demand and voice over IP for a unified communications experience, and innovative data services such as location and tracking information. Content provision is essentially about relationships, and that means having a set of partners who can build new and compelling offerings, increase distribution, and raise market share.
AL: Microsoft's business breadth must be a significant help in enabling this cooperation.
MM: Partnerships are definitely the key to succeeding in a converged world and, at Microsoft, around 96 per cent of our revenue is delivered through our partners. We have an existing base of thousands of best-of-breed partners who make it easier than ever to streamline processes and solve business problems collaboratively. This means that service provider customers can pick the combination of partners and solutions that are right for their business needs – no one-size-fits-all, long-term commitment.
AL: And specifically in the OSS space?
MM: NGOSS is a critical piece of the puzzle when we integrate our services framework with a service provider's existing OSS/BSS network. We are designing Web Services interfaces that use the Telemanagement Forum's NGOSS specification to seamlessly integrate services with our customers' operational and business networks. We are very interested in partnering with other industry players to define Web Services interfaces in this area.
AL: Security and network integrity is also becoming a hot issue as we rely more and more on networked communications. What's the take from Redmond on this?
MM: Spam is a big issue that continually interferes with digital communications. We are working hard to educate the public and build anti-spam safeguards into our products to keep spam under control. We're also committed to pursuing legal action against spammers. Earlier this year, we teamed up with America Online, EarthLink, and Yahoo! to file the first major industry lawsuits under the new federal anti-spam law. And, several months ago at the RSA Conference, Bill Gates announced a detailed vision and proposal on how technology can be used to help put an end to spam. This included outlining our Co-ordinated Spam Reduction Initiative and technical specifications for the establishment of Sender ID for e-mail.
Addressing privacy concerns as new technology and services take hold is another challenge. Again, public education is the key here, with clear explanations of why information is needed, what information will be used for, and giving the consumer control over it. Our Chief Privacy Strategist, Peter Cullen, is leading this effort to keep consumers and businesses educated on privacy.
As always, security is a big concern for all of us, especially with the proliferation of new solutions and services that are arriving. Viruses, the risk of identity theft and other threats can have serious impacts along every link in the communications chain. Here again, we are working with service providers, software vendors, and even competitors, to define the legal environment, the processes and deliver the tools that will make the Internet safer. One of the ways we're working hard to improve security is by leading the formation of GIAIS, or the Global Infrastructure Alliance for Internet Safety. This organization provides technology and communications support to the worldwide service provider industry, and facilitates collaboration to help manage and improve security for millions of end users. During a recent virus attack, GIAIS members were able to e-mail 200 million Internet users worldwide within twenty-four hours of the attack to alert and advise them on specific actions to protect themselves from the virus.
And, of course, standards are also a big challenge. We will continue to drive web services standards across the board. We are also working with several standards organisations, like the TeleManagement Forum and the ITU Telecommunications Standardisation Sector, to further develop standards to best address the needs of the industry. 
AL: You mentioned earlier the term 'software-powered service networks.' What exactly do you mean by that?
MM: A network used to be defined in terms of physical nodes and physical links. A service network can be thought of in a similar way, except that the nodes are services – such as messaging, authentication and billing – and the links are provided by the Web Services interfaces.
For example, location and third party content can now be thought of as network 'nodes', that can be leveraged and easily interfaced with any other node on the network to develop richer and more dynamic service packages. These new nodes will therefore need to be manageable in the same sense as traditional network components – provisionable, meterable, measured against a service level agreement, and so on. The service network is not tied to any specific type of device or transport. In fact, it is an inherently cross-physical network, and cross-device – for example, the recent addition of voice communications to the Xbox Live network has taken gaming to a whole new level.
Now, there are a number of implications of this trend.
Firstly, it allows operators to leverage all current capabilities and investments of the current networks while allowing for much larger and more dynamic networks, as it includes many more 'nodes' and participants. Second, it brings the network world and application world together, which means that the business application people, IT people, as well as network people, are increasingly speaking a common language. Thirdly, it also means that features and applications that are exposed as services need to adhere to principles of network management, which many today do not.
Going a step further, for Microsoft this means:
• providing the tools and framework for creating, deploying and executing aggregated services – such as a 'price-check' service that automatically compares prices for a product on several e-commerce sites and displays the information on a mobile phone.
• defining how applications and solutions can become well-managed services; and of course NGOSS is an important piece of this work.
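The 'price-check' aggregation above can be sketched in a few lines of Python. The retailer functions are hypothetical stand-ins for the Web Services calls a real service network would make to remote nodes:

```python
# Stand-ins for two (hypothetical) retailers' services. In a real service
# network these would be Web Services invocations over the network; plain
# functions keep the aggregation logic visible.
def shop_a_price(product):
    return {"widget": 9.99}.get(product)

def shop_b_price(product):
    return {"widget": 8.49}.get(product)

def price_check(product, services):
    """Aggregate quotes from each service 'node' and return the cheapest."""
    quotes = {svc.__name__: svc(product) for svc in services}
    # Drop nodes that don't stock the product
    quotes = {name: price for name, price in quotes.items() if price is not None}
    best = min(quotes, key=quotes.get)
    return best, quotes[best]

print(price_check("widget", [shop_a_price, shop_b_price]))  # ('shop_b_price', 8.49)
```

The aggregator itself is just another node: it could equally be exposed as a service and composed into a still larger offering, which is the essence of the service-network idea.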
We believe this approach will provide a powerful, flexible environment that will enable companies to choose the best possible directions and future strategy – and win. Change, after all, is a constant, and it always brings challenges and opportunities.                               

Alun Lewis is a freelance communications writer and consultant: alunlewis@compuserve.com www.microsoft.com

In the murky world of telecoms fraud, criminals are already sizing up next generation technology for any security weaknesses. So, operators should regard fraud management as an essential weapon in the fight – and not merely an afterthought. Jason Lane-Sellers explains

Fraud has been a thorn in the side of the telecommunications industry for decades. It is estimated that US$35-40 billion of revenue is lost annually on a global basis due to fraud (CFCA 2003) and this figure is likely to increase with the emergence of next-generation services. The cost is not only financial, as operators also run the risk of damage to their corporate reputations and customer relationships.
It is, therefore, hardly surprising that new research from leading telecoms analyst Analysys has revealed that fraud is becoming of increasing importance to operators (Operator Attitudes to Revenue Assurance 2004). In order to tackle the problem effectively, it is imperative that telecoms operators understand where fraud is likely to occur. Consequently, education in fraud management is as important as the use of detection technology, particularly as techniques to misuse networks are continually being refined.
It is important to recognise that fraud does not necessarily have to be perpetrated by an external party; significant revenue can also be lost through internal fraud or fraud committed by other operators, who may be exploiting weaknesses in interconnect contracts or manipulating tariffs.
However, the main threat continues to be external fraud, perpetrated by individuals or organised criminal gangs. This can range from the illegal routing of calls through a company's switchboard, to more elaborate and complex fraud scams such as premium rate service attacks utilising SMS and GPRS services, or trojan programs placed on mobile devices. Next-generation services will become particularly attractive to fraudsters as potentially higher transaction volumes mean that the risk is changing from the loss of call revenue to the wider world of content and e-commerce scams.
Having the correct fraud management practices in place will be essential, as the move to next-generation services will result in increasingly immature and complex technologies entering the market, all of which are potentially more susceptible to fraud. The recent Analysys survey revealed that there appears to be a widening product 'planning gap' with fewer operators taking revenue assurance matters, such as fraud, into account when planning communication products – which will doubtless lead to more widespread fraud.
Much of this has to do with operators seeing next-generation services as major revenue opportunities and therefore launching services very quickly in order to attract subscribers and gain competitive advantage. Consequently, issues such as fraud management often end up as an afterthought. For instance, as operators have tried to sign up customers quickly, they have kept the subscription process quite simple, which has provided many fraudsters with easy access to networks through basic subscription and identity fraud.
Operators are likely to become even more vulnerable once better handsets and improved content become available, as these will be of greater value to the fraudster. Handsets will be targeted for physical theft because of their higher value and immediate access to highly personal information. In addition, by providing open web and Java access to services, fraudsters can potentially download content, then resell it without authorisation, resulting in operators and content providers missing out on due revenue. The reselling of content could also impact on an operator's brand in the near future, particularly with the proliferation of adult and illegal content.
The growing number and type of services combined with increasingly complex value chains has resulted in operators moving further away from the customer, meaning that it has become much more difficult for operators to identify new, as well as conventional, fraud patterns simultaneously. 
However, the latest fraud detection systems, which incorporate event 'fingerprinting' technology, are seen as going a long way towards alleviating the problem. Event-fingerprinting technology allows operators to monitor event patterns in real-time and complements existing rules-based and AI (artificial intelligence) fraud-detection engines to provide a third 'in-line' detection method.  This information can then be directly fed into the fraud case-building process adding to data that has already been interpreted and integrated to provide operators with fraud alarms.
Event fingerprinting allows operators to recognise individuals or communities that have previously been identified as being of interest for investigation and monitoring. Even though a fraudster's given identity may have changed, their methods and communications patterns may not. By identifying communication behaviour, operators can create fingerprint-enhanced profiles to verify whether any fraudulent activity is taking place. Such systems will be of particular benefit to mobile operators, especially those in the pre-paid arena. They will be able to detect customers with a previous fraud history when they take out service on new pre-pay mobiles.  Previously they would have been identified as new customers, rather than existing customers on new phones taking advantage of new contract offers.
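A rough illustration of the fingerprinting idea in Python: behaviour is reduced to a feature vector and compared by cosine similarity, so a 'new' customer whose calling pattern matches a known fraudster's can be flagged even under a fresh identity. The features and threshold below are invented for illustration and are not taken from any commercial system:

```python
import math

def fingerprint(events):
    """Reduce a list of call events to a behavioural feature vector:
    share of international calls, share of night calls, mean duration (hours).
    (Illustrative features only.)"""
    if not events:
        return [0.0, 0.0, 0.0]
    intl = sum(1 for e in events if e["international"]) / len(events)
    night = sum(1 for e in events if e["hour"] >= 22 or e["hour"] < 6) / len(events)
    avg_dur = sum(e["duration"] for e in events) / len(events) / 3600.0
    return [intl, night, avg_dur]

def similarity(a, b):
    """Cosine similarity between two fingerprints (1.0 = identical pattern)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

known_fraud = fingerprint([
    {"international": True, "hour": 23, "duration": 5400},
    {"international": True, "hour": 2, "duration": 7200},
])
new_account = fingerprint([
    {"international": True, "hour": 1, "duration": 6000},
    {"international": True, "hour": 23, "duration": 6600},
])
if similarity(known_fraud, new_account) > 0.95:
    print("flag account for investigation")
```

Real systems would, of course, use far richer features (destinations, call graphs, top-up behaviour) and feed matches into the case-building process rather than acting on a single score.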
Problems for operators 
The move to IP-based networks will also pose problems for operators, as managing fraud will become more an IT, rather than strictly a telecoms, issue. For example, IP networks will be more prone to attacks from hackers and viruses in the same way that traditional IT networks have been. This means that operators will need to adopt an IT culture of regular system and software upgrades. 
The ability to detect VoIP calls in real-time will also be a key requirement in the next-generation environment.  By having an 'always-on' real-time view of VoIP events across the network, operators will be able to detect any anomalous activity and close down a VoIP session immediately.
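As a toy illustration of such always-on detection, the sliding-window monitor below flags an account whose call-setup rate exceeds a threshold – a stand-in for the far richer rules real systems apply before closing a session. All names and thresholds here are invented:

```python
from collections import deque

class RateMonitor:
    """Flag an account whose call setups in a sliding time window exceed
    a threshold (a toy stand-in for real-time VoIP fraud detection)."""

    def __init__(self, max_calls, window_seconds):
        self.max_calls = max_calls
        self.window = window_seconds
        self.events = deque()  # timestamps of recent call setups

    def on_call_setup(self, ts):
        """Record a setup at time ts; return True if the session should
        be flagged (and, in a real system, torn down)."""
        self.events.append(ts)
        # Discard events that have aged out of the window
        while self.events and self.events[0] <= ts - self.window:
            self.events.popleft()
        return len(self.events) > self.max_calls

mon = RateMonitor(max_calls=10, window_seconds=60)
alerts = [mon.on_call_setup(t) for t in range(12)]  # 12 setups in 12 seconds
print(alerts[-1])  # True - the threshold has been tripped
```

The same structure – ingest events, maintain state, decide in constant time – is what lets detection keep pace with traffic rather than running as an after-the-fact batch job.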
Additionally, VoIP detection will allow carriers to enforce local regulations on the legality of VoIP calls. Appropriate and effective monitoring of customer activity will be essential, but operators need to tread carefully, as there is a fine line between effective monitoring and encroaching on a customer's privacy.
It is well established that the longer a fraud is allowed to continue, the greater the losses to the operator. Managing and tackling fraud in a next-generation environment will require much more highly automated techniques that work in real time. By creating real-time boundaries, operators will be able to start protecting themselves from fraudulent activity. In order to help identify new fraud threats and become more efficient in managing the problem, fraud management systems will have to evolve as quickly as new telecoms services; otherwise operators will be playing constant catch-up with the fraudsters.

Jason Lane-Sellers, Fraud Product Marketing Manager, Azure, can be contacted via tel: +44 207 826 5300; e-mail: jason.lane-sellers@azuresolutions.com
www.azuresolutions.com

With jargon aplenty, John Blake reckons that companies should carefully consider all the options before taking the VoIP plunge...

Voice over IP has been around for years but it is only in the last 12 months that it has started to gain traction, moving beyond the early adopters to mainstream customers beginning to consider it as a viable alternative to traditional telephony.
The impetus to upgrade to VoIP is coming from many different angles, not least from customer demand and manufacturer and service provider commitment to the new generation of voice services. The fact that it is now a 'hot issue' reflects the coming of the 'all IP' phenomenon in the corporate sector and the growth of broadband among smaller businesses and the residential market.
So is this renewed enthusiasm for the technology proof that VoIP has thrown off its old reputation as an unreliable and second-rate voice service?  The answer is that it is certainly starting to, but the key to this lies in raising awareness that VoIP is not a stand-alone technology, but dependent upon the network that delivers it.
The promise of advanced and more flexible communications and the lure of cost savings are encouraging enterprises both big and small to look to the technology. But the terminology can be confusing. VoIP covers an array of different types of digitally packetised voice communications. IP telephony (IPT) refers to hosted or company-managed communications carried over an IP network via an IP device; these services, together with voice over DSL broadband and VoIP via a standard Private Branch Exchange (PBX), all technically fall under the umbrella term of VoIP.
The platform for VoIP
Despite its name, VoIP can be run over a number of different network technologies. To illustrate this point, let's look in more detail at the different types of network VoIP can be delivered over and why companies are opting to implement them.
The trend towards an 'all IP' communications infrastructure is well underway. Established network providers around the world are investing in IP and looking to new revenue streams in the digital networked economy through new networked IT services. BT recently announced its own forward-looking plans to create a 21st Century network where all traffic, even that which is initiated on the PSTN, is routed over an IP network. It aims to have this up and running by 2008. In the corporate sector, many large international companies already have an IP Virtual Private Network (VPN) in place. These networks enable swifter and more flexible communications and are designed to carry advanced IP applications. They also have Quality of Service (QoS) technology, which ensures that mission-critical traffic, such as IP voice packets, is not disrupted in the Local Area Network (LAN) by other less important traffic like e-mail.
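At the host level, applications co-operate with such QoS schemes by marking their packets so the network can recognise voice traffic. As a small illustration (assuming a Linux-style sockets API), a VoIP application might mark its media socket with the Expedited Forwarding DSCP value commonly used for voice:

```python
import socket

# DSCP occupies the top six bits of the IP TOS byte; Expedited Forwarding
# (DSCP 46) is the class conventionally used for voice media, letting
# QoS-aware switches and routers prioritise these packets.
# Illustrative only: real VoIP platforms set this through their own
# configuration, and WAN CoS mappings are applied by the carrier.
EF_TOS = 46 << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184
sock.close()
```

Marking alone guarantees nothing: the LAN switches and the MPLS VPN must be configured to honour the marking, which is exactly the end-to-end prioritisation the paragraph above describes.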
Multi Protocol Label Switching (MPLS)-based IP VPNs have Class of Service (CoS) technology which categorises traffic by importance into separate channels and ensures that, as traffic travels on to the Wide Area Network (WAN), it continues to be prioritised. So, for companies with an IP platform in place, is the decision to switch to VoIP simply a 'no brainer'?
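To make the prioritisation idea concrete, here is a minimal Python sketch of strict-priority queueing, the simplest discipline by which a QoS-enabled network ensures voice packets leave ahead of less important traffic such as e-mail. The class names and packet labels are invented for illustration and do not correspond to any particular vendor's implementation:

```python
from collections import deque

# Hypothetical CoS classes, listed highest priority first (names are illustrative).
COS_CLASSES = ["voice", "video", "business-data", "best-effort"]

class StrictPriorityScheduler:
    """Always drain the highest-priority non-empty queue first."""
    def __init__(self):
        self.queues = {c: deque() for c in COS_CLASSES}

    def enqueue(self, packet, cos):
        self.queues[cos].append(packet)

    def dequeue(self):
        for cos in COS_CLASSES:  # scan classes from highest to lowest priority
            if self.queues[cos]:
                return self.queues[cos].popleft()
        return None  # nothing queued

sched = StrictPriorityScheduler()
sched.enqueue("email-1", "best-effort")
sched.enqueue("voip-1", "voice")
print(sched.dequeue())  # the voice packet leaves first despite arriving later
```

Real routers temper pure strict priority with weighted fair queueing, so that low-priority traffic is not starved entirely.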
Running all traffic over a single converged network, and thus having a single point of failure, brings its own complexities. Increasingly, however, companies judge the productivity, time and cost-saving benefits to outweigh the risk of network downtime. To counter that risk, managed network services include resilience planning, along with other measures designed to eliminate single points of failure.
One example of an organisation already reaping the benefits of having a single network for all its communications streams is business information company, Datamonitor. It upgraded its network to a managed MPLS-based IP VPN for both its voice and data traffic between key sites in the UK and US. By implementing this managed service, Datamonitor was freed from the need to invest heavily in, or run, its own infrastructure and has cut the costs of calls between these sites by 50 per cent.
But the switch to VoIP over a dedicated IP network is not just being made once the corporate data network has been upgraded. Having carried out a technology refresh at the end of 1999, to ensure their infrastructure would not be hit by potential 'millennium bugs', many companies are nearing the end of their five-year PBX lifecycles. They are therefore faced with the question of whether to renew their standard telephone exchange or to invest in an IP PBX, a more 'future-proof' solution that supports next generation IP telephony. With manufacturers increasingly announcing plans to phase out production of traditional PBXs in favour of IP products, this replacement cycle has become an important driver behind companies' decisions to migrate to a converged network.
Convergence solutions
Once the decision to implement VoIP via a dedicated IP network has been made, the options open to companies are twofold: gradual migration via a convergence VoIP product; or complete migration to a pure IP telephony environment:
" Convergence VoIP products connect traditional digital TDM phones via a PBX to a gateway, which turns TDM speech into IP for transport over the IP network. For many companies, gradual migration to VoIP via an IP gateway makes financial sense since it does not require huge upfront investment in expensive IP equipment. It also offers the company's employees the chance to adapt culturally to IP voice communications, a consideration that is not to be underestimated since for many the notion of picking up a phone and not hearing a ring tone can take time to come to terms with!
" Pure IPT is the most advanced and future proof form of VoIP, where all elements are IP based. This involves overhauling a company's telephony infrastructure and installing new IP equipment  including the applications, PBXs and the phones themselves. Pure IPT supports advanced IP communications such as video conferencing, file sharing and white boarding (providing the right software is in place). It is also the most forward-looking form of the technology enabling companies that embrace it to derive competitive advantage.  For organisations with a mainly office-based work force, including call centres, where the quality of voice and video communications are imperative to the success of their business, full migration to pure IPT is a sound choice. 
Abbey, one of the largest UK high street banks, recently installed BT's pure IPT solution Multimedia VoIP (MMVoIP), a hosted service that incorporates Cisco technology, and expects to save millions of pounds over the five years of the contract by putting telephony and data over a single network. MMVoIP is an example of an IPT product that is hosted off-site by a carrier using IP Centrex. By using a hosted service, companies can avoid investing in an IP PBX, since this is hosted on the service provider's site. For companies without a skilled IP networking department making large-scale IPT migrations, a hosted service is to be recommended.
But IP is not the only technology that is luring customers to VoIP. DSL broadband in the form of Symmetric Digital Subscriber Line (SDSL) or Asymmetric Digital Subscriber Line (ADSL) also enables cost-effective transmission of voice over a data network. The UK alone already has 6 million broadband subscribers, and the success of broadband voice providers like Skype and Vonage, and indeed products such as BT Communicator, has helped to raise the profile of this flavour of the technology. BT Communicator takes the concept a stage further by allowing customers to manage their communications centrally in a variety of ways, such as voice, email and text, switching easily from one to another at no extra cost.
In addition, we are seeing consumer market technology pushing functionality into the corporate space: many corporate workers are discovering new VoIP technologies at home and are expecting to see the same functionality at work. For many small and medium-sized businesses, particularly those whose workforce is not predominantly made up of office workers or whose business model does not rely on voice communications, broadband VoIP is a good choice. It is very easy to install, requires minimal investment, offers a converged environment that allows employees to use voice, Instant Messenger and the Internet at the same time, and can achieve impressive cost savings on voice calls. There are a number of enterprise-specific broadband services available. BT has just launched its own Business Broadband Voice service, which enables customers to make internet calls from any broadband internet connection, keeping the same number whether they're in the office or working remotely.
Companies considering broadband VoIP, however, should be aware that it does not offer traffic prioritisation capability and its voice quality can diminish if multiple users make calls at the same time.
The choice of voice
So, to summarise: transmission of voice over broadband and IP networks looks set to become prevalent in corporate communications. Ensuring the quality and efficiency of these communications is key to maintaining a competitive edge in today's digital networked economy. With all the variants of VoIP technology available, choosing the right solution is a critical task. Companies looking to upgrade to packetised voice must first consider what network they need to guarantee quality calls, and then choose carefully between convergence and pure IPT solutions, and between DIY and fully managed options. Making an informed decision will ensure that their corporate communications requirements are not merely met, but exceeded.

John Blake, Head of International VoIP at BT, can be contacted via tel: +44 207 356 5000
www.btglobalservices.com

Generating carrier class Ethernet services for business can be tricky, with a number of issues needing to be addressed if Quality of Service is to be assured. Robert Winters provides some guidance

Metro Ethernet service deployments are continuing apace on a global basis with a variety of service offerings and enabling technologies that offer 'real broadband' as an attractive alternative to lower bandwidth DSL and Cable products, high cost leased lines, ATM and Frame Relay. Depending on the region of deployment there are a number of Ethernet technology alternatives and build-out strategies in progress.
For example, in Europe many incumbent service providers are maximising their use of existing SDH transport assets by upgrading equipment to support Ethernet services. With the insertion of new Ethernet line cards, a variety of services can now be offered and new revenue models instituted: non-switched pure Ethernet transport implementations such as Generic Framing Procedure (GFP), and more QoS-oriented switched services such as Virtual LANs (VLANs), coupled with the value-add of MPLS.
Alongside the transport network there are also deployments using hybrid switching and routing technologies with next generation protocols such as MPLS (Multiprotocol Label Switching) and RPR (Resilient Packet Ring).
Along with the enhancements to existing SDH transport equipment and switched Ethernet networks there also exists a growing number of European state sponsored broadband initiatives in countries such as Sweden and Ireland. These programmes encourage the rollout of dark-fibre thus enabling competitive broadband service providers to build their networks over a ready-made physical layer transport medium. This type of initiative offers a reasonably clean slate approach to building an Ethernet product offering. The competitive service provider can at least focus on a deployment technology of choice, such as Ethernet over MPLS or RPR.
However, nothing is ever that easy. As can be imagined, when business-class services (as opposed to best-effort home consumer offerings) are guaranteed on an end-to-end basis between two major metropolitan areas, or indeed within the confines of a particular metro ring, there are challenges wherever 'Carrier Grade Ethernet QoS' is required. In this situation, service providers are expected to offer not only high-bandwidth Ethernet services but also reliability, redundancy and support for high-quality business-class applications. Applications are increasingly delay- and jitter-sensitive: multicast video, time-sensitive e-commerce web solutions and voice over IP (VoIP) are typical examples. This article focuses on the requirements of carrier grade Ethernet QoS at the layer 2 service and IP application level, and assumes that other carrier grade issues related to hardware redundancy (for example, MPLS fast reroute guarantees, inherent SDH protection and RPR protection) are addressed.
Capturing the enormous enterprise business market with differentiated Carrier Grade Ethernet QoS products requires an understanding of the capabilities of Ethernet services and of the applications being transported over Ethernet. Here is what to look out for when offering Carrier Grade Ethernet QoS:
1. Understand the performance of QoS and CoS (Class of Service)
The IEEE 802.1p/q standards for Virtual LAN (VLAN) services offer a method for identifying a service stream, setting bandwidth and assigning a priority that determines class of service (CoS); this is not a pure QoS parameter of the kind found in network implementations such as ATM, which offers attributes like constant bit rate (CBR) settings. So, in order to benefit from Ethernet's inherent cost-effectiveness and high bandwidth while also offering QoS, additional quality metrics need to be brought into the mix, such as those offered through connection admission control (CAC) for end-to-end bandwidth, and MPLS signalling and traffic engineering capabilities. Ethernet industry organisations such as the Metro Ethernet Forum (MEF) have defined service types, including Ethernet Line and LAN Services, for point-to-point and point-to-multipoint/multipoint-to-multipoint services.
CoS identifiers within these services can include specific source and destination MAC addresses and the customer edge VLAN ID/IEEE 802.1p setting. Inspection of these packet headers requires processing power from network devices, which may impact performance and requires verification on a per-service basis. The MEF has defined traffic profiles per CoS identifier that include Committed Information Rate (CIR), Peak Information Rate (PIR) and associated burst sizes. The provider can thus offer a greater number of service options to their customers. For example, a subscriber may connect to a metro Ethernet service at one location with a 10Mbps user-to-network interface and at another location at 100Mbps; the CIR in this case could be 10Mbps. More is to come: with the development of VPLS and loss-less packet transmission in metro Ethernet networks, the number of network options will continue to increase.
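The CIR and burst-size parameters above are commonly enforced with a token bucket policer. The following is a simplified single-rate sketch (production markers, such as the two-rate three-colour marker of RFC 2698, add a second bucket for the PIR); the figures are illustrative only:

```python
class TokenBucketPolicer:
    """Single-rate policer sketch: traffic within the CIR conforms,
    traffic above it is marked for demotion or drop."""
    def __init__(self, cir_bps, burst_bytes):
        self.rate = cir_bps / 8.0        # refill rate in bytes per second
        self.capacity = burst_bytes      # maximum burst credit
        self.tokens = float(burst_bytes)
        self.last = 0.0

    def police(self, size_bytes, now):
        # Refill credit for the elapsed interval, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return "conform"             # within CIR: forward as-is
        return "exceed"                  # above CIR: mark down or drop

# A 10Mbps CIR with a one-frame burst allowance:
p = TokenBucketPolicer(cir_bps=10_000_000, burst_bytes=1522)
print(p.police(1522, now=0.0))  # conform: the burst credit covers one frame
print(p.police(1522, now=0.0))  # exceed: credit spent, no time has elapsed
```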
2. Check ability to guarantee service stream and IP application flow quality.
In the past it has been difficult to test on a per service and per application flow basis since traditional test methods relied on packet blasting at layer 2 only.
Basically, if the layer 2 service pipe was rated by RFC2544 throughput tests, this was generally viewed as a sufficient guarantee of quality. However, to really guarantee carrier grade Ethernet QoS, a far more granular approach is required. Service providers need to be confident that each service, each user and each IP application flow using that service is thoroughly tested for quality.
Therefore, a pragmatic approach to testing is required whereby corporate Ethernet service and application flow models can be quickly built, then emulated and analysed for quality issues throughout the network under test with varying load and network status conditions. Using this test method, QoS boundaries can be realistically determined for both network services and application layers.

3. Guarantee end-to-end QoS
Ethernet services invariably start out their 'circuit life' as a layer 2 service (e.g. a VLAN) originating at the customer premises into some point of aggregation and transport such as MPLS/RPR. The transport method can be a layer 3 VPN such as MPLS RFC2547, converted out the 'other side' back to the layer 2 VLAN and into the remote customer premises. It is important to test on an end-to-end basis. For example, with the possibility of an MPLS misconfiguration, the number of hops and the propagation time can change, which requires end-to-end testing for different traffic-engineered service configurations. Also, it is important that each CoS priority assignment for 802.1 VLANs maps effectively onto the MPLS EXP bits (the equivalent quality metric) and back again. In situations involving MPLS fast reroute, how long does it really take for an individual end-to-end Ethernet service to get back to normal if a disruption occurs? Another consideration is restoration of service when normal conditions resume.
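The priority-mapping point can be sanity-checked with a short sketch. Both the 802.1p PCP and MPLS EXP fields are three bits wide (values 0 to 7), so an identity mapping is a natural default; the table below is an assumption for illustration, not any operator's actual configuration:

```python
# Both 802.1p PCP and MPLS EXP are 3-bit fields (0-7), so an identity
# mapping is assumed here; operators may override individual entries.
PCP_TO_EXP = {pcp: pcp for pcp in range(8)}
EXP_TO_PCP = {exp: pcp for pcp, exp in PCP_TO_EXP.items()}

def ingress_to_mpls(pcp):
    """Map a VLAN priority onto the MPLS EXP bits at the provider edge."""
    return PCP_TO_EXP[pcp]

def egress_to_vlan(exp):
    """Map EXP back to a VLAN priority when leaving the MPLS core."""
    return EXP_TO_PCP[exp]

# The end-to-end check: every priority must survive the round trip.
for pcp in range(8):
    assert egress_to_vlan(ingress_to_mpls(pcp)) == pcp
print("all 8 priorities survive the round trip")
```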
4. Understand the effects of TCP/IP application flows on 'guaranteed' Ethernet services bandwidth
Yes, we all know that Ethernet is a layer 2 service and you should not care about the IP and application layers above. However, when it comes to offering bandwidth guarantees, you need to pay attention. It is extremely important to consider the effect of multiple TCP/IP application traffic flows running over a given layer 2 service and the potential side effects, such as a drop in effective bandwidth. Due to TCP congestion notification schemes, layer 4-7 performance can rapidly degrade, leaving customers bewildered and confused about the service specification and network performance. Rather than facing an irate customer who believes they are not getting the bandwidth pipe they paid for, it is worth testing in advance a variety of voice, video and data traffic scenarios that can cause excessive packet drops. In this way a service provider can better understand how and why this occurs, and can also explain to customers why, for example, a 20Mbps service at layer 2 does not necessarily translate into the equivalent 'application bandwidth'. Of course, with full RFC2544 tests, throughput can be guaranteed at layer 2, but add real application TCP flows into the mix and see what happens.
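One way to quantify the effect of loss on TCP is the widely used Mathis approximation for steady-state throughput, which falls off with the square root of the packet loss rate. The numbers below are illustrative, not measurements from any particular network:

```python
from math import sqrt

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Steady-state TCP throughput estimate (Mathis approximation):
    throughput ~ (MSS / RTT) * (1.22 / sqrt(p))."""
    return (mss_bytes * 8 / rtt_s) * (1.22 / sqrt(loss_rate))

# A hypothetical 20Mbps layer 2 pipe, 20 ms RTT, 1 per cent packet loss:
tput = mathis_throughput_bps(mss_bytes=1460, rtt_s=0.020, loss_rate=0.01)
print(f"a single TCP flow tops out near {tput / 1e6:.1f} Mbps")  # well below 20Mbps
```

Even though the layer 2 pipe passes RFC2544 at 20Mbps, a single TCP flow under these conditions achieves roughly a third of that as application bandwidth.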
5. Have a method of isolating Ethernet quality service issues from customer application problems
The ability to offer end-to-end carrier grade Ethernet QoS usually assumes the customer has the perfect set of well-behaved applications. Again, as an Ethernet services provider, the last thing needed is blame for a service issue caused by application problems. Aside from bandwidth hogging applications such as peer-to-peer (P2P) transactions that sponge bandwidth at an enormous rate, even standard IP applications such as Web, E-mail, VoIP, Multicast and Streaming applications can contribute to latency and loss of bandwidth. The ability to quickly isolate a problem source - and to prove it - is a key element in customer satisfaction.

Pressure on service providers, and on the equipment vendors supplying them, to provide carrier grade Ethernet quality of service (QoS) guarantees is being heightened by the introduction of an array of value-added applications such as Video on Demand, response-time-critical Web applications and VoIP. To improve competitive advantage, there are a number of areas in which QoS issues can be identified and mitigated, with practical quality boundaries worked out so that premium-level business-class services can be guaranteed more effectively and confidently.

Robert Winters is Chief Marketing Officer and Co-founder of Shenick Network Systems Limited, and can be contacted via tel: +353-1-2367002; e-mail: robert.winters@shenick.com; web: www.shenick.com

Ethernet is emerging as a key component for bridging the gaps from the access network to the customer demarcation, as Troy Larsen explains

As business and residential customers raise the bar on demand for new voice, data, and video services requiring higher bandwidth and faster speeds, Ethernet is poised to become a key technology in the subscriber access network environment. Ethernet in the First Mile (EFM) - or perhaps more appropriately Ethernet in the First Kilometre - is rapidly moving to the forefront of service provider options because of three major advantages: simplicity, scalability, and interoperability.
European service providers, in particular, have been quick to embrace Ethernet over fibre during the past few years as a 'low cost, no nonsense' approach to giving customers expanded services to meet soaring expectations. Europe's demand for metro access solutions rose quickly, perhaps due to the density of customers in most geographical areas putting businesses and consumers in close proximity to embedded fibre.
Today, most service providers, including incumbent local exchange carriers (ILECs) throughout North America, are bullish on Ethernet. With competition heating up and revenue opportunities for fibre to the home and business increasing, carriers sorely need solutions that can bring flexible, low cost bandwidth straight to the demarcation points.
According to a recent report from Current Analysis, a US-based telecom market analysis firm, Ethernet is one area that appears to have solid customer demand, and North American ILECs are rolling out aggressive development plans. SBC, Verizon, BellSouth and Qwest have eyed the opportunities and jumped in early to build national Ethernet coverage. MCI more recently launched plans to expand its Ethernet service portfolio and footprint, despite playing catch-up after having to work through WorldCom's Chapter 11 bankruptcy issues.
Ethernet evolution
Ethernet provides several cost-saving benefits for bringing high bandwidth services to customers. First and foremost, Ethernet has been, and will continue to be, the easiest protocol to implement in any type of network topology. It's not only simple to install and maintain, but it is ubiquitous throughout the industry. There are well-defined standards and a worldwide industry supply chain.
Scalability has always been an asset in Ethernet deployment. There is no denying that Ethernet, throughout its long history, has continually increased its bandwidth capacity, as well as its ability to handle larger and larger network topologies. For metro-area backbone networks, 10-Gigabit Ethernet is providing the same scalability advantage.
Interoperability is an issue for any telecom technology, new or old. The standards organisations, as well as interest groups such as the Metro Ethernet Forum, have spent a great deal of time and effort creating standards that make Ethernet the easiest protocol to implement - from the carrier network all the way down to the subscriber network.
Because of these and other advantages, Ethernet provides economic benefits that make it very attractive, particularly in access networks. Compared to asynchronous transfer mode (ATM), synchronous optical network (SONET), synchronous digital hierarchy (SDH), and other protocols, the cost difference is significant for the carrier. Ethernet not only lowers equipment costs, but also costs less to maintain in the network.
Leaping over the hurdles 
Despite its many advantages, Ethernet was plagued in its early implementations by a number of hurdles that prevented it from becoming the protocol of choice for connecting the First Mile. First, carrier customers required the highest (99.999 per cent) reliability and uptime. Although 'best effort' reliability is acceptable in many local or enterprise network situations, it is not tolerated in the telco realm. Because of that, carriers had been reluctant to invest in deploying Ethernet until standards could provide a more acceptable reliability factor.
Another major hurdle was operation, administration, maintenance, and performance (OAM&P) monitoring. Since carriers were creating these networks to generate revenue, the management issue was not only an extension of reliability in general, but a key component for reducing operational expenses while creating competitive pricing structures. Profitability in the long term is essential.
Restoration capability was also an issue that needed to be addressed. Restoration simply means having a redundant link between a carrier's point-of-presence (PoP) and the customer site in the event of a major physical problem, such as a fibre cut. This is generally provided by having dual links to one PoP, known as 'single homing,' or separate links to two different PoPs, known as 'dual homing.'
To be effective, the switchover between the primary and secondary links must be quick enough to remain transparent. The benchmark for this is the < 50 ms switchover time offered by SONET. This precluded normal Ethernet restoration protocols, such as Spanning Tree, which can lose a significant number of packets during longer reconvergence times - unacceptable in a mission-critical network.
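The switchover logic itself can be sketched very simply; the hard part in practice is detecting loss of link and completing the switch within the 50 ms budget. The class below is a hypothetical illustration of 'link-state' redundancy, not a real device API:

```python
import time

class RedundantLink:
    """Hypothetical sketch of link-state redundancy on a demarcation unit:
    on loss of the primary link, traffic switches to the standby path."""
    def __init__(self):
        self.active = "primary"

    def on_loss_of_link(self, link):
        if link != self.active:
            return None                      # failure on the standby: no action
        start = time.perf_counter()
        self.active = "secondary"            # redirect traffic to the standby
        return (time.perf_counter() - start) * 1000.0  # switchover time, ms

link = RedundantLink()
elapsed_ms = link.on_loss_of_link("primary")
print(f"active link: {link.active}, switched in {elapsed_ms:.3f} ms (budget: 50 ms)")
```

In a dual-homed deployment the standby would terminate on a second PoP; the switching decision is the same, only the physical diversity differs.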
The answer to these and other obstacles to EFM viability is the IEEE's recently ratified 802.3ah standard. This standard provides a set of management tools with the specific goal of making Ethernet acceptable in the carrier environment for deployment in access networks. The 802.3ah standard calls for the ability to remotely manage a demarcation unit or the customer premises equipment (CPE) with full OAM&P.
Vendors of Ethernet access solutions are extending the 802.3ah standard to include further functionality. Addressing link restoration requirements, some vendors have introduced implementations that meet or even exceed < 50 ms switchover times. Other solutions enable carriers to offer and administer multi-tiered service level agreements (SLAs) using CPEs with built-in rate limiting capabilities.
Long-awaited standards
At its core, the long-awaited 802.3ah standard sets the groundwork for giving carriers the confidence to deploy today's Ethernet. They can now reap the benefits in managing Ethernet services to the customer premise while guaranteeing any level of service. Additionally, operational expenditures are minimised through remote management capabilities that eliminate the expensive truck rolls of the past.
Rarely considered when planning Ethernet access service is the need for a management agent at each network device. For example, a carrier normally manages the central office through a simple network management protocol (SNMP) that requires an IP address and a management agent. Using the same scheme for every CPE device makes management of the IP resources alone a huge burden. Worse, the added complexity this creates in the network results in reduced reliability.
However, with the 802.3ah standard, the need for an IP address at the customer premise is eliminated. This not only simplifies the setup of each device - which today involves a plug-and-play module with auto-discovery features - but greatly simplifies maintenance requirements over the long term. All of this, of course, equates to less cost and more revenue opportunity.
Another overlooked aspect in access networks is packet size. The maximum size for IEEE standard Ethernet frames is 1522 bytes. However, many Ethernet and IP switches/routers make use of extensions to the frame that result in a larger maximum frame size. To enable all the commonly used protocols - as well as the new emerging IP/MPLS/Ethernet protocols - to run undisturbed between physical locations, service providers require access equipment to have the ability to transmit frame sizes from 64 bytes up to a maximum of between 1548 and 9000 bytes.
Any Ethernet demarcation solution with a maximum frame size below 1600 bytes will substantially limit its attractiveness to service providers. Emerging protocols for transparent LAN services accept the fact that in full duplex mode, Ethernet has no practical limit on packet size. Emerging services and new protocols are already requesting mini-jumbo frames (1900 bytes) and in the future may request jumbo size packets that extend to 9000 bytes.
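The frame-size constraint is easy to express as a check. In the sketch below, the 1600-byte device ceiling and the mini-jumbo and jumbo sizes come from the discussion above; the function name and default are invented for illustration:

```python
STANDARD_MAX = 1522   # IEEE 802.3 frame with a single 802.1Q VLAN tag
MINI_JUMBO   = 1900   # requested by emerging transparent LAN services
JUMBO        = 9000   # full jumbo frames

def forwardable(frame_len, device_max=1600):
    """A demarcation unit with a 1600-byte ceiling (hypothetical default)
    forwards a frame only if it falls within [64, device_max] bytes."""
    return 64 <= frame_len <= device_max

print(forwardable(STANDARD_MAX))            # True: standard frames pass
print(forwardable(MINI_JUMBO))              # False: the service is limited
print(forwardable(JUMBO, device_max=9000))  # True on jumbo-capable equipment
```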
Finally, to ensure restoration features, carriers should look for an Ethernet services demarcation unit that has built-in 'link-state' redundancy capability. This is the ability to detect a loss of link on a primary interface and instantaneously switch to a redundant link. With the advent of pluggable optical interfaces and intelligent, remotely manageable CPE devices, it is possible today to provide a single solution that incorporates all the intelligence needed to provide a redundant link with transparent restoration if and when the customer requires it.
Ethernet revolution
The two basic service solutions for Ethernet access are Ethernet LAN (E-LAN) services and Ethernet Line (E-Line) services. E-LAN services provide multipoint-to-multipoint solutions over a wide-area network, sometimes referred to as a wide LAN solution. E-Line based services, on the other hand, are point-to-point in nature, and fall into three categories.
Simple point-to-point E-Line services physically connect one location directly to another. More advanced E-Line point-to-point services rely on a network with multi-service solutions, meaning that quality-of-service (QoS), advanced VLAN capabilities, circuit emulation, and possibly even encryption services are available at the demarcation or aggregation point. The third E-Line category is point-to-multipoint services wherein one site is connected to several other sites through the network.
Why Ethernet in the access network? Simply put, it is simple, scalable, and interoperable. Ethernet is the most widely used global protocol, supporting data, voice and video traffic while easily bridging the gap between provider and subscriber networks through transparent, but fully managed, demarcation capability. At the end of the day, Ethernet meets every carrier's primary demand: a lower cost solution capable of reaping additional revenue from new and existing access networks.

Troy Larsen is technology marketing manager at MRV Communications
www.mrv.com

Alun Lewis talks to Gordon L Stitt and Martin van Schooten of Extreme Networks about the current market climate and the company’s strategy in the Ethernet arena

The scale and strategies involved in network investment often provide a pretty sensitive barometer for the general health of our businesses and our wider economies. When times are good, the network is a tool for growth - when times are bad, networks can help companies and even whole industries compete more effectively with limited resources.

It is with these thoughts in mind that European Communications recently met with two senior members of Extreme Networks, Gordon L. Stitt, President and Chief Executive Officer, and Martin van Schooten, Vice President of Marketing, to discuss their take on the current opportunities for Ethernet in both the enterprise and public service sectors.

AL: Gentlemen, it was probably around two years ago that we last spoke, just as the industry was still headed south into a gathering recession. What's your take on the current situation now?
GLS: For a start - a lot more optimistic! It's important to remember that in the space of only a few years, both business and industrial strategies and the underlying technologies that they use have continued changing, even though the recession was obviously hitting the mainstream telecommunications sector pretty hard.
On a region-by-region basis, Asia is looking very positive in a number of countries thanks to the continued take up of broadband services by both consumer and business customers. Japan currently has the world's largest metro Ethernet network in Tokyo, with some 200,000 end points, while Ethernet is also being rolled out on a large scale in Korea, supporting a real hunger for bandwidth that's often being driven by domestic applications such as on-line gaming.
The picture is far more mixed in the US, where the post dotcom crash is still impacting on network operators. There's also uncertainty in that market, as operators continue to evaluate different access technologies. Because of the scale of investment that is faced in re-engineering their access networks, things are moving comparatively slowly there. That said, growth in the enterprise market for Ethernet solutions continues to increase steadily.
One of the interesting drivers for this - which we may see echoed in the EMEA region - is the increased demand for compliance with industry regulations, such as in the finance and healthcare sectors. Companies are finding out that unless they can substantially automate even more of their processes and improve the flow of information around their organisations, they'll both drown in paperwork and fail to meet their legal obligations.
We recently had a good example of this kind of development in a contract we signed earlier this year with Pine Digital Security, who are providing a Lawful Intercept solution to Dutch ISPs following the issue of a number of subpoenas to enforce this. This is an important application area, wherever you look around the globe and, in support of this, Extreme has also recently joined the Trusted Computing Group (TCG), which is an open industry standards organisation that produces specifications designed to protect critical data.
MvS: Generally speaking, Europe's showing a nice mix of opportunities for us. On one hand, many operators, both incumbents and CLECs, are actively deploying metro Ethernet networks, though usually in a rather piecemeal fashion. Their strategies, though, aren't set in stone yet, and there are some interesting opportunities for Extreme emerging there.
In both the UK and other parts of the continent, there are emerging opportunities from Internet exchanges as well as from the ISPs themselves.
There's also a lot of Internet catch-up going on in the other various EMEA regions as well, such as in the 'new' Europe and in parts of the Middle East, such as Dubai, which is transforming itself into a major hub for electronic businesses of all types.
AL: So what's Extreme Networks itself been up to since we last spoke?
GLS: While business growth has been steady - and lately we haven't been hit as badly as some of our competitors - we've been able to take advantage of a relatively quiet period to continue investing in new technology and in the company's organisation, and have been able to stand back for a clear look at where the whole industry - and our customers - are heading.
For a start, we've seen a lot of VoIP start to be deployed in the enterprise space and that naturally puts a strain on the capabilities of the traditional data network. Alongside that, there's the continued convergence of the wired and wireless environments, most significantly in our case with the take-off of WiFi as an access carrier for both voice and data traffic. Voice is extremely intolerant of any delay or degradations in the Quality of Service and requires extremely high levels of availability if a company is to seriously consider moving off a traditional infrastructure - and being able to deliver that is exactly one of Extreme's main selling propositions.
MvS: The public network space is also starting to pick up on the next wave of convergence, though here the focus is more on 'triple play' offerings that add video to the more familiar voice and data connectivity services. Domestic customers who find their entertainment suddenly cut off because of network problems can be just as unforgiving as the most hypercritical CEO or CTO of a large business, so our QoS focus goes down extremely well in these markets. Supporting this approach, we're also able to enhance services still further through our strengths in policy management, ensuring that the right data arrives in the right place.
AL: And the wider drivers for growth in the business sector?
GLS: I'm afraid it's convergence again, but this time another aspect of it is involved. What we're also seeing in the enterprise space is an accelerating move to interlink communications and IT applications in ways - and at prices - that have never previously been really possible. While we're all familiar with dedicated call centres, where telecommunications and applications come together, these sorts of functionalities are now starting to be rolled out to support other business departments and applications.
That in turn means that the network has to be far more adaptable and intelligent than it ever had to be in the past. Any networking solution has to deliver a balance between all the hardware and software involved as a totality - and that's where the 'smarts' that Extreme Networks can deliver come in.
Hardware's good at doing some things; software is good at doing others. Only by taking a sensibly holistic approach to the entire environment - and that means implicitly understanding the wider business objectives of your customer - can you hope to deliver a solution that is fit for purpose.
A good example of this is a recent European contract that we signed with Trader Media in the UK in August this year. Trader Media is best known for its Auto Trader series of publications and owns the UK's busiest automotive website - which processes over one million car searches on a busy day through both fixed and mobile services - so the network really is its business. Using our policy management techniques, Trader Media can ensure that when the NAS (Network Attached Storage) devices supporting its two Oracle databases synchronise, there is still ample bandwidth in the network for visitors to access the overlying Web services. These QoS controls also allow data to be synchronised more frequently, meaning that Auto Trader can remain a truly up-to-date source of information for its customers.
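The policy idea described above amounts to strict priority for interactive web traffic, with bulk replication shaped into whatever the link has left. The sketch below is a toy illustration of that principle only; the function, figures and units are assumptions, not Trader Media's actual policy or Extreme's policy engine:

```python
def allocate(web_demand, sync_demand, capacity=1000.0):
    """Strict-priority policy (all figures in Mbit/s): interactive web
    traffic is admitted first, and bulk NAS synchronisation is shaped
    into whatever capacity the link has left over."""
    web = min(web_demand, capacity)
    sync = min(sync_demand, capacity - web)
    return web, sync

# During a database synchronisation burst, web visitors still receive
# their full demand, while sync traffic is shaped down to the remainder:
print(allocate(web_demand=300.0, sync_demand=900.0))  # (300.0, 700.0)
```

Real policy managers typically add minimum guarantees for the bulk class too, so synchronisation can never be starved entirely, but the priority ordering is the essential point.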
Supporting Trader Media is our EpiCenter software suite, which allows staff to manage the network from a single console and, through the open standards EpiCenter supports, to integrate the network with their existing systems management applications.
MvS: As we said, there's a similar high focus on reliability - irrespective of total network scale - in the carrier market. Here, we've been building on a technology we announced to the world towards the end of last year - EAPS, standing for Ethernet Automatic Protection Switching. Essentially what this does is replicate the kind of protection and survivability that's traditionally been enjoyed by SDH/SONET networks, but on an Ethernet topology.
We had BT Exact, BT's R&D organisation, carry out extensive tests on the solution, and they found that EAPS delivered sub-50-millisecond failover on both copper and fibre interfaces. With that kind of performance, carriers can now deploy a highly dependable - yet inexpensive - fibre-optic ring spanning hundreds of miles, combining EAPS with a redundant design and with aggregation, edge and premises switching platforms fully integrated into the entire solution.
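The EAPS behaviour can be sketched as a ring in which a master node blocks its secondary port during normal operation (preventing a forwarding loop) and unblocks it the moment a link fails, restoring connectivity. The toy model below illustrates that logic only; the class and method names are invented for illustration, and real EAPS failover happens in switch hardware within the quoted 50 ms:

```python
from collections import deque

class EAPSRing:
    """Toy model of an EAPS ring: nodes 0..n-1, where link i joins node i
    to node (i+1) % n. Link n-1 carries the master's secondary port and is
    blocked while the ring is healthy, preventing a forwarding loop."""

    def __init__(self, n):
        self.n = n
        self.failed = set()          # indices of failed links
        self.secondary_open = False  # master opens its secondary port only on failure

    def fail_link(self, i):
        """A link breaks; the master reacts by unblocking its secondary port."""
        self.failed.add(i)
        self.secondary_open = True

    def _usable(self, link):
        if link in self.failed:
            return False
        if link == self.n - 1 and not self.secondary_open:
            return False             # secondary port still blocked
        return True

    def reachable(self, src, dst):
        """Breadth-first search over the usable ring links."""
        seen, queue = {src}, deque([src])
        while queue:
            u = queue.popleft()
            if u == dst:
                return True
            neighbours = ((u, (u + 1) % self.n),                 # forward over link u
                          ((u - 1) % self.n, (u - 1) % self.n))  # back over link u-1
            for link, v in neighbours:
                if self._usable(link) and v not in seen:
                    seen.add(v)
                    queue.append(v)
        return False

ring = EAPSRing(6)
assert ring.reachable(0, 5)   # healthy ring: traffic flows, loop prevented
ring.fail_link(2)             # fibre cut between nodes 2 and 3
assert ring.reachable(0, 5)   # secondary port opens, path restored
```

The model shows why a ring plus one blocked port gives SDH/SONET-style survivability: any single link failure leaves every node pair connected via the other arc of the ring.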
AL: The old cliché has it that a network is always more than the sum of its parts. What's your strategy for working with other members of the networking community?
GLS: Extreme understands the need for openness and transparency whenever commercially possible, which is one reason why we've made the Application Programming Interfaces to our new operating systems announced last year available to other members of the networking community.
The sad truth is that if you buy from some of our competitors, you'll find yourself locked in to end-to-end, proprietary solutions that may deliver advantage in the short term, but ultimately limit the scope for technological - and hence commercial - freedom and adaptability.
We have a very broad range of partnerships with many leading players across the whole networking ecosystem. Probably one of the most significant of these is with Avaya, with whom we've recently signed an important agreement to support an ever-widening range of integrated applications, using SIP, for example, to link presence and availability information about staff or a mobile engineering force directly with voice and data communications and with IT systems.
To support this sort of environment, the network, naturally, has to do the job. It also has to be manageable, so we design our solutions to make the extraction of traffic management data as easy as possible. Again, openness is one of the key criteria essential for success in such a dynamic environment.
MvS: It's also worth mentioning that there's a bit of a paradox here that our customers often face - and it's one that our open strategy is designed to resolve efficiently and cost-effectively.
The network may be recognised as being at the heart of most businesses these days, whether it's a national telecommunications service provider or a medium-sized business. The problem is that the network is going to have to change and adapt as the organisation itself adapts to changing market conditions and business opportunities, and as new applications are added or old legacy ones removed from the networking environment. With many historic networking solutions based on a 'one size fits all' model, it's often been a challenge to make the necessary changes without major service disruptions, followed by often extensive periods while the network is fine-tuned to cope with the new world.
That's obviously a situation that is, frankly, unsupportable in today's 24/7 culture, and the reason why we've introduced our new network Operating System in a modular format, capable of being deployed in an incremental fashion to fit with new applications and interfaces.
AL: So it's finally looking like there's some light at the end of the networking tunnel?
GLS: Very much so! Service providers need new, value-added Ethernet-based services to generate additional revenues and protect their customer base from new competitors. Enterprises are adopting multi-service networking platforms to gain an early edge from converged IT and communications solutions. And both in their own ways are contributing to the revival of a sector that's been relatively dormant for far too long - though it's a far more pragmatic world than the fevered pace of the late '90s.                                         
Alun Lewis is a telecommunications writer and consultant  alunlewis@compuserve.com
[l=www.extremenetworks.com/]http://www.extremenetworks.com/[/l]

Described as a new wave of opportunity, the Central and Eastern European markets could provide a new hunting ground for service providers. Alun Lewis talks to Wolfgang Hetlinger of T-Systems about the company’s strategy in these new markets

While it's an obvious truism that telecommunications knows no boundaries, for much of the last century there was a huge divide between the countries of Western Europe and the former Soviet Bloc. Now, as the EC extends its boundaries eastwards, and Russia, and other members of the CIS, take their first steps towards creating free markets, a new wave of opportunity in telecommunications is starting to appear. The benefits for both sides are potentially enormous. While both telecom service providers and enterprises in these regions will need access to Western technology and business methods, the West also has much to gain through the energy, enthusiasm and commitment of the emerging commercial cultures in these countries.
European Communications recently visited T-Systems International GmbH, an information and communications (ICT) service provider serving Deutsche Telekom's business customers worldwide – including telecommunications service providers. The basis for the discussion with Wolfgang Hetlinger, Executive Vice President Sales CEE, Telecommunications Industry, T-Systems, was T-Systems' focus on telecommunications service opportunities in Central and Eastern Europe.
AL: While everyone has naturally heard of Deutsche Telekom, fewer may know T-Systems in any depth. Can you give me a brief overview of the company?
WH: Certainly. We're one of the largest information and communications service and solution providers in Europe, with around 40,000 employees and a 2003 turnover of 10.6 billion euros. While our headquarters is in Frankfurt, with other offices scattered around Germany, we also have a growing presence in around 25 other countries. While we've only been in formal commercial existence since 2001, our experience goes back a lot further than that, and our teams of experts have played vital roles in supporting both members of the Deutsche Telekom family and other service providers for many years – as well as an incredibly broad set of industry sectors, such as finance, public sector, healthcare and manufacturing.
The telecommunications market makes up the main part of our business, contributing around a quarter of T-Systems' total revenues. While Deutsche Telekom naturally remains our biggest customer, other important service providers such as KPN/E-Plus, mmO2, AOL Deutschland, Kabel Deutschland and AT&T also rely on our experience and expertise.
AL: And now you're targeting Eastern Europe and the former CIS. How do you see opportunities emerging there?
WH: Very positively – and that's the reason we've set up a specialised team to cover the region. It's important though to understand that each country is very different and opportunities must be approached sensitively! ICT strategies from both the carrier and enterprise perspective are implicitly linked to local realities – whether it's regulatory issues, the role of legacy infrastructure, or just the general cultural ways in which business is done locally.
With organisations like Gartner measuring growth in telecoms services in these regions at around seven or eight per cent a year, these are obviously attractive and often greenfield markets, when compared to the comparative steady state of telecoms investment in much of Western Europe.
Our strategies for entering these markets vary according to local conditions and, where possible, we initially partner with local organisations. In Hungary, for example, we already have quite a high profile through the work that we have been doing with Matav – part of the Deutsche Telekom Group. We're also increasingly active in Poland, Austria, the Czech and Slovak Republics, the former Yugoslavia and Turkey – and even as far away as Siberia. We also have a strategy of following our customers as they expand, so we're also getting involved in new regions by supporting VPN customers from other industry sectors as they enter fresh geographic regions.
Our corporate strapline is, after all, 'Managed Business Flexibility' – and that means that we often help our clients by taking over management of their technologies while they decide on business strategy. This makes their businesses faster, more manoeuvrable, and far more powerful. We therefore have to practise what we preach to our customers! One size never fits all in ICT.
AL: What do you see as the most popular offerings that you have for these new territories?
WH: For our telco customers, a lot involves sharing our expertise in the Operations Support and Business Support Systems areas. Many of our customers are now rolling out broadband services such as DSL to their own customers who are hungry for bandwidth to support their own business applications. Trying to do that with systems and processes originally designed for circuit switched networks will quickly cripple any innovation, so we find our ability to automate many of the underlying processes is a very attractive proposition.
This can involve designing, building and delivering entire systems – or advising on the best processes to adopt to get the maximum return on network and service investments and their human resources. If our potential customers want evidence of our abilities in this area, they only have to look at Deutsche Telekom's domestic network to see the kind of quality that they will get.
One important aspect of this involves keeping our own integration strategies as open and as flexible as possible. Like an increasing number of service providers, we use the TeleManagement Forum's eTOM model to provide an important level of consistency, while simultaneously adopting a Commercial Off The Shelf (COTS) approach to OSS design, where we often carry out extensive pre-integration work before even making a bid.
We also have a policy of working with other industry leaders in particular technology areas. We already have close business relationships with Telcordia and Micromuse, for example, who are also targeting these regions, giving us access to even more products and expertise.
While each customer has their own particular set of issues, some are commonly shared and many involve getting the most from the network while keeping costs down. As convergence becomes a reality through the shift towards all-packet infrastructures, there's far less focus than there used to be on point solutions that fix a single problem. Our customers recognise the importance of treating the telecommunications value chain as an integrated whole and the holistic perspective that we take on this is much appreciated. Since we already work in all the sectors that a service provider is targeting, we can help them anticipate and understand their own customers' needs and ambitions.
AL: And what are the most important issues confronting your service provider customers?
WH: Ensuring quality of service across that whole value chain is an important aspect of our work. Many business users in these regions – or even Internet cafes – are paying a premium for broadband connectivity over WiFi or DSL and expect a similarly premium level of service. In this context, business processes are often more important than the underlying technologies, and so we often get deeply involved in advising on aspects of engineering workforce management, CRM and billing, as well as providing the supporting systems. Of particular interest to many new operators is our ability to provide an outsourced service for billing, using our sophisticated operations office in Germany.
When it comes to the core network and other central operations of telco customers, we often find ourselves helping them make the transition to new network architectures. To do this successfully, they often have to confront many inherited problems that have accumulated over the years. For instance, operators commonly find problems with their network inventory – even to the extent of having effectively 'lost' up to thirty percent of their network assets through inaccurate data, incompatible data formats or even just through their engineering experts reaching retirement age or moving to new companies. We can help them recover from this dangerous position and turn their inventory systems into a real business enabler.
We can also help an operator's key executives make the best decisions by ensuring that they're presented with the best management information possible. We can do this by integrating data flows from across all the different multi-vendor, multi-technology subsystems that they have in place and then presenting it coherently and consistently through management dashboard interfaces.
Finally, we're able to offer these services in a variety of different commercial models. Some may be risk-sharing partnerships, where our own revenues depend on the success of the service or network that we're supporting. In others, a customer might outsource the complete management of their infrastructure to us – allowing them to get on with their core business of developing and marketing attractive new services.
AL: Anticipating the future is always tricky in our industry. What do you see coming up on the horizon technologically?
WH: Number portability is an issue as it's becoming mandated by law in a number of countries in our target regions – and it's one that we already have extensive experience of. Content is naturally another hot topic as operators start to evaluate their strategies to deal with the so-called triple play services of voice, video and data. As we've been involved with these concepts since their earliest days, we find we're well positioned to advise on everything from portal design to the technologies involved in streaming video to managing digital copyright issues across different media and technologies.
AL: And your perception of the future of the region?
WH: In global terms, they might look like modest markets at the moment, but they have a huge potential in both directions. We see ourselves as long term partners to the region, helping to enable the free flow of best practice between our different countries and cultures. As one demonstration of our intention to engage, we're currently running a series of road shows, initially in Budapest, to help carry our message, face-to-face, across the region.
Alun Lewis is a telecommunications writer and consultant  alunlewis@compuserve.com
[l=www.t-systems.com/]http://www.t-systems.com/[/l]

With broadband entertainment now becoming firmly established, the question of which services to offer and how they can best be deployed is key for operators, says Murali Nemani

Service providers have long been debating the merits of entering the global broadband entertainment (BBE) space. Declining traditional revenues and aggressive new competitors have them looking for new revenue and business growth opportunities. In 2003, BBE deployments by service providers in many regions demonstrated strong demand for entertainment services. It's no longer a question of whether to enter the BBE market, but rather of which entertainment services to offer and how best to deploy them.
Service providers are asking five fundamental questions when considering the BBE services market:
1. What are our market opportunities and challenges?
2. What are the prospective business models and which are most likely to succeed?
3. What key groups comprise the value chain and what roles do they play?
4. What is the optimal framework for BBE service deployment?
5. Which service providers have performed early trials and how has the market reacted?
Market opportunities and challenges
In spring 2003, InSites E-Research and Consulting asked European Internet users to choose the most attractive set of advanced services and the price points they'd be willing to pay. Video on demand (VoD) ranked as the most sought-after service by both males and females, followed by interactive TV and online gaming. Another study – the Cahners In-Stat study of broadband television subscribers – predicts that 15.9 million subscribers will take up this service by 2006.
The film industry has come to view online distribution as a welcome tool in its negotiations with the ever-powerful video rental distributors. It has also begun to make content available online to early adopters around the world.
In support of this industry-wide momentum, the consumer electronics industry is developing a large number of broadband-ready devices, ranging from game consoles and set top boxes to mobile terminals. With all of these devices in the hands of consumers, the need for networks capable of delivering broadband content becomes critical. Consequently, a market enabled by consumer devices complemented by high-bandwidth networks will accelerate adoption rates and open new revenue streams to service providers.
These market changes offer service providers the business opportunities they need to offset declining voice revenues and reduce customer churn. With BBE services, service providers also have the opportunity to increase their average revenue per user (ARPU) while providing a solid defensive strategy against fast moving competitive providers.
For BBE, significant challenges lie in the opportunities themselves. The sheer complexity of managing the BBE value chain, the numerous alliance initiatives, and the mastery of technology integration all require significant effort in business design, customer trials and standardisation. The next few sections will help clarify some of the issues.
Prospective business models
BBE is about selling entertainment services such as video, music and gaming over broadband networks. In the BBE value chain – as in its offline equivalent – content flows from the content owners, through a distribution network, to the content consumers. In this case, the service provider operates the distribution network. How the service provider positions itself in the service delivery process will define its role in the entertainment services value chain. We'll look at three models here.
• The Public Garden model
In this model the service provider is limited to providing a transparent connectivity pipe between the consumers and the content owners. A consumer uses the service provider's network infrastructure to seek out the content owner's website, selects the desired content, and consumes it. This is much like today's Internet.
From a consumer's point of view, the choice of content sources is large. Anyone who has content can put it online. However, it is nearly impossible for the content provider to guarantee a satisfying user experience. For example, when the network connection between the content owner and the consumer is congested, the user experience is sacrificed – especially for video. Also, the wide spectrum of content sources and their fragmented nature often makes for a very difficult and frustrating experience for consumers. With independent payment options for each content source, fear of potential fraud makes consumers reluctant to purchase online content.
Content providers, for their part, establish no long-term customer relationships, and payment authentication and verification become quite cumbersome. Targeted marketing for their content is difficult, since they have no way to selectively target new consumers.
For service providers, this is an unattractive business model. They generate a flat connection fee from their broadband consumers, independent of bandwidth consumption. Since bandwidth consumption is a cost driver, service providers have no control over their network costs. In addition, content providers are using the service provider's infrastructure to generate business without compensating the service provider proportionally for the associated cost.
• The Walled Garden model
In the walled garden model, the consumer is put in a garden with pre-defined content. All the content is licensed by the service provider from the content providers and offered as a service pack to consumers. The service provider handles all content layout, authentication, billing, and quality of service (QoS).
For the consumer, this model eliminates the risk associated with credit card payments. The end-user experience is also guaranteed since there is only one party (the service provider) involved in the end-to-end network path. The limitation is that consumers are restricted in content choice.
For content providers, this model allows them to focus on their core business of producing content. They can then allocate the distribution of content to the service provider who handles the billing, customer support and QoS.
Service providers are now at the centre point of the value chain, with every penny flowing through their books. The trouble with this model is the service provider's level of exposure, as significant resources must be dedicated to content aggregation, layout, maintenance and support.
• The Gated Garden model
In this model, the service provider establishes a tollgate concept through which many content providers can offer content in exchange for a revenue share with the service provider. Content providers have a vested interest in the success of this service offering and will likely promote the carrier's initiatives. Content owners focus on content creation; the service provider is responsible for the user experience through QoS, authentication, billing, etc. The main enabler for this model is a horizontal network platform that not only provides the features of the walled garden model, but also maintains a business-to-business interface with content parties.
BBE value chain and key contributors
Three categories make up the bulk of the supply chain.
• Content Providers: movie studios, music labels, content aggregators, broadcasters and programmers, producing, aggregating and selling consumer content.
• Content Retailers: video distributors, cable and satellite distributors, and newly emerging telecom operators.
• Content Consumers: at the end of the value chain, they desire access to content any time, from anywhere and on any terminal.
Finding the business model that fits the needs of all three may seem straightforward on the surface, but the relationships that exist between these and other market players are complex. Consider:
• the integration of the many components from networking, application and consumer electronics vendors
• the often-overlapping value chain for service delivery with various players and technologies causing friction and affecting customer service
• the dynamic regulatory regime that plays a prominent role in determining the nature of the relationships between the market players
Optimal framework for BBE service deployment
For reasons noted above, the 'gated garden' architecture is emerging as the model of choice for carriers. It offers economies of scale as third-party content providers can easily plug into a pre-determined store-front; strong brand recognition allows for customer ownership and high customer service standards; innovative technology development enables continued service innovation and service differentiation. The optimal framework will include:
• consumer-desired content sources
• business relationships with BBE supply chain players
• a go-to-market service model
• an open network platform for service deployment
Early trials and market momentum
While BBE services are still relatively new, BBE service providers have begun to deliver standardised and scalable services and products. Market pioneers like Kingston Communications (UK) and Aliant (Canada) have played an important role in helping to develop innovative and cost-effective services tailored to the unique needs of end users.
Success stories like Italy's Fastweb – which is delivering voice, data and video services to tens of thousands of users over both fibre and DSL infrastructures – demonstrate the growth potential for BBE. Fastweb uses the walled garden model when it comes to managing their own VoD service via license agreements with movie studios, but has chosen to migrate to the gated garden model for scale and added variety of content. A similar trend is emerging in Japan, where competition from BB Cable TV is actively pushing the incumbent service provider to begin deploying triple play services. Yahoo Japan started with the public garden model for hosting content provided primarily by Yahoo, but has now migrated to the gated garden model, expanding their BBE offering to a wide variety of content providers.
The BBE market presents many new growth opportunities for service providers and content aggregators alike. While it is still too early to determine the clear market leaders, early successes in the space suggest that this market is becoming one of the new broadband battlefields.
While it is clear that telecommunications companies have made the decision to enter into this space, it remains to be seen how aggressive they will be in upgrading their network infrastructure and adopting the right business model for bandwidth-intensive entertainment services.
The opportunity is evident, but to seize it – and stand apart from the ever-growing crowd – requires courage, know-how, and the conviction to find the right partners with the right business model.                       

Murali Nemani is Director of Strategic Marketing for Alcatel's Fixed Communications Group (FCG), and can be contacted via Helen Simpson at e-mail: helen.j.simpson@alcatel.co.uk
A complete white paper on this topic is available from Alcatel's Broadband web site at [l=www.alcatel.com/broadband/]http://www.alcatel.com/broadband/[/l]

Although SMS has provided a lucrative avenue of revenue for mobile service providers, MMS might not be so straightforward. Margrit Sessions explains

Originally built into the GSM specification, Short Messaging Service (SMS) is without question a success story for the wireless industry. End-users have shown they are addicted to sending SMS messages and have even created special languages for communicating with their friends.
SMS is perceived as cheap, with end users paying on average €0.15 per message. However, the cost to network operators of delivering an SMS is only a fraction of that – typically around €0.02. Virtually no bandwidth is required, enabling network operators to make a good profit.
Messaging services have grown fast, with 22 billion SMS messages sent worldwide in 2003, compared to 16.5 billion in 2002. In Europe, 18.3 million MMS messages were sent in 2003, with a tenfold increase in MMS users over that timeframe and an average of over 4.5 MMS messages sent per user. Some 39 per cent of all new handsets sold in Europe in 2003 were MMS-enabled, while 14 per cent were camera phones.
Multimedia Messaging Service (MMS) has been hailed as the next great SMS, and is being positioned as a simple evolutionary path from SMS. But whereas SMS was a success story, MMS may not be. The factors that contributed to SMS's success don't necessarily apply.
MMS, first introduced in Europe by Hungarian operator Westel on 18th April 2002, is an entirely new wireless protocol created specifically for GSM GPRS networks and, in the future, 3G UMTS W-CDMA networks. It is designed to support a wide range of content types, including low-resolution images and music, and promises end-users the ability to create content themselves and to send pictures to their friends.
MMS services do not come without a cost: there are costs for network operators and costs to end-users. For starters, network operators must upgrade to GPRS networks, and MMS will depend on those networks delivering a quality proposition to end-users.
Network operators must also install MMS servers, as legacy SMS servers will not support MMS. These servers will handle content delivery, roaming, user profiles, transcoding and device capability negotiation, and will create charging data records, and so forth.
End-users will need to upgrade their phones. Whereas all GSM phones automatically come with SMS capabilities, this will not be the case for MMS. For MMS to be valuable, both senders and receivers will need to purchase MMS phones, with more memory and higher-resolution displays required to support the service.
Network operators who have launched GPRS are still wrestling with how to charge for content. MMS will not make this task easier. Whereas SMS uses very little bandwidth and is therefore cheap to end-users, MMS is not necessarily so. A simple MMS picture of 10 KB consists of about 300 to 400 times more data per message than an SMS message. A complex MMS with text and audio clips could be as much as 50 KB, about 1,500 to 2,000 times more data per message than an SMS. Pricing MMS is certainly a challenge, and profiting from it will be equally difficult.
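The ratios quoted above imply a comparison against a typical SMS payload of roughly 30 bytes of text, well below the 140-byte maximum the standard allows. A quick back-of-envelope check, with that 30-byte figure as an assumption:

```python
# Sanity-check of the MMS-vs-SMS data ratios quoted above.
# Assumption: an average SMS carries ~30 bytes of text, far below the
# 140-byte maximum payload permitted by the SMS standard.
AVG_SMS_BYTES = 30

def mms_to_sms_ratio(mms_kb):
    """How many times more data an MMS of mms_kb kilobytes carries
    than an average SMS message."""
    return (mms_kb * 1024) / AVG_SMS_BYTES

simple_picture = mms_to_sms_ratio(10)  # ~341x -- inside the quoted 300-400x band
complex_mms = mms_to_sms_ratio(50)     # ~1707x -- inside the quoted 1,500-2,000x band
print(round(simple_picture), round(complex_mms))
```

Either way, each MMS moves orders of magnitude more data than an SMS, which is exactly why per-message pricing modelled on SMS is so hard to sustain.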
Tarifica compared MMS prices across operators, and every effort has been made to ensure that these prices are up-to-date and accurate. Prices are in Euros per message, apply to post-paid services and exclude VAT. During the launch phase, many operators offered MMS at no charge. Since launch we have seen some changes in the pricing of MMS. Operators have also started offering MMS bundles, charged at a fixed monthly rate for a set number of MMS, which brings the per-message price considerably below the rate charged for individual messages.
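The bundle effect is simple arithmetic. With hypothetical prices (ours for illustration, not actual operator tariffs), the effective per-message rate falls well below the individual rate:

```python
# Illustration of why bundles lower the effective per-message price.
# All figures are hypothetical, not actual operator tariffs.
per_message_price = 0.39   # EUR per MMS when charged individually (assumed)
bundle_fee = 5.00          # EUR fixed monthly fee for the bundle (assumed)
bundle_size = 30           # number of MMS included in the bundle (assumed)

effective_price = bundle_fee / bundle_size     # EUR per MMS inside the bundle
saving = 1 - effective_price / per_message_price
print(f"effective bundle price: EUR {effective_price:.3f} per MMS")
print(f"saving vs individual rate: {saving:.0%}")
```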

Margrit Sessions, Senior Analyst, Tarifica, can be contacted via tel: +44 207 692 5292; e-mail: msessions@tarifica.com  www.tarifica.com

Revenue management can be divided into separate, distinct stages and objectives, all of which are crucial to operators seeking to maximise profit. Alan Laing explains

There's a lot of talk these days in the telecom market of the need for revenue management systems within carriers, and most of the companies offering them are from one or other segment of the billing industry. They may have come from retail or interconnect billing; they may be offering licensed software or a managed service. They all want to say they can offer revenue management, so what is this new holy grail for the sector?
In simplistic terms, you could say revenue management is the billing industry's adaptation to the market downturn in telecoms that started in 2001 and from which there are still only timid signs of emergence even now.
As consumers and businesses around the world have pulled in their horns, reining in spending on all things including communication costs, revenue growth at telcos great and small has slowed in comparison with the glory years of the Internet boom. When your top line is growing conservatively, or not at all, sound business sense dictates you must look to your bottom line, and it is no coincidence that carriers now emphasise profitability over revenue growth, in some cases at the behest of the financial markets.
Leaner and meaner times
Operators have got leaner and meaner, trimming their staff, lowering net debt and concentrating back on their core businesses. Far-flung empires, often comprising a mishmash of minority shareholdings around the globe, have been pared back to the manageable and, wherever possible, profitable – or at least with the prospect of becoming that way. For every international conglomerate like Vodafone there is now a downsized giant like BT, doing what it does best in the countries it feels most confident about doing it, rather than trying to be all things to all people half way round the world.
If my priority has gone from growing my revenue – even if it cost me a fortune to do it – to increasing the profit I derive from my business, I must run a tight ship, and keep a close watch on every phase of it to ensure there is neither waste nor squandered opportunity. This is, in essence, what revenue management seeks to do.
Another aspect of increasing profitability is selling more services to the same customer, and it is for this reason that DSL and CATV providers now want to offer you voice telephony while mobile operators want to offer you data services as well as voice. With multiple services on offer from a single provider and customers picking and mixing them, then paying for them all in different ways, revenue management is vital to achieving a single view of the subscriber and knowing best how to target him or her with future products.
In other words, if it's a teenager that looks at lots of video clips, offer them funky ring tones based on what the clubs are playing, while if it's a business executive using lots of WiFi to connect remotely to office applications, offer loyalty points that can be spent at hotels and restaurants abroad.
The four stages of revenue management
An analysis of the journey revenue makes through an operator led us, at Portal, to coin the phrase Revenue Lifecycle which, like the Ages of Man, can be said to fall into four stages. There is Revenue Generation – which is when a subscriber consumes a service and starts to generate revenue for the carrier. This stage can only begin once processes such as provisioning of the service, activation and authorisation have taken place, so it is in the carrier's interest that these are carried out as quickly as possible after signing up the customer. This will also increase customer satisfaction (ever signed up for a service then waited three weeks for it to start?), a sine qua non of upselling them to other services and growing share of wallet.
Next comes Revenue Capture which, on the face of it, sounds straightforward enough. It's knowing how much of a service has been consumed in a given timeframe in order to bill correctly and promptly. In fact it's considerably more complex these days, as a service (fixed as well as mobile) may be prepaid, in which case the carrier must know in real time how much credit the subscriber has, in order to warn them to top up before it runs out. In the case of content services, this may mean advising them that the next video clip will be the last for which they have funds.
Another layer of complexity in modern telecom services comes from the fact that, increasingly, families may want to have several different numbers, one for each member yet all grouped together on a single bill to the account holder, the pater familias. Equally, if a customer has a find-me service whereby calls to an office phone are rerouted first to a home number and then on to a mobile number, these will need to be billed for correctly, based on the rates for each of those individual services, plus an additional fee for the trouble of re-routing the calls.
Even a single subscriber today is liable to be a multi-faceted one, maybe defining certain calls from his or her phone as billable to an employer, while others are strictly personal, or paying for content downloads by credit card while voice calls are postpaid and e-mailing is prepaid. A parent may wish to stipulate that a teenage child whose number is normally a prepaid one, fed by pocket money, should be able to make postpaid calls to a taxi firm, billed to the parent's account, if credit has run out.
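The real-time prepaid logic described above can be sketched in a few lines. This is a minimal illustration under assumed names and prices; it implies nothing about any particular billing platform's API:

```python
# Minimal sketch of real-time prepaid credit capture: authorise each
# chargeable event, warn when the next one exhausts the credit, and
# deny once the balance is insufficient. Names and prices are assumed.
from dataclasses import dataclass

@dataclass
class PrepaidAccount:
    balance: float  # remaining credit in EUR

    def authorise(self, price: float) -> str:
        """Decide whether one more chargeable event (e.g. a video clip)
        can be delivered, and warn when it uses up the last of the credit."""
        if price > self.balance:
            return "denied: top up required"
        self.balance -= price
        if self.balance < price:
            return "allowed: this is the last one you have funds for"
        return "allowed"

acct = PrepaidAccount(balance=1.00)
print(acct.authorise(0.40))  # allowed
print(acct.authorise(0.40))  # allowed: this is the last one you have funds for
print(acct.authorise(0.40))  # denied: top up required
```

A production system would of course do this in the network's real-time charging path rather than after the fact, which is precisely why prepaid makes revenue capture harder than it first appears.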
Then there is Revenue Collection, the bit we all love, when we bill someone for work we've done. In today's world this phase, too, has grown in complexity, as a carrier may be billing on behalf of other players in the value chain, such as roaming partners or content providers. Their presence in the chain makes it even more important to bill correctly and quickly, as they want their money too.
Revenue Capture and Collection should also give the carrier the information on customer behaviour to be able to develop new products and respond to market trends swiftly, lowering prices on certain services, bundling different products or launching new ones.
Last, but by no means least, is Revenue Assurance, which means making sure that there are no leaks: when, for instance, someone's account has been deactivated, yet they continue to receive the service for another week; or when an interconnect partner is being paid too much because you don't have the wherewithal to check what they bill you for and dispute any discrepancies from what you think you owe them.
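The interconnect side of revenue assurance amounts to comparing a partner's invoice against your own records and disputing the difference. A minimal sketch, with hypothetical call records and a hypothetical `reconcile` helper:

```python
# Sketch of the interconnect check described above: compare what a
# partner invoiced against what your own records say you owe, and
# flag discrepancies for dispute. Call IDs and figures are illustrative.
def reconcile(own_records: dict, partner_invoice: dict,
              tolerance: float = 0.01) -> dict:
    """Return the calls where the partner billed more than our records support,
    as call_id -> (expected, billed)."""
    disputes = {}
    for call_id, billed in partner_invoice.items():
        expected = own_records.get(call_id, 0.0)  # unknown call: we owe nothing
        if billed - expected > tolerance:
            disputes[call_id] = (expected, billed)
    return disputes

own = {"c1": 0.12, "c2": 0.30}
invoice = {"c1": 0.12, "c2": 0.45, "c3": 0.20}  # c2 overbilled, c3 unknown
print(reconcile(own, invoice))  # {'c2': (0.3, 0.45), 'c3': (0.0, 0.2)}
```

The same comparison run in the other direction catches undercharging for interconnect traffic you terminated, the mirror-image leak.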
Billing system is key
An operator's billing system is key to success in all four stages described above. It needs to interact with provisioning, activation and authorisation for revenue generation to begin. It needs to work with data from the network to know how much revenue a given subscriber has generated for the operator, i.e. it needs to carry out revenue capture. It must generate the bills, or reduce credit levels, and enable settlement with partners during revenue collection, and it must scrutinise and report on network activity to avoid leakage, overpayment for interconnect traffic sent or undercharging for interconnect traffic received, thereby providing revenue assurance.
The objectives of revenue management
The three pillars around which a billing vendor's revenue management offering should be developed are:
• Optimisation of value, achieved by delivering an integrated suite of revenue management tools for current and future areas of activity
• Maximisation of profitability, by offering a single platform for multiple services, business models and customer types, and
• Promotion of business agility, enabling operators to move towards real-time response to changes in the market.                                                         
The third element means integration with other systems operating inside a carrier, in particular CRM and ERP. CRM requires subscriber behaviour data to help craft better, more personalised services to promote customer satisfaction and up share of wallet. ERP can interact with the billing and revenue management system in place to streamline business processes as well as to oversee invoicing and make financial projections.
Integration with business applications
Some billing vendors have acquired CRM businesses, while we at Portal have preferred to integrate with the market leader, Siebel. We have a similar relationship with that heavyweight of the ERP world, SAP. These pre-tested solutions are designed to speed implementation, again enabling a faster time-to-revenue for the carrier as well as reducing complexity of management.
As top-line growth has slowed, telecom operators are turning their attention to the bottom line, prioritising increased profitability over general subscriber growth. They are focused not only on stripping cost out of existing legacy billing systems, but also on achieving higher revenue per subscriber through next-generation revenue management solutions that operate across offerings, channels and geographies. The benefits of this approach include:
1) the ability to rapidly generate revenue from customers immediately after they have subscribed (Revenue Generation);
2)  knowing in real-time how much to bill customers for, or in prepaid environments, only allowing access to services they have subscribed to and still have enough credit for (Revenue Capture);
3) taking the correct payment for services promptly and settling with third parties such as content or service providers, but only for exactly what the carrier owes them (Revenue Collection) and
4) eliminating revenue leakage through fraud or first generation back-office procedures that enable terminated customers to continue using services, or interconnect partners to charge more than they are owed (Revenue Assurance).
What's more, as today's new revenue management solutions integrate fluidly with other back-office systems such as CRM and ERP, operators will benefit from more sophisticated trend analysis for product/service development, both for end customers (new billing packages, tariff bundles, service offerings) and third-party suppliers (portals on which they can monitor their content's reception on the network, and new ways of advertising its availability).
As carriers begin to recognise the bottom line value of taking a unified approach to Revenue Management, we can expect to see operators more successfully advancing their efforts to build differentiated global brands. Revenue management will become a central strategy to better service their most profitable customers through the launch of innovative new products and services that not only meet, but exceed customer expectations.     

Alan Laing, Vice President and General Manager Europe, Middle East, Africa, Portal Software, can be contacted via e-mail: alaing@portal.com  www.portal.com

Although offered a degree of flexibility by the European Commission during the dark days of recession, should mobile operators now be pressured to meet commercial Location Based Services accuracy targets, asks Jake Saunders

On 11 June 2004, the European Commission (EC) carried out a consultation on value-added data services to canvass concerns and opinions about the current state and future direction of that sector. By the end of the year, the EC will also have carried out a review of compliance with its emergency location identification E112 initiative. Concise Insight believes that these two areas of mobile communications should be reviewed in light of each other: commercial mass-market LBS and E112 deployment are intertwined.
When the EC framed its E112 mandate for EU cellular operators, it took a more flexible line than its US counterparts. Cellular operators were obliged to hand off the location co-ordinates and personal details of cellular users in distress to the emergency services, but no accuracy obligation was placed on the operators.
In the UK, the average level of delivered accuracy is between 1 and 4 km, and the spread of x-y location readings is quite large indeed.
This flexibility was merited in light of the concern that European cellular operators were feeling the strain of the 3G licence fees, paid at a time when the industry was experiencing one of the most significant downturns in its history. It was a prudent course of action. But what should the EC do now?
The tide is turning
2004 is proving to be a much more optimistic year than any of the previous three. Many operators have cleaned up their balance sheets and are reporting positive cashflows. Until now, the commercial LBS market has failed to meet expectations, as the anticipated take-off in mobile data services failed to materialise. But that is changing: Vodafone UK reported that 16.9 per cent of its service revenues came from non-voice applications as of March 2004, of which 2.6 per cent was content and value-added data service related; that is more than double the March 2003 figure. The contribution from location-enabled data applications, however, is still very small. Across the top 20 markets in Western Europe, commercial LBS revenues represented just US$285 million in 2003.
Distinctive and robust LBS applications are continuing to be developed, but if the EC believes that the current cashflow from LBS applications will spur European cellular operators to purchase high-accuracy position-determining equipment, it may be in for a long wait. As research from Concise Insight's European Location-Based Services 2004 report shows, commercial LBS applications are gaining traction but, at this rate, European cellular operators are unlikely to have high-accuracy equipment in place for mass-market use before the turn of the new decade.
We would argue that it may be constructive for the EC to tighten the E112 mandate: not only requiring operators to pass on the contact details and the end-user's very approximate location to the emergency services, but also insisting on a requisite level of accuracy. The accuracy target could be set flexibly in the early days and reviewed over time. The key to facilitating this process is a finance-raising mechanism.
Changing society
There are a number of challenges facing European society. In most markets, nearly everyone who could own a mobile phone does own one:
• From its own research, the EC is aware that approximately 40 to 60 per cent of calls made to the emergency services are made on a mobile phone, and that figure is rising;
• As a result, in 2004, 64 million calls will be made to the emergency services in Western Europe, and that figure is rising;
• The volume of intra-EU country visits of tourists and business personnel has reached over 85 million per year;
• The location of every fixed-line phone is known; that is not the case for mobile handsets.
The original premise for being flexible on location accuracy is no longer tenable:
• Operators have cleaned up their balance sheets and restructured their operations to be more profitable;
• Applications providers have developed LBS applications but they have little leverage to persuade the operators to install high accuracy equipment.
Therefore, given the current industry trajectory, high location accuracy for the mass market could well not appear until the end of the decade at the earliest. This is acceptable for most commercial applications, but in a world where everyone has a mobile, there are wider social and welfare implications that should be taken into account as well.
If the current bottleneck in the transfer of personal and location data between the operator and the emergency services is not resolved and location accuracy does not improve, ambulance, fire brigade and police response times could well get worse, not better.
Tougher mandate
Therefore, could a tougher mandate be a 'win-win' situation for all parties? Certainly the industry does seem to be caught in a negative triangle of interoperability, capital expenditure, and handset feature-set concerns:
• A lack of interoperability currently prevents international roaming subscribers from taking advantage of LBS, although current standards being finalised through the hard work of the 3GPP, ETSI, the OMA and the GSM Association are slowly but surely rectifying the situation;
• Network-based solutions such as E-OTD and U-TDOA, as opposed to basic base-station cell-ID solutions, face challenges in achieving critical mass, as operators have been hesitant to pick up the capex tab;
• A-GPS handsets have not materialised because handset manufacturers have been reluctant to install GPS in a large number of GSM handset models, due to intense competition from other components and applications vying for battery life, space in the handset and overall cost.
Therefore the EC and the member states need to make a clear decision to either:       
1. Remain flexible on the issue of location accuracy and accept that the emergency services could find it increasingly difficult to locate distressed individuals within the shortest time possible;
2. Or impose a high location accuracy mandate on the operators. Certainly there will be grumblings from the operators over political interference and capex but there is a clear-cut application that is desperately in need of accuracy, and that is emergency response.
Knock-on effects
High-accuracy PDE equipment would also have knock-on benefits for value-added data services, allowing operators to spread the cost of location accuracy across a large number of direction-finding, traffic notice, community friend-finder and even gaming scenarios.
If the EC does opt for a stricter interpretation, then the key issue for consideration must be raising the money. The implementation of personal and location data IT transfer systems between the operators and emergency services has been disjointed in most countries, primarily because no clear finance-raising model has been established.
Different countries, national emergency authorities and operators are committing at different levels of priority, and there is a danger that this fragmentation also affects the deployment of location accuracy. It is an unpleasant possibility that your choice of operator could dictate not only the quality of coverage you have in certain parts of the country, but also how quickly the emergency services arrive if you are in distress.
The US introduction of E911 has been contentious, and the EU could certainly learn to avoid some of the US's pitfalls, but it does clearly set out a fund-raising model. For Europe, either a fixed fee per subscriber or a percentage levy on ARPU could be put towards the E112 budget. A proportion of that budget could then be set aside for the implementation of personal and location data IT transfer systems between the operators and the emergency services, and the remainder could be allocated to the high-accuracy solution of the operator's choice. If the operator prefers a network-based solution, it can start to draw the funds together; if the operator prefers a handset-based solution, the allocated budget could be used as an incentive for handset manufacturers to incorporate GPS into more mainstream handset models to ensure mass adoption.
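As a rough illustration of the two funding mechanisms just mentioned, here is the arithmetic with an assumed subscriber base, ARPU, fee and levy rate (all figures are ours and purely illustrative, not proposals from the EC or any operator):

```python
# Worked example of the two E112 funding mechanisms described above.
# Subscriber count, ARPU, fee and levy rate are illustrative assumptions.
subscribers = 20_000_000   # assumed national subscriber base
arpu = 25.0                # assumed monthly ARPU in EUR
fixed_fee = 0.10           # assumed fixed E112 fee per subscriber per month
levy_rate = 0.005          # assumed 0.5% levy on ARPU

fixed_fee_budget = subscribers * fixed_fee * 12        # annual fund, fee model
levy_budget = subscribers * arpu * levy_rate * 12      # annual fund, levy model
print(f"fixed-fee model: EUR {fixed_fee_budget:,.0f} per year")
print(f"ARPU-levy model: EUR {levy_budget:,.0f} per year")
```

Either mechanism yields a fund in the tens of millions of Euros per year for a mid-sized market, which is the scale of money the data-transfer systems and position-determining equipment would require.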
In most countries, consumers already pay 17 to 22 per cent VAT. You could argue that the EC/national governments could meet the operators half-way, by declaring that x percent of that VAT paid by the end users is retained by the operator. Market driven solutions have, for the most part, been the most cost efficient mechanism to deliver results, but sometimes the wider social context needs to be considered. Mobiles have become the norm in our lives.
Due to the current state of the LBS/E112 marketplace, there is a distinct possibility that it could be safer to make an emergency distress call from a fixed-line phone. It could even be argued that if a laissez-faire policy on high accuracy is maintained, there is a danger of social discrimination if an end-user were to choose a network with a lower level of location accuracy, or could not afford a handset with A-GPS built in.
A 'raise the bar' higher-accuracy location policy would have knock-on benefits for value-added data services, as there is a whole host of applications that could benefit. Addressing the higher-accuracy funding mechanism would allow operators to install the position-determining equipment of their choice to suit their commercial LBS application needs, as well as deliver improved response times and location fixes for subscribers in distress.

Jake Saunders, Director, Concise Insight, can be contacted via e-mail: Enquiry@Concise-Insight.com

From June 2018, European Communications magazine has merged with its sister title Mobile Europe, into Mobile Europe and European Communications. No more new content is being published on this site; for the latest news and features, please go to: www.mobileeurope.co.uk