Features

Enterprise CIOs and analysts discussed how to align enterprise wide area network performance with business goals over breakfast at BT Tower this week. The overwhelming message was that networks need to be managed a lot better than they are now, but are enterprises prepared to stump up the costs?

Enterprises across the globe froze their telecoms budgets in 2009 because of the economic crisis. But they now face growing network issues as a result.

The problem is that although enterprises may not have spent much money on upgrading their wide area networks, they have continued to add data-heavy services and applications. Many companies therefore need an upgrade of some description as their networks begin to struggle with the heavier demands on bandwidth.

According to Phil Sayer, principal analyst at Forrester Research, most large multinational companies have been forced to undertake some sort of WAN optimisation measures to improve the performance of their networks. He added that the best approach is to tackle how applications themselves can be speeded up, for example by reducing the number of TCP "handshakes" an application has to make on its journey across the network.
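
To see why those round trips matter, consider a rough sketch with assumed numbers (the round-trip time, handshake counts and server time below are illustrative, not figures from Forrester):

```python
# Rough illustration: per-transaction time over a WAN is dominated by the
# number of round trips ("handshakes") the application makes. All figures
# below are assumptions for the example, not measured values.

def transaction_time_ms(round_trips: int, rtt_ms: float, server_ms: float = 20.0) -> float:
    """Total time for one application transaction, in milliseconds."""
    return round_trips * rtt_ms + server_ms

ASSUMED_WAN_RTT_MS = 180.0  # e.g. an intercontinental link

for trips in (40, 10):  # a chatty protocol versus an optimised one
    print(f"{trips:2d} round trips -> {transaction_time_ms(trips, ASSUMED_WAN_RTT_MS):,.0f} ms")
```

Cutting the number of round trips from 40 to 10 in this example takes the transaction from over seven seconds to under two, without adding a single megabit of bandwidth.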

But some enterprise CIOs still have their work cut out when it comes to making the case for potentially costly network improvements and for installing tools such as those from Ipanema that provide greater network visibility.

"We've noticed that pure bandwidth prices have come down," said Simon Jones, service coordinator WAN at Clariant, a speciality chemicals manufacturer. "But we are ramping up costs on the technology side. We're having to pay more."

As Jones said, some enterprises still prefer to throw more bandwidth at their problems because it is the cheaper short-term solution. "If it then does not work, you have the argument that you need to invest in network improvement measures," he commented.

Jones said Clariant first installed Ipanema tools around six years ago and this has allowed the company to see which applications are being used on the network and to better prioritise traffic.

Persuading management to allocate more budget to IT services is not helped by the fact that "network management" is far from cool: "Cloud computing is sexy, but network management is not," said Sayer.

In an ideal world, he added, people would be spending more money on tools and services to improve how things work.

"The cloud is getting the headlines now," agreed Ivor Kendall, director of network services at BT Global Services. "But it's all about application performance; how apps work over the network."

Sayer warned, however, that WAN optimisation tools are not "fit and forget" applications. "They need to be managed," he said. He pointed out that some carriers such as BT Global Services are also now offering CPE management as part of their overall managed network service portfolios. "But the future of WAN optimisation is getting it off the customer premises and into the network," he added.

The WAN optimisation market certainly appears to be a buoyant one: Sayer pointed out that up to a dozen companies are still competing for market share, with Riverbed, Blue Coat Systems and Expand the current market leaders. Other players include Cisco, Silver Peak, Juniper, F5, and Citrix. Ipanema Technologies is slightly different in that it does not sell directly to enterprises but uses channels such as systems integrators, VARs, and operators such as BTGS and Orange Business Services.

Infonetics Research estimated that the global WAN optimisation market exceeded the US$1-billion mark in 2008, but declined slightly in 2009 because of the recession. Growth is expected to return this year as economic conditions improve.

The TM Forum is learning from its work in the area of defence to outline an approach to security within the cyber community

Cyber security is arguably one of the greatest concerns for companies around the world and an area where big chunks of enterprise budgets are allotted in order to keep sensitive data and other corporate assets safe and sound.

Security management may be top of mind for IT managers and CIOs everywhere, but today the most common way of addressing the threats out there is through highly customised solutions that are specific to a particular company.

While there are plenty of off-the-shelf security solutions on the market, many enterprises will use those products in a custom fashion geared to their requirements, and often this comes at great expense and effort to their IT department.

There is no doubt that the need to secure cyberspace is practically a mandate at most types of company; government bodies, private corporations and other enterprises are looking for a safe and reliable environment for their information transactions, and service providers, application developers and solution integrators are realising that they need to take it upon themselves to implement best practices in all aspects of network and data security.

TM Forum may not come to mind immediately when you think about enterprise security, but because our Solution Frameworks and other standards are so widely used at companies around the world, we have jumped into the fray.

Learning from Defence
In 2008, TM Forum launched a Defence Interest Group to acknowledge the fact that defence and military agencies and contractors had been embracing and adopting TM Forum standards such as our Solution Frameworks. The group was formed in order to create a community of interest focused on exploring new areas of standardisation as well as enriching existing TM Forum standards for the defence industry.

Charter members of the group included the U.S. Department of Defense's Defense Information Systems Agency (DISA), the NATO C3 Agency, and assorted defence contractors and suppliers such as Boeing, Thales Communications, TNO, QinetiQ Ltd., EADS, Logica and more.

Just a few months later the group was promoted to a Sector within TM Forum.

We have a number of defence contractors and related agencies active in TM Forum, which was the impetus for the formation of the Government and Defense Initiatives within our organisation, but I really believe there is a huge opportunity for a much larger collaborative effort with regard to security management.

So far, the U.S. DoD has been driving our work in this respect, but because this area is also of great interest in Europe and other parts of the world, we're looking to attract a broader group of members and non-members to learn about TM Forum, understand the applicability of TM Forum best practices and standards as they relate to defence and, most importantly, get involved in our work.

Security Management Initiative
With that in mind, earlier this year we launched the Security Management Initiative within our Collaboration Program. While it's being spearheaded by our defence members, the work that will come out of the initiative will have much broader relevance across industries.

The ultimate goal is a product that has some kind of certification or seal of compliance around security. But as you can imagine, it will take a number of smaller steps to reach this end point.

The team has completed a draft whitepaper outlining our approach to security management and identifying all the areas within TM Forum's Frameworks that could be affected by security management or should have it incorporated.

The second step is a project plan that will lay out exactly when the work will be completed. The end goal is to incorporate this work into our Solution Frameworks, and beyond the whitepaper and timeline we'll also be relying on contributions from within our membership and from outside.

We're looking for anything that will help us reduce the amount of time and effort that it would normally take to complete this work, so we are taking the best of breed that exists today rather than reinventing the wheel by starting from scratch.

Cyber security is a very critical area for companies of all sizes and across all industries. It's an ongoing threat that's not going away, and we hope through our aggressive efforts and work we'll be able to keep the threats at bay.

About the author:
Christy Coffey is Head of Cable Sector and Defense Sector, TM Forum

Martin Bishop takes a look at the potential of Ethernet WAN services, and the dilemma facing global enterprises

Ethernet Wide Area Networking (WAN), although described as one of the most exciting developments in corporate networks, has taken longer than expected to gain genuine acceptance in the enterprise market. Of course, Ethernet has been ubiquitous in company LANs for years, and the prospect of being able to extend this simple-to-use, cost effective technology over larger areas and between both regional and global offices has enormous potential.

For the past three years, however, Ethernet WAN services have been at a crossroads. Enterprise take-up has been restricted, and end-user companies have faced a dilemma: does it really make sense to employ Ethernet WAN services, or should they stick with private circuits and MPLS-based IP-VPN services?

Certain factors have made this dilemma even more acute. Without doubt, demand for cost-effective, high-bandwidth business connections over large geographical areas is ever-increasing. With the growing sophistication of services such as Telepresence, the need to move larger encrypted files faster than ever, and the convergence of voice and data onto a single network, severe pressure has been placed on the bandwidth of enterprise data networks.

Additionally, at a time when capital and operating budgets are under constraints, enterprises have found it difficult to balance the need for increased infrastructure expenditure to ease this network congestion, with the need to reduce overall costs.

In many ways, Ethernet WAN could be the technology to relieve these concerns. With its support for higher bandwidths, and its ability to reduce total cost of ownership without additional infrastructure requirements, it promises both to improve the efficiency of a company's network and to help the bottom line.

Ethernet offers granular speed options, which makes the technology both cost-efficient and scalable, with companies able to pay for what they need whilst having the flexibility to increase bandwidth as business needs demand. On top of this, Ethernet could help to reduce running costs in comparison with the MPLS technologies widely used by enterprises: up to 20 per cent on a like-for-like, per-Mbit/s basis.

Despite all these clear advantages, though, why haven't the Ethernet services offered up until now proved more successful? Firstly, the idea of using Ethernet "point-to-point" or "any-to-any" services across a shared infrastructure has left businesses, especially financial institutions, concerned about the implications of sending sensitive data over a shared IP infrastructure.

Secondly, and perhaps most importantly, the services have had severe limitations when it comes to scale, both in terms of the number of sites and geographical reach. The vast majority of Ethernet WAN options offered to enterprises have been Metro services, restricted within one city. This has meant that companies have been able to enjoy the benefits of Ethernet WAN over small areas, but multinational corporations looking to connect branches and offices internationally have been left with extremely limited options.

Ovum recently forecast that the market for Ethernet WAN services is set to grow in value to US$31bn in 2012 from US$14bn in 2008, suggesting that more valuable Ethernet WAN services are now being offered to enterprises. This is no doubt partly thanks to security concerns being allayed, as enterprises have been assured that the availability, integrity and confidentiality of business-critical data over WANs is guaranteed.

Equally important however, the market is set to boom because the geographical reach of the services being offered has greatly expanded, opening up scope for connecting international offices thousands of miles from each other over Ethernet. In 2008, 71% of the entire Ethernet WAN market was comprised of Metro services, but the technology demonstrates its greatest potential over a truly international scale.

European companies looking to expand eastwards by opening up offices in the emerging markets of Asia for example, now have an increasing range of options. Thanks to Ethernet WAN, they are now able to connect offices globally, from Hong Kong, Malaysia, and Indonesia, to central offices in the UK or United States.

It is crucial that service providers continue to expand the geographical range of Ethernet WAN services to further international locations such as these. If this trend continues, companies will finally be able to reap all the benefits of the LAN on a truly global scale. Finally, after all its promise, Ethernet WAN may be about to come to fruition.

About the author:
Martin Bishop is Head of Global WAN Services, Telstra International, EMEA
The opinions expressed in this article are his and not necessarily the opinions of Telstra International

Figures taken from the Ovum Ethernet Market report (Forecast: Enterprise Ethernet services, global), 15th October 2007

As we enter the new decade battle lines are being firmly drawn. Amichai Shulman, Imperva's Chief Technology Officer, advises application owners to get their act together and tackle five key trends head on

1. The Industrialisation of Hacking
A clear division of roles is developing within the hacking community, forming a supply chain:

  • Botnet growers / cultivators whose sole concern is maintaining and increasing botnet communities
  • Attackers who purchase botnets for attacks aimed at extracting sensitive information (or other more specialised tasks)
  • Cyber criminals who acquire sensitive information for the sole purpose of committing fraudulent transactions

As with any industrialisation process, automation is the key factor for success. The proactive search for potential victims today relies on search engine bots rather than random scanning of the network. Massive attack campaigns rely on zombies sending a predefined set of attack vectors to a list of designated victims. Attack coordination is done through servers that host a list of commands and targets. SQL Injection, "Remote File Include" and other application-level attacks, once considered cutting-edge techniques manually applied by savvy hackers, are now bundled into software tools available for download and use by the new breed of industrial hackers. Search engines are becoming a vital piece of every attack campaign, from the search for potential victims to the promotion of infected pages, and even as a vehicle for launching the attack vectors.
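
As an illustration of the attack class described above (the table, data and injection string are invented for the example), compare a naively concatenated SQL query with a parameterised one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # a classic automated injection probe

# Vulnerable: attacker-controlled input is concatenated straight into the SQL,
# so the WHERE clause is always true and every row is returned.
leaked = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safer: a parameterised query treats the input as data, never as SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print("concatenated query returned:", leaked)  # [('alice', 's3cret')]
print("parameterised query returned:", safe)   # []
```

It is exactly this kind of trivially repeatable probe that the automated attack tools bundle and fire at thousands of sites at once.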

Imperva tracked and analysed a compromise that affected hundreds of servers, injecting malicious code into web pages. These pages were cross-referenced with keywords that scored highly in Google's search engine, generating traffic and thus creating drive-by attacks.

Organisations must realise that this growing trend leaves no web application out of reach for hackers. Attack campaigns are constantly launched not only against high-profile applications but against any available target. An application may be attacked for the value of the information it stores or for the purpose of turning it into yet another attack platform. Protecting web applications using application-level security solutions will become a must for larger and smaller organisations alike.
 
2. A Move from Application to Data Security
The effectiveness of network-layer attacks has decreased dramatically over the past decade, largely due to better network-layer defences. This gave rise to application-level attacks such as SQL Injection, Cross-Site Scripting and Cross-Site Request Forgery. As these are gradually addressed by the use of web application firewalls, attackers will turn their attention to more sophisticated attacks, either from the outside (business logic attacks) or from the inside (direct attacks against the database). Together with the fast growth in the number of applications that access enterprise data pools, these will drive the evolution of data-centric security.

While organisations invest in protecting their major applications using application-level tools, many of the smaller applications are still unprotected. Additionally, we see no apparent decrease in internal threats.

It is becoming apparent to organisations that controls must be put not only around the applications accessing the data but also around the data itself. This holds true for data in its structured format within relational databases as well as for unstructured data stored in files on organisational file servers.

To protect these vital assets, organisations must undergo a complete change of mindset, focusing on protecting data at its source regardless of the application accessing it, if necessary utilising a combination of technologies such as a database firewall, data and file activity monitoring, and the next generation of DLP products.
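
As a sketch of the data-activity-monitoring idea, and not any particular vendor's product, the following wraps a database connection and logs every statement that touches a table designated as sensitive; the table names and user are assumed:

```python
import logging
import sqlite3

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

SENSITIVE_TABLES = {"customers", "payroll"}  # assumed names, for illustration


class MonitoredConnection:
    """Thin wrapper that logs any statement touching a sensitive table."""

    def __init__(self, conn, user):
        self._conn = conn
        self._user = user

    def execute(self, sql, params=()):
        if any(table in sql.lower() for table in SENSITIVE_TABLES):
            logging.info("user=%s sensitive-data access: %s", self._user, sql.strip())
        return self._conn.execute(sql, params)


db = MonitoredConnection(sqlite3.connect(":memory:"), user="app_service")
db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
db.execute("SELECT name FROM customers WHERE id = ?", (42,))
```

The point is that the control sits with the data, so it applies whichever application, large or small, happens to be doing the asking.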
 
3. Mainstream Social Networks and Associated Applications
Large populations not previously exposed to online attackers can now be targeted by massive campaigns. Elderly people as well as younger children, people who did not grow up with an inherent distrust of web content, may find it very difficult to distinguish between messages of true social nature and widespread attack campaigns.

Imperva's team was able to demonstrate that specific ads carrying attack vectors could be presented to named individuals at an attacker's will. This in turn allows attackers to easily get a foothold inside specific organisations by targeting individuals within them. Much like searching through the Google search engine for potential target applications, attackers will scan social networks (using automated tools) for susceptible individuals, further increasing the effectiveness of their attack campaigns.

As social platforms grow at an exponential rate I find this problem to be one of the most challenging for us in the next decade. An entire set of tools that would allow us to evaluate and express personal trust in this virtual society is yet to be developed and put to use by platform owners and consumers. In the meantime, end users should rely on frequently updated anti-malware solutions as well as automatic security updates for their workstations. Organisations, which by now have given up on restricting the use of social platforms on their enterprise networks, should emphasise the use of centrally managed anti-malware protection and secure surfing gateways.
 
4. Password Grabbing/Password Stealing Attacks
As stolen personal information becomes more widely available, the price it commands on the black market is falling, forcing attackers to seek more profitable data. To this end, the last few months have seen hackers target application credentials. Application credentials hold more value for certain types of attackers as they can be further used in automated schemes; an attack that makes use of valid credentials for an online banking system, for example, can be fully automated. Of particular interest to attackers are credentials for webmail applications, as these may allow further credential sets to be compromised through the password recovery feature of applications, which usually sends the credentials of an online application to an email account designated by the owner upon registration. It is also worth noting that it is not uncommon for people to use the same username and password for their Facebook account, their Twitter account and their airline frequent-flyer account.

Attackers use many different techniques for obtaining application credentials. These include phishing campaigns, Trojans and keyloggers on the consumer side, and SQL injection, directory traversal and sniffers on the application end. Earlier this year the media became aware of a partial list of Hotmail user credentials being traded on the net; the list was probably obtained through keyloggers.
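
One common server-side mitigation for this kind of credential theft, shown here purely as an illustration, is to store only salted, deliberately slow password hashes, so that a stolen database does not directly yield reusable credentials:

```python
import hashlib
import os

ITERATIONS = 200_000  # deliberately slow to frustrate offline cracking


def hash_password(password, salt=None):
    """Return (salt, digest); store these instead of the password itself."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest


def verify(password, salt, digest):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS) == digest


salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("guess", salt, digest))                          # False
```

It does nothing, of course, against keyloggers or phishing on the consumer side, which is why credentials stolen at the endpoint remain so valuable.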
 
5. Transition from Reactive to Proactive Security
To date, security has been largely reactive: waiting for a vulnerability to be disclosed, creating a signature (or other security rule), then cross-referencing requests against these attack methods regardless of their context in time or source. A lot of resources are invested in distinguishing "bad" requests from "good" requests based on request content alone - a chore that is becoming more and more difficult due to advanced evasion techniques and sophisticated attack schemes.

Rather than waiting to be attacked, security teams must start to proactively look for attacker activity as it is being initialised over the network, identifying dangerous sources or malicious activity before it gets to attack a protected server and even establishing a defence against attacks before they become publicly disclosed.

We are seeing different projects world-wide approaching this problem from different angles. Projects like DShield (www.dshield.org) and ShadowServer (www.shadowserver.org), commercial companies like Cyveillance, and others all try to create networks of cyber-intelligence sensors. They gather information that can be used to create a real-time threat map from which actionable security policies can be created automatically in real time. Our own research activities in this domain show a lot of interesting data. Every day we can detect a list of applications that are soon to be targeted by attackers, new attack vectors show up at an early stage before they are used on a massive scale through botnets, and recently active sources of attack are revealed.
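
In code terms, consuming such intelligence can start as simply as checking inbound addresses against a locally maintained blocklist; the sketch below assumes a plain one-address-per-line feed file, which is not necessarily the format any of the named projects actually publish:

```python
from ipaddress import ip_address


def load_blocklist(path):
    """One address per line, '#' for comments - an assumed local feed format."""
    with open(path) as fh:
        return {line.strip() for line in fh if line.strip() and not line.startswith("#")}


def is_allowed(client_ip, blocklist):
    ip_address(client_ip)  # raises ValueError on malformed input
    return client_ip not in blocklist


# blocklist = load_blocklist("threat_feed.txt")   # hypothetical feed file
blocklist = {"203.0.113.17", "198.51.100.9"}       # documentation-range example addresses
print(is_allowed("203.0.113.17", blocklist))       # False: dropped before it reaches the app
print(is_allowed("192.0.2.1", blocklist))          # True
```

The hard part, and where the real value lies, is keeping that feed fresh and trustworthy in real time rather than the lookup itself.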

The online security community is in the early stages of digesting this information into actionable items. The future will reveal more offerings around IP reputation, early warning systems and other proactive tools. It will be in the hands of application owners and web application solution vendors to integrate with those tools and provide a proactive security suite for applications.

So, are mobile payments safer than cards? And how can customers, regulators, financial institutions or mobile operators know for sure? Dave Birch and Neil McEvoy examine market developments in m-payment security. Read on to discover their answers...

If mobile phones are going to be used as credit cards with £10,000 credit limits, or annual season tickets worth £3,000 on the rail networks, or as corporate identity cards or log in devices for bank accounts, or for any of a myriad of other transactional purposes, then the service providers, their customers, and their regulators will need to be confident about the security of the systems. We think that if stakeholders carry out methodical risk analysis and implement appropriate countermeasures, they will determine not only that mobile payments can be made safe, but that they would be crazy to carry on using cards!

Mobile payments, mobile ticketing and mobile transactions of all kinds will be central to our lives in the future. In fact for most people, in most of the world, most of the time, mobile will be the only electronic transaction channel, not just the most popular. With 4.6 billion mobile phones in use, with 1.2 billion mobile web users in the world, with 3 billion active SMS users supporting a $100 billion business (yes, as Tomi Ahonen is fond of pointing out, the SMS business is bigger than music, movies and videogames combined) and with the mobile infrastructure continuing to spread, it is not much of a prediction to say that mobile phones will become the world's foremost transaction platform as well as its foremost communications platform.

The transactions may be local, via barcodes, Bluetooth or proximity contactless interfaces (such as NFC), or remote, via SMS, GPRS and 3G IP, and there are many parts of the world where these transactions, both person-to-person and person-to-business, are already commonplace. In Japan, half of all mobile phones now have a mobile wallet and proximity interface and around a sixth of mobile phone subscribers use the proximity interface, mainly for transit ticketing. In Korea, the T-Cash (mobile proximity purse) scheme already has hundreds of thousands of users. In Kenya, the transaction turnover of the M-PESA mobile money transfer system is already more than a tenth of GDP. In France, all of the mobile operators, the main banks and the payment schemes have co-operated to develop national specifications and are starting a national m-payments roll-out in Nice. In the UK, a mobile proximity scheme will be launched by Orange, Barclaycard and MasterCard in 2010.

Exciting times. With new products and services from handset manufacturers (Nokia Money), operators (Orange Money), banks (Wing) and specialists (Monetise) coming thick and fast, the global market for mobile payments alone is forecast at more than $5 billion in Western Europe (Frost & Sullivan, 11/09) and more than $100 billion worldwide (Research & Markets, 9/09) in 2013. But for many people, across all markets, the first response to these new systems is the same: what about security? This is a perfectly reasonable response: first of all, mobile payments are new and both customers and providers are naturally unsure about new transaction channels; secondly, the "headline" reporting of security can be somewhat misleading.

Here are a few real headlines taken from newspapers and magazines:
"Mobile wallets may be convenient but they also carry a degree of risk"
"Investigators replicate Nokia online banking hack"
"Report blasts holes in contactless security claims" 
"Cracked it" (concerning contactless passports)
"Hackers start poking holes in NFC"
"Microscope-wielding boffins crack Tube smartcard"

Well, mobile payment sounds like a combination of risky mobile platform, plus risky smartcard technology, plus risky contactless interface! Surely you would have to be crazy to consider implementing such a system! What are stakeholders to make of this?

If you are a consumer, which you undoubtedly are, is your season ticket more likely to be stolen if it is on a card in your wallet or in software in your smartphone?

If you are a shopkeeper, are you more or less likely to be paid when someone waves a mobile proximity phone over your point-of-sale (POS) terminal or when they put their card in the slot and punch in a PIN?

If you are a law enforcement officer, are you more or less likely to catch a criminal (or terrorist) who is using stolen credit cards or stolen mobile phones?

If you are a regulator, should you be more or less worried about the systemic failure (for technology reasons, not for business reasons) of a handset-based payment service or a web-based payment service?

These are important questions to answer. Fortunately, there is a well-established mechanism for doing so: it is called risk analysis. The goal of risk analysis is to support good decision making: at Consult Hyperion, we use a particular method known as Structured Risk Analysis (SRA) that we have refined over the years to analyse transactional systems thoroughly, but all risk analysis methods share some basic concepts. One of these is "vulnerability". Vulnerability is a characteristic of the infrastructure, not the business. Thus, if we move a well-understood and well risk-managed application from one infrastructure to another, we may introduce new risks into the business via new vulnerabilities.

Consider the example of taking the EMV application (the software that provides "chip and PIN" functions on bank-issued payment cards) and installing it in what is called the secure element on a mobile phone. This secure element may be a special chip in the handset, it may be part of the SIM card or it may be in an SD card or some other removable device. But in any of these cases, the bank issuer's whole supply chain has changed and so the risks are different. When your bank orders your debit card, it orders it from a supplier that has well-established (and audited) procedures for obtaining a chip, embedding the chip in a plastic card, loading the software on to the chip, and testing the hardware and software. When you order a debit card for your phone, then without some special measures the bank will have absolutely no idea what chip your phone has as its secure element, how the software is to be loaded into the chip or what other software is already there. From the bank's point of view, the chip is certainly an "element", but it may not be "secure".

This means, of course, that the risk analysis for a mobile payment application is different from the risk analysis for a traditional payment application. If we put to one side the generic vulnerabilities of GSM and EMV, which are well-known and well-understood, then it is interesting to reflect on the new vulnerabilities.

Last year, the European Network & Information Security Agency (ENISA) published a paper on the security of mobile payments that included a useful classification of these mobile vulnerabilities, dividing them into (broadly speaking) those relating to the secure element, those relating to the handset and those relating to the NFC interface. We have found this to be a very practical breakdown. The vulnerabilities of the secure element are to a great extent the generic vulnerabilities of smart cards and therefore straightforward to feed into the risk analysis process, but the other categories require more thought.
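
As a purely generic illustration of how such a classification can feed a risk analysis (not ENISA's paper or the SRA method itself), a risk register reduces to simple arithmetic: score each vulnerability by assumed likelihood and impact, then rank the results:

```python
# Generic risk-register arithmetic: rank each vulnerability category by
# assumed likelihood and impact (the 1-5 scores are invented for the example).

risks = [
    ("handset malware captures PIN",           3, 4),
    ("secure element supply chain unverified", 2, 5),
    ("NFC interface eavesdropping",            2, 3),
]

for threat, likelihood, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{threat:40s} likelihood={likelihood} impact={impact} score={likelihood * impact}")
```

The value of the exercise is less in the numbers than in forcing each category of vulnerability to be considered, and countered, explicitly.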

The mobile handset was never designed for transactions, so it is hardly surprising that there are many issues with the current generation that could turn into major problems if not handled properly. For example: suppose that your mobile phone were to contain a "Trojan Horse" that captured the PINs or passwords that you are using. This is a genuine issue, because the keypads in mobile phones are not secure (rather like the keypads in POS terminals), and it will take another generation of handset design for companies to introduce trusted processing to mobile phones so that, to build on this example, payment applications can lock the keypad.

There are pros as well as cons, naturally. The network-connected nature of a mobile device means that the payment mechanisms in the phone can be "shut down" if the phone is lost or stolen, the payment application's parameters can be changed on the fly and new applications (and, indeed, security updates or patches) can be added almost instantly.

There may be additional "cross channel" vulnerabilities in handsets because of the way they are designed and implemented. It may be that, for example, the Bluetooth interface could be exploited to learn something about the data going to the screen, or the NFC interface exploited to learn something about the software running on the handset. This is why Consult Hyperion began funding PhD research into mobile cross-channel vulnerabilities at the University of Surrey this year. The findings of this research will, we are sure, bring more security to the mobile transaction platform, and the industry as a whole will benefit.

In the last category, the vulnerabilities of the NFC interface, we have a great head start because the vulnerabilities of short-range 13.56MHz contactless card systems have been studied in great detail. Like any wireless interface, it may be vulnerable to eavesdropping and so forth but there are well-understood countermeasures (such as encryption) to minimise risks. There are lessons that have already been learned from the large scale and widespread use of contactless cards in mass transit (Transport for London alone has issued more than 20 million Oyster cards) and payments (Barclays has issued three million contactless cards and has committed to add contactless to all of its UK cards).
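
As a hedged illustration of the kind of encryption countermeasure mentioned (real contactless schemes use their own EMV cryptography rather than this recipe), the sketch below uses the Python cryptography library's Fernet construction on a dummy payload to show why an eavesdropped exchange is useless without the key:

```python
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in a real scheme the key lives in the secure element / issuer systems
cipher = Fernet(key)

payload = b"PAN=4111111111111111;EXP=1212"   # dummy card data, illustration only
token = cipher.encrypt(payload)              # what an eavesdropper could capture over the air

assert cipher.decrypt(token) == payload      # only a holder of the key recovers the payload
print(len(token), "bytes on the air, unreadable without the key")
```

The design point is that the radio link itself does not need to be trusted; the cryptography, and the key management behind it, carries the security.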

So, are mobile payments safe or not? It's not a "yes" or "no" question, as we hope this discussion has shown. Let's ask another question instead: can we make the risks of mobile transactions manageable? Yes. In fact, in the particular case of mobile proximity payments, we happen to believe that there is more security overall in using a mobile than in using a card payment. For a start, people are more likely to notice if their phone is missing than if their credit card is. Research seems to show that on average it takes a few hours, even almost a day, for someone to notice and then cancel a credit card when they lose it, whereas it takes just eight minutes to call a phone operator and report a phone missing. Add to that the fact that it is easy to determine the location of a phone, and even to communicate with it, which greatly changes the risk and countermeasure situation compared with cards. In fact, we think the question should be the other way around: from a security point of view, does it really make sense to carry on with little plastic cards, magnetic stripes and passwords?

About the authors:
Dave Birch and Neil McEvoy are Co-Founders of independent technology consultants Consult Hyperion

It's been a dream, and then a disappointing reality, but conferencing should be about collaboration, rather than forcing technology upon the market, says Tim Duffy

Video conferencing is such an obvious solution to shifting traffic from the roads and airlines and making business faster and more agile. The amount of money and energy wasted on futile business trips is enormous, and the consequential environmental damage is enormous too.

Widespread adoption of video conferencing has been a dream of the communications and PC industry since it all became possible, and in fact before that, with the launch of AT&T's prototype PicturePhone in 1956. Following years of trials, the first services were launched by AT&T in 1970 with predictions of a million users within 10 years! BT launched its Confravision service in the early 70s, linking custom-built studios in London, Manchester, Birmingham, Bristol and Glasgow. Similar services were launched in Germany, all utilising full-bandwidth analogue video circuits delivering standard-resolution TV pictures of excellent quality.

These custom studios were state of the art at the time, but clearly very expensive to operate using full-bandwidth video. I was a regular user and this is where I gained my first experience of video conferencing and became an early convert. Throughout the early 80s the digital revolution was gathering pace, and the European PTTs recognised early that without common standards video conferencing could never gain widespread adoption. After years of collaborative R&D the ITU (the CCITT at that time) ratified the first European standard for digital video conferencing, H.120, which standardised the transmission of video conferencing signals at 2 Mbit/s. Switched networks and digital processing were developing quickly, and it soon became clear that transmission over switched 64 kbit/s and ISDN was possible, so new collaborative standards were developed, leading to the first global standard, H.320, in the late 80s. Since that time the standards have evolved into the full suite of AV standards we have today, supporting a variety of coding algorithms and network types from mobile telephony through to high-end Telepresence systems utilising IP networks with QoS.
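
Some rough arithmetic with assumed figures shows why those compression standards were essential: even a modest uncompressed digital video stream needs far more than the 2 Mbit/s of H.120, let alone a 64 kbit/s ISDN channel.

```python
# Assumed figures: CIF resolution (352 x 288), 25 frames/s, 4:2:0 sampling
# (12 bits per pixel) for the uncompressed source.

uncompressed_bps = 352 * 288 * 12 * 25   # roughly 30.4 Mbit/s

for name, link_bps in [("H.120 at 2 Mbit/s", 2_000_000), ("single 64 kbit/s ISDN channel", 64_000)]:
    print(f"{name}: needs roughly {uncompressed_bps / link_bps:.0f}:1 compression")
```

Squeezing a usable picture into a single switched channel meant compression ratios in the hundreds, which is exactly the engineering problem the collaborative standards work solved.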

The technology has evolved at a rapid rate and the quality possible today is truly stunning, so why has this market not exploded?

The reality today, however, is patchy uptake and utilisation, and an industry that has stalled despite many well-intentioned attempts to drive video into the mass market.

There is a very long list of video initiatives that have fallen by the wayside in every corner of the world, from AT&T's Picturephone service, BT's Confravision, Deutsche Telekom's BigFon service, BT and IBM's push into mass-produced PC video phones in the mid-90s, Amstrad's video phone, Marconi's video phone, Intel's ProShare PC technology that Andy Grove championed, and 3's massive push into mobile video telephony, to many other high-profile market initiatives that came to very little. We are seeing the same concept again now with Cisco's push into high-end Telepresence and it will be interesting to see how this evolves.
The widespread adoption of video is always just around the corner, and despite the exceptional technologies and networks available today, and the potential productivity gains, the market is still struggling to hit mainstream.

Looking back on the technology developments in video over the past 20 years, the industry has been transformed, from grainy monochrome to HDTV quality with HiFi sound fidelity. Video conferencing today is almost perfect and utterly transparent so how could it not become the dominant way that we all communicate?

Having been around the industry for a long period I have come to realise that the adoption of video conferencing is not in any way a technology or network issue. Although that may seem obvious, it is lost on the companies building the equipment and networks, who believe that if you build a better mousetrap, they will come. But will they? The history of the video conferencing industry outlined above tells us the opposite is true.

We humans are all basically programmed to find the path of least resistance, and this is as true in communications as in other areas of life. We make choices every day in how we communicate: we chat by the coffee machine, we send an email, we pick up the phone, we send a text message or we jump on a train or plane. We do what we need to do and rarely any more. Hence my proposition that video conferencing will never become mainstream and will never replace the bulk of real-time communications, the phone and the web, quite simply because these communication methods are far easier and require no special equipment.

In 2001, for example, the equipment market for video conferencing end points was around 21,000 units per quarter. In 2005 this had increased to around 34,000 units per quarter, reaching around 44,000 units in 2009. Telepresence systems accounted for only a few per cent. Cisco Telepresence systems have sold fewer than 3,000 units in total according to Cisco's announced data. These statistics hardly demonstrate an exploding market, given that many of the shipments will be replacing the older systems sold in previous years with the latest HD technologies.

So what is happening in collaboration?
Collaboration and travel reduction have never been higher on everyone's agenda, but the answer lies in utilising technologies and communications tools that are appropriate and sufficient, and that satisfy the needs of the organisation. In comparison with the video conferencing market, the audio conferencing market has done rather better. This can be illustrated by the growth of audio conferencing traffic on a global basis: during the past 10 years it has expanded from an estimated 3 billion minutes per quarter in 2002 to a market now touching 15 billion minutes per quarter, an annual growth of around 25-30% in volume terms. If you couple this with the very high growth rates now being experienced in web conferencing (sharing information and documents in real time), it is clear to see where the real application areas are. (Data courtesy of Wainhouse Research, 2009)

Marc Beattie, Managing Partner at Wainhouse Research, is quoted as saying: "With the exception of mobile services, no other telecommunications service has achieved the sustained growth, popularity, and global adoption as audio conferencing. While new markets in Asia, Latin America, and Eastern Europe develop, established markets such as the UK and US continue to realise significant growth. Since 2001 the global market has realised 25% compound annual growth (CAGR) while the adoption of new complementary services, such as web conferencing, continues to push further use."

Video may be perfect in this age but the sheer convenience and simplicity of voice coupled with a good web conferencing tool can solve many collaboration issues, and the market data indicates clearly where the growth is.

I would argue that the key to effective collaboration in any organisation is an appropriate range of solutions: in-house intranets, simple screen sharing, and web conferencing coupled with audio conferencing, and of course video for that small section of users who need the hammer to crack the communications nut and don't have access to the corporate jet!

If conferencing and collaboration are to have a real impact on the way we do business and shift our CO2-emitting business practices into the network, users are going to need complete simplicity and ease of use. Power users will of course need the latest all-embracing unified solutions, but the rest of the market, and I would argue the bulk of it, needs access to ultra-simple, reliable solutions that use their phone, their mobile and their PC.

What of the future? It is clear that the advent of HD video is transforming the visual experience, so why are we happy to accept 3.4 kHz audio bandwidth when we could have FM quality? I see one of the biggest growth areas of the future being hi-fi audio conferencing, combined with the ability to add graphics and collaborate in a fast and effective way.

Our job as service providers is to ensure that we are developing and offering services that hit the sweet spot of the market: make it simple and make it reliable. If we continue to do this, there is real hope that organisations will learn, adopt and become converts to this very important application area.

About the author:
Tim Duffy is CEO, MeetingZone

The latest advances in mobile imaging software have helped camera phones to rise far beyond their modest point-and-shoot beginnings, says Scalado's Fadi Abbas

It's hard to believe, but the camera phone as we know it today has only been around since the mid-1990s. And it wasn't until 1997 that Philippe Kahn instantly shared his pictures from the maternity ward - with more than 2000 family, friends and associates from all over the world - when his daughter Sophie was born. More than any other moment in the camera phone's history thus far, this simple story marked a turning point for the integration of a digital camera with a mobile phone, and the dawn of a whole new era of instant visual communications.

However, back when digital cameras were first integrated with mobile phones, they were considered by many to be something of an afterthought, perhaps even a gimmick.  With phone manufacturers desperately trying to differentiate themselves in a highly competitive market, it was tempting to pack as much as possible into each handset - without much concern for quality or usability. 

This model, however, has changed rapidly over the years, with end-users becoming more tech savvy, and with competition in this area becoming even more fierce. As a result, the latest generation of camera phones are now grabbing the headlines as much for their high-end camera functionality as for the phones themselves.

In fact, by 2003, more camera phones were being sold worldwide than stand-alone digital cameras, and in 2004, Nokia became the world's best-selling digital camera brand. By 2006, half of the world's mobile phones had a built-in camera, and in 2008 Nokia sold more camera phones than Kodak could match with its film-based cameras, making Nokia the biggest manufacturer of any kind of camera, anywhere. And to give just one final staggering statistic:  at the end of 2008, the world installed base of camera phones was 1.9 billion.

All of these figures lead us to the same conclusion: the camera phone market is enormous, and getting bigger all the time. In order to fully realise the potential of this market, innovators working in this space are concentrating on new ways to make imaging on mobile phones more efficient, by bringing higher usability for end-users, and by cutting hardware costs for handset manufacturers who are increasingly having to compete with products which can offer phone, email, camera, and music functions in a single device.

As such, the latest mobile imaging Software Development Kits (SDKs) are now able to provide handset manufacturers with CPU- and memory-efficient software solutions that can drastically decrease image processing times when capturing, viewing, and editing large images. The latest innovations made possible by this focus on mobile imaging software mean that end-users can now create high quality, high-resolution multi-megapixel shots automatically, with an ordinary camera phone.

Likewise, photographers using a camera phone can now instantly capture multi-megapixel images without any "shutter lag", thereby freezing the exact moment of capture. Until now, only advanced Digital Still Cameras (DSCs) and Single Lens Reflex (SLR) cameras have been capable of managing the delay between pressing the capture button and actually saving the captured image. As a result, shutter lag has typically been one of the biggest technical challenges facing camera phones, especially when photographing moving objects. However, with the latest 'zero shutter lag' technology shipping very soon on many camera phones, users can be sure that the image which they see in the viewfinder is the image that they capture. Users can then instantly zoom into the resulting JPEG images to review the details of the image in real time.

Additionally, in another innovative integration of hardware and software, and one of the most impressive imaging breakthroughs for the camera phone to date, software can continually store the most recent images seen through the viewfinder, so that once the user pushes the capture button, the application saves images from both before and after capture. With this advancement, full-resolution images can be captured from a point before the user hits the capture button, even as the viewfinder continues to show live images in real-time. With this unique feature, mobile users are then able to choose from a number of captured images, and even use "time travel" to move backward and forward in time in order to find the exact moment they want to capture.
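
A minimal sketch of how such a pre-capture buffer might work, illustrative only and not Scalado's actual implementation, is to keep a small ring of the most recent viewfinder frames and, on shutter press, combine them with a few frames captured afterwards:

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Frame:
    timestamp_ms: int
    pixels: bytes           # stand-in for real sensor data


PRE_CAPTURE = 5             # frames kept from before the button press (assumed value)
POST_CAPTURE = 5            # frames collected after it

ring = deque(maxlen=PRE_CAPTURE)   # oldest frames drop off automatically


def on_viewfinder_frame(frame):
    """Called for every frame the viewfinder shows; keeps only the newest few."""
    ring.append(frame)


def on_shutter_pressed(live_frames):
    """Combine pre-press frames with a few post-press ones for 'time travel'."""
    return list(ring) + [next(live_frames) for _ in range(POST_CAPTURE)]


# Simulate a sensor stream with a shutter press after 20 frames.
stream = (Frame(t, b"") for t in range(100))
for _ in range(20):
    on_viewfinder_frame(next(stream))
print([f.timestamp_ms for f in on_shutter_pressed(stream)])  # frames 15-24
```

The real engineering challenge is doing this with full-resolution JPEGs at sensor frame rates within a handset's memory budget, which is where the specialised SDKs earn their keep.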

By providing instant random access to the captured JPEG, mobile imaging software can deliver unprecedented JPEG handling performance, as well as faster image browsing in the phone's photo album.  As a result, there are now mobile imaging solutions that can provide 20 frames/second instant full-resolution image handling, instant zoom/pan at the moment the image is captured, and burst-mode image capture. As with shutter lag, burst-mode image capture - the rapid capture of multiple images - has, until now been associated only with expensive, high-end cameras.

Indeed, traditional features are getting more sophisticated, yet are being made more accessible at lower price points. For example, it is not uncommon to find an 8.1 MP camera along with some impressive photo features like face detection, video recording capabilities, Smart Contrast (which balances light and dark areas), image stabilisation, and a Xenon flash at an affordable price. At the same time, built-in accelerometers are making sure that photos are automatically changed to landscape mode when the user turns the camera, whilst A-GPS technology allows photos to be "geo-tagged" with geographical identification metadata.

The demand for mobile imaging software solutions that are powerful, scalable and modular - and suitable for any camera phone - is growing exponentially. The response from the industry is to make imaging on mobile phones more efficient - bringing higher usability for end-users and cutting hardware costs for device manufacturers. By accessing a number of unique and patented software technologies, manufacturers are finding new and innovative ways to solve critical performance issues with less memory and CPU requirements.

Enhancements like these really are the way of the future.  Since the launch of Apple's iPhone, the mobile world has begun to focus its attention on improving user experience, but without sacrificing the latest features and high specifications. However, despite all the positive reviews and feedback about the iPhone's usability, the phone's camera fell short for many. 

Increasingly, software developers working in this space believe that end users should have it all: a combination of high megapixels, amazing speed, and powerful imaging software underpinned by an unrivalled user experience. After all, a positive user experience is absolutely essential if a mobile handset is to be successful in this competitive market. Indeed, according to a survey conducted by InfoTrends/CAP Ventures, camera phones are expected to account for 89% of all mobile phone handsets shipped by the end of the year.

The introduction of a wide range of sought-after features - such as mobile search and multi-touch screens - is becoming even more important than the technical specifications that manufacturers have traditionally used to compete. As a result, handset manufacturers will continue to deliver high-specification phones with intuitive functionality and optimum usability in order to complement their customers' personalities and lifestyles. In other words, manufacturers are focusing on technology that fuses seamlessly into people's lives.

The latest study from InfoTrends has revealed that consumers place a high value on having the ability to take photos with their mobile phones. In fact, over half of respondents cited a camera for taking photos as a vital feature on their mobile phone, second only to text messaging capabilities and ranking significantly above all other features included in the survey.

This same study also aimed to determine what factors encourage consumers to use the camera on their phones. The survey results suggest that consumers with higher resolution camera phones capture, edit, upload, and print more of their camera phone photos. This is in part because respondents with higher resolution camera phones are more likely to capture photos due to their increased image quality, and it is also likely that those with a greater interest in photography are more likely to deliberately purchase a camera phone with higher megapixels. Regardless, as time goes on and specifications continue to improve, the availability of higher quality camera phones will likely provide a much needed boost to the entire mobile imaging industry.

It's been around for a few years, but VPLS may be the technology that makes Global Ethernet a truly viable option for carriers, and is set for further adoption in 2010, says John Dumbleton

Who would have thought that a networking technology which many predicted would die in the 1990s would evolve into the hot next-generation network, growing at an annual rate of 20% in 2009? Yet that's what is happening according to market research company IDC, which estimates the worldwide market for Carrier Ethernet services will grow to $17.5 billion by 2011.

The explosion in Ethernet roll-outs is in part being driven by demand for the transport of greater amounts of information between offices, due to increased remote collaboration across businesses. This change is feeding the growth of real time applications (such as video conferencing and VoIP) and the sharing of large files, none of which easily move across traditional WANs. The qualities of Ethernet allow for the high bandwidth these modern business applications require, while the protocol eliminates the need to have separate networks for different traffic profiles. Ethernet is therefore an ideal solution for companies wishing to migrate to a converged IP environment, or for enhancing application availability to their global network of offices.

For many network providers, setting up a global Ethernet network is not easy or cheap. Delivering Ethernet services across a kludge of technologies creates a complex and inefficient offering. Fortunately, network providers without the baggage of legacy networks are able to offer customers a cost-effective, simplified global network that guarantees quality of service (QoS) end-to-end.

Pure Ethernet uses existing Layer 1 networks, the majority of which are fibre. The bandwidth capabilities of fibre have increased dramatically since 2000, when light was split into multiple colours to allow the transmission of many times more data down a single cable. Cable companies in the UK are showing steady growth, and although fibre is still the physical access method of choice, Ethernet over copper and microwave will become a more viable and cost effective option, sustaining future growth.

Older networking technologies, such as frame relay, were rolled out in the 1990s to support point-to-point and hub-and-spoke networks that interconnect LANs running less-demanding applications. Frame relay was initially designed to handle LAN traffic that was bursty in nature and is suited to processing frames of different length. Enterprises could effectively support these older applications over T1 (1.544 Mbps) and sub-T1 hub-and-spoke frame relay networks. Frame relay is not efficient, however, at carrying real time traffic such as voice and video because the hubs delay the connections between LANs.

Ethernet is seen by many as a superior option to frame relay, ATM and private line, all of which have been steadily losing ground to newer technologies such as IP, which are able to provide increased reliability, scalability and cost-effectiveness in transforming network solution offerings. IP-based networks are optimised to run in a point-to-multipoint topology, which enables fast connections between LANs and WANs to support real time applications and large data transmissions.  A global Ethernet service significantly reduces the complexity of network implementations, allowing the end-user to make use of the available bandwidth more effectively. This simplification is possible as direct interconnections to customers can be done using existing LAN equipment, without the need for additional WAN routers and CSU/DSUs.

An Ethernet network also offers significant cost savings by negating the need for special equipment and port adaptors, allowing companies to reduce the cost of rolling out their communications devices. Fully capable routers with high speed interfaces (T1/E1/DS3/OCN) and the ability to interface with any type of network connection are often much more expensive than Layer 3 Ethernet switches and interfaces, the standard devices used when deploying a global Ethernet solution. The equipment used within a global Ethernet network is easier to source, manage and replace, resulting in a lower total cost of ownership (TCO).

So how is global Ethernet best deployed? A growing number of observers believe that a network based on VPLS (virtual private LAN service) is the superior method of providing a global Ethernet service. Praised by many - and none more so than analyst house Frost & Sullivan - VPLS provides a scalable multi-point Ethernet VPN service. VPLS allows multiple Ethernet LANs at different sites to be connected together as if they were connected to the same Ethernet segment, effectively making all customer sites appear to be on the same LAN, benefiting from the same bandwidth and QoS. Since all customer routers in VPLS architectures are part of the same LAN and the service provider hand-off to the customer is always Ethernet, customers can maintain complete control over their Layer 3 while benefiting from a simplified IP addressing plan.

As early as 2002, we saw the potential market for VPLS as an alternative to frame relay, without the usual compromises in service quality, security and reliability. Our customers began asking about a network service that would extend the customers' LAN across their WAN, while providing fully-meshed Layer 2 multipoint connectivity. This led to MASERGY's roll-out of the first commercial VPLS service in June 2003. In June 2004, we launched a unique delivery service called Intelligent Transport that offers Ethernet on a serial connection to deliver Ethernet anywhere enterprises do business. The Intelligent Transport service uses an Intelligent Bridge to provide an Ethernet hand-off over a serial connection, which allows the delivery of multiple transport services (Public IP, Private IP and VPLS) over virtual local area networks (VLANs). Customers are given the flexibility of choosing a single service or multiple services over a single interface. When native Ethernet is unavailable at competitive prices, we simply deploy a serial connection and install the MASERGY Intelligent Bridge to provide Ethernet hand-offs to customers. This IP MPLS transport service is unique in being able to guarantee end-to-end QoS, agnostic of the access type in each country, while also allowing customers to put their private and public networks on the same circuit. Today this service continues its differentiation by providing dynamically allocated bandwidth across multiple service types (hierarchical QoS) for highly efficient bandwidth utilisation.

My observation is that global Ethernet adoption is following a pattern similar to that of earlier network technologies, and going forward I expect take-up in 2010 to move this technology from the early-adoption stage to the fast-follower stage. Furthermore, Ethernet's growing acceptance in the industry as the future of global networking lets businesses deploy it with the peace of mind that it is a genuinely future-proof solution, compatible with existing and emerging network technologies.

About the Author: John Dumbleton is UK Managing Director of MASERGY

Are telcos any good at understanding their customers and then responding to them?  Gordon Rawling presents research that provides the answers

The widespread adoption of broadband, coupled with the rise of social networks, has led to a dramatic shift in the traditional relationship between brands and their customers: consumers have never been so empowered. They now resist one-way dialogues in which brands broadcast messages to them and instead demand to manage their own relationships. A central part of these expectations is a consistent experience across all channels. Whether researching online before buying in-store, or browsing in-store before ordering from a contact centre, customers expect to be able to pick up where they left off. The dynamics of maintaining customer relationships have changed radically in recent years, and consumers now expect relationships to be managed on their terms. As a result, brands need to demonstrate that they are actively listening, responding and personalising their services to individual customers' wishes.

As technology companies, operators might be expected to be good at responding to customers' needs, particularly when large marketing budgets are being spent on driving more profitable customer relationships. We wanted to understand how much progress the telecoms industry had made in this direction and commissioned independent research to help us. The research questioned 46 senior customer management executives at operators in Western and Eastern Europe and the Middle East, along with 3,750 consumers.

Here, we outline the key findings and provide insight into how operators can use their existing infrastructure to deliver a consistent and compelling experience through the consumer's channel of choice.

Fragmented systems and inconsistent customer service
The research reveals that customer-facing teams in many telecommunications firms are operating in isolation, or at the very least out of sync. This lack of integration is highlighted by operators' inability to provide one service number for customers to address all their queries: two-thirds of operators (65 per cent) admitted that customers are unable to resolve queries by calling just one number. This, perhaps, is not surprising when we consider that just one in six operators (17 per cent) claim that all teams work from the same system and are directed to follow the same strategies for customer service, retention and recruitment across all channels.

With this lack of coordination between departments, providing a consistent brand and customer experience across retail, online and contact centre channels becomes extremely difficult. In order to serve the customer effectively across multiple channels, the whole business - and particularly sales, marketing and customer service departments - needs to be coordinated and structured around addressing the customers' very specific needs.

Organisations across all industries should view the customer from a total life-cycle perspective, rather than as a series of separate interactions with the sales, service or marketing teams. Operators will no doubt already have in place the technology to manage relationships with their customers through any channel of their choice, as they hold details of an individual's interactions with the business and billing history, perhaps with some demographic data as well. What is required is to make these systems work harder and smarter by integrating customer data and systems across the entire organisation, using the wealth of customer data they hold to gain genuine insight into customer behaviour. The entire organisation, and the systems supporting it, needs to be fully integrated.

Inability to retain customers
Retaining customers is widely accepted as the essential basis for operators to grow operations and average revenue per user (ARPU). And yet the research strongly suggests that the systems in place at the vast majority of operators leave them unable to meet this fundamental requirement. Just one-fifth (20 per cent) of respondents confirmed that their organisation actively monitors customers reaching the end of their contracts, with systems and processes in place to retain them. One-third (30 per cent) stated that although they could identify end-of-contract customers, they could not actively manage customer retention in this way. This neglect was borne out by the public, with more than half (53 per cent) of consumers with mobile contracts claiming that their mobile providers had never contacted them at the end of their contract to entice them into a new one. If we assume ARPU of €20, the total cost to European operators in lost revenue from customer churn could be as much as €46bn per year.
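As a back-of-envelope illustration only - the subscriber figure below is hypothetical and not taken from the research - a number of that order can be reached by multiplying the contracts allowed to lapse by twelve months of the assumed €20 ARPU:

```python
# Back-of-envelope sketch; the lapsed-contract figure is hypothetical.
monthly_arpu_eur = 20            # ARPU assumed in the article
lapsed_contracts = 190_000_000   # hypothetical: contracts allowed to lapse per year in Europe

lost_revenue = lapsed_contracts * monthly_arpu_eur * 12
print(f"Revenue at risk: ~EUR {lost_revenue / 1e9:.0f}bn per year")   # ~EUR 46bn
```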

Operators across the globe spend millions each year on marketing in a bid to attract new customers, and yet they are allowing existing customers simply to walk away at the end of their contract. Organisations should consider Customer Relationship Management (CRM) technology that can flag events in the lifecycle of a customer, such as reaching the end of a contract, and recommend to staff the course of action that gives the business the best chance of continuing the relationship with that customer. The smallest piece of customer information, such as a change of address, should also be the trigger for a whole new set of relevant, revenue-generating services (such as new broadband or television). But to take advantage of these opportunities, organisations need to be able to make sense of the wealth of information they hold on their customer base, wherever it resides in the business.
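A minimal sketch of the kind of rule-based trigger described above, with hypothetical thresholds and action names, might look like this:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List

@dataclass
class Customer:
    customer_id: str
    contract_end: date
    address_changed: bool = False

def next_best_actions(customer: Customer, today: date) -> List[str]:
    """Toy rule set: turn lifecycle events into suggested follow-up actions."""
    actions = []
    if customer.contract_end - today <= timedelta(days=60):
        actions.append("contact with retention offer before contract expiry")
    if customer.address_changed:
        actions.append("propose broadband/TV bundle for the new address")
    return actions

print(next_best_actions(Customer("C-001", date(2010, 6, 30), address_changed=True),
                        today=date(2010, 5, 15)))
```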

Clear demand for personalised, consistent and interactive services
The public indicated a clear preference for web-based customer service. When asked to rank the various service channels in order of preference, 83 per cent chose the internet as their first or second choice, 62 per cent chose e-mail, while just 32 per cent selected a contact centre. But simply having a web page for customers is not enough. Instead, they want their internet experience to be personalised and interactive. When asked what would encourage them to use the internet instead of ringing a contact centre, 47 per cent said the ability to view account trends and ways of saving money; 45 per cent said a personalised online service with tailored offers and 44 per cent were keen on services such as live chats with agents.

The research highlighted how operators are failing to meet these demands. Just under half of operators (46 per cent) claim to offer the ability to view account trends and ways of saving money online, while a mere 13 per cent provide online support agents with instant messaging facilities. More worrying, however, is operators' basic inability to meet customer demands for a personalised service because of the limitations of their systems. Only one-third were able to make recommendations to customers based on the context of each interaction, whether online or in contact centres, and just 11 per cent could do so online.

A clear appetite among consumers for using the internet in preference to all other channels is not being satisfied. Operators are missing a trick here, particularly when we consider the significant cost savings on offer: the cost of a single customer interaction can be reduced from $4.50 on the phone to $0.10 online. When an offer is both timely and clearly relevant, customers are more receptive to listening and accepting it than if they received a direct mail piece describing the product. When an individual gets in touch with their service provider, organisations should make the most of this window of opportunity in having the customer's attention.

The key lies in instilling real-time intelligence into any type of business process or customer interaction, and combining it with analytics to determine the most appropriate offer based on the context of the last interaction. For instance, when a customer logs in to their account online, the CRM system should swiftly make targeted product and service offers that relate directly to the customer's issue. Ideally, customer service staff working in contact centres should have access to the same system, supporting the delivery of a consistent experience across multiple channels.
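As an illustration, and assuming a hypothetical set of interaction contexts and offers, the same simple lookup could sit behind both the web portal and the contact-centre desktop, so that each channel recommends the same thing:

```python
# Hypothetical mapping from the context of the customer's last interaction to a
# targeted offer; sharing one function across channels keeps recommendations consistent.
LAST_INTERACTION_OFFERS = {
    "billing_query_high_bill": "tariff review and capped data bundle",
    "reported_slow_broadband": "discounted upgrade to a faster access product",
    "end_of_contract_enquiry": "loyalty handset upgrade with contract renewal",
}

def recommend(last_interaction: str) -> str:
    return LAST_INTERACTION_OFFERS.get(last_interaction,
                                       "no targeted offer - show standard promotions")

print(recommend("billing_query_high_bill"))   # same answer online or in the contact centre
```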

Excessive contact centre resource spent on low value queries
The survey also found that operators spend a disproportionate amount of contact centre time dealing with queries that bring little value to the business and that could be handled more easily and cost-effectively online. Half of the customer management professionals stated that between 40 per cent and 80 per cent of inbound calls concern billing queries. Although operators have attempted to drive these calls to the web, with 96 per cent stating that answers to common billing, product and service questions were available online, there has clearly been limited success: when asked to rate the success of their online self-service functionality (1 being unsuccessful and 5 very successful), the average score was just 2.65.

Deflecting high-volume, low-value calls to a website frees agents to handle high-value calls such as queries about new services or products. As the research demonstrates, customers value being able to resolve queries at times that suit them and respond with increased loyalty. Despite their best efforts, though, telecoms firms have enjoyed little success so far with the self-service initiatives they have deployed. The key here is ensuring that customers feel confident that the online self-service tools can address and solve their queries properly.

Along with this customised experience, encouraging adoption of self-service tools is also a matter of creating a compelling online experience. Self-service can be the platform for creating the personalised, interactive, engaging websites that customers desire. For instance, it can provide customers with the tools to access support information online, including forums, wikis, feedback forms, support communities, demos and downloads, all at the click of a mouse. Instant messenger tools for live web chats with agents, alongside discussion forums with engineers stepping in to resolve queries, provide a powerful scenario in which customers can channel ideas for product improvements and gain immediate guidance from technicians on complex issues. A dashboard to compare products, services and tariffs also adds to the customer experience, allowing customers to find the service that best meets their individual needs.

In an extremely competitive sector, operators that have succeeded in uniting customer-facing departments and focusing systems around customer needs are the ones with a distinct advantage. However, digging deeper into the findings, we also see that many operators already have in place the foundations necessary to deliver a consistent experience to customers across all channels. By using technology to support three basic principles, organisations will have in place the right customer service strategy to serve customers in an increasingly digital world. Firstly, they need to provide a seamless and consistent experience for all customer interactions. Secondly, they need to extend the understanding of the customer throughout the enterprise, allowing all areas to make informed customer-based decisions. And finally, operators need to use intelligence on their customers more effectively, by transforming customer data into actionable information and getting the right information to the right person (or customer) at the right time. With this approach, operators will be ideally positioned to take advantage of the commercial opportunities presented by having a loyal and lucrative customer community.

About the author:
Gordon Rawling is Senior Marketing Director, Oracle Communications

Larry Ellison, the famously blunt boss of Oracle, recently said of cloud computing: "The interesting thing about cloud computing is that we've redefined cloud computing to include everything that we already do. Maybe I'm an idiot, but I have no idea what anyone is talking about. What is it? It's complete gibberish. It's insane. When is this idiocy going to stop?"

In many ways his rant was right: almost anything you can think of in the IT and communications world has suddenly been converted by marketeers into a "cloud" something or other. With public cloud, private cloud, inter-cloud, everything-as-a-service and hectares of newsprint, cloud is right off the top of the Gartner hype curve.

The thing is, cloud is mostly a different way of doing many of the things we already do. It's a move from owning or using your own physical computing infrastructure or applications to sharing them in a virtual, online approach. In many ways that is a return to the 1980s timeshare model, which existed because computing was so expensive; then mini and micro-computers made it unnecessary, and now ubiquitous, cheap broadband has turned the wheel again. The big thing about cloud services is that they are pay-as-you-go, so you only pay for what you use, and that can generate huge savings: the average server runs at well under 10 per cent of its capacity, because servers are usually sized to meet peak rather than average demand.
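A rough, purely illustrative calculation (all figures hypothetical) shows why paying only for the hours of capacity actually used can be so attractive when average utilisation is that low:

```python
# Illustrative arithmetic only; every figure below is a hypothetical assumption.
owned_server_cost_per_year = 5000.0   # purchase amortisation + power + admin
utilisation = 0.10                    # ~10% average utilisation, sized for peak demand
useful_hours = 8760 * utilisation     # hours of genuinely used capacity per year

cloud_price_per_hour = 0.20           # hypothetical pay-as-you-go rate
cloud_cost = useful_hours * cloud_price_per_hour

print(f"Owned server: {owned_server_cost_per_year:.0f} per year")
print(f"Pay-as-you-go for the same useful hours: {cloud_cost:.0f} per year")
```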

But haven't we seen this all before? Weren't Application Service Providers (ASPs) supposed to revolutionise the applications market about ten years ago, only to bomb? Well, just like O2's famous silver surfer, WAP, location-based services and a whole host of other technologies, ASPs were a little before their time and the user experience was simply not as good as sitting in front of your own PC, mainly because the networks weren't fast enough for enough people. But ASP has quietly got on with it, and now Software-as-a-Service, or SaaS, is a key branch of the cloud world and companies like Salesforce.com are booming.

We're at the beginning of the cloud market and it is very early days, but the figures are so compelling that, provided we can overcome the downsides, it has a very interesting future. Cloud services (such as cloud computing, storage and applications) represent a significant evolution in the use and provision of digital information services for business effectiveness. Yet already the market is becoming littered with a confusing array of technical features, names, terms and proprietary operating characteristics. As buyers start to look in detail at using these services, it is clear that a number of barriers to adoption are showing up.

Which is why we launched our new Cloud Services program in December last year at Management World Orlando. The primary focus of the program is to help the industry overcome these barriers to adoption and assist the growth of a vibrant commercial marketplace for cloud-based services. We are doing this by enabling an eco-system of major cloud service buyers, cloud technology suppliers and cloud service providers, as well as other industry organizations that specialize in various aspects of this emerging market. The aim is to agree common approaches on important aspects of cloud services that will reduce buyer resistance, such as a common terminology, common approaches to allow transparent movement from supplier to supplier, security issues and benchmarking.

Key problems to overcome include:

  • Avoiding cloud service provider (SP) relationship "lock-in" and trapped islands of information/data
  • Information security concerns
  • Network latency and other communications-related issues
  • Integrating established in-house IT operations, including establishing "hybrid clouds" effectively
  • Maximizing return on investment in necessary cloud technology assets
  • The ability to choose between suppliers through transparent and common metrics/benchmarks

The Forum is well placed to act as this kind of orchestrator across the growing cloud services eco-system. With more than 20 years of experience as a growing, membership-driven trade organization, now with over 700 member companies in 185 countries, the Forum has a successful track record of bringing together buyers and sellers of digital services and applications technologies. Our work has resulted in many best practices and standards, and our frameworks have been adopted by enterprises all over the world.

Look out for continually updated details of this new program at http://www.tmforum.org, or contact the program lead, Jim Warner, at jwarner@tmforum.org

Despite low rankings in broadband tables, the UK's enterprise broadband infrastructure is already world class, claims Phil Male

We all know the reputation the British have - justified or not - for being a fairly gloomy race, tending to exaggerate problems and often taking the worst-case scenario as given. It seems that this extends to our views on the nation's broadband infrastructure. However, a clear distinction must be drawn between consumer broadband networks and networks designed exclusively for the use of businesses. A lot of the analysis of the state of broadband in this country blurs this obvious distinction and therefore draws incorrect conclusions. When it comes to enterprise broadband networks, the UK is actually one of the world leaders.

Looking at international consumer broadband ranking tables the UK often appears near the bottom, leaving the impression that, compared to the rest of Europe, the UK is a technological backwater, burdened with poor coverage and leisurely download speeds. But the situation is not nearly that bad. While the UK's apparently poor consumer broadband infrastructure has attracted the attention of the Government (which, following the Digital Britain report, is starting to introduce measures to improve it), the nation's businesses have access to truly world class networks that can deliver a range of value-add applications allowing them to compete with any rival across the globe.

Enterprise requirements
When analysing the quality of the UK's business broadband networks, we must begin with an understanding of the requirements of enterprises and then look at how well these are being met by operators.

In today's environment, more and more is expected of the networks used by businesses. Provision of voice and data services is no longer enough: modern organisations expect to be connected to suppliers and customers anywhere, anytime and on any device. It is this responsiveness that provides them with the competitive edge they need to drive their business forward.

Enterprise applications are increasingly seen as being critical to success.  They can improve productivity, cut costs and change the way businesses communicate with all levels of their ecosystem.  The choice of network is critical for these applications to run successfully. 

Features of the latest enterprise-wide applications, such as Oracle and SAP, can be very demanding, particularly when high-performance voice and data applications run concurrently over a single network. The business case for a new application only stacks up if the network is flexible and responsive, and this is what is driving businesses' requirements. Put simply, competitive edge comes from the applications used by enterprises, and the only thing standing in the way of this being realised is the network.

How enterprise broadband services can be provided
In the UK, there should be no barrier to the take-up of feature-rich applications: networks of sufficient speed and intelligence are available today and are, in fact, being deployed and used across a wide range of sectors. State-of-the-art networks based exclusively on next-generation IP run applications that businesses hadn't even dreamed of a decade ago. The potential cost savings with the right next-generation network (NGN) make the business case for using just one network possible. With it, businesses can get the best from their applications, host them in a single data centre, and prioritise, measure and manage their performance, cutting costs through simplified operations while at the same time minimising their carbon footprint.

The NGN infrastructure available to businesses today ticks all the boxes when it comes to providing a competitive national infrastructure. These networks are fast and affordable, providing speeds of up to 10Gbps, which enable even the most data-heavy applications to run at blistering speeds. In addition, through advanced multiservice access node (MSAN) technology, a network is now capable of delivering speeds in excess of traditional leased lines. This means that backhaul and contention-ratio service levels can now be tailored to support business requirements.

Networks are also scalable and resilient, providing up to 99.999 per cent availability. They can be delivered over a range of access media including fibre, copper, radio, Asymmetric Digital Subscriber Line (ADSL), Symmetric Digital Subscriber Line (SDSL) or Ethernet, allowing businesses to tailor their networks to best suit their needs. What's more, while the right infrastructure means broadband last-mile connectivity is an efficient way of delivering corporate networks into many sites (such as retail footprints), the networks of today are able to seamlessly integrate other access technologies into the same network.

Critically, enterprises are now being given the opportunity to better control their networks through services such as Application Performance Management (APM). This gives them, over the Wide Area Network, the levels of control previously only seen in the Local Area Network, ensuring that every application runs as efficiently and effectively as possible.

There is little doubt that the technical innovations of IP, varied access methods, Ethernet, APM and multi-service platforms have revolutionised networking in the UK. The country can hold its head high and boast a truly competitive network. The story, however, is not quite that simple. For a network to offer optimum service levels across all of an enterprise's areas of operation, enterprises need to look beyond the shores of the UK and take a more global perspective too.

The importance of global network reach
The quality of network infrastructure can vary greatly from country to country, but also between individual suppliers within a given region.  For multinational companies (MNCs) this can add layers of additional complexity into the network build.  As well as creating issues around the need to interconnect between networks built on different standards and equipped with different capabilities, the MNC is hindered by having to arrange separate network deals with all the different suppliers.  It is an inelegant approach that can damage the quality of the overall network, both in terms of technology performance and operational activity by the multitude of suppliers.

MNCs need to choose an operator that can provide the same high-levels of network performance across the globe.  The requirement is for a single architecture and design function, a single network operations centre and a single set of design standards. This removes the complexity you find when individual networks are designed locally, or where there is no single owner for network operation.  Businesses should, therefore, concentrate less on the broadband performance statistics of individual countries and look instead for a global provider that can deliver a leading edge network across all areas of operation.

Digital Britain - a world class broadband network
The question of whether Britain has a world-class broadband network is, therefore, more complicated than it first looks. We must differentiate clearly between the types of network we are discussing - consumer or business - and then put this into the context of a globalised economy. In the business world, Britain can provide private and virtual private global networks that can compete with those of any other nation, a point that seems to have been underplayed in the Digital Britain report. Whether the same can be replicated for consumers is a question for other operators to answer. But by looking at the business networks on offer today, these operators can get a good understanding of the technologies and service levels that will be required to provide consumers with data-rich applications and services that will be the envy of the world.

About the author:
Phil Male is Operations Director,
Cable&Wireless Worldwide

Testing interactions between elements, and simulating real-life traffic, will be crucial  to the timely development and deployment of IMS architectures, says Bruno Deslandes

The move towards an IMS architecture for delivering services on the all-IP backbone networks of wireless and wireline carriers is undoubtedly under way. In a recent report on IMS and VoIP sales, Infonetics predicted that worldwide IMS equipment sales would more than double in 2009 compared to 2008, and reach $1.6 billion by 2013. Because the Evolved Packet Core, the 3GPP Release 8 core network architecture, is based on IMS for its service delivery architecture, and the solution to the voice-over-LTE problem may well be based on IMS, LTE should be a strong driver for IMS adoption. Right now, 12 major carriers have already announced commercial LTE launches for as early as 2010. According to a study by 3G Americas, 120 telcos have voiced commitments to deploy or at least trial LTE before the end of 2014.

When deploying an IMS architecture, carriers are facing a wealth of challenges. IMS has been designed to be a modular architecture where functions are implemented in specialized network nodes, each node interacting with the others through standardized interfaces. This standardisation enables IMS, for instance, to operate over GSM/UMTS/LTE, WiMax, WLAN and PSTN access networks.

One of IMS's assets resides in the isolation of services (such as ring-back tone or multi-party voice and video conferencing) in Application Server (AS) nodes, the role of the AS being to coordinate and synchronise the resources and functions of the network to implement the required service. Deploying a new service is then a matter of deploying new software on an existing AS or deploying a new AS. Since the interfaces of the AS are well defined, such a new service can be set up without disrupting the overall network architecture.

Such a modular architecture opens up competition in the network equipment market, resulting in a rich and innovative offering. Carriers can then escape traditional vendor lock-in and cut CAPEX. On the other hand, this expandable design multiplies the number of nodes, sharply raises the number of interfaces between them and adds yet another layer of complexity to the signalling call flows involved in delivering a call.

Getting the benefits of a multivendor environment means carriers take on greater responsibility for building their network. Where before they could rely on vendors to provide a working solution, they now have to take on the role of network integrator.

One of the first challenges is making sure that all the selected components interwork well. Because of the diversity of vendors, node functionalities and carrier-specific configuration, no vendor will fully guarantee that its equipment interoperates with the others selected by the carrier in the configuration chosen by the carrier. Requirements can be placed on conformance to protocol and interface specifications, and carriers can check this conformance. But there is no standard defining the conformance rules for SIP or Diameter, the two main signalling protocols used in IMS; the initiative is left to the market. The UNH-IOL test lab has developed a SIP conformance test suite and offers conformance testing services, and most testing tool vendors covering SIP and Diameter offer SIP conformance test suites. Protocol specification coverage and completeness depend on the tool vendor.
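At the simplest end of the scale, a basic reachability check can be scripted in a few lines, as in the sketch below, which sends a single SIP OPTIONS request over UDP and reports the status line of any reply. The addresses used are hypothetical, and this is a smoke test rather than a conformance suite:

```python
import socket
import uuid

def sip_options_ping(target_ip, target_port=5060, local_ip="192.0.2.10",
                     local_port=5070, timeout=2.0):
    """Send one SIP OPTIONS request over UDP and return the reply's status line, if any.

    local_ip must be an address of this host that the target can reach; it is only
    placed in the SIP headers here.
    """
    branch = "z9hG4bK" + uuid.uuid4().hex[:16]   # RFC 3261 magic-cookie branch prefix
    request = (
        f"OPTIONS sip:ping@{target_ip}:{target_port} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP {local_ip}:{local_port};branch={branch}\r\n"
        "Max-Forwards: 70\r\n"
        f"From: <sip:test@{local_ip}>;tag=smoketest\r\n"
        f"To: <sip:ping@{target_ip}>\r\n"
        f"Call-ID: {uuid.uuid4().hex}@{local_ip}\r\n"
        "CSeq: 1 OPTIONS\r\n"
        "Content-Length: 0\r\n\r\n"
    )
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", local_port))
    sock.settimeout(timeout)
    try:
        sock.sendto(request.encode(), (target_ip, target_port))
        reply, _ = sock.recvfrom(4096)
        return reply.decode(errors="replace").splitlines()[0]   # e.g. "SIP/2.0 200 OK"
    except socket.timeout:
        return None
    finally:
        sock.close()

print(sip_options_ping("10.0.0.42"))   # hypothetical address of the node under test
```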

Moreover, conformance does not necessarily mean interoperability: effective interworking of equipment has to be tested. The interdependency of equipment in the IMS architecture should lead carriers to build an IMS network dedicated to testing. With such an IMS testing infrastructure, they can check interoperability between the equipment they have selected and also conduct end-to-end testing, which is, in the end, what counts most for delivering high-quality services to their customers.

IMS testbeds blend real equipment as deployed in the network, test tools and simulators. Test tools and simulators bring flexibility and improved testing capabilities. They are essential for producing well-defined traffic patterns in a reproducible way, and they make it easy to produce error conditions that would be tedious to create with off-the-shelf equipment.

Simulators are also a way to reduce costs. The Home Subscriber Server (HSS) is the central IMS identity repository that delivers authentication. An IMS testbed must include an HSS, but this is a very expensive piece of equipment and it requires specific knowledge to set up and operate. Using an HSS simulator that offers test and debug facilities in addition to HSS functions cuts costs, enriches the workbench's testing capabilities and gives flexibility in operation. Vendors are proposing tools capable of emulating multiple instances of different kinds of IMS nodes. These tools can be used to enlarge the topology of the testbed, making it closer to what is deployed in the field without the expense of buying real network equipment.
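As a deliberately simplified illustration of what an HSS simulator stands in for, the sketch below models only the data side of the job, a subscriber store answering a user-authorisation style query, whereas a real HSS exposes this over the Diameter Cx/Sh interfaces; all identities are hypothetical:

```python
# Toy stand-in for an HSS on a testbed: an in-memory subscriber store answering the
# question other nodes most often ask (is this user known, and which S-CSCF serves them).
SUBSCRIBERS = {
    "sip:alice@ims.example.net": {"scscf": "scscf1.ims.example.net"},
    "sip:bob@ims.example.net":   {"scscf": "scscf2.ims.example.net"},
}

def user_authorization(public_identity: str):
    """Rough analogue of a Cx User-Authorization query."""
    entry = SUBSCRIBERS.get(public_identity)
    if entry is None:
        return {"result": "DIAMETER_ERROR_USER_UNKNOWN"}
    return {"result": "SUCCESS", "scscf": entry["scscf"]}

print(user_authorization("sip:alice@ims.example.net"))
print(user_authorization("sip:mallory@ims.example.net"))
```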

Given the price of an IMS testbed, most operators will not be able to afford several such facilities. With market pressure to deliver new services at a steady pace, there is a risk that IMS testbeds become a bottleneck slowing down the release of new commercial offers. With their flexible configuration and test automation capabilities, test tools and simulators are critical for the IMS testing infrastructure to be efficient (running the fastest possible test campaigns) and agile (quickly switching between different testing conditions). They contribute significantly to increasing the quality of delivered services while reducing testing time and costs.

With an IMS testbed, carriers can submit their new services and equipment to functional, load, stress, robustness and performance tests in conditions close to real deployment. It gives them the ability to assess the efficiency and reliability of new services and their impact on the existing network.

While this assessment is required, it is not sufficient. IMS empowers carriers to deliver feature-rich services and customers to combine the use of these services. Trying to reproduce the diversity of invocations and combinations that may occur in real life would sky-rocket testing costs.

To deal with this issue, lab testing can be complemented in the early deployment stage and during ramp-up by monitoring the network. Watching alarms and statistics on a per-equipment basis flags problems and gives hints about their cause, but it does not offer the global, comprehensive view required to diagnose them. IMS call flows can be very complex, involving dozens of message exchanges in many different protocols (SIP, Diameter, H.323, H.248 and so on), and the more sophisticated the services, the higher the complexity. Manual analysis is tedious, very time-consuming and slows down corrective action.

An effective strategy can consist in deploying a network monitoring tool that captures, archives and analyses signalling traffic in real time. When a problem occurs, such a tool can retrieve the complete history of multiprotocol signalling message exchanges and enable network specialists to quickly locate errors and their causes. Such tools often offer on-the-fly traffic analysis, automatically flagging errors or abnormal behaviours such as slow response times, message loss or other deviant traffic patterns which are not errors but indicate service quality degradation. This observation of network behaviour enables preventive maintenance to take place before problems become noticeable at a large scale, and contributes to monitoring and maintaining network QoS.
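The core of such per-transaction analysis can be sketched as follows, assuming the captured signalling has already been decoded into simple (timestamp, direction, transaction-id) records; the field names and threshold are illustrative:

```python
SLOW_THRESHOLD_S = 0.5   # illustrative threshold for flagging slow responses

def analyse(records):
    """Pair requests with responses by transaction id; flag slow or missing replies."""
    pending, findings = {}, []
    for ts, kind, tid in sorted(records):
        if kind == "request":
            pending[tid] = ts
        elif kind == "response" and tid in pending:
            delay = ts - pending.pop(tid)
            if delay > SLOW_THRESHOLD_S:
                findings.append(f"{tid}: slow response ({delay:.2f}s)")
    findings += [f"{tid}: no response seen" for tid in pending]
    return findings

capture = [
    (0.00, "request",  "INVITE-17"),
    (0.12, "response", "INVITE-17"),
    (1.00, "request",  "SAR-42"),     # a Diameter Server-Assignment exchange, for example
    (1.90, "response", "SAR-42"),     # 0.9s -> flagged as slow
    (2.00, "request",  "INVITE-18"),  # never answered -> flagged as missing
]
print(analyse(capture))
```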

Because the coupling between the network and OSS/BSS is tighter in IMS, testing must also cover interaction with the OSS/BSS systems. Many IMS nodes have a Diameter interface with the billing system for real-time charging; an AS, for instance, may query it to check that the customer has enough credit before delivering a service. This means the OSS/BSS may be directly involved in call signalling and should be part of network testing.

Provisioning, or service activation, is also a concern. With the distributed nature of IMS, a change to a user's service profile is likely to affect several IMS nodes. Tests must guarantee that such changes are performed correctly and consistently. With customer self-care now the common rule, the rate of such changes can reach several tens of thousands per day in a big carrier's network. This is a real challenge.

IMS's open standards are a tremendous opportunity for carriers to leverage multivendor market offers, reducing the cost of building their IMS network and gaining a richer services portfolio. This heterogeneity, combined with OSS/BSS interaction, results in a complexity that raises the cost and length of pre-deployment tests. IMS testbeds equipped with test tools, HSS simulators and other IMS node simulators, traffic generators and the like are powerful means to keep these costs and timescales compatible with market pressure. Tools for monitoring IMS signalling and protocol exchanges complement the testing approach in the deployment and operation phases, and make sure the required service quality is effectively delivered.

About the author:
Bruno Deslandes is IMS Product Manager,
Marben Products.

Lynd Morley was editor of European Communications for 17 years. In that time she charted the course of an industry that changed out of all recognition. Yet Lynd herself remained a constant source of good writing, sound analysis and accurate, good-natured, opinion on the industry she covered.

The magazine you read today is just one tribute to her. How many other print titles have remained so widely-read and influential in the current media marketplace?

But Lynd was not just a fine editor of this title. Prior to taking over at EC she had distinguished herself as a reporter on the UK IT title Computing, and her work also saw her published in the FT, The Guardian and others. It is also a little-known fact that she worked on early issues of Cosmopolitan magazine.

In recent years she also acted as editor of the TM Forum Yearbook - a testament to the esteem in which she was held within that sector of the industry - and in June 2009 she was voted winner of The People's Choice Award (Open Category) at the World BSS Awards, held at the BSS Summit in Amsterdam.

She was known among her friends for her commitment to human and women's rights causes, and was a supporter of Amnesty International and the National Union of Journalists.
Despite fighting lung cancer for over a year, it was often possible during that time, for those not closest to her, to forget that she was ill, as her humour, positive outlook on life and concern for others remained uppermost. She continued working, producing issues of European Communications to her usual high standards, and attending industry events and trade shows.

Perhaps the best tribute to Lynd is the flow of tributes that we publish from some of her closest industry friends and colleagues. There could have been many, many more, and we apologise to those we have not been able to include. We will all miss her.
Keith Dyer

Alan Burkitt-Gray, Editor, Global Telecoms Business
I first met Lynd in September 1980 when I became news editor of the weekly newspaper Computing, where Lynd was one of the reporters.

We took no prisoners when we reported on our industry, and Lynd, with her sharp ability to spot when someone was talking nonsense, was a superb member of an excellent team.
A decade later when I was freelance, one of my last assignments was editing two issues of European Communications, following Adrian Morant as editor. I was then offered a job launching a cable magazine for another company, and the owner of European Communications showed excellent judgment in recruiting Lynd, as she did a far better job of it than I ever did.

Later on we were more closely in parallel, she still as editor of European Communications - they were so lucky to have her for 17 years, unmatched stability in this business - and me editing one of its competitors. It was great that we remained good friends, and could phone each other up and moan about business, and that there was a friendly face at telecoms conferences and press events so that one could go into a corner and ignore the talk about eTOM and NGOSS and discuss families, friends and former colleagues, and politics and life.

Dee Gibbs, Managing Director and Founder, MiLiberty
It has still not sunk in that I can't just call or email Lynd and organise to meet up on one of her excursions to London from her homestead in Wales. We would arrange to meet to discuss all manner of technological advances in the telecommunications industry, but more often than not we'd end up swapping gossip and recommendations for good food and wine.
As a journalist, Lynd was respected and admired by those who knew her in a professional capacity. We would often talk about our chosen industry and how she viewed technology advances as necessary, but only if there was a human element or benefit. This was Lynd all over; in fact, we used to joke about how she would make a good PR - taking the complex and making it appeal to the masses.

Lynd was full of ideas, always thinking ahead and wanting to introduce new things and facets to her life. She strived to make things better and was gracious, charming and giving. I recall media briefings where she took the rap for PRs or for the client if there had been any misunderstanding. Her generosity was real.

For me, my personal memories of Lynd include her gift of a celebratory dinner in New York when she found out, while we were both on the same press trip, that I had become engaged to be married - right up to her acceptance of the invitation to be a guest at my wedding. I still have her email saying that she and her husband, Alan, would be there with knobs on! That was so very Lynd, and just like her forced absence on my special day, she will be sadly missed.

Alun Lewis, Freelance journalist and regular contributor to European Communications
I first got to know Lynd in the early '90s when I was a still-youngish PR. Telephone pitches to Lynd soon turned into longer conversations on life, the universe and telecoms and we discovered common interests as well as a mutual dislike of much of the marketing hype that came to infest telecoms by the end of that decade! The friendship deepened over the years and, in typical Lynd style, she was incredibly supportive when I became a freelance writer some years later.

More recently, she and her husband Alan moved from London down to my ancestral turf in West Wales, where the conversation continued often late into the night with much shared laughter at the follies of the world. As mortality entered the picture, first with the nearly simultaneous deaths of her father and my mother - and then Lynd's diagnosis of cancer - her and Alan's robust pragmatism, sheer good humour and perpetual curiosity about the world continued to make life something to appreciate and enjoy. In what often seems to be an increasingly corporatised world, her individual passions, opinions and instinctive contrariness against the status quo will be sorely missed by many. I, for one, feel like I have lost a much-valued and much-loved sister.

Annie Turner, freelance journalist and contributor to European Communications
Lynd Morley was a generous, loyal, wise and witty friend, who held her family dear. She had a genius for being proper without being prissy, for being realistic but cheery, and for lending a sympathetic ear, no matter how many troubles she had herself. It says much for her that I cannot ever recall her being unkind about anyone, nor anyone having an unkind word to say about her. She wore her extraordinary knowledge of the telecoms industry lightly and a great many of us will miss acutely her calm, laconic and irreverent presence personally and professionally.

Juliet Shavit, President and CEO, SmartMark Communications
I was fortunate to have gotten to know Lynd well over the past decade through my professional dealings with her, but I am also grateful to have spent time with Lynd outside the office - often in Nice, where we sat in the sun together or caught a meal by the ocean and reminisced about a writer's life and where the publishing world was headed. We both shared an undying love of literary tradition and the print world, but we were equally fascinated by the possibilities new media was bringing into both of our lives.

Most recently we talked about what we could do if given opportunity and time. I know Lynd was actively engaged in women's issues, and we both saw infinite possibilities of the human spirit, in women in general, and in ourselves - if given the chance. 

I am deeply saddened by her loss, and the vacuum of things left for her to do. But, I am equally respectful of the great things she accomplished as one of the few women editors and significant influencers in the communications industry. Outside work, she was a writer above all else and a simply classy lady.
 
Keith Willetts, Chairman & CEO, TM Forum
Lynd interviewed me many times over the years on Forum issues and was always generous with her time and reporting. We collaborated closely to establish the Forum's publications activity and she was the first Editor of our Yearbook (now called Perspectives) from 2006-2008. A wonderfully warm and effervescent person, interested and articulate on a huge range of issues from managing horse paddocks (of which we spoke often!) to human rights, Lynd will be sorely missed by everyone who knew her. She became a much loved part of our TM Forum "family," and her passing seems just like losing a close relative.

Jeremy Cowan, Editor and Publisher, Vanilla Plus
Lynd's skills as a journalist and editor were exceptional, and uniformly appreciated within the telecoms fold. Although European Communications and VanillaPlus could be viewed as competitors in some areas, it has always been easier to see her as a friend than a rival. Lynd was that rare person who is more interested in hearing your news than telling you theirs, a modesty that concealed a keen judgement and entertaining wit. She will be much missed by those of us who knew her and valued her kindness and great good humour.

Mark Bradbury, Sales Director, European Communications
Lynd and I started our journey on European Communications in 1992, within just a few months of each other. Lynd quickly guided me through the minefield of Telecoms techno-babble and played a huge role in giving my working life the focus and meaning that it had previously lacked.

Lynd was an incisive editor and a far better journalist than she ever gave herself credit for. She had the rare ability to communicate complex themes to the uninitiated and invariably made the subject more readily accessible by incorporating a human angle. It speaks volumes that Lynd's heroes were those to whom the welfare of others was paramount (I recall how particularly thrilled she was to hear Nelson Mandela speak at the World Economic Forum in Davos).

Our partnership on EC spanned a period of 17 years and out of this came lasting friendship, respect and unswerving loyalty, not to mention an infectious sense of humour that I will forever cherish. As fellow "home workers" in rural environs our phone conversations and e-mail correspondence often took on a comically agricultural tone and whether bemoaning a lost advertising sale or sharing a personal worry, a chat with Lynd on a Friday afternoon could be counted on to lift the spirits in time for the weekend. Lynd retained this overriding concern for her friends, family and colleagues throughout her illness.

I recall some of Lynd's final words to me being along the lines of "take a holiday and spend some quality time with Harriet and the kids"...a week on Lundy Island has been duly booked! I can't help thinking that we could all do a lot worse than to follow the sign-off instruction contained in so many of Lynd's e-mails to "Keep smiling".

Self-organising networks will be crucial to the efficiency and cost-effectiveness of mobile broadband networks. But will they work?

Self Organising Networks (SON) is a new technique being introduced alongside LTE/SAE as part of the next generation of mobile broadband network technology. The objective is to automate the configuration and optimisation of base-station parameters to maintain the best performance and efficiency. Previously, a drive-test team would go out into the live network and take a 'snapshot' of performance, then bring this back to the lab and analyse it to improve the settings; but this data acquisition process is expensive, difficult and not repeatable.

SON will enable network operators to automate these processes using measurements and data generated in the base station during normal operation. By reducing the need for specific drive-test data, this technique should reduce operating costs for an operator. And by using real-time data generated in the network and reacting at the network-element level, SON should enhance the customer experience by responding to changes and problems in the network much earlier.

Basic principles
SON is the top-level description of the concept of more automated (or fully automated) control and management of networks, where the network operator has only to focus on policy control (admission control, subscribed services, billing and so on) and high-level configuration and planning of the network. All low-level implementation of network design and settings is made automatically by the network elements. The self-organising philosophy can then be broken down into three generic areas relating to the actual deployment of the network: configuration (planning and preparation before the cell goes live), optimisation (getting the best performance from the live cell), and healing (detection and repair of fault conditions and equipment failures).

Self Configuration
This covers the process of going from a 'need' (e.g. the need to improve coverage, improve capacity or fill a hole in coverage) to having a cell site live on the network and providing service. The stages involved are roughly:

  • Planning for location, capacity and coverage
  • Setting eNodeB parameters (radio, transport, routing and neighbours)
  • Installation, commissioning and testing

The self-configuring network should allow the operator to focus on selecting location, capacity and coverage needs; SON should then automatically set the eNodeB parameters to enable the site to operate correctly when powered on. This in turn minimises the installation and commissioning process, enabling a simple "final test" at the site to confirm that the new site is up and running.
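As one illustrative example of a self-configuration step (hypothetical values, and much simplified compared with real SON algorithms), a new eNodeB might pick a physical cell identity (PCI) that none of its detected neighbours is using before going live:

```python
import random

# Simplified sketch only: choose a PCI not used by any detected neighbour so the new
# cell can go live without clashing with surrounding cells. Real SON algorithms also
# consider further conflict rules; this keeps only the basic idea.
def choose_pci(neighbour_pcis, pci_range=range(504)):   # LTE defines 504 PCIs (0-503)
    free = [pci for pci in pci_range if pci not in set(neighbour_pcis)]
    if not free:
        raise RuntimeError("no collision-free PCI available - replan the area")
    return random.choice(free)

detected_neighbours = [12, 37, 105, 311]   # hypothetical PCIs reported by a survey or by UEs
print("Assigned PCI:", choose_pci(detected_neighbours))
```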

Self Optimising
Once a site is live and running, there are often optimisation tasks to be carried out that are more of a 'routine maintenance' activity. As the geography of the area changes (e.g. buildings constructed or demolished) and the radio environment changes (e.g. new cells added by the operator or by other operators, or other RF transmitters in the same area or on the same tower), the neighbour cell lists, interference levels and handover parameters must be adjusted to ensure smooth coverage and handovers. Currently, the impact of such issues can be detected using an OSS monitoring solution, but the remedy requires a team to go out in the field and make measurements to characterise the new environment, then go back to the office and determine the optimum new settings. SON will automate this process by using the UEs in the network to make the required measurements in the field and report them automatically back to the network. From these reports, new settings can be determined automatically.
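A minimal sketch of this kind of automatic neighbour relation building, with hypothetical thresholds and report fields, could look like this:

```python
# Sketch of building a neighbour cell list from UE measurement reports. The threshold
# and report fields are illustrative, not taken from any standard or product.
RSRP_ADD_THRESHOLD_DBM = -110    # cells heard stronger than this become neighbours

def update_neighbour_list(current_neighbours, ue_reports):
    """Add any cell a UE reports above threshold that we do not yet know about."""
    neighbours = set(current_neighbours)
    for report in ue_reports:
        if report["rsrp_dbm"] >= RSRP_ADD_THRESHOLD_DBM:
            neighbours.add(report["cell_id"])
    return sorted(neighbours)

reports = [
    {"cell_id": "cell-204", "rsrp_dbm": -98},    # strong -> added
    {"cell_id": "cell-377", "rsrp_dbm": -121},   # too weak -> ignored
]
print(update_neighbour_list(["cell-101", "cell-102"], reports))
```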

Self Healing (fault management and correction)
The third element of SON is to detect automatically when a cell has a fault (e.g. by monitoring both the built-in self-test and the neighbour cell reports made by UEs that are, or should be, detecting the cell). If the SON reports indicate that a cell has failed, there are two necessary actions: to indicate the nature of the fault so that an appropriately equipped repair team can be sent to the site, and then to re-route users to another cell if possible and re-configure neighbour cells to provide coverage in the area while the repair is under way. After the repair, SON should also take care of the site re-start, in a similar process to site commissioning and testing.
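A toy sketch of the detection and compensation logic, with illustrative identifiers and rules, might be:

```python
# Toy self-healing check: if a cell stops appearing in the measurement reports of UEs
# that should be able to hear it, raise an alarm and nominate neighbouring cells to
# cover the gap. Identifiers and the compensation rule are illustrative only.
def detect_outage(expected_cells, recently_reported_cells):
    return [c for c in expected_cells if c not in set(recently_reported_cells)]

def compensation_plan(failed_cell, neighbour_map):
    return {n: "increase downlink power / widen coverage"
            for n in neighbour_map.get(failed_cell, [])}

expected = ["cell-101", "cell-102", "cell-103"]
reported = ["cell-101", "cell-103"]          # cell-102 has gone quiet
for failed in detect_outage(expected, reported):
    print(f"ALARM: {failed} suspected down")
    print("Compensation:", compensation_plan(failed, {"cell-102": ["cell-101", "cell-103"]}))
```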

Technical issues and impact on network planning
To deploy SON in a multi-vendor RAN environment requires standardisation of the parameters used for reporting and decision making. The eNodeB will need to take the measurement reports from UEs and from other eNodeBs and report them back into the O&M system, to enable optimisation and parameter setting. Where multiple vendors' equipment is involved, this must be in a standardised format so that the SON solution is not dependent on a particular vendor's implementation.

The equipment vendors who are implementing SON will need to develop new algorithms to set eNodeB parameters such as power levels, interference management (e.g. selection of sub-carriers), and hand-over thresholds. These algorithms will need to take into account the required input data (i.e. what is available from the network) and the required outcomes (including co-operation with neighbour cells).

Furthermore, as SON is also implemented in the core network (the Evolved Packet System, EPS), there need to be standards for the type and format of data sent into the core. Inside the core network, new algorithms will be required to measure and optimise the volume and type of traffic flowing, taking into account the quality of service and service type (e.g. voice, video, streaming, browsing). This is required to enable the operator to optimise the type and capacity of the core network, and to adjust parameters such as IP routing (e.g. in an MPLS network), traffic grooming and so on.

Effects on network installation, commissioning and optimisation strategies
Equipment vendors now have the opportunity to develop algorithms that link eNodeB configuration to customer experience, allowing fast adaptation to customer needs. The challenge is to link RF planning and the customer's 'quality of experience' more closely together at a low level of technical implementation. The benefit is that the network can adapt to meet user needs in the cell without the additional cost of optimisation teams constantly being in the field. The network planners' simulation environment will now need to take into account the SON operation of the eNodeB when making simulations of capacity and coverage for the network. As the operator may not directly control or configure the eNodeB, the simulation environment will need to predict the behaviour of the network vendor's SON function in the network.

The operator's or installer's site test must verify that all parameters are correctly set and working in line with the initial simulation and modelling. This will ensure that the expected coverage and performance are provided by the eNodeB. The SON function will then 'self-optimise' the node to ensure that this performance is maintained under different operating conditions (e.g. traffic load, interference). This should reduce the amount of drive testing required for configuration and optimisation (in theory to zero), leaving drive testing needed only for fault finding where SON is not able to self-heal the problem.

Conclusion
SON can simplify operators' processes for installing new cell sites, reducing the cost, time and complexity of installing new sites. SON gives an obvious benefit when deploying femtocells, as the operator is not strictly in control of the cell site and needs to rely on automated processes to configure the cell correctly into the network. In addition, the running costs of the site are reduced, as drive-test optimisation is reduced and site visits for fault investigation and repair can be cut. All of this leads to OPEX savings for the network by using automated technology to replace manual operations.

The OSS monitoring systems and SON should work together to automatically detect usage trends and failures and automatically take action in real time to correct errors.

About the author: Jonathan Borrill, Director of Marketing, Anritsu EMEA Ltd.

    
