Matthew Finnie looks at the ever-growing bandwidth requirements in Europe, and details what can be done to meet this increased demand

For those of us who have been working for a decade or more, the speed and reliability of our Internet connection is now assumed. We want access to be ubiquitous and limitless. We no longer hunker down and dial into the Internet, or marvel at a T3 transatlantic interconnect that speeds it all up.

Europe - the new Internet superpower
The Internet was a North American invention, and for a long time America was the Internet. Those days have gone: Europe is now the largest Internet presence in the world, and the fastest growing market. Recent data from telecoms analyst group Point Topic's Global Broadband Statistics service suggested that Eastern Europe continues to show growth, and was the only region to record more than 10 per cent growth in Q2 of this year. In March 2007 Romania passed the one million subscriber mark, bringing the number of Eastern European countries among the 10 fastest growing worldwide to three.

Western Europe is also setting a fast pace when it comes to broadband growth. Greece was the top grower in percentage terms in Q2, expanding by 27 per cent, while the biggest mover among the top 10 countries by number of subscribers was France, which achieved the highest growth rate in that group at 9.36 per cent for the quarter.

As well as being the fastest growing territory, Europe also has the highest number of broadband subscribers. A recent audit by Internet World Stats put America at 64,614,000 subscribers and China at 48,500,000, while the total for the nine largest European countries is 77,706,870. Europe is clearly a long way ahead, even without counting the rest of the continent.
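The arithmetic behind that claim is easy to check. Below is a quick sanity-check sketch in Python, using only the figures quoted above (the data is the article's, not live):

```python
# Broadband subscriber figures quoted above (Internet World Stats audit).
subscribers = {
    "USA": 64_614_000,
    "China": 48_500_000,
    "Europe (top nine countries)": 77_706_870,
}

# Print regions from largest to smallest subscriber base.
for region, count in sorted(subscribers.items(), key=lambda kv: -kv[1]):
    print(f"{region:30s} {count:>12,}")

europe = subscribers["Europe (top nine countries)"]
usa = subscribers["USA"]
print(f"European lead over the USA: {europe - usa:,} subscribers")  # ~13.1 million
```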

The increase in broadband subscription is being driven not by businesses but by consumers, with their relentless experimenting and evolving applications. The sharing of videos, photos, music and more across sites such as YouTube and Facebook places enormous demands on bandwidth. The challenge for the DSL providers giving consumers access is that they aren't sharing in the valuations the content providers are seeing. Given this, how do they maintain spend to keep delivering service while access prices, in real terms, are declining on a per-meg basis? Perhaps part of the problem is that many of these providers have still not embraced a connectionless NGN world, in which access to a customer is not a guarantee of all service revenue.

New applications are now being developed on the assumption that broadband service is simply there. While the access provider looks to add TV, whole groups of players with no network of their own see the penetration of broadband as the meal ticket for their latest venture. The most bizarre turn in this trend is mobile operators freeing up capacity in their own wireless networks by placing femtocells on consumer premises, in some cases using the existing DSL line as wireless backhaul.

Bandwidth, and its availability, has hit a tipping point where people expect ever-increasing levels of service. One note of caution: most consumer broadband networks are horribly asymmetrical - while bandwidth is going up, there is still a bias toward download, making them unsuitable for many business applications. But the global business community is also demanding ever-increasing volumes of bandwidth as more and more business-critical applications sit on the Internet.

Business 2.0 
Web 2.0 is a phrase coined to capture the collaborative nature of a network, and now people are talking about Business 2.0 in the hope that some of this social networking and agile application delivery will rub off on the corporate sector. Demands on corporate bandwidth are being transformed as a result.

The practical needs of a corporate are less glamorous but by no means less network-intensive. The IT Director no longer has the luxury of time or resources to embark on a grand IT plan, spending millions for a promised brighter future in two years. IT Directors are instead talking about embracing a "service aware" approach to developing IT applications, one that borrows much from the experimental evolution seen in the consumer Internet experience. The challenge for many is that this approach is invariably network-centric (it has to be), but requires a more iterative and agile approach to application development.
And the ongoing move towards collaboration and unified communications is a key factor in demands on corporate networks too. Unified communications - the embedding of tools such as IM, presence, conferencing and voice into one platform - is going to have as big an impact on business communications now as e-mail did in the 90s. Earlier this year we launched our own integrated communications platform, Interoute One. This service enables corporate customers to manage voice calls as easily and cost-effectively as e-mail, without the need for complex integration or upgrades to their existing telephony infrastructure.
And with its recent launch of Office Communications Server (OCS), even Microsoft is looking to enter the unified communications market. Yet despite its strengths, OCS is only as strong as the network that carries it. Without a network provider able to route calls, OCS cannot achieve its ambition.
So with the demand brought about by unified communications, combined with Web 2.0 and businesses running more and more applications over their networks, a clear picture emerges - there is an incredible increase in demand for bandwidth to ensure the highest quality end-user experience.
This demand can only be met by more bandwidth, yet for service providers, what is the best way to meet this demand?

Buy or build?
The demand for bandwidth will stretch many existing service provider infrastructures to breaking point, so what options are available? DSL has traditionally been the preferred technology for delivering broadband services in Europe. It uses existing copper access networks and is well entrenched across the continent, but it is struggling to cope with bandwidth demand now, and will struggle massively as demand grows yet further.

Consequently, service providers have to start looking at deploying fibre deeper into the network - even to the home or building - to meet future bandwidth requirements. Fibre is as future-proof a communications technology as it is possible to get, and several service providers have already committed to deploying fibre-to-the-node or fibre-to-the-home networks in the next three to five years. But is building new fibre networks a viable option?
It's a paradoxical situation. The main problem with fibre is that it is just so expensive to build from scratch; it's basically real estate. The industry has had well-documented problems with a number of carriers that built large fibre networks in the 90s, then watched as the telco market dipped and their businesses suffered massively. Now the telco market is booming again and demand for bandwidth is high, yet the best (and arguably only) way to meet that demand requires fibre. So for service providers looking to stay in the wholesale space, the only sensible route is fibre ownership.
This means that in the wholesale space, if you are without physical assets, you are shortly going to run out of capacity. Even if you have fibre, but only a leased pair, you will shortly be buying more. This leaves a small but battle-hardened minority of carriers with multiple fibre backbones that will be the dominant suppliers in the marketplace. But even owning fibre is not quite enough - you need an operation that understands how to deal with Europe's different laws and regions. For example, the Interoute network connects 85 cities in 22 countries across 54,000 cable kilometres of fibre. Up to 48 fibre pairs have been deployed throughout the network, which means it has the capacity to carry over a petabit per second (a billion megabits per second) of traffic. The backbone is complemented by 19 deep city fibre networks that interconnect with the diversity of access methods required to deliver 21st century telecoms.
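To see how a backbone like this reaches petabit scale, a back-of-envelope estimate helps. The sketch below is purely illustrative: the DWDM channel count, per-channel line rate and number of lit segments are assumed values typical of systems of this era, not Interoute's published figures; only the 48 fibre pairs come from the text above.

```python
# Rough, illustrative estimate of multi-pair fibre backbone capacity.
# Only FIBRE_PAIRS comes from the article; the rest are assumptions.
FIBRE_PAIRS      = 48    # fibre pairs deployed per route (from the article)
DWDM_CHANNELS    = 80    # assumed wavelengths lit per fibre pair
GBPS_PER_CHANNEL = 10    # assumed line rate per wavelength (10G DWDM)
ROUTE_SEGMENTS   = 30    # assumed independently lit segments network-wide

per_pair_gbps  = DWDM_CHANNELS * GBPS_PER_CHANNEL   # 800 Gbps per fibre pair
per_route_gbps = FIBRE_PAIRS * per_pair_gbps        # 38.4 Tbps per route
network_gbps   = ROUTE_SEGMENTS * per_route_gbps    # ~1.15 Pbps network-wide

print(f"Per fibre pair : {per_pair_gbps / 1e3:5.1f} Tbps")
print(f"Per route      : {per_route_gbps / 1e3:5.1f} Tbps")
print(f"Network-wide   : {network_gbps / 1e6:5.2f} Pbps")
```

Under these assumptions the aggregate comfortably exceeds a petabit per second, which is the order of magnitude quoted above.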

Fibre is vital to business communications in 2007 and beyond - it is the ultimate delivery mechanism - and the harsh reality is that a service provider without physical fibre in the ground will not have the raw material necessary to satisfy the huge demand for bandwidth. The die is now cast: fibre is the bandwidth raw material of choice for the Internet, but building fibre networks from scratch is now simply too expensive, so the only real option is to buy bandwidth from someone who has it. There is a tipping point with all technology, when it passes through a point where the user understands it and operates accordingly. Bandwidth and the Internet have hit that point, and fibre is the only viable way to meet the demand.

Matthew Finnie is Chief Technology Officer at Interoute

At twenty-six, Ethernet is something of a ‘Grand Old Man' of networking technologies.  But as Mark Bennett points out, Ethernet is still one of the most agile and reliable networking standards available, despite its relatively advanced years.  Like a good wine, it just gets better with age

In the early 1980s, technology was offering the world new ways to live and work. If you chose to, you could drive to the office in the 'revolutionary' Sinclair C5, unwind with a Betamax recording of ET, or even play a few games on the latest Amstrad. Fortunately, while these high-profile flash-in-the-pans were hogging the limelight, our predecessors in telecoms engineering were putting in place some more enduring technologies. The first Ethernet products hit the market in 1981, and over the past 26 years the standard has established itself as the de facto choice for LAN and MAN connectivity. Such longevity is rare in the technology space and bears witness to the versatility of the standard.

Ethernet has been such a huge success because it meets the criteria essential for the mass adoption of any product: it is inexpensive, flexible and simple, and it delivers an elegant solution to a potentially very complex problem, so it should be no surprise that it is now ubiquitous. Any technology, though, can only last if it can change to meet the demands of users. It is this Darwinian ability to continually evolve that marks out the true survivors of the technology space. Ethernet has proved that it can do this, and is now moving beyond its traditional spheres of LAN and MAN to provide a much more comprehensive approach to networking. It is becoming increasingly clear that, aged 26, Ethernet is going from strength to strength and emerging as the standard of choice for long-haul access technology.
The reasons for this can be traced to the changing needs of businesses, especially in specific sectors. Many of these organisations are demanding ever-increasing amounts of bandwidth in their networks, and Ethernet is emerging as the best means of providing it. The market includes a wide range of organisations in sectors as diverse as utilities, finance, the public sector and smaller business. The key markets here are companies that can be termed 'DIY': businesses which have their own in-house IT managers and want to retain management of their own IP-based networks. Such organisations are typical of utilities, media and finance companies. The public sector also fits the DIY mould, and we are seeing high levels of demand from local and central government, as well as from schools and higher education. Demand from the indirect markets of mobile, national and international carriers, as well as from IT services companies, is also growing fast.

So what, exactly, are these larger businesses and public sector bodies using Ethernet for? Ethernet has a number of properties that appeal to these organisations. Its speed and comparatively low cost make the technology ideal for inter-site connectivity, and we are seeing a great deal of demand for this. As LAN speeds increase - with 100Mbps now standard and Gigabit (1,000Mbps) increasingly common - it makes sense to ensure that the wider network is not a bottleneck, so 100Mbps and 1Gbps inter-site networks are in regular deployment. Such organisations are also increasingly looking to leverage the benefits of data, voice and applications convergence, so Ethernet is establishing itself as the ideal method of access to converged next-generation networks (such as MPLS-based core networks). Convergence brings with it the need to handle ever-increasing amounts of data traffic as more and more applications are placed on a single network - everything from rich-content e-business applications to services and IT centralisation applications. Ethernet has the capacity to handle vast amounts of data without putting strain on the network. So it seems that service convergence is breathing new life into Ethernet networking.

It's not just large enterprises that are driving demand for Ethernet connections. Although Ethernet has typically been associated with larger enterprises, demand from small and medium-sized businesses (SMBs) is also growing. SMBs are looking for increased bandwidth to deploy next-generation services, and Ethernet is an appealing option for them because of its simplicity and familiarity from its use in the LAN. Ethernet is enabling SMBs to roll out the advanced applications that help them compete with bigger companies, such as VoIP, distributed WANs, IP-based video conferencing and real-time collaboration.

We are seeing demand from SMBs, which have traditionally used low-bandwidth leased lines, grow particularly fast. These businesses will often have more than one site and be too large for DSL, especially given the limited upstream bandwidth ADSL offers when used for a VPN. They require permanent, dedicated, always-on bandwidth to support applications critical to their business. This sector is increasingly turning to Ethernet - for the security, reliability and quality of service of a private network, not currently available on DSL services - as a solution to its increasing bandwidth needs. Ethernet is attractive as it offers more bandwidth at a lower price per megabit, a critical consideration for SMBs. Solutions that can be scaled to deliver multiple services, such as VoIP, are particularly attractive as these deliver additional business benefits.
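The upstream bottleneck mentioned above is easy to quantify. The sketch below compares how long a hypothetical 20GB inter-site replication job would take over typical DSL upstream rates versus symmetric Ethernet; the link speeds and job size are assumed, idealised values with no protocol overhead:

```python
# Transfer time for a fixed upload over different upstream link speeds.
def transfer_hours(size_gb: float, link_mbps: float) -> float:
    """Hours to push size_gb of data over a link_mbps link (ideal, no overhead)."""
    bits = size_gb * 8e9              # gigabytes -> bits
    return bits / (link_mbps * 1e6) / 3600

NIGHTLY_REPLICATION_GB = 20           # assumed inter-site data volume

links = [("ADSL, 448 kbps upstream", 0.448),
         ("ADSL2+, 1 Mbps upstream", 1.0),
         ("10 Mbps Ethernet (symmetric)", 10.0),
         ("100 Mbps Ethernet (symmetric)", 100.0)]

for name, mbps in links:
    print(f"{name:32s} {transfer_hours(NIGHTLY_REPLICATION_GB, mbps):8.1f} h")
```

Roughly 99 hours over basic ADSL upstream against half an hour over 100 Mbps Ethernet: the same job that is impossible overnight on DSL is trivial on a symmetric link.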

The indirect market for Ethernet is also seeing considerable growth. This market is made up of service providers investing in the technology in order to deliver advanced services to their customers or to extend their networks. National and international carriers, mobile operators, business ISPs and IT services companies are all using Ethernet to enhance the services they deliver, both for high-speed Internet access and to provide hosting and connectivity to their customers.

Mobile operators in particular are trying to leverage Ethernet to reduce the operating costs of their access networks, as well as to provide the higher levels of bandwidth required as mobile data begins to be adopted more widely. Mobile operators are trying to increase ARPU to recoup their investment in 3G by offering data-heavy applications such as mobile Internet and TV. The high-speed 3G and WiMAX networks supporting these services are rapidly increasing backhaul bandwidth requirements. As traditional backhaul technologies and architectures struggle to support growing demands, operators are turning to packet transport technologies, such as Ethernet, as a cost-effective solution. Ethernet is therefore increasingly supporting new, revenue-generating services for mobile operators without allowing backhaul costs to spiral, helping to maintain operators' profitability.
International carriers, on the other hand, are moving to Ethernet to support the increasing demand for bandwidth from their large multinational customers. Traditionally, leased lines would have been used to cater for such demand, but Ethernet has proven it can offer a much more scalable alternative at a lower price per megabit. International carriers can use Ethernet tails to provide their customers with a range of applications and services, including MPLS IP VPN, Ethernet connectivity, Internet and VoIP. Because Ethernet is a layer-two protocol, it allows international carriers to retain full control of layer-three IP routing and their own IP class-of-service SLAs.
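The layer-two/layer-three division of labour described above can be pictured at the frame level: an Ethernet tail only needs to read the 14-byte Ethernet header, while everything after it - the customer's IP packet - passes through untouched. A minimal sketch (the frame bytes are invented for illustration):

```python
# Minimal sketch of the layer-2 / layer-3 split: the carrier's Ethernet tail
# reads only the 14-byte Ethernet header; the IP packet behind it is opaque.
import struct

def split_frame(frame: bytes):
    """Separate the Ethernet (L2) header from the IP (L3+) payload."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    l2_header = {"dst_mac": dst.hex(":"), "src_mac": src.hex(":"),
                 "ethertype": hex(ethertype)}
    return l2_header, frame[14:]        # payload stays the customer's concern

# Invented example frame: broadcast destination, made-up source MAC,
# EtherType 0x0800 (IPv4), then the first bytes of an IPv4 header.
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800" "4500002a")
l2, ip_packet = split_frame(frame)
print("Carrier switches on (L2):", l2)
print("Customer's packet (opaque to carrier):", ip_packet.hex())
```

Everything the carrier switches on sits in those first 14 bytes; the routing decisions encoded in the rest stay with the customer, which is exactly why carriers can hand over layer three while still selling the connectivity.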

National operators are driving demand for Ethernet along similar lines.  In the UK, Ethernet backhaul provided by altnets is proving particularly popular as a cost saving option, as their wholesale offering can be much less expensive than the incumbent's.

A number of organisations want to centralise the hosting and management of servers, data and storage systems, and other IT assets in one location.  To do this they need dedicated, very high bandwidth, high quality connectivity to that centralised facility. This centralisation of IT assets is designed to reduce their operating and capital costs, as well as enable delivery of advanced new services. 

Historically, however, the business case for centralising IT assets may not have been viable due to the high cost of the bandwidth required to connect remote sites to central data centres. Ethernet offers more bandwidth at a lower price per megabit than the technologies traditionally deployed, helping to make the business case for centralisation. Ethernet, together with MPLS IP VPNs, is ideal for connecting remote sites to central locations, with Gigabit Ethernet services providing the very high bandwidth connections needed between data centres. In addition, the cost savings can be diverted into other applications that fit with Ethernet, enabling these organisations to layer multiple services and applications onto the same network.

With Ethernet, bandwidth can be increased or decreased at very short notice.  This means that companies need only pay for the bandwidth they actually use, increasing capacity easily and quickly as and when required, either to handle a short-term spike in demand or a longer term increase in traffic. This flexibility is very attractive to organisations of all kinds, and the scalable nature of Ethernet is one of its key selling points.
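A toy calculation shows why this flexibility matters commercially. The monthly demand profile and the linear tariff below are invented for illustration; real Ethernet pricing is rarely this simple:

```python
# Pay-for-what-you-use versus provisioning for the annual peak.
# Demand figures (Mbps per month) and the tariff are invented examples.
monthly_demand_mbps = [20, 20, 25, 30, 30, 80, 30, 25, 25, 30, 40, 60]  # June spike
PRICE_PER_MBPS_MONTH = 15.0   # hypothetical flat tariff per Mbps per month

fixed_cost    = max(monthly_demand_mbps) * PRICE_PER_MBPS_MONTH * 12
flexible_cost = sum(d * PRICE_PER_MBPS_MONTH for d in monthly_demand_mbps)

print(f"Provision for peak all year : {fixed_cost:10,.0f}")
print(f"Scale month by month        : {flexible_cost:10,.0f}")
print(f"Saving                      : {1 - flexible_cost / fixed_cost:10.0%}")
```

Under these made-up numbers, tracking demand month by month costs roughly 57 per cent less than provisioning for the annual peak all year round.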

Ethernet, therefore, has proved itself in the face of a fast-changing telecoms market. Change usually forces technology to sink or swim, and the recent move towards convergence and bandwidth-hungry applications has shown how resilient Ethernet is. Indeed, the future for Ethernet is looking good. Carrier Ethernet is emerging to offer operators a flavour of Ethernet with the same characteristics as leased lines, while the IEEE has launched a new set of OAM standards for Ethernet to better manage and maintain the technology.

At 26, therefore, the Ethernet story is far from over. Ethernet looks set to be the dominant access standard for the foreseeable future, driven by the demands of business and ideally placed to meet them. The versatility and simplicity of the standard that led to its dominance in the LAN and MAN is extending to the WAN, delivering access to the next generation of applications and services. It goes to show that in the world of telecoms you really can teach an old dog new tricks - tricks that are increasingly appealing to business.

Mark Bennett is Head of Data Portfolio at THUS plc

by Dominic Smith, marketing director, Cerillion Technologies


In line with the current emphasis on conserving resources, reducing wastage and cutting carbon emissions, telecommunications services are at the forefront of a revolution in green thinking, which is affecting every business sector today. And telecoms operators themselves are under ever-greater pressure to adopt environmentally friendly strategies.

Telcos have long been pioneers in helping businesses from other industries pursue a green agenda. The deployment of robust global wide area networks and connectivity for video-conferencing applications, for example, have both played an important role in reducing the need for business travel and unnecessary face-to-face meetings.

However, operators increasingly need to see the provision of tools to help other businesses become greener as just one element of their overall environmental strategy. Today, they need to be focused on making a more direct contribution towards the future well-being of the planet.

How Electronic Billing Helps Conserve Resources 

Telecoms operators have already made a start in the direction of greater environmental responsibility, with several key initiatives well advanced. One of the most significant is the work they are doing to promote electronic billing. In the past, potential cost savings were the major incentive for operators and end customers alike.

And electronic billing does offer clear business benefits, being comparatively inexpensive for both the operator and the end customer. In contrast, hard-copy itemised bills typically entail significant print costs and wastage of resources.

Yet, despite operators actively pushing the benefits of electronic billing, customer take-up has generally been slow. One likely reason is that the environmental benefits of the service have, until recently, not been highlighted by operators.

BT recently began to ‘push' a green angle to electronic billing by encouraging its major customers to convert to its OneBill service in preference to hard-copy paper invoices. Further underlining its green credentials, BT's efforts to convert customers to paper-free billing have resulted in more than 500,000 trees being planted in the UK so far.

The landmark has been achieved thanks to BT's partnership with the Woodland Trust, which guarantees that every time a customer signs up for paper-free billing, BT pays for a native broadleaf sapling to be planted in a UK woodland creation site.

BT's approach is just one sign that, in the future, telcos are likely to give the issue of 'green billing' a higher priority. Indeed, most operators today are actively looking at new ways of selling the service. With the public better informed about environmental issues than ever, the green approach is likely to have great resonance with customers in the future.

Going forward, software providers will increasingly join forces with their telco customers to promote the reductions in environmental waste and operational cost that can potentially be achieved by pursuing this methodology. Over time, greater numbers of businesses are likely to elect to become part of what has been described as ‘the paperless electronic billing and payment manifesto.' 

Operators often see electronic billing as just one element of a larger integrated self-service strategy. The concept of self-care for customers is in itself an environmentally friendly one.

Customers can be encouraged to carry out key tasks easily themselves online, such as updating personal details or ordering new services, without drawing too heavily on the resources of the operator, either in terms of systems or of people. Self-care typically results in reduced customer dependency on large call centres and, by association, radically reduces the amount of physical equipment required to service customers.

Pre-integration and the Managed Service Model

Operators can also make a more direct contribution towards protecting the environment through the systems model they choose to implement. Opting for a pre-integrated set of components when updating billing and CRM systems or IT systems architectures is a good initial step.

Adopting such an approach enables operators to significantly reduce the time projects take, together with the amount of travel undertaken and resource usage required, especially when compared with the traditional best-of-breed approach, which can involve large integration teams visiting the operator's site every day for months on end.

Operators can take the benefits achieved from pre-integration one step further by choosing to follow a managed service model, thereby reducing the expense of integration teams flying to different locations to install and maintain new systems. Instead, existing infrastructure at existing managed service centres and existing hardware and software can be used - a more environmentally sustainable approach.

The benefits achieved will tend to accrue over time with the cumulative efficiencies of re-use. Once an operator has commissioned a service partner to put in place the relevant people, hardware and infrastructure in one of these centres, there is no need to re-invest every time the centre is used. And of course the wide area network connectivity provided by operators enables them to achieve all the benefits associated with hosting services in distant locations and managing them remotely.

In the complete managed and hosted model, this is taken one stage further still, with the management of the system typically carried out by a third party from a remote location. The managed service approach does of course reduce, if not completely eliminate, the need for teams of consultants to travel to and from site.

Looking into the Green Future

Those telecoms operators, like BT, that have already deployed electronic billing and online self-service capabilities for their customers are likely to promote these areas much more in the future, and those that haven't yet deployed this functionality are likely to begin doing so very soon. They should also look at how they provide those services, and consider the benefits of both the pre-integrated solution and the managed service approach.

Renewable energy providers are often able to sell their services at a premium because they are selling to the 'green-aware' consumer. In the age of telecoms commoditisation, one way operators can justify maintaining their prices is through investment in green initiatives and having green systems in place. Maybe, in the future, they will even have a legitimate argument for charging a premium when selling electronic billing and self-care to a green audience?

Another possibility is the emergence of niche operators focused entirely on targeting the green consumer. In recent years, the market has seen the arrival of virtual operators who target specific industry segments, encompassing everything from students to sports enthusiasts and from children to coffee-shop users.

Maybe we are not far away from seeing the first telecoms reseller that sets up, brands and positions its products purely in the green segment and exclusively targets the green consumer? With the growing business focus on environmental protection, this is likely to become a reality sooner rather than later.

How accurate is the position that WiMAX and cellular technologies stand in opposite corners in the development of wide area wireless standards? Robert Syputa takes a look

The debate surrounding the new entrant into wide area wireless standards development has tended to be framed as WiMAX versus cellular technologies and market development. Industry-shaping debates should start with a clear understanding of the premise being argued. Judging from recent white papers, panel discussions, articles and interviews, WiMAX is being opposed as an anti-cellular effort, rather than treated as an alternative development that fits into cellular mobile and the broader context of fixed-mobile convergence. While WiMAX appeals to alternative service providers because it can be used in spectrum designated specifically for wireless broadband, both it and the cellular industry at large have changed significantly over the past few years, rendering this a Swiss cheese argument:

  • WiMAX and LTE are developing according to goals for evolution to the 4G multi-service wireless broadband platform that addresses both mobility and high bandwidth applications.
  • While WiMAX has broadened to become more mobile and capable of being used for media services, 3G cellular has become increasingly broadband, resulting in practical convergence between these fields of development. What's more, both are driven to use the same core sets of technologies - authentication and handoff, network management, dissimilar-network roaming - which align goals for network operation and user experience.
  • Multi-mode SoC and device designs are increasingly capable of delivering a user experience that disregards the differences between WiMAX and cellular. If the user can make use of services that transit from WiMAX to cellular networks, the argument in favour of control of the huge market share currently held by cellular becomes moot.
  • The argument that the huge investment in development of the cellular industry sets it apart from WiMAX also breaks down in light of the fact that many of cellular's most prominent contributors are also contributing their technology, design, production and marketing capabilities to WiMAX. Operators may convert or cross-sell their cellular customers to WiMAX to gain additional revenues.
  • Mainstream regulatory organisations, including ITU, are setting the requirements for next generation wireless systems to which both WiMAX and 3GPP/3GPP2 aspire.
  • Next generation wireless will be based on OFDMA, which imposes a similar discontinuity of air interfaces from 3G on both LTE and WiMAX.

There remain practical differences in technical implementation, market momentum, spectrum regulation and corporate support between WiMAX and LTE, but the gap continues to shrink, to the point where it looks increasingly like the gap between one generation of mainstream cellular system and the next. As with every cellular system, operator decisions regarding adoption of WiMAX depend on detailed business case analysis that takes into account all known factors for successful deployment and business development.
We contend that WiMAX is another variant of cellular, facing the same hurdles for adoption as any major new system development, such as LTE, that is cast within the traditional cellular standards development groups. While WiMAX and 3GPP/3GPP2 started out from different sets of objectives - fixed-nomadic data versus mobile voice - the technologies and market demand have evolved over the past seven years to become very similar.
What's more, the evolutionary path directs both fields of development toward the same basic goals and sets of technologies, making arguments that they are distinct impractical:

  • WiMAX 802.16e-2005 looks very likely to be accepted as a member of the IMT-2000 cellular family.
  • WiMAX 802.16m/j will be proposed for IMT-Advanced.
  • LTE, the next generation of systems from the 3G camp, will similarly use OFDMA and be proposed for IMT-Advanced.
  • Major goals for IMT-Advanced include an evolutionary framework upon which multiple classes of services and scale of operation can be developed. The goals of ITU for 4G look very similar to what WiMAX has become over the past three years as major telecommunications companies and operators have influenced development.

Ericsson, the world's largest supplier of cellular infrastructure, has dismissed the importance of WiMAX. Earlier this year, Ericsson announced that it was pulling the plug on WiMAX development and would devote its B3G efforts to LTE. Ericsson has resold WiMAX equipment from Airspan, but has never committed significant effort to developing WiMAX internally. Strategically, it has never made sense for the company to push WiMAX, or any alternative that might dilute its own market position.
Ericsson executive vice president Bert Nordberg contends in the June 18th issue of the Globes online magazine: "We have nothing against WiMAX, but I have to say that it has no business model. This, at least, is Ericsson's conclusion about the matter. Therefore we're not investing in this area at all. What is supposed to work on WiMAX already works on cellular 3G."
Counterpoint: One thing that is unarguable is that the cellular industry has evolved and managed to survive adoption of new wireless interfaces that were not directly backward compatible.  This has been driven by the need to deliver better levels of voice and higher bandwidth data service.  Operators would prefer to see systems evolve on the same technology platform long enough to enjoy profits, but have been driven to adopt new cellular systems that provide a commercial advantage despite the need to commit large capital expenditure to displace or deploy next generation systems into new spectrum.  The business case for 3G deployment is clearly demonstrated, but as the need for bandwidth continues to grow, so the case for a shift to B3G (beyond 3G) systems based on OFDMA technology is progressively strengthened.
"They talk about WiMAX having 30 million customers in 2010," says Nordberg. "But by that time, cellular broadband will have 500 million subscribers. These are completely different orders of size. If we have learned anything from the history of technology adoption in the telecommunications market, it is that standardisation has huge power, and cellular is the standard."
Two points. First, the wireless industry has broadened and matured to focus on multiple classes of service. The vision for IMT-Advanced and 4G is of highly scalable, multi-service evolutionary platforms. While this development is likely to be dominated in numbers by mobile applications, the trend is toward more diverse and specialised services, and the majority of future profits will likely come from extended services, not from basic voice or data connections. Second, Ethernet is the predominant standard for wired data communications, and its momentum extends more directly to WiMAX. Open use of Internet communications and applications is part of the converged landscape of fixed-mobile technology and market convergence. It is myopic to treat cellular mobile market momentum as a sole defensible position, particularly since it is translatable, via multi-mode devices, to new service networks.

Wireless communications has been defined within various standards development groups and sets of companies, each with its own technology and commercial agendas. WiMAX is definitely a cellular technology, for the most part indistinct from established cellular by virtue of the increasingly overlapping road maps for development. If a cellular operator adopts WiMAX that is multi-mode compatible with its existing cellular network, its customers hardly need to know. WiMAX also appeals to alternative service providers and to various classes of service that are distinct from mainstream mobile cellular; these can often take advantage of the cost dynamics achieved in mobile markets. Standardisation does have huge power in helping to drive down costs and drive up market adoption. Convergence between IT/networking, the Internet, radio, music and TV media, new interactive peer-to-peer viral video, and mobile and fixed communications draws multiple participants together to influence overall product and market development. While mobile cellular dominates in terms of volume, it does not dominate in terms of applications, content or dollars, nor in openness of development and user participation. The WiMAX standard arrives at a time when opening up many classes of service to the benefits of standardisation is practical.
Several more arguments can be made for a shift to B3G platforms that take better advantage of evolving trends in smart antennas and granularly deployed smart wireless broadband networks. The cellular wireless approach can be criticised in its entirety as too constrained to pursue the coming generation of wireless development: without a major re-write that will make LTE more similar to WiMAX than to 3G, it is incapable of being granularly organised and deployed into open IP use scenarios. The ITU's goals for IMT-Advanced appear quite bold: a multi-service platform capable of providing per-user bandwidths of 1 Gbps fixed-nomadic and 100 Mbps mobile. Asking Ericsson how it plans to achieve 4G performance in LTE or beyond elicits a response that is very close to the path of development WiMAX is already well on the way to achieving. That flips the debate about continuity of technology development, placing LTE as the follower rather than the leader of the dominant emerging mandates. And the inevitable reorganisation of wireless business models along lines of open rather than prescribed content and applications shifts the debate to a matter of when, not if, new operator revenue models will emerge.
The gains in performance needed to deliver 4G will not come from advances in either CDMA or core OFDM interface technologies, but from how networks are organised and deployed to make multiple use of available spectrum and to source content and applications resources within the distributed network. Delivering the performance gains has more to do with building smart networks that incorporate wireless than with wireless itself. 4G is a wireless broadband network, with everything that implies. OFDMA is the core link technology for WiMAX and LTE 4G, but the performance gains must be built on top of it, through an evolution that has more to do with how networks are organised. The impact of the evolutionary shift to take advantage of the 'spatial' and architectural domain of wireless development will be to greatly increase bandwidth density while reducing costs. Suffice it to say that the shift is to a new evolutionary platform, with all that this implies: an additional dimension of development that will deliver a 3x-10x improvement in total network throughput over cellular wireless. What may scare up protests against WiMAX the most is the recognition that it is rapidly evolving to deliver on a frontier of new developments that have only just started to unfold.
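A crude throughput model illustrates the point that the 'spatial' domain, rather than the air interface alone, is where the multiples come from. All figures below are invented, indicative assumptions, not measurements of any real network:

```python
# Aggregate throughput ~ spectral efficiency x spectrum x spatial reuse.
# Every number here is an illustrative assumption.
def network_throughput_gbps(spectral_eff_bps_hz, bandwidth_mhz,
                            sites, sectors_per_site):
    per_sector_mbps = spectral_eff_bps_hz * bandwidth_mhz  # MHz x bps/Hz = Mbps
    return sites * sectors_per_site * per_sector_mbps / 1e3

# Baseline macro network: 1.5 bps/Hz, 10 MHz, 100 sites, 3 sectors each.
baseline = network_throughput_gbps(1.5, 10, sites=100, sectors_per_site=3)
# Smarter antennas (2.0 bps/Hz) plus 3x site density from granular deployment.
denser   = network_throughput_gbps(2.0, 10, sites=300, sectors_per_site=3)

print(f"Baseline macro network : {baseline:7.1f} Gbps")
print(f"Denser smart network   : {denser:7.1f} Gbps ({denser / baseline:.1f}x)")
```

Here a modest improvement in per-link spectral efficiency combined with a threefold increase in site density yields a 4x gain in aggregate throughput - squarely inside the 3x-10x range suggested above, and achieved mostly by network organisation rather than by the radio link.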
Is the debate that WiMAX is a development outside the mainstream of cellular development, or is it that the entire field of wireless is converging, bringing into play additional industry participants and markets? Put directly: who owns wireless broadband? Is it a select group of mobile companies, or a broadened field of development that increasingly includes networking, IT and media interests? We think the momentum is shifting to allow a new contender: both WiMAX and LTE will battle in the ring for the 4G crown.
This may appear to add to the problems of harmonisation, but systems are increasingly harmonised at higher levels of functionality and converged via multi-mode at the user device level. Spectrum is also increasingly harmonised through device integration. An enlightening example of this trend is the incorporation of Qualcomm's FLO/MediaFLO into 3G devices in Europe and the United States: the dissimilar technologies are converged at the chipset and device level, with integration into higher levels. The decision to use MediaFLO thus becomes the operator's commercial decision rather than a standards debate. Likewise, we expect decisions regarding WiMAX to resolve on practical concerns, and discussions about what is or is not cellular to become meaningless.

Robert Syputa is Senior Analyst, Maravedis

In the first of a regular column for European Communications, Benoit Reillier looks at the role played by regulators and politicians in the rapidly changing telecoms arena

Regulation, public policy, competition law… it would be tempting to discount these notions in the belief that the communications sector has more to do with technological change, innovation and marketing than with politics and regulators.
All market participants, however, are playing a global game whose rules are being decided by Governments, regulators and competition authorities. In fact, a closer look suggests that the regulatory and competition framework, within which communications firms operate, may be more important than how well they ‘play the game’.
This is especially true in the telecommunications sector where incumbent operators were often deemed too powerful and subjected to heavy sector specific regulations. These rules were set by Governments and regulators in order to ensure the development of a competitive market while protecting consumers’ interests.
Enlightened strategy departments in many communications firms have been aware of the importance of regulatory affairs for quite some time now. As a result many heads of strategy have now taken on these added responsibilities (historically often left to legal or communications departments).
Indeed, shaping the debate and contributing to the development of a fairer and more efficient regulatory model is becoming paramount to operators' futures. In a market where time horizons for investment can be decades, regulatory visibility is critical. This is especially the case when existing infrastructures, such as the fixed copper networks found in most countries, are showing signs of obsolescence. When legacy infrastructures become a bottleneck, new multi-billion dollar investments are required to upgrade them, as illustrated by the early roll-out plans for new generation fibre networks in several countries.
The EU Commission has played a critical role in shaping the regulatory environment over the past decade. The Commission gives guidelines and tools (a framework) to National Regulatory Authorities (so called NRAs) as well as homework (market reviews to be carried out) and deadlines. It also reviews progress of its NRAs every year (implementation reports) to see if they have been good students or not. Those who have worked well are praised while other countries are named and shamed or encouraged to do better. Very much like a teacher, the Commission is also asking for new powers to be able to better discipline those who do not follow the rules.
Viviane Reding, the vocal EU Commissioner for the Information Society, is currently reviewing the telecoms regulatory framework that will apply to all member states over the next few years. If approved by the EU Parliament, the NRAs in each country will have to implement the new policies proposed. Some of these, like the addition of mandated functional separation to the toolbox of regulators, could result in operators having to split their operations so that the retail side of the business (that sells services) would become separate from the network infrastructure. Mandating such drastic measures would, of course, have far reaching consequences for the development of the market.
While these debates may seem remote, they will have a profound impact on the way in which the market develops; on the type of competition that emerges as well as on the levels of investments in infrastructure and services. Given the strong relationship that has been established by many economists between investment in communications infrastructures and economic growth, the stakes are high. Indeed it is not just about telecommunications but also about the overall productivity gains enabled by these new services.
So equipment manufacturers, consumers and operators alike all have a lot to gain from contributing to the regulatory process and shaping the debate. Best regulatory practice involves a period of consultation with stakeholders, and this opportunity to review the arguments and contribute to the debate shouldn’t be missed.
The challenges behind policies such as mandated structural separation, increased pan-European regulation and new generation networks are some of the critical topics to be addressed over the next few years… and as many opportunities to shape the debate.

Benoit Reillier is a London based Director of the telecoms practice of global economics advisory firm LECG. He can be reached at breillier@lecg.com.
The views expressed in this column are his own.

European Communications takes a look at the important issues up for discussion at the Broadband World Forum Europe


The success of broadband penetration into the access network is ushering in a new era of content for residential consumers and enterprise end-users alike. Just what we mean by "content" is also undergoing transformation, as consumer devices and ubiquitous broadband service enable new kinds of entertainment beyond traditional TV - ones which include user-generated content, information, and video applications anytime, anywhere. As carriers around the world continue with wireline and wireless broadband deployment, they must begin to turn equal attention to how broadband usage of key applications - such as IPTV - will shape the future of the industry.
Many cutting-edge developments are taking place in the European broadband marketplace, such as advancements in IPTV. Many of the benefits of these advancements as well as accompanying issues and challenges will be discussed by top industry leaders and experts at the Broadband World Forum Europe 2007 in Berlin this October, hosted by Deutsche Telekom and organized by the International Engineering Consortium (IEC).

IPTV: On a global roll
Indeed, IPTV will be among the foremost topics on delegates' minds. Recent research from the Multimedia Research Group (MRG) indicates that there are approximately 15 million IPTV households worldwide, and that 576 service providers are presently active in the IPTV sector. According to Helmut Leopold, Chairman of the Broadband Services Forum (BSF), "Television on the basis of Internet protocol (IPTV) is on the threshold to the mass market."
According to the BSF, global IPTV growth will be pushed considerably by the Asia Pacific region, with its emerging markets of China and India, and by Australia, whose IPTV offerings are entering the commercial phase. Worldwide growth will also be propelled by North America, where AT&T and Verizon are getting ready for countrywide IPTV rollout. For 2010, the MRG forecasts approximately 50 million IPTV households, 21.3 million of them in Europe alone.
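Those two data points imply a steep curve. As a quick, illustrative check on the MRG figures quoted above (this is simple arithmetic, not MRG's own methodology):

```python
# Implied compound annual growth rate (CAGR) of IPTV households, using the
# MRG figures quoted above: ~15m worldwide in 2007 vs ~50m forecast for 2010.
households_2007 = 15e6
households_2010 = 50e6
years = 3

cagr = (households_2010 / households_2007) ** (1 / years) - 1
print(f"Implied worldwide IPTV CAGR 2007-2010: {cagr:.0%} per year")   # ~49%

# Europe's forecast share of the 2010 total.
print(f"European share of 2010 forecast: {21.3e6 / households_2010:.0%}")  # ~43%
```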
"IPTV is red hot," says John Janowiak, President of the International Engineering Consortium (IEC).  "This is the kind of application we're going to look closely at in Berlin." 

Innovative applications
Several service providers have already demonstrated the flexibility, individuality, and diversity of IPTV applications - and many of these will be presenting and participating in the World Forum. Such deployments have built upon the end-user addressability enabled by IPTV, as well as advances in fixed/mobile convergence (FMC).
One such case study involves individualized content for young children who are hospitalized for long periods of time. Telekom Austria met the needs of these kids for individualized programming by using RFID chips implanted within stuffed animals, each of which transmits the child's age, language, background, illness, and treatment program to a set-top box.  This then yields content and applications appropriate for the individual patient.
 "This kind of innovation is at the heart of the emerging broadband world," said Janowiak, "and it's the kind of forward thinking that will be characteristic of the World Forum."
Indeed, as innovative applications and platforms for IPTV proliferate, attention is being paid to the future of IP-based information and entertainment services across multiple consumer devices. The three main devices at present - and into the foreseeable future - are the TV, the PC, and the mobile handset. Service providers looking to remain competitive and profitable will need to understand how to deliver content across these platforms in a coordinated, effective, and controlled manner - and a profitable one at that.
 "The world of broadband is converging toward an anytime, anywhere model," says Janowiak. "The IEC seeks to bring together industry players to squarely face and analyze these sorts of critical issues. In this sense, the Broadband World Forum Europe will play a central role in moving toward the future."


European Communications previews the ECOC event in Berlin

This year's ECOC conference and exhibition is set to be the biggest since the height of the telecoms boom, with 360 exhibiting companies filling the show floor in Berlin's Internationales Congress Centrum - Europe's largest conference centre.
One of the highlights on the exhibition floor will be the ECOC Market Focus Seminars, at which Next Generation Networks and FTTx will be the dominant topics.
Head of Network Transformation Research for BT Group, Don Clarke, will highlight advances in optical network technology and the delivery of high-bandwidth services. Also presenting during the free Market Focus Seminars, which will cover Next Generation Networks and FTTx and take place on the exhibition floor on Tuesday 17th and Wednesday 18th, will be Hans-Martin Foisel of Deutsche Telekom and the Optical Internetworking Forum, Rodolfo Di Muro of Ericsson, Robert Keys of Bookham, Jy Bhardwaj of JDSU, and leading optical networking analyst, Heavy Reading's Graham Finnie.
"As interest in fibre deployments accelerates around the world it is vital that network operators share their research and experience with the wider industry - ECOC is an excellent place to do that," says Don Clarke of BT Group.
Also taking place on the exhibition floor will be the conclusion of the Optical Internetworking Forum's (OIF) global interoperability demonstration - On-Demand Ethernet Services, a showcase of dynamic Ethernet services over multiple control-plane-enabled intelligent optical core networks. The demonstration will feature seven of the world's leading telecom operators, including AT&T, China Telecom and Deutsche Telekom, and eight leading vendors, including Alcatel-Lucent, Ericsson and Huawei Technologies.
"With all the major industry players gathered in Berlin, ECOC is the ideal location to showcase the results of this high profile, worldwide collaborative demonstration," said Hans-Martin Foisel of Deutsche Telekom and OIF Carrier Working Group chair and vice president.  "At the show, we will be able to demonstrate interoperability to all the key players from all levels of the communications industry."
Graham Finnie, Chief Analyst at Heavy Reading, commented: "Optical technology is once again moving centre stage in the telecommunications industry, largely because of the relentless rise in demand for bandwidth.  As high definition TV, new games and DVD players, and a plethora of bandwidth-hungry Internet applications spread, telcos are scrambling to replace their copper access networks with fibre, as well as increasing the bandwidth in aggregation and transport networks.
"Demand for optical technology can only increase over the next few years," concluded Finnie.
The 33rd annual ECOC will feature 360 exhibitors and a comprehensive speaker line-up, including some of the world's leading technical developers, both commercial and academic, addressing key industry topics. Confirmed speakers at the conference include: Gregory Raybon of Alcatel-Lucent on '100 Gbit/s: ETDM generation and long haul transmission'; Biswanath Mukherjee of the Department of Computer Science, University of California Davis, USA, on 'Optical Networks: The Road Ahead'; and Russell Davey of BT on 'Long-reach Access and Future Broadband Network Economics'.
Also on the exhibition floor, visitors will be able to see and take part in a number of new, interactive features: the FTTx Resource Centre, delivered by The Light Brigade, will be a focal point for all things FTTx; a live demonstration area will showcase the latest optical communications products; and a new, free training area will host practical courses from the CTTS in fusion splicing and fibre preparation tools.
The ECOC conference begins with workshops at 09:00 on Sunday 16th and concludes at 16:00 on Thursday 20th September.  The exhibition opens at 09:30 on Monday 17th and closes at 16:00 on Wednesday 19th.

The strategic partnership of Iskratel and Telekom Slovenije has become a good example of collaboration between a technologically leading network equipment vendor and a progressive telco operator. Uros Jenko explains how both sides have participated in making the Slovenian telecommunication infrastructure capable of delivering the most demanding telecommunication and multimedia services

The synergetic effect of cooperation between Iskratel and Telekom Slovenije has placed Slovenia among the European countries with the most advanced telecommunication networks. The benefits of collaboration have already spilled well beyond the communications area: nearly ten years of intensive nationwide broadband deployment have given Slovenian residential and business subscribers the ability to integrate themselves intensively into the elite sphere of the global information, economy and knowledge society. This brief overview presents the most important past, present and future stages of broadband access expansion throughout Slovenia as carried out by Iskratel and Telekom Slovenije. The recently started nationwide FTTH deployment project is the grand finale of long and intensive work on bringing a broadband connection to every Slovenian home.
The end of the nineties was a time of anticipation for DSL connections. Dial-up web access was becoming inadequate for a significant share of subscribers, and the broadband era had started to gain momentum. Early adopters had already begun their first pilot projects and field deployments. Telekom Slovenije was, together with Iskratel, among the first operators in Europe to offer ADSL connections, based on Iskratel DSLAMs, at nationwide scale. The first period of broadband access penetration throughout Slovenia culminated with the introduction of IPTV channel distribution in 2003. Iskratel enabled Telekom Slovenije to offer IPTV using ADSL access technology at the very beginning of European commercial IPTV-over-DSL deployments: right after Italy's Fastweb, Telekom Slovenije was the second European provider using IPTV over DSL.
As IPTV began gaining more and more popularity, it became obvious that expanding the ATM-based access infrastructure would be too costly.
Telekom Slovenije didn't wait long to begin the migration from ATM-based access systems to Ethernet-based nodes. By that time, Iskratel already had the answer for the changed access technology foundation: the company presented its first completely IP-based access node, the SI2000 IP DSLAM, in the autumn of 2004.
The Iskratel SI2000 IP BAN - Broadband Access Node (IP DSLAM) - was a major technological and sales hit at the time. In contrast to the majority of world-class vendors, who were still offering DSLAMs on bandwidth-limited ATM platforms with Ethernet uplink ports in place of ATM, Iskratel opted for a completely Ethernet-based open platform. The SI2000 IP BAN served Iskratel as an entry ticket to the elite group of vendors capable of offering IP-based access systems ready to realise the concept of triple-play service delivery.
The migration from ATM to IP access systems proved to be the optimal solution for both sides. Iskratel gained the foundation for a successful universal product platform that is now used for the SI3000 MSAN Multi-Service Access Node, a truly universal network access element providing FTTH, VDSL2, WiMAX, Ethernet, POTS and ADSL2+ subscriber interfaces. The SI3000 MSAN has become a field-proven triple-play network access and aggregation product with modularity and feature richness unmatched in the industry. Telekom Slovenije simultaneously saved a significant amount on its broadband aggregation network and built a solid base for the network enhancement steps that followed.
Fast Europe-wide deployment of DSL access equipment and the growth of the broadband equipment market resulted in greater vendor involvement and, consequently, a harshly competitive environment in the Slovenian market. In 2005, Iskratel outcompeted other vendors by offering a complete end-to-end access solution for the service operator that was technologically superior to the competition. It was a clear decision for Telekom Slovenije to choose a provider with a stable and solid system that included adequate CO and CPE equipment.
At the same time, Iskratel managed to include a POTS narrowband subscriber interface option on the same platform, and the SI3000 MSAN became a fully mature universal access product. The global direction in the field of network access was clear: IP/Ethernet-based universal access platforms with as wide a range of user interfaces as possible, available on the same hardware platform.
As an active member of the DSL Forum, the international association of DSL equipment vendors, Iskratel strove to develop its network elements in accordance with the forum's TR-101 recommendations. A market- and technology-leading position was a logical consequence of the company's successful efforts.
Quite soon, DSL access technology came to be regarded merely as a "step between" - an interim technology. ADSL2+, with transfer speeds of up to 24 Mbps on good local loops, still suffices for the majority of subscribers at present. Its limitations show more clearly in rural or semi-rural areas with longer subscriber loops and lines of sometimes poor quality, and performance also degrades as the number of subscribers grows. In such cases, speeds mostly top out at around 10 Mbps.
The release of the Iskratel VDSL2 blade for the SI3000 MSAN opened an opportunity to enhance the capabilities of Telekom Slovenije's access network. In autumn 2006, however, the operator decided that VDSL2, with its limited reach and serious sensitivity to line quality, would not be deployed as its mainstream broadband subscriber interface. Instead of shortening VDSL subscriber loops with multiple remote outdoor units, it took the strategic decision to begin mass optical access deployment in late 2007 and 2008.
Development of VDSL2 and FTTH CPE equipment proceeded in step with the progress made on access node subscriber blades. Iskratel provided all the L2 and L3 functionality necessary for proper network performance, and preserved the service delivery model while allowing a seamless, evolutionary path for equipment upgrades. Telekom Slovenije could thus expand its access network capacity in a gradual, future-proof manner on the foundation laid with its Ethernet aggregation network.
FTTH was accepted as the ultimate solution to the end-user access dilemma. The project, named F2, is Europe's first nationwide commercial FTTH deployment led by an incumbent service provider. Its goal is to provide optical access to more than 100,000 subscribers by the end of 2008, and to nearly 70 per cent of Slovenian households by 2015.
The access network is built from network elements of the Iskratel SI3000 MSAP access product family. The SI3000 Fibre Access is designed on the same platform (the same shelves, central Ethernet switch and management) as the SI3000 DSL Access (the rebranded SI2000 IP BAN), which is already widely deployed in Telekom Slovenije's access network.
Why an Ethernet P2P fibre architecture? Telekom Slovenije chose the point-to-point optical fibre architecture to ensure future-proof delivery of advanced broadband services - the richest multimedia, the fastest data transfers and superior voice quality - to each subscriber. The Ethernet P2P fibre architecture makes it possible to offer Fast and Gigabit Ethernet connectivity to each subscriber using standard, widely available network equipment.

Uros Jenko is Product Marketing Manager, Iskratel

CWDM technology provides the answer to bridging the demand for bandwidth, both quickly and cost-effectively, says Francis Nedvidek

FTTx (Fibre To The Home, Business, etc.) is gradually gaining momentum in Europe. Projects realised to date have tended to be modest, but it is widely conceded that FTTx has entered the mainstream. Network operators have plans on drawing boards in the Netherlands, Norway, Sweden, Denmark, France, Italy, Slovenia, the UK and Germany, among others. Certainly, regulatory and legal frameworks concerning the use of legacy infrastructure and of newly installed fibre; the jurisdiction of regional and city carriers vs telecom vs CATV/HFC (Cable Television/Hybrid Fibre-Coax) operators; and access to multiple-dwelling buildings still need resolution. In addition, the technical debates concerning PON vs point-to-point architectures evolve as broadband demand, telecommunications legislation and network technology advance.
As networks expand in terms of subscriber numbers, the offer and take-up of services and geographic footprint, Coarse Wavelength Division Multiplexing (CWDM) has emerged as the preferred method for increasing the link capacities of these optical access networks quickly, simply and at low cost. Passive CWDM requires no electrical power whatsoever, and the technology has proven itself sufficiently robust and reliable for installation in the most demanding environmental conditions.
Modern CWDM technology enables network capacity upgrades in the form of install-and-forget hardware, allowing network operators to multiply the bandwidth of their presently overloaded fibre spans. A CWDM technology platform permits enhanced flexibility in network planning and installation without sacrificing scalability to far higher transmission volumes as bandwidth needs inevitably grow. CWDM is inherently transparent to protocol, coding and bit rate, and is therefore ideally suited to aggregating fibre bandwidth. Capacity increases of 4X to 8X, or even up to 18X, are routinely achieved at a fraction of the cost of laying new cable in trenches or drawing additional fibre strands through conduits. Operators implement network functionality upgrades literally within hours, while continuing to operate ATM, TDM/TDMA, SDH/SONET or whatever topology their legacy or new architectures embrace. Furthermore, CWDM bandwidth augmentations are network transparent and fully interoperable with BPON (Broadband PON, ITU-T G.983.x), GPON (Gigabit-capable PON, ITU-T G.984.x), Ethernet PON (EPON, IEEE 802.3ah) and various versions of DOCSIS. Even 1310 nm and 1550/1490 nm analogue modulation combined with full digital overlays can be accommodated.
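To make the multiplication arithmetic concrete, here is a minimal sketch. The channel counts echo the 4X/8X/18X figures above; the 2.5 Gbit/s per-channel line rate is an illustrative assumption, not a vendor specification:

```python
# Illustrative arithmetic only: how aggregate capacity scales with the
# number of CWDM wavelengths. The per-channel line rate is an assumed
# figure for demonstration, not a vendor or standards-mandated value.

PER_CHANNEL_GBPS = 2.5  # e.g. one 2.5 Gbit/s transceiver per wavelength

for channels in (1, 4, 8, 18):  # 18 channels = full ITU-T G.694.2 grid
    capacity = channels * PER_CHANNEL_GBPS
    print(f"{channels:2d} wavelength(s) -> {capacity:5.1f} Gbit/s "
          f"({channels}x the single-channel fibre)")
```

The same fibre strand carries every additional wavelength, which is why the upgrade cost is dominated by transceivers and passive filters rather than civil works.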
The goal of the network operator is to provide ever more subscribers with service while containing the cost to reach each additional customer. Reaching more subscribers with higher bandwidths attains higher penetration densities and consequently greater revenue generation potential. Increasing the bandwidth of existing fibre lines promotes higher degrees of network utilisation by permitting the price of each router port and laser transceiver to be shared across many connection drop points. Increasingly, attracting new subscribers also means providing the bandwidth that customers need for the services and the programming that they are signing up to enjoy. In all, CWDM is a very attractive means for network operators to achieve their objectives.
At the very edge of the network, FTTx architectures traditionally exploit an optical platform to carry downstream traffic to approximately 16 to 32 residential drop points or subscribers and upstream traffic back in the opposite direction. FTTx deployments, whether telecom-centric or HFC-centric, ultimately require extending sufficient optical bandwidth from the central offices and headends all of the way to these subscribers.
In its simplest form, a CWDM multiplexer aggregates additional wavelengths - in other words, additional data channels - onto an optical fibre where previously only one wavelength, or channel, had been transmitted. Upon arrival at the opposite end of the fibre, a CWDM demultiplexer discriminates and physically separates the different wavelengths, so that each is rendered once again as an individual communications channel. In practice, passive CWDM may be deployed in simple ring or protected-ring distributions, point-to-point setups and PON configurations, in bidirectional or unidirectional arrangements, or used to carry analogue signals simultaneously with bidirectional digital overlays. CWDM equipment may be packaged to fit 19-inch telecom central office installations, or as splice cassettes for mounting in street cabinets, hand-holes or CATV-pedestal closures. The most advanced CWDM components work over temperatures spanning the Telcordia GR standards for outside plant operating conditions and are small enough for convenient insertion or retrofit into existing fibre splice cassettes. In practical terms, upgrading network capacity becomes a task of modifying outside plant fibre connectivity rather than procuring and installing new inside plant equipment.
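The mux/demux behaviour itself can be pictured with a toy model. The wavelengths below are nominal CWDM grid values in nanometres; the payload labels are invented purely for illustration:

```python
# Toy model of passive CWDM mux/demux: several channels share one fibre,
# keyed by wavelength, and are physically separated again at the far end.
# Wavelengths are nominal ITU-T G.694.2 CWDM grid values (nm).

def multiplex(channels):
    """Combine {wavelength_nm: payload} channels onto a single fibre."""
    return sorted(channels.items())   # all wavelengths travel together

def demultiplex(fibre):
    """Separate each wavelength back into an individual channel."""
    return dict(fibre)

channels = {1471: "GPON feed", 1491: "Ethernet backhaul", 1511: "CATV overlay"}
fibre = multiplex(channels)
assert demultiplex(fibre) == channels   # every channel recovered intact
```

Because the separation is purely a function of wavelength, the passive filters neither know nor care what protocol or bit rate each channel carries.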
Network operators are increasingly taking advantage of CWDM-enhanced architectures and their accompanying low CAPEX, minimal OPEX, and simple and straightforward planning and implementation. Decisions to adopt CWDM typically revolve around the following priorities:
Low and predictable equipment and operating cost - CWDM network upgrades require significantly lower CAPEX and offer much more economical OPEX scenarios than any active equipment deployment. Especially attractive is the quicker return on investment. We often encounter network operators who use the cash flows generated by newly CWDM-acquired subscribers and enterprise service contracts to finance their next access network expansion.
Ability to upgrade portions of the network, or the entire network, quickly and efficiently - Agility has a major impact on launch strategy and timing, and rapid response is key to pre-emptively or defensively capturing and holding market share. Our experience over the past three years with numerous European network operators, exploiting CWDM building blocks across many thousands of nodes, clearly confirms that four- or eight-channel upgrades may be installed and fully operational within days or less.
Simplicity of specification, deployment and upgrade/reconfiguration - An inherent attraction of passive CWDM-based solutions is that the technical expertise required to design, manage and upgrade or otherwise adapt the existing or new network is well within the capabilities of virtually any network operator. The risks and burdens of complex network design and planning may be minimised without sacrificing options to scale the bandwidth or network configuration further. Deployment means plug-and-play installation with no need for additional power supplies or software updates.
Solutions that facilitate rather than constrain future expansion - Network operators strive to add subscribers, extend geographical reach and transport ever more data traffic. CWDM is a low-cost, low-risk tactic that complements other capacity enhancements, whether future expansion strategies incorporate further passive or active equipment or even a complete change of operating philosophy. Roll-out may be planned so that the technical improvements and the financial resources associated with upgrade scenarios remain decomposable into predictable, non-cost-prohibitive phases. Network operators preserve the freedom to roll out capacity, coverage and services as the changing demand and competitive landscape require and as cash flows dictate.
Freedom from becoming locked into proprietary schemes - A CWDM approach tied to the established open standards typically operates unconstrained with any of the routers, switches, DSLAMs and even the WDM systems offered by major Telecom / CATV / HFC / Datacomm vendors. As a passive element, CWDM modules are functionally agnostic to all data transmission protocols and are equally immune to the incompatibility problems often encountered when connecting disparate equipment or accessories supplied by different vendors. Risks of becoming captive to any particular proprietary approach or attendant service agreement are eliminated.

Dr. Francis Nedvidek is CEO, Cube Optics AG, and can be contacted via tel: +49 (0) 162 263 8032; e-mail: nedvidek@cubeoptics.com

In telecommunications, as in many other industries, success usually comes from careful planning. Danny Berko and Ron Levin explain that planning now for the deployment of effective and proven deep fibre platforms will help meet the demands of the future wave of IPTV and other new customer services

The rapid growth of new broadband services such as IPTV will soon stretch the local loop or access network to its limits in terms of bandwidth delivery capability.  Even existing Internet services are becoming thirstier for higher download speeds as they cram their site pages with customer-compelling pictures and graphical content. 
Additionally, a growing customer segment - SOHOs and home workers - is looking for higher upload speeds to support their needs to send ever larger files to central office locations and facilitate increasing numbers of peer-to-peer sessions.  Many telcos and LLU operators have sought successfully to address these demands by exploiting the latest advances in Digital Subscriber Loop technology - xDSL - to carry these higher speeds over a predominantly copper local loop network originally designed to carry analogue voice services.
Much has been achieved in this respect and it is estimated that a large proportion of customers (over 90 per cent in Western Europe) now have access to broadband speeds of over 2 Mbit/s with some (nearer 10 per cent) enjoying 10 Mbit/s or greater. 
Characteristically, however, xDSL speeds fall as copper loop delivery distance grows, and the laws of physics are beginning to limit the further speed improvements that can be made over existing copper loops to support higher-speed customer services.  So consideration must now be given to how these distances can be reduced to meet the next wave of bandwidth demands, which looks to be of the order of 25-50 Mbit/s.
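The shape of that distance penalty can be sketched with a simple interpolation. The anchor figures below are rough, illustrative values for an ADSL2+-class service, not measurements of any particular network:

```python
# Indicative sketch of xDSL rate falling with copper loop length. The
# anchor points are rough illustrative values only; real reach depends
# on wire gauge, crosstalk, noise and the DSL variant deployed.

ANCHORS = [  # (loop length in km, approx. downstream Mbit/s)
    (0.3, 24.0), (1.0, 18.0), (2.0, 12.0), (3.0, 6.0), (4.0, 2.0),
]

def approx_rate(km):
    """Linear interpolation between the illustrative anchor points."""
    if km <= ANCHORS[0][0]:
        return ANCHORS[0][1]
    for (d0, r0), (d1, r1) in zip(ANCHORS, ANCHORS[1:]):
        if km <= d1:
            return r0 + (r1 - r0) * (km - d0) / (d1 - d0)
    return ANCHORS[-1][1]

for km in (0.5, 1.5, 2.5, 3.5):
    print(f"{km:.1f} km loop -> ~{approx_rate(km):4.1f} Mbit/s downstream")
```

Whatever the exact curve, the lever available to the operator is the horizontal axis: shorten the copper, and the attainable rate climbs.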
This is particularly important for operators as they plan to satisfy growing demand and retain/build their revenue streams.  Deeper fibre into the access network - to shorten the copper loop distances and bring the high launch speeds of xDSL sources closer to the customer - is the principal approach to tackling the situation.
This entails the deployment of robust and reliable xDSL broadband platforms at the end of the fibre in often environmentally harsh and less accessible parts of the access network, such as cabinets, building basements and underground enclosures. 
The challenges for operators and their suppliers are not insignificant. Such platform investments must support a positive business case and they need to complement operators' current network convergence strategies, as well as longer-term plans for the access network as a whole.
The rewards are nevertheless significant in terms of order of magnitude improvements in bandwidth speeds (factor of 10+) and the overall potential future services that can be offered to customers and the community in general.  Cable operators who have copper pairs incorporated within their coax distribution (Siamese pairs) may also find such platforms attractive in terms of the premium service potential they can offer over and above their normal cable modem services.

Why more bandwidth?
The underlying trend within the developed world continues to be for more content and associated higher delivery speed in broadband services, be it within the standard Internet services portfolio or specific new planned services such as IPTV and video streaming/conferencing services. 
The limit is difficult to ascertain and is analogous to the processing speed/memory capacity trends within the PC industry.  While much has been achieved in improving the transport efficiency of such services, including advanced compression and coding techniques (MPEG, etc), the net trend still translates into ever-higher bandwidth capability required of the access transport. In cumulative terms this can move anticipated customer bandwidth demand to between 25 and 50 Mbit/s. 
Currently, most Western European operators appear to be looking at around the 25 Mbit/s figure, whereas in the USA, where the attention to HD TV appears to be greater - together with a demand for more simultaneous sessions - the figure approaches the 50 Mbit/s mark.
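As a back-of-envelope illustration of how such planning totals are reached, a per-household service basket can simply be summed. The per-service rates below are assumptions chosen for illustration, not operator data:

```python
# Back-of-envelope household bandwidth budgets. The per-service rates
# are illustrative assumptions, not operator data or measurements.

basket_eu = {"1 x HDTV stream": 8.0, "2 x SDTV streams": 2 * 2.5,
             "high-speed Internet": 10.0, "VoIP and extras": 1.0}
basket_us = {"3 x HDTV streams": 3 * 8.0, "2 x SDTV streams": 2 * 2.5,
             "high-speed Internet": 15.0, "VoIP and extras": 1.0}

for region, basket in (("Western Europe", basket_eu), ("USA", basket_us)):
    total = sum(basket.values())
    print(f"{region}: ~{total:.0f} Mbit/s total")
    for service, mbps in basket.items():
        print(f"  {service}: {mbps:g} Mbit/s")
```

Summing these hypothetical baskets gives roughly 24 Mbit/s for the European profile and 45 Mbit/s for the US profile, consistent with the 25 and 50 Mbit/s planning figures quoted above.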
This expectation of future bandwidth growth needs to be addressed by operators if they plan to meet and stay ahead of future demand. Current deployment of xDSL technology at CO sites has achieved much in servicing the initial growth of broadband (principally Internet services) over the last decade and typically, within Western Europe, it is estimated that around 90 per cent of broadband customers connected to incumbent telcos' networks now enjoy 2 Mbit/s plus.
However, probably only around 10 per cent enjoy 10 Mbit/s or more. This is because the high launch speeds deliverable by xDSL technologies diminish with copper loop distance (or reach) from the CO, with the result that the speeds needed for future services become progressively less attainable to customers beyond 2-3 km reach. 
So the distribution of customers in relation to copper distance from exchange is currently a defining factor. This distribution is not too different across Western Europe incumbent operators, although there is a noticeable difference with the USA, by nature of its demographics. With these existing copper distance distributions, it is clear that the bandwidth identified for future service growth could only be delivered to around 10-20 per cent of Western European customers at best, and even less in the USA.
Clearly, in order to capture the bulk of customers within a much wider future services bandwidth footprint, something radical needs to be done to shorten copper transport distances within the access network.

Deeper fibre provides the solution
Achieving the shorter copper distances needed involves pushing fibre deeper into the access network.  As a result, operators, along with their suppliers, need to develop optimum strategies for achieving this against valid business cases.
Most operators already deploy fibre all the way to large, and to a significant proportion of small, business customers, thus removing the copper bottle-neck altogether. However, to provide a similar major fibre overbuild to the remaining bulk of customers, i.e. fibre to the premises/home (FTTP/H), is currently a prohibitively costly investment for most operators.
Business cases are beginning to emerge for FTTP/H deployment in new build (greenfield) scenarios, but these barely amount to more than 1-2 per cent per annum of an operator's total network.  Therefore, in order to address the future bandwidth challenge effectively, lower partial fibre investment solutions need to be considered. Normally referred to as ‘deep fibre' platforms, they involve deploying fibre from the CO to appropriate points deeper in the access network, terminating on xDSL platforms which then connect with the remaining (shorter) copper distribution pairs.
As a consequence, the higher launch speeds of xDSL can be exploited to provide the much greater bandwidth anticipated to meet future needs.  The main governing factors determining the points in the access network where such fibre terminates are the location of appropriate and accessible copper cross-connection points where the fibre/copper transition can practically take place and the fibre count necessary to achieve an economic customer footprint deployment. 
The deployment scenarios adopted by most operators are either fibre to the node (FTTN), normally coincident with the first primary cross-connect point (PCP-external cabinet), or deeper to the curb (FTTC), normally coincident with a secondary cross-connect (SCP) or street distribution point (DP).
In the case of conurbations made up of large blocks of flats or multi-dwelling units (MDUs) a fibre to the basement/building (FTTB) deployment may also be appropriate.  In each case, this shortening of the copper loop enables the much higher xDSL launch speeds to be delivered to a significantly larger proportion of the population, typically 25-50 Mbit/s+.
This has been improved upon further with the latest VDSL2 chipsets (potential speeds up to 100 Mbit/s). The VDSL2 standard has been optimised for deployment at such points close to the customer, and has the advantage that it can be configured for symmetric as well as asymmetric delivery.  Additionally, both Ethernet over DSL - Ethernet in the First Mile (EFM) - and traditional ATM over DSL are configurable with this standard.

Proven deep fibre deployments
Major operators around the world are now deploying deep fibre solutions, either in trials or in live deployments across major segments of their networks. This is enabling them to prove the technology, develop experience, and create the processes and procedures required to build the high-capacity access infrastructure that their customers and future service opportunities will demand.
The key for platform suppliers is to be at the forefront of many of these deployments and be able to share and understand operator needs and requirements while being able to demonstrate the capabilities of the underlying technology. 
A major recent example of deep fibre platform deployment is Deutsche Telekom's High Speed Interface (HSI) project, in which high-speed services are being rolled out in ten major cities in Germany and several thousand deep fibre platforms are being deployed, delivering bandwidths of between 25 Mbit/s and 50 Mbit/s.
Underground (UG) platforms are already in widespread deployment around the world, including at Kingston Communications in the UK.
The thirst for more customer bandwidth is beginning to grow and will soon outstrip the capabilities of operators to deliver potential future services using only CO-based DSLAM architectures, particularly if a broad customer service footprint is to be maintained.
Enriching the access network with more fibre, through the deployment of appropriate deep fibre platforms, will address this need successfully. Such platforms are available now, are proving to be economic deployment solutions, and have the capability to accommodate future needs.
The key to successful deployments depends on a thorough understanding of the access network, its distributive characteristics, the harshness of its environment and the associated principal factors that will drive service improvements and OPEX reductions.  This needs to be accompanied by full cognisance of the complementary service delivery management aspects required in the backhaul network. 
Leading suppliers are now working closely with operators to ensure that a partnership approach is achieved in meeting new customer demands and potential revenue growth.  In telecommunications, as with many other industries, success usually only comes when careful planning meets with opportunity.
Planning now for the deployment of effective and proven deep fibre platforms will ensure that the opportunities of this future wave of IPTV and new customer services can be grasped by operators successfully and with confidence.

Danny Berko, Director, Product Marketing, and Ron Levin, Director, Product Marketing, Broadband Access Division, ECI Telecom.

Eastern European telecoms is poised to enter a new phase. The big mobile players that dominate the region’s telecoms markets are eager to transform their business models.  With their networks mostly built-out and their subscriber numbers built-up to saturation point, they’re turning their attention to developing the subscribers they have - by introducing new services and more refined customer and financial management. And they’re looking to expand their operations into new territories. It may sound like a straightforward next step, but it requires major technological and human re-engineering and it will drive the market in the region.

To assist this transition - not just for the region’s big three operators, but for all the players - the TM Forum and telecom and technology consultants Ernst & Young are staging their latest Tele|Evo (Telecoms Evolution) event in Moscow (October 8 - 11, 2007). The underlying theme of the conference is telecom business transformation: attendees will gain an understanding of evolving telecom business models - why they’re changing, and how all players (operators, vendors, specialist service providers, integrators) can best adapt and meet the management challenges that will be thrown up by the process.

The event couldn’t have come at a better time. Mobile has been a huge success in Eastern Europe (Russia, the CIS and the former Warsaw Pact bloc) - so much so that in many territories across the region the rapid growth phase of that market is well and truly over. With mobile penetration rates now over 100 per cent in some markets, mobile companies have to find new ways to drive revenue growth. That means that instead of finding new subscribers, operators are concentrating on keeping the customers they have in increasingly competitive conditions, and on generating more revenue from them by offering new services and packages.

“In the past the operators have been very network-centric,” says Ilya Kuznetsov, Telecom Advisory Director at consultants Ernst & Young and Regional Business Development Manager for Russia, CIS and Eastern Europe for the TeleManagement Forum. “They have been competing for licenses, developing their networks over very large territories, and generally marketing their brands. So up to now their approach has generally been to say: ‘OK, we have a network, we’re going to invest in new technologies such as EDGE, then we’re going to understand what we can do with it from a services perspective, then we’re going to try and find a way of targeting it at customers.’ That approach won’t work any more.”

Instead, says Kuznetsov, players understand that in a saturated market they have to take a customer-centric, rather than network- or technology-centric, view of their businesses. “This requires changes to their traditional business models,” he adds. “The market landscape in Eastern Europe has different sub-markets, in contrast to the European market, which now seems much more integrated. This environment should now be focused on new offerings: converged offerings, content-based offerings. It’s about getting more revenues from the existing customer base.”

One of the most remarkable features of the Central and Eastern European telecom markets has been the sustained growth of GSM mobile. In Russia, for instance, the ‘big three’ GSM mobile operators - MTS, VimpelCom and MegaFon, which together account for over 132 million subscribers - have helped Russia achieve a 108 per cent penetration rate in mobile.

“The ‘big three’ are positioning themselves right now as CIS-wide operators and they are covering around 85 to 90 per cent of the CIS territory,” says Kuznetsov. “These companies are very strong players in the field: they are generating good cash flow and they have enthusiastic, aggressive investors and holding companies behind them. They are constantly looking to expand their business operations - not necessarily within Eastern Europe, but more into markets in the Mediterranean region, the Middle East or South and South-East Asia. These are markets that are not yet at saturation point, unlike countries such as the Czech Republic or Poland.
“The other focus for development is on generating more revenue from new services in the markets they have already developed. What needs to be addressed now is that gap in the business model between the Network Layer and the Brand Layer,” points out Kuznetsov. “That’s the OSS and BSS area: do this right and you get a deeper understanding of your customers.”

Gaining that understanding is key to keeping and profiting from customers in maturing markets. “Our regional operators cover huge populations and large geographies, and within that they are serving a huge number of different segments. These segments have never, so far, been understood well enough by the companies to enable a smooth transformation of their old business model to the new one,” he says.

“Over the next 12 to 18 months there is potential for a wave of mergers and acquisitions in telecom-related fields such as fixed/mobile convergence, content/service provisioning and the telecom/media business. But before they make these big business decisions they need to define very precisely the customer segments they are going to target and what customer propositions will be developed for each of their different territories. As things currently stand, not everybody has a way of defining and reaching these sub-segments.”

So one of the important roles of the TeleEvo conference, says Kuznetsov, is to challenge the region’s operators. “We will be asking them: ‘Who are your customers and what is your business?’”
TeleEvo features a slate of top speakers to help them define answers to these questions. Martin Creaner, TM Forum President, will outline the TM Forum’s vision of industry transformation and the role of its standards and frameworks in the BSS/OSS area, and representatives from major players in the region - such as Alexey Nichiporenko, First Deputy CEO at mobile operator MegaFon - will also give their perspectives. Important subjects covered at the event include revenue assurance and management, telecom-oriented IT governance and compliance, and interconnect billing. The importance of the customer and services experience is also explored, as are IT and operations topics such as managing multi-play products and services, and service delivery frameworks.
As TeleEvo illustrates, telecoms is no longer purely about building technology, but is much more about building strands of business.
“There is no lack of investment resources in the region,” explains Kuznetsov, “but there is currently a lack of sound ideas about how to use that investment properly, and within the target profit levels, in the next phase of business development from a customer service point of view.” TeleEvo will provide an ideal forum to explore these issues. “The great advantage we have in our region is that we can look at how these things have been done in different markets and the sorts of business models that have been deployed,” enthuses Kuznetsov. “We have to examine and understand which models might be applicable and justifiable from both a revenue and a customer loyalty perspective. So the TeleEvo conference is trying to bring people in - not just from Europe and North America, but from Asia and other territories - to demonstrate all the different ways of transforming the traditional telecom operator into the new customer- and service-centric operator.”


Telcos are good at ‘factory’ service provision but it’s an ethos that doesn’t fit with enterprise demands for highly complex ICT outsourcing. To tap that market, telcos will have to up their game and invest, says Leo McCloskey

The number of enterprises looking to outsource converged information and communications technology (ICT) solutions is growing rapidly and should be providing a handy new market for European telcos as other parts of the wireline network business continue to be squeezed.  But the problem is that telcos need to equip themselves properly if they’re to tap this increasingly competitive market profitably, or they’ll be relegated to the dreaded role of providing low-value ‘factory’ service components to other players - such as Virtual Network Operators or IT outsourcers - who manage to tap it first. This means equipping themselves with the right tools to efficiently meet enterprise needs in what is a highly competitive market for complex, multi-provider, converged services, because the old way of ad hoc, manually intensive processes is simply not profitable.

Selling services to enterprises isn’t what it used to be. Back when enterprise technologies tended to be managed in separate silos - separate networks and separate IT domains for voice, corporate data and desktop LANs, for instance - telcos were apt to look forward to a more rational, profitable time when their infrastructure and intellectual resources could meet all those disparate technology and application requirements. That would enable the enterprise to outsource the ICT environment to a single entity, reducing cost, hiding complexity and enabling new applications. As the natural provider of the network glue, telcos judged they could and should be considered viable providers of those outsourced services involving both communications services and IT. ‘Bring on the revolution’, they chorused.

Indeed the converged network eventually happened, albeit based on IP rather than the ISDN and then ATM that telcos initially envisaged. In addition, as the converged enterprise network concept has gathered pace, the conventional business wisdom around business ownership and control has changed too. Whereas 10 or 20 years ago many enterprises (obviously depending on their sector) would have seen communications solution ownership and control as strategic and their communications performance as crucial to competitiveness, today many enterprises regard ICT as ‘non-core’ and therefore a prime candidate for outsourcing, with the emphasis on performance increasingly focused on the end-to-end user experience.  So, by rights, telcos should be in the hot seat ready to intercept a brave new lucrative business market involving network convergence and outsourcing - playing straight to their strengths.

Not quite. A single enterprise IP network infrastructure certainly irons out communications infrastructure issues. But, of course, it’s not the monolithic network it appears to be, as it is constructed of multiple service components, each bringing its own complexity problem. Previously, enterprises managed ICT islands with a single source (or just a few sources) of technology - such as might formerly have been found in an intranet or a voice network. This island approach may have been expensive, certainly required multiple management systems, and inevitably meant that it was difficult to build features or applications that crossed networks. But, by breaking the environment up into homogenous chunks of technology, it at least kept things simple and enabled radical change (say, the replacement of a voice switch) to take place without impacting the rest of the technology estate.

Today, however, enterprises are building large complex and integrated service environments that must be able to integrate multiple applications and service components. Converged enterprise services such as LANs, storage, mobility and voice-over-IP - which make contrary performance demands of the network – are highly distributed and must operate consistently throughout the enterprise. That makes these ICT environments hugely more complex to design, integrate, and, above all, to manage on an ongoing basis.  So there has been a slight change to the script - the converged enterprise network has actually brought with it, not an across-the-board reduction in complexity, but a complexity migration - in effect, the problems have scurried up the protocol stack.  The problem for telcos is that serving this market segment by building a powerful single managed service provider capability  - pulling together IP-based ICT services in the traditional telco way  -  is no longer an option.

The complexity of each individual ICT project is such that service delivery doesn’t scale.  The germ of the problem lies in the telcos’ ‘factory’ approach to service delivery. For most of their history, and indeed for most of their activities today, the factory approach has worked well. It’s about defining, managing and monitoring tight processes and procedures to suit the delivery of high numbers of standard product/service combinations, all within their own service delivery environment. A good example today is ADSL-based broadband service deployment, where OSS standards and vendor solutions concentrate on automating as much of the ordering/provisioning/fulfilment process as possible. The objective here is to get service delivery ‘right first time’, because market competition will not permit extra costs in terms of telephone help desk time or even truck rolls to sort out problems.

Now consider the requirements of complex ICT service delivery in the enterprise market. In complete contrast to ADSL delivery, where the service is uniform, complex ICT deployments are highly bespoke collections of products and services delivered from different service providers, with each component requiring modification to suit the enterprise ICT requirement.  An ICT enterprise solution needs its constituent elements ‘decomposed’ into specific requirements and then ordered as components from the right provider and delivered in the right sequence. As things currently stand it is typical practice for providers to devote large numbers of staff - complex bid and project teams - to designing and then stitching together these solutions. Being human and dealing with high degrees of interdependency and complexity, they make mistakes. The inevitable upshot is botched configurations, missing components and delay as the stricken solution is troubleshot, project profitability plummets, and the enterprise that depends on it becomes increasingly frustrated.

But the problems don’t stop there. Once a hand-crafted solution is successfully running there will be ongoing changes, sometimes as high as five per cent per month. Since the initial design was produced by groups of people using ad hoc processes and no centralised tooling, all adds and changes are implemented on an ad hoc basis too. In these circumstances, service problems are very difficult to isolate, leading to yet lower profits and higher customer frustration.  Ironically it’s been the nirvana of the homogenous, converged network - the development that was supposed to enable telcos to move up the value chain to offer network outsourcing services to enterprises - which has exposed the limitations of the repeatable ‘factory’ approach to complex ICT solution delivery.

So what to do? What’s required is centralised tooling that acts as an abstraction layer above the individual OSS ‘factory’ stacks, organising and correctly sequencing the planning and implementation activities for complex projects sourced from multiple stacks. Such a centralised approach must complement the information received from the existing OSS that manages each ‘factory’ service component, enabling a comprehensive end-to-end managed solution.

Nexagent has implemented such an approach within its Nexagent System, which guides the provider through the complete solution lifecycle. It provides a centralised solution design and modeling capability, enabling the MSP to efficiently capture requirements from a potential customer and to model and validate a network design based on the ICT requirements. It then takes the model and generates an implementation procedure to take the solution through to fulfilment. Once the solution is up and running, it complements the ‘factory’ OSS by monitoring the actual end-to-end user experience across all network service components to ensure that the solution is delivering against the design requirements.  This replaces the hand-tooling that currently takes place to create the enterprise solution. It standardises design and integration processes, greatly reducing the time and number of people required to design, transition and operate the solution.
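The core idea - decompose a bespoke solution into components, order each from the right provider, and sequence delivery by dependency - can be shown in miniature. The sketch below is a generic illustration of that pattern only; the class names and data are invented and do not represent the Nexagent System’s actual interfaces:

```python
# Generic sketch of 'decompose and sequence' for a multi-provider ICT
# solution. All classes and data are invented for illustration; this
# is not Nexagent's API.

from dataclasses import dataclass

@dataclass
class Component:
    name: str          # e.g. "MPLS VPN access circuit"
    provider: str      # which 'factory' delivers it
    depends_on: tuple  # component names that must be live first

def implementation_sequence(components):
    """Topologically order components so dependencies come first."""
    done, order, pending = set(), [], list(components)
    while pending:
        ready = [c for c in pending if set(c.depends_on) <= done]
        if not ready:
            raise ValueError("circular dependency in solution design")
        for c in ready:
            order.append(f"{c.name} <- order from {c.provider}")
            done.add(c.name)
            pending.remove(c)
    return order

solution = [
    Component("Access circuit", "Carrier A", ()),
    Component("CPE router", "Integrator B", ("Access circuit",)),
    Component("Managed VoIP", "Provider C", ("CPE router",)),
]
print("\n".join(implementation_sequence(solution)))
```

Centralising this sequencing logic, rather than leaving it in the heads of bid teams, is precisely what removes the error-prone hand-stitching described above.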

Obviously there are other benefits too: revenue is accelerated, the time taken to undertake the project is reduced and the number of implementation mistakes falls - each leading to enhanced solution profitability.  In effect the Nexagent System creates a standard, automated process for integrating and sequencing ICT services from multiple service providers’ ‘factories’. The Nexagent System is aligned with the ITIL Service Lifecycle as well as conforming with the TeleManagement Forum’s eTOM (enhanced Telecom Operations Map) process model.  As the telco business model changes radically - with traditional voice call charge revenue contracting and even, in some territories, voice lines shrinking as subscribers turn to mobile replacement - telcos are keen to tap new revenues, especially in high-growth, high-profit areas.

Leveraging their network expertise to win combined communications/IT converged network and applications outsourcing business has always been on the radar.  But if telcos are to make good in this area there must be a recognition that custom enterprise ICT managed solutions are no longer winnable using ad hoc hand-stitching. Telcos must apply the same focus on process automation to complex ICT solutions as they currently do to the ‘factory’ parts of their business. If they don’t, there are plenty of competitors in the IT and outsourcing spaces ready to step in and take the business. Telcos need to invest now in the right tools to secure their competitive edge in this growing market.

Leo McCloskey is VP Marketing and Business Development at Nexagent

European Communications previews the TM Forum’s take on the converging industries of telecom, cable, content, media and the Internet at TMW Americas

With the slogan “Whatever the future holds – if you can’t manage it, you can’t monetise it”, the TMForum’s Management World Americas event is underlining the crucial role that effective systems management must play if the brave new world of services, from the increasingly converged strands of the telecoms, web, media, entertainment and information services industries, is to prove a profitable venture for the many and various players.
As the TM Forum celebrates its 10th anniversary, the organisation is stressing the management credentials it has built up over the years with the back room boys of telecom, to take it into a future of converged services and a central role in the systems management of those services.
Scheduled for November 4th – 8th in Dallas, Texas, TMW Americas is created, backed and supported by the TMForum, now widely recognised as the industry’s largest independent trade association for management and operational matters and standards.  The theme of the show is based around the TMForum’s conviction that, with services converging onto single networks and multi-service platforms increasingly used to reduce costs, shorten new-service delivery times and increase reliability, operational systems and standards have never been more critical.
Spreading its appeal net far and wide, the TMForum claims that whether you are a service provider or a supplier, operating a fixed or mobile network, an incumbent or a challenger, a business executive or a technologist, in finance or operations, TMW Americas will have something for you.  The organisers stress that technology is no longer the key differentiator in this new, converged marketplace.  It is how well assets are deployed, speed of reaction to market opportunities, and anticipation of customer needs that will count.  In other words, ‘managing operations for converging services is the key to success’.  To this end, TMW Americas aims to offer a 360º view, ranging from strategic business issues to deep-level operational and technical topics, and is lining up an impressive range of keynote speakers and a packed programme of summits, seminars and training courses.
Among the keynote speakers are Mike LaJoie, Executive Vice President and Chief Technology Officer of Time Warner Cable; Robin Bienfait, CIO overseeing BlackBerry operations and corporate IT at Research In Motion; and Kevin Salvadori, CIO, Telus.  The Executive Summit on Business Transformation offers highly interactive sessions covering a range of topics, from investment trends in wireless technology to an examination of such initiatives as BT Group’s use of its 21st Century global platform to enable the delivery of more interconnected, more logical and more intuitive services.  TM Forum Technical Initiatives in the real world offers in-depth technical insight into the TM Forum Collaboration Program, through research and case studies, covering such subjects as unlocking the potential of SOA as a foundation for management systems and optimising business processes throughout the product and service lifecycle.
Other conference tracks include Managing and Delivering Content-Based Services; Managing and Optimising Customer Experience; and Operational Challenges in a Converged Market.
TM Forum’s Catalyst Showcases will also be much in evidence at TMW Americas.  The Catalyst programme is the Forum’s proving ground for pragmatic solutions – enabling service providers, system integrators, and hardware/software vendors to work together to solve common, critical industry challenges – and the showcases always prove to be a great draw at TM Forum events.
No event worth its salt, these days, misses the opportunity of giving vendors a platform to show their wares.  TMW is no exception, and the Converged Services Expo will give vendors the chance to lay out their stalls, and show the wide range of products and solutions from the many different strands of this increasingly convergent industry.

The challenge for carriers today is to establish a global and standardised network operating system that ties together both networks and applications. Verizon and Nakina teamed up to use the TM Forum's NGOSS eTOM and SID solutions and models to design and implement the fundamental building blocks of a multi-vendor, multi-technology Common Element Management System.   William F. Wilbert provides key insights on the implementation process, its challenges, and the outcomes of creating an efficient, effective, scalable, standards-based solution for managing one of the largest and most complex multi-vendor networks in the world

It held the potential for disaster. But fortunately they saw it coming, says Robert Ormsby, Director, Network Management Laboratory, at Verizon Telecom, remembering the days before Verizon and MCI merged. “We realized that unless we took action, our infrastructure could not continue to support the new services we wanted to provide. In fact, the burden of managing the network was threatening to undermine our business model.”
A top North American inter-exchange carrier (IXC), Verizon operates one of the largest communications networks in the world with an optical networking infrastructure that spans close to 100,000 miles and more than 4,000 Points of Presence (POPs). Verizon’s network currently offers high availability, plus a wide array of next-generation features. The company’s network is now a key differentiator.  But this was not always the case.
Prior to the merger with MCI, the carrier found that the costs of maintaining legacy systems – hardware, element management systems (EMSs), licensing, testing and training – were steadily rising and eating away at profits. For example, the cost of introducing a single EMS exceeded $1 million (for hardware, training, and updating methods of procedure), and also entailed hefty licensing, integration and ongoing maintenance fees.

OSS/BSS: a fine mess
The technology challenges facing the telecommunications industry can be summed up in a single word: complexity. Carriers operate their businesses on a complicated mix of systems and networks known as operating and business support systems (OSS/BSS). Everything runs on hardware from a variety of network equipment providers (NEPs) whose boxes come with their own proprietary systems and communications protocols. Some, but scarcely all, boxes come with element management systems (EMSs).
With varying amounts of integration work, EMSs help pull multi-vendor devices into one functioning network. EMSs tend not to be open, secure, or scalable, so a lot of extra work has to be done to make them function. “OSS/BSS integration can be a nightmare because everything has to be interfaced to everything else,” Ormsby says. “In most cases there is no standard information model to help build a searchable database, and this causes a huge stumbling block to network efficiency and reliability.”
Since the EMS layer does little to hide network complexity, many manual processes are required to fill in the gaps. To provide a single new service across multi-vendor, multi-domain (optical, Ethernet, IP/MPLS) systems requires stitching together a tangled web of networks and applications – literally spanning hundreds or thousands of network nodes utilizing many different software interfaces. Often, each device has to be manually interfaced into the network. Adding a new device type or application to the mix typically requires upgrading both hardware and software across the entire telecom network system – not an easy task considering that many of today’s new services and networks are built with a complex mix of products.
“Telecom operators deploying equipment from many different vendors face the challenges of integrating multiple EMSs into their OSS systems and training staff in each different EMS,” says Peter Mottishaw, senior analyst with independent analyst firm OSS Observer. “Many Equipment Providers deliver capable element management systems, but despite the efforts of telecom standards bodies there is still patchy support for standard northbound interfaces. This drives up integration costs. A further issue is that some equipment vendors do not deliver EMSs that meet the full set of operator requirements. EMS development is costly and complex and equipment vendors focused on getting new products to market sometimes under-invest in this area.”

Untangling the web: keep it simple
Almost all telecom service providers say they want to reduce the number of OSS vendors and products they have to manage. As they see it, their vendors – independent software vendors (ISVs) and network equipment providers (NEPs) alike – should simplify their product portfolios and move toward standardized offerings with the potential to support plug-and-play environments that minimize network and application integration. Many are hoping that standardized Service Delivery Platforms (SDPs) will allow them to rapidly roll out new services on increasingly converged next-generation networks, with high reliability and significantly lower costs.
SDPs rely on OSS/BSS systems underneath them, which in turn interface to the network elements (NEs) through element management systems (EMSs). Yet a fundamental disconnect persists between the NE and OSS layers of the network – a problem that has become costly and complex because of the continuous and inexorable technology changes taking place at both levels.
As NEs change, so does the OSS/BSS interface layer, which may have a ripple effect on the overlying services. If carriers wish to implement new OSS/BSS applications, these interfaces must be rebuilt and tested against both the SDP and the underlying EMS layer. An architecture that considers all these elements is critical if a sustained advantage is to be held in the market. Rapid innovations in network element technologies have to be constantly linked with the new operational support systems (OSS) above them, and the problem only worsens as the number of OSS and network technologies grows.
Thinking there had to be a technical solution to a technical problem, Verizon/MCI teamed with Nakina Systems of Ottawa, Canada to help develop and implement a new kind of integration solution called a Network Operating System (NOS). A whole new animal, a NOS essentially combines the capabilities of an EMS and an NMS (Network Management System) for multi-vendor, multi-technology networks, with a carrier-grade, scalable and secure architecture built on open, standards-based interfaces.
 “Our vision for a NOS is to provide a single management solution that discovers, secures, configures and manages any vendor’s networking products,” says Nakina Systems Chairman and CEO, Marco Pagani. “In essence, it is a universal network adaptation, abstraction and mediation layer that provides a single point of integration between the actual network elements themselves and higher-level management, OSS and BSS applications.”
The Nakina solution provides carrier-grade performance and scalability that span both legacy and next-generation equipment across multiple domains and OSI layers – including SONET, SDH and WDM optical equipment, Ethernet switches, IP/MPLS routers, video switches, service routers and wireless equipment, among others. Unlike a traditional EMS, which essentially provides an interface or API for a specific network element type, a NOS provides a stable environment and single point of integration that abstracts the network complexity and disengages it from direct integration with the upper-layer OSS/BSS, in the same way that the operating system on a PC separates the hardware from its applications.
This mediation and abstraction function allows higher-level applications or services to be developed independently while simultaneously enabling new network equipment to be introduced and updated beneath it – while ensuring that the entire system still functions together like clockwork. A universal mediation layer helps service providers roll out their next-generation services and network infrastructure much faster in multi-vendor build-outs such as residential broadband, triple play, IPTV, Carrier Ethernet and wireless backhaul applications.
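The PC operating system analogy maps naturally onto the classic adapter pattern. As a generic illustration only - all class and method names here are invented, not Nakina's actual interfaces - a mediation layer of this kind might be skeletonised as follows:

```python
# Generic sketch of a network mediation/abstraction layer in the spirit
# of a NOS: one neutral interface northbound, vendor-specific adapters
# southbound. All names are invented; this is not Nakina's actual API.

from abc import ABC, abstractmethod

class ElementAdapter(ABC):
    """Neutral contract each vendor-specific adapter must fulfil."""
    @abstractmethod
    def discover(self):
        """Return an inventory record for the managed element."""
    @abstractmethod
    def configure(self, params):
        """Translate neutral parameters into the vendor's own dialect."""

class VendorXAdapter(ElementAdapter):
    def discover(self):
        return {"vendor": "X", "domain": "optical", "ports": 64}
    def configure(self, params):
        print(f"translating {params} into Vendor X's native interface")

class MediationLayer:
    """Single point of integration for the OSS/BSS layers above."""
    def __init__(self):
        self._adapters = {}                    # node id -> adapter
    def register(self, node_id, adapter):
        self._adapters[node_id] = adapter      # pluggable per node
    def inventory(self):
        return {n: a.discover() for n, a in self._adapters.items()}

nos = MediationLayer()
nos.register("node-1", VendorXAdapter())
print(nos.inventory())   # the OSS sees one uniform view of any vendor
```

The northbound interface stays stable even as new vendor adapters are registered underneath, which is the decoupling the article describes.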

NOS in Action: the Verizon CEMS solution
In 2005, Verizon Business (formerly MCI) set out to build a state-of-the-art ultra-long-haul transport network and a converged packet access network, initially comprising over twenty different types of equipment from ten different vendors.  With new technologies being introduced routinely and an already overloaded operations staff, Verizon was looking for what it termed a “Common Element Management System” (CEMS) to manage both of these new networks.  The goal was that CEMS would reduce operating expenses by limiting the growth of single-vendor EMSs, providing centralized operations control and simplifying the integration of new devices into the existing back office systems.
Verizon’s initial pilot project for the production CEMS solution was to upgrade functionality across a 40-node segment of the optical network without compromising network availability. The roll-out, which spanned two major metropolitan cities in the southwestern U.S. and covered approximately 650 miles (1,045 kilometers), would not only deliver new features and products to customers, it would also result in a more self-sustaining and secure network with fewer outages.
Verizon worked closely with Nakina Systems, using the company’s network OS product as the CEMS solution. The entire audit and software upgrade process was accomplished remotely in less than two hours.  To the surprise and satisfaction of the operations personnel, this was a dramatic improvement over the 40-hour, week-long, on-site effort that Verizon had originally anticipated. Verizon was also able to leverage the new features available in the new software load a full week ahead of schedule, reducing its time-to-new-service revenue. As it turned out, the CEMS solution held significant implications for the future in its potential to reduce costs and manage thousands of nodes across both networks.
Based on the success of the pilot project, Verizon Business has since expanded the use of Nakina across many of its vendors, with payback achieved simply through the cost avoidance of single-vendor EMSs. Overall, savings of approximately $1M to $1.5M have been achieved per EMS, in addition to real dollar savings in OPEX due to consistent, reliable operations.
The CEMS solution has now been in production for over two years providing Verizon the benefits of a single point of integration and a consistent set of procedures and interfaces for all their network element types. In addition, the service provider has also noted the following substantial CEMS benefits that help drive new revenues while making the network more efficient:
• Simplifies and accelerates the introduction of new services
• Lowers the cost and enables rapid integration of new systems into higher level OSS/BSS systems
• Provides one set of methods, applications, and interfaces that apply to all vendors
• Enables cost-efficient training, due to common procedures

Ace in the Hole: industry standards
Until recently, the idea of a Network Operating System was looked upon with some skepticism. Who, for example, would assume the burden of building and continuously updating “adapters” (or device drivers) so that the NOS could continue to talk to each box on the network?
“When we first started talking about the idea of a network OS, there were more than a few doubters, but that has turned around,” says Mr. Pagani. “Now many people think that a universal mediation layer will emerge naturally as more and more carriers put pressure on equipment vendors to provide standard interfaces and adapters as an integrated part of the product, just as PC peripherals manufacturers provide drivers as a standard part of their product. Nakina frequently gets requests from NEPs for our Software Development Kit (SDK) so they can build their own adapters.”
As the most effective way of transcending a mix of proprietary products from NEPs, the Nakina Network OS solution was designed from the beginning to have an open and modular software architecture. It adheres to the TeleManagement Forum’s New Generation OSS (NGOSS) standards. With this open architecture approach, “adapters” can be implemented at run time. These adapters are hot-deployable, and their development time is measured in days or weeks rather than the months or even years required to build a carrier-grade EMS.
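In Python terms, "hot-deployable" loading can be pictured as dynamic module import. This is a generic sketch under assumed, hypothetical module and class names, not Nakina's actual SDK mechanism:

```python
# Generic illustration of 'hot' adapter loading at run time using
# Python's importlib. The module and class names are hypothetical;
# this does not depict Nakina's SDK or adapter mechanism.

import importlib

def load_adapter(module_name, class_name):
    """Import an adapter module at run time and instantiate its class."""
    module = importlib.import_module(module_name)   # found on sys.path
    return getattr(module, class_name)()

# Dropping a new file such as vendor_y_adapter.py into the adapter
# directory would then make the element manageable without a restart:
# adapter = load_adapter("vendor_y_adapter", "VendorYAdapter")
```

The point of the pattern is that adding vendor support becomes a matter of shipping one new module, not rebuilding or redeploying the management system itself.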
Nakina is also forging partnerships and alliances with equipment vendors such as ANDA Networks and LSI, and System Integrators (SIs) such as HP. “These companies realize that there is a tremendous benefit to creating a standards-based common environment that makes it easy to manage and upgrade all devices on the network,” says Mary O’Neill, Vice President of Market Development at Nakina Systems. “A multi-vendor ecosystem will equally benefit carriers, NEPs, ISVs and SIs.”
With a NOS in place, a service provider is far better equipped to retain and build a differentiated service offering in the market and will be ready for their move beyond Quad Play, keeping services affordable over time.
 “Nakina’s Network OS solution is a comprehensive EMS platform that has been proven out in deployments with tier-1 customers,” says OSS Observer’s Peter Mottishaw. “It provides the scalability and security required for a carrier-class EMS. Operators struggling with multi-vendor equipment environments and deficiencies in existing EMSs should consider the platform as an alternative to purchasing EMSs from equipment manufacturers. Network equipment manufacturers who cannot support large-scale EMS platform development should also consider the Nakina Network OS as a potential common platform.”

William F. Wilbert has written for technology publications for more than 15 years

