To enable a successful Web 2.0, we also need Internet 2.0, says Richard Lowe
Ten years ago the Internet had capacity to spare and the applications that it supported consumed relatively few network resources. The advent of Web 2.0 has changed all of that. We have seen, and continue to see, a proliferation of complex applications demanding ever-more bandwidth. This has led many to wonder exactly who will foot the bill for the necessary upgrades to the network.
Web 2.0 applications are very much a success story - services like Wikipedia, Facebook, YouTube and the wide variety of text and video blogs all seem to defy demographic boundaries and continue to experience stratospheric growth in users. However, there are genuine fears that the demands these place on bandwidth resources may ultimately overload the network and cause Internet meltdown. To enable a successful Web 2.0, we also need Internet 2.0.
The problem is that the Internet was never designed to deal with the increasing demands now being placed on it. In this respect it bears a close resemblance to modern motorway infrastructure: no one predicted the number of cars that would eventually be on our roads, and as a result commuters face chronic gridlock, especially during rush hour. Similarly, no one could have predicted the popularity of next-generation Internet applications. Interactive, video-rich Web 2.0 applications demand a great deal of bandwidth, which clogs the networks carrying the information and degrades overall performance. Furthermore, the IP traffic generated by Web 2.0 applications does not follow the one-to-many, top-down approach of most original web applications. As one senior network architect of my acquaintance frequently explains, "traffic can come from strange directions".
Though it may sound counterintuitive, the solution is not as simple as merely building larger networks; just as with a motorway, extra capacity is soon consumed. It is a vicious circle: the more capacity that is provisioned, the more innovative bandwidth-hungry applications are developed to exploit it. Of course, new capacity has to be built to ensure continued innovation on the web, but we also need to enable the intelligent management of network resources to support valuable services.
This is not about creating a two-tier Internet. Some people may express the value they see through increased payments to their Service Provider. Others may choose to prioritise their access line resources towards gaming at no extra cost, and sponsors or advertisers may choose to fund incremental network capacity for services like IPTV. We have learned over the last few years not to second-guess what business models might emerge. After all, who would have believed a few years ago that one of the world's most valuable companies would offer a free consumer search service funded by contextual advertising?
Internet operators already have to contend with the challenge of managing network resources for multi-play packages that include 'walled-garden' services such as VoIP and IPTV. However, the problem is exacerbated when 'over-the-top', web-based services - such as YouTube or the BBC's iPlayer - seek to exploit greater network capacity for substitute services without bearing the network costs. Real-time services such as video can be severely impaired, or fail altogether, if insufficient network resources are available to them.
This places operators in a difficult position. They cannot tolerate a decline in the overall quality of their network, but nor can they turn their back on third party services which drive broadband adoption and are highly valued by customers. In the same way as content providers need to monetise their content, network providers need to monetise their networks.
Operators can no longer expect to make sufficient profits through the sale of voice lines. Nor is Internet service delivery the salvation it once appeared to be. Most of the important global markets are open to alternative network providers and this has resulted in fierce competition - eroding prices and eating into the profitability of broadband provision. Many telcos face the possibility of becoming little more than a bit pipe for the provision of over-the-top services by third-party suppliers.
This can only be countered by the provision of compelling value-added services that exploit the unique capabilities offered by ownership of the access and aggregation network, while ensuring that these premium services enjoy the quality of service required to differentiate them from over-the-top providers. Clearly there needs to be a proven economic case for allocating network resources to these services, rather than allowing all services to scramble for resources in an 'unallocated pipe'.
If telcos are to retain autonomy over the services they provide, they need to become rich-media web application providers, leveraging the best asset they have to compete effectively - the network itself. To do so, however, they will need fresh network management tools and a new business model.
The traditional approach to managing network resources for a particular service is partitioning. This carves out bandwidth specifically for VoIP, IPTV and so on, and only admits the number of concurrent sessions that the partition can support in the access network - or what my friend, the network architect, calls "sterilising bandwidth". Service quality is therefore only guaranteed when the network is 'over-provisioned'. This is an untenable and inherently risky approach in the Web 2.0 era.
For one thing, partitioning is a backward-looking approach, because the basic goal of migrating to all-IP networks is to have a common shared resource that is service- and application-agnostic. Partitioning the network only results in higher capital and operational expenditure because it is an inherently wasteful process: in seeking to guarantee quality, it leaves highly in-demand network assets 'stranded' and idle.

In the early days of IP networks, when the dominant traffic was voice with very little video, over-provisioning and partitioning were still possible, albeit inefficient, because voice consumes far less bandwidth than video and its growth and peak traffic patterns are more predictable. Video traffic, by contrast, is very bandwidth-hungry and subject to large peaks - a bit like a motorway going from nearly empty, with good flow and speed, to overloaded in less than a minute, causing endless delays and stoppages with no apparent reason or warning. Over-provisioning and partitioning for such demands would be economically impossible for Service Providers.

Moreover, actual usage patterns may not match the capacity plan, leaving customers unable to access a service or application when, in fact, sufficient capacity exists in another partition or 'silo'. Not only is partitioning an inefficient way of managing current services; every time a new service is launched, a brand-new capacity plan has to be drawn up alongside it. This leads to extended time-to-market, duplication of work and expense, and an inflexibility that is a serious disadvantage in competitive converged media markets.
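The stranded-capacity problem can be sketched with a toy admission check. All figures, service names and functions here are hypothetical, for illustration only:

```python
# Hypothetical figures: a 100 Mbps access link statically partitioned per service.
PARTITIONS = {"voip": 10, "iptv": 40, "best_effort": 50}


def admit_partitioned(used, service, mbps):
    """Admit a session only if the service's own silo has headroom."""
    if used.get(service, 0.0) + mbps <= PARTITIONS[service]:
        used[service] = used.get(service, 0.0) + mbps
        return True
    return False


def admit_pooled(used, service, mbps, capacity=100):
    """Admit against the whole link, treating it as one shared resource."""
    if sum(used.values()) + mbps <= capacity:
        used[service] = used.get(service, 0.0) + mbps
        return True
    return False


# With the IPTV silo full but most best-effort capacity idle, a partitioned
# network rejects an HD stream that a pooled one could easily carry.
used = {"iptv": 40.0, "voip": 2.0, "best_effort": 5.0}
print(admit_partitioned(dict(used), "iptv", 8))  # False: silo full, capacity stranded
print(admit_pooled(dict(used), "iptv", 8))       # True: 53 Mbps still free overall
```

The point is not the arithmetic but the structure: the partitioned check can fail even when the link as a whole has ample headroom.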
A modern approach needs to be agile - bandwidth is not unlimited, and there is no justification for wasting resources. The solution lies in technologies that allow the carrier to treat the network as a holistic resource available to all applications.
The ETSI standards-based Resource and Admission Control Subsystem (RACS) permits available resources in the access network to be allocated dynamically rather than pre-provisioned, ensuring that they are exploited in the most efficient way. Operax has enhanced the basic standards by proposing that the functionality operators require is "dynamic Resource and Admission Control" (dRAC). This brings dynamic topology awareness into the admission control and policy enforcement process, ensuring that services and sessions are guaranteed QoS on the basis of resources that are genuinely available.
The functionality sits between the application layer and the network, from where it can act as the single point of contact through which applications request bandwidth - effectively isolating the service from the network resources. It is then able to enforce subscriber and service policies, allocating resources on a real-time, per-session basis and removing any need for applications to understand the underlying topology of the network.
In the same way, dynamic Resource and Admission Control is able to manage applications, services, subscribers and network resources intelligently, according to the carrier's business policies. All the different points of bandwidth contention are identified and automatically checked before a session is set up. dRAC tracks the available bandwidth into a consumer's home and can ensure that a session is not set up if the necessary bandwidth is unavailable.
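A minimal sketch of this kind of per-session check - a hypothetical model for illustration, not the Operax product's API - admits a session only if every contention point on the path to the home has headroom:

```python
# Toy per-session admission control: each link on the path to the subscriber
# is a potential point of bandwidth contention that must be checked first.

class ContentionPoint:
    def __init__(self, name, capacity_mbps):
        self.name = name
        self.capacity = capacity_mbps
        self.reserved = 0.0

    def headroom(self):
        return self.capacity - self.reserved


def admit_session(path, mbps):
    """Admit only if every link can carry the session; then reserve end to end."""
    if any(cp.headroom() < mbps for cp in path):
        return False            # insufficient resources somewhere on the path
    for cp in path:
        cp.reserved += mbps     # commit the reservation at each contention point
    return True


# Hypothetical topology: aggregation link -> DSLAM -> 8 Mbps access line.
path = [ContentionPoint("aggregation", 1000),
        ContentionPoint("dslam", 100),
        ContentionPoint("access_line", 8)]
print(admit_session(path, 6))   # True: 6 Mbps fits at every point
print(admit_session(path, 4))   # False: only 2 Mbps left on the access line
```

The narrowest link - typically the subscriber's access line - decides the outcome, which is why topology awareness matters: knowing total network capacity is not enough.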
At present, applications are competing for bandwidth on a best-effort network. Automated management of bandwidth will not only ensure that service quality can be guaranteed for premium real-time services such as VoIP and IPTV, but can also ensure that over-the-top services have reasonably free access to resources. Quality can be guaranteed in the premium tiers of the network while still leaving room for innovation in web-based services.
More than merely saving operational expenditure by providing the most efficient technical support for services, this method of automated management may also allow operators to open new revenue streams and present new business opportunities. By treating bandwidth as a commodity that can be allocated dynamically, quality of service can itself become a monetising strategy. For example, if an individual customer wishes to subscribe to a 'gold' standard of quality for a service, such as high-definition (HD) for IPTV, the RACS can monitor the capacity and automatically inform the customer of the available levels of quality. If there is only capacity for a 'bronze' standard-definition (SD) class of service, the customer could be alerted before payment and charged appropriately if they choose to proceed. Alternatively, they can be offered a discount and priority if they prefer to access the 'gold' session through a network digital video recorder (DVR) at a later time.
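The gold/bronze scenario above amounts to capacity-aware tier selection; a minimal sketch, in which the tier names and bit rates are illustrative assumptions rather than real product figures:

```python
# Hypothetical quality tiers and the downstream bandwidth each requires (Mbps),
# ordered best first.
TIERS = [("gold_hd", 8.0), ("silver_sd_plus", 4.0), ("bronze_sd", 2.0)]


def best_available_tier(free_mbps):
    """Return the highest tier that fits the free capacity on the access line,
    so the customer can be told what is achievable before paying."""
    for name, rate in TIERS:
        if rate <= free_mbps:
            return name
    return None  # no capacity now; offer e.g. a discounted network-DVR session later


print(best_available_tier(9.5))  # gold_hd
print(best_available_tier(3.0))  # bronze_sd
print(best_available_tier(1.0))  # None
```

Because the check runs before the session is set up, the operator can price each admission honestly rather than selling a 'gold' service the line cannot actually deliver.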
There is plainly a middle ground to be found between the current Internet model, which allows free access to all services, and a controlled, tiered Internet. Operators rightly want to see a return on their investment in network technologies, but not at the risk of the competitive market. Personalisation is very much a buzzword of the Web 2.0 era: rather than the unknown quantity of provisioning through network partitions, automated resource and admission control allows the operator to tailor service levels to individual subscribers, ensuring guaranteed quality for tiered services while still leaving capacity for innovation in over-the-top services.
Richard Lowe is CEO, Operax