We've been hearing a great deal about ‘converged’, ‘21st Century’ and ‘next generation’ networks, and what they will mean for business. But what does it all actually mean in terms of technology? Peter Thompson takes a look
For all the strides they promise business, next generation networks entail, in technology terms, a radical shift from circuit switching to packet switching. Circuit switching allocates fixed resources to a session (such as a telephone call) for as long as it lasts, regardless of whether they are actually being used at any particular moment; packet switching allocates transmission resources only for as long as it takes to forward the next packet. This is more efficient, since most sources of packets generate them only occasionally (though sometimes in bursts). Equally important is the inherent flexibility of packet switching to cope with variations in demand, and hence to support a wide range of different applications and services. Several packet switching standards have been used, but the clear favorite is IP (the Internet Protocol), which is the basis of an ever-expanding web of enterprise and service provider networks that link together to form the Internet.
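To make the efficiency argument concrete, here is a small Python sketch of statistical multiplexing. The figures (fifty sources, 20 per cent activity, a shared link of 18 units) are illustrative assumptions, not numbers from this article.

```python
import random

# A minimal sketch of statistical multiplexing with assumed, illustrative
# figures. Fifty bursty sources would need fifty dedicated circuits, yet a
# much smaller shared packet link is rarely overloaded.
N_SOURCES = 50
P_ACTIVE = 0.2        # assumed: each source transmits in 1 slot out of 5
SHARED_CAPACITY = 18  # shared link sized for the average demand plus headroom

random.seed(1)
SLOTS = 100_000
overloaded = 0
for _ in range(SLOTS):
    demand = sum(random.random() < P_ACTIVE for _ in range(N_SOURCES))
    if demand > SHARED_CAPACITY:
        overloaded += 1

print(f"Circuit switching reserves {N_SOURCES} units of capacity.")
print(f"A shared link of {SHARED_CAPACITY} units is overloaded in "
      f"{100 * overloaded / SLOTS:.2f}% of slots.")
```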
For the enterprise, shifting to a converged all-IP network translates into immediate productivity gains through integration of different functions - now available in easy-to-use Unified Communications packages - and medium-term cost savings from toll bypass and consolidation of network infrastructure.
If this all sounds a bit too good to be true, that is because there is a catch. Allowing streams of packets from different applications to share resources in a free-for-all makes the network simple and cheap, but it also makes the service each application receives extremely variable. Whenever packets arrive faster than a network link can forward them, a queue forms (this is congestion), delaying packets; if the buffer overflows, some packets are lost. Traditional data applications such as email transfer tolerate this reasonably well, but new real-time services such as IP telephony are very intolerant of such behavior.
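The following Python sketch illustrates this behavior with an assumed traffic model (the load and buffer figures are illustrative only): offer a link slightly more traffic than it can carry and both delay and loss appear.

```python
import random

# A minimal sketch (assumed traffic figures) of congestion on a single link:
# packets arriving faster than the link can forward them build a queue,
# adding delay, and once the finite buffer is full further arrivals are lost.
random.seed(2)
BUFFER_SIZE = 20       # packets the buffer can hold
SLOTS = 200_000

queue = lost = sent = 0
queue_sum = 0
for _ in range(SLOTS):
    # bursty arrivals averaging just over one packet per slot,
    # against a link that forwards exactly one packet per slot
    arrivals = (random.random() < 0.95) + (random.random() < 0.10)
    for _ in range(arrivals):
        if queue < BUFFER_SIZE:
            queue += 1
        else:
            lost += 1          # buffer overflow: the packet is lost
    if queue:
        queue -= 1             # the link forwards one packet per slot
        sent += 1
    queue_sum += queue         # queue length is a proxy for waiting time

print(f"packet loss rate : {lost / (lost + sent + queue):.2%}")
print(f"average queue    : {queue_sum / SLOTS:.1f} packets")
```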
The upshot is that, despite all its benefits, a converged packet network can't be considered a reliable substitute for a circuit-switched one without something in place to ensure it provides an appropriate quality of service (QoS) for all critical (and particularly real-time) applications. This means giving each application enough bandwidth, and keeping packet loss and end-to-end delay within bounds. Loss and delay can only get worse as a stream of packets crosses a network, so it makes sense to allocate an end-to-end budget for these parameters across the different network segments. Each part of the network can then attempt to meet its budget using a variety of methods.
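The budgeting arithmetic itself is simple, as the Python sketch below shows; the targets and the per-segment split here are assumptions for illustration rather than figures from this article. One-way delays add up across segments, while loss probabilities compound.

```python
# A minimal sketch of end-to-end QoS budgeting with assumed targets and an
# assumed three-segment split.
DELAY_BUDGET_MS = 150.0   # assumed one-way delay target for voice
LOSS_BUDGET = 0.01        # assumed 1% end-to-end loss target

segments = {
    "access (near end)": {"delay_ms": 40.0, "loss": 0.004},
    "core":              {"delay_ms": 50.0, "loss": 0.001},
    "access (far end)":  {"delay_ms": 40.0, "loss": 0.004},
}

total_delay = sum(s["delay_ms"] for s in segments.values())
survival = 1.0
for s in segments.values():
    survival *= 1.0 - s["loss"]       # probability of crossing every segment
total_loss = 1.0 - survival

print(f"end-to-end delay: {total_delay:.0f} ms "
      f"({'within' if total_delay <= DELAY_BUDGET_MS else 'over'} budget)")
print(f"end-to-end loss : {total_loss:.2%} "
      f"({'within' if total_loss <= LOSS_BUDGET else 'over'} budget)")
```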
A technique used in the high-bandwidth and high-connectivity core of a network is to control the routes that streams of packets take so as to avoid congestion almost entirely.
MPLS (Multi-Protocol Label Switching), with its traffic engineering extensions, is a standardized way to do this, but some of the IXCs also use proprietary mechanisms that work well enough for them to carry billions of call minutes annually over converged IP networks using VoIP.
Move towards the edge of the network, however, and the number of alternative routes diminishes. The capacity of individual links also falls, making occasional congestion much harder to avoid. At the level of an individual WAN access link it becomes almost inevitable, so packets will often queue up to cross it. Delivering QoS then becomes a matter of managing this queuing process to assure service for critical packet flows even when the link is saturated. That can be very tricky when several different applications are all ‘critical’ yet have wildly different throughput requirements and sensitivities to packet loss and delay. This problem is a major drag on the uptake of converged networks, causing them to be widely regarded as ‘complicated’ and ‘difficult’ when they ought to be making life easier.
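The simplest form of such queue management is strict prioritization. The Python sketch below shows the idea only; it is not any particular product's mechanism, and the class names are illustrative.

```python
from collections import deque

# A minimal sketch of strict-priority queuing on a saturated access link:
# critical packets are always forwarded first, so their delay stays low even
# while best-effort traffic backs up behind them.
queues = {"critical": deque(), "best_effort": deque()}

def enqueue(klass, packet):
    queues[klass].append(packet)

def dequeue():
    # serve the critical queue first; best-effort only gets leftover capacity
    for klass in ("critical", "best_effort"):
        if queues[klass]:
            return klass, queues[klass].popleft()
    return None

# usage: a voice packet arriving behind a data backlog is still sent first
for i in range(3):
    enqueue("best_effort", f"data-{i}")
enqueue("critical", "voice-0")
print(dequeue())   # ('critical', 'voice-0')
print(dequeue())   # ('best_effort', 'data-0')
```

The weakness of this simple approach is exactly the one described above: with several ‘critical’ classes, one can starve the others.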
One reason the available QoS mechanisms don't help as much as they might is that they fail to take account of the intrinsic interactions between the different QoS parameters, or rather between the resource competitions that affect them. At a congestion point, packet streams compete for the outgoing link bandwidth, and since having more traffic than capacity is the definition of congestion in the first place, many ‘QoS’ implementations focus on managing this one competition: they provide a way to allocate bandwidth. However, this isn't the only limited resource. Queued packets have to be stored somewhere, and any buffer can only hold so many, so there is a second competition between the streams, for space in this buffer, which determines their packet loss rate.
Finally there is the limitation that, however fast the network link, it sends only one packet at a time, so there is a third competition, to be selected for transmission from the buffer, which determines queuing delay. These three competitions are interlinked: increasing the amount of buffering to reduce packet loss, for example, means more packets are queued up to send and hence increases average delay. Even if a series of QoS mechanisms can be combined to manage all three of these competitions, the behind-the-scenes interactions between them will undermine every attempt to deliver precise and reproducible QoS. In practice, reasonable QoS can then only be achieved by leaving substantial headroom, resulting in very inefficient use of the link, which is a high price to pay for a solution that was supposed to save money!
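The buffering-versus-delay interaction is easy to demonstrate. The Python sketch below sweeps the buffer size for the same assumed traffic model used earlier: more buffering cuts packet loss but lengthens the average queue, and hence the delay.

```python
import random

# A minimal sketch (assumed traffic model) of the loss/delay interaction:
# enlarging the buffer reduces loss but lets the queue, and so the average
# delay, grow. The competitions cannot be tuned independently.
def run(buffer_size, slots=200_000, seed=3):
    random.seed(seed)
    queue = lost = sent = 0
    queue_sum = 0
    for _ in range(slots):
        # bursty arrivals at roughly 98% of the link rate
        arrivals = (random.random() < 0.83) + (random.random() < 0.15)
        for _ in range(arrivals):
            if queue < buffer_size:
                queue += 1
            else:
                lost += 1
        if queue:
            queue -= 1
            sent += 1
        queue_sum += queue
    return lost / (lost + sent + queue), queue_sum / slots

for size in (5, 20, 80):
    loss, mean_queue = run(size)
    print(f"buffer {size:3d} packets: loss {loss:6.2%}, "
          f"mean queue {mean_queue:5.1f} packets")
```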
Predictable multi-service QoS
Fortunately, a new generation of QoS solutions is emerging that manages the key resource competitions at a network contention point with a single, general mechanism rather than a handful of special-purpose ones. This not only controls the intrinsic interactions but also allows trade-offs between different packet streams, for example giving a voice stream lower delay and a control stream lower loss within the same overall bandwidth. By starting from a multi-service perspective, multiple critical applications can all be prioritized appropriately without any risk of one dominating and destroying the performance of the others. Embracing the inherently statistical nature of packet-based communications makes the resulting QoS both predictable, eliminating surprises when the network device is configured, and efficient: up to 90 per cent of a link's capacity can be used for packet streams requiring QoS, with the rest filled by best-effort traffic.
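To show the shape of the idea (and only the idea; this is not any vendor's actual mechanism, and the stream names and rankings are assumptions), the Python sketch below gives each stream two rankings: urgency, which decides who is transmitted first and so controls delay, and importance, which decides who is discarded last and so controls loss.

```python
from collections import deque

# A minimal sketch of a single mechanism handling both the buffer and the
# scheduling competitions: a voice stream gets low delay while a control
# stream gets low loss, within the same shared link and buffer.
streams = {
    "voice":   {"urgency": 0, "importance": 1, "q": deque()},  # lowest delay
    "control": {"urgency": 1, "importance": 0, "q": deque()},  # lowest loss
    "data":    {"urgency": 2, "importance": 2, "q": deque()},  # best effort
}
BUFFER_LIMIT = 8  # assumed total packets held across all streams

def enqueue(name, packet):
    if sum(len(s["q"]) for s in streams.values()) >= BUFFER_LIMIT:
        # buffer full: discard from the least important non-empty stream
        victim = max((s for s in streams.values() if s["q"]),
                     key=lambda s: s["importance"])
        victim["q"].popleft()
    streams[name]["q"].append(packet)

def dequeue():
    # transmit from the most urgent non-empty stream
    ready = [s for s in streams.values() if s["q"]]
    return min(ready, key=lambda s: s["urgency"])["q"].popleft() if ready else None
```

Because one mechanism arbitrates both the buffer and the transmission order, the interactions between loss and delay are explicit design choices rather than accidental side-effects of stacked special-purpose features.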
Applying this technology at severe contention points, such as the WAN access link, enables the biggest potential losses of QoS for critical applications to be controlled. This makes the QoS ‘budget’ for the rest of the network achievable using established techniques such as route control and bandwidth over-provisioning.
For the business, this QoS technology is most useful for managing the WAN access link to the rest of the network. Combining it with session awareness, NAT/firewall/router functions and the ability to convert legacy applications such as conventional telephony into converged applications such as SIP VoIP produces a new class of device called a Multi-service Business Gateway (MSBG). Such a device can be managed either by a service provider delivering managed services or by a business buying simple connectivity from a provider. It also offers a convenient point for QoS assurance, such as VoIP quality measurement to ensure that SLAs are not breached. Overall it is an enabler for reliable, converged, packet-based services, allowing the full potential of 21st Century networks to be realized. We are only just beginning to see the changes this will bring to both business processes and everyday life.
Peter Thompson is Chief Scientist at U4EA Technologies and can be contacted at firstname.lastname@example.org