IMS - the next-generation network architecture - was to become the great unifier for all our disparate access technologies, and the cure-all for vendor interoperability issues. Done the wrong way, however, it can create an overly complex, difficult-to-manage architecture. This realisation has put a renewed focus on interoperability testing and network monitoring. Chad Hart explores the challenges and makes the case for a lifecycle approach to testing and monitoring of NGNs and IMS networks
Most operators want an NGN, but few have actually been deployed. One major reason is that making these networks operate reliably is challenging, and many initiatives never make it out of the lab. This is especially true for the IP Multimedia Subsystem (IMS). New approaches to quality assurance are, however, changing this - which is where testing and monitoring come into the picture.
Those responsible for looking after the quality of NGNs and IMS-based networks face many challenges. In the first instance, they are complex beasts. Almost by definition, they are made up of many devices, offer several different kinds of services, interface with many legacy networks, and have to interact with other providers' networks.
The IMS architecture is especially complicated. It comprises many different protocols, dozens of standardised functions, and even more interfaces. Coping with this seemingly endless detail is a challenge in itself for quality engineers.
In theory, specifications should make designing and implementing advanced networks easier: the standards should provide a good guide for everyone to follow. In reality, many standards - particularly those for IMS - are incomplete or have major pieces missing. Compounding this, many industry bodies develop different specifications that apply to NGNs, including the IETF, ETSI TISPAN, and 3GPP. These bodies also frequently update their work, making it an arduous task to keep track of which versions to adhere to.
Thirdly, engineers face the challenge of identifying and sourcing all the pieces of this jigsaw puzzle, and then making them work together. Because of the complex nature of the IMS architecture, and the many ambiguities in the standards, interoperability becomes a serious issue. Often the components from one vendor do not work with those of another without a significant amount of integration work.
Furthermore, because no one vendor does everything exceptionally well, operators must contend with each vendor's weaknesses and go through an often laborious interoperability testing process. Alternatively, operators can pick a vendor that has already interoperated with the best-of-breed components - but even this is not without its challenges.
Finally, the single most important challenge is to keep your customers and subscribers happy. End users do not care how the services they use are implemented - all they are after is a high-quality, reliable, secure and affordable service. It is therefore crucial for IMS implementers to hide the complexity from their users while providing consistent - or even higher - service quality levels. Meeting these challenges requires a more advanced approach to ensuring quality.
You'd think all these challenges make it almost impossible for any NGN - never mind an IMS network - to make it to market. But operators are dealing with them. They are providing their customers with top-notch services, and we believe this is because they have realigned their quality assurance processes and invested time and money into continuously testing and monitoring their networks.
Before progressing from a concept to a deployed network offering a service, operators put their networks through gruelling tests, often spread across several distinct lifecycle stages. Typically these start within the infrastructure vendors and transition to the operator, covering research and development, quality assurance, production, field trials, deployment, and on-going maintenance. Quality assurance should be applied within each phase.
Traditionally, each group has had its own staff, equipment, processes, and test plans, with little shared between groups. However, the many challenges created by IMS mean that the traditional approach to managing quality must become more flexible - there is simply too much that can go wrong.
With too few quality engineers to meet today's needs, the lifecycle function needs to be adaptable. Increasingly, these separate groups are collaborating to carry out thorough, implementation-specific testing. This can take the form of shared test methodologies, shared lab equipment, shared test metrics, shared test scripts, or even shared test engineers; what's critical is that no testing takes place in isolation.
When doing any job, it's fundamental to use the right tools. So when managing the lifecycle approach to quality assurance, it's imperative your teams are armed with the best tools for the job - especially if your quality assurance is to remain watertight.
Lifecycle testing and monitoring consists of several different elements; typically these include:
Subscriber simulation/call generation - the slowest and least sophisticated way to test a network is to make manual calls into it and report on the result of each one. Although this works for simple tests, it's a poor approach to complex feature and scenario testing, which would take hours to run and be difficult to manage. It would take thousands of callers, each with dozens of phones, even to approach the traffic levels today's load tests require.
Call generation tools can normally emulate specific end-point devices from a signalling and media perspective, as well as simulate end-user calling behaviours. These tools usually have specialised capabilities for feature testing, load testing, and test automation; they often support advanced voice quality measurements and offer reporting capabilities that are simply not feasible with manual testing.
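To give a flavour of what scripted call generation involves, the sketch below builds minimal SIP INVITE requests and paces them at a target call rate. It is purely illustrative - the domain, user names and header values are hypothetical, and a real call generator would also handle SDP negotiation, media streams and response tracking.

```python
# Illustrative sketch of a scripted SIP call generator.
# All names and domains below are hypothetical examples.
import itertools
import uuid

def build_invite(from_user: str, to_user: str, domain: str) -> str:
    """Build a minimal SIP INVITE request; header values are illustrative."""
    call_id = uuid.uuid4().hex
    return (
        f"INVITE sip:{to_user}@{domain} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP testgen.{domain};branch=z9hG4bK{call_id[:8]}\r\n"
        f"From: <sip:{from_user}@{domain}>;tag={call_id[:6]}\r\n"
        f"To: <sip:{to_user}@{domain}>\r\n"
        f"Call-ID: {call_id}@testgen.{domain}\r\n"
        f"CSeq: 1 INVITE\r\n"
        f"Max-Forwards: 70\r\n"
        f"Content-Length: 0\r\n\r\n"
    )

def generate_load(calls: int, calls_per_second: float, domain: str = "example.net"):
    """Yield (send_time_offset_seconds, INVITE) pairs paced at the target rate."""
    interval = 1.0 / calls_per_second
    users = itertools.cycle(range(1000, 1010))  # small pool of test subscribers
    for n in range(calls):
        yield n * interval, build_invite(f"caller{next(users)}", "callee", domain)
```

In practice the offsets would drive a scheduler that sends each request over UDP or TCP and correlates the responses back to the generated Call-IDs.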
Infrastructure emulation - legacy networks have one main network switching component, known as the Class 5 switch/softswitch or MSC. The IMS model separates this into several dozen clear-cut software and component functions such as CSCFs, ASs, BGCFs and a whole slew of other acronyms. As a consequence, most of today's IMS core infrastructure devices demand a considerable amount of interaction with other infrastructure devices in order to function. Unfortunately, fitting all these devices into a test lab is not practical or feasible. Infrastructure emulation tools let quality assurance engineers emulate specific infrastructure devices, as well as the distinct vendor implementations of those devices. They also help operators save a significant amount of physical space, configuration time and capital equipment cost.
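The idea can be sketched with a toy stand-in for a single IMS function - a CSCF is used here purely as an example. It parses the request line of an incoming SIP message and returns a canned response, so a device under test can be exercised without a full vendor core. Real emulation tools model vendor-specific behaviour, full state machines and all the surrounding interfaces.

```python
# Toy emulation of one IMS core function: answer supported SIP methods
# with canned responses so neighbouring devices can be tested in isolation.
# The method/response table below is illustrative, not a real product's.

CANNED = {"INVITE": "200 OK", "REGISTER": "200 OK",
          "BYE": "200 OK", "OPTIONS": "200 OK"}

def emulate_cscf(raw_request: str) -> str:
    """Return a minimal SIP response for a supported method, else 501."""
    method = raw_request.split(" ", 1)[0].strip().upper()
    status = CANNED.get(method, "501 Not Implemented")
    return f"SIP/2.0 {status}\r\nContent-Length: 0\r\n\r\n"
```

A lab harness would bind functions like this to the ports a device under test expects its neighbours on, removing the need to rack every real component.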
Network emulation - labs are typically set up in a single room, with all the devices connected to a single data switching infrastructure. Real-world IP networks are quite different: many switches and routers connect an array of devices across hundreds of miles, via many differing network topologies. This causes packet losses and delays that you cannot see in a lab environment. Network emulation products let you emulate these wide-area conditions, and even allow you to introduce jitter, bit error rates, and link outages.
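The impairments such a product introduces can be modelled very simply, as a sketch: each packet is dropped with some probability, and survivors are delayed by a base latency plus random jitter. The parameter values are arbitrary examples; dedicated emulators also model bit errors, reordering and link outages.

```python
# Minimal model of network-emulator impairments: probabilistic loss
# plus base delay and uniform jitter. Parameter values are examples only.
import random

def impair(packets, loss_rate=0.01, base_delay_ms=40.0, jitter_ms=10.0, seed=42):
    """Return (delivered_packet, delay_ms) pairs after simulated impairment."""
    rng = random.Random(seed)  # seeded for reproducible test runs
    delivered = []
    for pkt in packets:
        if rng.random() < loss_rate:
            continue  # packet lost in transit
        delay = base_delay_ms + rng.uniform(0.0, jitter_ms)
        delivered.append((pkt, delay))
    return delivered
```

Feeding test traffic through a model like this (or through real emulation hardware) exposes how codecs, jitter buffers and retransmission logic behave outside the pristine conditions of a single-room lab.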
Troubleshooting and diagnostics - being able to identify limitations and problems is the sign of a good test. But how can you tell whether an issue was caused by the network rather than by faulty testing? Troubleshooting and diagnostic tools let engineers isolate and analyse each problem. The information gathered is invaluable to development engineers, as it allows them to fix any bugs discovered. Typical diagnostic tools for IMS networks offer low-level signalling message decoding and voice quality analysis capabilities.
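The message-decoding side of such tools can be illustrated with a small sketch that splits a raw SIP message into its start line and a header dictionary, so an engineer can inspect Call-IDs, CSeq values and routing headers when isolating a fault. Production decoders go much further, covering Diameter, RTP statistics and full call-flow correlation.

```python
# Sketch of low-level SIP message decoding for diagnostics:
# split a raw message into its start line and a header dictionary.

def decode_sip(raw: str):
    """Return (start_line, headers_dict) for a raw SIP message."""
    head = raw.split("\r\n\r\n", 1)[0]   # ignore any message body
    lines = head.split("\r\n")
    headers = {}
    for line in lines[1:]:
        if ":" in line:
            name, _, value = line.partition(":")
            headers[name.strip().lower()] = value.strip()
    return lines[0], headers
```

With decoded fields in hand, an engineer can correlate a failing call's Call-ID across capture points and decide whether the fault lies in the network or in the test itself.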
Service monitoring - because of the intricate make-up of advanced networks, it's foreseeable that problems can arise over time, even after thorough lab testing has taken place ahead of deployment. Therefore it's important to proactively monitor the quality of service the network is delivering after being rolled out to customers, and to swiftly respond to any problems that may arise.
To achieve this, most service providers deploy a monitoring system. This may be passive, simply listening to network traffic; active, making measurements against system-generated calls; or a mixture of both. In either case it characteristically includes reporting metrics useful to network operations personnel, as well as specialised diagnostic and analysis tools that help them find and resolve network problems.
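One example of the kind of reporting metric such a system derives from call records is the answer-seizure ratio (ASR): the fraction of call attempts that were answered. The sketch below computes it and raises a simple threshold alarm; the record format and threshold are illustrative assumptions, not any particular product's.

```python
# Sketch of a monitoring metric: answer-seizure ratio (ASR) with a
# threshold alarm. Record format and threshold are illustrative only.

def answer_seizure_ratio(calls):
    """calls: iterable of dicts with a 'final_status' SIP response code (int)."""
    attempts = answered = 0
    for call in calls:
        attempts += 1
        if call["final_status"] == 200:
            answered += 1
    return answered / attempts if attempts else 0.0

def asr_alarm(calls, threshold=0.5):
    """Flag an operations alarm when ASR falls below the threshold."""
    asr = answer_seizure_ratio(calls)
    return {"asr": asr, "alarm": asr < threshold}
```

A passive monitor would feed this from decoded live signalling, while an active monitor would feed it from the results of its own system-generated test calls.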
The testing and monitoring requirements of today's NGNs and emerging IMS networks are substantially broader and deeper than anything the industry has seen before. Yet creating a comprehensive test program that can be applied across the various layers, functions, applications and lifespan of such a network is achievable. By using the advanced tools and techniques available in the marketplace, you can tackle quality assurance issues from day one, and beyond.
So even if you're only in the initial stages of designing your NGN or IMS network, testing and monitoring should be at the top of your priority list - if they're not, it could spell doom for the entire project.
Chad Hart is Product Marketing Manager, Empirix