Cable operators must streamline their networks for faster service rollout if they are to guard against hungry telcos, says Bill Bondy
As telcos race to roll out IPTV along with Internet access, VoIP, e-mail, messaging and security services, cable operators cannot rest on their laurels, relying on their strongholds in the entertainment and broadband industries. Despite cable's solid brand recognition and established customer loyalty, telcos could gain considerable ground on cable turf by boasting "on-demand" TV capabilities and personalisation of "blended lifestyle services" in their quad plays.
If IPTV subscriptions grow to 36.8 million by 2009, as predicted by Multimedia Research Group, this personalisation will be a significant differentiator.
To stay ahead, MSOs must recognise the many identities a person assumes as he or she moves among personal, professional and leisure profiles. A subscriber can be a wife, a mum, an office manager, a tennis player, an antiques collector or a dancer at different times in the same day. The fact that a subscriber can change service settings according to time of day, location or situation can be leveraged to build loyalty through an improved perception of service quality.
The problem is that embracing the customer, and delivering seamless hand-offs among TV, fixed telephony, broadband and cellular networks, will require substantial engineering feats. Of paramount importance will be the ability to instantly access information about bandwidth requirements, QoS, permissions, pricing plans, credit balances, locations and device types.
To achieve this, there needs to be a one-stop shop for data, and an understanding of how dynamic services fit into rigid legacy networks with silo data storage structures.
While service management, control and security can be greatly simplified by unifying subscriber-specific data, the fact remains that multitudes of protocols and access methods span many components (eg RADIUS, AAA, session accounting, policy management and HSS). That makes consolidation a very daunting task.
With so many different types of databases to manage, each with its own protocols and access methods, there is often a duplication rate of up to 35 per cent. More often than not, manual processes and forklift migrations are the status quo for re-synchronising databases with networks in order to keep up with increasingly rapid service changes.
The new-world view of data centralisation is more dynamic, as it focuses on real-time capabilities and on-the-fly transactions. These capabilities require a move away from historical, report-oriented strategies that sat at the core of monstrous data warehousing initiatives and did not have rigorous latency and response time requirements. Monolithic libraries of information now have to give way to intelligent databases that "grip" data for deeper personalisation of services and ever-higher levels of performance.
To do so, cable companies have to break away from reliance on "transform layers" or "federation layers" that sit on top of multiple databases as an ad hoc "glue". While these layers help applications and clients to better understand the nature of queries, they cease to be real-time responsive when dealing with, say, 50 databases. Because each data repository possesses its own access interfaces and protocols, the glue is no longer enough when cross-database access within the network is required. The resulting lags in core network service and application performance are a major liability.
A centralised view will instead depend on the creation of one logical database to house all subscriber data with a discoverable, published common subscriber profile, as well as a single set of interfaces for managing that data (eg LDAP, Telnet, SNMP). The single logical database will co-exist with data federation to allow a gradual, step-by-step migration of data on a silo-by-silo basis until the operator has consolidated all required subscriber data to the degree possible.
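As a minimal sketch of what that single set of interfaces might look like in practice, the Python fragment below reads a consolidated subscriber profile over LDAP using the open-source ldap3 library; the host name, base DN and attribute names are assumptions made for illustration rather than any operator's real schema.

```python
# A minimal sketch of reading a consolidated subscriber profile over LDAP.
# The host, base DN and attribute names are illustrative assumptions,
# not a real operator schema.
from ldap3 import Server, Connection, ALL

server = Server("ldap://nds.example.net", get_info=ALL)
conn = Connection(server,
                  user="cn=reader,dc=operator,dc=net",
                  password="secret",
                  auto_bind=True)

# One search against one logical directory replaces per-silo lookups
# (RADIUS store, VoIP store, billing store, and so on).
conn.search(
    search_base="dc=subscribers,dc=operator,dc=net",
    search_filter="(uid=jsmith)",
    attributes=["cn", "pricingPlan", "qosProfile", "deviceType"],
)
for entry in conn.entries:
    print(entry.entry_dn, entry.pricingPlan, entry.qosProfile)
```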
Subscriber data is at the heart of controlling the user experience and quality across networks. By consolidating customer data, MSOs enable provisioning and maintenance from one centralised location. A one-step process for adding all subscriber and service data to a single database would give cable companies a huge opportunity to activate complex services within seconds of a customer order rather than, in some cases, hours or days.
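To illustrate the one-step idea, under the same assumed names as above, activating a subscriber can reduce to a single directory add:

```python
# Hypothetical one-step activation: a single add to the consolidated
# directory replaces a chain of per-silo provisioning jobs.
from ldap3 import Server, Connection

conn = Connection(Server("ldap://nds.example.net"),
                  user="cn=provisioning,dc=operator,dc=net",
                  password="secret",
                  auto_bind=True)

conn.add(
    "uid=jsmith,dc=subscribers,dc=operator,dc=net",
    object_class=["top", "inetOrgPerson"],
    attributes={"cn": "Jane Smith",
                "sn": "Smith",
                "pricingPlan": "gold",          # assumed custom attributes
                "qosProfile": "video-priority"},
)
print(conn.result["description"])  # "success" once the entry is live
```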
Instant access to synchronised data will greatly improve the customer experience, as well as create tremendous opex and capex savings. Potentially, miles of racks and servers could be eliminated if terabytes of data were moved onto pizza-box-sized hardware rather than complicated SANs and larger servers.
To realise these capex and opex benefits, certain components are crucial to centralising subscriber data across different network layers: a hierarchical, extensible database; real-time performance; massive linear scalability; continuous availability; standard, open interfaces; and a common information model. To prepare for the day when IMS becomes a reality, it will also be important to leave room for a software upgrade to a full-blown HSS.
As cable operators move to PacketCable 2.0 environments, building and maintaining a subscriber-centric architecture will be key to services that require very fast, reliable and resilient repositories serving multiple applications concurrently. After all, latency is not tolerated even in today's pre-IMS networks, which could spell doom for quad plays that are not built on a consolidated, subscriber-centric architecture.
A network directory server (NDS) is the first step in freeing customer data from silos, as an NDS puts a directory at the heart of the network. With a centralised repository, service logic can be separated from subscriber data, enabling a cable operator to have VoIP and associated services working on WiFi, because the subscriber data can be reused across various access networks (eg VoIP over cable, CDMA or GSM).
Additionally, the application-independent and hierarchical nature of an NDS makes it extremely flexible and extensible, and better suited to hosting data for multiple applications and multiple access networks than embedded relational databases. A proper NDS directory structure suits the disparate nature of the data prevalent in converged networks, which involves dynamic, real-time relationships. An NDS directory is object-oriented in nature, with a data model that is published, enforced and maintained by the directory itself.
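To make the hierarchical, multi-application idea concrete, the sketch below models one subscriber entry anchoring per-application branches, as a directory information tree might; every name in it is invented for illustration.

```python
# Illustrative shape of an NDS subtree: one subscriber entry with
# per-application child entries, so each application reads its own
# branch while sharing the common parent. All names are hypothetical.
subscriber_subtree = {
    "uid=jsmith,dc=subscribers,dc=operator,dc=net": {
        "cn": "Jane Smith",
        "children": {
            "app=voip": {"sipUri": "sip:jsmith@operator.net"},
            "app=wifi": {"aaaRealm": "wlan.operator.net"},
            "app=iptv": {"packageTier": "premium"},
        },
    },
}

# The same subscriber record serves VoIP, WiFi AAA and IPTV,
# rather than being copied into three application silos.
```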
For a network directory server to provide these capabilities in the core of the MSO network, it is critical that it be highly performant, massively scalable, and geographically resilient.
Typical disk-based databases and legacy directories don't offer the read/write speed operators need to consolidate data in a live core network. Average latencies of three milliseconds for a query and less than five milliseconds for an update are needed to meet customers' performance expectations. Update performance is critical, and highly distributed, memory-resident directory databases can offer update (as well as query) transaction scalability at the point of access.
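A rough, single-client sanity check against those budgets might look like the fragment below; a real benchmark would of course drive many concurrent clients over realistic data volumes, and the host and DNs here are assumed.

```python
# Rough, single-client check of a directory against the latency budgets
# above (~3 ms per query, ~5 ms per update). Host and DNs are assumed.
import time
from ldap3 import Server, Connection

conn = Connection(Server("ldap://nds.example.net"), auto_bind=True)

start = time.perf_counter()
conn.search("dc=subscribers,dc=operator,dc=net", "(uid=jsmith)")
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"query latency: {elapsed_ms:.2f} ms (budget: 3 ms)")
```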
Just as critical as performance is availability: a consolidated single logical database must always be up, because downtime is lost business. The network directory must provide continuous availability even in the event of multiple points of failure throughout the network, making it ideal for geographically dispersed networks and for business continuity reassurance. NDS technology can be scaled massively, using data partitioning and distribution to host virtually unlimited quantities of data. Transactions and resilience are scaled by replicating data in real time over multiple local and geographically distributed servers.
To make this scalability cost-effective, the hardware must be compact, inexpensive and non-proprietary, and the NDS software must be able to scale linearly with the hardware. In fact, the hardware necessary for high transaction rates at the aforementioned low latency is actually very small. A small network directory system can yield 10,000 transactions per second against a couple of million subscriber profiles on a handful of dual-core servers running Linux.
That is a big difference from relational systems, which rely on expensive and complex hardware to scale to high transaction rates and directory sizes. Relational systems often struggle to scale capacity beyond a single server or operating system footprint, forcing much more expensive hardware into a network and increasing both opex and capex. Relational databases do have their place, being better suited to batch-mode, complex billing- and CRM-type operations; but for voice, SMS and Internet services, distributed in-memory directories are more adept at handling real-time use, when and where the data is needed.
Directories also help to simplify integration by supporting access through common IT technologies and protocols, such as LDAP, XML/SPML and SOAP. Using IT technologies and protocols broadens the pool of qualified professionals who can support such a system. This translates into substantial cost savings, as operators can implement open interfaces on off-the-shelf hardware and operating systems. Keeping network components adaptable to a wide range of equipment brings down support and maintenance costs.
Furthermore, to realise all the benefits of an NDS, it is critical that forethought be put into designing a common information model (CIM). This is the foundation for a useful, extensible data model that encourages data re-use while allowing applications to co-exist peacefully in a multi-application, single-logical-database environment. The CIM arranges subscriber, network and application data into several categories: subscriber identities, common shared global data, application-specific shared data, and private data.
Unfortunately, no standard model exists: every operator has its own information model and its own methodology for migrating and consolidating applications. However, most MSOs can build a common data repository within their network using an evolutionary approach. Starting with a single application that fulfils an emerging need of the MSO (eg presence or IM), the CIM data model framework can be established. This provides the foundation upon which other application data may be integrated and built. From then on, new applications (eg WiFi, AAA or policy management) build on the existing model. The key is to establish the proper foundation first and then add to it incrementally.
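The evolutionary approach can be pictured as follows: a first application establishes the category framework, and later applications extend it rather than spawning new silos. The categories mirror those above; every field and application name is invented for illustration.

```python
# Sketch of growing a common information model (CIM) incrementally.
# Categories mirror the article; field and application names are
# invented for illustration.
common_model = {
    "subscriber_identities": ["uid", "msisdn", "sipUri"],
    "common_shared": ["cn", "serviceStatus", "pricingPlan"],
    "app_shared": {},   # data shared by a subset of applications
    "private": {},      # data owned by exactly one application
}

def register_app(model, app, shared=None, private=None):
    """Extend the existing CIM instead of creating a new silo."""
    if shared:
        model["app_shared"][app] = shared
    if private:
        model["private"][app] = private

# The first application (eg presence) establishes the framework...
register_app(common_model, "presence", private=["presenceState"])
# ...and later applications build on what already exists.
register_app(common_model, "wifi_aaa", shared=["aaaRealm"], private=["wlanKey"])
register_app(common_model, "policy", shared=["qosProfile"])
```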
The CIM allows cable and telco operators to share data in a single logical database, as it houses re-usable data that can serve new applications and services. As new applications are added and existing ones evolve, data models are analysed and changes are often required. Where data is part of the common model, changes can be applied to existing application data models using virtualisation techniques. So-called virtualisation is the ability to provide application clients with different views of the common data based on the identity of the accessing agent. This allows the common data model to be filtered, re-organised or enhanced to fit each individual application client's requirements, while keeping the core data model intact and un-entangled with any specific application.
As data is "virtualised", objects can be viewed according to different characteristics: for example, attributes specific to a particular application can be exposed, or an object's distinguished name can vary according to the accessing application or user. That means data is implemented once and managed as one instance, yet can be presented over and over again, in a different shape for each consumer.
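A minimal sketch of this virtualisation, with one stored entry and per-client views, follows; the view definitions and attribute names are assumptions.

```python
# One master entry, many views: each accessing application sees only
# the attributes its view allows. All names are hypothetical.
MASTER_ENTRY = {
    "uid": "jsmith",
    "cn": "Jane Smith",
    "pricingPlan": "gold",
    "sipUri": "sip:jsmith@operator.net",
    "wlanKey": "k3y-material",
}

VIEWS = {
    "voip":    {"uid", "cn", "sipUri"},
    "wifi":    {"uid", "wlanKey"},
    "billing": {"uid", "cn", "pricingPlan"},
}

def view_for(client, entry=MASTER_ENTRY):
    """Filter the common entry to what this client is allowed to see."""
    allowed = VIEWS[client]
    return {k: v for k, v in entry.items() if k in allowed}

print(view_for("voip"))   # the VoIP client sees only its SIP-related data
```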
As the CIM evolves, cable companies will need to find the synergies that let applications share common data. Once objects are shared, the process continues: schemas are designed for new applications and merged into the common model.
As operators consolidate their subscriber data, the platform they choose must offer a seamless migration path to supporting IMS data via an HSS. This prevents an operator from deploying yet another silo if and when it decides to deploy IMS. An HSS can source its data from the NDS, storing it as part of the CIM, thereby allowing IMS applications, as well as non-IMS applications, to source their data from the NDS. This has the potential to give non-IMS and IMS applications a way to share common data and services across different access planes. An HSS essentially sits on top of the NDS, continuing the evolution of consolidation as it enhances the CIM with an operator's IMS subscribers, the characteristics of their connected devices and their service preferences.
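One way to picture that layering, under the same illustrative assumptions as the earlier fragments, is an HSS front-end that answers IMS queries by reading the consolidated directory:

```python
# Sketch of an HSS layered on the NDS: IMS queries are answered from
# the same consolidated directory that non-IMS applications use.
# Directory layout and attribute names are hypothetical.
from ldap3 import Server, Connection

class HssFrontEnd:
    def __init__(self, uri="ldap://nds.example.net"):
        self.conn = Connection(Server(uri), auto_bind=True)

    def ims_profile(self, impu):
        """Look up an IMS public identity stored alongside non-IMS data."""
        self.conn.search(
            "dc=subscribers,dc=operator,dc=net",
            f"(imsPublicIdentity={impu})",
            attributes=["imsServiceProfile", "deviceType", "pricingPlan"],
        )
        return self.conn.entries

# hss = HssFrontEnd()
# print(hss.ims_profile("sip:jsmith@operator.net"))
```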
For cable operators to guard their markets against hungry telcos who are charging toward IPTV, Internet service, VoIP and other traditionally 'cable' services, they must start planning now how to streamline their networks for faster service rollout. To achieve a quad-play set of offerings, consolidation of subscriber data into unified views of customer profiles across multiple services is essential.
Bill Bondy is CTO Americas for Apertio, and can be contacted via: email@example.com