Jeff Nick examines the concept of ‘inflection points’ and how they apply to information processes
Every few decades, a fundamental shift occurs in how IT reaches customers. The shifts are akin to what Intel co-founder Andy Grove calls ‘strategic inflection points’, or a ‘time in the life of a business when its fundamentals are about to change.’
We’ve seen bandwidth, information growth, and infrastructure complexity explode in this decade as applications have become more integrated and dynamic. This has forced the IT industry to address inflection points centred on four areas.
The first inflection point reflects the changing nature of information management.
Today, most information lifecycle management (ILM) approaches are volume based: ‘buckets’ of information move based on coarse descriptions of where the information currently sits, how big it is, who owns it, how old it is, and so on. Customers already gain value by automatically protecting information and optimising where it’s stored.
But ILM is evolving. Tools can now automatically classify information and apply ‘metadata’ labels based on its content. The idea is simple: scan information and use what you learn to enable automation, as the examples and the short sketch that follow suggest.
• “No one’s accessing this part of the database. Let’s automatically move it to a lower service level.”
• “This e-mail says ‘Confidential.’ We’ll automatically flag it for retention.”
• “This spreadsheet may contain personnel information. We’ll automatically ensure it’s secured.”
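To make the idea concrete, here is a minimal sketch of how such content-driven rules might be expressed. The rule logic, labels and thresholds are invented for illustration and are not drawn from any particular product.

    # Illustrative sketch: classify a document by its content and access history,
    # emitting metadata labels that downstream policy automation could act on.
    # All labels and thresholds here are hypothetical examples.
    import re

    def classify(document_text, days_since_last_access):
        labels = {}
        if days_since_last_access > 180:
            labels['service_level'] = 'archive'      # move to a lower service level
        if re.search(r'\bconfidential\b', document_text, re.IGNORECASE):
            labels['retention'] = 'hold'             # flag for retention
        if re.search(r'\b(salary|national insurance|ssn)\b', document_text, re.IGNORECASE):
            labels['security'] = 'restricted'        # ensure it is secured
        return labels

    print(classify("CONFIDENTIAL: salary review for Q3", days_since_last_access=200))
    # -> {'service_level': 'archive', 'retention': 'hold', 'security': 'restricted'}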
The second inflection point ties into configuring IT assets for greater business value.
Typical data centres house many disparate technology ‘stacks’: applications running on specific operating systems, hosted on specific servers, using specific networks to connect to specific storage devices. Each deployment may have little or no relation to others, amplifying complexity and promoting inefficient use of resources.
This approach causes two significant problems:
• General-purpose computers become a dumping ground for an amalgam of different software stacks and applications. Each application has little or no relation to the other applications co-hosted on the same platform and has its own patterns of resource consumption (CPU, memory, storage, etc). The unpredictable aggregate resource usage results in an extremely complex operational environment, and customers find themselves with a scale-out of under-utilised resources. Management simplification conflicts with the desire to optimise resource utilisation.
• The propensity to deliver IT as general-purpose parts translates into a mismatch between the way IT is delivered by vendors to customer IT organisations and the way those organisations need to deliver IT functions to their end users.
Virtualisation technologies, such as VMware, can help significantly with both of these problems. Rather than mixing application workloads on one operating system platform, VMware allows applications to be deployed into separate, flexible virtual server containers that are logically isolated while physically co-located.
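As a rough illustration of that deployment model, the sketch below gives each application its own virtual container with an explicit resource envelope on a shared physical host; the application names and sizings are invented for the example.

    # Illustrative model: one virtual container per application, each with its own
    # resource envelope, co-located on a single physical host. Names and sizes are
    # invented; real placement is handled by the virtualisation layer.
    from dataclasses import dataclass

    @dataclass
    class Container:
        app: str
        vcpus: int
        memory_gb: int

    host_capacity = {'vcpus': 16, 'memory_gb': 64}

    containers = [
        Container('order-entry', vcpus=4, memory_gb=16),
        Container('reporting',   vcpus=2, memory_gb=8),
        Container('web-portal',  vcpus=4, memory_gb=16),
    ]

    used_cpu = sum(c.vcpus for c in containers)
    used_mem = sum(c.memory_gb for c in containers)
    print(f"host utilisation: {used_cpu}/{host_capacity['vcpus']} vCPUs, "
          f"{used_mem}/{host_capacity['memory_gb']} GB")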
There is a marked trend, mostly from start-ups, to deliver targeted, niche, turnkey capabilities in a network-centric appliance model, such as Web service gateways, encryption engines, network traffic monitors, etc. These functional components, by their modular design, are self-contained, deploy non-invasively into existing configurations, optimise resource utilisation for the function being provided and are simple to manage thanks to their built-for-purpose, limited configuration options.
Again, virtualisation technologies will increasingly shine, as virtual containers are a natural deployment vehicle for functional virtual appliances as well as traditional software stacks.
Guaranteed delivery of IT capabilities in support of the business (i.e. service-level agreements) is extremely difficult to provide with any level of confidence in today’s working environment. This is due to the complex task of translating abstract business service-level objectives into concrete, actionable resource-management policies. Objectives for availability, performance, security, compliance and other dimensions of IT quality of service (QoS) are achieved by IT administrators largely through trial and error. Once some level of QoS is achieved, IT organisations are reluctant to introduce change to existing configurations. This puts IT groups directly in conflict with their primary objective: serving the business by remaining responsive to ever-changing demands and unpredicted growth opportunities. Today, stability is achieved by hard-wiring static configurations at the expense of dynamic flexibility.
This primary pain point in aligning the business with the supporting IT environment is the result of several fundamental underlying problems.
First, there has been a general lack of agreement on how to express the capabilities of IT resources in a consistent manner. The disparities of implementation between vendors of similar technology elements are all too painfully evident to IT administrators. There have, however, been increasing efforts in standards bodies to model resource types for management pluggability, although there has not yet been convergence on a modelling framework across these different resource domains.
A further problem is that much of the emphasis to date has been placed on modelling ‘things’ (resources) rather than the ‘use of things’ (functions). As a result, significant effort has gone into modelling every knob and dial of every resource in every resource class. While this is important for pluggability of resources, it does not close the gap between service-level management and resource management. What is fundamentally required is a focus on modelling the profiles for interactions with resources in the context of a given management discipline, such as availability or performance management. This would allow direct translation from service-level objectives to resource policies that constrain only the resource dials and knobs associated with that particular discipline.
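A minimal sketch of that distinction, with invented discipline names, settings and thresholds, might look like this:

    # Illustrative sketch: a per-discipline profile exposes only the resource
    # settings relevant to that discipline, so a service-level objective can be
    # translated into constraints on just those settings. All discipline names,
    # settings and values are hypothetical.
    DISCIPLINE_PROFILES = {
        'availability': ['replica_count', 'failover_timeout_s'],
        'performance':  ['iops_limit', 'cache_mode'],
    }

    def objective_to_policy(discipline, objective):
        """Map an abstract objective to constraints on that discipline's settings only."""
        if discipline == 'availability' and objective == '99.99% uptime':
            return {'replica_count': ('>=', 2), 'failover_timeout_s': ('<=', 30)}
        if discipline == 'performance' and objective == 'sub-second response':
            return {'iops_limit': ('>=', 5000), 'cache_mode': ('==', 'write-back')}
        raise ValueError('no translation defined for this objective')

    policy = objective_to_policy('availability', '99.99% uptime')
    print(policy)   # constraints touch only availability-related settings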
Serious security issues
A lack of seamless security policy and enforcement across application, information and resource domains also causes serious security issues. Once information is retrieved from the authoritative data store and is then, for example, processed in another application, stored in another repository or migrated to another resource, its security context is lost.
Further, given the lack of security integration across vendors and types of IT assets, customers seek to protect themselves by building a security ‘castle’, hoping to protect their soft IT infrastructure within the castle walls. This approach, however, does not provide the necessary security policy protections to the business once inside the perimeter. Most security breaches come from within the organisation, not outside.
The solution to all these problems: a services-oriented infrastructure (SOI). Recent developments like Web services, virtualisation, and model-based resource management have coalesced to support SOI.
The third inflection point relates to the emergence of an ‘edgeless’ IT environment.
Information is everywhere. Business processes flow across a global chain of partners, customers, and employees, and perimeter-centric thinking is inappropriate. Enabled by grid technology, for example, thousands of compute nodes share petabytes of scientific information as research labs and universities collaborate to solve the mysteries of our universe.
But technology must take into account that information moves and must be secure, whether accessed internally or over the Web, at rest or in motion. We must authenticate users wherever they are and limit access based on their roles. We must define secure management policies and apply them to information regardless of resource, platform, repository, or application.
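One way to picture such a uniform, role-based check, applied identically wherever the information happens to live, is sketched below; the roles, classifications and policy table are invented examples.

    # Illustrative sketch: one access decision, applied identically wherever the
    # information lives. Roles, classifications and the policy table are invented
    # examples, not a specific product's model.
    POLICY = {
        'public':       {'employee', 'partner', 'customer'},
        'internal':     {'employee', 'partner'},
        'confidential': {'employee'},
    }

    def may_access(user_role, classification, authenticated):
        """Same rule whether the data is at rest in a repository or in motion on the Web."""
        return authenticated and user_role in POLICY.get(classification, set())

    print(may_access('partner', 'internal', authenticated=True))        # True
    print(may_access('customer', 'confidential', authenticated=True))   # False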
The final inflection point will change how information creates value.
We’re adept at creating information; we haven’t truly learned to leverage it. Some analysts say nearly 80 per cent of the information that already exists is recreated rather than reused.
Information is often tied to the application or process that created it, so sharing or repurposing poses a challenge. In a sense, information is imprisoned, bound by proprietary schema and storage-access methods.
Thus, customers miss opportunities: if they could easily search, access, and combine information, they could uncover new revenue sources and operational improvements. We see this idea evolving in our expanding investments in content management anchored by Documentum, Centera content-addressed storage, and collaboration.
These are exciting developments. Inflection points will make IT a seamless part of our lives while elevating its impact. We’ll manage information according to what it is, not where it sits. We’ll design infrastructures to provide service, not just capacity. Traditional data centre perimeters will give way to solutions that recognise that data moves. Tools will make information ‘self-describing’, so users can manage it automatically, according to policy.
And we will access, share, mine, and analyse information based on automatic data classification captured in metadata, letting people and applications use data beyond the original content-creation purpose.
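A toy example of the payoff: once classification is captured in metadata, a single query can find and combine information created by different applications, without knowing each source’s native schema. The catalogue, records and field names below are invented.

    # Illustrative sketch: query a metadata catalogue that spans several sources,
    # so information can be found and combined beyond its original application.
    # All records and field names are invented examples.
    catalogue = [
        {'source': 'crm',        'id': 'c-102', 'topic': 'customer-churn', 'region': 'emea'},
        {'source': 'billing',    'id': 'b-877', 'topic': 'customer-churn', 'region': 'emea'},
        {'source': 'email-arch', 'id': 'e-311', 'topic': 'pricing',        'region': 'apac'},
    ]

    def find(**criteria):
        """Return every record whose metadata matches all of the given criteria."""
        return [r for r in catalogue
                if all(r.get(k) == v for k, v in criteria.items())]

    # One query spans CRM and billing data created by different applications.
    print(find(topic='customer-churn', region='emea'))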
We are just beginning to translate data to information and information to knowledge.
Jeff Nick is SVP and CTO with EMC