Fast forward to 2006 and we experience a sense of déjà vu.
IT managers are struggling to manage terabytes and petabytes of data, from creation to expiration, across their hugely complex open systems environments. Data is so important to every company that it must be available to employees, partners, suppliers and customers 24/7/365.
But who is managing it and ensuring that it is continually available, archived, backed up, protected and compliant? And why is it that open systems storage administrators are struggling to manage just 1TB per person, while the mainframe boys can manage in excess of 10TB each? Sarbanes-Oxley, Basel II, the Data Protection Act: they all demand that data is properly managed from creation to expiration.
By now, the answer is pretty clear. Open systems need their own automated system managed storage, and according to many of the storage vendors, system managed storage for open systems is already here – it’s called Information Lifecycle Management (ILM).
ILM could be perceived as the new SMS, and virtualisation is the new device emulation. When compared to their predecessors, they both have some extra bells and whistles to contend with the additional complexities which come with more varieties of hardware and software – that is why they have taken longer to make commercially available – but the principles are broadly similar.
Perversely, those same silos of information, which companies are trying to eliminate with ILM, are themselves the very cause of the problems hampering deployment of ILM. ILM on its own is incapable of doing the job it was designed for. ILM needs virtualisation, but today ILM is confined to single-vendor storage silos because as yet there is no industry standard (or method for presenting a single logical view of multi-vendor storage), nor is there an intelligent infrastructure connecting the information silos together.
Put ILM and an industry standard virtualisation together and you can begin to address the problem of managing information silos. Or can you? Look again at the mainframe precedent. In addition to virtualisation, there are still three missing pieces which must be addressed before companies can truly unleash the power of ILM on their data-critical open systems environments.
Physical interoperability between vendors’ servers, applications, storage devices and SAN fabric switches and directors has been addressed. In general, everyone’s box talks transparently to everyone else’s box at the application programming interface (API) level.
Network-based storage services – including heterogeneous data replication, copy services and volume management – will be a vital component of any successful ILM implementation, enabling tiered storage/ILM and storage utility strategies that reduce infrastructure costs.
While ILM promises to deliver Gold, Silver and Bronze performance SLAs and QoS at the application and storage device level, how can it guarantee them at the storage network level without intelligent functional interoperability in the network? Companies must ensure their storage network can run independent SAN and security services to fully leverage and optimise storage network investments, and to provide the SAN segmentation and security required in today’s heterogeneous environments.
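To make the Gold/Silver/Bronze idea concrete: at its simplest, an ILM tiering policy classifies data onto storage tiers by how recently it was accessed, with hot data on premium storage and cold data on archival media. The sketch below is purely illustrative – the tier names come from the article, but the age thresholds and function names are hypothetical, not any vendor's actual policy engine.

```python
from datetime import datetime, timedelta

# Hypothetical tier thresholds for an illustrative ILM policy:
# data accessed recently stays on faster, more expensive storage.
TIER_THRESHOLDS = [
    ("Gold", timedelta(days=30)),     # hot data: premium, high-performance tier
    ("Silver", timedelta(days=180)),  # warm data: mid-range tier
]

def classify(last_accessed: datetime, now: datetime) -> str:
    """Return the storage tier for a file, given its last access time."""
    age = now - last_accessed
    for tier, limit in TIER_THRESHOLDS:
        if age <= limit:
            return tier
    return "Bronze"  # cold data: low-cost archival tier

now = datetime(2006, 1, 1)
print(classify(datetime(2005, 12, 20), now))  # Gold   (accessed 12 days ago)
print(classify(datetime(2005, 9, 1), now))    # Silver (~4 months old)
print(classify(datetime(2004, 1, 1), now))    # Bronze (2 years old)
```

A real ILM product would drive decisions like this from richer metadata – business value, compliance retention rules, application SLAs – but the underlying lifecycle logic is the same: evaluate each object against a policy and migrate it to the matching tier.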