Originating Author: Dave Vellante
In the late 1980s, as is still the case today, information was exploding and becoming unmanageable. In response, IBM announced DFSMS, or system-managed storage, designed to automate the placement, management and retention of data and information over its useful life. The problem was that metadata classifications were unreliable, and data movement couldn’t easily be automated because of the inherent complexities of the application infrastructure.
The promise of SMS, which indisputably had a strong ILM flavor, was that it would reduce costs, improve efficiency and, eventually, automate the management of datasets at the application level.
Why does it seem that not much has changed in twenty years?
What can we learn from DFSMS?
DFSMS was useful and dealt with a series of problems like deleting temporary data and automating backup and recovery procedures. What it wasn’t able to do was successfully optimize storage on an application-by-application basis. SMS succeeded in the data realm but fell short in the application/business domain.
The promises being put forward today by ILM echo those of SMS, compounded now by the complexities of compliance and the proliferation of email. The concepts embodied in those ‘ancient’ times included tiered storage, assignment of storage based on business value and the all-important application optimization, meaning the ability to dynamically allocate storage based on application needs, all while implementing a lifecycle methodology. ILM should not repeat the mistakes of SMS’ overzealous vision.
By combining metadata, tiered storage, high-performance data movement, data classification schema, security and historical data management, ILM will enable organizations both to manage huge amounts of information and to bring costs under control. These are excellent objectives, and they should define and limit the scope of ILM strategies to the storage layer.
What we can learn from history, and how it applies to ILM today, is the following: in an ideal world, applications would adhere to standards such that every file would carry metadata telling us when it was created, how it was created and the nature of its content, including its business value (much as email files do). In reality, storage will never drive the definition of metadata standards for applications. Applications are vital to business heads and are largely fenced off and segmented within their own domains. We should expect application owners to resist any attempt to impose standards on them unless there is a demonstrable, direct benefit to those business owners.
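To make the idea concrete, here is a minimal sketch of what such file-level metadata might look like. The field names, the `BusinessValue` tiers and the retention default are purely hypothetical illustrations, not part of any actual standard, and the point is only that an application would have to publish this record for the storage layer to consume.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class BusinessValue(Enum):
    """Hypothetical business-value tiers used to steer tiered-storage placement."""
    MISSION_CRITICAL = 1   # fastest tier, replicated
    OPERATIONAL = 2        # standard tier, regular backups
    ARCHIVAL = 3           # cheapest tier, retained for compliance only

@dataclass
class FileMetadata:
    """Illustrative per-file record an application could publish for storage to consume."""
    path: str
    created_at: datetime              # when the file was created
    created_by: str                   # how it was created (owning application or user)
    content_type: str                 # nature of its content, e.g. "customer-invoice"
    business_value: BusinessValue     # drives tier assignment and retention
    retention_days: int = 2555        # ~7 years, a common compliance horizon (assumed)

# Example: the application tags the file; the storage layer only reads the tag.
invoice = FileMetadata(
    path="/data/invoices/2006/INV-0042.pdf",
    created_at=datetime(2006, 10, 12, tzinfo=timezone.utc),
    created_by="billing-app",
    content_type="customer-invoice",
    business_value=BusinessValue.OPERATIONAL,
)
```

The sketch also shows why the standardization burden falls on the application side: only the application knows what values to put in these fields, which is exactly why application owners would have to buy in before storage could act on them.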
It is unlikely that storage will drive this standardization effort.
Storage should stay in its comfort zone
The conclusion of all this is that storage infrastructure should be made simpler: easy to allocate, easy to manage and easy to automate. Technologies like virtualization, thin provisioning, copy services, advanced storage security and encryption are paving the way. The point is that storage vendors should focus on providing storage infrastructure that reacts to the requirements of applications, not on trying to impose standards on those applications.
In an October conference call with investors, EMC’s Joe Tucci talked about the company’s new umbrella positioning around “information infrastructure.” This is the right direction for a storage company. The sensible tack for storage providers is to observe and participate in the application standards emerging from service-oriented architecture designs, and to add value accordingly, rather than trying to promote grand storage strategies that unify applications.