Storage Peer Incite: Notes from Wikibon’s September 25, 2007 Research Meeting
Moderator: Peter Burris & Analyst: David Floyer
This week Wikibon presents volume virtualization reality. Volume virtualization, one of several virtualization initiatives, is gaining momentum among large and medium-sized organizations. While the technology is not completely mature, in that new innovations are clearly visible on the horizon, today's offerings, most of them from major vendors such as IBM and Hitachi, provide several distinct advantages. Virtualization makes transferring data from one disk volume to another simple and non-disruptive. A storage administrator can, for instance, move data off an aging disk to a temporary location, replace that disk with a new, possibly much larger unit, and move the data to the replacement, all without users noticing that anything is going on. Virtualization also enables effective tiering, since it allows administrators or, potentially, automated rules engines to move data from one storage tier to another without service interruptions, potentially saving money by shifting little-used data to less expensive, lower-performance media and eventually to tier 3 for archiving. Finally, it can simplify operations and cut costs by imposing a single standard set of management tools and methods across a heterogeneous physical disk environment. These advantages can both improve the IT department's bottom line and make the storage administrator's job much easier.
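To make the mechanism concrete, here is a minimal sketch in Python of the mapping idea behind volume virtualization: hosts address a stable virtual volume while the virtualization layer decides which physical device backs it. All names (VirtualVolume, migrate, and so on) are illustrative, not any vendor's actual interface.

```python
# Minimal illustration of volume virtualization: hosts address a stable
# virtual volume; the virtualization layer maps it to a physical device
# and can swap that device out without the host noticing.

class VirtualVolume:
    def __init__(self, name, physical_device):
        self.name = name                  # stable identity seen by hosts
        self.physical = physical_device   # current backing device (dict: block -> data)

    def read(self, block):
        # Host I/O is resolved through the mapping on every request.
        return self.physical.get(block)

    def write(self, block, data):
        self.physical[block] = data

    def migrate(self, new_device):
        # Copy all data to the new device (real products do this incrementally,
        # mirroring writes to both devices while the copy runs).
        new_device.update(self.physical)
        # Atomically repoint the mapping; hosts see no change of address.
        self.physical = new_device


# Usage: replace an aging array with a new one, transparently to the host.
old_array = {}
vol = VirtualVolume("finance_vol", old_array)
vol.write(0, b"ledger")

new_array = {}
vol.migrate(new_array)          # data moves; the volume identity never changes
assert vol.read(0) == b"ledger"
```

The essential point is the last step of migrate: because hosts only ever see the virtual identity, repointing the mapping is invisible to them.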
In the larger picture, storage virtualization works directly with other forms of IT virtualization, such as server virtualization, to raise the operation of the IT infrastructure to a higher level of hardware independence. Volume virtualization is thus, in one sense, the next step in a virtualization process that has been going on for some time and that promises much greater flexibility, faster adoption of new technologies, and greater operational efficiency throughout the infrastructure.
This week's newsletter, based on Tuesday's Peer Incite Meeting, looks at the benefits and issues of volume virtualization technology. Readers should understand that this is just one of several virtualization areas, and that future Peer Incite Meetings and newsletters may take up the subject of port virtualization and other areas of IT virtualization.
Bert Latamore
Volume virtualization and reality
Virtualization has been part of IT since the 1960s as a technology for optimizing the utilization of hardware against specific application requirements and for simplifying the management of the resources invoked to provide processing support. However, only recently has virtualization been brought to the heterogeneous storage world, largely in the form of products from the large, high-end storage suppliers such as IBM, EMC, and HDS.
The advantages of these first incarnations of the technology are becoming clear. First, virtualization provides the capability to optimize storage services to applications by facilitating movement of data between tiers and migration of data between generations of technology. Second, it enables a dramatic simplification of the tooling and practices of storage administration by introducing common storage processes and procedures. Third, it can decrease software costs by better leveraging management software across multiple types of technology.
Virtualization programs have picked up steam in the last 18 months. Two types of virtualization technology have been introduced to make this possible:
- In-band appliances, which redirect storage I/O based on mapping data held largely in cache, providing a virtualization layer over the storage resources.
- Split-path architectures, which handle storage control metadata in the appliance and pass data-related requests through technology (called a blade) in an intelligent switch; the sketch below contrasts the two approaches.
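The practical difference between the two approaches is where the data path runs. The following is a rough Python sketch of that distinction, under the simplifying assumption that both designs share the same mapping metadata; all class and function names are hypothetical.

```python
# Illustrative contrast of in-band and split-path virtualization.
# Real products differ in many details; this only shows the data-path split.

class Appliance:
    """Holds the virtualization mapping: (virtual vol, block) -> (array, block)."""
    def __init__(self):
        self.mapping = {("vol1", 0): ("arrayA", 512)}

    def resolve(self, vol, block):
        return self.mapping[(vol, block)]


def array_io(array, block, payload):
    # Stand-in for an actual I/O to a physical array.
    print(f"I/O to {array}, block {block} ({len(payload)} bytes)")


def in_band_io(appliance, vol, block, payload):
    # In-band: the appliance sits in the data path, resolving the mapping
    # (largely from cache) and forwarding the data itself.
    array, phys = appliance.resolve(vol, block)
    array_io(array, phys, payload)


def split_path_io(appliance, switch_blade, vol, block, payload):
    # Split path: the appliance handles only control metadata; a blade in
    # the intelligent switch moves the data at wire speed.
    array, phys = appliance.resolve(vol, block)   # control path
    switch_blade(array, phys, payload)            # data path


app = Appliance()
in_band_io(app, "vol1", 0, b"payload")
split_path_io(app, array_io, "vol1", 0, b"payload")
```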
Storage virtualization often sells itself in circumstances in which an IT organization has already accepted the benefits of virtualization across other resources. In those circumstances we hear that storage administration teams are quicker to embrace storage virtualization than might be expected. However, a key challenge for vendors is reducing the time between the introduction of new technology (particularly new microcode) and its certification within the virtualization stack: three months is a reasonable target; anything longer begins to create problems in the market and in specific user organizations.
Looking forward, storage virtualization will be part of a broad set of virtualization processes, but within it users must differentiate among the various paths to virtualization, including volume and port approaches. We expect to see an enormous amount of activity in the next 24 to 36 months as users become more comfortable with storage virtualization policies and programs and vendors introduce products that provide truly transparent paths to clean, simple certification in an increasingly heterogeneous storage infrastructure.
Action Item: Volume virtualization is proving itself in user organizations today. Users should look to initiate programs for implementing volume virtualization where data migration, operational complexity and management software costs are onerous. An effort to virtualize in storage may or may not go hand-in-hand with efforts to virtualize elsewhere, but the overall objective must be a flattening of the management stack of IT infrastructure.
Storage Virtualization: Not if but when
Block-based storage virtualization implementations have reached critical mass; virtualization works and provides demonstrable benefits. Vendors can and will argue the merits of different virtualization architectures, but the real issue is how to get it done. Virtualization is an enabling technology that will underpin storage management strategies and is a key component in improving the cost and flexibility of the storage infrastructure. Implementations will need to be flexible and pragmatic, and in larger installations implementing more than one virtualization technology should not be ruled out.
Action item: Storage virtualization should move to high priority, either as a separate project or as part of a server virtualization initiative. The objective should be to bring the storage “cost of assets/cost of administration” ratio into line with networking and servers.
The carrots and sticks of storage virtualization
Storage administration is not prone to knee-jerk reactions. This is another way of saying administrators don't like change. In IT organizations following a virtualization strategy, storage often brings up the rear. What's the formula for getting storage folks to be more enthusiastic if they're not already on board?
Start with metrics that matter. For example, users indicate it takes anywhere from four to six months to migrate an array. If equipment is on a three-year lease, that means a good portion of every year is spent planning and executing array migrations, which is not very productive for IT or the business. Putting incentives in place for storage admins to get that figure down to less than one month is a good way to create urgency and momentum.
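A quick back-of-the-envelope calculation shows why this metric matters; the figures below simply restate the four-to-six-month estimate against a 36-month lease term.

```python
# Share of a three-year (36-month) lease term consumed by one array migration.
lease_months = 36
for migration_months in (4, 6, 1):
    share = migration_months / lease_months
    print(f"{migration_months}-month migration consumes {share:.0%} of the lease")
# -> 4-month migration consumes 11% of the lease
# -> 6-month migration consumes 17% of the lease
# -> 1-month migration consumes 3% of the lease
```

Cutting migration time from six months to one reduces the overhead from roughly a sixth of the asset's useful life to a few percent.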
The other, complementary organizational approach is to expose storage administrators to parts of the organization that have successfully implemented virtualization (e.g., on the server side). Setting up cross-functional teams will reinforce the notion that IT virtualization is good for business and that storage cannot be an impediment to that goal.
Action Item: Storage administrators will often initially resist virtualization and the changes it brings. However, appropriate training, preparation, and exposure to other virtualization layers will generate enthusiasm for the hard work ahead. Management objectives that tie incentives to metrics related to migration, reduced software costs, or simplified storage management have proven highly effective.
Integrating the virtual pieces
Making sure that all the pieces of a virtualization strategy fit together is not easy. Working in a testing environment is one thing; working with high I/O rates and multiple error conditions is completely different. Reducing the number of moving parts that need to be tested by introducing common technologies such as MPIO makes strategic sense. But as a strategy, users should look to vendors to integrate the technologies and certify that the hardware works together, not only initially but as new microcode is released over time. The components that have to work together are the appliance(s), the arrays, the SAN, and (if intelligent switches are used) the requisite blades and software.
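One simple way to reason about those moving parts is to treat certification as an explicit matrix of approved combinations and refuse changes that fall outside it. A minimal Python sketch, with all version strings invented for illustration:

```python
# Hypothetical certified-combination check: before applying new microcode,
# verify that the (appliance firmware, array microcode) pair is certified.

CERTIFIED = {
    ("appliance-2.1", "arrayX-5.0"),
    ("appliance-2.1", "arrayX-5.1"),
    ("appliance-2.2", "arrayX-5.1"),
}

def safe_to_upgrade(appliance_fw, array_microcode):
    # A change is safe only if the exact combination has been certified.
    return (appliance_fw, array_microcode) in CERTIFIED

print(safe_to_upgrade("appliance-2.1", "arrayX-5.1"))  # True: certified pair
print(safe_to_upgrade("appliance-2.2", "arrayX-5.2"))  # False: wait for certification
```

In practice the matrix also spans SAN firmware and switch-blade software, which is why the certification lag discussed below matters so much.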
Action item: Include in the virtualization RFP and contract negotiations the time to certify new levels of microcode on the virtualization appliance (and the blades on the intelligent switch if applicable), and clearly define inter-vendor problem resolution processes and escalation procedures.
When the storage virtualization engine is the bottleneck to certification
Over the past decade, advancements in microcode functionality for storage arrays have been impressive. Vendors add function and correct errors on installed arrays with frequent new versions of microcode, increasing the useful life of those systems. However, before new releases of microcode can be applied to arrays attached to a virtualization engine, the combination of array and virtualization engine must be certified as working correctly. Virtualization vendors must walk a fine line between certifying those combinations quickly and certifying them thoroughly.
Often, certification of new microcode takes many months, bringing change management to a halt within the IT storage infrastructures supported by the virtualization device. If the process extends beyond 90 days, the business is impacted in a meaningful way.
Virtualization vendors may also invoke FUD (fear, uncertainty, and doubt), arguing that the best way to maintain data integrity is to stay within that vendor's product set, and delay certification of competitive arrays. This gamesmanship will not go away; however, the bottom line is that users want heterogeneity. As with other practices that damage business relationships, delaying certification of new microcode releases (deliberately or otherwise) will eventually backfire on the vendor.
Action Item: The demonstration of certification speed, transparency and quality will emerge as major points of differentiation. Vendors who put forth standards of certification, invest in managing certification cycles and do so in an expeditious manner will gain a leg up.
Virtualization will help make storage certifiably sane
As the technologies required for robust storage virtualization mature to the point necessary for certifying interoperability, a wide array of heretofore hidden storage product interfaces will come into the open. The drive to "virtualize" has already provided a significant boost to interface standards like Multipath I/O (MPIO). The desire to virtualize services above pathing (e.g., snapshot copy, replication) will lead to additional, near-standard conventions for automating storage administration tasks across multiple platforms. Increased interoperability at multiple levels of function will not just affect storage procurement; it will also dramatically simplify the processes of retiring (1) less attractive storage hardware and software and (2) less productive storage administration practices. With each step toward certified storage virtualization, users come closer to gaining real leverage over their storage resources for the first time in nearly two decades.
Action Item: Demanding clear answers from suppliers regarding current and go-forward commitments to certification regimes for storage virtualization is not only a reasonable but an essential step in any storage-related negotiation. These answers will profoundly affect storage-related procurement, asset management, and operations practices for years to come.