Storage Peer Incite: Notes from Wikibon’s November 18, 2008 Research Meeting
Nothing pushes change like a recession, and this one promises to be deeper and longer than any since 1929. By the end of it, IT organizations (ITOs) will look very different. Drastic, sometimes draconian budget cuts for the coming fiscal year will force organizations to seek efficiencies in new strategies, from SaaS and cloud computing to the application of new automation tools. The overall result will be to accelerate long-term trends that have seen ITO staffs shrink even as data centers have grown, with large amounts of technical work moving overseas or being displaced by improved technologies.

In the storage arena, these often heavy budget cuts will drive the adoption of tiered architectures, virtualization, and automation, both to address the corporate appetite for more capacity and to minimize the staff needed to maintain it. Increasingly, CIOs will focus on strategies that deliver better return on assets (ROA) and delay riskier investments. Heterogeneous storage virtualization is a proven technology that supports this philosophy by squeezing more capacity out of installed storage and simplifying storage administration -- but it has costs and a learning curve that users should consider. This issue of the Wikibon Peer Incite newsletter examines the opportunities this technology presents, using the example of LSI's SVM 5, to meet the demands of budget cuts of 5%-30% or more, in a recession that, at least for the coming year, cannot support the constant-growth philosophy that has driven the American economy and culture.

G. Berton Latamore
The Wikibon community met on Tuesday, Nov. 18, to discuss how storage executives can best address budget constraints in 2009. Most IT departments are facing budget cuts of between 5% and 30%. Many Wikibon users have lived through downturns in the past two decades. While perhaps not as severe as the current economic crisis, those experiences have taught IT executives to stretch budgets through a variety of measures focused on both CAPEX and OPEX reduction. Not surprisingly, most of these moves are highly tactical, although executives do cite some architectural maneuvers that are more strategic in nature.
The case presented by Grant highlighted several challenges likely facing numerous organizations. Grant is a Senior Storage Administrator at a large financial firm. His group manages about 700TB of storage across 400 servers. Exchange and SQL Server are the primary applications. Approximately 300TB of storage is virtualized today, with plans to steadily increase that figure over time.
Grant's organization is facing several challenges, including:
- Deep budget constraints in 2009 -- in his case 75% for the next six months,
- High growth of SQL databases,
- Increasing volume size,
- Too many disruptions on storage adds, changes, deletes and moves,
- "Copy creep",
- An overall push to reduce costs.
Several years ago, Grant's organization decided to embark on an aggressive push toward storage virtualization to:
- Virtualize storage, consolidate multiple systems, and create a tiered infrastructure using LSI's SVM technology delivered through IBM,
- Establish a three-tier storage architecture,
- Consolidate function and subsequently software licenses and maintenance contracts,
- Virtualize servers with VMware (future).
Currently, data de-duplication is not deployed at the organization, and experiments with lower-cost iSCSI technology did not provide the performance required.
For a more complete overview, read about LSI's Storage Virtualization Manager (SVM) - Version 5. SVM 5 is a storage virtualization technology that allows LSI's OEM customers to sell end-to-end solutions (i.e., storage virtualization, thin provisioning, and storage management software such as snapshots and clones) to their existing installed bases as well as with new arrays.
Because it is end-to-end, it allows OEMs and their VARs to provide services and maintenance in a single package that better meets customers' needs. This is good news for users. While LSI is not the first to provide heterogeneous storage virtualization, the company's OEM strategy allows server vendors specifically (e.g., HP) and other storage OEMs in general to provide a capability that can support near-Tier 1 advanced functionality at modular storage price points.
This technology supports the strategy of leveraging installed infrastructure and focusing more on return on assets (ROA) versus spending incrementally to generate ROI.
Here's a brief summary of the advice from Wikibon users. In addition to negotiating harder for better pricing/deeper discounts, the community recommends the following tactical moves:
- Renegotiate existing contracts using leverage where it exists (e.g., a pending new purchase, reduced service levels, etc.). These include maintenance contracts, monthly software charges, and extending leases to lower monthly run rates.
- Cut non-essential projects and re-scope very large initiatives to reduce overall expenses.
- Re-examine personnel and reduce headcount for non-essential functions.
- Selectively outsource where it will reduce costs.
- Eliminate professional services expenses where possible by shifting responsibilities to internal staff, recognizing this will reduce service levels.
In addition, Wikibon users suggest several actions that are more strategic and architectural in nature, focused on improving return on assets in general. Specifically, as it relates to storage, users cited storage virtualization, if in place, as a tool to:
- Migrate data to less expensive storage tiers;
- Ease administrative overhead and maximize staff productivity, especially in the face of headcount reductions;
- Minimize migration expense (planning and disruption);
- Improve negotiation leverage, specifically where heterogeneous storage virtualization is in play (i.e. you can choose any array);
- Support lower disaster recovery costs, allowing diversity at the source and target.
The bottom line is that cumbersome LUN management, the endless search for contiguous free space, and painful migrations are productivity killers that waste storage space and reduce negotiation leverage by locking in buyers. Storage professionals who have architected a virtualized infrastructure are in better shape to take advantage of that capability in a downturn, somewhat sacrificing service levels in exchange for lower costs. Organizations that do not possess this capability should consider the cost of implementing it or risk overspending in this economic crisis.
Action item: Storage executives should immediately develop and begin implementing a plan to reduce expenses to support management edicts to cut the budget. Negotiating leverage, reduced service levels, pragmatic headcount cuts, and storage virtualization are the tools of the budget-cutting trade that users should employ. Communication is key. Storage execs need to set expectations that everyone must sacrifice and expect reduced service levels in non-critical areas.
SVM 5 is the current generation of the LSI Storage Virtualization product. The idea behind SVM is to enable advanced data services such as storage pooling, thin provisioning, space efficient snapshots, online data migration, and remote replication in existing storage array environments. This allows you to consolidate many different arrays from different vendors into a single “storage system” without the need to replace any hardware. A prerequisite is that the storage array model has been qualified by LSI to work with SVM 5.
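The mechanics of thin provisioning mentioned above can be sketched as follows: a virtual volume reports its full logical size to hosts, but physical extents are allocated only when a block is first written. This is a minimal illustrative sketch; the class names and the 1 MB extent granularity are assumptions, not LSI's actual implementation.

```python
EXTENT_MB = 1  # assumed granularity of physical allocation

class ThinVolume:
    """Illustrative thin-provisioned volume: allocate extents on first write."""
    def __init__(self, logical_size_mb):
        self.logical_size_mb = logical_size_mb
        self.extent_map = {}       # logical extent -> physical extent
        self.next_physical = 0

    def write(self, logical_mb_offset):
        extent = logical_mb_offset // EXTENT_MB
        if extent not in self.extent_map:      # allocate only on first write
            self.extent_map[extent] = self.next_physical
            self.next_physical += 1

    def physical_used_mb(self):
        return len(self.extent_map) * EXTENT_MB

vol = ThinVolume(logical_size_mb=10_000)   # host sees a 10,000 MB volume
for offset in (0, 1, 512):                 # only three extents ever touched
    vol.write(offset)

print(vol.physical_used_mb())              # 3 MB actually allocated
```

The economic point is visible in the last line: the host believes it owns 10,000 MB, but only the written extents consume real capacity, which is what allows new array purchases to be deferred.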
In addition to augmenting the storage features of existing arrays, having a storage virtualization layer at the SAN level enables capabilities like remote mirroring with different array types on each side, non-disruptive data migration between devices, and the creation of a consistency group of volumes that span multiple different arrays.
SVM has a split-path architecture, using an out-of-band metadata server (the SVM) to create and manage the virtualization maps (i.e., metadata) and multiple in-band data path modules (DPMs) that move data from server to storage as fast as possible in accordance with the virtualization tables provided by the SVM.
SVM 5 runs on an LSI DPM 8400, which has 16 4Gb/sec ports and is capable of up to 1 million IOPS per DPM. Although a single pair of DPMs is adequate for most SANs, multiple DPMs can be deployed to scale out storage virtualization performance.
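The split-path idea can be illustrated with a toy model: the metadata server owns the control path (building maps from virtual LUN extents to physical array locations), while the data-path module does nothing per I/O except a table lookup. All class and array names below are hypothetical, invented for illustration; they are not LSI's interfaces.

```python
class MetadataServer:
    """Out-of-band control path: creates and distributes virtualization maps."""
    def __init__(self):
        self.maps = {}  # virtual LUN -> list of (array, phys_lun, base_gb)

    def map_volume(self, vlun, segments):
        self.maps[vlun] = segments

class DataPathModule:
    """In-band data path: translates each I/O via a fast map lookup only."""
    def __init__(self, maps):
        self.maps = maps  # read-only copy pushed down by the metadata server

    def translate(self, vlun, offset_gb, segment_gb=100):
        idx = offset_gb // segment_gb              # which 100 GB segment
        array, phys_lun, base = self.maps[vlun][idx]
        return array, phys_lun, base + offset_gb % segment_gb

svm = MetadataServer()
# A virtual LUN spanning two different vendors' arrays, 100 GB from each.
svm.map_volume(7, [("ARRAY_A", 12, 0), ("ARRAY_B", 3, 0)])

dpm = DataPathModule(svm.maps)
print(dpm.translate(7, 150))   # -> ('ARRAY_B', 3, 50)
```

The design choice this models is why split-path scales: the expensive work (map creation, migration planning) happens out of band, so the in-band component stays simple and fast, and more DPMs can be added for throughput.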
One of the key design goals of SVM was to introduce storage virtualization into existing SANs with minimum effort and cost. Some effort is usually required to ensure that the SAN and storage arrays are configured to conform to SVM requirements. After that, a simple SVM system can be deployed in days, and LUNs with existing data can be imported into the virtualization environment very quickly. SVM allows non-virtualized and virtualized storage to coexist, even within the same array, enabling users to start small and grow the virtualized environment over time.
SVM is a mature product with more than five years of deployments around the world. Most SVM users started virtualizing their storage to solve a particular problem (e.g., storage pooling, remote mirroring, data migration, or space-efficient snapshots) and over time ended up virtualizing much of the storage environment.
LSI provides SVM through some direct channel partners and through OEM agreements with IT vendors such as HP. The software is licensed on a sliding scale according to the number of terabytes virtualized. HP in particular offers a very strong set of qualified storage arrays and services, available in the United States and internationally, that provide a "single throat to choke".
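Sliding-scale, per-terabyte licensing of the kind described above can be modeled as banded pricing, where the per-TB rate drops as virtualized capacity grows. The tier boundaries and dollar figures below are invented for illustration; the article does not disclose LSI's actual schedule.

```python
# Hypothetical sliding-scale license bands: (capacity ceiling in TB, $ per TB).
TIERS = [
    (50, 500),            # first 50 TB at $500/TB
    (200, 350),           # next 150 TB at $350/TB
    (float("inf"), 200),  # everything beyond 200 TB at $200/TB
]

def license_cost(virtualized_tb):
    """Sum the cost of each band the virtualized capacity reaches into."""
    cost, prev_cap = 0, 0
    for cap, rate in TIERS:
        band = min(virtualized_tb, cap) - prev_cap
        if band <= 0:
            break
        cost += band * rate
        prev_cap = cap
    return cost

print(license_cost(300))  # 50*500 + 150*350 + 100*200 = 97500
```

The practical consequence for buyers is that the marginal cost of virtualizing additional capacity falls as the environment grows, which rewards the "start small and expand" deployment pattern described earlier.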
Action item: In today's harsh economic environment, heterogeneous virtualization is a sound strategy for improving the return on existing storage assets. SVM 5 from LSI competes well on heterogeneous virtualization functionality with IBM's SVC, Hitachi's USP VM and EMC's Invista products. LSI's strategy of achieving volume with channel and OEM agreements is sound and provides multiple alternatives for potential customers. Wikibon recommends that products using the SVM technology be strongly considered for inclusion in heterogeneous virtualization RFPs.
On paper, heterogeneous storage virtualization should be a huge winner. While the innovation has clearly gained some traction in the marketplace, its application has not been broad-based. This is changing, albeit slowly.
Discussions with Wikibon users show that heterogeneous storage virtualization engines have typically been deployed to support focused initiatives to develop a specific business capability. The main areas of emphasis have been:
- Tiered storage - in an effort to create a default tier 2 strategy and avoid expensive tier 1 platforms;
- Migration capability - especially for customers facing a rolling financially-forced lease conversion every year or those with particularly frequent migrations due to acquisition strategies;
- Storage consolidation - in an effort to pool heterogeneous storage assets;
- VMware and server virtualization - to support backend storage virtualization for virtualized server environments.
There are others (e.g. disaster recovery and remote replication) but these are the main applications users cite as the end game where heterogeneous storage virtualization is the means. In general, heterogeneous storage virtualization as a business capability has not been the objective, largely because in and of itself it's not a business goal.
The current economic crisis will increase pressure on organizations with a diverse base of installed arrays to establish a mainstream, low-cost tier 2 data store that can accommodate a variety of array types. LSI's SVM 5, which was discussed on Tuesday's Peer Incite call, is an example of heterogeneous technology coming to the mainstream. The technology competes with IBM's SAN Volume Controller (SVC), Hitachi's USP V (especially the USP VM diskless version) and EMC's Invista. Interestingly, LSI customers including IBM (SVC), HP (USP V) and Sun (USP V) all sell heterogeneous virtualization technologies today. LSI is positioning SVM 5 at the sweet-spot customer base of these OEMs, and the technology is compelling due to its mainstream appeal, high functionality, and competitive cost.
The challenge is to simplify the transition to the virtualized infrastructure. Customers want a two-step process for storage provisioning: 1) Plan; 2) Deploy. Unfortunately, to get to an environment that supports heterogeneous assets, users have to go through a four-step process: 1) Plan; 2) Design; 3) Test; 4) Deploy. Supporting this transition to a heterogeneous virtualized infrastructure can often be cumbersome, risky, and expensive.
The customer is left with a difficult choice. Keep adding to the current stove-piped infrastructure, and go further into the LUN management heart of darkness, or create a homogeneous virtualized stove-pipe and go with the likes of 3PAR, Compellent, Pillar, LeftHand (HP), EqualLogic (Dell) or other emergent suppliers.
Action item: Heterogeneous storage virtualization suppliers need to proactively recognize the risks customers face in transitioning to such an environment and endeavor to minimize these risks. Interoperability matrices, professional services, and use cases all help. However, to be successful, suppliers must aggressively put forth and market a vision of what storage infrastructure will look like in the next decade.
In boom times, IT groups are focused on helping their organizations improve revenues and profitability. Recently capital has been easy to come by, and supporting earlier implementation of projects has been more important than maximizing ROI. As the recession bites, so the emphasis shifts to minimizing capital expenditures, and maximizing the return on assets deployed (ROA).
There are two main storage virtualization strategies, homogeneous and heterogeneous. There are advantages and disadvantages to both strategies. A key question in recessionary times is where to invest scarce capital budget. In the case of virtualization, there are three issues users should consider:
- The length of time and internal effort required to enable virtualization
- The amount of storage that is going to be improved (i.e. exploitation potential)
- The capital investments needed
The obvious goal in a recession is that the hard dollar savings, in terms of increased capacity utilization, outweigh the capital costs of installing the virtualized infrastructure and allow the near term deferral of new capacity.
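The break-even test described above reduces to simple arithmetic: compare the value of deferred capacity purchases (reclaimed through higher utilization of installed arrays) against the appliance and per-TB license costs. Every number in this sketch is a hypothetical assumption for illustration; the article gives no actual pricing.

```python
# Hypothetical break-even arithmetic for a heterogeneous virtualization
# investment. All figures below are invented assumptions, not quoted prices.

installed_tb = 500
utilization_before = 0.35     # assumed stranded-capacity starting point
utilization_after = 0.60      # assumed post-virtualization target
cost_per_new_tb = 5_000       # $ per TB of new-array purchase deferred

appliance_cost = 150_000      # $ for the virtualization appliances
license_per_tb = 300          # $ per TB virtualized (sliding-scale proxy)

reclaimed_tb = installed_tb * (utilization_after - utilization_before)
deferred_purchase = reclaimed_tb * cost_per_new_tb
virtualization_cost = appliance_cost + license_per_tb * installed_tb

print(f"reclaimed capacity:  {reclaimed_tb:.0f} TB")
print(f"deferred spend:      ${deferred_purchase:,.0f}")
print(f"virtualization cost: ${virtualization_cost:,.0f}")
print("break-even met:", deferred_purchase > virtualization_cost)
```

Under these assumptions the reclaimed 125 TB defers more new-capacity spend than the virtualization layer costs; with smaller installed bases or smaller utilization gains, the same arithmetic can easily go the other way, which is why the three questions above matter.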
Homogeneous storage virtualization
Homogeneous virtualization solutions are provided by 3PAR, Hitachi USP V and VM (via internal-only storage), Compellent, NetApp, Pillar and HP’s EVA line. The advantages of this approach are:
- Better ease and speed of implementation – virtualization is built in, and there is very little if any modification to be made to the storage network. Although migration of data to the new array is time consuming and requires careful planning, it is a well understood procedure within IT. Provisioning in general is significantly simpler on virtualized arrays.
- Qualification of disk drives is automatic.
- Only one component to install – There is less complexity (all the microcode is in one box), less to go wrong, and overall it is a potentially higher reliability system.
- One set of storage management software - Centralization of software allows the potential for lower storage software costs, lower training requirements and simpler storage procedures.
The disadvantages of this approach are:
- Higher capital costs - The current equipment has to be replaced.
- Vendor lock-in - Once a specific architecture has been selected there is less choice of storage solutions, and less negotiation power with the vendor on both software and hardware.
- No ability to dynamically migrate data between arrays or between storage pools. This reduces the ability to optimize storage usage, reuse older storage arrays and implement tiered storage infrastructure.
Heterogeneous storage virtualization
Heterogeneous virtualization solutions are provided by IBM’s SVC, Hitachi’s USP V and VM, EMC’s Invista, Incipient's iNSP and LSI’s SVM 5. These systems differ in their network architecture approach, but to date there has been no evidence that the different architectures significantly impact storage network response time and throughput; properly implemented, any of these approaches will work satisfactorily. All the solutions are implemented on appliances with significant attention to high availability and zero loss of metadata. They all provide two main functions:
- The ability to virtualize the storage on different array types – Each vendor provides a list of qualified storage array models (and microcode levels) that have been tested. The vendors differ significantly in the range and depth of their qualification testing.
- A set of centralized storage management functions – Again, there are significant differences in the number of functions provided, the quality of function provided and the quality of implementation.
The overall advantages of the heterogeneous approach are:
- Lower initial capital costs – Only the appliance needs to be purchased up front. The capital cost of the virtualization storage management software is usually directly related to the amount of storage that is going to be virtualized and can be implemented in stages.
- Avoiding purchase of new storage arrays – By improving the utilization of storage on existing arrays, new array purchases can be deferred.
- Dynamic migration of data between heterogeneous arrays without application interruption - This facilitates implementing a tiered storage infrastructure, maximizes the re-usability of space on storage arrays, and elongates the useful life of storage arrays. This benefit is available after the volumes have been virtualized; the initial migration of data to a virtualized appliance causes disruption to the application, and like all migrations needs to be carefully planned and implemented.
- Storage management software can be centralized – This will assist in reducing the training costs on different management software, and simplifies procedures.
- Less lock-in for disk arrays.
The disadvantages of the heterogeneous approach are:
- The virtualization appliance is an additional element in the storage network – This adds management cost and is potentially (and historically) a source of unavailability.
- Arrays need to be qualified – Each vendor has a list of array models that are qualified on its virtualization appliance, and particular attention has to be paid to microcode levels. This reduces the choice of storage arrays available.
- Significant pre-implementation effort - This is often required on the storage network infrastructure to ensure compatibility with the virtualization appliance.
- Vendor lock-in - Widely implementing storage management software in the appliance creates significant vendor lock-in and reduces negotiation leverage on appliance software costs.
If capital budgets have been cut, heterogeneous virtualization becomes an effective strategy to help improve the utilization and functionality of installed storage. For effective implementation, CIOs and CTOs should:
- Avoid purchase or retention of storage equipment that is not on the qualification list.
- Ensure that the storage network infrastructure is adjusted and maintained to enable a heterogeneous virtualization implementation.
- Ensure that financial information for all installed storage assets (e.g., end-of-lease dates, extension terms) is known to the virtualization implementation team, and that virtualization priorities are driven by the financial impact on IT budgets.
- Focus the initial implementation on simple storage virtualization and thin provisioning (if available) to extend the life of installed assets.
- Focus the storage management implementation on the minimum function that will allow the use of tier 2 storage as tier 1 storage and minimize the purchase of additional tier 1 arrays.
- Negotiate a provision specifying that appliance software costs will reduce in line with storage hardware prices.
- Put in place an organization that will ensure centralized selection, management, and allocation of storage resources, and secure agreement from the lines of business.
Action item: Organizations that have avoided heterogeneous storage virtualization completely will increasingly pay a price in terms of reduced efficiencies, greater complexity and less IT flexibility. While the risk of staying the course may be lower and less disruptive in the near term, CIOs should carefully weigh those risks against cost and complexity and begin to implement heterogeneous storage virtualization strategies with the objective of better utilizing installed assets to offset virtualized appliance and license costs.
For years storage has been purchased as a resource to support specific applications and business initiatives. When you ran out of storage, you bought more even if there was underutilized storage somewhere on the floor. Today, CIOs should insist, with the exception of certain applications such as Exchange or transactional systems, that storage be a virtualized shared resource.
This week's Peer Incite used a customer example featuring LSI's Storage Virtualization Manager (SVM) and its position as a mainstream technology to support virtualization for mid-tier storage infrastructure. The trend is clear. A large, stable R&D company with an OEM selling mentality has begun to bring heterogeneous storage virtualization, thin provisioning, and advanced software to a large installed base of modular arrays.
There are several near-term organizational implications, and they will be heightened by the economic crisis. Specifically, over the next 18 months CIOs should increasingly emphasize infrastructure expertise focused on five key areas:
- Business skills to facilitate storage as pooled services;
- Architecting and implementing a capability to support shared storage;
- The ability to negotiate with both vendors and lines of business to form a new business model that supports shared storage provisioning and management;
- Performance tuning to ensure service levels are adequate;
- Reporting to monitor, measure, adjust to change and communicate success metrics.
Storage infrastructure continues to offer opportunities for consolidation and improved efficiency. The promise of SAN in the early 2000s, while enabling better recovery, fell short of expectations and resulted in too many SAN islands and too much wasted space.
Action item: The recent economic downturn provides CIOs with an opportunity to impose service level standards that aggressively use virtualized storage as the default tier 2 infrastructure. CIOs should communicate with lines of business and form a cost-cutting partnership that trades a 'my storage' mentality for an 'our storage' paradigm.
During the dot-com bust, a huge amount of used IT equipment entered the market and, while it does not appear that we will see the same volume of equipment in the current financial climate, it is clear that some businesses are failing, especially smaller ones. So we can expect an increased supply of used equipment. Moreover, the vendors are going to be very hungry over the next few quarters, and they will be forced to bid used hardware to users who cannot afford new. Users should in fact be pressing their vendors to propose creative alternatives using used and refurbished hardware. The leasing companies will love it.
What’s more, by employing storage virtualization, users do not have to scale back their performance and availability requirements. In-band caching virtualization appliances and uber-boxes can meet or exceed service level requirements, including performance and availability, using non-tier-1 storage hardware. Server- and network-based storage virtualization will not improve storage response times but will bring the benefits of pooling all your storage, both new and used. Both also offer high-availability features, but one must carefully navigate interoperability matrices.
On the down side, many newly budget-constrained users will look to extend leases on existing hardware and/or turn purchased assets into cash with sale-and-leaseback arrangements, thus reducing the volume of hardware that would normally have entered the used market. Also, storage hardware vendors have vigorously tried to control the used market with non-transferable software/microcode licenses -- a tactic whose use is inversely proportional to their hunger.
Action item: Users should explore the used IT equipment market; have an aggressive conversation with incumbent vendors about supplying used equipment as well as more flexibility in transferring software licenses; and investigate how storage virtualization could stitch together a vast array of heterogeneous storage.