Storage Peer Incite: Notes from Wikibon’s October 23, 2007 Research Meeting
Moderator: Peter Burris & Analyst: Josh Krischer
This week Wikibon presents IBM: Scaling performance, capacity and functionality peaks. This week's Peer Incite Meeting was dedicated to a discussion of IBM's Oct. 23 storage announcements, which have brought IBM back into approximate parity with HDS and EMC in the high-end storage market in terms of functionality, performance and capacity. Highlights of the announcement are summarized in a piece written for the meeting by leading subject-matter expert and Wikibon community member Josh Krischer. A link to that piece, published on the Wikibon site, is in the first article in this newsletter. -- Bert Latamore
IBM System Storage DS8000 series October 2007 Announcement Review
On October 23rd, 2007, IBM announced enhancements to its high-end storage system, the IBM System Storage DS8000 series. This note shares perspectives on the latest innovations and enhancements incorporated into this enterprise-class disk system. The enhancements are microcode-only, and most will be available free of charge for deployment on installed systems as well. General availability for most of the enhancements is December 7, 2007. While the announcement did not include some expected hardware improvements, IBM has plugged critical holes in its product line, eliminating objections to including IBM in high-end storage RFPs. The announcement also included improvements to the IBM SAN Volume Controller (SVC), virtual tape enhancements, and new N series file virtualization capabilities.
IBM: Scaling performance, capacity and functionality peaks
IBM today (Oct. 23, 2007) delivered an announcement that is striking as much for its competence as for any specific technical extension. After years of announcements that tended to underwhelm, IBM appears to have delivered one that hits the key issues of performance, capacity and functionality with a product set that is on par, if not leading, in all three storage dimensions.
The centerpiece of the announcement was a set of important functional enhancements to the DS8000 series, including FlashCopy SE, z/OS Global Mirror improvements, a new technology for avoiding hot spots in an array, and the System Storage Productivity Center for better administering IBM-installed storage devices. These enhancements rectify weaknesses in IBM’s storage functionality that often left it regarded as less than competitive, forcing it to discount against key competitors.
Specifically, we point out the importance of the enhancements to the Global Mirror technology, which offer a marked improvement over the original XRC technology for remote copy. By adding multiple readers, IBM effectively increases not only the amount of data managed but also the throughput of that data in remote-copy applications. Clearly the DS8000 announcement puts IBM strongly back into a fully competitive position with HDS and EMC.
Many users expected to see the POWER6 microprocessor become the platform for the DS8000, but it appears IBM will push that off to a future announcement, leading us to anticipate more additions and advances on this important platform in the future.
The second part of the announcement was a set of modest but nonetheless pointed enhancements to the SVC SAN controller. While the enhancements are mostly focused on a new disk-to-disk copy technique and formal SVC support for VMware ESX Server, these seemingly small additions nonetheless extend IBM’s strategic advantage and continued development in the emerging area of storage virtualization.
As users continue to highlight their need for virtualization technologies, the SVC, which has passed 3,400 system installations, has a clear advantage from both the installed-base and application-affinity standpoints. Also very important, IBM introduced a virtual global file manager for the N series (OEMed from Network Appliance) that simplifies the traditional headaches associated with migrating and administering large stores of files. The final area of importance was a set of enhancements to the TS7520 virtualization engine, which actually seemed constrained compared with what we thought IBM might announce. The TS7520 now runs Tivoli Storage Manager directly on the box, and we expect this is likely to be an architectural direction for IBM: moving increasing numbers of storage-related applications directly to the controller.
As users consider this announcement, we think that they should closely focus on three things:
- IBM has muscled its way back into full membership in the high-end storage game, delivering functionality, performance, and capacity parity with this announcement. As a result, it may be slower to discount pricing.
- The array of functions now available on a wide variety of boxes increases the requirement for users to identify their performance, capacity, and functionality needs explicitly and purchase to those needs. As storage packaging becomes more complex, users do not want to pay, either out-of-pocket or in operational dollars, for things they will not exploit.
- Finally, the Global Mirror product extends remote-copy functionality onto new types of technology and new technology exploitations. As users attempt to answer their overall data- and processing-resilience questions, the issue of how remote copy is handled from an operational and architectural standpoint will probably grow in importance.
Action item: IBM’s Oct. 23 announcement is a solid effort from a renewed storage supplier. While it will not necessarily place IBM back into the leadership position in the enterprise storage industry, users should take full advantage of IBM's reemergence as a major force in high-end storage and virtualization to achieve the best price for the right combination of capacity, performance, and functionality.
IBM: High end credibility is crucial for overall storage success
IBM has been playing catch-up in the high end of the storage marketplace. IBM's October 2007 DS8000 Turbo Series announcement is an attempt to shore up its position in this space and to support competing more broadly in the storage market with products like the SAN Volume Controller (SVC). While SVC is a category leader, the DS8000 has been lagging, a fact that threatens IBM's goal of regaining storage leadership for the simple reason that if IBM can't compete for the world's most demanding applications (from a response-time perspective), it won't be taken seriously as a storage leader in the data center.
Does the DS8000 accomplish the objective of keeping IBM competitive at the high end? Yes, but there's more work to be done, and customers should continue to observe IBM's actions and investments in this area closely. Specifically, with Dynamic Volume Expansion and FlashCopy SE, IBM has addressed some major holes that have kept it out of competitive bids. However, IBM needs to be more than column fodder for EMC-wired deals, and customers should expect and demand continued progress in the form of hardware improvements, higher-capacity devices, thin-provisioning functionality, and further investments to keep pace with EMC and Hitachi.
Storage virtualization with SVC is a different story. IBM is executing brilliantly on its strategy to virtualize heterogeneous data center assets while EMC continues to make excuses for Invista and Hitachi struggles to position the diskless USP VM, leaving the door wide open for IBM.
Action Item: The combination of a broad storage portfolio, excellent services capabilities, SVC leadership, and adequate progress at the high end makes it hard for customers to exclude IBM from the short list of storage suppliers. While users should continue to push IBM to accelerate the pace of product functionality, they should consider IBM in earnest as a serious storage player on critical RFPs.
IBM's Virtual File Manager™ brings the potential for file consolidation
IBM hopes that the System Storage N series Virtual File Manager™ (VFM, OEMed from NetApp) will bring the same success to file-based storage as the SAN Volume Controller (SVC) has brought to block-based storage. The key is to consolidate file systems virtually into a global namespace. However, instead of doing this within one large filer (as BlueArc does), IBM uses an external appliance that provides this across all connected filers. IBM is hoping that by providing a low entry point and demonstrating the ability to solve change, migration, and file-data-sharing problems, it can emulate the SVC success.
File-based storage is the last bastion of storage consolidation. It is generally very distributed, with different departments using a range of filer products. Similar appliance solutions with much richer function are already available (e.g., F5 Acopia). IBM will have to work hard to repeat that success.
Action item: Users need to put in place organizational mechanisms for implementing standards and sharing the costs of file-based virtualization. This is a prerequisite for being able to persuade users to adopt this approach and achieve an overall reduction in the costs of managing, moving and sharing files.
Storage integration: IBM's scale out strategy
IBM's storage strategy is scale-up and scale-out. For scale-up, IBM has developed or bought in a series of products that compete adequately in each storage class: high-end, modular, low-end, tape, VTL, etc. For scale-out, IBM has concentrated on developing technologies and services that provide a common framework for efficient management of these piece parts. The framework comes in the form of the SVC and specific management software.
The most important parts of this announcement were the shoring up of the high-end (scale-up), and incremental enhancements to the scale-out technology integration management framework, or "the glue." This included enhancements to the SAN Volume Controller (SVC), the introduction of the System Storage Productivity Center, and the Virtual File Manager™. On balance, IBM's integration strategy is the best in the business.
Action Item: IT management should expect vendors to perform the integration of storage piece parts. Integrating internally is not a sustainable long term strategy.
Even bad benchmarks can lead to good standards
Performance benchmarking has been a first step on almost all paths to infrastructure technology standardization; storage should be no different. In networking, server, and most other infrastructure domains, performance benchmarking has shined a strong analytic light on product performance, capacity, and functional claims, highlighting specific, common attributes of value in a manner that encourages reasonable comparisons. Even when performance benchmarks are imperfect or incomplete, benchmarking nonetheless has fostered the emergence of interoperability protocols, configuration and implementation conventions, and consistent approaches to assessing functional capabilities. The SPC (Storage Performance Council) benchmark no doubt is imperfect and will be attacked as not representing real workloads. However, it provides a springboard for comparison now, and it will improve rapidly, becoming more representative and gaining credibility, as users demand it and vendors invest in it.
Action Item: Users should welcome the SPC benchmark as a basis for product comparison. Benchmarks can be expensive to run, which can put smaller storage suppliers at a disadvantage, so only smaller suppliers should be given a benchmarking pass by users. Larger suppliers should be held accountable when they invest in efforts to discredit benchmarks instead of investing in reasonable benchmarking.
IBM: Avoiding forklifts, for now
Good news for the DS8000 installed base. The benefits of IBM's new DS8000 can be achieved with microcode updates, no forklift necessary -- yet.
"Forklift upgrade" is a pejorative term used to describe a replacement of existing infrastructure with entirely new hardware. The term originated with mainframes, when field upgrades were not available from the vendor and a forklift was required to physically remove the existing system and deliver the new one.
When a storage supplier announces a product that cannot be upgraded in the field, competitors point out (loudly) that conversion requires a forklift upgrade. When a vendor announces a product that requires a forklift, it will always claim this is no big deal because it has financial incentives in place for the loyal customers who endure the pain of a forklift upgrade.
Every storage vendor has to do forklift upgrades at some point, some more frequently than others. Vendors would often rather do forklifts because they're more profitable, and because forklift upgrades lock in customers, they're not all bad for the vendor. But too many forklift upgrades can rankle users. The frequency of forklifts will depend on:
- The architectural design of the product,
- The degree of relative performance competitiveness the vendor currently experiences,
- Timing.
A better design will offer more upgrade options. A highly competitive announcement from a performance perspective will often be delivered as a forklift because the vendor is in a stronger position to maximize profits. Timing will depend in part on the second factor, the competitive posture of the vendor, and also on the time of year. Generally, vendors like to avoid the large disruptions associated with forklift upgrades in the fourth quarter, typically the most active quarter for deals.
In the case of the October 2007 DS8000 announcement, IBM determined that it was not necessary to include an update using the POWER6 microprocessor, as many had expected. Perhaps IBM is trying to avoid disrupting customers in the all-important Q4. Or maybe it determined that it simply didn't need to introduce this now and could wait for a 2008 kicker that includes the world's fastest microprocessor. Either way, as with EMC and Hitachi products, IBM will eventually have to introduce a system requiring a forklift upgrade.
Action Item: Customers of the new DS8000 should understand IBM's product roadmap and lock in terms and conditions today that may affect migrations tomorrow.