NAND technology has changed the marketplace for personal hand-held devices. Lower power consumption and improved durability have driven the eclipse of magnetic media in this sector. The same is likely to happen in the laptop market. This has created a very high-volume flash market, which is driving prices down fast. This marketplace is driven by consumer dynamics on a much wider scale than just the PC market.
So where is the best place to introduce such technology in the data center? Not all the advantages of NAND technology are as relevant there. For example, saving power is nice, but there is no business case if the drives are thirty times more expensive.
EMC has introduced a NAND solid-state disk (SSD) "tier-0" layer within its high-end DMX storage array. The good news is that the tier-0 SSD disks look like any other disk, and with a few minor tweaks can take advantage of the array's storage management software, including EMC's thin-provisioning software. The bad news is the price: roughly $30,000 or more per disk. A small ten-disk configuration for RAID 6 plus spares could cost $300K for just over one terabyte of storage! Data can only be moved dynamically to this type of storage from within the same array, candidate volumes will have to specify that this type of storage is required, and moving data into the array from outside will require a disruption to the applications.
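The arithmetic behind that estimate can be sketched as follows. The drive capacity and RAID layout here are illustrative assumptions (the article gives only the per-disk price and disk count), not published DMX configuration details:

```python
# Illustrative cost sketch for a small tier-0 SSD configuration.
# Assumptions: ~$30K per drive (from the article); a 146 GB drive size
# and an 8-drive RAID 6 group (6 data + 2 parity) with 2 spares are
# hypothetical placeholders chosen for illustration.
DRIVE_PRICE_USD = 30_000
DRIVE_CAPACITY_GB = 146
TOTAL_DRIVES = 10           # 8 in the RAID 6 group + 2 spares
RAID6_DATA_DRIVES = 6       # data drives in an 8-drive RAID 6 group

total_cost = TOTAL_DRIVES * DRIVE_PRICE_USD
raw_gb = TOTAL_DRIVES * DRIVE_CAPACITY_GB
usable_gb = RAID6_DATA_DRIVES * DRIVE_CAPACITY_GB

print(f"Total cost: ${total_cost:,}")        # $300,000
print(f"Raw capacity: {raw_gb} GB")
print(f"Usable capacity: {usable_gb} GB")
print(f"Cost per usable GB: ${total_cost / usable_gb:,.0f}")
```

The exact usable capacity depends on the drive size and RAID layout chosen, but the headline number is robust: the cost per usable gigabyte lands in the hundreds of dollars, orders of magnitude above commodity disk.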
The current alternatives for improving I/O performance are large server RAM (no I/O is the best I/O), larger storage cache (storage-controller RAM, more expensive than NAND SSD but able to improve the performance of all I/O in an array), stand-alone SSDs, or short-stroking disks. These are well-tried and well-understood alternatives. For some specific applications where there are consistent, random, high access rates to a small amount of data, or very high I/O write rates that swamp the array's fast-write capability, SSDs will be a valuable addition to the storage administrator's armory.
EMC OEMs the drives from STEC and does not have any exclusive capability. Hitachi's array architecture would allow an additional benefit if it decides to put STEC flash drives in the controller. Hitachi's approach of putting virtualization in the controller has the advantage of high performance and the ability to move volumes dynamically to that device from any array in the data center. This could mean much better utilization of the expensive SSDs.
The biggest limitation to effective use of these devices is that whole volumes must be allocated to them. Wouldn't it be nicer if the high-activity blocks of any volume or file could be migrated to the solid-state disks, and the rest left on standard disk? This would utilize the SSDs much more efficiently and would automate the allocation process.
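The kind of block-level heat tracking and migration this would require can be sketched as follows. This is a toy model, not any vendor's implementation; the class name, thresholds, and API are all hypothetical:

```python
from collections import Counter

class BlockTierer:
    """Toy sketch of block-level tiering: count accesses per block and
    keep the hottest blocks on a fixed-size SSD tier. (Hypothetical
    illustration; no vendor's actual migration policy.)"""

    def __init__(self, ssd_capacity_blocks):
        self.ssd_capacity = ssd_capacity_blocks
        self.heat = Counter()       # block id -> access count
        self.ssd_resident = set()   # blocks currently on the SSD tier

    def record_access(self, block_id):
        self.heat[block_id] += 1

    def rebalance(self):
        # Promote the hottest blocks up to SSD capacity; demote the rest.
        hottest = {b for b, _ in self.heat.most_common(self.ssd_capacity)}
        promote = hottest - self.ssd_resident
        demote = self.ssd_resident - hottest
        self.ssd_resident = hottest
        return promote, demote

# Example: an SSD tier with room for 2 blocks; block 1 is hottest.
tierer = BlockTierer(ssd_capacity_blocks=2)
for b in [1, 1, 1, 2, 2, 3]:
    tierer.record_access(b)
promote, demote = tierer.rebalance()
print(sorted(tierer.ssd_resident))   # [1, 2] -- the two hottest blocks
```

A real implementation would also need to decay old heat counts, bound migration traffic, and keep the block map crash-consistent, but the core idea is just this: per-block statistics driving placement, instead of per-volume allocation.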
3PAR's block-based virtualization is the closest architecture to this ideal, with the theoretical ability to monitor each block and migrate blocks to the optimum location. IBM's newly acquired XIV technology has a similar architecture and could also benefit from this type of approach. The use of SSD disks and the block-based virtualization architecture could be used to bring tier-two+ storage up to tier-one+ performance in incremental steps.
Appliance virtualization approaches such as IBM's SVC and EMC's Invista would seem to put significant complexity and performance impediments in the way. Introducing block-based virtualization into these appliances could improve the attractiveness of this approach.
Microsoft's announcement of a slew of server virtualization features points to the potential of bringing virtualization of I/O back into the file system at the server level, allowing the placement of blocks on different performance devices to optimize the cost/performance balance for the application as a whole, and minimizing the cost of array controllers. Traditional storage array functions such as fast write could be moved to the drives. It will be interesting to see if Google integrates such functionality into the Google File System, and how EMC's Hulk and Maui will incorporate these technologies.
Action Item: NAND storage will continue to drop in price and will very probably be a disruptive technology to the traditional storage array market. Storage executives should develop a close understanding of the applications running in their data centers, and develop cost models of performance and capacity specific to their business. This will allow discussion to rise above vendor polemics and the allure of the architecture "du jour".
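The cost models suggested above can start very simply. The sketch below compares two media types on the two axes that matter, $/GB and $/IOPS; every price and performance figure here is a hypothetical placeholder, to be replaced with numbers from your own environment:

```python
# Illustrative cost model: $/GB vs $/IOPS for two media types.
# All prices and performance figures are hypothetical placeholders.
media = {
    "15K RPM HDD": {"price_usd": 1_000, "capacity_gb": 146, "iops": 180},
    "NAND SSD":    {"price_usd": 30_000, "capacity_gb": 146, "iops": 20_000},
}

for name, m in media.items():
    per_gb = m["price_usd"] / m["capacity_gb"]
    per_iops = m["price_usd"] / m["iops"]
    print(f"{name}: ${per_gb:.2f}/GB, ${per_iops:.2f}/IOPS")
```

Even with placeholder numbers the pattern is the point: disk wins decisively on $/GB, flash on $/IOPS. Which one a given application should buy depends on whether its workload is capacity-bound or access-bound, which is exactly why understanding the applications comes first.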