Why Flash is Both a Threat and an Opportunity for Array Vendors

In the last twenty years we’ve witnessed a steady migration of storage function from the host CPU to the array controller. Storage services such as replication, copy services, data migration and data protection have ended up as solutions sold by array vendors and have created a multi-billion dollar industry. It made sense. Lacking a way to store persistent data close to the CPU, on the processor side of the channel, architects chose storage networks as the persistent store of choice. They realized the penalty of performing I/O to external disk was offset by the benefits of sharing persistent storage. Array vendors have also made tremendous strides in performance by using intelligent caching and other sophisticated algorithms to speed response times and throughput. External storage networks have become the dominant deployment model for the vast majority of mission-critical applications.

Server Vendors and ISVs Strike Back

Server and software vendors want to take this function back and reclaim the space. They see a major opportunity to improve application performance, perhaps by 10X, and to capture some of the revenue that array vendors have created. Flash is an enabler of this trend for two key reasons: 1) driven by consumer demand and volume shipments, the price of flash is coming down more rapidly than the price of spinning disk; 2) flash is persistent. As such, it seems inevitable that flash will migrate toward the host and function will migrate with it, ending a twenty-year gold rush. Oracle’s Exadata Version 2 is an excellent example of this trend, as are deployments from startup Fusion-io.

Persistent Flash Enables Storage Services at the Host

By packaging persistent, high-speed flash closer to the CPU, developers can and will architect applications to exploit this new resource. Host-side flash is in the best position to handle the bandwidth coming off of new Intel multi-core chips without going to the other side of the channel and incurring the well-known performance bottlenecks associated with performing I/Os to spinning disk. This will make applications run much faster and users will be thrilled. The challenge is that this flash resource will have to be protected and shared, and it’s unclear how that will happen. So in the near term, this capability will do well only in mid-range and lower-end environments, or in workloads that are bespoke and isolated from the balance of the application portfolio. Storage networks will remain the most logical way to widely share and protect critical data across an organization. As well, traditional storage services can be leveraged across multiple applications and will provide better efficiency than services purpose-built for a particular application.
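To see why the performance claim is plausible, here’s a back-of-the-envelope sketch in Python. The latency figures are rough assumptions for illustration, not vendor benchmarks: roughly 5 ms for a random read on spinning disk versus roughly 50 microseconds for host-attached flash.

```python
# Rough latency comparison: all numbers are assumptions for illustration,
# not measured benchmarks.
DISK_READ_S = 0.005      # ~5 ms random read on spinning disk (assumed)
FLASH_READ_S = 0.00005   # ~50 us read on host-attached flash (assumed)

DEPENDENT_READS = 1_000  # serial, dependent I/Os in one transaction

disk_time = DEPENDENT_READS * DISK_READ_S
flash_time = DEPENDENT_READS * FLASH_READ_S

print(f"spinning disk: {disk_time:.2f} s")    # ~5.00 s
print(f"host flash:    {flash_time:.3f} s")   # ~0.050 s
print(f"speedup:       {disk_time / flash_time:.0f}x")
```

On these assumptions, a transaction that waits on a thousand dependent reads drops from seconds to tens of milliseconds, which is the order-of-magnitude improvement application owners are chasing.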

However, eventually suppliers will figure out a way to protect and share the data that is on flash without going to spinning disk, both within a CPU rack and at a distance. This will likely be accomplished through some type of inter-process communication using InfiniBand or some other high-speed interconnect. But the real key is software, which has yet to be invented, that secures, protects and shares that persistent flash data without having to go to spinning disk.
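To make the idea concrete, here’s a minimal sketch of one thing such software might do: acknowledge a write only after a peer node holds a copy, so the data is protected without touching spinning disk. A real design would use RDMA over InfiniBand; plain TCP, the peer address and the wire format are stand-in assumptions for this sketch.

```python
import socket

# Hypothetical peer node that mirrors our flash writes (assumed address).
PEER = ("10.0.0.2", 7000)

def replicated_write(local_flash, offset, data):
    """Write to local flash, but only return once a peer holds a copy.

    `local_flash` is any seekable, writable file-like object standing in
    for a flash device. The protocol (8-byte offset + payload, 'ACK'
    reply) is invented for illustration.
    """
    local_flash.seek(offset)
    local_flash.write(data)
    local_flash.flush()                      # durable locally...
    with socket.create_connection(PEER) as conn:
        conn.sendall(offset.to_bytes(8, "big") + data)
        if conn.recv(3) != b"ACK":           # ...and durable on the peer
            raise IOError("replica write failed; data is unprotected")
```

The point of the sketch is the ordering: the application never sees a completed write that exists on only one host, which is the same guarantee synchronous array replication provides today.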

The basic premise is to extend the notion of “cache coherency” to “flash coherency”, and the company that figures this out with the right go-to-market model could be the next Veritas of the 2010s.
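For readers unfamiliar with the analogy, here’s a toy illustration of what a flash-coherency entry might track, loosely borrowed from MESI-style CPU cache-coherency protocols. The states and callbacks are invented for illustration; no shipping protocol is being described.

```python
from enum import Enum

class PageState(Enum):
    MODIFIED = "M"   # this host holds the only up-to-date copy
    SHARED = "S"     # read-only copies may exist on several hosts
    INVALID = "I"    # local copy is stale; re-fetch before use

class FlashPage:
    """Toy coherency entry for one flash page; purely illustrative."""

    def __init__(self):
        self.state = PageState.INVALID

    def on_local_write(self, invalidate_peers):
        invalidate_peers()               # tell other hosts their copies are stale
        self.state = PageState.MODIFIED

    def on_remote_read(self, send_copy):
        if self.state == PageState.MODIFIED:
            send_copy()                  # supply the latest data to the reader
        self.state = PageState.SHARED
```

Just as cache coherency lets multiple cores safely share DRAM, flash coherency would let multiple hosts safely share persistent flash; that is exactly the software problem described above.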

Counter Arguments

The arguments against this scenario are emerging: this capability will only be suitable for read data; external storage still provides better operating leverage across the application portfolio; flash is not reliable, especially on writes, and its write performance is poor; flash is far too expensive; and the technical challenges of securing, protecting and sharing data in memory are enormous.

The Way Forward

My bet is that consumer demand will continue to explode and enterprise flash will be the beneficiary, driving prices down below those of spinning FC disks by 2012-2013 and setting up a Tier 0 flash/Tier 3 SATA scenario over time. Reliability will continue to improve, and the incredibly innovative and intelligent inventors in the technology business will find a way to solve these and other problems. Further, this architecture is highly complementary to virtualization and the cloud, as these emerging modes of computing are screaming for higher performance. And while read data will be the initial candidate for placement on flash, database log files and other write data in memory (DIM) will emerge as candidates; of this I am very confident.
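A tiering scenario like that could be driven by a very simple placement policy. The sketch below is illustrative only; the thresholds and tier names are invented, and real systems would weigh far more signals than access frequency.

```python
# Toy two-tier placement policy: hot blocks earn flash (Tier 0),
# cold blocks fall back to SATA (Tier 3). Thresholds are assumptions.
PROMOTE_AT = 100   # accesses/day to earn a Tier 0 flash slot (assumed)
DEMOTE_AT = 5      # accesses/day below which a block moves to Tier 3 (assumed)

def place(access_counts):
    """Map block names to tiers based on daily access counts."""
    placement = {}
    for block, hits_per_day in access_counts.items():
        if hits_per_day >= PROMOTE_AT:
            placement[block] = "tier0-flash"
        elif hits_per_day <= DEMOTE_AT:
            placement[block] = "tier3-sata"
        else:
            placement[block] = "stay-put"   # warm data stays where it is
    return placement

# Example: a hot database log, a cold archive and a warm index.
print(place({"db-log": 500, "archive": 1, "index": 40}))
```

The logic is trivial on purpose: the point is that once flash sits in the storage hierarchy, tiering decisions of this kind can be automated rather than hand-managed.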

Frankly, the best argument against this trend is inertia. People are comfortable with today’s infrastructure; it’s hardened and it works well. Applications will have to be rewritten to take advantage of this approach, and that will take time. But the fact is applications are already taking advantage of this trend; it’s happening today and it’s inevitable. Suppliers should embrace it. This is a tremendous opportunity for server vendors, ISVs, startups and, yes, even storage suppliers.

After all, who has better knowledge of how to secure, protect and share data than storage companies? I think they have two choices: ignore the trend and see what happens, or lead the charge.
