In my previous post in this space, I discussed the merits of solutions that erase the divide between servers and storage in enterprise environments. In such situations, it’s desirable to place very fast storage inside the server, even if it means giving something up (the ability to share that storage, for example) in return for the sheer performance of the solution.
By the way, for an outstanding overview of the flash-based storage market, read this article by David Floyer here at Wikibon.
Today, I want to talk about a slightly different approach: “spanning the blur” between servers and storage with overlays that meet specific use cases.
For years, storage costs have been measured on a cost-per-capacity basis; dollars-per-gigabyte and dollars-per-terabyte are common metrics that CIOs use to compare storage vendors on a level playing field. However, today’s IO-intensive applications have forced CIOs to consider a newer metric, dollars-per-IOPS, which I wrote about in 2010 for CBS Interactive’s TechRepublic. Although the raw pricing from that article may have changed, the overall theme remains: solid-state storage such as flash is the undisputed champion when cost is measured as a function of performance.
Another point that has held true since that article was written: measured with the traditional dollars-per-GB metric, solid-state storage remains the most expensive kind of storage you can buy.
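To make the difference between the two metrics concrete, here is a minimal sketch that computes both for a pair of hypothetical drives. The prices, capacities, and IOPS figures are illustrative assumptions only, not vendor quotes; plug in your own numbers.

```python
# Illustrative comparison of dollars-per-GB vs. dollars-per-IOPS.
# All figures below are hypothetical assumptions for the sake of example.

drives = {
    # name: (price in dollars, capacity in GB, sustained random IOPS)
    "hypothetical 15K RPM disk": (400.0, 600, 180),
    "hypothetical flash SSD":    (1200.0, 200, 20000),
}

for name, (price, capacity_gb, iops) in drives.items():
    print(f"{name}:")
    print(f"  ${price / capacity_gb:,.2f} per GB")
    print(f"  ${price / iops:,.4f} per IOPS")
```

With numbers anything like these, the flash device looks far more expensive per gigabyte but far cheaper per IOPS, which is exactly the tension described above.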
This raises a conundrum for CIOs: Is the right choice to go all-flash in order to achieve the best possible performance for the business? Or is it more desirable to stick with tried-and-true spinning storage and simply add disk spindles until desired performance thresholds are met?
Frankly, neither approach makes much sense.
The CIO who goes all-flash will spend tremendous sums of money to get the capacity necessary to run the environment; solid-state storage has yet to come close to rotational storage in terms of pure capacity. Specific use cases will certainly benefit from such fast storage, but the vast majority of the information being stored doesn’t need the speed.
The CIO who simply throws more rotating hardware at the problem will ultimately spend vast sums just to get the desired performance and may be hard-pressed to afford the spindles necessary to meet every use case.
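A rough back-of-the-envelope calculation shows why. The sketch below estimates how many spindles a purely rotational design would need to hit an IOPS target versus a capacity target; the workload targets and per-drive figures are assumptions chosen for illustration.

```python
import math

# Hypothetical workload targets and per-drive figures (illustrative only).
target_iops = 50_000           # random IOPS the application mix requires
target_capacity_gb = 20_000    # usable capacity required
iops_per_spindle = 180         # assumed sustained IOPS for one 15K RPM drive
capacity_per_spindle_gb = 600  # assumed drive size

spindles_for_capacity = math.ceil(target_capacity_gb / capacity_per_spindle_gb)
spindles_for_iops = math.ceil(target_iops / iops_per_spindle)

print(f"Spindles needed for capacity: {spindles_for_capacity}")
print(f"Spindles needed for IOPS:     {spindles_for_iops}")
# When the IOPS-driven count dwarfs the capacity-driven count, you are
# buying, powering, and managing disks purely to chase performance.
```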
Today’s CIO needs to implement solutions that provide balance: they must weigh the constant demand for ever-greater quantities of storage against equally constant demands for IO-intensive solutions. An emerging class of products, such as EMC’s VFCache, is designed to meet these competing demands and to support new business-facing solutions that might otherwise remain unattainable.
EMC’s VFCache is intended to sit transparently between an application and its storage, improving on the existing environment without disrupting it. VFCache is a PCIe x8 card that installs in the host to provide its services. It’s a caching card, transparently caching content from a traditional storage array to which the host system has access. With this fast cache in place, “hot” data stays local and doesn’t need to traverse the network to and from the legacy mass-storage system every time it’s accessed. To maintain data integrity, VFCache is a write-through device, so no data is stranded locally without also being written to long-term storage, where it remains available to be shared with other applications.
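To illustrate the write-through principle in general terms, here is a minimal sketch of a cache sitting in front of a backing store. This is not EMC’s implementation, just an abstraction of the behavior described above; the `backing_store` object and its `read()`/`write()` interface are hypothetical.

```python
class WriteThroughCache:
    """Minimal sketch of a write-through cache in front of a backing store.

    Reads are served from the local (fast) cache when possible; writes always
    pass through to the backing store before being cached, so no data ever
    exists only in the cache. A real product would also manage eviction,
    sizing, and failure handling.
    """

    def __init__(self, backing_store):
        self.backing_store = backing_store  # hypothetical object with read()/write()
        self.cache = {}                     # block address -> data

    def read(self, block):
        if block in self.cache:             # cache hit: no trip to the array
            return self.cache[block]
        data = self.backing_store.read(block)
        self.cache[block] = data            # warm the cache for next time
        return data

    def write(self, block, data):
        self.backing_store.write(block, data)  # write through to the array first
        self.cache[block] = data               # then keep the hot copy locally
```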
VFCache is an example of another way that flash-based storage is being deployed to satisfy the evolving need for different kinds of storage. Here, the extremely fast caching component is a server-based hardware solution that streamlines and enhances the overall storage operation. As the many articles here at Wikibon show, flash storage can be leveraged in numerous ways to address storage concerns.
Action Item: First, determine what problem you’re trying to solve. Perhaps you’re attempting to speed up database performance or improve overall IO for a VDI environment. Once you’ve defined the problem, choose the right storage "bucket" and then look at the various options you have at your disposal.
Footnotes: See the Wikibon Community Peer Incite that analyzed EMC's VFCache: Squinting through the Glare of Project Lightning