Today’s storage market is an exciting place to watch innovation unfold, but the rapid pace of change can be a cause for concern for CIOs, who need to ensure that their storage investments meet three key criteria:
- Sufficient capacity,
- Sufficient performance,
- Ability to remain viable for the expected equipment lifecycle.
Virtual environments have traditionally been built out with storage designed as a separate entity unto itself. The server side has consisted of compute/processing and RAM and has been designed with those two resources in mind. In fact, there has been a strong push toward minimizing or even eliminating storage in servers in favor of shared storage. What storage remains local has been dedicated to supporting the local operating system or hypervisor, with all other storage needs addressed by remote, shared storage.
Why is this? The reason is simple: Storage is the primary shared resource in the entire environment and carries particular significance as a result. However, as environments morph to support more and bigger virtual workloads, new demands are placed on storage. Further, VDI initiatives require a complete rethinking of how storage is handled. After all, with server virtualization, one can more easily “hide” storage performance issues. Under VDI, though, the user interfaces directly with the virtual environment, so all of the potential shortcomings of that environment are front and center.
Vendors are developing any number of solutions in an attempt to solve this growing customer dilemma. What the various solutions have in common is that they leverage flash-based storage to meet these needs. They sit in one of two locations: on the server itself, as internal or directly attached storage, or on the network, as flash-based storage arrays.
There is some irony here: As companies have built out massive infrastructures to support virtual workloads, they’ve done so with shared storage at the heart of the solution. This shared storage has enabled workload migration options that were simply not possible before. Now, as these environments become “needier,” some vendors are pushing storage back to the local server in order to enjoy lower latency and higher throughput. After all, placing storage on the server itself will almost always result in faster access than placing it on the network.
Companies such as Fusion-io are taking server-side flash-based storage to new levels. The company sells PCI Express cards outfitted with between 160 GB and 10.24 TB of capacity. Further, the solution enjoys broad operating system support--64-bit Microsoft Windows XP/Vista/7/Server 2003/Server 2008, RHEL 4/5/6, SLES 10/11, OEL v5, VMware ESX 4.0/4.1/ESXi 4.1/ESXi 5, Solaris 10 U8/U9 (x64)--allowing it to serve both physical and virtual workloads. Solutions like Fusion-io’s ioMemory technology deliver access times up to 100x faster than traditional spinning storage. In fact, between the solid state nature of the solution and its proximity to the CPU, the ioMemory product boasts an average latency of just 29 μs (that’s microseconds) and an IOPS figure of over 200,000.
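To put those numbers in perspective, a quick back-of-the-envelope comparison helps. The sketch below assumes a 5 ms average latency for a 15K RPM disk (roughly 200 IOPS per spindle) and a 4 KB I/O size; those figures are illustrative assumptions, not vendor benchmarks.

```python
# Rough comparison of the cited ioMemory figures against an assumed spinning disk.
disk_latency_s = 5e-3     # assumed ~5 ms average latency for a 15K RPM disk
flash_latency_s = 29e-6   # 29 microseconds, per the figure cited above
flash_iops = 200_000      # vendor-cited IOPS figure
disk_iops = 200           # assumed IOPS for a single 15K RPM spindle
block_size = 4 * 1024     # assumed 4 KB I/O size

print(f"Latency advantage: {disk_latency_s / flash_latency_s:.0f}x")       # ~172x
print(f"Throughput at 4 KB: {flash_iops * block_size / 1e6:.0f} MB/s")     # ~819 MB/s
print(f"Spindles needed to match IOPS: {flash_iops / disk_iops:.0f}")      # ~1,000
```

Under those assumptions, matching the card’s random I/O capability with spinning disk would take on the order of a thousand spindles.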
However, local storage alone negates some of the benefits of virtualization, such as the ability to seamlessly migrate workloads between host servers (think vMotion). This is a tradeoff that some may be willing to make in exchange for the massive performance gain.
It also demonstrates a blurring of the lines between the server and the storage. Even when this kind of storage is in use, it’s more than likely that some kind of shared mechanism is still in place.
For example, companies such as Nutanix are building solutions that leverage Fusion-io’s technology to create a shared storage environment that can continue to support the features--vMotion, high availability, fault tolerance--that administrators have come to know and love.
Fusion-io also has its own software-based caching solution. This software converts the company’s local flash-based storage device into a very fast, very large local cache. In this scenario, the servers still use traditional network-based storage, but the caching mechanism provides a major boost for I/O-intensive applications and services.
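Conceptually, that caching layer behaves like a read-through, write-through cache: reads are served from local flash on a hit and fetched from the shared array (and cached) on a miss, while writes continue to land on the network-based storage so it remains the authoritative copy. The following is a minimal sketch of the idea only; the class and device names are hypothetical, not Fusion-io’s actual API.

```python
class FlashReadCache:
    """Illustrative read-through cache: local flash in front of shared storage.
    'local_flash' and 'shared_array' are hypothetical block-device wrappers."""

    def __init__(self, local_flash, shared_array):
        self.local_flash = local_flash    # fast, server-side flash device
        self.shared_array = shared_array  # authoritative network-based storage
        self.index = set()                # block numbers currently cached

    def read(self, block):
        if block in self.index:               # cache hit: served at flash latency
            return self.local_flash.read(block)
        data = self.shared_array.read(block)  # cache miss: go over the network
        self.local_flash.write(block, data)   # populate the cache for next time
        self.index.add(block)
        return data

    def write(self, block, data):
        self.shared_array.write(block, data)     # write-through keeps the array authoritative
        if block in self.index:
            self.local_flash.write(block, data)  # keep the cached copy consistent
```

Writing through to the shared array is what preserves features like vMotion: any host in the cluster can still see the authoritative data on the network.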
There are other plays at hand, too. Some companies are building incredibly advanced solid state drive-based arrays with innovative data reduction capabilities. These tools, too, are intended to address today’s major storage problems, but they do so using a more traditional array-based paradigm.
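“Data reduction” in such arrays generally refers to techniques like inline block-level deduplication and compression, which store each unique block only once. Below is a toy sketch of that idea, assuming SHA-256 fingerprints and zlib compression; real arrays implement this inline and far more efficiently.

```python
import hashlib
import zlib

class DedupStore:
    """Toy illustration of block-level deduplication plus compression."""

    def __init__(self):
        self.blocks = {}   # fingerprint -> compressed block (stored once)
        self.volume = []   # logical volume: ordered list of fingerprints

    def write_block(self, data: bytes) -> None:
        fingerprint = hashlib.sha256(data).hexdigest()
        if fingerprint not in self.blocks:   # store unique blocks only
            self.blocks[fingerprint] = zlib.compress(data)
        self.volume.append(fingerprint)      # duplicates cost only a reference

    def read_block(self, index: int) -> bytes:
        return zlib.decompress(self.blocks[self.volume[index]])
```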
Action Item: Based on these trends, CIOs need to consider a number of factors as storage purchasing decisions are made:
- Performance-enhancing storage is migrating from shared arrays to local servers, either as primary storage or as a fast cache. What requirements do the workloads that the new storage will support place on it?
- This necessarily blurs the line between server and storage. Whereas the market has treated storage as an architecture designed separately from the server side, the two are now being designed together.
- Is network-based storage latency a primary issue that needs to be resolved? If so, consider server-side flash solutions coupled with shared storage mechanisms in order to retain the full hypervisor feature set. If network-based storage latency isn’t a major issue, consider flash-based arrays.