A couple of weeks ago, I had the opportunity to attend the Next Generation Storage Symposium, an event focused on the storage industry. What quickly became clear, however, is that the lines between many areas of the hardware business are blurring while, at the same time, we are seeing deep commoditization of hardware platforms.
Nowhere was that clearer than in our group’s session with Nutanix. While much of the event focused on pure storage technologies, the Nutanix session was sort of the “anti-storage” event of the day, although storage obviously plays a major part in the company’s offerings. One might even think that Nutanix didn't belong in the group, particularly given one of the company’s favorite taglines: “The SAN-free Datacenter.”
A move from resource silos to converged architectures
However, Nutanix is just one of a growing number of players in a space defined by the elimination of expensive storage, specialized hardware, and the separation of resources – compute, storage, and networking – in favor of converged, simpler devices built on less expensive commodity hardware platforms. It’s a space I believe CIOs should watch very carefully, as these players offer the potential to rethink the data center in a number of ways while enabling new services.
Hardware commoditization has benefits for users, starting with lower cost. For more information regarding the benefits of the various convergence options, read Stu Miniman’s Wikibon Primer on Converged Infrastructure.
Other players in Nutanix’s space include Pivot3 and Simplivity. All three companies have created devices based on commodity hardware and are working to differentiate themselves through software and unique hardware scaling capabilities not found in traditional environments.
However, beyond Nutanix, some of the storage industry's biggest players have also jumped onto the convergence bandwagon. That said, some of these players rely on traditional hardware offerings to accomplish their convergence goals and take what I see as a “rack-based” approach to convergence. That is, everything you need to run your data center exists inside a single rack, and you still get support from a single vendor. Emerging players (again, Nutanix, Pivot3, Simplivity) are taking a “U-based” approach, in which all of the resource elements are enclosed inside a single hardware device, a model that looks very different from the larger players' offerings.
Primarily due to the rise of virtualization, x86-based servers are now considered commodity hardware. With some exceptions, vendors have yet to seriously distinguish their x86-based servers from the competition. The software layer, in this case the hypervisor, has made differentiation in the underlying hardware largely irrelevant.
On a number of levels, this is a good direction. First, the commoditization trend has simplified the data center environment, which, as mentioned, is now normalized through the use of a software abstraction layer. Second, it allows CIOs to better focus on the overall cost of commoditized components in an effort to reduce costs while, at the same time, maintaining or even improving service levels and capabilities.
Convergence is becoming more compelling
For years, IT organizations have built data centers organized around three key hardware segments – compute, network and storage. This just made sense at the time. Each resource element was tuned and scaled on its own as needs around that resource changed. For example, if a data center was running low on disk space, it added more storage to the SAN environment.
This “stovepipe” nature of resource management also resulted in the creation of teams within IT tasked with managing each resource. This was due to the breadth and depth of knowledge necessary to manage each disparate resource.
With the rise of commoditization and the increasingly software-defined nature of data center hardware components, granularly converged hardware is growing in popularity. These solutions bring with them massively reduced complexity, lower overall costs, and easier upgrade paths than are often found with more traditional solutions.
For example, with Nutanix, organizations simply deploy Nutanix’s Node-based Blocks, each of which provides:
- 48 cores of processing power,
- 1.3 TB of PCI-e cache,
- 1.2 TB of SATA-based SSD,
- 20 TB of storage capacity,
- 192 to 768 GB of RAM,
- 4 x 10 GbE and 8 x 1 GbE network interfaces.
The specs above assume that you’ve deployed four nodes. As you need more capacity, you simply add more nodes, which become part of the block.
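The linear scaling described above is easy to reason about: each node contributes a fixed slice of the block's total resources. A minimal sketch, using per-node figures derived from the four-node block specs quoted above (these are illustrative assumptions, not official vendor numbers):

```python
# Illustrative sketch: aggregate capacity of a hypothetical hyper-converged
# block that scales linearly with node count. Per-node figures below are
# simply the four-node block specs divided by four -- assumptions for
# illustration, not vendor-published per-node numbers.

PER_NODE = {
    "cpu_cores": 12,         # 48 cores / 4 nodes
    "pcie_cache_tb": 0.325,  # 1.3 TB / 4 nodes
    "ssd_tb": 0.3,           # 1.2 TB / 4 nodes
    "hdd_tb": 5.0,           # 20 TB / 4 nodes
    "ram_gb_min": 48,        # 192 GB / 4 nodes
    "ram_gb_max": 192,       # 768 GB / 4 nodes
}

def block_capacity(nodes: int) -> dict:
    """Total block resources for a given node count."""
    return {resource: value * nodes for resource, value in PER_NODE.items()}

print(block_capacity(4))  # the four-node block described above
print(block_capacity(8))  # doubling the block by adding four more nodes
```

The point of the model is that there is no separate "storage upgrade" or "compute upgrade" step: adding a node grows every resource in lockstep, which is both the appeal and, as discussed below, the current limitation of the single-SKU approach.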
Little to no compromise
You may wonder about the ability of enterprise-grade hypervisors to function in such an environment. The players in this space have all created solutions that maintain all of the features you’ve come to know and love in vSphere, and they’re adding support for other hypervisors, including KVM and Hyper-V.
On the enterprise capabilities front, you don’t need to compromise when it comes to these solutions.
One challenge remains
Right now, many of the smaller hyper-convergence players offer single-SKU products. Given the desire to quickly and easily scale the environment as a whole, this isn’t surprising, but it does make it impossible to easily scale just one resource component, should that become necessary.
Over time, I fully expect that these players will create more differentiated products that allow CIOs to pick and choose where to scale the environment while continuing to maintain as much simplicity as possible.
Update: December 5, 2012 - Nutanix releases major updates
I would be remiss if I didn't update this article to reflect a very recent and significant change to the Nutanix product discussed above. This week, Nutanix released new hardware platforms as well as version 3.0 of the company's operating system. There is nothing incremental about these updates; Nutanix is clearly going for the gold.
Nutanix will continue to sell its existing product, but the company has added the NX-3000 to its lineup. The NX-3000 picks up where the NX-2000 leaves off. With the NX-3000, each block now boasts:
- 64 cores of processing power (dual Intel Sandy Bridge E5-2660 processors, 8 cores / 2.2 GHz per node),
- 1.6 TB of PCI-e cache (400 GB per node),
- 1.2 TB of SATA-based SSD (300 GB per node),
- 20 TB of storage capacity (5 TB per node),
- 512 GB to 1 TB of RAM (128 GB to 256 GB per node),
- 8 x 10 GbE and 4 x 10/100 network interfaces (2 x 10 GbE per node).
With the NX-3000, Nutanix estimates that organizations can run 110-130 server workloads or 300-400 virtual desktops per block.
Operating System version 3.0
While the hardware upgrades alone are worth the price of admission, Nutanix's new operating system is also an impressive update in its own right, with the following new features:
- Support for both KVM and vSphere with Hyper-V support coming at some point in the future,
- Inline deduplication to conserve disk space,
- Dynamic cluster expansion via Bonjour-based discovery, further simplifying the administrative experience,
- Compression to conserve yet more disk space.
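Inline deduplication, in general, works by fingerprinting incoming data and storing each unique chunk only once, with duplicate writes collapsing into references. A toy sketch of the content-hash approach (an illustration of the general technique only; Nutanix's actual implementation is proprietary and may differ substantially):

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: each unique chunk is kept once,
    and writes of duplicate data only add a reference to it."""

    def __init__(self, chunk_size: int = 4096):
        self.chunk_size = chunk_size
        self.chunks = {}  # sha256 digest -> chunk bytes (unique copies only)
        self.files = {}   # filename -> ordered list of chunk digests

    def write(self, name: str, data: bytes) -> None:
        digests = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # store only if new
            digests.append(digest)
        self.files[name] = digests

    def read(self, name: str) -> bytes:
        return b"".join(self.chunks[d] for d in self.files[name])

store = DedupStore()
payload = b"A" * 8192            # two identical 4 KB chunks
store.write("vm1.img", payload)
store.write("vm2.img", payload)  # a byte-for-byte duplicate
assert store.read("vm2.img") == payload
print(len(store.chunks))         # → 1: four logical chunks, one physical copy
```

This is why deduplication pays off so well in virtualized environments: dozens of VM images built from the same template share most of their blocks, so the physical footprint grows far more slowly than the logical one.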
Between the major hardware upgrade and the addition of a number of sought-after enterprise-grade features in the operating system, it's yet more validation that hyper-convergence is here to stay and shouldn't be ignored.
Action Item: CIOs facing an upcoming replacement cycle should consider whether these hyper-converged solutions are the right fit for their organizations. As hyper-converged vendors release more preconfigured, resource-based building blocks, this simple approach to building the data center becomes more and more appealing, particularly as it means that currently disparate IT teams may be retasked to other duties.
Footnotes: Updated: 12/5/2012 to reflect new product