Enhancing Cloud Services with Hybrid Storage

From Wikibon


Storage Peer Incite: Notes from Wikibon’s July 30, 2013 Research Meeting

Recorded video / audio from the Peer Incite:

Video replay

When Utah-based colocator Voonami developed its IaaS offering, it found it needed higher performance storage. So naturally it called its incumbent provider, in this case NetApp, with whom it has had a long, happy relationship. However, said Voonami Sales Engineer Steve Newell in Wikibon's July 30 Peer Incite meeting, the all-disk solution that NetApp proposed was far beyond Voonami's budget. And NetApp seemed uninterested in listening to Voonami's needs, instead trying to dictate the expensive solution.

That sent the Voonami team on a quest for a solution that would meet the company's needs at an affordable price. Initially the team was wary of flash solutions, believing that they would be more expensive than disk. After several disappointing conversations with a variety of vendors, none of whom could meet the company's needs, the team heard of a hybrid flash storage startup called Tegile and called it as a last resort.

Somewhat to Voonami's surprise, Tegile's solution met or exceeded all of Voonami's requirements, including its high-IOPS spec, at a price that fit its budget. And Tegile worked with the team to design a solution that fit both its immediate and longer-term needs.

Today Voonami still uses its legacy NetApp disk arrays, which continue to work well. But its IaaS service is built on Tegile, which has become its preferred vendor. The main reason for that change, Newell says, is that Tegile showed itself to be a vendor that Voonami could partner with for the long term, although he admits that cost issues were also important.

The lesson for CIOs here is not to presume that flash is the "Cadillac" solution that will always be more expensive. The best storage solution today depends on the requirements that need to be filled. Disk may still be the right choice for some applications where high performance is not important. But when you need high performance, look to either a hybrid system or, if the amount of data is within reason, an all-flash array.

Bert Latamore, Editor


Enhancing Cloud Services with Hybrid Storage and Modern Infrastructure

Stuart Miniman


On July 30, 2013, the Wikibon community gathered to discuss the intersection of new infrastructure architectures and cloud computing with Utah-based Voonami. This service provider has two sites in Utah that provide colocation, hosting, and public and hybrid cloud managed services. Sharing his experiences was Steve Newell, who, after spending seven years in software development, is now a sales engineer at Voonami, which has transitioned from being a pure colocation provider into offering multiple services.

Watch the video replay of this Peer Incite.

Building the Infrastructure to Deliver Cloud Services

The infrastructure that delivered Voonami's public cloud offering was running into capacity and performance limits. The stack comprised NetApp storage, newly deployed Cisco UCS compute, and VMware for the hypervisor, network virtualization, and vCloud Director. While the storage team was very happy with the features and functionality of the NetApp solution, the 20TB expansion would have been cost-prohibitive with a disk-only solution.

The team was concerned that flash would be cost-prohibitive, based on investigations from a couple of years earlier, so many options were considered, including flash as a cache with NetApp and bids from a variety of other storage companies including EMC, HP, and Nimble. Voonami's requirements were high performance (specifically IOPS), 20TB usable capacity, multi-protocol support (both iSCSI and NFS), and a strong management layer that could give visibility into the environment. Most of the solutions either did not support NFS natively or charged extra for it. The storage team was also concerned about leaving behind NetApp snapshots and other functionality it relied on.

Towards the end of the search, Voonami came across Tegile. Not only was the price lower than the alternatives, but Tegile's hybrid flash architecture provided such high performance that the storage administrator would no longer need to spend time allocating and optimizing the infrastructure based on application requirements, simplifying operations.

How Infrastructure Delivers Cloud Services for Users

The use of SDN allows each customer to have its own "virtual data center," including an individual firewall and VPN. While VMware is the primary offering, Voonami has also created offerings based on OpenStack (at both customer locations and in the public cloud) and Microsoft Hyper-V. Steve Newell commented that customers often don't understand that cloud is not just another rack of infrastructure; they need to make changes in their architectures to take full advantage of services. A move to modular applications helps with this conversion, especially when the compute and load can each be managed separately and dynamically. Customer applications in Voonami's cloud include Web farms, Linux/Apache stacks, many Windows Web stacks, hosted desktops, and test/dev operations. Voonami can support remote replication between customer environments and the cloud, where the Tegile Zebi storage array is used at both locations.

Action item: Voonami's advice to CIOs is that nobody knows your environment as well as you do. Don't let vendors tell you what you need or what your pain points are. Too often companies compare the wrong metric rather than looking for the best real solution. Both infrastructure architectures and deployment models (onsite, hosted, hybrid, or public cloud) are changing rapidly, so users need to do thorough due diligence at the next refresh or upgrade.

Wikibon research reinforces hybrid storage as optimal choice for mainstream CIOs

Scott Lowe


Wikibon’s own David Floyer has written an incredibly detailed article entitled Hybrid Storage Poised to Disrupt Traditional Disk Arrays. This article details a comprehensive baseline which:

  • Provides a common definition of what constitutes hybrid storage,
  • Provides detailed cost comparisons that explain why true hybrid storage systems are ultimately less expensive than their traditional storage brethren as the need for more IOPS grows,
  • Explains the architecture that comprises hybrid storage systems, and,
  • Identifies the point at which hybrid storage becomes the cost leader.

The cost/performance factor

For CIOs, ensuring that the storage selection meets operational workload needs at a reasonable cost is of paramount concern, particularly since storage can often consume a not insignificant percentage of the IT budget. For many CIOs, storage capacity has become an almost secondary concern. Certain workloads are able to leverage modern array features such as deduplication and compression technologies to great effect, thus reducing the need to worry as much about capacity as was necessary in the past. For example, VDI workloads, because of the great similarity in the virtual machines that make up the solution, can often achieve 75% or higher data reduction rates in production, practically solving the capacity issue.
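The capacity arithmetic behind that claim is simple: eliminating a fraction of the data multiplies the logical capacity an array can present. A minimal sketch (the 75% figure is the VDI rate cited above; the 10 TB array size is a hypothetical example):

```python
def effective_capacity_tb(raw_tb: float, reduction_rate: float) -> float:
    """Logical capacity an array can present after deduplication/compression.

    reduction_rate is the fraction of data eliminated (0.75 means 75%).
    """
    return raw_tb / (1.0 - reduction_rate)

# A hypothetical 10 TB array at the 75% VDI reduction rate cited above
print(effective_capacity_tb(10, 0.75))  # 40.0 -> four times the raw capacity
```

In other words, a 75% reduction rate quadruples effective capacity, which is why capacity has become a secondary concern for these workloads.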

So, for CIOs, the great storage capacity expansion question is easing, if only a little bit.

However, storage performance demands for modern workloads have emerged as the next great challenge. Historical workloads in mainstream IT shops have been able to rely on spinning disks to provide enough IOPS to meet workload needs, at least with the right planning. Organizations could choose from low IOPS 7200 RPM SATA disks or 15K RPM SAS disks, which provide about double the IOPS of the SATA disks. As workloads demanded more performance, CIOs could simply add more spinning disks, generally in the form of another disk shelf.
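The spindle math that drove this "add another shelf" pattern can be sketched as a back-of-envelope calculation (the per-drive IOPS figures below are rough rules of thumb, not vendor specifications):

```python
import math

# Rough rule-of-thumb per-drive IOPS (illustrative assumptions)
SATA_7200_IOPS = 80
SAS_15K_IOPS = 175   # roughly double the SATA figure, as noted above

def drives_needed(required_iops: int, per_drive_iops: int) -> int:
    """Spindles needed to hit an IOPS target with disk alone."""
    return math.ceil(required_iops / per_drive_iops)

print(drives_needed(15_000, SATA_7200_IOPS))  # 188 SATA drives
print(drives_needed(15_000, SAS_15K_IOPS))    # 86 15K SAS drives
```

Either way, hitting a demanding IOPS target with disk alone means buying drives by the dozen, regardless of whether the capacity is needed.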

Capacity and performance decoupled

That was OK when capacity growth and performance demand were growing as one. Adding more disks both increased overall capacity and added more IOPS to the storage pool.

However, while capacity growth is still happening, modern workloads are much more IOPS hungry than has been seen in the past in mainstream IT. Further, storage performance woes are no longer visible to just IT. Consider VDI initiatives. If VDI is implemented in an organization with the wrong storage, the end result is directly experienced by frustrated users who become subjected to boot and login storms, which come to life when storage cannot keep up with demand in such situations.

Without a change in storage buying options, CIOs would be forced to buy more and more disks just to meet performance demands. We have seen organizations take drastic steps, such as short-stroking, to meet performance demands, even though such techniques have a negative impact on capacity. CIOs were thus seeing capacity and performance demands growing at different rates, requiring strategies to balance the two needs while still attempting to balance the economics of the overall solution.

In order to improve performance, some vendors began adding flash-based storage to traditional storage systems and using these drives as a sort of cache in front of the hard disk. However, as David explains in his article, this has not always been the best option.

A solution emerges

As you might guess from the title of David’s article, hybrid storage solutions have emerged as the sweet spot for many modern mainstream workloads. Although IT shops still have to meet capacity needs, modern hybrid storage arrays generally provide adequate capacity and, when coupled with a variety of data reduction features – deduplication and compression – capacity needs can be easily met. Where hybrid storage arrays truly shine for CIOs is on the performance side of the equation.

For comparative purposes, a true hybrid solution is defined as one that takes a flash-first approach to data storage. I won’t repeat here the full text, but refer to the VM-aware Hybrid Definition section in David’s article for more information.

Evaluating Hybrid Suppliers

In his research for his article, David spoke with customers of several vendors. The methodology laid out in his article is geared to help customers evaluate important criteria of available solutions. The table below outlines how Tintri stacks up to the three primary characteristics that David outlined as being necessary for an offering to be considered a true hybrid solution.

How Tintri Maps to Hybrid Characteristics

  • The IO queues for each virtual machine are fully reflected and managed in both the hypervisor and the storage array, with a single point of control for any change of priority. Tintri provides complete end-to-end tracking and visualization of performance from both the hypervisor down and from the storage layer up, ensuring that administrators can procure critical statistics from whatever tool is in use. The goal is to keep storage performance at acceptable levels and to reduce write latency.
  • All data is initially written to flash (flash-first). Tintri's solution takes a flash-first approach by ensuring that all data written to the storage array is initially written to high-performance flash before being offloaded to slower rotational storage. This results in much higher levels of throughput, which is one of the key differentiators between hybrid storage and traditional arrays.
  • Virtual machine storage objects are mapped directly to objects held in the storage array. Tintri's VMstore feature is VM-aware, associating the storage array directly with the virtual machines and critical business applications.
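The flash-first behavior described above can be illustrated with a toy write path. This is a sketch of the general technique only, not Tintri's actual implementation; the class and block names are hypothetical:

```python
from collections import OrderedDict

class FlashFirstTier:
    """Toy model of a flash-first hybrid array: every write lands on
    flash, and the coldest blocks are destaged to disk when flash fills."""

    def __init__(self, flash_blocks: int):
        self.flash_blocks = flash_blocks
        self.flash = OrderedDict()  # block_id -> data, oldest first
        self.disk = {}

    def write(self, block_id, data):
        self.flash[block_id] = data            # flash-first: never write disk directly
        self.flash.move_to_end(block_id)       # mark block as hottest
        while len(self.flash) > self.flash_blocks:
            cold_id, cold_data = self.flash.popitem(last=False)
            self.disk[cold_id] = cold_data     # destage coldest block to HDD

    def read(self, block_id):
        if block_id in self.flash:             # fast path: flash hit
            return self.flash[block_id]
        return self.disk[block_id]             # slow path: rotational storage

tier = FlashFirstTier(flash_blocks=2)
for blk, val in [("a", 1), ("b", 2), ("c", 3)]:
    tier.write(blk, val)
print(sorted(tier.disk))   # ['a'] -- the oldest block was destaged to disk
```

Every write is absorbed by flash at full speed, which is why this approach yields the throughput advantage noted above; the rotational tier only ever sees background destaging.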

Tintri GUI

David goes into great depth and demonstrates the point at which a hybrid storage array begins to outshine traditional storage arrays from a cost/IOPS perspective. In his analysis, David uses a scenario that requires 10 TB of usable capacity and shows that, at the 7,000 IOPS mark, a hybrid system and a traditional disk system cost about the same. Beyond that mark, the cost of the traditional disk system continues to escalate as spindles are added to keep growing performance, while the hybrid continues to have plenty of excess IOPS capacity to meet continuing needs.

For a CIO, perhaps the most critical and succinct part of the analysis is as follows:

  An environment requiring 15,000 IOPS from 10 terabytes of usable storage would require:

  • 64 drives and 1 TB of flash in a traditional storage array with a flash cache;
  • 16 drives, including 2.4 TB of flash, in a hybrid storage array.

  The traditional storage array would cost more than twice as much as the hybrid ($190,000 vs. $88,000).
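The crossover dynamic behind these numbers can be modeled with a rough sketch. All of the inputs below are made-up placeholders, not figures from Floyer's research; only the qualitative shape (linear disk cost vs. flat hybrid cost) reflects the analysis above:

```python
import math

DISK_IOPS_EACH = 175       # assumed 15K SAS drive (rule of thumb)
DISK_COST_EACH = 1_200     # hypothetical cost per drive
DISK_BASE_COST = 30_000    # hypothetical chassis/controller cost
HYBRID_FLAT_COST = 80_000  # hypothetical hybrid array with ample flash IOPS

def disk_array_cost(required_iops: int) -> int:
    """Disk-only cost grows linearly: add spindles until the IOPS target is met."""
    drives = math.ceil(required_iops / DISK_IOPS_EACH)
    return DISK_BASE_COST + drives * DISK_COST_EACH

def hybrid_cost(required_iops: int) -> int:
    """Hybrid cost stays flat while the flash tier has IOPS headroom."""
    return HYBRID_FLAT_COST

for iops in (5_000, 7_000, 15_000):
    print(iops, disk_array_cost(iops), hybrid_cost(iops))
```

With these placeholder inputs the two curves cross near 7,000 IOPS; below that point disk is cheaper, and beyond it the disk-only line keeps climbing while the hybrid stays flat, mirroring the shape of the analysis.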

Action item: CIOs who are considering expanding storage or replacing existing storage need to carefully consider the full range of options before them with particular attention paid to true hybrid solutions that maximize both IOPS and capacity needs to provide outstanding performance for mainstream and emerging workloads. By doing so, CIOs can avoid simply throwing more spindles at older systems and achieve much better results at much lower overall costs.

Is Orchestrating Storage or Systems the Right Approach?

David Floyer

Every storage vendor has made good money from proprietary tools that manage their own storage, whether that storage is a traditional array, traditional file, hybrid, flash-only, private cloud, or public cloud. Of course this software works best when 100% of the storage is managed by these proprietary tools. This approach just will not cut it as software-led infrastructure and software-led storage become established. An overall systems orchestration approach is going to be much more cost effective.

IT has specific roles to perform, whether the IT organization is centralized, distributed or a hybrid model. These include ensuring performance, reliability, recoverability, security and compliance of corporate systems, and corporate data. Just as with quality, businesses will define different organizational models of IT deployment, but the fundamental IT skills, tools, processes, and procedures need to be in place. IT, like quality, cannot be completely outsourced – it is too important.

The key to the success of software-led infrastructure is orchestration at the application or application suite level, ensuring that the correct compute, storage, networking, and business continuity resources are in place to meet the application owner's business objectives. The key is ensuring that resources can be managed and deployed by the orchestration layer, and the data available to the orchestration layer from the resource layers below is 100% of all the data available. A fundamental requirement is for every device to have APIs that allow discovery of all the data and metadata by the orchestration layer and for every device to be fully managed by orchestration APIs.
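One way to read that requirement is as an interface contract every device must satisfy. The sketch below is purely illustrative; the class and method names are hypothetical and not drawn from any real orchestration product:

```python
from abc import ABC, abstractmethod

class OrchestratableDevice(ABC):
    """Hypothetical contract: every device exposes full discovery of its
    data and metadata, and accepts management from the orchestration layer."""

    @abstractmethod
    def discover(self) -> dict:
        """Return the device's complete data and metadata inventory."""

    @abstractmethod
    def apply_policy(self, policy: dict) -> None:
        """Accept SLA settings (IOPS, capacity, latency, ...) from orchestration."""

class ToyStorageArray(OrchestratableDevice):
    """Minimal stand-in device implementing the contract."""

    def __init__(self):
        self.policy = {}

    def discover(self) -> dict:
        return {"volumes": ["vol1", "vol2"], "protocols": ["iSCSI", "NFS"]}

    def apply_policy(self, policy: dict) -> None:
        self.policy = dict(policy)

array = ToyStorageArray()
array.apply_policy({"iops": 15_000, "latency_ms": 5})
print(array.discover()["protocols"])  # ['iSCSI', 'NFS']
```

A device that cannot implement both halves of the contract, full discovery and full management, leaves a blind spot the orchestration layer cannot work around, which is the point of the "100% of all the data" requirement.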

The orchestration layer should work across all types of resources. For example, storage should be available from all sources and types, internal and external, with the same interface and the same SLA management, including IOPS, capacity, latency, reliability, recoverability, and security. Included in this orchestration should be the capability to meet internal and external governance standards, along with chargeback or showback costs and projected future costs.

Increasing deployment of in-house and private cloud within mega-datacenters that also house cloud service providers and data aggregators relevant to a specific vertical industry will allow optimal systems integration and management within a mega-datacenter complex. Back-hauling data and metadata within a datacenter is an order of magnitude more effective and cost effective than attempting long-distance movement and management.

Action item: For enterprise IT, picking best-of-breed storage, CPU, or networking orchestration platforms is a short-term strategy that is unlikely to realize the full potential of software-led infrastructure. Enterprise IT should focus on system-level integration and pick an ecosystem and device suppliers that support Open APIs both to and from all devices. The ability to integrate legacy, private cloud and public cloud system resources will need a shift away from traditional resource stovepipes back to a systems view.

Get the Organizational House in Order and Let Value Flow

Dave Vellante

For several years the Wikibon community has discussed how disruptions in technology often are not the most challenging factor within business technology environments. Thinking about people, process, and technology, practitioners almost universally cite people and process as, by far, the most difficult aspects of IT.

Moreover, in the past 24 months we’ve seen the “Amazon Effect” underscore innovations that cloud service providers (CSPs) are bringing to the industry. Often, the budgets within CSPs are growing rapidly, whereas most IT shops face belt-tightening. A natural byproduct of budget cuts is organizational tension, and, because they are profit centers, CSPs tend to cope with such friction better than many IT shops.

At this week’s Wikibon Peer Incite we heard this theme again from Voonami, a CSP providing IaaS. Steve Newell of Voonami told us that his organization moved from a traditional storage infrastructure to one that utilized a hybrid approach from Tegile. Perhaps more interesting than the technology discussion was the organizational impact of this move and the process changes it enabled.

In particular, Steve Newell, Voonami’s sales engineer for co-location, told us that the admin who owned storage management saw time spent managing storage go from 30-40% down to 5% once everything was up and running. Moreover, Voonami went live inside of a week, including installation, testing, storage migration, and offering services to customers. A critical enabler, according to Newell, is that Voonami had the cloud architect, the cloud sys admin, and the storage sys admin involved in the project, plus the networking team to accommodate basic connectivity.

The key takeaway was that by focusing on the business value and organizing a cross-functional team, Voonami was able to see more immediate value from its infrastructure refresh and simplify its management process.

Action item: Running IT as a business won’t completely eliminate organizational tensions, but it will focus teams on driving value. IT organizations intent on managing infrastructure in-house should take a page from for-profit CSPs and think IT-as-a-Service first and make protecting organizational turf a lower priority item.

Voonami and Tegile Partner on Next Gen Cloud Storage Solution, Remote Replication Service

Gary MacFadden

When Voonami needed to add 20 TB of high-speed storage capacity for its two data centers in Utah, why did it choose Tegile Systems over its incumbent storage vendor, NetApp? According to Steve Newell, Voonami's sales engineer for co-location, "It was mostly about the opportunity to develop a true partnership – and cost was also an issue."

During Wikibon's July 30, 2013, Peer Incite entitled "Enhancing Cloud Services with Hybrid Storage," Newell shared with Wikibon community members Voonami's selection process and its rationale for partnering with Tegile vs. going with alternative offerings from other storage vendors. "We looked at putting flash on the front end of our existing NetApp SAN environment and also a variety of other hybrid storage solutions that combine solid state drives (SSD) with traditional hard disk drives (HDD). NetApp is a great partner, and we have no issue with the performance of their solution. But the NetApp team came in with a pre-conceived notion of how we should grow our storage assets. Tegile was more than willing to take the time to understand our environment and help us design a solution that fit our unique requirements."

Cloud Optimized Hybrid Storage

Voonami delivers data center solutions, cloud computing infrastructure services, managed hosting, dedicated servers, VoIP solutions, and traditional co-location services, primarily to the SMB community. Newell and the Voonami team selected Tegile Systems' Zebi Hybrid Storage Platform, which "intelligently" integrates multiple storage device types, including SSDs, HDDs, DRAM (used as a fast level-1 cache), and flash technology, into the data path to create an optimized storage appliance.

According to Tegile, its metadata accelerated storage system (MASS) allows the Zebi network storage array to “organize and store metadata, independent of the data, on high-speed devices with optimized retrieval paths. This accelerates every storage function within the system, raising the performance of near-line SAS HDDs to the level of extremely expensive high-RPM SAS or Fibre Channel drives.”

Tegile Zebi protects user data by storing it permanently on less expensive HDDs. Tegile also offers multi-protocol support, including iSCSI and NFS, and claims a total cost of ownership (TCO) five or more times lower than that of comparable solutions from traditional storage vendors.

New Remote Replication Service

As Senior Wikibon Contributor Scott Lowe aptly states in his recent note covering Voonami’s business model and its appeal to SMB CIOs, “Hybrid cloud solutions like the one provided by Voonami make it possible for CIOs to ease their way into the cloud without having to take a forklift, costly approach.” Initially, Voonami installed Zebi 2100 arrays in its two data centers to accelerate the performance of its cloud computing and Storage-as-a-Service product offerings.

However, both Voonami and Tegile saw the opportunity to add a remote replication offering for their clients. Tegile users, for example, can simply order the service from Voonami and immediately achieve highly available distributed data protection to a secure data center without any additional hardware or software expenditure.

According to a joint press release, “The Voonami-Tegile replication service is the first of its kind for a hybrid SSD array vendor with a Managed Service Provider and signals the strength that Tegile has established with service provider customers, who are adopting Tegile's Zebi hybrid arrays for the transformational economics that make SSD performance affordable for outsourcing services.”

Bottom Line

The level of innovation coming from the data storage and cloud communities today is unprecedented in the history of computing. Compared with the old data-center paradigm of build it, test it, and manage it in-house, hybrid cloud and storage solutions have proven to provide cost-effective, safe, and fast access to data stores, with remarkably quick turnaround for system builds, for bringing applications to market, and for developing much-needed data protection and replication services. While the traditional data center approach is still very viable, especially for larger firms that have the expertise to support complex environments, the SMB market has embraced, and will continue to embrace, managed services and other cloud-enabled, fast storage delivery models.

Action item: Emerging vendors with disruptive technologies and innovative service offerings have learned that next-generation solutions alone are not enough to win the loyalty of customers and unseat incumbent vendors who offer more traditional, "safer" solutions. Legacy vendors who discount fledgling technology companies risk overlooking their customers, who respond more to true partnerships with their vendors than to tried-and-true templates for successful implementations that may or may not meet each customer's unique requirements. Dramatically lower cost of ownership is also a compelling argument for customers to try new solutions. Vendors need to understand that innovation and price are major motivations for buyers - as long as risk is sufficiently mitigated.

Get Rid of Old Attitudes

Bert Latamore

Steve Newell, sales engineer for co-location and IaaS provider Voonami, brought a message to the July 30 Peer Incite meeting: "Get rid of the 20th Century attitude that 'I have to own everything.'"

The questions he asks CIOs are: Why are you still running infrastructure? What benefit do you get from it? While this is an obvious sales pitch, his point is valid: CIOs should move their applications to the cloud (in his case, specifically to Voonami), eliminate the headaches of running infrastructure, and focus on the applications that are the real source of value to the organization.

Moving applications to the Cloud provides several important advantages. CIOs no longer have to worry about hardware problems, backup and recovery, security, or predicting growth and demand three-to-five years ahead. They no longer have to justify hardware upgrades to CFOs who don't understand the need. They no longer have to compete with much larger companies for rare talent such as virtualization experts. They can reallocate staff to focus on higher-level issues that create value. And of course it replaces Capex investments in hardware with predictable Opex monthly payments that can be managed as demand changes. For most midrange companies in particular, it also costs less, because they can share the very large servers that provide economies of scale, as well as the expert staff and highly efficient operations they enable, formerly available only to large enterprises.

CIOs also should not look on the Cloud as a virtual server. Cloud computing provides important advances in computing including superior connectivity and the ability to automatically increase or decrease resources to match changes in demand. That means companies with monthly or annual demand changes do not have to maintain idle equipment for three quarters of the year to be ready for the demands of the fourth.

Action item: Stop thinking in 20th Century paradigms. Reexamine your company’s compute loads to identify which, if any, must absolutely stay in house and create a plan for migrating the rest to one or more Cloud service providers, either IaaS or SaaS, and let them manage the infrastructure. That plan should include transitioning internal staff to higher level functions focused on building closer integration between IT and the business.
