Storage Peer Incite: Notes from Wikibon’s December 13, 2011 Research Meeting
A new wave of applications is sweeping across global business, moving IT from the back office to the front lines of competitive advantage and changing how business is conducted forever. The key to these new applications is big data, whether that is data generated internally about what customers want and don't want from the companies they trade with or externally from public social networks and other Cloud-based sources.
But traditional disk storage is proving inadequate to power these new applications. What they need is an order-of-magnitude increase in data read/write speeds to avoid the increasingly long latency times caused by the slow speed of disk access.
Fortunately the cost of flash storage is dropping steadily, driven by the growing volume of flash used in consumer products. And its overall business value when used for transaction data already far exceeds disk. As a result, Wikibon believes that the data center storage architecture will move to effectively a two-tier physical structure, with active data, including non-structured data types, on solid state -- in-memory flash or solid-state drives -- and inactive data on SATA disk, which will maintain a per-Gbyte cost advantage. This newsletter, which was created out of the December 13 Peer Incite meeting on the impact of flash storage, explores some of the most important issues and implications of this generational change in storage architecture.
We at Wikibon would like to take this opportunity to wish our community members a happy holiday season and a successful New Year! G. Berton Latamore, Editor
Flash Storage Driving a Wave of Modern Applications and Increased Data Usage
Introduction
The speed of disk systems (measured in milliseconds) has not changed significantly in the last 20 years, while flash devices can access data in microseconds. New architectures have emerged that avoid putting production data on disk. Wikibon co-founder and CTO David Floyer had an animated discussion with the community about flash technology's future on a Peer Incite call on December 13, 2011. The flash technology marketplace is moving fast; the bottom line is that initial solutions have targeted decreasing IT cost, while the long-term advantage is the increase in the amount of data that can be accessed.
Flash applications
Initial enterprise deployments of flash have been in traditional storage arrays and legacy applications to help alleviate hot spots. Modern applications are emerging that can fully use the low latency and high I/O rates of flash architectures. Some examples that David Floyer shared:
- Google search found that it was quicker to get a piece of data from cache in London than to get it from a group of disks [locally] in Mountain View, CA.
- Facebook has very large databases; the performance of traditional storage arrays could not support the requirements. The databases were put in memory with Fusion-io technology.
- Apple Siri must deliver data in less than 3 seconds, and the data is pre-staged into memory.
- IBM Watson (as shown on Jeopardy) is a harbinger of new decision-making capabilities based on leveraging massive amounts of data.
Real returns on new applications can take 20 years to roll out; the transition will start quickly, however, by first delivering improvements to existing architectures and applications.
If response times can be reduced by a factor of 10, productivity can be improved by a factor of 2.
New architectures
Disk is not going away; raw and archive data will continue to live on disk (since disk remains 10x cheaper for storage), but production data on disk is at a significant disadvantage to flash alternatives. A forecast of storage spending on flash and traditional storage arrays in 2015 can be seen here.
Flash technology will change the way systems are designed. The requirement for new systems is to pre-fetch data from disk so that it is available on flash; big data applications will deliver this functionality. Production systems will reach 99+% of accesses served from data-in-memory rather than disk within the next decade. The technology required to make this transition is the continuing improvement of flash. While NAND flash has potential competitors, none of them has achieved high volume through the consumer market, which is what lowers cost and improves reliability. Flash will be spread throughout many parts of system architectures, with multiple use cases driving deployments, and data should be moved around as little as possible.

Flash is improving in durability, performance, and price along a fast curve and should continue along this path through 2015. Other technologies that are part of the flash ecosystem are data-in-memory architectures, in-memory databases, I/O tiering, and mixed types of memory: more expensive SLC (one bit per memory cell) and cheaper MLC (two or more bits per memory cell). Flash is much further down the maturity curve than unproven technologies such as Racetrack memory, Memristor, or phase-change memory.
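To make the pre-fetch idea concrete, here is a minimal sketch of a two-tier read path in Python. The FlashTier and DiskTier classes, the block-ID scheme, and the prefetch depth are hypothetical, illustrative stand-ins, not any vendor's actual implementation.

```python
from collections import OrderedDict

class DiskTier:
    """Stand-in for the SATA tier holding the full data set (millisecond reads)."""
    def read(self, block_id):
        return f"data for block {block_id}".encode()   # placeholder payload

class FlashTier:
    """LRU-managed flash tier sitting in front of a slower disk tier."""
    def __init__(self, disk, capacity_blocks):
        self.disk = disk
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()            # block_id -> data, LRU order

    def _install(self, block_id, data):
        self.blocks[block_id] = data
        self.blocks.move_to_end(block_id)
        if len(self.blocks) > self.capacity:   # evict least-recently-used block
            self.blocks.popitem(last=False)

    def read(self, block_id, prefetch_depth=8):
        if block_id in self.blocks:            # flash hit: microsecond-class access
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        data = self.disk.read(block_id)        # flash miss: millisecond-class disk access
        self._install(block_id, data)
        for offset in range(1, prefetch_depth + 1):   # pre-stage sequential neighbours
            neighbour = block_id + offset
            if neighbour not in self.blocks:
                self._install(neighbour, self.disk.read(neighbour))
        return data

tier = FlashTier(DiskTier(), capacity_blocks=1024)
tier.read(100)    # miss: fetched from disk, blocks 101-108 pre-staged into flash
tier.read(101)    # hit: served from flash
```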
While the raw cost of flash is higher than disk, flash vendors use compression, deduplication, and other techniques to reduce the effective cost per GB. Active data in a flash-only array is already cost-competitive with high-speed (SAS/FC) disk. The SSD is not an optimal design for flash; better form factors will emerge over the next four years.
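As a rough illustration of how data reduction changes the comparison, the sketch below works the arithmetic with placeholder prices and an assumed 4:1 reduction ratio; none of the figures are Wikibon or vendor numbers.

```python
# Placeholder 2011-era prices and an assumed 4:1 data-reduction ratio;
# these are illustrative numbers, not Wikibon or vendor figures.
flash_raw_per_gb = 10.00        # $/GB of raw flash in an all-flash array (assumed)
sas_disk_per_gb = 2.50          # $/GB of 10k/15k SAS or FC disk (assumed)
data_reduction_ratio = 4.0      # combined compression + deduplication (assumed)

flash_effective_per_gb = flash_raw_per_gb / data_reduction_ratio
print(f"Effective flash cost: ${flash_effective_per_gb:.2f}/GB "
      f"vs ${sas_disk_per_gb:.2f}/GB for high-speed disk")
# -> Effective flash cost: $2.50/GB vs $2.50/GB for high-speed disk
```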
Market Dynamics
Thomas Isakovich of Nimbus Data commented that the common perception that flash will remain 10 times more expensive than disk for a long period is not true. The price of flash is dropping faster than that of disk, and some disk prices are increasing due to shortages. He projects that the total acquisition price per GB of flash and of 10k/15k SAS or FC disk will reach parity in 2012, even without compression or deduplication. While Nimbus and many others support deduplication, Thomas stated that it is not a market requirement, since some data cannot be deduplicated and there are many places in the stack where deduplication can be done.
The traditional SCSI interface is inefficient for flash architectures, and new standards such as NVM Express, SCSI Express, and SATA Express are being developed to remove the current interface bottlenecks. Other initiatives, such as atomic writes directly to flash that bypass SCSI, can simplify the requirements of the SCSI stack. These building blocks will allow much greater amounts of data to be driven through the new architectures.
IT organizations utilizing flash technologies can increase the amount of data that they can process and in turn drive additional revenue without adding new staff. Revere Electric Supply drove 20% additional revenue by putting its databases on flash with flat headcount. IT will want to redesign applications and rein in server sprawl to bring solutions into smaller, more tightly managed environments.
Flash delivers more than a direct power saving over spinning disk; in the overall cost picture, power amounts to about a 10% savings. The greater benefit is that the increased power density of flash allows more compute per rack.
Action item: Flash storage is not a point technology but a disruptive wave in storage architecture that will fit into various architectures for many use cases. The long-term question CIOs should be asking is which technologies can be implemented to give employees and customers access to an order of magnitude more information, transforming the interactions between companies and customers. Companies can adopt flash storage solutions immediately to take advantage of cost savings while reevaluating long-term modernization of applications.
Footnotes: Flash market segments, vendor roundup and pricing forecast in David Floyer's 2011 Flash Memory Summit Enterprise Roundup
The Potential of Flash to Disrupt Whole Industries
Introduction
Mechanical disk drives have imprisoned data for the past 50 years. Technology has doubled the size of the prisons every 18 months over that period and will continue to do so. The way in and out of prison is gated by the speed of the mechanical arms on a rotating disk drive. These prison guardians are almost as slow today as they were 50 years ago. The chances of remission for data are slim; there is little opportunity for data to be useful again. Data goes to disk to die.
Transactional Systems Limitations
Transactional systems have driven most of the enterprise productivity gains from IT. Bread-and-butter applications such as ERP have improved productivity by integrating business processes. Call centers and web applications have taken these applications directly to enterprise or government customers. The promise of transactional systems is to manage the “state” of an organization and integrate processes with a single consistent view across the organization.
The promise of systems integration has not been realized. Because transactional applications change “state”, this data must be written to the only suitable non-volatile medium, the disk drive. The amount of data that can be read and written in transactional systems is constrained by the elapsed time of access to disk (milliseconds) and the bandwidth to disk. The fundamental architecture of transactional systems has not changed for half a century. The number of database calls per transaction has hardly changed, and this limits the scope of transactional systems. Transactional systems have to be loosely coupled and be part of an asynchronous data flow from one system to another. The result is “system sprawl” and multiple versions of the actual state of an enterprise.
Read-heavy Application Limitations
Enterprise data warehouses were the first attempts to improve the process of extracting value from data, but a happy data warehouse manager is as rare as a two-dollar bill. The major constraint is bandwidth to the data. Big data applications are helping free some data by using large amounts of parallelism to extract data more efficiently in batch mode. Data-in-memory systems can keep small data marts (derived from data warehouses or big data applications) in memory and radically improve the ability to analyze data.
Overall, the promise of data warehousing has been constrained by access to data. The amount and percentage of data outside the data warehouses and imprisoned on disk is growing. Enterprise data warehouses are better named data prisons.
Social & Search Breakthroughs
Social and search are the first disruptive applications of the twenty-first century. When “disk” is googled, the search bar shows “109,000,000 results (0.17 seconds)”, impossible to achieve if the data were on disk. The Google architecture includes extensive indexing and caching of data so that access to disk can be avoided wherever possible for these read-heavy applications, data without “state”.
Social applications turn out to be a mixture of state and stateless components. All of them started implementing scale-out architectures using commodity hardware and homegrown or open-source software. The largest reached a size where the database portions of the infrastructure were at a limit, where locking rates had maxed out at the limits of hard disk technology.
As examples, Facebook and Apple have used flash within the server (using mainly Fusion-io hardware and software) extensively for the database portions of the infrastructure to enable the scale-out growth of services. In a similar way to Google, they have focused on extensive use of caching and indexing.
The end objective for both Google and Facebook is to ensure the quality of the end-user experience. Both have implemented continuous improvement of response time, with the objective of shaving milliseconds off response times while assuring consistency of data. End-user productivity drives user retention, more clicks, and more revenue.
Watson and Siri, and the Potential Impact on Health Care
The two most exciting application developments in 2011 were Watson from IBM and Siri from Apple. Both respond to questions in natural speech and answer them quickly and accurately (for the most part).
Watson won the Jeopardy challenge in February 2011, astounding contestants and the public with its performance. Watson had to answer within three seconds and was not allowed to be connected to the Internet. To meet that requirement, all the data and metadata were held in memory. That data was constructed from multiple sources. The technology behind Watson is being used to create products that can be interrogated to answer medical questions.
Siri was announced in October 2011 and works on the iPhone 4S smartphone in connection with Apple's hosted systems. The data here is held in memory and flash. Siri provides a set of assistant services to users, looking things up on the web and using local data from the smartphone. It is an amazing and seamless blending of local and remote technologies.
Looking out to 2015, it is interesting to think through what could happen to the health care market if the Siri and Watson technologies were fused. Assume for the moment that the technology and trust* issues have been addressed, and that in 2015 there exists a robust technology that will answer spoken queries about health care issues in real time. Two key questions:
- Where and how could this technology be applied?
  - Direct use by doctors in a medical facility
  - Service provided by an insurance company
  - Service provided direct to the consumer
- What are the potential savings?
  - Reduction in costs might include reduced doctor time per patient, fewer tests, lower risk of malpractice suits, and avoidance of treatments
  - Improvement in outcomes might include additional revenue from new patients and improved negotiation with health insurance companies/government
  - Perceived quality of care or customer satisfaction might include improved Yelp scores, improved customer retention, and additional revenue from new patients/health insurance companies
  - Risk of adverse publicity: negative impact on revenue, brand, and customer satisfaction from misuse, faults found with the technology, etc.
The total health care spend in the US is estimated to be 16% of GDP, about $2.5 trillion per year. Assuming that this technology can address the 40% of that spend ($1.0 trillion) that is attributable to doctor-initiated spending, the top half of Table 1 attempts to look at the difference in impact between the different deployment models. The bottom half of Table 1 takes the two cost cells and, based on some simple assumptions, attempts to ball-park the potential yearly impact.
The table indicates some interesting potential impacts on the health care industry. First, the savings might be 10 times higher if this service could be delivered direct to the consumer. If we assume some contribution from the cells not assessed, the potential benefit might be $100 billion per year, or $1 trillion over 10 years. And health care practitioners indicate that increasing consumer use would be easier than increasing doctors' use.
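The sketch below reproduces the ball-park arithmetic behind those figures; the 10% benefit-capture rate is an illustrative assumption chosen only to match the roughly $100 billion/year estimate above, not a value taken from Table 1.

```python
us_gdp = 15.5e12                       # approximate 2011 US GDP in dollars (assumed)
health_spend = us_gdp * 0.16           # ~16% of GDP, about $2.5 trillion/year
addressable = health_spend * 0.40      # doctor-initiated spend, about $1.0 trillion
benefit_rate = 0.10                    # assumed share of addressable spend captured
annual_benefit = addressable * benefit_rate

print(f"Health spend ${health_spend/1e12:.1f}T, "
      f"addressable ${addressable/1e12:.1f}T, "
      f"potential benefit ${annual_benefit/1e9:.0f}B/year")
# -> Health spend $2.5T, addressable $1.0T, potential benefit $99B/year
```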
Nobody is going to build a factory based on this analysis, but it shows that there is potential for broad-scale implementation of systems that rely on all the data being held in memory to meet a 3-second response time. Flash would play a pivotal role in enabling cost-effective deployment.
The players that are able to develop and deploy these technologies will have a major impact on health care spend in general and a major potential to create long-lasting direct and indirect revenue streams.
Action item: CEOs and CIOs should task their best and brightest with identifying integrated high-performance applications that could disrupt their industry by increasing real-time access to large amounts of data. They should then work proactively with their current or new application suppliers on how such systems could be designed. Significant resources will need to be applied, based on the assumption that the first two attempts at defining any "killer" application will be off target.
Footnotes: *Trust issues include openness about the sources and funding behind information within the technology, trust in the brand of the suppliers and deployers of the technology, trust in the training to use the technology, trust in the security and confidentiality arrangements, trust that the system can be updated rapidly in the light of new information, trust in the reliability and validity of the outputs of the technology.
Flash-based SSDs are Driving New Standards and Charging Models
Introduction
SSDs provide faster random access and data transfer rates than electromechanical hard disk drives (HDD) and today can often serve as rotating-disk replacements, but the host interface to these devices remains a performance bottleneck. PCIe-based SSDs together with several emerging standards promise to solve the interface bottleneck.
SSDs are proving useful today but will find far more broad usage once new standards mature and the industry delivers integrated circuits that enable closer coupling of the SSD to the host processor.
Today SSDs Use Disk Interfaces
The disk-drive form factor and interface allows IT vendors to seamlessly substitute an SSD for a magnetic disk drive. No change is required in system hardware or driver software. You can simply swap in an SSD and realize significantly better access times and somewhat faster data-transfer rates.
However, disk-drive interfaces are not ideal for flash-based SSDs. Flash can support higher data transfer rates than even the latest generation of disk interfaces. Also, SSD manufacturers can easily pack enough flash devices in a 2.5-inch form factor to exceed the power profile developed for disk drives.
Most mainstream systems today use second-generation SATA and SAS interfaces (referred to as 3Gbps interfaces) that offer 300MB/sec transfer rates. Third-generation SATA and SAS push that rate to 600MB/sec, and drives based on those interfaces have already found use in enterprise systems.
While those data rates support the fastest electromechanical drives, new NAND flash architectures and multi-die flash packaging deliver aggregate flash bandwidth that exceeds the data transfer capabilities of SATA and SAS interconnects. In short, the SSD performance bottleneck has shifted from the flash devices to the host interface. The industry needs faster host interconnects to take full advantage of flash storage.
The PCIe host interface can overcome this storage performance bottleneck and deliver unparalleled performance by attaching the SSD directly to the PCIe host bus. For example, a 4-lane (x4) PCIe Generation 3 (G3) link, which will ship in volume in 2012, can deliver 4GB/sec data rates. Moreover, the direct PCIe connection can reduce system power and slash the latency that's attributable to the legacy storage infrastructure.
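For readers wondering where these interface numbers come from, the short calculation below derives them from the quoted line rates and their encoding overheads (8b/10b for SATA/SAS, 128b/130b for PCIe Gen 3).

```python
def usable_mb_per_sec(line_rate_gbps, encoding_efficiency):
    """Payload bandwidth in MB/s for a serial link with encoding overhead."""
    return line_rate_gbps * 1e9 * encoding_efficiency / 8 / 1e6

sata_sas_6g = usable_mb_per_sec(6.0, 8 / 10)          # 6Gbps link, 8b/10b encoding
pcie_g3_x4 = 4 * usable_mb_per_sec(8.0, 128 / 130)    # 4 lanes at 8GT/s, 128b/130b

print(f"SATA/SAS 6Gbps: ~{sata_sas_6g:.0f} MB/s per port")      # ~600 MB/s
print(f"PCIe Gen3 x4:   ~{pcie_g3_x4/1000:.1f} GB/s per link")  # ~3.9 GB/s
```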
Multiple Standards in Process
In typical industry fashion, a happy marriage of SSDs and PCIe is being addressed by multiple standards efforts including:
NVM Express – The Optimized PCI Express SSD Interface
The NVM Express (NVMe) specification, developed cooperatively by more than 80 companies from across the industry, was released on March 1, 2011 by the NVMHCI Work Group, now more commonly known as the NVMe Work Group. The NVMe 1.0 specification defines an optimized register interface, command set, and feature set for PCI Express solid-state drives (SSDs). The goal is to enable broad adoption of SSDs using the PCI Express (PCIe) interface.
A primary goal of NVMe is to provide a scalable interface that unlocks the potential of PCIe-based SSDs now and into the future. The interface efficiently supports multi-core architectures, ensuring that threads running on each core can have their own queue and interrupt without requiring any locks. For enterprise-class solutions, there is support for end-to-end data protection, security, and encryption capabilities, as well as robust error reporting and management.
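The following is a conceptual sketch, in Python rather than the actual NVMe register interface, of why per-core queue pairs avoid lock contention: each thread owns its own submission and completion queues, so no shared lock is taken on the I/O submission path.

```python
import threading
from collections import deque

class QueuePair:
    """One submission/completion queue pair, owned by a single core's thread."""
    def __init__(self):
        self.submission = deque()
        self.completion = deque()

    def submit(self, command):
        # Only the owning thread touches this queue, so no shared lock is needed.
        self.submission.append(command)

def worker(core_id, queue_pair, n_io=4):
    for lba in range(n_io):
        queue_pair.submit({"core": core_id, "op": "read", "lba": lba})

queues = [QueuePair() for _ in range(4)]               # one queue pair per core
threads = [threading.Thread(target=worker, args=(core, q))
           for core, q in enumerate(queues)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(len(q.submission) for q in queues), "commands queued, no locks taken")
```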
SCSI Express
SCSI Express uses the SCSI protocol to have SCSI targets and initiators talk to each other across a PCIe connection; very roughly it's NVMe with added SCSI, but it also includes a SCSI command set optimized for solid-state technologies.
HP is a visible supporter, with a SCSI Express booth at its HP Discover event in Vienna and support at the event from Fusion-io.
SCSI Express is currently being standardized independently within INCITS T10 by the SOP-PQI working group and the SCSI Trade Association, with involvement from the SFF Committee and PCI-SIG.
SATA Express – Enabling Higher Speed, Low Cost Storage Applications
SATA Express is a new specification under development by SATA-IO that combines the SATA software infrastructure with the PCI Express (PCIe) interface to deliver high-speed storage solutions. SATA Express enables the development of new devices that use the PCIe interface while maintaining compatibility with existing SATA applications. The technology will provide a cost-effective means to increase device interface speeds to 8Gb/s and 16Gb/s.
Solid-state drives (SSDs) and hybrid drives are already pushing the limits of existing storage interfaces. SATA Express aims to provide a low-cost solution that fully utilizes the performance capability of these devices. Storage devices not requiring the speed of SATA Express will continue to be served by existing SATA technology. The specification will define new device and motherboard connectors that will support both new SATA Express and current SATA devices.
The spec won't be complete until the end of 2011, but it will allow for two new SATA speeds, 8Gbps and 16Gbps, as well as backwards compatibility with existing SATA devices. SATA Express will leverage PCIe 3.0 for its higher operating speeds.
Form Factors for PCIe SSDs
These interface standards do not address the subject of form factors for SSDs; that issue is being worked out through another working group. The SSD Form Factor Working Group was formed to promote PCIe as an SSD interconnect through standardization. It will focus on driving PCIe storage standardization in three key technology areas: connector, drive form factor, and hot-pluggability.
Summary
The significant advances in performance enabled by non-volatile, memory-based storage technology demand that the surrounding platform infrastructures evolve to keep pace and allow realization of the full potential of SSDs.
Action item: As SSDs penetrate deeper into enterprise storage architectures and capture more capacity with performance, the model of charging for capacity will change. Instead of paying for tiers of storage that require data movement, look to design architectures that have the user pay for I/O performance. This will avoid runaway usage. This is a changing model without any established best practices yet.
In-Memory Database Engines Support Real-Time Analytics, Complement Hadoop
The emergence of flash storage technology has the potential to dramatically improve the performance of user-facing applications. The move to in-memory computing is already having an impact in big data environments, providing analytic applications access to large volumes of data in near real-time.
Traditionally, databases that support data analytics applications store data on disk in the form of complex multidimensional cubes and tables. Users perform queries against the tables and cubes on disk via front-end applications. As the laws of physics are immutable, data can be accessed off spinning disk only so fast, resulting in high latency for large queries.
In-memory databases, by contrast, load and store data in random access memory. Applications perform queries against the data in RAM, greatly reducing response times and the level of data modeling required.
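The pattern is easy to illustrate with SQLite's in-memory mode, standing in here for a purpose-built engine such as HANA (which it is not): the working set lives entirely in RAM, so the read path never waits on spinning disk.

```python
import sqlite3

conn = sqlite3.connect(":memory:")            # the database lives entirely in RAM
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 120.0), ("west", 75.5), ("east", 33.25)])

# An analytic-style aggregate served from memory; no disk I/O on the read path.
for region, total in conn.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region"):
    print(region, total)
conn.close()
```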
While in-memory databases are not new, they are the focus of renewed attention thanks in part to HANA, SAP’s new in-memory database engine to support analytic applications and, eventually, transactional systems. SAP plans to migrate its entire application portfolio onto HANA, giving power analysts and business users alike access to near real-time analytics.
In-memory databases such as HANA, however, are not a big data cure-all. While HANA is capable of storing multiple terabytes of data, it does not scale to accommodate truly big data scenarios – hundreds of terabytes or more. Nor is it optimized to process unstructured data.
Action item: Databases such as HANA support real-time analytics on relatively large, structured data sets, while Hadoop facilitates deep processing and storing of huge volumes of unstructured data for historical analysis and predictive modeling. In scenarios where both low-latency, real-time analytic queries and deep historical analysis on large volumes of unstructured data are required, CIOs should consider deploying both in-memory database technology and Hadoop in conjunction for a comprehensive approach to big data processing, storage, and analytics.
Turning IT Costs into Profits: New Storage Requirements Emerge for NextGen Cloud Apps
One of the more compelling trends occurring in the cloud is the emergence of new workloads. Specifically, many early cloud customers focused on moving data to the cloud (e.g. archive or backup), whereas in the next 12 months we’re increasingly going to see an emphasis on moving applications to the cloud; and many will be business/mission critical (e.g. SAP, Oracle).
These emergent workloads will naturally have different storage requirements and characteristics. For example, think about applications like Dropbox. The main storage characteristic is cheap and deep. Even Facebook, which has much more complex storage needs and heavily leverages flash (e.g. from Fusion-io), is a single application serving many tenants. In the case of Facebook (or say Salesforce), the owner has control over the app, end-to-end visibility, and can tune the behavior of the application to a great degree.
In 2012, cloud service providers will begin to deploy multitenant/multi-app environments enabled by a new type of infrastructure. The platform will not only provide block-based storage but it will deliver guaranteed quality of service (QoS) that can be pinned to applications. Moreover, the concept of performance virtualization - where a pool of performance (not capacity) is shared across applications by multiple tenants – will begin to see uptake. The underpinning of this architecture will be all-flash arrays.
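A minimal sketch of what performance virtualization could look like is shown below: each tenant's application draws from its own token bucket metered in IOPS. This is an illustrative model only, not any provider's actual implementation.

```python
import time

class IopsBucket:
    """Token bucket that caps one tenant's application at a provisioned IOPS rate."""
    def __init__(self, provisioned_iops, burst):
        self.rate = provisioned_iops
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic()

    def try_io(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True        # admit the I/O
        return False           # defer: tenant is over its provisioned rate

# A shared pool with two tenants holding different performance guarantees.
tenants = {"erp-prod": IopsBucket(5000, burst=500),
           "dev-test": IopsBucket(500, burst=50)}
admitted = {name: sum(bucket.try_io() for _ in range(100))
            for name, bucket in tenants.items()}
print(admitted)                # the larger guarantee admits more of the burst
```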
This means that users can expect a broader set of workload types to be run in the cloud. The trend is particularly interesting for smaller and mid-sized customers who want to run Oracle, SAP, and other mission critical apps in the cloud as a way to increase flexibility, reduce CAPEX, and share risk, particularly security risk.
The question is who will deliver these platforms? Will it be traditional SAN suppliers such as EMC, IBM, HP, HDS, and NetApp, or emerging specialists like SolidFire, Nimbus, and Virident? The bet is that while the existing SAN whales will get their fair share of business, especially in the private cloud space, a new breed of infrastructure player will emerge in the growing cloud service provider market with the vision, management, QoS, and performance ethos to capture market share and grow with the hottest CSP players.
For data center buyers, where IT is a cost center, the safe bet may be traditional SAN vendors. For cloud service providers, where IT is a profit center, the ability to monetize service levels by providing QoS and performance guarantees on an application-by-application basis will emerge as a differentiator.
Action item: All-flash arrays will emerge as an enabler for new applications in the cloud. The key ingredient will not be just the flash capability but, more importantly, management functions that enable controlling performance at the application level. Shops that view IT as a profit center (e.g., cloud service providers) should aggressively pursue this capability and begin to develop new value-based pricing models tied to mission-critical applications.
Solid State Drives Take the Heat Off Data Centers: Capture the Benefits
Solid-state storage is poised to enter mainstream use in data centers in the near term, driven by large potential performance advantages and supported by dropping cost premiums compared to disk-based systems.
Whether it's mounted directly in servers or makes up a next-generation storage array, a key ancillary benefit for data center operators is reduced cooling load, due to the roughly 10x energy-efficiency advantage that solid state delivers.
Only a handful of data center operators actively manage for energy efficiency, and most IT managers will never directly see energy cost savings returned to their budgets, but don't disregard this benefit.
Are you facing cooling challenges in your data center due to increasing rack power density? Servers with solid state storage will at least ameliorate that problem.
And have you priced new data center capacity lately? Avoiding just one watt of power use on your raised floor helps avoid the capital cost of securing new data center facility capacity - worth as much as $2 with today's construction costs. Add another dollar-per-year in energy costs to support that watt.
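A quick worked example using those per-watt figures, with an assumed 5 kW of avoided IT load standing in for one rack's worth of savings:

```python
watts_avoided = 5000                 # assumed reduction in IT load: ~one dense rack
capital_avoided = watts_avoided * 2  # ~$2 of facility capital per watt avoided
energy_per_year = watts_avoided * 1  # ~$1 per watt per year in energy

print(f"Facility capital avoided: ${capital_avoided:,}")            # $10,000
print(f"Energy savings:           ${energy_per_year:,} per year")   # $5,000 per year
```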
So if a vendor knocks on your door touting the performance benefits of solid-state solutions, listen intently. And if the economic case doesn't quite pass the bar, be sure to factor in the energy savings, capacity avoidance, and data center cooling management benefits and redo the calculations.
Action item: Data center operators and storage vendors should explicitly include the ancillary energy efficiency advantages of solid-state solutions in their cost/benefit analyses - it may very well push projects past rate-of-return hurdles.