Flash Storage Will Radically Change Systems and Application Design

I’d like to explore how system and storage architectures are changing and the impact this will have on application delivery and organizational productivity.

Allow me to put forth the following premise:

Today’s enterprise IT infrastructure limits application value.

What does that mean? To answer this, let’s first explore the notion of value. The value IT brings to an organization flows directly from the application to the business and is measured in terms of the productivity of the organization. Infrastructure in and of itself delivers no direct value; however, the applications that run on that infrastructure directly affect business value. Value comes in many forms, but at the highest level it’s about increasing revenue and/or cutting costs, and ultimately delivering bottom-line profits.

A Bit of History

In the very early days of computing, the delta between processor and disk speeds was negligible. Today it’s literally six orders of magnitude: nanoseconds for the processor versus milliseconds for the disk. At the dawn of computing time, all data was held on magnetic drums. These had a single head per track, rotated at 17,500rpm, and held 25 sectors per track. By optimizing the placement of instructions and output data on the tracks, a maximum processor speed of 400KHz could be achieved, with a practical speed of around 100KHz. Memory access was in line with processor speed: it took roughly 1 microsecond to process and 1 microsecond to write to persistent storage.

Today processor speeds are measured in GHz (>1,000 times faster), and magnetic media write times in milliseconds (~1,000 times slower) – a net widening of the gap by a factor of one million (10^6). This difference has been offset by reading and writing large blocks, increasing the multiprogramming levels of operating systems and file systems, increasing the number of cores, increasing the IP invested in IO storage controllers and, most importantly of all, increasing the functionality and complexity of the database systems that protect data, primarily from Oracle, IBM and Microsoft.
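
To make that arithmetic concrete, here’s a quick back-of-envelope check in Python. The round-number latencies (a 1 microsecond instruction and drum write then, a 1 nanosecond cycle and 1 millisecond disk access now) are my own assumptions, not measured figures:

```python
# Rough check of how much the processor/storage gap has widened.
# All latencies below are assumed round numbers for illustration.

DRUM_ERA_CPU_S = 1e-6    # ~1 microsecond to process an instruction
DRUM_ERA_DRUM_S = 1e-6   # ~1 microsecond to write to the drum

MODERN_CPU_S = 1e-9      # ~1 ns cycle on a GHz-class processor
MODERN_DISK_S = 1e-3     # ~1 ms (best case) for a mechanical disk access

old_gap = DRUM_ERA_DRUM_S / DRUM_ERA_CPU_S
new_gap = MODERN_DISK_S / MODERN_CPU_S

print(f"Drum era: storage was {old_gap:.0f}x slower than the processor")
print(f"Today:    storage is {new_gap:,.0f}x slower than the processor")
print(f"The gap has widened by a factor of ~{new_gap / old_gap:,.0f} (10^6)")
```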

The Processor IO Gap

Think about this for a moment in terms of distance. It’s as if the processor is one foot away while the disk is the distance from San Francisco to Los Angeles. Applications are constrained by this delay: the amount of data that can be brought into systems is extremely limited, and application design must be cognizant of this slowness.
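
As a sanity check on that analogy, the same assumed latencies (a ~1 ns cycle and a disk access of a couple of milliseconds, my numbers rather than the post’s) do put the disk a few hundred miles away when one cycle is a foot:

```python
# If one CPU cycle were one foot, how far away is a disk access?
# Latencies are assumed round numbers, not measurements.

CPU_CYCLE_S = 1e-9      # ~1 ns processor cycle
DISK_ACCESS_S = 2e-3    # ~2 ms for a random mechanical-disk access

FEET_PER_MILE = 5280

ratio = DISK_ACCESS_S / CPU_CYCLE_S           # ~2,000,000x
distance_miles = ratio * 1 / FEET_PER_MILE    # 1 foot scaled by the ratio

print(f"Disk is ~{ratio:,.0f}x slower than one CPU cycle")
print(f"At 1 foot per cycle, a disk access sits ~{distance_miles:.0f} miles away")
# ~379 miles, roughly the San Francisco-to-Los Angeles drive.
```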

Today’s computing infrastructure can be likened to a military convoy. The entire convoy must decelerate to allow the slowest vehicles to keep up. The slowest vehicle in computer systems today is the mechanical disk drive.

Computer systems today are designed to minimize trips to LA. They’re designed to handle many other tasks while data is being written to disk, so there’s an immensely complicated multiprogramming environment that’s been built up over decades. The speed of applications is severely limited by this complexity, and an organization’s ability to attack a new problem is constrained by the inflexibility of computer architectures.

In particular, databases are relatively small, and there are lots of them. Calls made to the database are relatively few. The transactional systems must be isolated from all the other data in an organization so their performance can be optimized. Think about an ERP system: it has many modules tracking inventory, supply chain, demand forecasts and so on. The entire workflow of the organization must be built around the application, and that workflow is a fixed process that is very hard to change. To alter pricing, for example, all these asynchronous systems must be synced up in a data warehouse that becomes the single source of record. But by the time that single source of record is in place, the market may very well have changed.

How Does Flash Change Infrastructure Design?

For the past fifteen years, function has moved out of the processor into the array – for good reason – to share data, protect information and offload servers. With persistent flash, however, function is moving back, closer to the server, promising new levels of application performance and organizational flexibility. Flash will reside at the server, in all-flash arrays, in hybrid arrays – virtually throughout the entire stack. The control point for the flash, however, will be the fast server, not the slow storage array.

We’ve already seen efforts to address the disparity between processor and storage performance with in-memory databases, which use DRAM protected by battery backup to provide persistent memory. This is a very expensive solution, with costly databases that may have trouble scaling.

We’ve also seen efforts to extend memory using flash and atomic writes. With atomic writes, the delta between CPU and disk speeds is reduced by 10,000 times: from a best case of 1 millisecond down to 100 nanoseconds for a line write of 64 bytes. Because flash technology is silicon-based, it is not constrained by mechanical limitations, so the gap in speed between the technologies should not widen again. This will lead to a significant reduction in the complexity of IO engineering, and a radical increase in the size, complexity and potential business value of applications.
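
That 10,000x figure is simply the ratio of the two latencies cited above; the residual comparison to a ~1 ns CPU cycle is my own added assumption:

```python
# The claimed improvement from atomic 64-byte line writes, as arithmetic.

DISK_WRITE_S = 1e-3          # ~1 ms best case to persist via a disk write
ATOMIC_LINE_WRITE_S = 1e-7   # ~100 ns for a 64-byte atomic write to flash
CPU_CYCLE_S = 1e-9           # ~1 ns processor cycle (assumed)

print(f"Speedup over a disk write: {DISK_WRITE_S / ATOMIC_LINE_WRITE_S:,.0f}x")
print(f"Remaining gap to the CPU:  ~{ATOMIC_LINE_WRITE_S / CPU_CYCLE_S:.0f} cycles")
```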

As a result, application design will change. Organizations will begin to design much flatter database structures, allowing secure, multi-tenant access to a single database of record. Data architectures will accommodate both transactional and analytic data and allow machines to make decisions (for example, pricing changes) in near real time based on market conditions. In this scenario, the organizational workflow can be changed flexibly in days or less, versus many months.
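
As a rough illustration of what that flatter pattern might look like, here’s a toy Python sketch in which the transactional and analytic paths hit the same store and a pricing decision is made inline. SQLite in memory merely stands in for the database of record, and the schema, function names and pricing rule are all hypothetical:

```python
# Toy sketch: transactional writes and an analytic query against one
# shared store, with a pricing decision made off live data.
# SQLite in memory is a stand-in; the schema and rule are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (sku TEXT, qty INTEGER, price REAL)")

def record_order(sku, qty, price):
    """Transactional path: capture each order as it happens."""
    with db:
        db.execute("INSERT INTO orders VALUES (?, ?, ?)", (sku, qty, price))

def adjust_price(sku, base_price):
    """Analytic path: reprice off live demand in the same database."""
    (demand,) = db.execute(
        "SELECT COALESCE(SUM(qty), 0) FROM orders WHERE sku = ?", (sku,)
    ).fetchone()
    # Hypothetical rule: raise the price 1% for every 10 units of demand.
    return base_price * (1 + 0.01 * (demand // 10))

record_order("widget", 40, 9.99)
record_order("widget", 25, 9.99)
print(adjust_price("widget", 9.99))  # repriced from live transactional data
```

No batch ETL into a separate warehouse, no overnight sync: the decision is made against the same data the transactions just wrote.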

The impact of these changes on organizational productivity will be enormous, resulting in much greater IT value and flexibility. Companies will be able to respond much more quickly to market opportunities, competitive threats, disasters and the like by analyzing and acting on massive streams of data in near real time.

As data volumes explode and new approaches like Hadoop hit the enterprise, flash will play a critical role in allowing organizations to manage, capitalize on and monetize massive amounts of information. System design will evolve, and flash will be an enabler of this vision. Flash as a persistent medium is important, but far more critical and valuable will be the software that manages the data end-to-end. Systems and software expertise – in file systems, operating systems, metadata management and middleware – will be critical to enabling a new crop of applications.

Importantly, spinning disk will not disappear. It will still account for the lion’s share of capacity stored and nearly half the spend. But increasingly organizations will invest more in the software, algorithms, data scientists and processes to extract value from the data rather than invest in the container in which the data resides.

Clearly this will not happen overnight. Flash systems that don’t disrupt existing processes will help accelerate existing applications without resorting to unnatural acts (e.g. wide striping or short stroking). Such systems will be in high demand in the near-to-mid term.

However, longer term, I believe the ideas shaping up in the world of hyperscale – where application design is being completely rethought – will trickle into system design within the traditional enterprise and drive new levels of productivity and unprecedented value from IT systems.

For a recap, check out this short video I did on this topic: