A New Wave of High-Performance Storage Solutions Takes Off
A Report by: Horison, Inc.
Fred Moore, President
www.horison.com
The First Solid State Disks
Solid-State Disks (SSDs) are again becoming a powerful technology for storage administrators who struggle to deliver consistent response times and to meet business-critical service levels. The first SSD was delivered to the IBM S/370 mainframe market in 1978 by StorageTek and sold for $8,800 per megabyte, or $8.8 million per gigabyte! These first SSDs were based on volatile DRAM chips rather than rotating magnetic media with moving actuator arms and read/write heads. Solid-state storage eliminates the variable-length seek and rotational delays that cause erratic performance on rotating disks, leaving only a very short access and data-transfer time to complete an I/O operation. With no cache misses or back-end data transfers, the SSD soon became a quick fix for severe I/O performance problems. Later, these devices added fault-tolerant architectures and battery backup to protect data from all types of device failure, including the loss of electrical power.
SSDs were a successful mainframe solution for I/O-intensive applications such as paging, swapping, catalogs, queues, and directories. The mainframe SSD era ended in 1985 when IBM introduced an optional virtual memory feature called Expanded Storage for S/370 processors. During the next 20 years the SSD market barely existed, with a few small companies offering SSD products for non-mainframe systems. The relatively high price of DRAM-based SSDs has been the fundamental reason the SSD market remained very small.
Today, the DRAM-based SSD market remains a small niche, with price the limiting factor. The primary use for SSDs is as database accelerators (Oracle, DB2, SQL Server, Informix, and Sybase), as transactional databases are often the most I/O-bound of all applications. Future software developments suggest that more intelligence will be applied to SSDs tightly integrated with disk storage systems, including capabilities such as automatically moving hot, I/O-intensive files to and from the SSD as workload activity dictates. Typically, one to three percent of online data can be classified as "hot data" based on its IOPS (I/Os per second) requirement, making it the best suited and most cost-justifiable candidate for an SSD.
Disk Performance Doesn't Keep Up With Capacity Increases
Magnetic disk performance has not kept pace with increases in disk capacity, creating a growing opportunity for higher-performing devices. Disk storage capacity increased exponentially during the 1990s. Since 1992, the areal density (gigabits per square inch) of magnetic disk recording has increased between 40 and 60 percent annually. More densely packed data means fewer disk actuators for a given amount of storage. Disk drive performance, by contrast, is now improving at less than 5 percent annually. Further improvements in seek times will be minimal, and any gains will come from faster data rates and possibly faster spindle speeds. Disk performance is normally measured in total random IOPS (I/Os per second) and can exceed 100 I/Os per second when the average access time (average seek, latency, and data transfer) falls below 10 ms per I/O.
The disk industry continually increases capacity per actuator without delivering corresponding performance improvements at the drive level, creating a performance imbalance described by a ratio called Access Density. Access Density is the ratio of performance, measured in I/Os per second, to the capacity of the drive, usually measured in gigabytes (Access Density = I/Os per second per gigabyte). The six remaining disk manufacturers are focused primarily on driving capacity increases, knowing that consumers still focus on acquisition price, or $/GB, rather than performance. If capacity doubled and performance doubled, access density would remain unchanged; today, disk capacity doubles without any corresponding performance improvement. Scaling disks involves more than simply increasing capacity, an issue disk drive manufacturers tend to avoid by keeping the consumer focused on $/GB and "price is everything."
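The access-density arithmetic can be sketched with a short calculation (the drive figures below are hypothetical illustrations, not vendor data):

```python
def access_density(iops: float, capacity_gb: float) -> float:
    """Access Density = I/Os per second per gigabyte of capacity."""
    return iops / capacity_gb

# Hypothetical drive generations: capacity quadruples, IOPS barely improve.
older = access_density(iops=150, capacity_gb=250)    # 0.60 IOPS/GB
newer = access_density(iops=160, capacity_gb=1000)   # 0.16 IOPS/GB
print(f"older drive: {older:.2f} IOPS/GB, newer drive: {newer:.2f} IOPS/GB")
```

Even though the newer drive is slightly faster in absolute terms, each gigabyte it stores receives far fewer I/Os per second, which is exactly the imbalance described above; doubling both capacity and IOPS would leave the ratio unchanged.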
In reality, the access density has steadily declined and the capacity has increased substantially throughout the evolution of the disk drive since the first successful drive appeared in 1956. Larger caches and actuator-level buffers can help improve overall subsystem performance, and multi-path I/O options deliver higher aggregate subsystem throughput. Access density is becoming a significant factor in managing storage subsystem performance and the tradeoffs of using higher-capacity disks must be carefully evaluated. Lowering the cost-per-megabyte usually means lowering the overall disk performance. As a result, the effective space utilization levels for non-mainframe disks have declined to reduce actuator contention and device busy conditions.
Flash Memory Gains Momentum
Flash memory gets its name because the microchip is organized so that a section of memory cells is erased in a single action, or "flash." Flash memory is a type of nonvolatile memory that can be erased and reprogrammed in units called blocks. It is a variation of the older electrically erasable programmable read-only memory (EEPROM) which, unlike flash, is erased and rewritten at the byte level, making it slower to update than flash memory.
TDK Shows 32GB Flash Memory Drive for Notebooks (graphic)
Source: http://gizmodo.com/gadgets/peripherals/tdk-shows-32gb-flash-memory-drive-for-notebooks-201337.php
NOR and NAND Flash
Both the NOR and NAND types of flash memory were invented by Dr. Fujio Masuoka while working for Toshiba in the 1980s. The name "flash" was supposedly suggested because the erasure process of the memory contents was reminiscent of the flash of a camera. The endurance of NAND flash, at typically ~1,000,000 cycles, is much greater than NOR's ~100,000 cycles, though NOR is slowly improving in this area. NAND devices are accessed much like block devices such as disk storage. Whichever type of flash is used in a device, some unfavorable performance characteristics must be addressed: NOR is fast to read current data but markedly slower to erase it and write new data, while NAND is fast to erase and write but slow to read non-sequential data through its serial interface. NAND pages are typically 512, 2,048, or 4,096 bytes in size. For data center and multi-user systems, NAND is the preferred choice, and its quick rise makes it a viable storage option for the re-emergence of SSDs in the data center.
Feature Comparison      NOR         NAND
Endurance (cycles)      ~100,000    ~1,000,000
Read speed              Faster      Slower
Erase/write speed       Slower      Faster
Single-cell and multi-cell Flash
Single-cell flash memory stores one bit of information in each memory cell, while multi-cell flash memory currently stores two. The greater density of multi-cell flash makes it well suited to personal appliances, cell phones, music players, and digital cameras, as these are single-user systems. Multi-cell flash stores more than one bit per cell by choosing between multiple levels of electrical charge to apply to the gates of its cells. But multi-cell flash is significantly slower, making single-cell flash more suitable for high-performance applications such as solid-state drives. Single-cell flash memory is more costly but also more durable than multi-cell flash: each cell on a multi-cell flash chip is currently good for about 10,000 write/erase cycles, while the cells on single-cell chips can last for about 100,000. The durability of a flash memory chip can be increased with wear-leveling techniques built into flash controller chips, which spread writes evenly across all of the memory cells on a chip instead of using the same cells repeatedly. A variety of techniques to extend the current write limits of flash technology are under evaluation by several startups.
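The wear-leveling idea mentioned above can be sketched as a toy model (illustrative only; real flash controllers use far more elaborate logical-to-physical mapping, garbage collection, and bookkeeping):

```python
class WearLevelingController:
    """Toy wear leveler: each write goes to the least-worn physical block,
    remapping the logical address so no single block is overused."""

    def __init__(self, num_blocks: int):
        self.erase_counts = [0] * num_blocks   # per-block wear counters
        self.mapping = {}                      # logical block -> physical block

    def write(self, logical_block: int) -> int:
        # Pick the physical block with the fewest erase cycles so far.
        physical = min(range(len(self.erase_counts)),
                       key=lambda b: self.erase_counts[b])
        self.erase_counts[physical] += 1
        self.mapping[logical_block] = physical
        return physical

ctl = WearLevelingController(num_blocks=4)
for _ in range(8):
    ctl.write(logical_block=0)   # hammer a single logical block
print(ctl.erase_counts)          # wear spreads evenly: [2, 2, 2, 2]
```

Without the remapping, all eight writes would have hit one physical block and consumed a quarter of its ~10,000-cycle budget of wear far sooner; with it, every block ages at the same rate.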
Feature Comparison      Single-cell       Multi-cell
Unit stored             1 bit per cell    Multiple bits per cell
Speed                   Faster            Slower
Cost                    Higher            Lower
Write/erase cycles      ~100,000          ~10,000
Flash Based SSDs Become Viable
The growing applications of flash memory are rapidly re-energizing the SSD market. Flash has no moving parts, and flash SSDs have growing appeal: they are non-volatile, have low power consumption, deliver much higher read performance than magnetic disk, produce little heat, and come in a small form factor, with pricing finally approaching the disk drive range. Flash can also withstand significant pressure, temperature variation, and even water submersion. Steadily falling flash memory prices will continue to increase the appeal and size of the SSD market.
While the capacity-focused HDD industry has always (and rightfully so) used $/GB as its measure, an SSD addresses a high-performance market and should be measured in IOPS/$. Don't measure a performance device with a capacity metric, or vice versa, as this will lead to the wrong conclusion.
                        HDD               DRAM SSD           Flash SSD
Price/GB (subsystem)    $1-25/GB          $500-1,000/GB      $20-200/GB
Performance (IOPS)      1x                2,500-5,000x       25-50x reads,
  vs. disk                                                   1-2x writes
Read/write times        5-10 ms           0.02 ms            reads 0.2 ms,
                                                             writes 4-5 ms
Read/write IOPS         100-200/sec       400,000/sec        reads 5,000-50,000,
                                                             writes 50-1,000
Capacity max.           1 TB per drive    256 GB per drive   256 GB per drive
Source: Horison, Inc. and a Variety of Industry Sources
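The metric argument above can be illustrated with rough midpoints from the table (the specific prices and IOPS figures below are assumptions chosen for the arithmetic, not quotes):

```python
def dollars_per_gb(price: float, capacity_gb: float) -> float:
    """Capacity metric: acquisition price per gigabyte."""
    return price / capacity_gb

def iops_per_dollar(iops: float, price: float) -> float:
    """Performance metric: I/Os per second bought per dollar."""
    return iops / price

# Assumed midpoints: a 1 TB HDD at $10/GB vs. a 256 GB flash SSD at $100/GB.
hdd_price, hdd_gb, hdd_iops = 10 * 1000, 1000, 150
ssd_price, ssd_gb, ssd_iops = 100 * 256, 256, 20000

print(f"HDD: ${dollars_per_gb(hdd_price, hdd_gb):.0f}/GB, "
      f"{iops_per_dollar(hdd_iops, hdd_price):.4f} IOPS/$")
print(f"SSD: ${dollars_per_gb(ssd_price, ssd_gb):.0f}/GB, "
      f"{iops_per_dollar(ssd_iops, ssd_price):.4f} IOPS/$")
```

Under these assumptions the HDD wins on $/GB by a factor of ten, while the flash SSD delivers roughly fifty times more IOPS per dollar, which is why each device class needs its own metric.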
Will a New Storage Tier (T0) Emerge?
Several storage suppliers are offering flash-based drives to complement their storage arrays. Continued success for flash storage suggests that the T0 storage tier may soon re-appear. Defined as tier 0 storage, this tier would address the highest-performing, response-time-critical applications.
Flash may also find its way into individual disk drives as drive-level cache. This hybrid disk concept adds cost to the drive, something the disk manufacturers don't want to do, but it helps disk drives deal with their growing access-density problem. The drive-level approach has been attempted before, with fixed-head disks and actuator-level buffers, though neither technique was very successful. With a 25-50 times performance improvement over disk and prices that finally make sense, the day for tier 0 to fully emerge may not be far away. For the first time in nearly 30 years, data center storage is poised to benefit from a high-performance storage solution, and this time it will be economically justifiable.
Action Item: Flash memory is beginning to appear in the data center, establishing a new tier of high-performance storage devices (Tier 0) that are, at last, economically justifiable. Flash disk drives are currently available at 146 GB and increasing, and will directly benefit applications requiring ultra-high performance and consistent response times for I/O-intensive workloads. Users should begin to identify and position their performance-critical applications as new Tier 0 solutions appear from many storage vendors.