Introduction
SSDs provide faster random access and higher data transfer rates than electromechanical hard disk drives (HDDs), and today they can often serve as rotating-disk replacements, but the host interface to these devices remains a performance bottleneck. PCIe-based SSDs, together with several emerging standards, promise to solve that bottleneck.
SSDs are proving useful today but will find far broader usage once new standards mature and the industry delivers integrated circuits that enable closer coupling of the SSD to the host processor.
Today SSDs Use Disk Interfaces
The disk-drive form factor and interface allow IT vendors to seamlessly substitute an SSD for a magnetic disk drive. No change is required in system hardware or driver software. You can simply swap in an SSD and realize significantly better access times and somewhat faster data-transfer rates.
However, disk-drive interfaces are not ideal for flash-based SSDs. Flash can support higher data transfer rates than even the latest generation of disk interfaces. Also, SSD manufacturers can easily pack enough flash devices in a 2.5-inch form factor to exceed the power profile developed for disk drives.
Most mainstream systems today use second-generation SATA and SAS interfaces (referred to as 3Gbps interfaces) that offer 300MB/sec transfer rates. Third-generation SATA and SAS push that rate to 600MB/sec, and drives based on those interfaces have already found use in enterprise systems.
While those data rates support the fastest electromechanical drives, new NAND flash architectures and multi-die flash packaging deliver aggregate flash bandwidth that exceeds the data transfer capabilities of SATA and SAS interconnects. In short, the SSD performance bottleneck has shifted from the flash devices to the host interface. The industry needs faster host interconnects to take full advantage of flash storage.
The PCIe host interface can overcome this storage performance bottleneck and deliver unparalleled performance by attaching the SSD directly to the PCIe host bus. For example, a 4-lane (x4) PCIe Generation 3 (G3) link, which will ship in volume in 2012, can deliver 4GB/sec data rates. Moreover, the direct PCIe connection can reduce system power and slash the latency that's attributable to the legacy storage infrastructure.
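The bandwidth figures above follow from the line rate and the line-code overhead of each interface. The sketch below is an illustrative back-of-the-envelope calculation (the function name is invented, not from any specification): SATA/SAS generations through 6Gbps use 8b/10b encoding, while PCIe Gen3 runs each lane at 8GT/s with the more efficient 128b/130b encoding.

```python
def effective_mb_per_sec(line_rate_gbps, encoded_bits, payload_bits, lanes=1):
    """Usable bandwidth in MB/s: raw line rate scaled by line-code efficiency."""
    efficiency = payload_bits / encoded_bits
    return line_rate_gbps * 1e9 * efficiency / 8 / 1e6 * lanes

# Second-generation SATA/SAS: 3Gb/s with 8b/10b encoding -> 300 MB/s
sata_gen2 = effective_mb_per_sec(3, 10, 8)

# PCIe Gen3 x4: 8GT/s per lane with 128b/130b encoding -> ~3.94 GB/s,
# i.e., the roughly 4GB/sec cited above
pcie_g3_x4 = effective_mb_per_sec(8, 130, 128, lanes=4)

print(sata_gen2)   # 300.0
print(pcie_g3_x4)
```

The same arithmetic shows why third-generation 6Gb/s SATA tops out at 600MB/sec: the 8b/10b code discards 20% of the raw bit rate before the protocol overhead is even counted.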
Multiple Standards in Process
In typical industry fashion, a happy marriage of SSDs and PCIe is being addressed by multiple standards efforts including:
NVM Express – The Optimized PCI Express SSD Interface
The NVM Express (NVMe) specification, developed cooperatively by more than 80 companies from across the industry, was released on March 1, 2011 by the NVMHCI Work Group, now more commonly known as the NVMe Work Group. The NVMe 1.0 specification defines an optimized register interface, command set, and feature set for PCIe SSDs, with the goal of enabling their broad adoption.
A primary goal of NVMe is to provide a scalable interface that unlocks the potential of PCIe-based SSDs now and into the future. The interface efficiently supports multi-core architectures: a thread on each core can issue and complete I/O through its own queue and interrupt, with no locks required. For enterprise-class solutions, there is support for end-to-end data protection, security, and encryption, as well as robust error reporting and management capabilities.
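The per-core queue model can be sketched as a small simulation. This is an illustrative toy, not the NVMe register interface: the class and method names are invented, and the "controller" is modeled as an in-process loop. The point it demonstrates is the structure NVMe standardizes: each core owns its own submission/completion queue pair, so commands are issued and reaped without any cross-core locking.

```python
from collections import deque

class QueuePair:
    """Toy stand-in for an NVMe submission/completion queue pair.

    Each core owns exactly one pair, so the host never needs a lock
    to submit commands or reap completions."""
    def __init__(self, core_id):
        self.core_id = core_id
        self.sq = deque()   # submission queue (host -> controller)
        self.cq = deque()   # completion queue (controller -> host)

    def submit(self, command):
        # In real hardware this would also write a doorbell register.
        self.sq.append(command)

    def process(self):
        # Stand-in for the controller consuming submissions
        # and posting completion entries.
        while self.sq:
            cmd = self.sq.popleft()
            self.cq.append(("done", cmd))

    def reap(self):
        completions = list(self.cq)
        self.cq.clear()
        return completions

# One queue pair per core: I/O on core 0 never touches core 1's queues.
pairs = [QueuePair(core) for core in range(4)]
pairs[0].submit("READ lba=0")
pairs[0].process()
print(pairs[0].reap())   # [('done', 'READ lba=0')]
```

Contrast this with a legacy single-queue disk stack, where every core funnels requests through one shared queue and its lock; the per-pair design is what lets NVMe scale with core count.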
SCSI Express
SCSI Express uses the SCSI protocol to let SCSI targets and initiators talk to each other across a PCIe connection. Very roughly, it is NVMe with SCSI added, but it also includes a SCSI command set optimized for solid-state technologies.
HP is a visible supporter: it hosted a SCSI Express booth at its HP Discover event in Vienna, with support at the event from Fusion-io.
SCSI Express is currently being standardized within INCITS T10 by the SOP-PQI Working Group and the SCSI Trade Association, with involvement from the SFF Committee and the PCI-SIG.
SATA Express – Enabling Higher Speed, Low Cost Storage Applications
SATA Express is a new specification under development by SATA-IO that combines the SATA software infrastructure with the PCI Express (PCIe) interface to deliver high-speed storage solutions. SATA Express enables the development of new devices that use the PCIe interface while maintaining compatibility with existing SATA applications. The technology will provide a cost-effective means to increase device interface speeds to 8Gb/s and 16Gb/s.
Solid-state drives (SSDs) and hybrid drives are already pushing the limits of existing storage interfaces. SATA Express aims to provide a low-cost solution that fully utilizes the performance capability of these devices. Storage devices not requiring the speed of SATA Express will continue to be served by existing SATA technology. The specification will define new device and motherboard connectors that will support both new SATA Express and current SATA devices.
The specification is not expected to be complete until the end of 2011. It will leverage PCIe 3.0 to reach the two new 8Gbps and 16Gbps operating speeds while remaining backward compatible with existing SATA devices.
Form Factors for PCIe SSDs
These interface standards do not address SSD form factors; that issue is being worked out by a separate group. The SSD Form Factor Working Group was formed to promote PCIe as an SSD interconnect through standardization, focusing on three key technology areas: the connector, the drive form factor, and hot-pluggability.
Summary
The significant performance advances enabled by non-volatile, memory-based storage technology are demanding that the surrounding platform infrastructure evolve to keep pace and allow realization of the full potential of SSDs.
Action Item: As SSDs penetrate deeper into enterprise storage architectures and capture more of the capacity that demands performance, the model of charging for capacity will change. Instead of paying for tiers of storage that require data movement, look to design architectures in which the user pays for I/O performance; this will avoid runaway usage. This is an evolving model with no established best practices yet.