Managing billions of small files effectively requires a clear understanding of data flows and a system based on common Lego-like building blocks that provide services to application owners.
This was the message at the September 29th, 2009 Peer Incite Research Meeting, where Eugean Hacopians, Senior Systems Engineer at the California Institute of Technology (Caltech), addressed the Wikibon community.
Caltech is the academic home of NASA’s Jet Propulsion Laboratory. As such, it runs the downlink for the Spitzer Space Telescope, NASA's orbital space telescope, as well as 13 other missions, processes the raw data into images, and supports the needs of scientists visiting from locations worldwide. The focus of this discussion was the activities of the Infrared Processing and Analysis Center (IPAC), which has evolved to become the national archive for infrared data from telescopic space missions.
To be sure, Caltech’s needs are extreme. The organization is the steward for more than 2.3 petabytes of data created by its 14 currently active missions. Caltech captures data from these missions and performs intense analysis in what it calls its ‘Sandbox’, a server and storage infrastructure that supports the scientific applications that analyze the data. Once ‘crunched,’ the data is moved to an archive using homegrown data-movement software.
Special Requirements
Hacopians explained to Wikibon members that due to the nature of the downlink, the files managed by Caltech are small, ranging from 5 to 25 kilobytes in size. But there are a lot of them -- billions or even trillions. Caltech had previously attempted to use hierarchical storage management (HSM) software with tape but quickly realized the environment was not appropriate for tape libraries. Hacopians called it a ‘tape killer.’
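A rough sanity check shows how those file sizes and the 2.3 PB archive imply file counts in the hundreds of billions. This is a back-of-the-envelope sketch, not a figure from the discussion; it simply divides the stated archive size by the stated file-size range:

```python
# Back-of-the-envelope estimate (derived, not quoted from the meeting):
# how many 5-25 KB files would a 2.3 PB archive hold?

archive_bytes = 2.3 * 10**15        # 2.3 petabytes, decimal units
min_kb, max_kb = 5, 25              # stated file-size range

files_if_all_large = archive_bytes / (max_kb * 1000)  # every file 25 KB
files_if_all_small = archive_bytes / (min_kb * 1000)  # every file 5 KB

print(f"{files_if_all_large:.1e} to {files_if_all_small:.1e} files")
# on the order of 10^11 -- consistent with 'billions or even trillions'
```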
The team at Caltech had to design a cost-effective means of providing reliable access to all this scientific data. Organizationally, the projects supported by Caltech also had to be completely walled off from each other from an accounting standpoint. Rather than implement a shared SAN infrastructure with onerous chargeback mechanisms, Caltech decided to use a common set of technologies that would support each of the projects. The technological building blocks are:
- A Sun Solaris server running the ZFS file system,
- A QLogic 5602 FC switch,
- One to three Nexsan SATABeast arrays.
Caltech uses Nexsan’s AutoMAID spin-down capabilities in its archive to reduce energy costs, employing Level 1 (slowing the spin speed of the disks) and Level 2 (parking the heads after sufficient inactivity). It does not put the drives into sleep mode (Level 3) and has never had reliability problems associated with spinning down devices.
Caltech uses SAIC tape for long-term archiving and as a last-resort off-site disaster recovery option. However, its own tests indicate that because of the huge number of small files involved, recovery from tape would take weeks or longer.
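The weeks-or-longer estimate follows from a general property of small-file tape restores: per-file overhead, not streaming bandwidth, dominates the clock. The sketch below uses illustrative assumed figures (file count, drive rate, and per-file overhead are not from the source) to show why even a tiny per-file cost swamps the raw transfer time:

```python
# Illustrative model (all figures assumed, not from the source): with tiny
# files, per-file handling cost dominates tape restore time.

n_files = 10 * 10**9           # 10 billion files (assumed)
file_bytes = 15 * 1000         # 15 KB, midpoint of the 5-25 KB range
drive_bytes_s = 100 * 10**6    # 100 MB/s sustained streaming rate (assumed)
per_file_overhead_s = 0.01     # metadata/positioning cost per file (assumed)

stream_days = n_files * file_bytes / drive_bytes_s / 86400
overhead_days = n_files * per_file_overhead_s / 86400

print(f"raw streaming time: {stream_days:.0f} days")
print(f"per-file overhead:  {overhead_days:.0f} days")
# Even a 10 ms per-file cost adds years of wall-clock time on one drive,
# so restores must be massively parallelized -- or avoided entirely.
```

The design choice follows: keep the working copy on spun-down SATA disk, where small-file access is cheap, and treat tape strictly as a last resort.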
This building block approach has allowed Caltech to use common configurations across its infrastructure. Caltech derives four main benefits from this strategy:
- The infrastructure is architected for fast, simple, safe recovery from failure or data loss.
- The approach scales nicely in support of Caltech’s data growth, which occurs in large chunks of hundreds of terabytes and billions of files at a time.
- It streamlines staff training.
- The "Lego" building-block method allows Caltech to reuse infrastructure when it comes off maintenance, providing it with large numbers of spares and saving money.
Caltech uses a cascading refresh approach when new infrastructure is purchased, placing the newer equipment in support of the most critical parts of the infrastructure and migrating older equipment to less mission-critical areas. Here the archive is the most critical tier, both because it houses massive numbers of files that scientists access for their research and because it is regarded as a national archive whose data must be kept indefinitely. The Sandbox infrastructure is the least critical because data is quickly migrated off it into the archive.
Benefits of the Approach
The choice of building-block versus shared-SAN infrastructure is an interesting one. While it may appear more expensive, because in some cases Caltech may be over-provisioning resources to support an application, on balance the benefits outweigh the costs. Caltech has only three individuals looking after all this infrastructure, and the system has been extremely reliable. Training costs are low because of the commonality across the infrastructure, and data integrity has been high. The organization has not lost big chunks of productivity to data loss or complicated recoveries.
The Nexsan infrastructure is a good fit for Caltech for two primary reasons: Caltech’s applications are well-suited to using high capacity SATA arrays as part of its building block strategy, and Nexsan support has been very responsive, assisting Caltech in both architecting the building block (from a storage perspective) and rapidly solving problems. Caltech has avoided jumping on the fad du jour (e.g. object-based storage or Cloud Computing models), preferring rather to stick with a proven approach.
Action Item: The challenge of managing many billions or even trillions of small files presents issues above and beyond the difficulties of managing capacity and growth. In this type of environment, IT organizations must understand the type of data, the rate of data change, and the flow of data before settling on an infrastructure and methodology to support applications. Taking a building-block approach, using common server, interconnect, and storage components, will simplify installation, maintenance, and training, and support faster, more reliable recovery from data loss.