How Orbital Sciences Manages High Data Growth

Recently I had the opportunity to speak with Bryan Pretre (pronounced "prader"), a Senior IT Operations Manager at Orbital Sciences Corporation. Orbital Sciences manufactures small space satellites and launch vehicles. The company experiences 60-70% annual data growth, and managing this information was becoming unsustainable. Pretre had an interesting take on the growth of unstructured data. He told me:

“You can’t control the rate of storage growth…but you can control storage costs.”

How does Orbital Sciences control costs? Pretre implemented what he refers to as a ‘Tier 1 Avoidance Strategy.’ We’ve seen this theme recurring over the past few years from Wikibon members in both block and file environments. The idea is to create so-called Performance Grades that service different data requirements. Performance Grade 1, for example, stores frequently accessed data requiring fast access, while Performance Grade 2 services less active data.

Here’s a short video clip of Bryan explaining how he manages growth:

In the case of Orbital Sciences, the IT organization implemented virtualization technology from F5 Networks, which enabled the company to initiate an ILM/HSM strategy. Pretre explained to me that using an F5 ARX device he implemented a global namespace, which separates the presentation of data from its physical location. This allows the IT department to automate the movement of information from Tier 1 to Tier 2, and from system to system, based on policies or ‘rule sets.’
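To make the idea concrete, here is a minimal Python sketch of the kind of rule-set logic described above: find files on the fast tier that haven't been touched in a while and relocate them to the slower tier. The paths, the 90-day threshold, and the function names are assumptions for illustration only; in practice the ARX appliance evaluates these policies itself, and its global namespace keeps the client-visible path unchanged while the data moves.

```python
import os
import shutil
import time

# Illustrative values only -- not Orbital Sciences' actual policy settings.
TIER1 = "/mnt/tier1"     # 15K rpm FC storage (hypothetical mount point)
TIER2 = "/mnt/tier2"     # 7,200 rpm SATA storage (hypothetical mount point)
INACTIVE_DAYS = 90       # migrate files untouched this long (assumed)

def find_inactive(root, max_age_days, now=None):
    """Yield files under root whose last access is older than the cutoff."""
    now = now or time.time()
    cutoff = now - max_age_days * 86400
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_atime < cutoff:
                yield path

def migrate(path, src_root, dst_root):
    """Move one file to the slower tier, preserving its relative path.

    A file-virtualization layer (like the ARX global namespace) would hide
    this move from users; here we only relocate the physical data."""
    rel = os.path.relpath(path, src_root)
    dest = os.path.join(dst_root, rel)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    shutil.move(path, dest)
    return dest
```

A real policy engine would also consider the business, legal and regulatory requirements Pretre mentions, not just access time.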

As Bryan discussed, the basic data classification scheme used by Orbital Sciences is one of usage characteristics. Pretre sets policies based on business, legal and regulatory requirements, and data is migrated from Tier 1 to Tier 2 based on levels of inactivity. Tier 1 uses 15K rpm FC drive infrastructure, while Tier 2 uses 7,200 rpm 1TB SATA drives at roughly half the cost. This strategy has allowed Pretre to avoid purchasing any more Tier 1 disk for another two to three years.
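The economics are easy to sanity-check. The only cost figure from the article is that the SATA tier runs about half the price of the FC tier; the per-TB price and the inactive-data fraction below are hypothetical numbers chosen purely to show the arithmetic.

```python
# Hypothetical per-TB prices; the article's only cost fact is the 2:1 ratio.
COST_TIER1_PER_TB = 10_000.0              # 15K rpm FC (assumed)
COST_TIER2_PER_TB = COST_TIER1_PER_TB / 2  # SATA at about half the cost

def tiered_cost(total_tb, inactive_fraction):
    """Cost of landing inactive data on Tier 2 instead of buying all Tier 1."""
    active_tb = total_tb * (1 - inactive_fraction)
    inactive_tb = total_tb * inactive_fraction
    return active_tb * COST_TIER1_PER_TB + inactive_tb * COST_TIER2_PER_TB

def savings(total_tb, inactive_fraction):
    """Spend avoided versus putting everything on Tier 1."""
    return total_tb * COST_TIER1_PER_TB - tiered_cost(total_tb, inactive_fraction)
```

With 100TB of which 70% is inactive, the tiered layout saves 70TB times the per-TB price difference; the larger the inactive fraction, the bigger the Tier 1 avoidance.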

The key driver for the move toward virtualization was cost, specifically CAPEX savings. Pretre told me that his CIO looks at return on invested capital as the most important metric to manage, with a breakeven requirement typically in the 12-14 month range; meaning management expects the initial cost of the hardware, software and implementation to be offset within roughly a year by avoided Tier 1 purchases.

I asked Bryan about user impacts; he said there were none that were noticeable. Frequently we’ve seen a backlash with this type of strategy, especially in shops with no chargebacks. Specifically, organizations that use chargebacks are more receptive to this type of approach, whereas companies that don’t implement a chargeback system have less visibility into the consumption of IT resources by department. In that situation, the line of business often has veto power over migrations off Tier 1, and IT has less power to create a default Tier 2 that houses most data.

Advice to peers? Bryan says be smart and don’t move data to Tier 2 that doesn’t belong there. Start with basic files like Word docs and reference information, and chances are you won’t see an impact on performance.
