Why Tape is Poised for a George Foreman-Like Comeback

Tape is Dead, Not!

The combination of tape and flash will yield much better performance and substantially lower cost than spinning disk. This statement will prove true for long-term data retention use cases storing large data objects. The implications of this forecast are: 1) Tape is relevant in this age of Big Data; 2) Certain tape markets may actually show growth again; 3) Spinning disk is getting squeezed from the top by flash and from below by a disk/tape mashup we call “flape.”

Spinning Disk: Slow and Getting Slower

For decades we’ve heard that the amount of capacity under a disk actuator would increasingly be problematic for application performance. Specifically, as disk capacities grow at a Moore’s Law-like rate, placing ever more data under a single mechanical arm relentlessly decelerates performance, and eventually the spinning rust business hits a wall. Guess what? The predictions were right. We’re finally at the breaking point.

For a long time the storage industry masked disk performance problems by spinning the platters faster, increasing track capacities, using larger caches, writing better algorithms, creating massive backend disk farms, wide-striping data and performing unnatural acts like short-stroking devices. More recently the industry began to jam flash into legacy storage arrays, which breathed new life into disk performance. For a while. But the end is near for so-called “high performance” spinning disk. Ironically, the very flash technology that has extended the runway of legacy SAN is the reason why tape, for certain applications, could grow again. Three main reasons explain this contrarian view:

  1. The number of disk vendors is down to the single digits. In the 1980s there were more than 70 disk drive players…now we have three or four. The investment going into mechanical disk drives simply isn’t there anymore.
  2. Head technology investments, the linchpin of disk technology for years, are in a long, slow, managed decline. Head technology used to be a strategic advantage; it was a main reason Al Shugart bought Control Data’s Imprimis disk division in 1989. In 2014, the ROI of innovating in disk head technology is minimal.
  3. The days of so-called high-speed spinning disk are numbered. High-performance spinning disk is an oxymoron. Flash will replace high spin speed disks within a year, because flash is now more economical than the segment formerly known as high-performance disk drives.

The lack of investment in spinning disk technology, the fact that spin speeds have hit their physical limits and the reality that track capacities aren’t increasing very fast combine to give you slow-as-molasses disk bandwidth. To be clear, we’re talking about the speed at which data comes off the disk internally. Time-to-first-byte is faster on disk than on tape because disk is a random access device, but once you find the data, if it’s a large object like a video file or an archived email blob, it takes a long time to get it off the disk. This is not the case for tape. Tape heads aren’t housed in a hermetically sealed disk unit, so their tolerances are much less stringent than disk heads’. Tape heads are also fixed: a stationary component that can more easily be replicated and staggered.

Metadata is the Key to Tape’s Future

Today, metadata, the data about the data (i.e. what files live where), is locked inside the contents of an individual tape cartridge. If you take all the metadata that’s locked on tapes and surface that onto a flash layer as a front-end to tape – Wikibon calls this flape – you’ll get way lower costs and much better performance than with a spinning disk system…even a disk system with flash. It’s happening today. Wikibon practitioners (several in media and entertainment) are writing metadata taxonomies and surfacing metadata to a high-speed layer. Many still use “fast” disk for that layer but soon they will be using flash. This infrastructure is connected to an application server, which writes both metadata and data directly to the high-speed layer. The data is then grouped into objects that are categorized based on the taxonomy and trickled asynchronously to the backend tape for cold storage. These customers are also writing algorithms that fetch data intelligently based on the data request. For example, if the fifth request for a piece of data in a chain can be serviced more quickly than the first, they’ll complete the fifth request while in parallel servicing the first and then re-order the chain at the backend.
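The write path described above can be sketched in a few lines: metadata lands on the fast layer immediately, while object data is staged and trickled asynchronously to tape. This is a minimal illustration of the concept, not any vendor’s implementation; the class and method names are hypothetical.

```python
import queue

class FlapeStore:
    """Toy sketch of a flape write path: metadata stays on the fast
    (flash) layer for quick lookup; object data is queued and trickled
    to backend tape asynchronously by a background mover."""

    def __init__(self):
        self.metadata = {}               # flash-resident index: name -> attributes
        self.tape_queue = queue.Queue()  # objects awaiting migration to tape

    def write(self, name, data, tags):
        # Metadata (taxonomy tags, size, location) is surfaced to flash.
        self.metadata[name] = {"tags": set(tags), "size": len(data), "on_tape": False}
        # Data is staged for asynchronous trickle to tape.
        self.tape_queue.put((name, data))

    def migrate_one(self):
        # Simulate the async mover draining one object to cold storage.
        name, _data = self.tape_queue.get()
        self.metadata[name]["on_tape"] = True
        return name

store = FlapeStore()
store.write("clip001.mov", b"\x00" * 1024, ["video", "live"])
store.migrate_one()
```

The key property is that a lookup against `store.metadata` never touches tape; only a fetch of the object body does.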

These future flape architectures are delivering new business value for customers because they’re able to monetize complex and previously unattainable (in a reasonable amount of time) seek requests for information. An example might be: “Give me all the Lady Gaga video clips where she performed live, with Crayon Pop opening for her, between 2011 and 2012.” Think about applying this to facial recognition applications, email archive blobs or any large object data set.
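A request like that is just a filter over the flash-resident metadata index; no tape needs to be mounted until the matching objects are fetched. A toy sketch, with entirely made-up file names and fields:

```python
# Flash-resident metadata index: file -> attributes.
# All names and fields here are illustrative, not from any real product.
index = {
    "gaga_live_2011.mov":   {"artist": "Lady Gaga", "live": True,  "opener": "Crayon Pop", "year": 2011},
    "gaga_studio_2012.mov": {"artist": "Lady Gaga", "live": False, "opener": None,         "year": 2012},
    "gaga_live_2013.mov":   {"artist": "Lady Gaga", "live": True,  "opener": "Crayon Pop", "year": 2013},
}

def find_clips(artist, opener, start, end):
    """Resolve a complex seek request entirely on the flash layer,
    before a single tape is mounted."""
    return sorted(
        name for name, m in index.items()
        if m["artist"] == artist and m["live"]
        and m["opener"] == opener and start <= m["year"] <= end
    )

hits = find_clips("Lady Gaga", "Crayon Pop", 2011, 2012)
```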

The spinning disk cartel will cringe at the premise that tape is faster/cheaper/better. They’ll make faces and tell you why this is nonsense. They will tell you tape is not cool and you are not cool if you use tape. Don’t just trust their skepticism. Do the research yourself and come to your own conclusions. You may find that you can drive significant value for your organization.

David Floyer explains all the gory details and technology assumptions in this Research Note. Read it carefully and think about what flape could do for your business.

Tape Versus Disk Performance

Let’s look at performance more closely. How can tape be faster than disk? Tape is a serial medium whereas disk is a random access device, so disk must be faster. Well, this turns out not always to be true, especially where bandwidth is the primary measure of performance, as it is for large objects (e.g. audio, video, facial recognition, large email blobs). The slide below compares disk and tape bandwidth, specifically the internal bandwidth of the device normalized by capacity. Key takeaways from this data include:

[Slide: internal bandwidth of disk vs. tape, normalized by capacity, illustrated by a kid sucking coconut milk through a straw]

  • Tape bandwidth is nearly 5X that of disk.
  • The tape blue line stays flat over time, while the disk red line declines dramatically.
  • The implication is that the time it takes to scan a 5TB tape cartridge today will be the same as it takes to scan a 50+TB cartridge in 2023.
  • See the kid sucking coconut milk from the straw? The disk straw is tiny and really isn’t getting bigger.
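The flat-line claim is easy to sanity-check with arithmetic: if a device’s bandwidth scales with its capacity, full-scan time stays constant; if bandwidth lags capacity, scan time balloons. The bandwidth figures below are illustrative, not measured:

```python
def scan_time_hours(capacity_tb, bandwidth_mb_s):
    """Time to stream an entire device at its internal bandwidth."""
    return capacity_tb * 1e6 / bandwidth_mb_s / 3600

# If tape bandwidth scales with capacity, scan time stays flat...
tape_today  = scan_time_hours(5, 250)     # 5 TB cartridge at 250 MB/s
tape_future = scan_time_hours(50, 2500)   # 50 TB cartridge at 2,500 MB/s

# ...while disk bandwidth grows far more slowly than disk capacity,
# so scan time per device keeps climbing.
disk_today  = scan_time_hours(4, 180)     # ~6 hours
disk_future = scan_time_hours(40, 300)    # ~37 hours
```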

The fact is that disk is slower than tape when large files are involved because the internal bandwidth of disk devices is limited, especially as capacity grows. As implied in the points above, the time it takes (for example) to rebuild a failed disk drive is escalating with every new generation of capacity increase. Rebuilding a 21TB disk could take more than a month, which by the way is why no one should use RAID 5 anymore: during a rebuild you’re exposed for far too long, risking a second drive failure and data loss.

Another reason tape can be faster is file systems like the Linear Tape File System (LTFS) and SAM-QFS. Tape file systems separate metadata in a self-describing tape format, providing direct access to file content and metadata independent of any external database or storage system. This presents a standard file system view of the data, making access to files stored on tape logically similar to accessing files stored on disk. LTFS was introduced by IBM and SAM-QFS by Sun Microsystems; Oracle is now the steward of the latter format. We believe these types of innovations, combined with flash, will breathe new life into tape and solve a nagging user problem: how to contain data growth cost effectively.
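The rebuild arithmetic is straightforward. A quick back-of-the-envelope calculation, assuming a hypothetical throttled rebuild rate of 10 MB/s (real rebuilds competing with foreground I/O often run this slowly or worse), shows why multi-week rebuilds are plausible:

```python
def rebuild_days(capacity_tb, effective_mb_s):
    """Sequential rebuild time at a given effective rate. The rate is
    an assumption for illustration; production rebuilds are throttled
    by foreground I/O and vary widely."""
    return capacity_tb * 1e6 / effective_mb_s / 86400

# A 21 TB drive rebuilt at an effective 10 MB/s:
days = rebuild_days(21, 10)   # on the order of weeks
```

During that entire window a RAID 5 group has no remaining redundancy, which is the exposure the article warns about.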

What about Costs?

Is the demise of tape greatly exaggerated? We believe it is. Beyond the performance scenario put forth above, there are two reasons tape continues to be around: 1) Tape is cheaper than disk for long-term retention; and 2) As a last-resort disaster recovery medium, tape is still the most cost effective (and fastest) way to move data from point A to point B. We call this “CTAM,” the Chevy Truck Access Method. When it comes to compliance for DR, tape is still viable. From a cost perspective, HDDs have been unable to keep pace with the areal density curve of tape. As an example, overall $/TB declines for disk are forecast at roughly 17% per annum whereas tape is tracking at a 23% decline. Tape of course starts at a much lower cost per bit than disk and is likely to remain an order of magnitude cheaper. Flash, meanwhile, is on the steepest price decline curve of the three storage technologies and is expected to continue to close the gap on disk.
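Compounding those decline rates shows why the disk/tape cost gap widens rather than closes. The starting $/TB figures below are illustrative placeholders; only the 17% and 23% annual decline rates come from the forecast above:

```python
def cost_per_tb(start_cost, annual_decline, years):
    """Project $/TB under a constant annual percentage decline."""
    return start_cost * (1 - annual_decline) ** years

# Hypothetical starting points: disk an order of magnitude above tape.
disk = cost_per_tb(30.0, 0.17, 10)   # 17%/yr decline over a decade
tape = cost_per_tb(3.0, 0.23, 10)    # 23%/yr decline over a decade
ratio = disk / tape                  # gap grows from 10x toward ~20x
```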

[Chart: projected $/TB decline curves for flash, disk and tape]

Quantifying the Flape Effect

We believe a new architecture called “Flape” will emerge, combining flash storage and tape. The data below show an economic model developed by Wikibon around flape as compared with alternative spinning disk architectures. Notably, the best value is flape with a large flash metadata layer. That scenario will be 3-5 times faster (for large objects) than disk-based alternatives.

What about smaller files? Flape may still be the way to go in such use cases because archiving software is often able to group smaller files into larger objects. Today that is done on disk-based systems, but there’s no reason it couldn’t be done on flash with some backend integration to tape. With flape, data and metadata would be written to the flash layer and the data moved asynchronously to tape. There are a few items to consider when thinking about a flape architecture. First, is it realistic that every application or data set can reside on either flash/SSD or tape? Second, can an application that needs data residing on tape easily access it? This is why a middleware layer is needed to find the data in an “appropriate” amount of time. Flape architectures must be designed to read metadata that lives on the high-speed layer; that metadata tells the system where the data lives so it can be retrieved in the most efficient timeframe. As long as an application can work in this fashion, a flape architecture can help an organization strike the right balance between performance and cost, which in turn drives business value.
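The grouping idea can be sketched as a simple greedy packer that bundles small files into tape-friendly containers before migration. The 256MB target and the function name are assumptions for illustration, not a description of any shipping archiver:

```python
def pack_into_objects(files, target_mb=256):
    """Greedily pack small files into larger containers so tape sees a
    few big sequential objects instead of many tiny ones.
    `files` is a list of (name, size_in_mb) pairs."""
    objects, current, current_size = [], [], 0
    for name, size in files:
        # Flush the current container when the next file would overflow it.
        if current and current_size + size > target_mb:
            objects.append(current)
            current, current_size = [], 0
        current.append(name)
        current_size += size
    if current:
        objects.append(current)
    return objects

batches = pack_into_objects([("a", 100), ("b", 100), ("c", 100), ("d", 60)])
```

Each resulting batch can then be written to tape as one large object, preserving the streaming-bandwidth advantage described above.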

Who Will Lead the Flape Revolution?

Our view is that in order to scale, flape needs leadership that can set standards, point the way for customers to develop metadata taxonomies, provide middleware technology to exploit flape and finally entice ISVs to recognize flape as a viable platform. Today, customers are leading the innovation charge. Wikibon spoke with several IT practitioners who have developed early instantiations of flape using a high spin speed disk metadata layer. As indicated, it’s only a matter of time before this becomes flash-based storage. These customers are writing metadata / data classification taxonomies and automating the placement of metadata on the high-speed layer at the point of data creation. They’ve also written request optimization algorithms as described earlier. In order for flape to be adopted widely however, the vendor community must step up to the plate.

In our view, the two companies best positioned to execute on flape are IBM and Oracle. HP could also be a player, but Oracle in particular, with its StorageTek heritage, could emerge as an early innovator. This is not to say that IBM could not participate in this trend. In fact we believe IBM will. Both IBM and Oracle sell flash storage products, IBM with the TMS acquisition and Oracle with the ZFS line of flash-first arrays. In addition, several Oracle customers we spoke with are focusing the ZFS appliance on backup applications as an alternative to NetApp targets. Meanwhile, IBM with its Tivoli line of management software has a background in this market. We believe both companies are actively working on flape-like products. Clearly the market will be much better off with two suppliers and ironically, we feel IBM and STK will further collaborate on tape standards to accelerate market adoption. It would be in their mutual interests to do so as market leader EMC is vocal in its marketing about the death of tape.

Like the famous boxer who once lost his luster, tape is re-inventing itself and can become a prominent mainstream player again. Tape is no longer a good backup medium. It has been re-positioned for long-term retention. Tape’s superior economics relative to disk and its better performance for large files (when combined with flash) make it attractive for retention and big data repository apps. Not to mention that tape lasts longer. The bit error rates for tape are two orders of magnitude better than spinning disk – meaning tape is a much more suitable platform for long-term storage. If the industry steps up and buyers keep an open mind, flape is a winner that will cut costs for the “bit bucket,” improve archiving performance for large object workloads and deliver substantial incremental business value.



  • Snehal Dasari

    Hi David,

    Sure, tape might be re-inventing itself, but it still hasn’t worked out the access mechanism required for data written several generations ago.

    If I have an LTO5 tape drive I can’t read my old LTO1 or LTO2 tapes due to the limitations of the drive technology. Presumably, “flape” (btw, why not “tapefl” – pronounced tape-fill? “Flape” sounds weird, then again so does “tapefl”) would need to address this in a standards-based way so that data can always be accessed, regardless of how old the media is.


  • Nathan

    There is some conflicting data in this article. The chart showing disk vs. tape performance is inaccurate. I assume the chart refers to speed, which for tape stays constant. In the article you say the time it takes to read an entire tape stays constant. Both cannot stay constant if the capacity is 10x. I imagine you mean the speed is increasing, not staying the same.

  • Mark Erickson

    Snehal – (1) Most of these advances have come since about LTO3/4; LTFS in particular is not on prior generations. Let’s begin there. (2) LTO4 is 4 years in the market and still sold new. It is fully read compatible with LTO6. (3) LTO7 will break that, but there is nothing forcing a client to jump directly to LTO7. Why not keep some LTO6 drives around for N-3 read compatibility with LTO4 media? (4) LTO is meant for generation skipping (skip 7, go to 8, if you’re on a 4/6 track), which allows clients to stretch the tech life. We would expect LTO6 to be relevant in 4-6 years, meaning your (then) 10+ year old LTO4 media is still readable. What is wrong with that? And (5) if you’re looking for more innovation on media reuse, look to the innovation drive line – the TS1100 from IBM – not the value drive line (which has a media roadmap dictated by the consortium).

  • dvellante

    True, there’s no free lunch…Both disk and tape have migration considerations over time. Tape longevity is better (much) than disk so the migration costs will be lower for tape over a 10-year period. All those migration costs are assumed in our models – so as we always tell customers – don’t focus on just one thing – look at the full picture and make a business case.

  • dvellante

    I don’t agree the data is inaccurate, Nathan. The Y-axis is normalized on a per-GB basis (bandwidth per GB of capacity), so a flat line suggests the time to scan a 5TB tape today will be the same as it takes to scan a 20+TB tape in the future. So tape speed doesn’t stay constant as you say; it actually increases quite dramatically over time, but when you normalize it on a per-GB basis it graphs as a flat line. In turn, while disk performance is increasing ever so slightly, when it’s normalized per capacity it declines quite dramatically. This is a function of the increased capacity-per-actuator problem described in the article and the inability of the disk industry to increase performance at a rate proportional to capacity increases.

  • dvellante

    Thanks for making this point, Mark. With tape you can do just as Mark describes, i.e. keep older-generation tape drives online for compatibility. With disk you must migrate, because disk drives will eventually die. Remember the “bathtub curves” we learned about in the old days: disk failure rates are high early in life, then moderate once the drive is “broken in,” then escalate toward end of life, creating a bathtub-shaped curve. I suspect tape has a similar dynamic but over a much wider range of time – I don’t know for sure.

    Nonetheless, tape is more reliable than disk from an error rate standpoint. I believe the bit error rate (BER) of tape is 10 to the minus 17 whereas disk is 10 to the minus 15. Two orders of magnitude (I would think) translates into a significantly longer useful life…which dramatically lowers migration costs for an archive.

  • stking

    It seems that erasure coding methods employed by many storage vendors (and part of Swift) make tape obsolete. Extremely large amounts of data, on the order of exabytes, can be safely stored on disc-based storage nodes with greater than 10 nines of reliability.

    It’s really not interesting to compare a disc drive to a tape drive. Disc arrays are more efficient and cost effective for storing PBs of data. Nobody needs to put up with tape’s shortcomings any longer. The reliability of erasure-coded disc-based systems eliminates the need for tape.

    How long will it take to restore 100 PB from tape? Provided you could find them all and had no defects. The resiliency of today’s disc-based systems is just too compelling.
