
Is flash the new disk?

I had the good fortune to catch (well, via webcast anyway) Dave Hitz, NetApp founder and EVP, speaking at the GigaOM Big Data conference. I was particularly intrigued by the last part of the discussion (beginning at 14min 45sec). When asked about the performance mismatch between CPU and disk, Dave made the point that this was old news indeed: if you go back to the 1960s, when mainframes and tape were king, the ratio of CPU performance to seek time on a magnetic tape drive was closer than the ratio between today’s CPUs and 15K RPM performance disks (I’m paraphrasing). He then proposed flash as the answer to closing this gap. Like Dave, we’re convinced.
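To make that ratio concrete, here is a quick back-of-envelope sketch in Python. Every figure below is my own rough assumption for illustration, not a number from Dave’s talk:

    # Back-of-envelope: CPU work forfeited per storage seek, then vs. now.
    # Every figure here is a rough illustrative assumption.
    mainframe_ips = 1e5   # ~0.1 MIPS for a 1960s-class mainframe (assumed)
    tape_seek_s = 60.0    # ~1 minute to position a tape (assumed)
    server_ips = 1e11     # aggregate instructions/sec across a modern
                          # multi-core server (assumed)
    disk_seek_s = 5e-3    # ~5 ms average access on a 15K RPM drive (assumed)

    tape_gap = mainframe_ips * tape_seek_s  # instructions lost per tape seek
    disk_gap = server_ips * disk_seek_s     # instructions lost per disk seek

    print(f"1960s: ~{tape_gap:.0e} instructions lost per tape seek")
    print(f"Today: ~{disk_gap:.0e} instructions lost per disk seek")
    print(f"The CPU-to-storage gap has widened roughly {disk_gap / tape_gap:.0f}x")

Under these assumptions, a modern server forfeits nearly two orders of magnitude more work waiting on a disk seek than a mainframe did waiting on tape.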

Data center storage has failed to keep pace with servers, which have gotten faster, denser, and cheaper following Moore’s Law for decades. Storage has certainly been getting denser and cheaper as we’ve learned how to pack more data onto each hard drive, but performance is headed in the opposite direction. Mechanical disk seek time and rotation speed are constrained by Newton’s laws, so performance in terms of IOPS per GB has actually been falling, creating an ever-greater imbalance between storage and the rest of the data center. As we have already pointed out in this blog, flash affords dramatically more random I/O per GB than disk, and, due to virtualization and cloud computing, an ever-greater share of data center I/O is random.
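To see why IOPS per GB keeps falling even as drives improve, compare a few representative drive profiles. The figures are illustrative assumptions, not measurements of specific products:

    # Why IOPS per GB keeps falling: capacities grow, seek rates don't.
    # Figures are illustrative assumptions; exact numbers vary by model.
    drives = {
        # name: (capacity_gb, random_iops)
        "15K HDD, circa 2001": (36, 180),
        "15K HDD, circa 2011": (600, 180),   # ~17x the capacity, same IOPS
        "2TB SATA HDD":        (2000, 80),   # capacity tier: slower spindle
        "Flash SSD":           (256, 30000), # random reads (assumed figure)
    }

    for name, (gb, iops) in drives.items():
        print(f"{name:>19}: {iops / gb:9.2f} IOPS per GB")

Each hard-drive generation packs more gigabytes behind roughly the same number of seeks per second, so IOPS per GB drops; flash breaks the pattern entirely.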

Dave Hitz goes on to say, “Everything that we saw in the last twenty years of transition from tape to disk, exactly the same evolution is going to occur, except just substitute disk is the new tape, and flash is the new disk.” (To give credit where credit is due, Jim Gray was—as usual—well ahead of the rest of us. Check out his 2006 talk entitled “Tape is dead, Disk is tape, Flash is disk, RAM locality is king.”)

What was not discussed, at least at the GigaOM conference, were the barriers to getting this transition right. Remember Hierarchical Storage Management (HSM) and Virtual Tape Libraries (VTL)? HSM is reminiscent of today’s intra-array tiering. While HSM sounded great in the slideware, in practice HSM implementations were complex to manage and suffered from orders-of-magnitude latency disparities between the tiers, particularly in the face of conflicting workloads. VTL was an effort to preserve industry investment in infrastructure for tape backups even as the media was replaced with hard drives. As such, VTL is arguably akin to today’s substitution of a flash SSD for a hard drive within a traditional storage array without re-architecting the storage controller hardware and software to take better advantage of that SSD.
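The latency-disparity problem is easy to quantify with the standard average-access-time formula. The latencies below are assumptions (roughly 100 microseconds for flash, 10 milliseconds for disk) chosen only to illustrate the shape of the curve:

    # Even a high flash hit rate leaves average latency dominated by disk.
    # Latencies are illustrative assumptions (~100 us flash, ~10 ms disk).
    flash_us = 100.0
    disk_us = 10_000.0

    for hit_rate in (0.80, 0.95, 0.99):
        avg = hit_rate * flash_us + (1 - hit_rate) * disk_us
        print(f"flash hit rate {hit_rate:.0%}: "
              f"avg ~{avg:,.0f} us ({avg / flash_us:.0f}x flash alone)")

Even at a 95% hit rate, the tiered average is roughly six times the latency of flash alone, and workloads that thrash the tiers fare far worse.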

The interesting lesson from the tape/disk transition wasn’t that disk replaced tape; it was that disk redefined the role of tape. Tape once handled all things backup and archiving: performing backups and recoveries, archiving data, and moving data offsite via trucks. When disk entered the backup scene, backup changed and the roles of the media changed with it. Disk is now used for recovery and local retention; replication plus disk is now used for moving data offsite; and tape is still used (albeit in a smaller role) for long-term retention and archiving.

My belief is that we’ll see something similar in the flash/disk transition. Flash won’t replace disk outright (how many years would the flash fabrication plants have to run at full tilt to produce enough storage?), but the days of disk being used for all forms of online storage (Tier 1 performance, Tier 2 capacity, Tier 3 retention) are numbered. In the future you’ll see flash redefine the role of disk: flash will deliver performance in online storage, and disk will deliver capacity. And just as the transition to disk forced changes in backup infrastructure and processes, flash will force major changes within primary storage architectures.

So yes, we have seen this sort of transition before, and we should expect intense debate and experimentation to determine which storage architectures deliver the most bang for the buck for which application workloads. Will an incremental approach that preserves industry investment in mechanical disk work best? Or will it take an architectural rethink to deliver on the promise of flash within data center storage?

About the Author

Scott Dietzen is the CEO of Pure Storage and a three-time successful entrepreneur with WebLogic, Zimbra, and Transarc.
