The Risk of Over Promising and Under Delivering with Hybrid Storage Arrays

As Pure embarks for Europe this week (VMworld Barcelona and Structure Amsterdam, hope to see you there!), an analogy for the inherent risk in hybrid storage occurred to me.

First, imagine that you’re expecting to travel internationally by ship, but then find that you’ve been upgraded to a flight, and will get there in a fraction of the time. You’re likely ecstatic (modulo the crowding in coach), contemplating extra work and fun or an earlier return home thanks to the savings in transit time. Hybrid storage (which intermixes flash memory and mechanical disk within an appliance) is similar: when your applications are designed for disk latencies and instead get a 10X+ acceleration from flash, your users are thrilled. So incorporating a flash cache to help a disk-centric array go faster is a classic case of under-promising and over-delivering.

But now imagine showing up to board your international flight, only to be put on a ship instead. As soon as you start expecting the higher performance of air travel, your view of the situation reverses. Relative to airplanes, ships actually have excellent throughput (items transferred per unit time; think IOPS and bandwidth). They just suck on latency. And latency matters to your business: just as you cannot plan a business trip without knowing whether you are going by air or by sea, your applications cannot be designed to take advantage of solid-state performance unless they know they are going to get it. When your users are expecting flash latencies but instead find themselves waiting on disk, you’ve over-promised and under-delivered.
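As an aside, the throughput/latency split is worth making concrete with the classic back-of-the-envelope exercise of shipping disks by sea. A minimal sketch in Python, with round numbers I’ve invented purely for illustration:

```python
# Back-of-the-envelope: throughput vs. latency for a ship full of disks.
# All figures are round numbers invented for illustration.
DISKS = 1_000_000          # disks in the cargo hold
TB_PER_DISK = 2            # terabytes per disk
CROSSING_DAYS = 14         # length of the transatlantic voyage

payload_bits = DISKS * TB_PER_DISK * 1e12 * 8
seconds = CROSSING_DAYS * 86400
throughput_gbps = payload_bits / seconds / 1e9

print(f"throughput: {throughput_gbps:,.0f} Gb/s")   # ~13,000 Gb/s -- enormous
print(f"latency:    {CROSSING_DAYS} days")          # ...and dreadful
```

The ship wins on bandwidth by a mile; it’s the two-week wait for the first byte that ruins it for interactive work.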

I’m afraid I can’t take much credit for the above analogy. It was inspired by Peter Burns of Google, whose blog offers the following characterization that helps put compute performance into perspective (text borrowed with permission; original post here):

Let’s talk time scales real quick. Your computer’s CPU lives by the nanosecond: most CPUs can get a few things done in each nanosecond, mostly simple math and comparisons. To make this easier to grasp, suppose you’re the CPU, and instead of nanoseconds you live and work second by second. For clarity I’ll keep this metaphor to a single core of a single processor.

You can hold a few things in your head (registers). Not more than a dozen or two in your active memory, but you can recall any of them pretty much instantly. Information that’s important to you, you’ll often keep close by, either on sheets of loose-leaf paper on your working desk (L1 cache) a couple of seconds away, or in one of a handful of books in your place (L2 and up cache), which is so well organized that no individual piece of information is more than a dozen or so seconds away.

If you can’t find what you’re looking for there, you’ll have to make a quick stop at the library down the street (RAM, i.e. main memory). Fortunately, it’s close enough that you can go down, grab a book, and get back to work in only about eight and a half minutes, and it’s enormous: thousands of times the size of a typical strip-mall bookstore. A little inconvenient, until you remember that this library has a free delivery service, so it’s really no bother at all, so long as you can still find things to work on while you wait.

But the local library mostly just stocks things on demand (which is fair, your bookcases, worksheets, and even the dozen or two facts you hold in your head are mostly the same way). The problem is that when you need something that’s not there, it can take a while to get it. How long? Think Amazon.com in the age of exploration. They send out an old wooden boat and it could be a week, could be a month, and it’s not unusual to wait 3 years before you hear a response.

Welcome to the world of hard disk storage, where your information is retrieved by making plates of metal spin really fast. Many metric tons of sweat have been spent making this as fast as possible, but it’s hard to keep up with electrons flowing through wires.

So when someone says that Solid State Disks are awesome, it’s because they’re able to turn that slow, unpredictable old sailing ship into a streamlined steam-powered vessel. A good SSD can often make the voyage in less than a week, sometimes in little more than a day. It can also make many thousands more quests for information per year.
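Burns’ time scales line up with the oft-cited ballpark latency figures for the memory and storage hierarchy. A minimal sketch (my own illustration, using round textbook estimates rather than anything from his post) that rescales them onto his one-nanosecond-equals-one-second clock:

```python
# Rescale ballpark access latencies onto the analogy's clock:
# 1 nanosecond of machine time == 1 second of human time.
# Figures are round textbook estimates, not measurements.
LATENCIES_NS = {
    "register / L1 cache": 1,           # ~1 ns
    "L2 cache":            10,          # ~10 ns
    "main memory (RAM)":   100,         # ~100 ns
    "SSD random read":     100_000,     # ~100 microseconds
    "disk seek":           10_000_000,  # ~10 milliseconds
}

def human_scale(seconds: float) -> str:
    """Render a duration on the 1 ns == 1 s clock in readable units."""
    for unit, size in (("years", 365 * 86400), ("days", 86400),
                       ("hours", 3600), ("minutes", 60)):
        if seconds >= size:
            return f"~{seconds / size:.1f} {unit}"
    return f"~{seconds:.0f} seconds"

for name, ns in LATENCIES_NS.items():
    print(f"{name:20s} {human_scale(ns)}")
```

On this clock a disk seek is roughly a four-month sea voyage, while the SSD arrives in a bit over a day, just as in the metaphor.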

I love Burns’ analogy. And it makes the problem with hybrid arrays obvious: it’s simply very hard to design your business applications not to care whether each underlying operation takes under a millisecond on flash or tens of milliseconds on disk. When your customer or employee is waiting for the result in real time, being able to offer flash performance 50%, 75%, or even 95% of the time doesn’t let you raise the bar, since the application and user have to expect the ship even if they may get the airplane. To raise the bar without over-promising and under-delivering, you need to provision enough flash memory to service virtually all I/Os from it, even as your workloads evolve over time. Hence the appeal of flash-centric, rather than disk-centric, storage. Disparate latencies were one of the things that doomed Hierarchical Storage Management (HSM) and Virtual Tape Libraries (VTLs), once a $1B+ business. Sound familiar?
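To put rough numbers behind that: even a very high flash hit rate leaves the average, and especially the worst case, at disk’s mercy. A minimal sketch, assuming an illustrative 0.5 ms flash read and 10 ms disk read (my figures, not any particular array’s):

```python
# Expected read latency for a hybrid array at various flash hit rates.
# Service times are illustrative assumptions: 0.5 ms flash, 10 ms disk.
FLASH_MS, DISK_MS = 0.5, 10.0

for hit_rate in (0.50, 0.75, 0.95):
    avg_ms = hit_rate * FLASH_MS + (1 - hit_rate) * DISK_MS
    print(f"hit rate {hit_rate:.0%}: average {avg_ms:.2f} ms, "
          f"but a miss still costs {DISK_MS:.0f} ms "
          f"({DISK_MS / FLASH_MS:.0f}x flash latency)")
```

Even at a 95% hit rate the average is roughly double flash latency, and one request in twenty still waits the full 10 ms; the application has to be engineered for the ship even though it usually gets the plane.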

From the perspective of a CPU doing the random I/O demanded by virtualization and many database operations, disk today is slower than tape was 15 years ago. This is the reason that leading consumer websites like Google and Facebook have been systematically eliminating mechanical disk from the latency path of their performance-intensive applications. With vendors like Pure delivering flash in a form factor that’s plug-compatible with traditional disk arrays, and generally more cost-effective to boot, flash really is poised to be the new disk. Why should the large consumer sites have all of the fun with flash, when the benefits are just as material to your business?


About the Author

Scott Dietzen is the CEO of Pure Storage and a three-time successful entrepreneur with WebLogic, Zimbra, and Transarc.
