FlashArray
The FlashArray employs high-performance inline data reduction, including deduplication, compression, and thin provisioning, to dramatically reduce the data footprint. This allows the logical size of the FlashArray to be 5-10X larger than its raw physical size, driving down the usable $/GB. But don't take our word for it: check out the dedupe ticker below with real-time customer data reduction results.
[Dedupe ticker: real-time customer data reduction results, as of May 2013. Deduplication and compression only: 6.27x = 2.86x (compression) x 2.19x (deduplication), useful for comparing vs. other storage arrays with thin provisioning enabled. A second figure, including thin provisioning, is useful for comparing vs. other disk arrays without thin provisioning, including most other flash products.]
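To make the arithmetic concrete, here is a minimal Python sketch of how the factors compose into an effective cost per usable gigabyte. The raw capacity and price figures are invented for illustration and are not Pure Storage numbers, and the ticker's 6.27x reflects rounding of more precise inputs.

```python
# Hypothetical figures for illustration only, not Pure Storage pricing.
raw_tb = 11.0          # assumed raw flash capacity, in TB
raw_price = 110_000.0  # assumed array price, in dollars

compression = 2.86     # compression factor from the ticker
dedupe = 2.19          # deduplication factor from the ticker

reduction = compression * dedupe        # factors multiply: ~6.26x
logical_tb = raw_tb * reduction         # addressable logical capacity
cost_per_gb = raw_price / (logical_tb * 1024)

print(f"combined reduction: {reduction:.2f}x")
print(f"logical capacity:   {logical_tb:.1f} TB from {raw_tb:.1f} TB raw")
print(f"effective $/GB:     ${cost_per_gb:.2f}")
```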
[Graphic: chunk sizes from 512 bytes to 32 KB]
- Virtual servers: VMware or Hyper-V consolidated virtual server environments with mixed applications.
- Databases: OLTP or OLAP, even databases get surprising amounts of data reduction.
- Virtual desktops: both persistent and non-persistent desktops are one of the most reducible workloads in the datacenter.
Deduplication and compression in a flash array have to be fast. The FlashArray's core architecture was designed to support data reduction, and all of our performance benchmarks are taken with data reduction turned on.
Other data reduction solutions dedupe as a post-process and operate only within a single SSD, LUN, card, or volume. Partitioning your data set dramatically reduces the savings from deduplication. Pure Storage dedupe is inline and global across the entire array.
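To illustrate why scope matters, here is a toy Python comparison of per-volume versus global fingerprint matching. The volumes and chunk contents are invented (think cloned VMs sharing most of their blocks), and a SHA-256 hash stands in for whatever fingerprint the array actually uses.

```python
import hashlib

def fingerprint(chunk: bytes) -> str:
    # Content hash standing in for the array's dedupe fingerprint.
    return hashlib.sha256(chunk).hexdigest()

# Invented layout: three volumes, each holding two copies of the same
# 8 shared chunks (e.g. cloned OS images) plus one volume-unique chunk.
shared = [bytes([i]) * 4096 for i in range(8)]
volumes = {f"vol{v}": shared + shared + [bytes([100 + v]) * 4096]
           for v in range(3)}

# Partitioned dedupe: duplicates are only detected inside each volume.
per_volume = sum(len({fingerprint(c) for c in chunks})
                 for chunks in volumes.values())

# Global dedupe: one fingerprint table spans the entire array.
global_table = {fingerprint(c)
                for chunks in volumes.values() for c in chunks}

total = sum(len(chunks) for chunks in volumes.values())
print(f"chunks written, no dedupe:     {total}")              # 51
print(f"stored with per-volume dedupe: {per_volume}")          # 27
print(f"stored with global dedupe:     {len(global_table)}")   # 11
```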
With flash storage, reading is easy; writing is the hard part. Inline deduplication and compression allow the FlashArray to avoid 70-90% of the writes it would otherwise make to flash, dramatically increasing the array's write bandwidth and extending the life of the underlying flash.
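The write-avoidance claim can be sketched as a tiny inline write path: hash each incoming chunk, and only compress and write it when the fingerprint is new. Everything here (the in-memory dict standing in for the metadata table, zlib standing in for the compressor, the 80%-duplicate workload) is a stand-in chosen for illustration, not a description of Pure Storage's implementation.

```python
import hashlib
import zlib

class InlineWritePath:
    """Toy inline dedupe + compression front end (illustration only)."""

    def __init__(self):
        self.fingerprints = {}   # hash -> reduced chunk (metadata stand-in)
        self.logical_writes = 0
        self.physical_writes = 0

    def write(self, chunk: bytes) -> None:
        self.logical_writes += 1
        key = hashlib.sha256(chunk).hexdigest()
        if key in self.fingerprints:
            return                            # duplicate: no flash write
        compressed = zlib.compress(chunk)     # compress before hitting flash
        self.fingerprints[key] = compressed   # "write" the reduced chunk
        self.physical_writes += 1

    def writes_avoided(self) -> float:
        return 1 - self.physical_writes / self.logical_writes

# Invented workload: 1,000 chunk writes, 80% of them duplicates.
path = InlineWritePath()
for i in range(1000):
    path.write(bytes([i % 200]) * 4096)
print(f"writes avoided: {path.writes_avoided():.0%}")   # 80%
```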
In the world of deduplication, size matters. The smaller the chunk size you use to look for duplicates, the more effective you will be at reducing data. But chunk size is a trade-off: smaller chunks require more processing and more metadata. Pure Storage detects duplicates down to a 512-byte chunk size, which has two advantages: substantially higher deduplication (typically 3-5X better than coarse-grained alternatives) and better alignment with application data layouts.
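A quick Python experiment hints at the effect of granularity: a single-byte edit in an otherwise identical 64 KB blob invalidates one whole chunk, so smaller chunks isolate the change and preserve more duplicates. Fixed-size chunking and random sample data are simplifications used here for illustration; the array's actual chunking is not described above.

```python
import hashlib
import os

def unique_chunks(blobs, chunk_size):
    """Count distinct fixed-size chunks across a set of blobs."""
    seen = set()
    for blob in blobs:
        for off in range(0, len(blob), chunk_size):
            seen.add(hashlib.sha256(blob[off:off + chunk_size]).digest())
    return len(seen)

# Invented data: two 64 KB images, identical except for one flipped byte.
base = os.urandom(64 * 1024)
edited = bytearray(base)
edited[10_000] ^= 0xFF                 # single-byte change
blobs = [base, bytes(edited)]

for size in (512, 4096, 32_768):
    stored = unique_chunks(blobs, size)
    total = sum(len(b) // size for b in blobs)
    # Smaller chunks preserve more duplicates but mean more fingerprints
    # (metadata) to track -- the trade-off described above.
    print(f"{size:>6}-byte chunks: store {stored} of {total} "
          f"({total / stored:.2f}x reduction)")
```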
At Pure Storage we realized early on that we couldn't deliver our breakthrough data deduplication algorithms with any mechanical disk in the array. Hear Pure Storage CTO and co-founder John Colgrove talk about how the FlashArray's architecture enables high-performance data reduction.