FlashArray Data Reduction
(Deduplication + Compression Only)
5.42 = 2.72 (compression) x 2.00 (deduplication)
Useful for comparing vs. other storage arrays with thin provisioning enabled.
Real-time Data Reduction Results
(Including Thin Provisioning)
Useful for comparing vs. other disk arrays without thin provisioning, including most other flash products.
Pattern removal identifies and removes repetitive binary patterns, including zeroes. In addition to capacity savings, pattern removal reduces the volume of data to be processed by the dedupe scanner and compression engine.
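A minimal sketch of the idea behind pattern removal, under the assumption (ours, not Purity's documented internals) that a block made of one short byte sequence repeated, zeroes being the common case, can be stored as the pattern alone instead of the full block:

```python
BLOCK_SIZE = 512

def find_repeating_pattern(block: bytes, max_pattern_len: int = 8):
    """Return the shortest short pattern whose repetition forms the block, or None.

    Illustrative only: a real engine would do this inline, in hardware-friendly
    form, before data reaches the dedupe scanner and compression engine.
    """
    for plen in range(1, max_pattern_len + 1):
        if BLOCK_SIZE % plen == 0:
            pattern = block[:plen]
            if pattern * (BLOCK_SIZE // plen) == block:
                return pattern
    return None

# An all-zero block collapses to a single zero byte.
assert find_repeating_pattern(bytes(BLOCK_SIZE)) == b"\x00"
# A repeating two-byte pattern is detected too.
assert find_repeating_pattern(b"\xab\xcd" * 256) == b"\xab\xcd"
# Non-repetitive data passes through to dedupe and compression.
assert find_repeating_pattern(bytes(range(256)) * 2) is None
```

Blocks that match a pattern never need to be hashed or compressed, which is where the downstream processing savings come from.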
High-performance, inline deduplication with 512-byte granularity ensures only unique blocks of data are stored on flash, even for datasets that cannot be reduced by traditional fixed block dedupe implementations.
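The core mechanism of inline, content-addressed deduplication can be sketched in a few lines. This is a toy model with invented names (`DedupeStore`, `write`, `read`), not Purity's implementation: each 512-byte chunk is hashed, stored physically only if its hash is new, and volumes hold references rather than data.

```python
import hashlib

CHUNK = 512

class DedupeStore:
    """Toy inline dedupe: each unique 512-byte chunk is stored once,
    keyed by its content hash; the volume holds only references."""

    def __init__(self):
        self.chunks = {}   # content hash -> chunk bytes (physical store)
        self.volume = []   # logical layout: ordered list of chunk hashes

    def write(self, data: bytes):
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK].ljust(CHUNK, b"\x00")
            h = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(h, chunk)   # physically stored only if new
            self.volume.append(h)

    def read(self) -> bytes:
        return b"".join(self.chunks[h] for h in self.volume)

store = DedupeStore()
store.write(b"A" * 512 + b"B" * 512 + b"A" * 512)  # the "A" chunk repeats
assert len(store.chunks) == 2                      # only 2 unique chunks stored
assert store.read() == b"A" * 512 + b"B" * 512 + b"A" * 512
```

Three logical chunks land as two physical chunks; reads reassemble the original data from references.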
Inline Compression encodes data with the lightweight Lempel-Ziv-Oberhumer (LZO) lossless algorithm, so it consumes less capacity than the original format. Coupled with Deep Reduction, compression reliably delivers 2-4X data reduction.
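Python's standard library has no LZO bindings, so zlib stands in below purely to illustrate the principle: a lossless codec shrinks compressible data on write, and the original bytes come back exactly on read.

```python
import zlib

# Highly repetitive input, as many real datasets are.
original = b"customer_record;" * 256

compressed = zlib.compress(original)        # zlib as a stand-in for LZO
ratio = len(original) / len(compressed)

assert zlib.decompress(compressed) == original   # lossless round trip
assert ratio > 2                                 # repetitive data reduces well
```

LZO trades some compression ratio for much lower CPU cost than heavier codecs, which is what makes it practical to run inline on every write.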
A patent-pending variant of Huffman encoding is employed as part of our Continuous Optimization process to further reduce storage consumption. This deeper compression can increase savings even on data that was already compressed inline.
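For reference, classic Huffman coding assigns shorter bit codes to more frequent symbols. The sketch below is the textbook algorithm, not Pure's patent-pending variant:

```python
import heapq
from collections import Counter

def huffman_code(data: bytes) -> dict:
    """Build a Huffman code (symbol -> bitstring) from byte frequencies."""
    freq = Counter(data)
    # Heap entries: (frequency, unique tiebreaker, {symbol: partial code}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if not heap:
        return {}
    if len(heap) == 1:                     # degenerate single-symbol input
        return {sym: "0" for sym in heap[0][2]}
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)    # merge the two rarest subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_code(b"aaaabbc")
# The most frequent symbol gets the shortest code.
assert len(codes[ord("a")]) <= len(codes[ord("b")]) <= len(codes[ord("c")])
```

Because this runs as a background optimization rather than inline, the array can afford the extra tree-building work in exchange for deeper savings.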
Leveraging the data reduction engine, Purity provides instant, pre-deduplicated copies of data for snapshots, clones, replication, and xCopy commands. Copying data on a FlashArray only involves metadata!
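A toy illustration of why such copies are metadata-only (our simplification, not Purity internals): when a volume is a list of references into a deduplicated chunk store, a snapshot or clone just duplicates the reference list, and no data moves at all.

```python
# Shared physical chunks, keyed by an arbitrary chunk id.
physical_store = {1: b"chunk-one", 2: b"chunk-two"}
volume = [1, 2, 1]                 # logical layout: ordered chunk references

snapshot = list(volume)            # the "copy" duplicates only metadata
assert snapshot == volume
assert len(physical_store) == 2    # no new data was written

volume.append(2)                   # the volume keeps changing afterward
assert snapshot == [1, 2, 1]       # the snapshot is unaffected
```

The same reference-copy trick serves snapshots, clones, replication bookkeeping, and xCopy offload alike.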
VMware or Hyper-V, consolidated virtual server environments with mixed applications.
OLTP or OLAP, even databases get surprising amounts of data reduction.
Virtual desktops (both persistent and non-persistent) are among the most reducible workloads in the datacenter.
Deduplication and compression in a flash array have to be fast. The FlashArray's core architecture was designed to support data reduction. All of our performance benchmarks are taken with data reduction turned on.
Other data reduction solutions dedupe as a post-process, and operate only inside a single SSD, LUN, card, or volume. Partitioning your data set dramatically reduces the savings from deduplication. Pure Storage dedupe is inline and global across the entire array.
With flash storage, reading is easy; writing is the hard part. Inline deduplication and compression allow the FlashArray to avoid 70-90% of the writes it would otherwise do to flash, dramatically increasing the array's write bandwidth and extending the life of the underlying flash.
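The back-of-envelope arithmetic behind that claim: if inline reduction avoids 70-90% of writes, the physical write load drops to 10-30% of the logical load, multiplying effective write headroom and flash endurance by roughly 3.3-10x.

```python
# Worked example for a hypothetical 100 GB of logical writes.
logical_writes_gb = 100.0

for avoided in (0.70, 0.90):
    physical = logical_writes_gb * (1 - avoided)       # what actually hits flash
    endurance_multiplier = logical_writes_gb / physical
    print(f"{avoided:.0%} avoided -> {physical:.0f} GB written, "
          f"{endurance_multiplier:.1f}x flash endurance")
# 70% avoided -> 30 GB written, 3.3x flash endurance
# 90% avoided -> 10 GB written, 10.0x flash endurance
```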
In the world of deduplication, size matters. The smaller the chunk size you use to look for duplicates, the more effective you will be at reducing data. But chunk size is a trade-off: smaller chunks require more processing and metadata. Pure Storage detects duplicates down to a 512-byte chunk size, which has two advantages: substantially higher deduplication (typically 3-5X better than coarse-grained alternatives), and better alignment with application data layouts.
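The granularity effect is easy to demonstrate. The sketch below builds a contrived dataset (our construction, purely for illustration): sixteen distinct 4 KiB blocks that each embed the same 512-byte record alongside unique filler. Coarse 4 KiB chunks see sixteen different blocks; fine 512-byte chunks spot the shared record.

```python
import hashlib
import random

def dedupe_ratio(data: bytes, chunk: int) -> float:
    """Logical bytes divided by the bytes needed to store each unique chunk once."""
    unique = {hashlib.sha256(data[i:i + chunk]).digest()
              for i in range(0, len(data), chunk)}
    return len(data) / (len(unique) * chunk)

record = b"\xab" * 512                     # the duplicated 512-byte record
rng = random.Random(0)                     # seeded for repeatability
blocks = [record + bytes(rng.randrange(256) for _ in range(4096 - 512))
          for _ in range(16)]
data = b"".join(blocks)                    # 64 KiB total

assert dedupe_ratio(data, 4096) == 1.0     # coarse chunks: no duplicates found
assert dedupe_ratio(data, 512) > 1.0       # fine chunks: the record dedupes
```

Real workloads are messier, but the direction is the same: duplicates that straddle or hide inside coarse chunks only become visible at fine granularity.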
At Pure Storage we realized early on that we couldn't deliver our breakthrough data deduplication algorithms if there was any mechanical disk in our array. Hear Pure Storage CTO and co-founder, John Colgrove, talk about how the FlashArray's architecture enables high-performance data reduction.
Pure Storage FlashArray is 100% thin provisioned. This means capacity for all volumes and all workloads is allocated dynamically, on demand, so physical flash is used to store data (and not to store zeroes). While some vendors use thin provisioning as a way to boost their data reduction savings, thin provisioning is not a data reduction technology. This is why the Dedupe Ticker on our website breaks out the average savings from deduplication and compression only, separately from the average total reduction with thin provisioning included. Oh, and granularity? It's at the 512-byte level, just like all Purity services, meaning Purity thin provisioning delivers even more efficiency than the other guys.
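A toy model of thin provisioning (illustrative names, not Purity's API): a volume advertises a large logical size, but physical capacity is consumed only when a block is actually written.

```python
BLOCK = 512

class ThinVolume:
    """Toy thin-provisioned volume: blocks are allocated on first write."""

    def __init__(self, logical_size: int):
        self.logical_size = logical_size   # what the host sees
        self.blocks = {}                   # block index -> data, allocated on write

    def write(self, offset: int, data: bytes):
        self.blocks[offset // BLOCK] = data[:BLOCK]

    def physical_bytes(self) -> int:
        return len(self.blocks) * BLOCK

vol = ThinVolume(logical_size=1 << 40)     # advertise 1 TiB
vol.write(0, b"x" * BLOCK)
vol.write(4096, b"y" * BLOCK)
assert vol.physical_bytes() == 2 * BLOCK   # only written blocks consume capacity
```

Unwritten (zero) regions never touch flash at all, which is why thin provisioning belongs in the efficiency story even though it is not, strictly speaking, data reduction.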