The Industry's Best Data Reduction, Hands Down

FlashReduce delivers the industry’s most granular and complete data reduction. Five technologies work together to reduce data for virtually any application, bringing the effective cost of flash below that of disk. And FlashReduce delivers data reduction savings that other all-flash competitors can’t touch.

Unbeatable Numbers

Ticker data reduction averages are calculated across the entire Pure Storage® FlashArray installed base, updated live 24/7.

: 1 Average Data Reduction Rate (Deduplication + Compression Only)
: 1 Average Total Reduction (Deduplication + Compression + Thin Provisioning)

We Use Five Different Data Reduction Technologies

Pattern Removal

Pattern removal identifies and removes repetitive binary patterns, including zeroes. In addition to capacity savings, pattern removal reduces the volume of data to be processed by the dedupe scanner and compression engine.
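The idea can be illustrated with a minimal sketch (the function names and thresholds here are hypothetical, not Purity internals): if a block is just a short byte pattern repeated, store a tiny descriptor instead of the full block.

```python
# Hypothetical sketch of pattern removal: a block that is a short repeating
# byte pattern (such as all zeroes) is replaced by the pattern plus a count.

def find_repeating_pattern(block: bytes, max_pattern_len: int = 8):
    """Return the shortest repeating pattern if the block is one, else None."""
    for plen in range(1, max_pattern_len + 1):
        if len(block) % plen == 0:
            pattern = block[:plen]
            if pattern * (len(block) // plen) == block:
                return pattern
    return None

def reduce_block(block: bytes):
    pattern = find_repeating_pattern(block)
    if pattern is not None:
        # Store a tiny descriptor instead of the whole block.
        return ("pattern", pattern, len(block) // len(pattern))
    return ("raw", block)

print(reduce_block(b"\x00" * 4096))  # ('pattern', b'\x00', 4096)
```

A 4 KB zero block collapses to a few bytes of metadata, and the dedupe scanner and compression engine never have to see it.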

512B Aligned, Variable Dedupe

High-performance, inline deduplication operates on 512-byte-aligned, variable-size blocks ranging from 4 KB to 32 KB. Only unique blocks of data are saved on flash – removing even the duplicates that fixed-block architectures miss. Best of all, these savings are delivered without requiring any tuning.
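A hedged sketch of the concept (this is not Purity's actual algorithm; the boundary test here is a simple stand-in for a real rolling-hash chunker): chunk boundaries land on 512-byte-aligned offsets between 4 KB and 32 KB, and only chunks with previously unseen hashes are stored.

```python
# Illustrative 512B-aligned, variable-size dedupe. Boundary selection uses
# a toy content-derived condition; real systems use a rolling hash.
import hashlib

ALIGN, MIN_CHUNK, MAX_CHUNK = 512, 4 * 1024, 32 * 1024

def chunk(data: bytes):
    """Yield variable-size chunks whose boundaries are 512B-aligned."""
    pos = 0
    while pos < len(data):
        end = min(pos + MIN_CHUNK, len(data))
        # Extend in 512B steps until a content-derived condition fires
        # (stand-in for a real rolling-hash boundary test).
        while end < min(pos + MAX_CHUNK, len(data)):
            window = data[end - ALIGN:end]
            if hashlib.sha256(window).digest()[0] < 16:  # ~1/16 of windows
                break
            end += ALIGN
        yield data[pos:end]
        pos = end

def dedupe(data: bytes):
    store, refs = {}, []          # physical store and per-chunk references
    for c in chunk(data):
        digest = hashlib.sha256(c).hexdigest()
        store.setdefault(digest, c)   # keep only the first copy
        refs.append(digest)
    return store, refs
```

Feeding in highly duplicated data leaves a single physical chunk in the store, with every logical reference pointing at it.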

Inline Compression

Inline compression reduces data to use less capacity than the original format. An append-only write layout and variable addressing optimize compression savings by removing the wasted space that fixed-block architectures introduce. Combined with Deep Reduction, compression delivers 2–4x data reduction, and it is the primary form of data reduction for databases.
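The write-path behavior can be sketched with a standard codec (Purity's actual codecs are not public; zlib here is purely illustrative): compress each block quickly, and keep whichever representation is smaller so incompressible data costs nothing extra.

```python
# Illustrative inline compression using zlib at a fast level; the smaller
# of the compressed and raw forms is what gets written.
import zlib

def compress_block(block: bytes, level: int = 1):
    """Fast, low-latency compression on the write path."""
    compressed = zlib.compress(block, level)
    if len(compressed) < len(block):
        return ("zlib", compressed)
    return ("raw", block)   # incompressible data is stored as-is

kind, payload = compress_block(b"row,row,row," * 1000)
```

Repetitive database pages like the example above shrink dramatically, which is why compression is the dominant reducer for database workloads.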

Deep Reduction

FlashReduce doesn’t stop at inline compression – additional, heavier-weight compression algorithms are applied post-process, increasing the savings on data that was already compressed inline. Most other all-flash products apply only a single compression algorithm, and simply miss these savings.
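A two-stage design along these lines can be sketched as follows (an assumed model, not Purity internals; zlib and lzma stand in for the real fast and heavy codecs): data compressed quickly on the write path is later recompressed in the background with a heavier algorithm, and the smaller result wins.

```python
# Sketch of post-process "deep reduction": fast inline compression first,
# heavier background recompression later, keeping whichever is smaller.
import zlib, lzma

def inline_compress(block: bytes) -> bytes:
    return zlib.compress(block, 1)            # fast, on the write path

def deep_reduce(inline_blob: bytes):
    original = zlib.decompress(inline_blob)
    deep = lzma.compress(original, preset=9)  # heavier, in the background
    if len(deep) < len(inline_blob):
        return ("lzma", deep)
    return ("zlib", inline_blob)              # keep the inline result

data = b"the quick brown fox jumps over the lazy dog " * 500
blob = inline_compress(data)
kind, out = deep_reduce(blob)
```

The write path never pays for the heavy algorithm's CPU cost, yet the stored footprint keeps shrinking after the fact.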

Copy Reduction

Copying data on a FlashArray only involves metadata! Leveraging the data reduction engine, Purity provides instant pre-deduplicated copies of data for snapshots, clones, replication, and xCopy commands.
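A toy model shows why such copies are instant (the class and field names are illustrative, not Purity's design): a volume is just a map from offsets to block IDs, so cloning copies the map and bumps reference counts while the physical blocks stay put.

```python
# Toy model of metadata-only copies: cloning a volume duplicates only the
# offset -> block-ID map and increments refcounts; no block data moves.
class BlockStore:
    def __init__(self):
        self.blocks = {}      # block_id -> bytes on flash
        self.refcount = {}    # block_id -> number of references

    def write(self, volume: dict, offset: int, block_id: str, data: bytes):
        self.blocks[block_id] = data
        self.refcount[block_id] = self.refcount.get(block_id, 0) + 1
        volume[offset] = block_id

    def clone(self, volume: dict) -> dict:
        # Copy metadata only; physical blocks are shared.
        for block_id in volume.values():
            self.refcount[block_id] += 1
        return dict(volume)

store = BlockStore()
vol = {}
store.write(vol, 0, "b1", b"data" * 128)
snap = store.clone(vol)           # instant: no block data is copied
```

Snapshots, clones, replication, and xCopy all reduce to this kind of metadata operation, which is why they consume no extra capacity up front.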

Why We Lead the Industry in Data Reduction

Five Reduction Technologies – We’ve got the data reduction necessary for virtually any application: pattern removal, deduplication, compression, deep reduction, and copy reduction.

Always-On – Purity Operating Environment is designed to support high-performance, always-on data reduction. All of our performance benchmarks are taken with data reduction on. 

Global – Unlike some data reduction solutions which operate within a volume or a pool, thereby partitioning the data and dramatically reducing dedupe savings, FlashReduce dedupe is inline and global across the array. 

Variable Addressing – Purity employs variable addressing, which finds duplicates that fixed-block implementations miss. FlashReduce scans for duplicates at 512-byte granularity and auto-aligns with application data layouts without any tuning at any layer. In addition, variable (byte-granular) compression avoids diluting your savings with the wasted space that fixed-bucket compression implementations introduce.

Multiple Compression Algorithms – Different kinds of data compress differently. Purity employs multiple compression algorithms for optimal data reduction.

Designed for Mixed Workloads – FlashReduce delivers optimal data reduction savings for mixed workloads without requiring any tradeoffs or tuning.

How Much Data Reduction Can I Expect?

Data reduction works on a wide variety of applications and data types, but the only way to know how much reduction your data will see is to try it. The averages below are what we find typical for our most common use cases.

Virtual Server Environments

VMware or Hyper-V, consolidated virtual server environments with mixed applications.

Database Environments

OLTP or OLAP – even databases see surprising amounts of data reduction, most of it via compression.

Virtual Desktop (VDI) Environments

Virtual desktops (both persistent and non-persistent) are one of the most reducible workloads in the datacenter.

FlashArray//m is 100% Thin Provisioned

This means capacity for all volumes and all workloads is allocated dynamically on demand, so the array spends its capacity storing data – not storing zeroes. While many vendors use thin provisioning as a way to boost data reduction savings, thin provisioning is an allocation technology, not a data reduction technology. This is why our FlashReduce Ticker separates the average data reduction savings from deduplication and compression only from the average total reduction with thin provisioning included. Oh, and granularity? It’s at the 512-byte level, just like all Purity services, meaning that Purity thin provisioning delivers even more efficiency than the competition.
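The allocate-on-write idea can be sketched in a few lines (a hypothetical model, not Purity's implementation): a volume advertises a large logical size, but physical 512-byte extents exist only once they are written.

```python
# Minimal thin-provisioning sketch: logical capacity is promised up front,
# physical 512B extents are allocated only when data is actually written.
EXTENT = 512

class ThinVolume:
    def __init__(self, logical_size: int):
        self.logical_size = logical_size
        self.extents = {}     # extent index -> bytes actually stored

    def write(self, offset: int, data: bytes):
        assert offset % EXTENT == 0 and len(data) == EXTENT
        self.extents[offset // EXTENT] = data

    def physical_used(self) -> int:
        return len(self.extents) * EXTENT

vol = ThinVolume(logical_size=1 << 40)   # 1 TB advertised
vol.write(0, b"\xab" * EXTENT)
print(vol.physical_used())               # 512 bytes used so far
```

One terabyte is promised, but only the single written extent consumes flash, which is the distinction between allocation efficiency and true data reduction.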

"We are seeing a 30% reduction on our Oracle database today, and 90% reduction on our Mongo database."
Justin Stottlemyer, Fellow
"[With] dedupe, compression, I can still provide the read scalability without actually taking up any space, because of the dedupe features."
Neil Pinto, Senior Director, Data Operations
"With Flash, disk space is completely utilized space."
Jack Hogan, CTO

Featured Technology Partners