What Is NVMe Storage?

Storage protocols built for spinning hard drives have been limiting flash performance for over a decade. While SSDs deliver hardware capable of microsecond response times, legacy interfaces like SATA and SAS add hundreds of microseconds of unnecessary overhead through single-queue architectures and protocol translation layers.

NVMe (non-volatile memory express) is a storage access and transport protocol built specifically for solid-state drives (SSDs) that connects directly through the PCIe bus, bypassing the bottlenecks of disk-era protocols. Instead of funneling commands through a single queue like SATA, NVMe supports up to 64,000 queues, each with 64,000 commands—a fundamental shift in how storage communicates with modern multi-core processors.

This guide covers NVMe’s architecture, real-world performance advantages, NVMe over Fabrics transport options, and why end-to-end implementation matters for enterprise workloads.

The history and evolution of NVMe

The story of NVMe starts with a mismatch. For two decades, SATA and SAS protocols assumed storage devices needed time to physically seek data, building in command overhead that made sense when disk platters had to rotate into position. These protocols funnel all commands through a single queue—adequate for mechanical seeks, but a serious constraint for flash memory.

The NVM Express consortium was formed in 2008 to address this problem. Intel, Samsung, Dell, and other major IT providers collaborated on a specification designed from the ground up for non-volatile memory, with no assumptions about mechanical components.

The first NVMe specification (1.0) arrived in 2011. Adoption moved quickly. By 2014, NVMe SSDs were shipping from multiple manufacturers, and enterprise adoption accelerated through the mid-2010s as PCIe Gen 3 provided sufficient bandwidth for the protocol’s parallel architecture. The NVMe 2.0 specification, released in 2021, reorganized the standard into a modular library, separating base specifications from command sets and transport specifications. This restructuring opened NVMe to new device types, including computational storage drives, Zoned Namespace (ZNS) SSDs, and even rotational media.

Enterprise SSD capacity on NVMe continues to grow at a compound annual rate exceeding 43%, according to IDC projections.

How NVMe works

NVMe is a storage transfer protocol for accessing data quickly from flash memory storage devices such as SSDs. It enables flash memory to communicate directly with a computer via a high-speed peripheral component interconnect express (PCIe) bus, offering fast, high-throughput, and massively parallel data transfer.

Here’s how it works:

  • The host writes I/O commands into submission queues, then rings the corresponding doorbell registers (i.e., ready signals).
  • The NVMe controller fetches and executes the queued commands, posts entries to the completion queues, and raises an interrupt to the host.
  • The host processes the completion entries and updates the completion queue doorbell register.

The end result is significantly lower overhead when compared with traditional transfer protocols such as serial attached SCSI (SAS) and serial ATA (SATA).

Also, NVMe is optimised for non-uniform memory access (NUMA), meaning it was designed to allow for multiple CPU cores to manage queues. Modern processors contain dozens of cores, yet SATA and SAS funnel them all through a single I/O queue. NVMe assigns dedicated queue pairs to each CPU core, eliminating lock contention and enabling true parallel processing.
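The steps above can be sketched in Python as a toy model of one submission/completion queue pair. Class, method, and field names here are illustrative only, not taken from the NVMe specification:

```python
from collections import deque

class NvmeQueuePairSim:
    """Toy model of one NVMe submission/completion queue pair."""

    def __init__(self, depth=64):
        self.submission_queue = deque(maxlen=depth)
        self.completion_queue = deque(maxlen=depth)
        self.sq_doorbell = 0  # host-written tail pointer (the "ready" signal)

    def host_submit(self, command):
        # Step 1: host writes the command into the submission queue,
        # then rings the doorbell to tell the controller new work exists.
        self.submission_queue.append(command)
        self.sq_doorbell += 1

    def controller_process(self):
        # Step 2: controller fetches and executes queued commands, then
        # posts completion entries (interrupt delivery is elided here).
        while self.submission_queue:
            cmd = self.submission_queue.popleft()
            self.completion_queue.append({"cmd": cmd, "status": "success"})

    def host_reap(self):
        # Step 3: host consumes completion entries and would update the
        # completion-queue doorbell so the slots can be reused.
        done = list(self.completion_queue)
        self.completion_queue.clear()
        return done

qp = NvmeQueuePairSim()
for lba in range(4):
    qp.host_submit({"opcode": "read", "lba": lba})
qp.controller_process()
completions = qp.host_reap()
print(len(completions))  # 4
```

In real hardware each CPU core gets its own queue pair like this one, which is why no cross-core locking is needed on the submission path.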

PCIe interface and bandwidth

NVMe devices connect via PCIe lanes, with each lane providing bidirectional bandwidth. A typical NVMe SSD uses four PCIe lanes, delivering about 2GB/s per lane with PCIe Gen 4. Enterprise arrays aggregate multiple devices for even higher throughput. With PCIe Gen 5 doubling per-lane bandwidth to 4GB/s, NVMe performance headroom continues to grow.

But bandwidth alone doesn’t determine performance. Latency—the time between request and response—often matters more for transactional workloads. NVMe’s direct PCIe connection eliminates multiple bus transitions and protocol conversions that add overhead in SATA implementations.

NVMe command set

The NVMe command set is intentionally streamlined. Where SCSI requires complex command parsing and multiple protocol translation layers, NVMe uses a simplified set of commands that map directly to flash operations. The result: fewer CPU instructions per I/O request, lower host stack latency, and more efficient use of processor cycles.

NVMe also introduces features absent from legacy protocols, including multi-path I/O with namespace sharing across multiple controllers, end-to-end data protection via metadata fields, and sanitize commands that make data recovery impossible when hardware is retired.

Benefits of NVMe

NVMe’s main benefits include:

High throughput and low latency

NVMe leverages the high-speed PCIe bus, allowing significantly faster data transfer rates than older interfaces. The result is substantially lower latency and higher input/output operations per second (IOPS).

Performance enhancement

NVMe's ability to deliver high-speed data transfer and low latency significantly improves the performance of storage systems. NVMe uses parallel data paths through multiple queues, each capable of handling up to 64,000 commands, to support multi-core processors used for applications requiring fast data access, such as databases, virtualised environments, and high-performance computing (HPC).

Scalability

NVMe's architecture supports scalable performance as SSD technology evolves, ensuring compatibility with future advancements in storage. It easily scales with advancements in PCIe technology, supporting newer versions like PCIe 5.0 and beyond.

Enterprise applications

In enterprise environments, NVMe is essential for handling large-scale data operations and supporting demanding workloads, enabling faster data analytics, reduced processing times, and improved overall efficiency.

Cost efficiency

Although NVMe may initially be more expensive than traditional HDDs, its superior performance and durability usually lead to long-term cost savings through increased productivity and reduced downtime.

Consumer experience

For individuals, NVMe offers faster boot times, quicker file transfers, and a more responsive computing experience. This is particularly beneficial for gamers, content creators, and professionals working with large files.

Power efficiency

NVMe’s efficiency compounds its performance advantages. By eliminating legacy protocol overhead and reducing CPU utilization per I/O operation, NVMe SSDs deliver significantly higher work per watt than SATA. With PCIe Gen5, NVMe drives can achieve up to ~25× higher sequential read throughput and ~20× higher sequential write throughput compared to SATA SSDs, while maintaining low CPU overhead. This combination of higher throughput and better efficiency per I/O means fewer drives and less CPU time are required to achieve the same workload, resulting in measurable reductions in power consumption and cooling requirements at scale.
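The throughput ratio quoted above is easy to sanity-check with round numbers. The figures below are typical spec-sheet values, not benchmark results:

```python
# Back-of-the-envelope check of the sequential-read ratio quoted above.
SATA_SSD_SEQ_READ_MBPS = 550      # practical SATA III ceiling
PCIE5_X4_SEQ_READ_MBPS = 14_000   # typical PCIe Gen 5 x4 NVMe drive

ratio = PCIE5_X4_SEQ_READ_MBPS / SATA_SSD_SEQ_READ_MBPS
print(f"~{ratio:.0f}x")  # ~25x
```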

Future-proofing

NVMe is highly extensible and works well with emerging persistent memory technologies. Its TCP transport adds further flexibility: NVMe over TCP runs on any standard TCP/IP network without special hardware, making it attractive for both on-premises and cloud environments. 

NVMe advantages over traditional protocols 

NVMe offers numerous advantages over traditional storage protocols like SAS and SATA, particularly in terms of performance, scalability, and efficiency. 

Here are some specific metrics and examples illustrating these differences:

1. Command queues and depth

  • NVMe supports up to 64,000 queues, with each queue handling up to 64,000 commands.
  • SATA supports a single queue with a maximum of 32 commands (known as Native Command Queuing, or NCQ). 
  • SAS supports a single queue with a depth of up to 256 commands.

The vast number of command queues and the depth supported by NVMe allows systems to minimize latency and maximise throughput. This is especially beneficial in multi-core systems where multiple processors can issue commands simultaneously.
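Multiplying queues by queue depth shows just how wide the gap in outstanding commands is, using the figures above:

```python
# Maximum outstanding commands per protocol (queues x depth).
protocols = {
    "SATA (NCQ)": 1 * 32,
    "SAS":        1 * 256,
    "NVMe":       64_000 * 64_000,
}
for name, outstanding in protocols.items():
    print(f"{name}: {outstanding:,} outstanding commands")
```

NVMe's theoretical ceiling of over four billion outstanding commands is never reached in practice, but it means the protocol itself imposes no meaningful queueing limit.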

2. Data transfer speeds

  • NVMe uses the PCIe interface. PCIe 4.0 doubles the throughput of PCIe 3.0, offering roughly 2GB/s per lane, so a typical four-lane drive reaches about 8GB/s. PCIe 5.0 doubles per-lane throughput again to roughly 4GB/s, allowing a four-lane PCIe 5.0 drive to reach about 16GB/s.
  • SATA offers a maximum throughput of 600MB/s (SATA III).
  • SAS-3 (12Gb/s) offers a theoretical maximum of 1.5GB/s per lane, often using multiple lanes for higher throughput.

An NVMe SSD connected via four lanes of PCIe 5.0 can achieve transfer speeds of about 16GB/s—significantly faster than the 600MB/s limit of SATA III or the 1.5GB/s per lane of SAS-3. This makes NVMe particularly advantageous for data-intensive applications such as real-time data processing and large-scale database management.
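The bandwidth figures above follow directly from per-lane rates; the values below are rounded usable rates per direction, so real throughput varies with encoding and protocol overhead:

```python
# Approximate usable bandwidth per PCIe lane, per direction (GB/s).
per_lane_gbps = {"PCIe 3.0": 1.0, "PCIe 4.0": 2.0, "PCIe 5.0": 4.0}
lanes = 4  # typical NVMe SSD link width

for gen, bw in per_lane_gbps.items():
    print(f"{gen} x{lanes}: ~{bw * lanes:.0f} GB/s")

# Legacy ceilings for comparison:
print("SATA III: 0.6 GB/s")
print("SAS-3 (one lane): 1.5 GB/s")
```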

3. Latency

  • NVMe is designed for low latency, typically enabling latencies of around 20 microseconds or less.
  • SATA has higher latencies due to the older AHCI protocol, typically around 100 microseconds.
  • SAS latencies are generally better than SATA but still higher than NVMe, often around 50-100 microseconds.

Lower latency in NVMe drives means faster response times for applications, which is crucial for performance-critical tasks like high-frequency trading, real-time analytics, and interactive gaming.
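For a strictly serial workload (queue depth 1), per-I/O latency directly caps throughput, because each request must complete before the next starts. A quick calculation using the approximate latencies above:

```python
# Serial (queue depth 1) IOPS ceiling implied by per-I/O latency:
# one request must finish before the next can begin.
def max_serial_iops(latency_us: float) -> float:
    return 1_000_000 / latency_us

for name, lat_us in [("SATA (~100us)", 100), ("SAS (~70us)", 70), ("NVMe (~20us)", 20)]:
    print(f"{name}: {max_serial_iops(lat_us):,.0f} IOPS at QD1")
```

This is why latency-sensitive workloads like transaction logs, which serialize on each write, feel the protocol difference most sharply.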

4. Protocol efficiency

  • NVMe has an optimised command set for flash memory, reducing overhead and enabling more efficient data processing.
  • SATA and SAS command sets are designed for spinning disks, resulting in higher overhead when used with SSDs.

The NVMe protocol reduces unnecessary command translations and utilizes fewer CPU cycles, which directly contributes to faster read and write operations. This efficiency is noticeable in enterprise environments where high transaction rates and minimal latency are critical.

5. Scalability

  • NVMe easily scales with advancements in PCIe technology, supporting newer versions like PCIe 5.0 and beyond, ensuring compatibility with future storage innovations.
  • SATA and SAS are limited by the aging architecture and slower evolution compared to PCIe.

Businesses adopting NVMe can future-proof their storage infrastructure, ensuring long-term performance gains and compatibility with cutting-edge storage technologies.

NVMe vs. SATA vs. SAS: A direct comparison

Understanding NVMe’s advantages requires comparing it directly against the protocols it replaces.

Feature            | SATA III      | SAS-3             | NVMe (PCIe Gen 5)
-------------------|---------------|-------------------|------------------
Max command queues | 1             | 1                 | 64,000
Commands per queue | 32            | 256               | 64,000
Max throughput     | 600MB/s       | 1.2GB/s           | ~16GB/s (x4)
Typical latency    | 100–200μs     | 70–150μs          | 70–120μs
Protocol overhead  | 50–100μs      | 30–80μs           | <10μs
CPU efficiency     | High overhead | Moderate overhead | Low overhead
Hot-plug support   | Limited       | Yes               | Yes
Interface          | AHCI          | SAS HBA           | PCIe direct

According to Architecting IT, raw NAND flash reads take approximately 100 microseconds. SATA can add 50–100 microseconds of protocol overhead on top of that. NVMe keeps protocol overhead below 10 microseconds, which means the protocol itself is no longer the bottleneck—the flash media is.

For organisations currently running SATA-attached SSDs, the performance gap is significant. A single NVMe device delivers over 1 million IOPS for 4 KB random reads—performance that requires dozens of SATA SSDs. In enterprise database workloads, this translates to more transactions per second, lower query response times, and fewer storage-related wait events.
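The consolidation arithmetic can be sketched as follows. The per-drive SATA figure below is an assumption for illustration; spec-sheet peaks run higher, but sustained random-read rates vary widely by drive:

```python
import math

# Rough consolidation estimate: how many SATA SSDs match one NVMe
# drive's 4K random-read performance, per the figures in the text?
nvme_iops = 1_000_000   # single NVMe device (per the text above)
sata_iops = 40_000      # assumed sustained per-drive SATA figure

drives_needed = math.ceil(nvme_iops / sata_iops)
print(drives_needed)  # 25
```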

What is NVMe over Fabrics (NVMe-oF)?

NVMe-oF is the practice of connecting NVMe storage systems with hosts over a network or data fabric. A data fabric simply refers to the network architecture, transfer protocol, and other technologies and services that allow data to be accessed and managed seamlessly across this network. It’s about extending the low latency and performance capabilities of NVMe over PCIe to storage area networks (SANs) through NVMe-friendly standards for popular transfer protocols such as Ethernet, Fibre Channel, and TCP.

The NVMe over Fabrics (NVMe-oF) specification was created and is currently maintained by NVM Express, an open collection of standards for non-volatile memory technologies. Let’s take a closer look at NVMe transfer protocols supported by this standard.

What is NVMe over Fibre Channel (NVMe/FC)?

NVMe over Fibre Channel (also known as NVMe/FC or NVMe-FC) is a high-speed transfer protocol for connecting NVMe storage systems to host devices over fabrics. It supports the fast, in-order, lossless transfer of raw block data between NVMe storage devices in a network.

The original Fibre Channel Protocol (FCP) was designed to transport SCSI commands over Fibre Channel networks. It has become the dominant protocol used to connect servers with shared storage systems. While traditional FCP can be used to connect servers with NVMe storage devices, there’s an inherent performance penalty incurred when translating SCSI commands into NVMe commands for the NVMe array.

NVMe/FC supports the transfer of native NVMe commands, eliminating this translation bottleneck. This unlocks the full potential of FCP as a transport technology for end-to-end NVMe storage solutions, including parallelism, deep queues, multi-queueing, and high-speed data transfer.

What is NVMe over TCP (NVMe/TCP)?

NVMe over TCP (NVMe/TCP) is a low-latency transfer protocol that allows you to use standard Ethernet TCP/IP networking equipment natively with NVMe storage.

TCP/IP is the default transfer protocol used by the internet by which messages are broken up into packets to avoid having to resend an entire message in the event of a disruption of service. As an extension of the NVMe-oF specification, NVMe/TCP allows you to send NVMe commands using the same TCP/IP protocol transfer packets you use to transmit other types of data.

The plug-and-play ease and lower cost of standard Ethernet make it an economical solution for connecting your NVMe storage devices over a data fabric. Modern implementations achieve round-trip latencies of roughly 200–250 microseconds—comparable to direct-attached SATA SSDs despite crossing the network. NVMe/TCP works with existing switches, standard NICs, and cloud provider networks, making it especially attractive for hybrid cloud deployments.

A dual-protocol approach supporting both FC-NVMe and NVMe/TCP gives organisations flexibility to choose the transport that best fits each workload while maintaining a consistent NVMe command model throughout.
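As a concrete illustration, an NVMe/TCP attach with the standard nvme-cli tool might be constructed like this. The address, port, and NQN below are placeholders, not values from this article:

```python
import shlex

# Sketch: building an nvme-cli command to attach an NVMe/TCP subsystem.
# 192.0.2.10 is a documentation-only example address; 4420 is the
# conventional NVMe/TCP port; the NQN is a made-up example.
target = {
    "transport": "tcp",
    "traddr": "192.0.2.10",
    "trsvcid": "4420",
    "nqn": "nqn.2014-08.org.example:array1",
}

cmd = [
    "nvme", "connect",
    "-t", target["transport"],   # transport type
    "-a", target["traddr"],      # target address
    "-s", target["trsvcid"],     # transport service id (port)
    "-n", target["nqn"],         # NVMe Qualified Name of the subsystem
]
print(shlex.join(cmd))
```

On a host with nvme-cli installed, the printed command would attempt the fabric connection; the same `-t` flag selects `fc` or `rdma` for the other transports.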

NVMe over RDMA (RoCE)

RoCE (RDMA over Converged Ethernet) promises the lowest network latency of the NVMe-oF transports through kernel bypass—RDMA operations can complete in single-digit microseconds. But RoCE requires lossless Ethernet with Priority Flow Control configured across every switch and adapter, and a single misconfigured port can cause performance to collapse. When properly deployed, RoCE adds the least fabric latency of any NVMe-oF option.

Implementing NVMe in production

Simply installing NVMe drives rarely delivers expected benefits. The entire storage stack must support end-to-end NVMe operations for the protocol’s advantages to reach applications.

The protocol translation problem

Many organisations buy NVMe SSDs for existing arrays and expect a transformation. The drives communicate via NVMe, but the array controller translates everything to SCSI internally for compatibility. This translation adds microseconds at every layer, negating much of what NVMe is designed to deliver.

Best practices for NVMe deployment

Successful NVMe implementation requires attention across the stack:

  • Start with non-critical workloads for validation and baseline measurements
  • Implement latency monitoring at every layer to identify translation bottlenecks
  • Prioritize latency-sensitive databases first—these see the most immediate benefit
  • Verify end-to-end NVMe using tools like nvme-cli to confirm no hidden translations
  • Configure interrupt affinity to align NVMe queue pairs with CPU cores
  • Plan queue depth adjustments based on workload characteristics—NVMe supports far deeper queues than most applications default to
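For the verification step, `nvme list -o json` output can be inspected programmatically. The sample below is a trimmed, hypothetical record; real output contains more fields and varies across nvme-cli versions:

```python
import json

# Sketch: summarizing devices from `nvme list -o json` output.
# This JSON is a hand-written sample for illustration only.
sample = """
{
  "Devices": [
    {"DevicePath": "/dev/nvme0n1",
     "ModelNumber": "ExampleDrive",
     "PhysicalSize": 1920383410176}
  ]
}
"""

devices = json.loads(sample)["Devices"]
for dev in devices:
    size_tb = dev["PhysicalSize"] / 1e12  # bytes to decimal terabytes
    print(f"{dev['DevicePath']}: {dev['ModelNumber']} ({size_tb:.2f} TB)")
```

In practice the JSON would come from `subprocess.run(["nvme", "list", "-o", "json"], ...)` on the host being audited.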

NVMe use cases

NVMe technology is transforming many industries, including:

  • Healthcare: Fast access to patient records is crucial for timely diagnosis and treatment. High-resolution imaging modalities such as MRI, CT scans, and digital pathology also require rapid storage and retrieval of large data sets.
  • Finance: High-frequency trading systems rely on ultra-low latency to execute trades in microseconds, and large-scale data analysis supports risk assessment and decision-making.
  • Cloud computing: Efficient management of virtual machines (VMs) and containers optimises resource utilization and reduces costs.
  • Media and entertainment: Video editing and production requires rapid access to large video files and seamless playback.
  • Gaming: Fast load times and smooth performance are needed to deliver a better user experience.

Basically, any industry that relies on quick data access and retrieval can benefit from NVMe. 

The future of NVMe

NVMe’s evolution extends well beyond raw speed. Several developments are shaping the protocol’s trajectory:

  • Computational storage is now standardized through NVMe command sets released in January 2024. These specifications enable processing within the storage device itself—database filtering, compression, and AI inference can happen where data lives, reducing data movement overhead.
  • Zoned Namespaces (ZNS) align data placement with the physical characteristics of NAND flash, reducing overprovisioning and improving both performance and cost per terabyte. This is especially relevant for workloads with heavy sequential writes like video processing and logging.
  • Key Value command sets allow applications to communicate with drives using key-value pairs instead of block addresses, eliminating translation overhead for NoSQL databases and object storage workloads.
  • PCIe Gen 6 has reached mass production in 2026, doubling per-lane bandwidth again to 64GT/s, ensuring NVMe has headroom to grow alongside increasingly demanding AI and analytics workloads.

Conclusion 

Big data is no longer enough to maintain a competitive edge—it must also be fast. 

How do you make big data fast?

You start in the server room. Transitioning from HDDs to SSDs is a good place to start, but it’s only one piece of the SAN puzzle. The transfer protocol, interconnects, and networking architecture also play important roles in the overall speed of your storage system. That means replacing legacy technologies like SAS and SATA with NVMe, which offers clear and significant advantages in throughput and latency. 

Everpure leverages the increased transfer speeds of native NVMe transfer protocols to provide performant all-flash storage solutions. 

The secret to Everpure performance is DirectFlash®:

  • DirectFlash Fabric: Delivering performance close to DAS, DirectFlash Fabric offers enterprise-class reliability and data services.

All Everpure solutions leverage NVMe storage to unlock the full potential of flash memory. Everpure offers on-premises all-flash solutions for all your block, file, and object storage needs:

  • FlashArray//X: A performance-optimised all-flash array for Tier 0 and Tier 1 block storage applications
  • FlashArray//XL: Top-tier performance at petabyte scale for your most demanding workloads
  • FlashArray//C: A capacity-optimised all-flash array for Tier 2 block storage applications
  • FlashArray//E: An always-on data repository that can lower your TCO 
  • FlashBlade//S™: An all-flash, scale-out unified file and object storage platform
  • FlashBlade//E: A unified file and object storage platform that delivers all-flash storage at a cost comparable to disk for everyday use

Additionally, Everpure offers a suite of software solutions that can simplify and unify storage management across your hybrid cloud: 

  • Evergreen®: Our portfolio of storage-as-a-service subscriptions allows you to non-disruptively upgrade your hardware with no downtime, no migrations, and no degradations in performance.
  • Pure1®: Our AI-driven data services platform provides predictive analytics that help you catch bugs and address bottlenecks before they happen.
  • Purity: Purity abstracts away the complexity of managing a data centre, providing you with a simple dashboard for complete control over your data.
  • Portworx®: The most complete Kubernetes data services platform, Portworx provides a fully integrated solution for persistent storage, data protection, disaster recovery, and more for container workloads.