
What Is High-Performance Computing?

High-performance computing (HPC) is the ability to run computations in a coordinated, parallel manner across a large number of networked computers. HPC makes it possible to run computations that are too large for a single machine and to finish large operations in far less time. HPC is also referred to as supercomputing, and high-performance computers are commonly referred to as supercomputers.

HPC is especially important given the unprecedented rates at which data is generated today. IoT devices alone are expected to produce nearly 80 zettabytes of data by 2025. A single factory with IoT devices could generate hundreds of terabytes of data every day. Processing such a huge volume of data on a single computer isn’t practical. HPC, on the other hand, can handle huge data sets by splitting operations between multiple computers with the help of software and network capabilities.
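The idea of splitting one large operation across many workers can be sketched in a few lines of Python. This is a toy, single-machine illustration (using processes in place of networked nodes); the function names and chunking scheme are illustrative, not part of any real HPC framework:

```python
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk):
    # Stand-in for real per-node work, e.g. aggregating sensor readings.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Split the data set into one chunk per worker, mirroring how HPC
    # software divides a large job across networked nodes.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # Each chunk is processed in parallel; partial results are combined.
        return sum(pool.map(process_chunk, chunks))

if __name__ == "__main__":
    readings = list(range(100_000))  # pretend this is a day of IoT data
    print(parallel_sum(readings))
```

On a real cluster, the "workers" would be separate nodes communicating over the network (for example via MPI), but the divide-process-combine pattern is the same.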

Let’s take a closer look at why HPC is important and how it’s used.

Why is HPC important?

HPC enables the simulation or analysis of huge volumes of data that would otherwise be impossible to do with standard computers. This, in turn, leads to major advancements in fields such as scientific research, where the use of HPC has led to breakthroughs in everything from cancer treatments to COVID-19 vaccines. 

How does HPC work?

A single high-performance computer is made up of a group of computers called a cluster, and each computer in the cluster is called a node. Each node runs its own operating system and has one or more multicore processors, storage, and networking capabilities that allow the nodes to communicate with each other. A small cluster, for example, might have 16 nodes with four cores per processor, for a total of 64 cores that, working in parallel over the network, compute results much faster than a normal computer.

Where is HPC used?

HPC is currently used in a wide range of industries, and as data volumes grow, almost every industry is likely to adopt it. Adoption has been particularly robust in fields that need to analyze large data sets quickly, including:

  • Scientific research
  • Astronomy
  • Machine learning
  • Cybersecurity
  • Genome sequencing 
  • Animation
  • Molecular dynamics
  • Visual effects
  • Financial services
  • Financial risk modeling
  • Market data analysis
  • Product development
  • Greenfield design
  • Computational chemistry
  • Seismic imaging
  • Weather forecasting
  • Autonomous driving

What factors make HPC possible?

Four key factors make HPC possible:

Processing power

Put simply, a single processor can’t deliver the processing power needed to crunch huge volumes of data. In an HPC model, multiple processors instead work in parallel to deliver results. Recall that within this model:

  • The collection of individual computers that are networked together is called a cluster.
  • Each individual processing unit in the cluster is called a node.
  • Each processor in a node will have multiple cores. 

As an example, a cluster of 16 nodes with four cores each is very small, representing a total of 64 cores operating in parallel.
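The arithmetic behind cluster sizing is simple multiplication. A small hypothetical helper makes the article's example concrete (the parameter names are illustrative):

```python
def total_cores(nodes, cores_per_processor, processors_per_node=1):
    # Total parallel cores = nodes x processors per node x cores per processor.
    # The article's small cluster: 16 nodes x 1 processor x 4 cores = 64 cores.
    return nodes * processors_per_node * cores_per_processor

print(total_cores(16, 4))  # the 64-core example above
```

Production clusters apply the same formula at a much larger scale, often with multiple multi-socket processors per node.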

Most HPC use cases today involve thousands of cores working in parallel to complete designated processes in a shorter amount of time. Infrastructure-as-a-service (IaaS) providers offer users the ability to leverage large numbers of nodes when required and then wind down the workload when the requirement is complete. Users only pay for the processing power required, without the capital expenditure (CAPEX) costs associated with building out infrastructure. With IaaS, users also typically have the ability to prescribe node layouts for specific applications, if required.

Operating system

Operating systems act as the interface between the hardware and software used in HPC. The two major operating systems used in HPC environments are Linux and Windows. Linux is generally used for HPC, whereas Windows is used only when Windows-specific applications are required. 

Network

In HPC, the network connects the computing hardware, the required storage, and the user. The computing hardware is connected through networks that can handle high data bandwidth. The networks should also have low latency to speed up data transfers. Data transmissions and the management of clusters are handled by cluster managers, management services, or schedulers.

The cluster manager runs the workload among the distributed computational resources, such as CPUs, FPGAs, GPUs, and disk drives. All the resources have to be connected to the same network for the cluster manager to manage resources. When using the services of an IaaS provider, all of the facilities required to manage the infrastructure will be automatically applied by the provider.
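The core job of a scheduler, assigning work to the least-loaded resource, can be sketched with a greedy algorithm. This is a hypothetical, highly simplified model (real cluster managers such as Slurm or Kubernetes consider memory, GPUs, priorities, and much more):

```python
import heapq

def schedule(tasks, node_names):
    # Greedy load balancing: always place the next-largest task on the
    # node with the least accumulated work, the way a cluster manager
    # spreads a workload across distributed compute resources.
    heap = [(0, name) for name in node_names]  # (current load, node)
    heapq.heapify(heap)
    assignment = {name: [] for name in node_names}
    for task, cost in sorted(tasks, key=lambda t: -t[1]):
        load, name = heapq.heappop(heap)       # least-loaded node
        assignment[name].append(task)
        heapq.heappush(heap, (load + cost, name))
    return assignment
```

For example, four tasks with costs 4, 3, 2, and 1 spread across two nodes end up with a balanced load of 5 on each.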

Storage

Finally, the data to be processed by HPC has to be stored in a large data repository. Since that data can take different forms (structured, semistructured, and unstructured), different types of databases may be required to store it.

Data in its raw format(s) is stored in a data lake. This data can be difficult to process because it hasn’t yet been assigned a purpose. Data warehouses store data after it has been processed and cleaned up to suit a specific purpose.
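The lake-to-warehouse step amounts to imposing a schema on loosely structured records. A minimal sketch, with a made-up sensor schema (the field names are illustrative, not a real standard):

```python
def to_warehouse_rows(raw_events):
    # Raw, loosely structured events as they might sit in a data lake;
    # cleaning gives each one a fixed schema before it enters the warehouse.
    rows = []
    for event in raw_events:
        if "sensor_id" not in event or "value" not in event:
            continue  # drop records the warehouse schema can't hold
        rows.append({
            "sensor_id": str(event["sensor_id"]),
            "value": float(event["value"]),
            "unit": event.get("unit", "unknown"),
        })
    return rows

lake = [
    {"sensor_id": 7, "value": "21.5", "unit": "C"},  # usable reading
    {"note": "no reading"},                          # unusable raw record
]
print(to_warehouse_rows(lake))
```

Real pipelines do this at scale with ETL tools, but the principle is the same: raw data in, typed and purposeful rows out.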

Storage: The missing link in HPC

Storage is a critical part of HPC architecture, yet in many use cases it is overlooked. HPC is used when vast quantities of data must be processed in parallel, and its performance depends on whether every component in the architecture can keep up with the others.

Traditional legacy storage solutions may not be able to handle the needs of HPC, creating bottlenecks that hamper performance. Storage must keep pace with the setup’s processing power, which is why many HPC architectures use unified fast file and object (UFFO) storage.

Evergreen//One™ offers fast and reliable UFFO storage with the convenience of the pay-as-you-go (PaYG) model. It can be used in on-premises and hybrid-cloud models and is ideal for HPC environments, which require the ability to scale operations without compromising performance.

Get started with Evergreen//One today. New customers get the first three months of service free.
