
The Joy (and Challenges) of Brute Force Computing

We’ve entered the AI era of scale. How can businesses seize this opportunity to create competitive advantage, derive value, and build new things? By embracing brute force computing.


Introduction

By Par Botes, VP AI Infrastructure, Pure Storage

If the history of computing can be divided into distinct eras, we are in the era of scale. All of the milestones that have led us to today—rapid advances in networking, cloud, mobile, and now AI—have created a full-scale infrastructure pressure test.

As someone who’s worked in the data industry for many years, this isn’t just a watershed moment for me and my peers or the IT industry. This is seismic for business, the economy, and even society. What will this rapid, endless expansion mean for the future? 

None of us knows the answer yet, but we can approach whatever comes next with context, clarity, and curiosity. In this series, I’ll explore some key computing concepts and conversations we should really be having around AI—starting with the history (and future) of brute force computing.

“The model that internet search pioneered looks on track to dominate how we think about computing in the age of AI.”

- Par Botes, VP AI Infrastructure, Pure Storage

A Watershed Moment for Data

Why are we in an era of scale? Look at consumer and enterprise hardware. More than a billion smartphones have been sold annually for over a decade. The largest cloud providers operate well over a million servers each, networked into similar-sized systems around the world.

The volume of data created, captured, copied, and consumed globally has been growing exponentially. In 2024, it was estimated that approximately 402.89 million terabytes (or about 0.4 zettabytes) of data would be generated daily, totaling 147 zettabytes annually. This represents a substantial increase compared to data volumes from two decades ago, reflecting the rapid expansion of digital activities worldwide.
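Those figures are internally consistent. A quick sanity check, assuming 1 zettabyte = one billion terabytes:

```python
# Sanity-check the cited global data-volume figures.
TB_PER_ZB = 1e9  # 1 zettabyte = 10^9 terabytes

daily_tb = 402.89e6             # ~402.89 million TB generated per day (cited)
daily_zb = daily_tb / TB_PER_ZB
annual_zb = daily_zb * 365

print(f"{daily_zb:.2f} ZB/day")    # -> 0.40 ZB/day
print(f"{annual_zb:.0f} ZB/year")  # -> 147 ZB/year
```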

Big numbers are fun, but the real watershed is how all that data and computing power, networked to all those locations, has been put to use.

The best early example of that payoff has been in search. Want to find a website to suit your interest? Search all of them efficiently, looking for key connections among words, histories, and other relationships. Image recognition? Tag a million pictures, and fire up your servers. Want to translate from one language to another? Map directions? That’s the power of data at scale—and brute force computing.

The Unmatched Power of Scale

When I say this is brute force computing, I’m not being disrespectful. Before we had all of this data and compute, there were plenty of theories about how to manually sort websites, construct models of how languages work, or encode ontologies. What we didn’t have, though, were results like these. You can’t argue with results.

Rich Sutton wrote an essay titled “The Bitter Lesson” in which he observed that “breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning.”

Those words still hold in the era of large language models (LLMs). Today’s AI has its own intricacies: deep learning, remarkable algorithms, and optimized hardware that can soak up all of the compute resources available. All great and important innovations. But the true success of generative AI owes a great deal to the fact that LLMs have been growing, on average, 10x a year.

Scalable compute is what makes this possible. Nvidia’s new Blackwell chip, critical to much cutting-edge AI, packs 208 billion transistors and connects to other chips at 10 terabytes per second to serve this need. It also dramatically increases memory speed to support even more advanced (read: high-parameter) AI models.

It’s Still a Brute Force World

Meta-methods for search and learning are still in full force and will keep evolving for the foreseeable future. In the 18 months leading up to mid-2024, the world’s largest hyperscale computing companies picked up something like 1.6 million Nvidia H100 chips, until recently the top-performing chip. Other cloud and big tech companies acquired about 300,000, while all other buyers together purchased perhaps 350,000. Nvidia isn’t alone: Google’s latest TPU pod uses 8,960 chips—twice as many as the prior generation, with twice as many interconnections between chips and two-thirds more compute per chip.

Brute force, meet brute force.

The model that internet search pioneered looks on track to dominate how we think about computing. Even if the newest LLMs aren’t improving as quickly as in the past—by now they’ve been trained on essentially all of the internet’s data—there are still more advanced model algorithms in the toolbag. Looking across the contemporary landscape, nothing seems likely to replace this approach. And even if something does, the next thing will likely have a lot of search-style “scale of data, scale of compute” somewhere inside it. A neuro-semantic AI system can perform a lot of interesting functions, but it will only get there with a lot of compute power and better data management.

To Succeed in This and Future Eras, Harness the Power of Data and Scale

The joy of brute force will become even more apparent and urgent as industries move further into multimodal AI models or domain-specific AIs that draw on the same data sets in different ways. (Many people think of data as a fixed thing, resting up for its next call, but much of it is highly dynamic.) One picture is a snapshot, but put together 175,000 of them and you have a two-hour movie.
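The movie analogy holds up arithmetically, assuming the cinema-standard 24 frames per second:

```python
# Frames in a two-hour film at the standard 24 frames per second.
fps = 24
runtime_seconds = 2 * 60 * 60   # two hours
frames = fps * runtime_seconds
print(frames)  # -> 172800, roughly the 175,000 snapshots in the text
```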

See, scale isn’t just about “more.” Scale changes environments and creates the capability to gain new insights or build new things. Here’s where businesses have their opportunity and where experimentation will be key.

Technologists and business leaders are now at the moment where they need to consider the highest-value areas in which to deploy brute force computing. Consider comparative advantages in the market, the strength of current and future competitors, and the amount and quality of data that can be put toward brute force problems. We’ll need better storage platforms, offering greater velocity and diversity of data sources, and new ways of managing, and even thinking about, data. And preparing data for effectively infinite compute power will be the next big thing. More on that in a future article.

Much of what lies ahead remains unknown, but one thing is certain: We won’t know the answers unless we start experimenting.


- Par Botes

