The Joy (and Challenges) of Brute Force Computing

We’ve entered the AI era of scale. How can businesses seize this opportunity to create competitive advantage, derive value, and build new things? By embracing brute force computing.

Introduction

By Par Botes, VP AI Infrastructure, Pure Storage

If the history of computing can be divided into distinct eras, we are in the era of scale. All of the milestones that have led us to today—rapid advances in networking, cloud, mobile, and now AI—have subjected our infrastructure to a very real pressure test.

As someone who’s worked in the data industry for many years, this isn’t just a watershed moment for me and my peers or the IT industry. This is seismic for business, the economy, and even society. What will this rapid, endless expansion mean for the future? 

None of us knows the answer yet, but we can approach whatever comes next with context, clarity, and curiosity. In this series, I’ll explore some key computing concepts and conversations we should really be having around AI—starting with the history (and future) of brute force computing.

“The model that internet search pioneered looks on track to dominate how we think about computing in the age of AI.”

Par Botes

VP AI Infrastructure, Pure Storage

A Watershed Moment for Data

Why are we in an era of scale? Look at consumer and enterprise hardware. More than a billion smartphones have been sold annually for over a decade. Cloud computing data centers hold well over a million servers, and they’re networked into similar-sized systems around the world.

The volume of data created, captured, copied, and consumed globally has been growing exponentially. In 2024, it was estimated that approximately 402.89 million terabytes (or about 0.4 zettabytes) of data would be generated daily, totaling 147 zettabytes annually. This represents a substantial increase compared to data volumes from two decades ago, reflecting the rapid expansion of digital activities worldwide.
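The daily and annual figures above are consistent with each other, as a quick back-of-the-envelope check shows (using the convention that one zettabyte is a billion terabytes):

```python
# Sanity-check the article's figures: ~402.89 million terabytes per day
# should total roughly 147 zettabytes per year (1 ZB = 1e9 TB).
daily_tb = 402.89e6          # terabytes generated per day (source figure)
daily_zb = daily_tb / 1e9    # ~0.4 zettabytes per day
annual_zb = daily_zb * 365   # ~147 zettabytes per year

print(f"{daily_zb:.2f} ZB/day -> {annual_zb:.0f} ZB/year")
```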

Big numbers are fun, but the real watershed is how all that data and computing power, networked to all those locations, has been put to use.

The best early example of that payoff has been in search. Want to find a website to suit your interest? Search all of them efficiently, looking for key connections among words, histories, and other relationships. Image recognition? Tag a million pictures, and fire up your servers. Want to translate from one language to another? Map directions? That’s the power of data at scale—and brute force computing.

The Unmatched Power of Scale

When I say this is brute force computing, I’m not being disrespectful. Before we had all of the data and compute resources, there were lots of theories about how to manually sort websites, construct models of how languages worked, or apply ontological theories. What we didn’t have, though, were results like these. You can’t argue with results.

Rich Sutton wrote an essay titled “The Bitter Lesson” in which he observed that “breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning.”

Those words still ring true in the era of large language models (LLMs). Today, AI has its own intricacies: deep learning, amazing algorithms, and optimized hardware that can use all of the compute resources available. All great and important innovations. But the true success of generative AI has a lot to do with the reality that LLMs have been growing, on average, 10x a year.

Scalable compute is what is making this possible. Nvidia’s new Blackwell chip, critical to much cutting-edge AI, packs 208 billion transistors and connects to other chips at 10 terabytes per second to serve this need. It also dramatically increases memory speed to support even more advanced (read: high-parameter) AI models.

It’s Still a Brute Force World

Meta-methods for search and learning are still in full force and will keep evolving for the foreseeable future. In the 18 months leading up to the middle of 2024, the world’s largest hyperscale computing companies picked up something like 1.6 million Nvidia H100 chips, until recently the top-performing GPU. Other cloud and big tech companies got about 300,000, while all the other buyers together purchased maybe 350,000. Nvidia aside, Google’s latest TPU pod uses 8,960 chips—twice as many chips, with twice as many interconnections between them, at two-thirds more compute per chip than the prior generation.
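Taking the TPU ratios quoted above at face value (twice as many chips, each delivering two-thirds more compute), the pod's aggregate compute works out to roughly 3.3x the prior generation. A minimal sketch of that arithmetic, using only the ratios in the text:

```python
# Rough aggregate-throughput estimate from the ratios quoted above:
# twice as many chips, each with two-thirds more compute than before.
chip_count_ratio = 2.0             # 8,960 chips vs. the prior generation
per_chip_compute_ratio = 1 + 2 / 3  # "two-thirds more compute per chip"

pod_compute_ratio = chip_count_ratio * per_chip_compute_ratio
print(f"Aggregate pod compute: ~{pod_compute_ratio:.1f}x the prior generation")
```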

Brute force, meet brute force.

The model that internet search pioneered looks on track to dominate how we think about computing. Even if the newest LLM’s performance isn’t improving as quickly as in the past, because it has already been trained on virtually all of the internet’s data, there are still more advanced model algorithms in the toolbag. Looking across the contemporary landscape, nothing seems likely to replace this approach, and even if something does, the next thing will likely have a lot of search-style “scale of data, scale of compute” machinery inside it. Neuro-semantic AI systems may perform plenty of interesting functions, but they’ll only get there with a lot of compute power and better data management.

To Succeed in This and Future Eras, Harness the Power of Data and Scale

The joy of brute force will become even more apparent and urgent as industries move further into multimodal AI models or domain-specific AIs that draw on the same data sets in different ways. (Many think of data as a fixed thing, resting up for the next call, but much of it is highly dynamic.) One picture is a snapshot, but put together 175,000 of them and you have a two-hour movie.
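The 175,000-frame figure checks out: at the standard cinema rate of 24 frames per second, a two-hour film is 172,800 frames, which rounds to about 175,000.

```python
# Verify the snapshot-to-movie arithmetic: frames = fps * seconds.
fps = 24                 # standard cinema frame rate
seconds = 2 * 3600       # two hours
frames = fps * seconds   # 172,800 frames, i.e. roughly 175,000

print(f"{frames:,} frames in a two-hour movie at {fps} fps")
```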

See, scale isn’t just about “more.” Scale changes environments and creates the capability to gain new insights or build new things. Here’s where businesses have their opportunity and where experimentation will be key.

Technologists and business leaders are now at the moment where they need to consider the highest-value areas in which to deploy brute force computing. Consider comparative advantages in the market, the strength of current and future competitors, and the amount and quality of data that can be put toward brute force problems. We’ll need better storage platforms offering greater velocity and diversity of data sources, along with new ways of managing, and even thinking about, data. And preparing data for infinite compute power will be the next big thing. More on that in a future article.

Much of what lies ahead remains unknown, but one thing is certain: We won’t know the answers unless we start experimenting.


- Par Botes
