The Next AI Gold Rush Is Inside the Data Center

The AI bull market follows a clear pattern once you know where to look. 

The massive buildout of AI infrastructure isn’t a single race – it’s a rolling gold rush driven by a sequence of AI infrastructure bottlenecks.

Each time hyperscalers run into a constraint – GPUs, servers, cooling, power, memory – the market floods capital toward the companies that solve it. Those companies become the next wave of winners.

The winning strategy isn’t buying what just worked. It’s identifying which constraint hyperscalers will throw hundreds of billions of dollars at next.

The good news is that the pattern is remarkably consistent. The better news? We can see the next bottleneck forming right now.

It sits deep inside the data center itself: the network plumbing that moves data between GPUs.

And if you catch this cycle early, the upside can be enormous.

To see how the AI infrastructure bottleneck cycle works, look at the previous phases of the AI buildout.

The Previous AI Infrastructure Bottlenecks

We’ve already seen how this cycle plays out when a new AI bottleneck emerges.

  1. Compute. The first bottleneck was the most obvious one: you cannot train a large language model without enormous numbers of GPUs. Nvidia (NVDA) had the dominant training GPU. Its revenue went from $27 billion in FY2023 to $130 billion in FY2025. The stock rose roughly 800% in two years. The lesson was not subtle.
  2. Servers. Nvidia was selling chips as fast as it could make them, but someone still had to assemble the systems that housed them. The GPU server buildout created a secondary wave. Super Micro Computer (SMCI) and Dell (DELL) rocketed as hyperscalers raced to deploy capacity. At one point, Super Micro was the fastest-growing company in the S&P 500.
  3. Cooling. You cannot pack that many GPUs into a data center without dealing with the thermal consequences. Conventional air cooling hit a wall. Liquid cooling became non-negotiable. Vertiv (VRT) became Wall Street’s favorite infrastructure play seemingly overnight, going from a quiet power management company to a consensus AI trade.
  4. Energy. Data centers started drawing so much power that utilities couldn’t keep up. Suddenly, nuclear power plants were not boring regulated assets – they were scarce AI infrastructure. Constellation Energy (CEG) and small modular reactor plays like Oklo (OKLO) caught enormous bids as investors woke up to the reality that all this compute needed electrons, and those electrons had to come from somewhere reliable and carbon-friendly enough to survive ESG scrutiny.
  5. Memory. AI inference demands enormous memory bandwidth. The bottleneck rotated to the high-bandwidth memory (HBM) and high-performance storage needed to serve AI workloads at scale. Micron (MU) and the newly independent SanDisk (SNDK) became plays on the memory buildout. The storage and memory layer got its moment in the sun.

Each of these waves followed the same arc: obscurity, recognition, euphoria, rotation. In every case, hyperscalers had identified a specific constraint that prevented them from deploying capital productively – and the market rewarded whoever solved it.

That pattern is repeating again right now. And the next bottleneck is already visible.

The Next AI Bottleneck: Data Center Networking

As AI clusters grow from thousands of GPUs to hundreds of thousands of GPUs – and as the architectural ambition shifts from training giant monolithic models to running distributed inference across sprawling, always-on infrastructure – the internal plumbing of the data center has become the binding constraint.

We are talking about interconnects: the cables, transceivers, switches, and signal-processing chips that move data between GPUs, servers, racks, and buildings. 

GPUs are only as powerful as the data pipeline feeding them. If information can’t move fast enough between chips, racks, and clusters, even the most advanced processors spend time sitting idle. In a world where a single GPU can cost tens of thousands of dollars, idle time becomes extremely expensive.
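To make that concrete, here is a rough back-of-envelope sketch. The GPU price, cluster size, depreciation window, and idle fraction below are illustrative assumptions, not figures from this article:

```python
# Back-of-envelope cost of idle GPU time in a large AI cluster.
# All inputs are illustrative assumptions, not reported figures.

GPU_PRICE = 30_000          # dollars per GPU (assumed)
CLUSTER_SIZE = 100_000      # GPUs in the cluster (assumed)
DEPRECIATION_YEARS = 4      # useful life over which hardware cost is spread (assumed)
IDLE_FRACTION = 0.15        # share of time GPUs sit waiting on the network (assumed)

# Hardware cost burned per year, in total and by the idle share.
annual_hw_cost = GPU_PRICE * CLUSTER_SIZE / DEPRECIATION_YEARS
annual_idle_cost = annual_hw_cost * IDLE_FRACTION

print(f"Annual hardware cost:    ${annual_hw_cost:,.0f}")
print(f"Burned on idle GPU time: ${annual_idle_cost:,.0f}")
# With these assumptions: $750,000,000/yr total, of which
# $112,500,000/yr is spent on GPUs waiting for data.
```

Under those assumptions, a nine-figure annual sum evaporates into network wait time alone, which is why hyperscalers are motivated to spend heavily on anything that keeps GPUs fed.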

The hyperscalers understand this. Broadcom (AVGO) CEO Hock Tan made it explicit in the company’s most recent earnings call, distinguishing between scale-up networking (connecting GPUs tightly within a cluster) and scale-out networking (connecting clusters to each other across a data center). This is not semantic hairsplitting. It is the architectural distinction that determines who wins the next leg of the AI infrastructure trade.
