
The AI Infrastructure Thesis

Worth Capital Intelligence | May 2026 | Confidential - For Accredited Investors Only

EXECUTIVE SUMMARY

AI infrastructure is the largest capital expenditure cycle since the internet. The five biggest technology companies on Earth have committed $622 billion in 2026 alone to build the physical infrastructure that AI requires to exist - data centers, power plants, chips, memory, cooling, and fiber. Goldman Sachs projects $7.6 trillion in cumulative AI infrastructure spending through 2031. This is not speculative. The contracts are signed, the earnings calls are public, and the money is being spent right now.

The opportunity is in the supply chain, not the software. The United States is short power, short compute, short chips, and short memory. Larry Fink at BlackRock says these shortages will last a decade. Power grid interconnection queues are 5-7 years. Transformer lead times are 4 years. High-bandwidth memory is sold out through 2026. The companies that solve these bottlenecks - the ones that build the power plants, manufacture the cooling systems, produce the memory, and lay the fiber - are being paid today and have multi-year order backlogs. Many of them are up 100-238% year-to-date and the buildout has barely started.

This paper lays out the thesis in full: the demand math, the five investable layers of the infrastructure stack, the specific companies and catalysts, the risks, and the timeline. It is written for investors who want to understand why we believe AI infrastructure is the picks-and-shovels play of the decade - and why the window to invest before Wall Street consensus catches up is now.

Twelve months ago, the world was arguing about chatbots.

Which AI assistant would win. Whether GPT-5 would be smarter than Claude. Whether Google was behind or ahead. The conversation was about software. Models. Interfaces. The thing people type into.

Nobody was talking about the buildings. The power plants. The cooling systems. The fiber optic cables. The memory chips that every single one of those models requires to exist.

That was a mistake. And it created an opportunity that is still wide open.

I. Where We Were

In early 2025, AI was a consumer product. ChatGPT had crossed 400 million weekly users. Investors piled into NVDA because it was the obvious name. The Magnificent Seven dominated every portfolio. If you wanted "AI exposure," you bought the index.

Nobody was asking: where does the electricity come from? Who builds the data center? What happens when you need 120 kilowatts per rack instead of 10? Who makes the memory that goes inside the GPU? Who runs the fiber between the clusters?

Those questions didn't matter yet because the scale hadn't arrived. A few hundred thousand GPUs in a handful of data centers was enough. The grid could handle it. The existing infrastructure was sufficient.

That changed.

II. What Happened

Between Q4 2024 and Q1 2026, hyperscaler capital expenditure went vertical.

Company              2024 Capex    2026 Guided    Change
Amazon               $74.9B        $200B          +167%
Microsoft            $55.7B        $190B          +241%
Alphabet (Google)    $49.7B        $160-180B      +222%
Meta (Facebook)      $38.5B        $115-135B      +212%
Oracle               $11.8B        $25-30B        +130%
Combined             $230B         $622B          +170%

That is a 170% increase in two years. And 85-90% of it is going to AI infrastructure.
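The growth figures are simple to reproduce. A minimal sketch using the guided numbers from the table above; where guidance is a range, we show the change implied at both ends rather than picking a midpoint:

```python
# Back-of-envelope check of the growth rates in the capex table above.
# 2026 figures are company guidance; ranges are shown at both ends.

capex_billion = {
    # company: (2024 actual, 2026 guided low, 2026 guided high)
    "Amazon":    (74.9, 200.0, 200.0),
    "Microsoft": (55.7, 190.0, 190.0),
    "Alphabet":  (49.7, 160.0, 180.0),
    "Meta":      (38.5, 115.0, 135.0),
    "Oracle":    (11.8,  25.0,  30.0),
}

for name, (y2024, lo, hi) in capex_billion.items():
    pct = lambda guided: (guided / y2024 - 1) * 100
    label = f"+{pct(lo):.0f}%" if lo == hi else f"+{pct(lo):.0f}% to +{pct(hi):.0f}%"
    print(f"{name:10s} {label}")

# Headline figure cited in the text: combined $230B (2024) -> $622B (2026 guided)
print(f"Combined   +{(622 / 230 - 1) * 100:.0f}%")
```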

These are not projections from analysts. These are guided numbers from CFOs on earnings calls. This money is committed. It gets spent whether the stock market goes up or down.

The question is no longer whether the buildout happens. The question is who benefits from it.

III. The Four Shortages

Power

U.S. data centers consumed 183 TWh in 2024, about 4% of total U.S. electricity. By 2028, that figure is projected to reach 325-580 TWh, or 6.7-12% of the entire grid. The Department of Energy has identified hyperscale data center connection requests of 300-1,000 MW with lead times of 5-7 years. The grid grew about 5% over the last decade. AI needs it to grow roughly 50% in the next five years. The grid cannot keep up.
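Two back-of-envelope numbers fall out of that paragraph: the total generation base implied by the 4% share, and the growth rate data center load would need to hit the 2028 range. A minimal sketch; the flat grid baseline and the 2024-to-2028 compounding window are our simplifying assumptions:

```python
# Implied grid size and required data center load growth, from the figures above.

dc_2024_twh = 183.0           # U.S. data center consumption, 2024
dc_share_2024 = 0.04          # ~4% of total U.S. electricity
projections_2028 = (325.0, 580.0)

implied_grid_twh = dc_2024_twh / dc_share_2024
print(f"Implied total U.S. generation: ~{implied_grid_twh:,.0f} TWh")

for projection_twh in projections_2028:
    cagr = (projection_twh / dc_2024_twh) ** (1 / 4) - 1   # 2024 -> 2028
    share = projection_twh / implied_grid_twh * 100
    print(f"{projection_twh:.0f} TWh by 2028 -> ~{cagr:.0%}/yr growth, "
          f"~{share:.0f}% of today's grid")
```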

Compute

NVIDIA shipped $193.7 billion in data center revenue in FY2026, holding 90-95% market share in AI training chips. Every hyperscaler is constrained by GPU supply. Blackwell demand is 35x supply. Jensen Huang estimates the AI infrastructure buildout will exceed $1 trillion annually by 2028.

Chips

TSMC manufactures 95% of advanced AI chips on 3nm and below. A single company in Taiwan. The entire AI revolution runs through one island. ASML is the only company on Earth that makes the EUV lithography machines required to print these chips. There is no alternative supplier.

Memory

Every NVIDIA GPU requires high-bandwidth memory. Only three companies make HBM: SK Hynix, Samsung, and Micron. HBM is sold out through 2026. Micron's HBM3E revenue hit $2.5 billion per quarter in Q1 2026, up from near-zero two years ago. The HBM market is projected to reach $40 billion by 2027.

Four shortages. Four bottlenecks. Four investment themes.

IV. The Picks and Shovels

During the California Gold Rush, the people who made the most money were not the miners. They were the people who sold the picks, the shovels, the jeans, and the provisions. The miners went broke. The suppliers got rich.

The same dynamic is playing out in AI.

We do not know which AI model wins. We do not know whether OpenAI or Anthropic or Google captures the most revenue. We do not need to know. Because every single one of them needs the same physical infrastructure:

  • Chips to run the models
  • Power to run the chips
  • Buildings to house the chips
  • Cooling to keep the chips from melting
  • Fiber to connect the chips to each other

It does not matter who wins the model war. It matters who builds the arena.

V. The Five Layers

We break the AI infrastructure stack into five investable layers. Each layer has its own supply/demand dynamics, its own bottleneck, and its own set of companies.

Layer 1: Compute

GPUs, custom ASICs, and high-bandwidth memory. This is the layer everyone knows about because of NVIDIA.

NVIDIA's data center revenue hit $62.3 billion in Q4 FY2026 alone, up 75% year-over-year. They hold 90%+ market share in AI training and 80% in inference. Their gross margins are 75%. They are printing money at a rate that has no precedent in semiconductor history.

But the compute layer is bigger than NVIDIA. AMD is the only credible second source. Meta runs 100% of Llama inference on AMD MI300X chips. Microsoft and Amazon are testing AMD for Azure and AWS workloads. Hyperscalers need a second source to avoid being held hostage by NVIDIA's pricing power, and AMD is it.

Broadcom designs custom AI chips for Google, Meta, Anthropic, and OpenAI. Their backlog hit $73 billion in Q1 FY2026. Custom silicon is 25% of hyperscaler AI compute spend and growing.

Micron is the only U.S. manufacturer of high-bandwidth memory. HBM goes inside every NVIDIA GPU. Three companies on Earth make it. Micron's revenue is up 54% year-over-year and they cannot build capacity fast enough.

ASML is the only company that makes the machines that print advanced chips. Their monopoly on EUV lithography is absolute. No ASML, no advanced semiconductors. Period.

Layer 2: Power

This is the layer the market underestimates the most.

AI data centers require 50-150 kW per rack, compared to 10-15 kW for traditional workloads. A single hyperscale facility under construction today targets 2 GW of power - the equivalent of a small city. FERC interconnection queues exceed 2,400 GW nationwide with average wait times of 5-7 years.
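To put the density numbers in facility terms, here is a hypothetical sizing sketch for a 2 GW campus. The rack densities come from the paragraph above; the PUE overhead factor is our assumption for illustration, not a disclosed figure.

```python
# Illustrative rack counts for a 2 GW AI campus at different rack densities.

facility_mw = 2_000
pue = 1.25                     # assumed total-power / IT-power ratio (cooling, distribution)
it_mw = facility_mw / pue      # power actually available to racks

for rack_kw in (10, 50, 120, 150):
    racks = it_mw * 1_000 / rack_kw
    print(f"{rack_kw:>3} kW racks -> ~{racks:,.0f} racks")
```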

You cannot build an AI data center without power. And the power does not exist yet.

The solutions are playing out on a timeline:

Natural gas (now - 2028): GE Vernova's H-class turbines are the fastest path to power. They can deploy 500 MW gas plants in 18-24 months. Their order backlog is $2.5 billion and growing 50% year-over-year.

Fuel cells (now - 2028): Bloom Energy drops a fuel cell power plant next to a data center in 12-18 months, bypassing the 5-7 year grid connection queue entirely. Oracle signed a 200 MW deal. Equinix signed a 100 MW expansion. Bloom's pipeline is 1 GW.

Nuclear (2027 - 2035+): Constellation Energy restarted Three Mile Island Unit 1 for Microsoft under a 20-year power purchase agreement at $180/MWh. Oklo, NuScale, and Kairos Power are developing small modular reactors. Nuclear is the 10-year destination. Gas and fuel cells are the bridge.

Grid infrastructure (ongoing): Eaton has a $4 billion transformer backlog with 4-year lead times for large units. If you order a power transformer today, it arrives in 2030. The grid is a bottleneck inside a bottleneck.

Layer 3: Data Centers

The physical buildings that AI lives in. $46.5 billion in U.S. data center construction spending through Q1 2026 alone, up from $7.3 billion in the same period last year. Nearly 3,000 U.S. data centers are under construction or planned.

Equinix and Digital Realty dominate colocation. CoreWeave is the NVIDIA-backed GPU cloud pure-play, backed by $12 billion in debt and an $11.9 billion Microsoft contract.

But the more interesting story is the converted Bitcoin miners. Companies like IREN, Applied Digital, and Core Scientific already own data center infrastructure - power, land, cooling - from their mining operations. They are pivoting those facilities to GPU compute hosting for AI workloads. The infrastructure is already built. The capex burden is behind them. They are leveraging sunk costs into the highest-growth market in technology.

Layer 4: Networking

GPU clusters need to talk to each other. A single training run on 100,000 GPUs generates petabytes of data movement. Every GPU must communicate with every other GPU, synchronizing gradients across the entire cluster. The network is the nervous system.

Arista Networks is the Ethernet leader, with $1.2 billion in AI networking revenue in FY2025 and a $2.5 billion order backlog. Broadcom's Tomahawk 5 switching silicon holds 70% market share in AI Ethernet switches.

The optical layer is equally critical. Coherent, Lumentum, and Corning make the transceivers and fiber that connect the clusters. NVIDIA just committed $2.5 billion to build three manufacturing facilities with Corning specifically for AI optical interconnects. 800G shortages persisted through Q2-Q4 2025. The 1.6T generation starts shipping Q4 2026.

Dell'Oro Group projects $80 billion in Ethernet switch sales over the next five years from AI back-end networking alone.

Layer 5: Software and Services

For every $1 spent on AI hardware, $1.80 follows in software and services. Cloud platforms, AI security, enterprise deployment tools, and inference optimization.

AWS AI/ML revenue hit $4.2 billion in a single quarter, up 125% year-over-year. Azure AI revenue hit $3.7 billion, up 78%. CrowdStrike's AI security modules generated $850 million in FY2026 Q3, up 62%. Palantir's AIP platform grew 42% to $312 million per quarter.

The software layer overtakes hardware spending in 2028. Gartner projects $450 billion in AI software spend versus $420 billion in hardware by that year. This is where the returns compound over the longest horizon.

VI. Why Right Now

The window matters. Here is what we are seeing in real time as of May 2026.

The stocks are moving, but the thesis is still early. Year-to-date returns on our watchlist as of May 9, 2026:

Company         YTD Return    Why
Intel           +238.5%       CHIPS Act funding, Apple manufacturing partnership, Google collaboration
Bloom Energy    +200.4%       Fuel cell data center deployments
Micron          +161.7%       HBM memory sold out
Corning         +113.5%       $2.5B NVIDIA manufacturing deal
AMD             +112.5%       MI400 ramp, Meta inference wins
Vertiv          +109.8%       Liquid cooling for GPU racks

These names have already run, and the thesis is still in the second inning. The hyperscaler capex numbers keep getting revised upward. Wall Street consensus for 2024 was $180 billion; actual was $231 billion, a 28% overrun. Consensus for 2025 was $300 billion; guided was $392 billion, a 31% overrun. Consensus for 2026 was $625 billion; now guided at $725 billion, a 16% overrun.
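The overrun math reproduces directly from those figures:

```python
# Wall Street consensus vs. actual/guided hyperscaler capex, $ billions.

capex_estimates = [
    # year, consensus, actual or guided outcome
    (2024, 180, 231),
    (2025, 300, 392),
    (2026, 625, 725),
]

for year, consensus, outcome in capex_estimates:
    overrun = (outcome / consensus - 1) * 100
    print(f"{year}: consensus ${consensus}B vs ${outcome}B -> +{overrun:.0f}% overrun")
```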

Every year, Wall Street underestimates the spending. Every year, the companies spend more than expected.

The crowd hasn't found most of these names. Everyone knows NVDA. Very few people know that Vertiv is one of the only companies shipping liquid cooling systems for 120 kW Blackwell GPU racks at scale, or that it carries a $15 billion backlog through 2027. Very few people know that Bloom Energy can deploy data center power in 12 months while the grid takes 5-7 years. These are the disconnects we invest in.

The buildout has a floor. Even in a recession, even if AI revenue disappoints in the short term, the capex is committed. Microsoft's CFO described their data center investments as "monetized over 15 years." These are not speculative bets that can be withdrawn on a bad quarter. The contracts are signed. The buildings are going up.

VII. The Timeline

This is not a trade. It is a multi-year buildout.

Phase 2 (2025-2026) - Happening now. Power, cooling, data centers, and networking. The market is waking up to the infrastructure bottleneck. Stocks in this phase have moved 100-200% and still have room to run because the capex is accelerating.

Phase 3 (2027-2028) - Emerging. Edge computing, robotics, defense AI, autonomous systems. DARPA and DoD positioning for a Manhattan Project-scale AI deployment. Defense primes with AI capabilities (Palantir, Kratos, Rocket Lab) and U.S.-onshored semiconductor fabs (Intel, GlobalFoundries) become national security assets.

Phase 4 (2028-2030) - Not yet visible. Software monetization overtakes hardware. Inference becomes 80% of compute workloads. The winners are the cloud platforms and enterprise software companies that monetize the infrastructure built in Phases 2-3.

Goldman Sachs projects $7.6 trillion in cumulative AI infrastructure capital expenditure between 2026 and 2031. That is not a typo. Seven point six trillion dollars.

VIII. What Could Go Wrong

We are not blind to the risks. This is real capital and we take every bear case seriously.

Scaling Laws Break

If AI models stop getting better with more compute, the capex thesis weakens. Current evidence suggests a slowdown: GPT-5 used less pretraining compute than GPT-4. Epoch AI estimates compute efficiency is doubling every 7.6 months, which could reduce absolute demand. We assign 50-60% probability to material diminishing returns by 2027-2028.
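For reference, the mechanical implication of that efficiency estimate: the compute needed for a fixed capability shrinks by a factor of 2^(t/7.6) after t months. A short sketch; the time horizons chosen are illustrative:

```python
# Compute efficiency gain implied by a doubling every 7.6 months (Epoch AI estimate cited above).

DOUBLING_MONTHS = 7.6

for months in (12, 24, 36):
    gain = 2 ** (months / DOUBLING_MONTHS)
    print(f"after {months} months: ~{gain:.1f}x less compute for the same capability")
```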

Our counter: Even if training scales less efficiently, inference explodes. Every ChatGPT query, every enterprise AI agent, every autonomous vehicle decision requires inference compute. The installed base of AI users grew from zero to 1.8 billion in 41 months. Inference demand grows with every additional user, every additional application, every additional industry that adopts AI.
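A toy model shows how the inference math compounds. Every parameter below other than the user count is a hypothetical placeholder, chosen to show the structure of the demand rather than to estimate it:

```python
# Deliberately simple inference demand model:
# total daily compute = users x queries x tokens x FLOPs per token.

users            = 1.8e9   # installed base cited above
queries_per_day  = 10      # assumed
tokens_per_query = 1_000   # assumed (prompt + response)
flops_per_token  = 2e12    # assumed, order-of-magnitude for a large model

daily_flops = users * queries_per_day * tokens_per_query * flops_per_token
print(f"~{daily_flops:.1e} inference FLOPs per day")
# Demand scales linearly with each factor: more users, more queries per user,
# longer contexts, larger models.
```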

Regulatory Disruption

The EU AI Act is live. Carbon pricing for data centers is under consideration. Data localization mandates could fragment global infrastructure. We assign 70-80% probability that at least one material regulatory constraint emerges by end of 2027.

Our counter: Regulation slows deployment but does not reverse it. The U.S. government is actively incentivizing domestic AI infrastructure through CHIPS Act funding and Department of Energy support. The national security imperative ensures that U.S. AI infrastructure investment is not optional - it is strategic.

Hyperscaler Pullback

If one major hyperscaler cuts capex, the supply chain feels it. Meta cut capex 19% in 2023 and it rippled through semiconductors and infrastructure suppliers.

Our counter: This is 2026, not 2023. AI is no longer a side bet. Azure AI drove 52% of Microsoft's cloud growth. AWS AI workloads are growing triple digits. Google Cloud AI revenue is $12 billion annualized. Cutting AI capex would mean cutting their fastest-growing revenue line. The CFOs have been explicit: this money is committed.

Customer Concentration

Five companies drive most of AI infrastructure demand. That concentration creates fragility. We manage this by investing across the supply chain rather than betting on any single hyperscaler.

IX. The Investment Call

Here is what we believe, stated plainly.

AI infrastructure is the picks-and-shovels play of the decade. Not the apps. Not the models. Not the chatbots. The physical infrastructure that AI cannot exist without.

The demand is not theoretical. It is $622 billion in committed hyperscaler capex this year alone, rising to a projected $7.6 trillion through 2031.

The supply constraints are real. Power queues are 5-7 years. Transformer lead times are 4 years. HBM is sold out. EUV lithography has one supplier on Earth.

The window is now. The stocks in our universe have moved 100-238% year-to-date, and the thesis is still early because the buildout has barely started. Wall Street is underestimating the spending every single quarter.

We are not trying to pick the next ChatGPT. We are investing in the power plants, the cooling systems, the memory chips, the fiber optic cables, and the buildings that every AI company needs. We find them before the crowd, we invest while the thesis is still ours, and we exit when consensus catches up.

The math is clear. The capital is committed. The infrastructure does not exist yet but it must be built.

The companies that build it will be paid. That is the thesis.