Meta Locks In Millions of Nvidia Chips in Unprecedented AI Deal

A multi-billion dollar, multiyear pact makes Meta the first to deploy standalone Nvidia Grace CPUs at scale

Meta just signed the biggest chip procurement deal in the history of AI infrastructure. The multiyear agreement, announced February 17, covers “millions” of Nvidia processors — GPUs, CPUs, and networking hardware — across Meta’s entire U.S. data center footprint. Chip analyst Ben Bajarin of Creative Strategies estimates the deal is worth tens of billions of dollars.

The numbers are staggering even by 2026 standards. Meta plans to spend $135 billion on AI infrastructure this year alone, part of a $600 billion U.S. data center investment through 2028. This single Nvidia deal absorbs a significant share of that budget and spans multiple chip generations, from current Blackwell GPUs to the upcoming Vera Rubin architecture expected in 2027.

But the real story isn’t the GPUs. It’s the CPUs.

The Grace CPU Move

Meta became the first company to deploy Nvidia’s Grace central processing units as standalone chips at data center scale. That distinction matters more than it might sound.

Until now, Nvidia’s Grace CPUs shipped paired with GPUs in “Grace Hopper” superchips — a bundle designed for AI training workloads where GPU and CPU sit on the same board. Meta is unbundling them. The company is using Grace-only server configurations to power general-purpose and agentic AI workloads that don’t need GPU acceleration.

The Grace processor packs 72 Arm-based Neoverse V2 cores clocked up to 3.35 GHz with up to 480 GB of memory. Meta reports a 2x performance-per-watt improvement on backend workloads compared with its existing CPU fleet. For a company serving billions of user interactions a day, doing the same CPU-bound work on roughly half the power translates directly to the bottom line.
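The power math behind that claim is straightforward and can be sketched with a few lines of arithmetic. The fleet size and per-server wattage below are hypothetical placeholders for illustration, not figures from Meta or the deal; only the 2x efficiency gain comes from the article.

```python
# Illustrative estimate: what a 2x performance-per-watt gain means
# at constant throughput. All inputs except the 2x gain are
# hypothetical placeholders, not Meta figures.

def power_at_constant_work(baseline_watts: float, perf_per_watt_gain: float) -> float:
    """Power needed to do the same amount of work after an efficiency gain."""
    return baseline_watts / perf_per_watt_gain

baseline_server_watts = 500.0   # hypothetical per-server draw
servers = 100_000               # hypothetical CPU fleet size
gain = 2.0                      # the reported 2x perf-per-watt improvement

new_watts = power_at_constant_work(baseline_server_watts, gain)
fleet_savings_mw = servers * (baseline_server_watts - new_watts) / 1e6

print(new_watts)         # 250.0 -> watts per server at the same throughput
print(fleet_savings_mw)  # 25.0  -> megawatts saved across the fleet
```

At data-center scale, megawatts saved compound into avoided power contracts and cooling capacity, which is why the efficiency number, not the core count, is the headline figure for Meta.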

This is Nvidia doing something new: competing directly in the CPU-only server market that Intel and AMD have owned for decades.

The Strategy

Jensen Huang’s pitch to Meta extends well beyond selling more chips. The deal includes co-design agreements where Nvidia and Meta jointly optimize CPU ecosystem libraries and software stacks specifically for Meta’s workloads. Meta also gets priority access to Nvidia’s next-generation Vera Rubin architecture, with plans to deploy standalone Vera CPUs at scale starting in 2027.

The partnership covers three product lines simultaneously:

  • Blackwell GPUs for current AI training and inference
  • Grace CPUs as standalone processors for agentic AI and general workloads
  • Spectrum-X Ethernet networking switches integrated into Meta’s Facebook Open Switching System

This isn’t a purchase order. It’s an infrastructure marriage. Mark Zuckerberg described the goal as building “leading-edge clusters using their Vera Rubin platform to deliver personal superintelligence to everyone in the world.” Huang responded that “no one deploys AI at Meta’s scale.”

Both statements are positioning, but the underlying deal structure reveals something more concrete: Meta is betting its entire AI infrastructure roadmap on Nvidia hardware for at least the next three years.

Who Wins

Nvidia gains its largest single customer commitment and, more importantly, establishes Grace as a credible standalone data center CPU. If Meta validates Grace in production at this scale, every other hyperscaler will take a second look. Nvidia’s revenue from Meta alone could represent a meaningful percentage of its total data center business.

Meta locks in supply at a time when GPU allocation remains constrained. Blackwell chips have been on backorder for months, and Nvidia’s next-generation Rubin GPUs only recently entered production. A multiyear, multigenerational commitment likely comes with preferential allocation — exactly the advantage Meta needs while spending $135 billion this year on AI.

The deal also positions Meta as the primary deployment partner for Vera Rubin, meaning Meta’s engineering team gets early access to shape how the next architecture works in production. That’s a competitive edge money alone can’t buy.

Who Loses

Intel takes the hardest hit. Meta deploying millions of Nvidia Grace CPUs in roles that previously required Intel Xeon processors is an existential threat to Intel’s data center business. Intel’s answers in the server market, Clearwater Forest and Diamond Rapids, aren’t expected to be competitive until 2028. That’s two years of Meta building Grace-based infrastructure that won’t switch back.

AMD faces a different problem. AMD’s EPYC server CPUs have been gaining data center share against Intel, but Nvidia just jumped into that fight with the world’s largest social media company as its anchor customer. AMD’s upcoming MI500 GPUs still compete with Nvidia on the training side, but losing CPU sockets to Grace at Meta’s scale narrows AMD’s total addressable market.

Custom chip efforts at other hyperscalers suddenly look less compelling. Amazon has Graviton, Google has its TPUs, Microsoft has Maia. Meta had been exploring custom silicon too, but this deal suggests Meta decided it’s cheaper and faster to buy Nvidia’s integrated stack than to build its own. If that calculus holds at Meta’s scale, smaller companies will reach the same conclusion.

The Bigger Picture

This deal arrives amid an arms race that shows no signs of slowing. Hyperscalers are projected to spend at least $625 billion on AI infrastructure in 2026. Custom ASIC shipments are growing at 44.6% annually, while GPU shipments grow at 16.1% — but Meta’s commitment to Nvidia suggests GPUs (and now CPUs) from a single vendor can still win against the build-your-own approach.

The $600 billion long-term commitment also locks Meta into a specific vision of AI infrastructure: centralized, GPU-heavy, Nvidia-dependent. If the industry shifts toward more efficient architectures, smaller models, or edge computing, Meta will be carrying a lot of expensive hardware optimized for a different era.

For now, though, Meta is making the bet that scale wins. And Nvidia is the house that supplies the chips.