AMD’s stock surged 8% on news of a multi-year, 6-gigawatt agreement with Meta to power the company’s next-generation AI data centers. The deal, valued at roughly $60 billion over five years, represents the largest single contract ever signed for AI chips outside of NVIDIA.
Meta will deploy custom AMD Instinct MI450 GPUs starting in the second half of 2026. It’s a direct bet against NVIDIA’s Vera Rubin platform, and the biggest validation yet that AMD can compete at hyperscale.
The MI450 Specs
AMD’s Instinct MI450 series uses the CDNA 5 architecture and will be manufactured on TSMC’s N2 (2nm-class) process — potentially beating NVIDIA to the cutting-edge node. NVIDIA’s competing Rubin GPUs are set for TSMC’s N3.
The numbers are aggressive:
- 40 petaflops of FP4 compute performance
- 20 petaflops of FP8 performance (double the MI350)
- 432GB of HBM4 memory
- 19.6TB/sec memory bandwidth
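The listed figures imply some useful back-of-envelope ratios. A minimal sketch, using only the peak numbers above (vendor marketing figures, not measured throughput):

```python
# Ratios derived from the MI450 spec list above. All inputs are the
# vendor's peak figures; real workloads land well below these ceilings.
FP8_FLOPS = 20e15      # 20 petaflops FP8
FP4_FLOPS = 40e15      # 40 petaflops FP4
HBM_BYTES = 432e9      # 432 GB HBM4
BW_BYTES_S = 19.6e12   # 19.6 TB/s memory bandwidth

# Roofline "ridge point": arithmetic intensity (FLOPs per byte moved)
# needed before a kernel becomes compute-bound rather than memory-bound.
fp8_ridge = FP8_FLOPS / BW_BYTES_S   # ~1,020 FLOPs/byte
fp4_ridge = FP4_FLOPS / BW_BYTES_S   # ~2,041 FLOPs/byte

# Time to stream the entire 432 GB of HBM once at peak bandwidth.
full_sweep_ms = HBM_BYTES / BW_BYTES_S * 1e3   # ~22 ms

print(f"FP8 ridge point: {fp8_ridge:.0f} FLOPs/byte")
print(f"FP4 ridge point: {fp4_ridge:.0f} FLOPs/byte")
print(f"Full-HBM sweep: {full_sweep_ms:.1f} ms")
```

The high ridge points are the point of the design: most inference kernels sit far below ~1,000 FLOPs/byte, so the 19.6 TB/s figure, not the petaflop figure, is what usually bounds serving throughput.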
AMD is also building rack-scale solutions. The MI450X IF128 will pack 128 GPUs into a single rack, challenging NVIDIA’s VR200 NVL144 configuration.
The first 1GW of MI450 deployments arrive in H2 2026, scaling from there.
Beyond Meta
Meta isn’t the only major customer. Oracle Cloud Infrastructure announced plans to deploy 50,000 Instinct MI450 GPUs beginning in the second half of 2026. The deal signals enterprise confidence in AMD’s inference performance and software ecosystem.
AMD has also secured commitments from OpenAI, which is diversifying its chip supply away from exclusive NVIDIA dependency. The OpenAI partnership suggests the MI450’s software stack — particularly ROCm compatibility — has reached the reliability threshold needed for production AI workloads.
What This Means for NVIDIA
NVIDIA still dominates AI training, where its CUDA ecosystem and networking (NVLink, InfiniBand) create massive switching costs. But the AI market is shifting toward inference — running trained models in production to serve requests, rather than developing new ones.
Inference workloads have different economics. Power efficiency and cost-per-token matter more than peak training throughput. AMD has historically competed better on price-performance, and Meta’s deal suggests the MI450 delivers competitive inference economics.
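Why bandwidth and power dominate inference economics can be sketched with simple arithmetic. In memory-bound decoding, each generated token requires streaming the model weights from HBM, so bandwidth caps tokens per second. The model size, power draw, and electricity price below are illustrative assumptions, not MI450 or Meta data:

```python
# Back-of-envelope cost-per-token for bandwidth-bound decoding.
# Only BW_BYTES_S comes from the MI450 spec list; everything else
# is a hypothetical illustration value.
BW_BYTES_S = 19.6e12       # MI450 memory bandwidth (19.6 TB/s)
MODEL_BYTES = 70e9 * 1     # assumed 70B-parameter model at 1 byte/weight (FP8)
GPU_POWER_KW = 1.0         # assumed ~1 kW per accelerator (illustrative)
PRICE_PER_KWH = 0.10       # assumed $0.10/kWh (illustrative)

# Batch-1 decode ceiling: weights must be read once per token.
# (Ignores KV-cache traffic and the batching that real servers use.)
max_tokens_s = BW_BYTES_S / MODEL_BYTES   # ~280 tokens/s

# Energy and electricity cost per million tokens at that ceiling.
energy_per_mtok_kwh = GPU_POWER_KW * (1e6 / max_tokens_s) / 3600
cost_per_mtok = energy_per_mtok_kwh * PRICE_PER_KWH

print(f"Decode ceiling: {max_tokens_s:.0f} tokens/s")
print(f"Energy: {energy_per_mtok_kwh:.2f} kWh per million tokens")
print(f"Power cost: ${cost_per_mtok:.3f} per million tokens")
```

The takeaway is directional, not precise: at fixed bandwidth, every watt saved and every byte shaved off the model flows straight into cost-per-token, which is why a chip can lose on peak training throughput and still win inference deals.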
NVIDIA faces additional pressure:
- EU antitrust investigation into bundling practices
- Memory shortage forcing 40% production cuts through mid-2026
- Senators probing the $20B Groq licensing deal for merger evasion
- AMD securing the manufacturing node advantage on 2nm
Who Wins, Who Loses
AMD transforms from a credible alternative into a scaled competitor. $60 billion in committed revenue provides the R&D investment runway to sustain competition across multiple chip generations.
Meta reduces dependency on a single vendor. Mark Zuckerberg’s company has been public about concerns over NVIDIA’s pricing power and supply constraints. Diversification is risk management.
NVIDIA retains training dominance but faces real competition in the inference market. The company’s premium pricing assumes limited alternatives. AMD’s Meta deal proves alternatives exist at scale.
Hyperscale customers benefit from competition. AWS, Google, and Microsoft will have leverage in future NVIDIA negotiations. Even if they don’t switch, the credible threat of switching extracts better terms.
The AI chip market is no longer a monopoly. It’s now a duopoly with NVIDIA facing a funded, manufacturing-advantaged competitor with $60 billion in committed demand. Jensen Huang’s next earnings call should be interesting.