Broadcom just announced it expects to sell over $100 billion in AI chips in 2027 - and the company says it has already locked in the supply chain to make it happen.
The chipmaker reported Q1 2026 earnings this week with AI revenue of $8.4 billion, more than double the year-ago figure. But the headline number was CEO Hock Tan’s projection for next year, backed by six major customers building custom AI accelerators: Google, Meta, Anthropic, OpenAI, and likely Fujitsu and ByteDance.
The Custom Chip Bet Pays Off
While NVIDIA dominates the AI chip market with general-purpose GPUs, Broadcom has carved out a different niche: building custom silicon for hyperscalers who want chips tailored to their specific workloads.
The scale of deployment is now measured in gigawatts of compute power:
Anthropic: Broadcom is delivering 1 gigawatt of custom TPUs in 2026, with demand expected to hit 3 gigawatts in 2027.
OpenAI: The company’s first custom Broadcom chip ships in 2027, targeting over 1 gigawatt of capacity as part of a massive 10-gigawatt compute buildout.
Meta: The MTIA custom accelerator roadmap is “alive and well,” according to Tan, with production already underway and multi-gigawatt scaling planned for 2027 and beyond.
For context, 1 gigawatt is roughly enough power to run 750,000 homes. Each of these AI deployments amounts to a small power plant’s worth of compute.
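The homes figure is easy to sanity-check. A minimal sketch, assuming an average continuous household draw of about 1.33 kW (a hypothetical round number in line with typical US residential usage of roughly 10,000–11,000 kWh per year; actual draw varies widely by region and season):

```python
# Sanity-check the "1 gigawatt ≈ 750,000 homes" rule of thumb.

GIGAWATT_W = 1_000_000_000   # 1 GW expressed in watts
AVG_HOME_DRAW_W = 1_330      # assumed average continuous draw per home, in watts

homes_powered = GIGAWATT_W / AVG_HOME_DRAW_W
print(f"1 GW powers roughly {homes_powered:,.0f} homes")  # ≈ 751,880 homes
```

At that assumed draw, 1 GW works out to about 750,000 homes; a 3-gigawatt deployment like Anthropic’s projected 2027 demand would scale to well over two million.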
Supply Chain Locked Through 2028
The most reassuring detail for investors: Broadcom claims it has secured capacity for leading-edge wafers, high-bandwidth memory, and substrates through 2028. That addresses the persistent worry that demand might outrun available manufacturing capacity.
“We have fully secured capacity of these components for ‘26 through ‘28,” Tan said on the earnings call.
Q2 guidance points to $10.7 billion in AI revenue - roughly 27% sequential growth on Q1’s $8.4 billion, which would represent continued acceleration. The company’s adjusted EBITDA margin of 68% beat analyst expectations, pushing back on concerns that Anthropic’s aggressive ramp might compress margins.
The Bigger Picture
According to TrendForce, ASIC-based AI servers (like those using Broadcom’s custom chips) will represent 27.8% of the AI server market in 2026 - the highest share since 2023. The top five North American cloud providers plan to increase capital spending 40% year-over-year this year.
The custom silicon war reveals something important about where AI infrastructure is heading. NVIDIA isn’t losing - it still dominates training workloads and has its own roadmap with Rubin chips coming later this year. But the hyperscalers are betting billions that purpose-built chips will deliver better performance-per-watt for their specific use cases.
For Anthropic, that means chips optimized for the particular way Claude processes and generates text. For OpenAI, chips tuned for GPT’s architecture. For Meta, accelerators built for both training and serving recommendations to billions of users.
Who Benefits
Broadcom: The obvious winner, now projecting AI revenue on a scale that would put it in the same league as NVIDIA’s AI business.
TSMC: All these custom chips get manufactured somewhere, and Taiwan Semiconductor remains the chokepoint for leading-edge production.
Hyperscalers: Custom silicon lets Google, Meta, and the AI labs reduce dependence on NVIDIA and potentially achieve better economics at massive scale.
Power companies: Building gigawatts of compute requires gigawatts of electricity. Data center energy demand is becoming a national infrastructure challenge.
The losers are harder to identify - this isn’t a zero-sum game yet. But smaller cloud providers without the resources to design custom chips may find themselves at a permanent cost disadvantage as the giants optimize their silicon stacks.