Anthropic was founded on a promise: build the most capable AI systems in the world, but do it safely. The company’s researchers left OpenAI because they thought the race for capability was outstripping the work on alignment. They structured Anthropic as a public benefit corporation so its board could prioritize safety over profit.
That was the pitch. Here’s the receipt.
The Numbers
On April 24, Google announced it would invest up to $40 billion in Anthropic — $10 billion upfront at a $350 billion valuation, with another $30 billion contingent on performance milestones tied to compute consumption on Google’s TPU infrastructure. That last part matters: the more Anthropic uses Google’s hardware, the more money it gets.
Add that to the existing capital stack. Google’s prior investments, roughly $3 billion across earlier rounds. Amazon’s $8 billion from 2023-2024, plus an expansion of up to $25 billion. A $13.7 billion Series E. Counting only the two cloud giants, total hyperscaler commitments to a single AI safety lab now stand at roughly $75 billion.
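For readers keeping score, the arithmetic is simple enough to check. A minimal tally, assuming the Series E counts as venture money rather than hyperscaler money, and taking Google’s prior rounds at roughly $3 billion:

```python
# Back-of-envelope tally of the capital stack described above.
# All figures in billions of USD, as cited in this piece.

google_new = 10 + 30     # $10B upfront plus $30B in compute-linked milestones
google_prior = 3         # earlier Google rounds, roughly
amazon_prior = 8         # Amazon, 2023-2024
amazon_expansion = 25    # announced expansion, "up to"

series_e = 13.7          # venture round; excluded from the hyperscaler total

hyperscaler_total = google_new + google_prior + amazon_prior + amazon_expansion
print(f"Google total:      ~${google_new + google_prior}B")   # ~$43B
print(f"Hyperscaler total: ~${hyperscaler_total}B")           # ~$76B, i.e. "roughly $75 billion"
```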
That’s not venture capital anymore. That’s infrastructure dependency.
10 Gigawatts of Borrowed Power
Google’s deal reserves 5 gigawatts of dedicated TPU capacity for Anthropic. Amazon provides another 5 gigawatts. Together, that’s 10 GW of compute — roughly one-third of what OpenAI says it needs by 2030, and about one-eighth of projected U.S. AI data center demand.
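Those fractions imply the denominators. A quick sketch; the ~30 GW and ~80 GW figures below are back-calculated from the one-third and one-eighth claims above, not independently sourced:

```python
# Implied totals behind the fractions above (back-of-envelope, not sourced).

anthropic_gw = 5 + 5                     # Google's 5 GW of TPUs plus Amazon's 5 GW

implied_openai_need = anthropic_gw * 3   # "one-third of what OpenAI needs" -> 30 GW
implied_us_demand = anthropic_gw * 8     # "one-eighth of U.S. demand"      -> 80 GW

print(f"Anthropic's reserved capacity:      {anthropic_gw} GW")
print(f"Implied OpenAI 2030 target:         ~{implied_openai_need} GW")
print(f"Implied U.S. AI data center demand: ~{implied_us_demand} GW")
```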
Every watt of that capacity runs on hardware Anthropic doesn’t own, in data centers Anthropic doesn’t control, on chips designed by its investors. The company can’t train frontier models without Google’s Trillium and Ironwood TPUs or Amazon’s Trainium clusters. This isn’t a partnership. It’s a utility relationship where the utility companies also happen to be your competitors.
Google sells Gemini. Amazon sells Nova. Both compete with Claude in the enterprise market. And both now have a financial interest in Anthropic succeeding — but only up to the point where Claude doesn’t cannibalize their own models.
The Independence Question
Anthropic maintains it has operational independence. Google’s ownership is capped at around 15%, with no board seats. The public benefit corporation structure technically allows the board to prioritize safety over shareholder returns.
But $75 billion in commitments from two hyperscalers creates a gravity well that no corporate structure can fully resist. When your existence depends on continued access to another company’s infrastructure, the word “independence” means something different than it does on paper.
The FTC flagged this pattern in its 2025 study on AI partnerships, warning that cloud-provider investments in AI developers risk “locking in the market dominance of large incumbent technology firms.” Senators Elizabeth Warren and Ron Wyden have sent letters probing both the Google-Anthropic and Microsoft-OpenAI relationships over antitrust concerns.
The structure — minority stake, no board seats, milestone-based funding — was almost certainly designed with these proceedings in mind. But $43 billion in total Google investment and a multi-gigawatt compute dependency create economic entanglement that percentage ownership doesn’t capture.
The Lobbying Arms Race
Perhaps the most telling indicator of where Anthropic’s priorities now sit: in Q1 2026, the company’s federal lobbying spend topped OpenAI’s for the first time, $1.6 million to $1.5 million.
A safety-focused research lab doesn’t typically outspend its competitors on K Street. A company protecting a $350 billion valuation does.
Why This Should Worry You
The concentrated structure of frontier AI is now unmistakable. Microsoft controls OpenAI’s compute. Google and Amazon split Anthropic’s. The three hyperscalers — Microsoft, Google, Amazon — collectively underwrite virtually all frontier AI development. No lab can train a competitive model without one of them.
This matters for safety in two ways. First, the labs making safety-critical decisions are now financially entangled with the companies those decisions might constrain. When your cloud provider is also your investor, board representation matters less than the phone call about your next compute allocation.
Second, it concentrates the decision-making about which AI models exist, who gets access, and under what terms into a remarkably small group. Anthropic’s restricted release of Claude Mythos — limited to a consortium of tech companies including Google, Amazon, Microsoft, and Apple — shows how this plays out. The most capable AI systems are now gatekept by the same companies that fund their creation.
What’s Being Done (And Why It’s Not Enough)
A formal antitrust investigation is plausible by Q4 2026, but the regulatory apparatus isn’t built for this kind of relationship. Traditional merger review looks at ownership percentages and board seats. The AI hyperscaler model — minority stakes, compute dependencies, milestone-based funding — slips through those frameworks.
Bloomberg Law reported that Google can keep its Anthropic investment even under the current antitrust remedies proposed in its search monopoly case. The investment doesn’t technically constitute control. It just constitutes everything else.
Anthropic’s public benefit corporation structure remains its strongest defense — at least on paper. But the test of that structure isn’t whether the board can prioritize safety. It’s whether it will, when the companies funding Anthropic’s existence have different priorities.
Seventy-five billion dollars buys a lot of safety research. It also buys a lot of leverage.