On April 27, more than 600 Google employees — including over 20 executives and DeepMind researchers — sent an open letter to CEO Sundar Pichai demanding that Google refuse to let the Pentagon use its AI on classified networks. The next day, Google signed the deal anyway.
The agreement gives the U.S. Department of Defense access to Gemini, Google’s most capable AI system, for “any lawful government purpose” on classified military networks. No content restrictions. No use-case limitations. No external oversight.
This is a story about what happens when an AI company’s principles meet its balance sheet.
From Project Maven to “Any Lawful Purpose”
If this feels familiar, it should. In 2018, roughly 4,000 Google employees signed an internal petition protesting Project Maven, a Pentagon program using AI to analyze drone footage. A dozen people resigned. Google responded by publishing AI principles that explicitly pledged the company would not build weapons or surveillance technology.
That pledge lasted about seven years. In February 2025, Google quietly removed the weapons and surveillance clause from its AI principles. DeepMind CEO Demis Hassabis co-authored a blog post explaining the change: “There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape.”
In March 2026, Google deployed Gemini to the Pentagon’s three-million-strong workforce on unclassified systems. And now, the classified networks too.
The scale difference between 2018 and 2026 is worth spelling out. Project Maven was one contract, one program, analyzing drone video. The new deal covers Google’s entire AI stack — Gemini, DeepMind’s research, the TPU chips powering inference — running on air-gapped military networks where nobody outside the Pentagon can audit what it does.
What the Employees Are Worried About
The letter to Pichai raised specific, concrete concerns:
No external oversight on classified networks. Google proposed contractual language to prevent Gemini from being used for domestic mass surveillance or autonomous weapons without human oversight. The Pentagon rejected this, insisting on the “any lawful government purpose” wording. On air-gapped military networks, Google has no technical ability to enforce any restrictions. Once the model is deployed behind a classified firewall, vendor controls disappear.
Lethal autonomous weapons. The employees cited the risk that Gemini could be used to develop weapons systems that select and engage targets without meaningful human control. With Google’s own safeguards unenforceable on classified networks, the distinction between “AI-assisted” and “AI-directed” targeting becomes a matter of Pentagon policy, not technical limitation.
Mass surveillance. Even though the contract supposedly covers only foreign intelligence, the employees argued there’s no mechanism to verify compliance once the system runs on classified infrastructure.
The signatories weren’t just rank-and-file engineers. Over 20 vice presidents and directors signed, including researchers from DeepMind — the team that built much of the underlying technology now heading to the Pentagon.
Anthropic Said No — and Got Blacklisted
The significance of Google’s deal comes into sharper focus when you look at what happened to Anthropic for taking the opposite position.
Anthropic refused to sign an agreement allowing the Pentagon to use Claude for “all lawful purposes.” The company insisted on contractual bans on mass domestic surveillance and fully autonomous weapons, arguing it had no “kill switch” to enforce its policies once models run on classified networks.
The Pentagon’s response: designating Anthropic a “supply-chain risk” and effectively blacklisting it from government contracts. A federal judge has since granted Anthropic an injunction while the case proceeds, but the message was clear — refuse the military’s terms and face consequences.
The White House is now reportedly drafting guidance to let agencies work around the blacklist and access Anthropic’s latest model, Mythos. But the precedent has already been set: companies that put boundaries on military AI use get punished; companies that don’t get rewarded.
Google saw what happened to Anthropic and made its choice.
The Principles That Weren’t
Here’s the timeline in full:
- 2018: 4,000 employees protest Project Maven. Google publishes AI principles banning weapons and surveillance.
- 2019: Maven contract expires. Palantir takes it over (the Maven program has since grown to $13 billion).
- February 2025: Google removes weapons and surveillance clause from AI principles.
- March 2026: Gemini deployed to Pentagon on unclassified systems.
- April 27, 2026: 600+ employees send letter opposing classified deployment.
- April 28, 2026: Google signs classified deal with “any lawful purpose” language.
The employees’ letter described the trajectory as “systematic.” That’s generous. Reading the timeline, it looks more like a company that wrote principles as crisis management in 2018 and spent the next eight years dismantling them as defense spending grew.
What This Means
Three things matter here beyond the immediate controversy.
The “kill switch” problem is real. Both Google and Anthropic acknowledged the same technical reality: once AI models run on air-gapped classified networks, the vendor loses all enforcement capability. Google decided that was acceptable. Anthropic decided it wasn’t. Neither company has a technical solution.
Military AI norms are being set right now. Every major AI company is making these decisions in 2026: OpenAI, xAI, Meta, Microsoft, Google, Anthropic. The terms they agree to now will define the baseline for how AI gets used in military and intelligence operations for years. Once “any lawful purpose” becomes the standard contract language, walking it back will be functionally impossible.
Employee dissent doesn’t work when money is big enough. In 2018, employee pushback forced real change at Google. In 2026, a larger number of more senior employees raised more specific objections and were overruled within 24 hours. The difference isn’t just corporate culture — it’s the size of the contracts. Defense AI spending now dwarfs what was at stake with Project Maven.
What You Can Do
If you use Google services and this concerns you:
- Review your data exposure. Google’s AI models are trained on user data from its products. Consider what information you’re providing through Gmail, Google Docs, Search, and Android. If the same company’s AI is running on classified military networks, that data pipeline takes on different implications.
- Explore alternatives. Proton for email, Signal for messaging, and DuckDuckGo or Brave Search for search. These don’t feed data into models that end up on military networks.
- Pay attention to AI principles disclosures. When companies update their ethics pages, read the changes carefully. Google’s February 2025 edit was easy to miss; a small monitoring sketch follows this list.
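If “read the changes carefully” sounds like tedious work, it can be automated. What follows is a minimal sketch, not a polished tool: the URL and snapshot filename are placeholders you would swap for the page you actually care about, and it assumes you run the script periodically (say, from cron). It fetches the page, strips the markup, and prints a unified diff against the last snapshot it saved, so a quiet edit to an ethics page surfaces on the next run.

```python
"""Minimal sketch: watch a public policy page for wording changes.

Assumptions: PAGE_URL and SNAPSHOT are placeholders; run the script
periodically (e.g. daily from cron). Standard library only.
"""

import difflib
import html.parser
import pathlib
import urllib.request

PAGE_URL = "https://example.com/ai-principles"  # placeholder URL
SNAPSHOT = pathlib.Path("principles_snapshot.txt")


class _TextExtractor(html.parser.HTMLParser):
    """Collect visible text, skipping script and style blocks."""

    def __init__(self):
        super().__init__()
        self._skip = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())


def fetch_text(url: str) -> str:
    """Download the page and return its visible text, one chunk per line."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        raw = resp.read().decode("utf-8", errors="replace")
    parser = _TextExtractor()
    parser.feed(raw)
    return "\n".join(parser.chunks)


def main() -> None:
    current = fetch_text(PAGE_URL)
    previous = SNAPSHOT.read_text() if SNAPSHOT.exists() else ""

    if previous and previous != current:
        # Show exactly which lines changed since the last snapshot.
        diff = difflib.unified_diff(
            previous.splitlines(), current.splitlines(),
            fromfile="previous", tofile="current", lineterm="",
        )
        print("\n".join(diff))
    elif not previous:
        print("First run: snapshot saved, nothing to compare yet.")
    else:
        print("No changes detected.")

    SNAPSHOT.write_text(current)


if __name__ == "__main__":
    main()
```

The same approach works for any terms-of-service or policy page you want to keep an eye on, and the Internet Archive’s Wayback Machine is a useful manual cross-check for edits you only notice after the fact.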
The broader issue isn’t whether you trust the Pentagon to use AI responsibly. It’s whether any vendor restrictions can survive contact with classified infrastructure. Right now, the honest answer from every company involved is: they can’t.