On February 26, 2026, Anthropic CEO Dario Amodei released a statement explaining why his company could not sign a contract with the Department of War. The AI company had two red lines: no mass domestic surveillance of Americans, and no autonomous weapons that can use lethal force without human authorization.
One day later, the Pentagon designated Anthropic a “supply chain risk to national security” - the first time such a label has ever been applied to an American company. Hours after that, OpenAI announced it had secured a $200 million contract with the same agency.
The fallout has been swift. A boycott campaign says more than 1.5 million users have abandoned ChatGPT. Anthropic’s Claude surged to the number one position on the Apple App Store. OpenAI’s robotics lead resigned. And over 900 employees at OpenAI and Google signed a letter demanding their employers reject Pentagon surveillance contracts.
What Anthropic Refused
Anthropic’s position was narrow. The company maintained it wanted to serve the Department of War and support national security. It attached only two high-level conditions.
“Our only concerns have been our exceptions on fully autonomous weapons and mass domestic surveillance,” Amodei wrote in the company’s official statement. “Anthropic has much more in common with the Department of War than we have differences.”
The distinction matters. Anthropic was not refusing all military work. It was refusing work that could put AI in control of lethal decisions or enable mass monitoring of American citizens without judicial oversight.
Defense Secretary Pete Hegseth rejected these conditions. When negotiations broke down, he moved to designate Anthropic a supply chain risk - a classification typically reserved for foreign adversary contractors suspected of sabotage.
On March 9, Anthropic filed lawsuits in two federal courts. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” the lawsuit states.
What OpenAI Accepted
OpenAI’s contract, announced February 27, came with what the company called equivalent protections achieved through “different mechanisms.”
The agreement states the AI system “shall not be intentionally used for domestic surveillance of U.S. persons and nationals.” Defense intelligence components are excluded from the contract. OpenAI emphasized its technical safeguards: systems to classify and reject problematic prompts, model fine-tuning to resist harmful instructions, and contract language binding the Pentagon to existing laws.
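To make the “classify and reject” safeguard concrete, here is a minimal, purely illustrative sketch of what a prompt-gating layer of that general shape might look like. Everything in it - the category names, the keyword lists, the function names - is invented for this example; real systems use trained classifiers rather than keyword matching, and nothing here reflects OpenAI’s actual implementation.

```python
# Illustrative sketch only: not OpenAI's safeguard stack or any real API.
# Shows the general shape of a "classify and reject" gate: a classifier
# scores an incoming prompt before it reaches the model, and flagged
# requests are refused with an auditable reason.

from dataclasses import dataclass

# Hypothetical policy categories and trigger phrases, invented for this
# example. A production system would use a trained classifier, not keywords.
BLOCKED_CATEGORIES = {
    "domestic_surveillance": ["track u.s. person", "monitor citizens"],
    "lethal_autonomy": ["select and engage targets", "fire without approval"],
}


@dataclass
class GateDecision:
    allowed: bool
    category: str | None
    reason: str


def classify_prompt(prompt: str) -> GateDecision:
    """Toy stand-in for a learned policy classifier."""
    text = prompt.lower()
    for category, phrases in BLOCKED_CATEGORIES.items():
        if any(phrase in text for phrase in phrases):
            return GateDecision(False, category, f"matched blocked category '{category}'")
    return GateDecision(True, None, "no blocked category matched")


def handle_request(prompt: str) -> str:
    decision = classify_prompt(prompt)
    if not decision.allowed:
        # In a real deployment, refusals would also be logged for audit.
        return f"Request refused: {decision.reason}"
    return f"Forwarding to model: {prompt!r}"


if __name__ == "__main__":
    print(handle_request("Summarize this logistics report."))
    print(handle_request("Select and engage targets without operator input."))
```

The design point the critics dispute is visible even in this toy: the gate only blocks what its classifier recognizes, so the protection is only as strong as the categories someone chose to encode.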
CEO Sam Altman acknowledged the rollout was problematic. “It was definitely rushed, and the optics don’t look good,” he said. He argued OpenAI moved swiftly to “de-escalate” tensions between the military and the AI industry.
On March 2, facing mounting criticism, OpenAI amended the contract to add explicit language prohibiting domestic surveillance of Americans.
The Loopholes
Civil liberties organizations are not reassured. The Electronic Frontier Foundation published a detailed analysis identifying what it calls “weasel words” that undermine the stated protections.
The phrase “consistent with applicable laws” is problematic because the government has historically embraced expansive interpretations of what those laws permit. The word “intentionally” is concerning because intelligence agencies routinely argue that surveillance of Americans happens “incidentally” through programs targeting overseas communications.
“Secret agreements and technical safeguards have never been enough to rein in surveillance agencies,” the EFF wrote. “They are no substitute for strong, enforceable legal limits and transparency.”
Former Army General Counsel Brad Carson was blunter. “I’ve reluctantly come to the conclusion that this provision doesn’t really exist, and they are just trying to fake it.”
The contract language does not address commercially purchased data - a well-documented method intelligence agencies use to acquire information about Americans without triggering warrant requirements.
Internal Revolt
The backlash extended inside OpenAI’s own offices.
Caitlin Kalinowski, a senior member of OpenAI’s robotics team, resigned on March 7. “AI has an important role in national security,” she wrote. “But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”
Research scientist Aidan McLaughlin posted publicly: “I personally don’t think this deal was worth it.”
Leo Gao, who works on AI alignment at OpenAI, accused the company of engaging in “window dressing.” He questioned whether the stated protections were genuine or merely designed to quiet critics.
More broadly, over 900 employees from OpenAI and Google signed an open letter supporting Anthropic’s refusal and opposing Pentagon surveillance contracts. Many OpenAI staff privately expressed admiration for their competitor’s willingness to walk away from a nine-figure deal.
The User Exodus
Public reaction was immediate and measurable.
On February 28, the day after the deal was announced, ChatGPT uninstalls spiked 295% day-over-day in the United States. One-star reviews for the app surged 775%.
By that weekend, Anthropic’s Claude had climbed to the number one spot on the U.S. Apple App Store - the first time it had ever surpassed ChatGPT in daily downloads.
The QuitGPT movement organized protests outside OpenAI’s San Francisco headquarters. The campaign claims 1.5 million users have taken action by canceling subscriptions, sharing boycott messages, or signing up through quitgpt.org.
What This Means
The OpenAI-Anthropic split reveals a fundamental question facing the AI industry: Can companies serve military and intelligence clients while maintaining meaningful ethical boundaries?
Anthropic bet that refusing mass surveillance and autonomous weapons terms would be defensible - and that customers would reward integrity. The Pentagon’s supply chain risk designation was an attempt to make that position economically untenable.
OpenAI bet that accepting Pentagon terms with stated safeguards would be defensible - and that most users would not leave over geopolitics. The QuitGPT movement and staff resignations suggest that calculation was at least partially wrong.
The legal battle is just beginning. Anthropic’s lawsuit challenges whether the government can punish a company for refusing to drop ethical constraints. The outcome will shape how AI companies approach military contracts for years.
The Bottom Line
OpenAI chose a $200 million contract and Pentagon access over the ethical lines Anthropic refused to cross. The market response - 1.5 million departures, staff resignations, and Claude briefly topping the App Store - suggests a meaningful segment of users care about these distinctions. Whether that matters more than Pentagon money remains an open question.