Anthropic vs. The Pentagon: The AI Safety Showdown That Could Reshape the Industry

Anthropic refused to let Claude be used for autonomous weapons and mass surveillance. Now it's blacklisted from the US government. Here's what happened and why it matters.

Anthropic, the company behind Claude, has been blacklisted from the entire US federal government. Defense Secretary Pete Hegseth designated it a “supply chain risk to national security” - a label normally reserved for companies tied to foreign adversaries. President Trump ordered all federal agencies to “IMMEDIATELY CEASE” use of Anthropic’s technology.

The company’s crime? Refusing to remove two restrictions from its AI: no mass surveillance of Americans, and no fully autonomous lethal weapons without human oversight.

What Actually Happened

Last summer, Anthropic signed a contract worth up to $200 million with the Pentagon. Claude became the first AI model deployed on the military’s classified networks, working through partnerships with Palantir and Amazon Web Services.

The arrangement worked until the Pentagon demanded new terms: Claude must be available for “all lawful purposes.” Anthropic said no.

CEO Dario Amodei told CNN the company “cannot in good conscience accede” to the Pentagon’s demands. The company’s position: it would allow Claude for missile defense, intelligence analysis, logistics planning - basically everything except two categories. But those two categories were non-negotiable.

The Pentagon’s response escalated quickly. Defense Secretary Hegseth threatened to invoke the Defense Production Act, a Cold War-era law that could force Anthropic to comply. When Anthropic held firm, the threats became action.

The Nuclear Hypothetical

According to the Washington Post, a Pentagon official posed a scenario to Amodei: If a nuclear-armed ICBM were hurtling toward the United States with 90 seconds until impact, and Claude were the only way to trigger a defensive response, but the company’s safeguards wouldn’t allow it - what then?

The Pentagon claims Amodei’s answer was dismissive: “You could call us and we’d work it out.” Anthropic called this account “patently false,” noting the company has already agreed to allow Claude for missile defense.

The dispute reveals a fundamental tension. The Pentagon wants blanket authorization. Anthropic wants specific use-case approval. Neither trusts the other’s assurances.

A Designation Without Precedent

The “supply chain risk” label is extraordinary. Defense experts told DefenseScoop this designation has historically targeted companies with ties to foreign adversaries - think Chinese telecoms, not American AI startups.

The immediate effects are significant. Military contractors cannot do business with Anthropic. Companies that integrated Claude into Pentagon work - including Palantir and AWS - face immediate complications. Federal agencies across the government, not just Defense, must phase out Anthropic products within six months.

Anthropic has promised to fight the designation in court, calling it “legally unsound” and warning it sets “a dangerous precedent for any American company that negotiates with the government.”

The company’s legal argument centers on statutory limits: according to Anthropic, Hegseth lacks authority to extend a supply chain risk designation beyond Pentagon contracts to the entire federal government.

The Defense Production Act Question

The Pentagon’s threat to invoke the Defense Production Act raised legal questions that Lawfare called “novel.”

The DPA has never been used to force a company to produce something it considers unsafe or to strip safety features from a product. Experts suggested that forcing Anthropic to retrain Claude without guardrails could raise First Amendment issues - if model training decisions are editorial choices, compelling different training compels speech the company rejects.

The Biden administration invoked the DPA for AI in its October 2023 executive order, establishing that AI development falls within the law’s scope. But reaching AI systems is different from reaching into them.

The Industry Response

More than 300 Google employees and over 60 OpenAI employees signed an open letter supporting Anthropic’s position.

“They’re trying to divide each company with fear that the other will give in,” the letter stated. “That strategy only works if none of us know where the others stand.”

OpenAI CEO Sam Altman weighed in, saying he doesn’t “personally think the Pentagon should be threatening DPA against these companies.” An OpenAI spokesperson confirmed the company shares Anthropic’s red lines against autonomous weapons and mass surveillance.

OpenAI then struck its own deal with the Pentagon, reportedly including restrictions similar to those Anthropic requested. The key difference: OpenAI framed its limits as compliance with existing law, while Anthropic argued the law hasn’t caught up with AI capabilities - particularly the aggregation of legally collected public data into mass surveillance.

Ilya Sutskever, the former OpenAI co-founder now running his own AI company, posted on X: “It’s extremely good that Anthropic has not backed down, and it’s significant that OpenAI has taken a similar stance.” He added: “In the future, there will be much more challenging situations of this nature, and it will be critical for the relevant leaders to rise to the occasion, for fierce competitors to put their differences aside.”

Why This Matters

The immediate stakes are financial. Anthropic loses a contract worth up to $200 million and access to the federal market. Palantir needs a new AI partner for classified work. AWS must untangle Claude from its government subcontracts.

But the precedent matters more.

If the federal government can blacklist an AI company for maintaining safety restrictions, every lab must now calculate whether ethical red lines are worth the risk. Google abandoned its post-Maven weapons and surveillance prohibitions in February 2025. OpenAI removed its explicit ban on military applications in January 2024. The trend line was already pointing toward accommodation.

Anthropic’s gamble is that standing firm creates space for the entire industry. The employee letter from Google and OpenAI suggests some workers agree. The question is whether that sentiment reaches executives writing contracts.

What Happens Next

Anthropic has indicated it will challenge the supply chain risk designation in court. The company’s statement Friday was unequivocal: “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons.”

The legal battle could take months or years. Meanwhile, the six-month phase-out period ticks forward. Defense contractors scramble to certify their Claude-free status. And the rest of the AI industry watches to see what happens when a company actually holds the line.

The Pentagon says its demands are reasonable - Claude would only be used for lawful purposes. Anthropic says “lawful” isn’t the same as “safe” when AI can supercharge data collection in ways existing law never anticipated.

One company’s $200 million contract dispute has become a referendum on whether AI safety commitments survive contact with government power.

The Bottom Line

Anthropic bet its government business that AI safety guardrails aren’t negotiable. The Trump administration is betting that’s a bluff any company can be pressured to fold. The outcome will shape how every AI company approaches military contracts - and whether “responsible AI” means anything when money and power are on the table.