Anthropic Wins: Federal Judge Blocks Pentagon's 'Supply Chain Risk' Designation in First Amendment Ruling

Judge Rita Lin calls the Trump administration's blacklisting of Anthropic 'classic First Amendment retaliation' and rejects the 'Orwellian notion' that American companies can be punished for disagreeing with the government.


Anthropic just won a major battle in its fight against the Trump administration.

Federal Judge Rita Lin issued a preliminary injunction on Thursday blocking the Pentagon’s designation of Anthropic as a “supply chain risk” — a label normally reserved for foreign adversaries and terrorists. The 43-page ruling also halts President Trump’s order directing all federal agencies to stop using Anthropic’s Claude AI.

The decision marks the first time a court has intervened in the Trump administration’s escalating pressure campaign against an AI company that refused to remove safety guardrails.

“Classic First Amendment Retaliation”

Judge Lin didn’t mince words. In her ruling, she called the government’s actions “likely both contrary to law and arbitrary and capricious.”

The judge found no legitimate basis for treating Anthropic as a potential saboteur. The supply chain risk designation — a serious national security tool — was being weaponized against an American company for the first time.

“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” Lin wrote.

She concluded that “punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation.”

The Dispute That Started It All

The conflict centers on two non-negotiable positions Anthropic CEO Dario Amodei publicly stated in February: his company would not allow Claude to be used in autonomous weapons systems or for mass surveillance of American citizens.

Amodei’s reasoning was practical, not ideological. Frontier AI systems aren’t reliable enough for fully autonomous weapons, he argued. “It doesn’t show the judgment that a human soldier would show — friendly fire or shooting a civilian.” The technology simply isn’t there yet.

The Pentagon disagreed. Defense Secretary Pete Hegseth demanded Anthropic drop these restrictions or face consequences. When Anthropic refused, the administration escalated.

On February 27, Trump ordered all federal agencies to stop using Claude. Hegseth designated Anthropic a “supply chain risk to national security.” Within hours, OpenAI announced a $200 million Pentagon deal to fill the gap.

What Comes Next

The ruling restores the status quo, at least for now. Once the order takes effect, Anthropic can resume federal contracting, and government agencies can resume using Claude.

But the injunction is temporary. Judge Lin gave the government seven days to appeal before the order takes effect. The full case will proceed on the merits, likely over months.

The Pentagon has not announced whether it will appeal.

What This Means for AI Companies

The ruling sets an important precedent, even if it’s preliminary.

The government argued that Anthropic’s refusal to grant unrestricted military access was “conduct, not speech” — and therefore not protected by the First Amendment. Judge Lin rejected this framing entirely.

If the government had won, any AI company setting ethical limits on its products could be branded a national security threat. The message would be clear: ethics are a liability in the defense market.

Lin’s ruling suggests the opposite. AI companies can maintain safety commitments without being treated as foreign adversaries.

Microsoft, the ACLU, and 22 retired military leaders filed briefs supporting Anthropic. More than 30 OpenAI and Google DeepMind employees signed a separate brief in their personal capacities.

The breadth of that coalition suggests widespread concern across the tech industry about where the government's pressure campaign is heading.

The Bigger Picture

Anthropic’s case exposes a fundamental tension in American AI policy.

The Pentagon wants AI systems it can deploy without restrictions. Some AI companies want to maintain guardrails on how their technology is used. These positions appeared irreconcilable.

But court filings revealed that just days before the blacklisting, Pentagon officials told Anthropic the two sides were “nearly aligned” on the disputed issues. That email became central to Anthropic's argument that the ban was retaliation, not security policy.

Whether the full court agrees remains to be seen. But for now, Anthropic has won the first round.

The ruling takes effect in seven days.