Government Filing Calls Anthropic an 'Unacceptable Risk' That Could Alter Its AI in Wartime

In a 40-page court response, DOJ lawyers argue Anthropic could disable or modify its AI systems to suit its own interests rather than America's priorities during conflict


The Trump administration filed its formal defense of the Anthropic blacklist this week, and the arguments reveal just how far the government is willing to go to control AI companies that refuse military terms.

In a 40-page filing in U.S. District Court for the Northern District of California, Justice Department lawyers argued that Anthropic poses an “unacceptable risk” to national security because the company could “disable or alter its technology to suit its own interests, rather than the country’s priorities, in a time of war.”

The filing questions whether Anthropic can be considered a “trusted partner” and warns that AI systems “are acutely vulnerable to manipulation.” Giving Anthropic access to the Defense Department’s warfighting infrastructure, the government argues, would “introduce unacceptable risk into DoD supply chains.”

The Core Argument

The government’s filing attempts to reframe the entire dispute. Anthropic claims it was punished for refusing to drop ethical guardrails against mass surveillance and autonomous weapons. The administration says this is about conduct, not speech.

“It was only when Anthropic refused to release the restrictions on the use of its products - which refusal is conduct, not protected speech - that the President directed all federal agencies to terminate their business relationships with Anthropic,” the filing states.

This distinction matters legally. First Amendment protections for speech are strong. Protections for business conduct are weaker. The government is betting it can characterize Anthropic’s ethical red lines as mere contract negotiation rather than constitutionally protected expression.

The Wartime Scenario

The filing’s most striking claim involves what might happen during armed conflict. The government argues that Anthropic’s refusal to grant unrestricted access means the military cannot fully trust the AI system in crisis situations.

The implication: an AI company that maintains any ethical limits on its technology cannot be a reliable defense partner, because it might refuse orders during wartime.

This argument has alarmed legal observers. It suggests the government believes defense contractors must cede all ethical autonomy to the Pentagon - that any retained values represent a national security threat.

Growing Coalition Against the Designation

The government’s position faces opposition from an unlikely alliance of judges, military leaders, and industry.

Nearly 150 retired federal and state judges - appointed by both Republican and Democratic presidents - filed an amicus brief supporting Anthropic. They argue the supply chain risk designation represents government overreach and misuse of a statute designed for foreign adversaries.

Twenty-two former high-ranking military officials, including former secretaries of the Army, Navy, and Air Force, have separately filed their own brief. They allege Defense Secretary Pete Hegseth’s actions constitute a “misuse of government authority for retribution against a private company that has displeased the leadership.”

Microsoft, which invested $5 billion in Anthropic, urged the court to temporarily block the Pentagon order. The company warned that U.S. warfighters could be “hampered” if AI providers are forced to immediately reconfigure their systems.

“Everyone involved shares common goals,” a Microsoft spokesperson said. “We need time and a process to find common ground.”

The Contradictions

Legal experts have identified logical problems with the government’s case.

Fortune analyst Mark Minevich noted an inherent contradiction: the government claims Anthropic is so dangerous it must be blacklisted, yet allows a six-month phase-out period. If the threat were genuine and urgent, immediate termination would be required.

There’s also the geopolitical paradox. Chinese labs have allegedly distilled both Anthropic and OpenAI models into open-source versions now accessible to the People’s Liberation Army without guardrails. The U.S. government is restricting an American company that insists on human oversight of lethal decisions, while adversaries operate similar technology freely.

What Happens Next

A hearing is scheduled for March 24. The court will consider Anthropic’s request for a temporary restraining order that would pause the supply chain risk designation while the full case proceeds.

The stakes extend beyond Anthropic’s $200 million canceled contract. Defense contractors working with Anthropic face their own exposure. Companies like AWS, Google, Palantir, and Accenture may eventually need to certify they have zero Anthropic integration - a requirement that could reshape enterprise AI procurement across the defense sector.

More fundamentally, this case will establish whether AI companies can maintain ethical boundaries that conflict with military preferences, or whether national security arguments override all other considerations.

The Bottom Line

The government’s 40-page filing reveals the depth of the conflict: the administration believes Anthropic’s refusal to drop ethics guardrails makes it inherently untrustworthy for defense work. The counterargument - that companies maintaining ethical limits are exactly the kind of partners a democracy should want - will be tested in court on March 24.