At 5:01 PM Eastern today, Anthropic’s deadline expires. The company that built Claude - the only AI model currently deployed in the Pentagon’s most classified systems - will either capitulate to Defense Secretary Pete Hegseth’s demands or become the first American technology company branded a “supply chain risk” for refusing to give the military unrestricted access to its products.
Anthropic chose to hold the line.
“These threats do not change our position,” CEO Dario Amodei wrote Thursday. “We cannot in good conscience accede to their request.”
What unfolds next will set precedent for how AI companies, governments, and the technology itself interact for decades to come.
The Two Red Lines
At the heart of this dispute are two conditions Anthropic refuses to budge on:
- No mass surveillance of Americans. Claude will not be used for bulk monitoring of U.S. citizens.
- No fully autonomous weapons. Claude will not select and engage targets without meaningful human oversight.
These restrictions have been in Anthropic’s defense contracts from the beginning. The company entered the defense market last year with eyes open about what it would and wouldn’t do. But in January, Hegseth’s AI strategy memorandum directed all Defense Department AI contracts to incorporate “any lawful use” language within 180 days - a direct collision with Anthropic’s terms.
The Pentagon’s position, articulated by Chief Technology Officer Emil Michael, is that it’s “not democratic” for a private company to decide how military technology gets used. The military should make those calls, not Silicon Valley.
Amodei sees it differently. In a narrow set of cases, he argues, “AI can undermine, rather than defend, democratic values.” Domestic mass surveillance and autonomous targeting are “simply outside the bounds of what today’s technology can safely and reliably do.”
The Escalation Timeline
The showdown accelerated this week:
Tuesday, February 25: Hegseth meets with Amodei in person. The defense secretary delivers an ultimatum: agree to “all lawful purposes” language by Thursday at 5:01 PM, or face termination of Anthropic’s $200 million defense contract and potential designation as a supply chain risk. He also threatens to invoke the Defense Production Act.
Wednesday, February 26: The Pentagon sends a revised contract proposal. Anthropic reviews it and rejects it publicly, stating the new language “made virtually no progress” on their core concerns.
Thursday, February 27 (today): The deadline looms. Anthropic has not changed its position.
What “Supply Chain Risk” Means
Getting labeled a supply chain risk isn’t just PR damage. According to CNN’s reporting, any company with Pentagon contracts - or hopes of future Pentagon contracts - would need to prove they have no connection to Anthropic whatsoever.
For an AI company whose models power enterprise software across industries, this is potentially existential. Your bank uses Claude? It might need to switch if it wants government business. Your law firm uses Claude? Same problem. The designation creates a cascading effect that could hollow out Anthropic’s commercial customer base.
The Defense Production Act Threat
The more dramatic threat involves the Defense Production Act, a Korean War-era law that lets the president compel companies to prioritize government contracts over other business.
The DPA has two distinct authorities:
Title VII grants information-gathering power. Biden used this to require AI companies to report training activities and red-team results. That’s invasive but manageable.
Title I is different. It allows the government to “require acceptance and performance” of contracts and to allocate materials, services, and facilities “as he shall deem necessary or appropriate.” This is the core compulsion power, and Hegseth appears to be threatening its use.
The problem: this power has never been used against a technology company over a product it considers unsafe to build. Legal experts told Lawfare that using the DPA this way would be “without precedent under the history of the statute.” The allocation authority “has barely been used since the Korean War.”
If the Pentagon demanded that Anthropic retrain Claude to strip its safety guardrails, the legal questions would become even thornier. The closest analogy is the FBI’s attempt to force Apple to write custom iPhone-unlocking software in 2016 - a demand a magistrate judge rejected, concluding the government sought “authority that Congress chose not to confer.”
What the Other AI Labs Are Doing
Anthropic isn’t standing alone by accident. It’s isolated because its competitors made different choices:
xAI (Elon Musk’s company) agreed to the “all lawful use” standard and went further, allowing Grok deployment in classified systems.
Google and OpenAI have their models in unclassified military systems and are in talks for classified access. Pentagon officials insist both will need to accept “all lawful purposes” language, though it’s unclear whether OpenAI would agree.
Until now, Claude has been the only model available in classified systems where the Pentagon’s most sensitive intelligence, weapons development, and battlefield operations take place. Anthropic’s competitors are happy to fill that gap.
Congress Weighs In
The Pentagon’s hardball tactics have drawn bipartisan criticism.
Senators Elizabeth Warren and Andy Kim argued that Congress passed the Defense Production Act “to aid the U.S. economy in times of need, not to permit the Trump administration to extort American companies that refuse to help the Pentagon surveil Americans or build killer robots.”
They warned that weaponizing the DPA “will shatter the bipartisan consensus in support of a strong DPA - weakening our hand in competition with China.”
Senate Intelligence Vice Chair Mark Warner called the situation “deeply disturbing” and urged Congress to establish binding AI governance frameworks for national security contexts.
But Senator Roger Wicker, the Republican chairman of the Armed Services Committee, offered a different perspective: “If Anthropic doesn’t choose to follow this business plan, there are other sources.”
What Happens After 5:01 PM
The immediate consequences if Anthropic doesn’t comply:
- Contract termination. The $200 million defense contract ends.
- Supply chain risk designation. Pentagon contractors must distance themselves from Anthropic.
- Potential DPA invocation. The government may attempt to compel cooperation, triggering likely legal challenges.
The medium-term question: Does the Pentagon’s position hold? Using the Defense Production Act to force a company to produce something it considers unsafe would face immediate legal challenges. The major questions doctrine - recently used to strike down Trump’s emergency tariffs - supports skepticism toward broad agency claims based on ambiguous statutory language.
The Bigger Picture
This standoff crystallizes a question the AI industry has avoided: What happens when a company’s safety commitments conflict with government demands?
For years, AI labs have talked about maintaining control, building in safeguards, and refusing to deploy systems in ways that could cause catastrophic harm. Anthropic has positioned itself as the safety-focused alternative to OpenAI. Its entire brand is built on “responsible scaling.”
Now that brand faces its first real test against government power.
If Anthropic holds and survives - commercially viable despite Pentagon blacklisting - it proves that AI companies can maintain safety boundaries even under extreme pressure. If Anthropic folds, or if it holds and gets crushed, the message to every other AI company is clear: your safety principles last exactly as long as the government allows them to.
Dario Amodei has bet his company that the first outcome is possible. By 5:02 PM today, we’ll know if the government agrees.
The Bottom Line
Anthropic is testing whether AI safety commitments can survive contact with national security demands. The outcome will shape how every AI company - and every government - approaches this question going forward.