Pentagon Gives Anthropic 72-Hour Ultimatum: Drop AI Safety Guardrails or Face Defense Production Act

After a tense Tuesday meeting, Defense Secretary Hegseth told Anthropic's CEO: comply by 5:01 pm Friday or the government will force compliance. Anthropic isn't budging.

The meeting was “not warm and fuzzy at all.”

That’s how a senior Defense official described Tuesday’s confrontation between Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei at the Pentagon.

The outcome: Anthropic has until 5:01 pm Friday to allow the military unrestricted use of Claude - or face consequences the AI industry has never seen.

The Three Threats

Hegseth laid out a clear escalation path if Anthropic doesn’t comply:

1. Contract termination. The Pentagon’s $200 million contract with Anthropic would be immediately cancelled.

2. Supply chain risk designation. Anthropic would be branded a security liability, forcing every Pentagon contractor to drop the company’s technology. Eight of the ten largest U.S. companies currently use Claude.

3. Defense Production Act invocation. A Cold War-era law, typically reserved for national emergencies, would be used to compel Anthropic to provide its technology to the military regardless of the company’s objections.

The DPA has been invoked for mask production during COVID, semiconductor manufacturing during the chip shortage, and critical minerals for defense. Using it to force an AI company to drop safety restrictions would be unprecedented.

What the Pentagon Wants

The demand is simple: “all lawful purposes.”

That phrase has become the Pentagon’s litmus test for AI vendors. OpenAI, Google, and xAI have all signed on to it. Elon Musk’s xAI just reached a deal allowing Grok to operate across all classification levels, including systems handling weapons development and battlefield operations.

Anthropic is the only holdout.

The company has two red lines it refuses to cross:

  1. Mass surveillance of Americans - Claude cannot be used for large-scale domestic monitoring
  2. Fully autonomous weapons - A human must remain in the decision loop before lethal force is deployed

Everything else is negotiable. Anthropic already allows Claude to be used for intelligence analysis and logistics, and the model was even used in the operation that captured Venezuelan dictator Nicolás Maduro.

But those two restrictions are non-negotiable for Anthropic - and unacceptable to the Pentagon.

The Missile Defense Fight

The feud escalated in December, months before Tuesday’s meeting.

Under Secretary of War Emil Michael posed a hypothetical to Amodei: would Anthropic refuse to help defend the U.S. against a hypersonic missile attack due to its autonomous weapons restrictions?

The accounts diverge sharply. Pentagon sources say Amodei suggested officials should “reach out and check with Anthropic” during an active missile attack. Anthropic disputes this, saying it offered a carveout for missile defense.

“Every iteration of our proposed contract language would enable our models to support missile defense and similar uses,” an Anthropic spokesman said.

The truth likely lies somewhere in between, but the damage was done. Pentagon officials saw a company that would second-guess military decisions in real time. Anthropic saw a government that wanted to use AI for purposes well beyond defense.

The Maduro Raid Fallout

The relationship fractured further after the Venezuela operation.

Following the raid, a senior Anthropic executive contacted an executive at Palantir - the company that deploys Claude on classified networks - asking whether Anthropic's software was used in the operation.

The Palantir executive reported the exchange to the Pentagon, alarmed that the question seemed to signal Anthropic might object to its model being used in combat operations.

Whether that interpretation was fair doesn’t matter. The perception that Anthropic was monitoring and potentially objecting to military use of its own product sealed the company’s fate with the new administration.

The Competitive Shift

A year ago, Anthropic’s safety-first positioning was a competitive advantage. Enterprise customers wanted assurances their AI vendor took ethics seriously.

Now it’s a liability.

Every major AI competitor has signed the Pentagon’s terms. xAI operates in classified systems. OpenAI removed restrictions. Google cooperates fully.

Anthropic stands alone, and the Pentagon is making an example of what happens to companies that don’t play ball.

What Happens Friday

Anthropic has three days to decide its future.

Sources familiar with the company’s thinking say Anthropic has no plans to budge. The safety commitments that define Anthropic’s identity aren’t negotiating positions - they’re the reason the company exists.

If Anthropic holds, the question becomes whether the Pentagon will actually invoke the Defense Production Act against an AI company. That would create a precedent that no tech company could ignore: your safety commitments last exactly as long as the government finds them convenient.

For users, the calculation is simpler. If AI safety restrictions can be overridden by government pressure, they were never real restrictions - just marketing.

The deadline is 5:01 pm Friday. By this time next week, we’ll know whether AI safety is a principle or a talking point.