AI News: Pentagon Used Claude for Iran Strikes Despite Trump Ban

Daily roundup for March 2, 2026 covering Pentagon's use of Claude AI despite federal ban, OpenAI's rushed defense deal, and Claude's surge to #1 in the App Store

Top Stories

US Military Used Claude AI for Iran Strikes Despite Federal Ban

In a stunning revelation that underscores the chaos of the ongoing AI-Pentagon standoff, reports indicate the US military used Anthropic’s Claude AI to assist with recent strikes in Iran - even after President Trump ordered federal agencies to “immediately cease” using Anthropic products.

The disclosure raises serious questions about implementation of the ban and highlights how deeply embedded AI tools have become in military operations. It also complicates Anthropic’s legal position as it prepares to challenge its designation as a “supply chain risk.”

Source: The Verge

OpenAI’s Pentagon Deal Was “Definitely Rushed,” Altman Admits

OpenAI CEO Sam Altman acknowledged that his company’s defense contract was “definitely rushed” and that “the optics don’t look good.” The admission came as OpenAI released more details about the agreement that rival Anthropic refused to sign.

Altman claims OpenAI’s deal includes “technical safeguards” addressing the same concerns that led Anthropic to reject the Pentagon’s terms - specifically around lethal autonomous weapons and mass surveillance. However, critics note that technical safeguards can be modified or disabled, unlike the hard contractual limits Anthropic demanded.

The contrast between the two approaches has split the tech industry, with employees at Google and other AI labs signing open letters supporting Anthropic’s position.

Source: TechCrunch

Claude Surges to #1 in App Store After Pentagon Standoff

Anthropic’s Claude chatbot has risen to the top spot in the App Store following intense public attention around the company’s refusal to allow unrestricted military use of its AI. The “Streisand effect” appears to be in full force, with users rallying around the company’s stance.

The surge represents a remarkable marketing win for Anthropic amid what could have been an existential crisis. While the company faces potential loss of hundreds of billions in government contracts, consumer sentiment has clearly shifted in its favor.

Source: TechCrunch

OpenAI Fires Employee for Prediction Market Insider Trading

OpenAI terminated an employee for making trades on prediction markets like Polymarket and Kalshi based on insider knowledge. The incident highlights growing concerns about information advantages in the increasingly lucrative prediction market space.

As prediction markets have exploded in popularity, tech employees at major companies are increasingly testing ethical and legal boundaries by trading on non-public information about product launches, partnerships, and other announcements.

Source: Wired

Quick Hits

  • AI agent security: A new open source project called IronCurtain aims to constrain AI agents before they can cause harm, using novel isolation techniques to prevent rogue behavior. Wired

  • Chinese AI censorship: Stanford and Princeton researchers found Chinese AI chatbots are significantly more likely than Western models to dodge political questions or provide inaccurate answers, revealing systematic self-censorship patterns. Wired

  • Quantum-safe HTTPS: Google unveiled its plan to protect HTTPS certificates against quantum computer attacks using Merkle Tree Certificates, already supported in Chrome. The challenge: quantum-resistant signatures are roughly 40x larger than the ones in use today. Ars Technica

  • The “SaaSpocalypse”: AI is driving massive disruption in the SaaS market, with traditional software companies facing existential pressure as AI agents increasingly handle tasks that previously required dedicated applications. TechCrunch

  • Lenovo’s robot desk companion: At MWC 2026, Lenovo showed off the AI Workmate Concept - a robotic arm with an expressive face designed to be an “always-on desk companion.” The dystopian vibes are strong with this one. The Verge
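The core idea behind Merkle Tree Certificates, batching many certificates under one compact hash-tree root so each connection only carries a short inclusion proof instead of a large post-quantum signature chain, can be sketched in a few lines of Python. This is a toy illustration of the general Merkle tree technique, not Google's actual certificate format:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the root hash of a Merkle tree over the given leaves."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                     # duplicate last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes proving leaves[index] is in the tree."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))  # (sibling hash, am-I-the-right-child?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root from a leaf and its inclusion proof."""
    node = h(leaf)
    for sibling, is_right in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

# Hypothetical batch of 8 certificates: one tree, one root, log2(8) = 3 hashes per proof.
certs = [f"cert-{i}".encode() for i in range(8)]
root = merkle_root(certs)
proof = merkle_proof(certs, 5)
print(verify(certs[5], proof, root))  # True
```

Each proof here is only three 32-byte hashes, which hints at why batching under a tree root is attractive when individual post-quantum signatures are kilobytes in size.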

Worth Watching

The Anthropic-Pentagon saga isn’t over. Anthropic has signaled willingness to challenge the “supply chain risk” designation in court, calling it “legally unsound.” Meanwhile, the revelation that the military continued using Claude for operations in Iran despite the ban suggests enforcement will be chaotic at best.

The split between AI companies over military cooperation is now impossible to ignore. OpenAI, xAI, and others have signed the Pentagon’s terms; Anthropic has drawn a hard line. This divergence may reshape the competitive landscape for years to come, as companies must now choose which customer base - military or consumer - they prioritize.

Max Tegmark’s analysis in TechCrunch this weekend posed the uncomfortable question: in the absence of AI regulation, what actually protects companies that make ethical commitments? The answer, it seems, is not much - except reputation and the willingness to walk away from money.