Top Stories
Pentagon Designates Anthropic a “Supply Chain Risk” After Trump Ban
The standoff between Anthropic and the Trump administration escalated dramatically on Friday. President Trump ordered federal agencies to “immediately cease” using Anthropic’s AI products, posting on Truth Social that the company was attempting to “strong-arm” the Pentagon. Hours later, Defense Secretary Pete Hegseth went further, officially designating Anthropic as a “supply chain risk” - a label typically reserved for national security threats.
The designation could immediately impact major tech companies that use Claude for Pentagon contracts, including Palantir and AWS. Anthropic has indicated it will challenge the designation in court, calling it “legally unsound.”
At the heart of the dispute: Hegseth’s demand that AI companies agree to “any lawful use” of their technology by the military, including mass domestic surveillance and fully autonomous lethal weapons. Anthropic CEO Dario Amodei refused to cross these red lines, stating “threats do not change our position: we cannot in good conscience accede to their request.”
Tech Workers Across Industry Rally Behind Anthropic
The Pentagon showdown has sparked an unusual moment of solidarity in Silicon Valley. Employees at Google, OpenAI, and other AI labs signed an open letter supporting Anthropic’s stance against unrestricted military AI use.
The letter comes as OpenAI and xAI have reportedly agreed to the Pentagon’s new terms. But even within those companies, workers are questioning what their employers are building. “We don’t have to have unsupervised killer robots,” one tech worker told The Verge.
In an ironic twist, Claude shot to No. 2 in the App Store following the controversy, with consumers apparently voting with their downloads.
Source: TechCrunch
OpenAI Closes $110 Billion Round, Announces Pentagon Deal
While Anthropic faces federal blacklisting, OpenAI secured one of the largest private funding rounds in history: $110 billion from Amazon ($50B), Nvidia ($30B), and SoftBank ($30B). The company is now valued at $730 billion.
OpenAI also announced ChatGPT has reached 900 million weekly active users and over 50 million paid subscribers.
CEO Sam Altman claimed OpenAI’s new Pentagon contract includes “technical safeguards” addressing the same concerns that Anthropic refused to compromise on. Details of what those safeguards actually entail remain unclear.
Source: TechCrunch
Microsoft Launches Copilot Tasks: AI Running on Its Own Computer
Microsoft previewed Copilot Tasks, a new system where AI handles busywork on its own cloud-based computer and browser rather than on your device. Users can assign tasks in natural language, such as scheduling appointments or generating study plans, and Copilot Tasks completes them in the background, sending a report when finished.
Tasks can run on a one-time, scheduled, or recurring basis. The feature represents another step toward AI agents that operate independently rather than responding to moment-by-moment prompts.
Source: The Verge
Quick Hits
- AI Music Milestone: Suno hit 2 million paid subscribers and $300 million in annual recurring revenue, proving there's serious money in letting anyone generate music with natural language prompts. (TechCrunch)
- Quantum-Proof HTTPS: Google unveiled Merkle Tree Certificates to protect web traffic from future quantum computer attacks. The quantum-resistant crypto is 40x larger than current methods, but clever compression squeezes 15kB into a 700-byte package. Chrome already supports it. (Ars Technica)
- Musk vs. OpenAI Deposition: In his ongoing lawsuit against OpenAI, Elon Musk claimed "nobody committed suicide because of Grok" while attacking ChatGPT's safety record. The claim comes just months after xAI's Grok flooded X with nonconsensual nude images. (TechCrunch)
- OpenAI Insider Trading: OpenAI fired an employee for using insider knowledge to make bets on prediction markets like Polymarket and Kalshi. As these platforms grow, expect more crackdowns on Big Tech employees with advance knowledge of product launches. (Wired)
Worth Watching
The Self-Regulation Trap: TechCrunch published a sharp analysis of how Anthropic, OpenAI, and others promised to govern themselves responsibly, but without actual regulation, those voluntary commitments offer no protection when government pressure arrives. The piece argues Anthropic is now caught in a trap of its own making: having built its brand on safety, it can't easily capitulate, but principled stands come with real costs.
The Anthropic-Pentagon clash may be the defining moment for AI governance. Either companies can set limits on military AI use, or they can’t. We’re about to find out which.