Top Stories
OpenAI Robotics Lead Resigns Over Military Deal
Caitlin Kalinowski, the executive leading OpenAI’s robotics team, resigned Saturday in protest over the company’s controversial deal with the Department of Defense. Her departure marks the first high-profile exit directly linked to OpenAI’s decision to embrace military contracts after Anthropic refused.
The resignation highlights growing internal tensions at OpenAI over the direction Sam Altman is taking the company. While OpenAI rushed to fill the vacuum left by Anthropic’s principled refusal to grant the Pentagon unrestricted access to its AI, it now faces the cost of that decision: talented engineers who don’t want to build weapons.
Source: TechCrunch
Pentagon Controversy May Scare Startups Away From Defense Work
The Anthropic-Pentagon standoff is sending shockwaves through the startup ecosystem. On the latest Equity podcast, TechCrunch discussed how the designation of a major AI company as a “supply chain risk” for refusing to compromise on ethics could make founders think twice before pursuing federal contracts.
The implicit message is troubling: work with the military on their terms, or face consequences. That’s not the kind of relationship that encourages the best companies to engage with defense, and it may push talent toward companies like Anthropic that have drawn clear ethical lines.
A new “Pro-Human Declaration” from AI safety researchers, finalized before the Anthropic blowup but released this weekend, offers an alternative framework for AI development that prioritizes human oversight and ethical constraints. Whether anyone in power will listen remains an open question.
Source: TechCrunch
Revealed: Pentagon Tested OpenAI Models Through Microsoft Before Policy Change
An investigation by Wired reveals that the Department of Defense was experimenting with OpenAI’s technology through Microsoft Azure before OpenAI officially lifted its prohibition on military applications. Sources allege the Pentagon exploited a loophole: while OpenAI banned direct military use, Microsoft’s licensing allowed it.
This raises questions about how meaningful OpenAI’s previous military ban actually was if the Pentagon could simply access the same models through a reseller. It also suggests the recent “policy change” at OpenAI may have been more about legitimizing existing practice than enabling new uses.
Source: Wired
More Headlines
A Month With Alexa+: Things Have Not Gone Well
Wired’s Reece Rogers spent a month with Amazon’s Echo Show 15 and the new Alexa+ AI assistant. The verdict? Disappointing. The AI-upgraded Alexa struggles with basic tasks, gives inconsistent responses, and frequently fails to understand context, even in a controlled kitchen environment.
Amazon has bet heavily on generative AI saving its struggling voice assistant business. Early signs suggest the technology isn’t ready to deliver on that promise.
Source: Wired
OpenAI Delays ‘Adult Mode’ Again
ChatGPT’s controversial “adult mode” feature, which would give verified adults access to erotica and explicit content, has been delayed again. Originally planned for December, then pushed to March, it’s now indefinitely postponed.
The delays suggest OpenAI is struggling with the technical and policy challenges of age-gated AI content, or perhaps having second thoughts about the feature entirely.
Source: TechCrunch
Google Hands Sundar Pichai a $692 Million Pay Package
Alphabet has awarded CEO Sundar Pichai a compensation package worth $692 million, most of it tied to performance metrics including new stock incentives linked to Waymo and Wing. The package reflects Alphabet’s confidence in Pichai’s AI strategy, and the enormous wealth being concentrated at the top of the AI industry.
Source: TechCrunch
ByteDance’s Seedance 2.0 Hits Compute and Copyright Limits
ByteDance’s ambitious new AI video model Seedance 2.0 seemed unstoppable until it wasn’t. Heavy demand has strained the company’s compute capacity, while copyright complaints are piling up. The dual constraints highlight the practical limits of rapid AI deployment, even for a company with TikTok’s resources.
Source: Wired
Quick Hits
- ClawCon draws hundreds: The first OpenClaw superfan meetup in Manhattan attracted hundreds of developers celebrating the open-source AI assistant platform. Lobster headdresses were involved. The Verge
- AI data center ‘man camps’: The owner of ICE detention facilities sees opportunity in housing AI data center workers in camp-style accommodations reminiscent of remote oil field housing. The AI boom creates strange bedfellows. TechCrunch
- Anti-surveillance jammer launches: Deveillance’s Spectre I device promises to block always-listening AI wearables. Wired’s assessment: it probably won’t work, because physics. Wired
- ‘The AI Doc’ documentary releases: Focus Features’ “The AI Doc: Or How I Became an Apocaloptimist” attempts to make sense of generative AI’s rise. The Verge calls it “an overwrought hype piece for doomers and accelerationists alike.” The Verge
- Luma launches Agents: Luma introduced creative AI agents powered by new “Unified Intelligence” models designed to coordinate multiple AI systems for end-to-end creative work across text, images, video, and audio. TechCrunch
Worth Watching
Simon Willison surfaced a 1976 quote from Joseph Weizenbaum, creator of ELIZA: “What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” Nearly 50 years later, we’re running that experiment at global scale.
The OpenAI-Pentagon-Anthropic triangle continues to define the industry’s ethical fault lines. Every high-profile resignation, every lawsuit, every policy shift matters here. We’re watching the AI industry decide in real time whether it will prioritize growth over principles, and what the consequences will be for those who choose differently.