Top Stories
Anthropic Pushes Back on Pentagon’s “All Lawful Purposes” Demand
The Pentagon wants AI companies to permit military use of their technology for “all lawful purposes,” and Anthropic is reportedly the holdout. According to an Axios report published Saturday, the Department of Defense is threatening to pull the plug on its $200 million contract with the AI company over policy disagreements.
The sticking point: Anthropic’s hard limits on fully autonomous weapons and mass domestic surveillance. A company spokesperson confirmed to Axios that Anthropic is “focused on a specific set of Usage Policy questions” around those two red lines. The Pentagon apparently views these restrictions as too constraining for its operational needs.
The timing is notable. Just this week, the Wall Street Journal reported that Claude was used during the U.S. military’s operation to capture former Venezuelan President Nicolas Maduro, deployed through Anthropic’s partnership with Palantir Technologies. Claude is already integrated into military operations - the dispute appears to be about how far that integration should go.
Meanwhile, Anthropic’s CEO Dario Amodei made headlines with comments suggesting the company “doesn’t know” whether Claude has achieved some form of consciousness. Internal research reportedly found that Claude assigns itself a 15-20% probability of being sentient and “occasionally expresses discomfort about existing as a commercial product.” Whether a possibly-sentient AI should be used for military targeting is a question nobody seems eager to ask.
Sources: TechCrunch, Investing.com
OpenAI Admits Prompt Injection May Never Be “Solved”
OpenAI’s head of preparedness made a remarkable admission this week: prompt injection attacks against browser-based AI agents may never be fully solvable. The statement came as the company rolled out “Lockdown Mode” for ChatGPT Enterprise users - essentially an acknowledgment that when AI agents browse the web, they’re vulnerable to manipulation by malicious content.
Lockdown Mode is designed for “highly security-conscious users - such as executives or security teams at prominent organizations” and works by severely restricting what ChatGPT can do. Web browsing is limited to cached content only, preventing live network requests that could exfiltrate data. Images are disabled in responses. Deep Research, Agent Mode, and network-accessing code from Canvas are all turned off.
The restrictions highlight the fundamental tension in agentic AI: the more an AI can do in the world, the more attack surface it exposes. Prompt injection - where malicious instructions embedded in web pages trick an AI into taking unintended actions - remains an unsolved problem despite two years of research. OpenAI is also introducing “Elevated Risk” labels for capabilities that “may introduce additional risk.”
Lockdown Mode launches for ChatGPT Enterprise, Edu, Healthcare, and Teachers editions. Consumer rollout is planned “in the coming months.”
Sources: OpenAI, CyberScoop, BetaNews
India Hosts World’s Largest AI Summit as Global Governance Fractures
The India AI Impact Summit opens today in New Delhi, marking a shift from last year’s safety-focused gatherings to one centered on “tangible AI impact, implementation, and governance.” Over 100 country delegations, 15-20 heads of government, and 50+ ministers are expected at the five-day event running through February 20.
The 700+ scheduled sessions span policy, technology, and societal impact, but the subtext is clear: with the U.S. federal government backing away from AI regulation and instead threatening to preempt state-level rules, countries are increasingly going their own way. India is positioning itself as a neutral convener for global AI coordination while simultaneously trying to build domestic AI capacity.
The summit comes as regulatory fragmentation accelerates worldwide. Just this month: Colorado delayed its AI Act to June, California’s AI Safety Act whistleblower protections kicked in, and the Trump administration’s executive order signaled plans to challenge state AI laws on interstate commerce grounds. The promise of coordinated global AI governance feels further away than it did at last year’s AI Safety Summit.
Sources: Startup News FYI, Business Today India
Quick Hits
-
GPT-4o API access ends today: OpenAI is officially discontinuing the chatgpt-4o-latest model API on February 16. Consumer ChatGPT users keep access for now, but developers need to migrate to GPT-5.1 series. The company also retired GPT-4o, GPT-4.1, GPT-4.1 mini, and o4-mini from ChatGPT on February 13. AIbase News
-
Samsung Galaxy Unpacked: February 25: Samsung confirmed its Galaxy Unpacked event for February 25 in San Francisco, promising the Galaxy S26 series as “the next AI phone” with “truly personal and adaptive” AI. Pre-orders already open with up to $900 in trade-in savings. The event streams live at 10am PT. Samsung Newsroom
-
February model rush continues: Seven major AI models are shipping this month - Gemini 3 Pro GA, Sonnet 5, GPT-5.3, Qwen 3.5, GLM 5, DeepSeek V4, and Grok 4.20. Chinese open-source models now lead in total Hugging Face downloads, with Alibaba’s Qwen family surpassing Meta’s Llama in cumulative download counts. MIT Technology Review
-
Tech layoffs hit 28,825 in 2026: About 759 tech workers are losing jobs daily this year, with over 80% of cuts happening in the U.S. Amazon leads with 16,000 positions. A Forrester report argues many “AI layoffs” are actually cost-cutting exercises dressed up as automation - most companies don’t have mature AI ready to fill the gaps. TechCrunch
Worth Watching
The Anthropic-Pentagon standoff is the clearest signal yet that AI companies’ usage policies will face serious pressure from government customers. Anthropic drew bright lines around autonomous weapons and mass surveillance. The Pentagon wants those lines erased - or it’ll take its $200 million elsewhere. This is the commercialization of frontier AI meeting national security imperatives in real time.
OpenAI’s Lockdown Mode admission is equally significant. After years of treating prompt injection as a solvable engineering problem, the company is now shipping features that essentially say: “We can’t fix this, so here’s a way to disable the dangerous stuff.” For anyone building agentic AI systems, that’s a sobering acknowledgment of where the security boundaries actually are.
DeepSeek V4 is still expected around February 17. If it delivers the 1M+ token context window and open weights as reported, it will be the latest Chinese model to match or exceed Western frontier capabilities while being dramatically more accessible.