AI News: Pentagon Labels Anthropic 'Supply Chain Risk' as AI-Military Tensions Escalate

Daily roundup for March 8, 2026 covering the Anthropic-Pentagon standoff, OpenAI's GPT-5.4 launch, Meta glasses privacy scandal, and more

Top Stories

Pentagon Formally Labels Anthropic a ‘Supply Chain Risk’

The Department of Defense has taken the unprecedented step of formally labeling Anthropic a “supply chain risk” - the first time an American company has received a designation typically reserved for foreign firms with ties to adversarial governments.

The designation bars defense contractors from using Claude in any products that work with the government, effectively cutting Anthropic off from federal defense work. The move follows weeks of failed negotiations between Anthropic and the Pentagon over how much control the military should have over Claude’s capabilities, including its use in autonomous weapons systems and mass domestic surveillance.

Anthropic CEO Dario Amodei plans to challenge the designation in court. Microsoft, Google, and Amazon have confirmed that Claude remains available to their non-defense customers through their cloud platforms. Meanwhile, Claude’s consumer app continues to surge - it’s now seeing more daily installs than ChatGPT as users migrate away from OpenAI following that company’s Pentagon deal.

Source: The Verge

OpenAI Launches GPT-5.4 with Native Computer Use

OpenAI released GPT-5.4, its latest model, combining improvements in reasoning, coding, and professional work. Most notably, it’s OpenAI’s first model with native computer use capabilities - meaning it can operate a computer on behalf of users and complete tasks across different applications.

The model comes in standard and “Pro” versions, with a 1 million token context window and an August 2025 knowledge cutoff. OpenAI highlighted improvements in business productivity tasks, claiming GPT-5.4 achieves 87.3% on internal spreadsheet modeling benchmarks compared to 68.4% for GPT-5.2.

However, the launch was overshadowed by the departure of robotics lead Caitlin Kalinowski, who resigned in protest over OpenAI’s military contract with the Pentagon.

Source: The Verge

Meta AI Glasses Sending Intimate Footage to Human Reviewers

An investigation by Swedish outlets Svenska Dagbladet and Göteborgs-Posten revealed that Meta’s AI-powered smart glasses are sending sensitive footage to human contractors in Nairobi, Kenya. Reviewers reportedly saw videos showing “bathroom visits, sex and other intimate moments” captured by users’ glasses.

A class action lawsuit has already been filed, accusing Meta of violating privacy and false advertising laws. The company’s marketing had emphasized privacy and user control over sharing footage - claims the lawsuit argues are undermined by the human review process.

This is exactly the kind of privacy nightmare that critics warned about when always-on AI wearables started proliferating. If you’re using Meta’s smart glasses for anything remotely private, assume someone might be watching.

Source: The Verge

Grammarly’s ‘Expert Review’ Feature Uses Real People’s Identities Without Permission

Grammarly’s new “expert review” feature, which offers AI-generated writing feedback supposedly “inspired by” subject matter experts, is using real journalists’ and writers’ identities without their consent. The Verge discovered the feature was generating feedback under the names of its own editors - none of whom gave permission.

The feature’s roster of “experts” also includes recently deceased professors and famous authors. Grammarly claims the AI provides feedback “based on the work of” these experts, but the distinction between AI-generated advice and actual expert guidance is blurred enough to be misleading.

Source: The Verge

More Headlines

Claude Finds 22 Firefox Vulnerabilities in Two Weeks

In a security partnership with Mozilla, Anthropic’s Claude found 22 vulnerabilities in Firefox - 14 of them classified as “high-severity.” The discovery demonstrates AI’s growing capability in automated security research, though it also raises questions about what happens when similar tools end up in the hands of bad actors.

Source: TechCrunch

Netflix Acquires Ben Affleck’s AI Startup InterPositive

Netflix announced its acquisition of InterPositive, Ben Affleck’s AI company specializing in film and TV production tools. All 16 team members join Netflix, with Affleck becoming a senior adviser.

Source: The Verge

Apple Music Introduces Voluntary AI Transparency Labels

Apple is asking artists and labels to voluntarily tag songs made using AI tools. The “Transparency Tags” system covers four categories: track, composition, artwork, and music videos. It’s voluntary - no AI usage will be assumed for untagged works.

Source: The Verge

Research: AI Can Unmask Anonymous Online Accounts

A new study from researchers at ETH Zurich and Anthropic shows AI agents can deanonymize pseudonymous social media accounts with up to 90% precision. The finding has serious implications for online privacy, potentially undermining the pseudonymity measures many people rely on for sensitive discussions.

Source: Ars Technica

Quick Hits

  • Jack Dorsey’s AI pivot: Block’s CEO laid off 40% of the workforce to rebuild the company “as an intelligence.” He told Wired he wants to use AI to replace most human roles at the company. Wired

  • Cursor launches Automations: The AI coding tool is rolling out a new agentic feature that launches agents automatically in response to code changes, Slack messages, or timers. TechCrunch

  • AWS healthcare AI agents: Amazon launched Amazon Connect Health, an AI agent platform for patient scheduling, documentation, and verification. TechCrunch

  • WhatsApp opens to rival AI chatbots: Meta is now letting third-party AI companies offer chatbots on WhatsApp in Brazil, following a similar move in Europe. TechCrunch

  • Cline supply chain attack: A prompt injection in an issue title allowed attackers to compromise the Cline coding agent’s production releases through cache poisoning - a cautionary tale for AI-powered CI/CD pipelines. Simon Willison

Worth Watching

The Anthropic-Pentagon standoff continues to reshape the AI industry. Claude’s consumer growth while blacklisted from defense work is an interesting twist - the company may be discovering that principled stances have commercial value among users who don’t want their AI provider building weapons. Meanwhile, OpenAI’s internal tensions over military work suggest this debate is far from settled in Silicon Valley.

The Meta glasses privacy revelation deserves close watching. We’re entering an era of ubiquitous AI wearables, and this is exactly the kind of surveillance infrastructure that privacy advocates have warned about. The lawsuit outcome could set important precedents for the entire industry.