AI News: Amazon Bets $25 Billion More on Anthropic as AI Arms Race Intensifies

Daily roundup for April 22, 2026, covering Amazon's massive Anthropic investment, Meta's employee keystroke tracking for AI training, and Claude's new design tool

Top Stories

Amazon Pours Up to $25 Billion More Into Anthropic

Amazon announced an expanded deal with Anthropic on Sunday that includes $5 billion in immediate funding and up to $20 billion more tied to commercial milestones. The investment, struck at a $350 billion valuation, brings Amazon’s total potential commitment to over $33 billion.
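For readers tallying the figures, the numbers reconcile as a quick back-of-the-envelope check (a minimal sketch; the $8 billion prior-investment figure is an assumption based on Amazon's previously reported Anthropic commitments, not stated in the announcement):

```python
# Reconciling the headline numbers on the expanded Amazon-Anthropic deal.
immediate_funding_b = 5    # immediate tranche, in $ billions
milestone_funding_b = 20   # additional funding tied to commercial milestones, $B
prior_investment_b = 8     # ASSUMPTION: Amazon's previously reported investments, $B

# New commitment announced Sunday: $5B now plus up to $20B in milestones.
new_commitment_b = immediate_funding_b + milestone_funding_b

# Total potential commitment: prior investments plus the new deal.
total_potential_b = prior_investment_b + new_commitment_b

print(f"New commitment: up to ${new_commitment_b}B")     # up to $25B
print(f"Total potential: over ${total_potential_b}B")    # over $33B
```

Under that assumption, the arithmetic matches the reported "over $33 billion" total potential commitment.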

The deal is more than just a check. Anthropic pledged to spend over $100 billion on AWS infrastructure over the next decade, including current and future generations of Amazon’s custom Trainium AI chips. Anthropic has also secured up to 5 gigawatts of compute capacity for training and deploying Claude, with nearly 1 gigawatt expected by the end of 2026.

This is the kind of deal that reshapes the industry. Amazon gets a frontrunner AI lab locked into its cloud platform for a decade. Anthropic gets the capital and hardware to compete at the frontier. Every other cloud provider — Microsoft with OpenAI, Google with its own models — now has to respond. The AI infrastructure race just got substantially more expensive.

Sources: CNBC, TechCrunch

Meta Will Track Employee Keystrokes to Train AI

Meta is installing monitoring software on U.S. employees’ work computers that captures mouse movements, clicks, keystrokes, and some screenshots. The data feeds directly into the company’s AI training pipeline.

The internal tool, called Model Capability Initiative (MCI), runs across work apps and websites. According to a staff memo from Meta’s Superintelligence Labs team, the purpose is to help AI models learn basic computer-use behaviors — navigating dropdown menus, using keyboard shortcuts, the kinds of tasks that current models still struggle with.

Meta says the data won’t be used for performance evaluations. But that assurance raises more questions than it answers. The broader goal is clearly to build AI agents capable of performing white-collar computer tasks autonomously — the same race OpenAI and Anthropic are running. Using employee behavior as training data is a novel approach, and one that is already illegal in some countries, including Italy. In the U.S., there are no federal limits on workplace surveillance of this kind.

Sources: Fortune, Gizmodo

Anthropic Ships Claude Opus 4.7 and Claude Design

Anthropic rolled out Claude Opus 4.7 last week, its most capable generally available model. The headline numbers: 87.6% on SWE-bench Verified, 94.2% on GPQA, a 1 million token context window, and 3.3x higher image resolution support (up to 2576px). Pricing stays the same as Opus 4.6.

The more interesting launch came alongside it. Claude Design is a new Anthropic Labs product for creating prototypes, slides, pitch decks, and mockups through conversation. Users describe what they want, Claude builds a first version, and the design gets refined through back-and-forth or direct edits. It can read a company’s codebase and design files to maintain visual consistency.

Exports go to PDF, PPTX, URL, or directly to Canva for collaborative editing. Available now in research preview for Claude Pro, Max, Team, and Enterprise subscribers.

Sources: CNBC, TechCrunch

Quick Hits

  • Horizon Robotics unveils Stellar chip: China’s first integrated cockpit-and-driving AI chip launched today, consolidating autonomous driving and smart cockpit functions onto a single chip. Mass production is expected this year, cutting per-vehicle costs by $200 to $550.
  • New York’s RAISE Act finalized: Governor Hochul signed the chapter amendment on March 27, giving the nation’s first frontier AI safety law its final form. Developers of frontier models operating in New York must publish safety protocols by January 1, 2027, or face penalties up to $3 million per violation.
  • State AI bills keep piling up: Over 600 AI bills have been introduced in state legislatures this session. Recent additions include Nebraska’s chatbot disclosure law for minors and Louisiana’s requirement that patients consent before AI transcribes medical visits.
  • Neuro-symbolic AI slashes energy 100x: Tufts University researchers built a neuro-symbolic system for robotics that hit 95% accuracy on manipulation tasks versus 34% for standard models — while using 1% of the training energy. The work will be presented at ICRA in Vienna next month.

Worth Watching

The Amazon-Anthropic deal signals that the AI infrastructure build-out is accelerating, not slowing. Anthropic’s $100 billion AWS commitment over 10 years and its 5-gigawatt compute target represent bets on a future where model training and deployment require industrial-scale energy. Meanwhile, Meta’s employee monitoring program for AI training data is worth tracking — if it works, other companies will follow. The privacy implications of turning workers into unwitting AI training subjects extend well beyond Meta’s campus.

On the regulatory front, the growing pile of state AI laws is creating a patchwork that the White House’s AI Preemption Executive Order and the DOJ’s new AI Litigation Task Force are explicitly designed to push back on. The tension between state regulation and federal preemption will define how AI governance plays out in the U.S. through the rest of 2026.