AI News: Anthropic Cuts Off Third-Party Tool Access as Claude Racks Up 500 Zero-Days

Anthropic bans third-party tools from Claude subscriptions, MAD Bugs hits 500 zero-days, Gemma 4 ships under Apache 2.0, and Take-Two guts its AI team

Top Stories

Anthropic Bans Third-Party Tools From Using Claude Subscriptions

Anthropic pulled the plug on third-party access to Claude subscriptions at noon Pacific on Friday, ending the ability to route Claude Pro and Max subscription usage through OpenClaw, Aider, and other external agentic tools. The policy took effect April 4 and will expand to cover all third-party harnesses in the coming weeks.

Boris Cherny, head of Claude Code, said Anthropic’s subscriptions “weren’t built for the usage patterns of these third-party tools” and that external harnesses put “outsized strain” on compute resources. The core issue: Anthropic’s own tools like Claude Code and Cowork are optimized to maximize prompt cache hit rates, reusing previously processed text to save on compute. Third-party tools don’t do this, which means they burn through infrastructure at a much higher rate per user.
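To see why cache hit rates translate directly into infrastructure cost, here's a back-of-envelope sketch. The per-token prices and hit rates below are illustrative assumptions, not Anthropic's actual billing; the point is only that cached prompt reads are typically billed (and served) at a small fraction of the cost of freshly processed tokens, so a harness that resends its context cold is far more expensive per request.

```python
def request_cost(prompt_tokens, cached_tokens, base_price=3.0, cache_read_price=0.3):
    """Dollar cost of one request, with prices expressed per million input tokens.

    Assumes (illustratively) that cached prompt reads cost ~10% of the base
    input price -- the exact ratio varies by provider and model.
    """
    uncached = prompt_tokens - cached_tokens
    return (uncached * base_price + cached_tokens * cache_read_price) / 1_000_000

# A cache-optimized harness reuses ~95% of a 100k-token context across 50 turns;
# a naive harness resends the same context cold every time.
cold = sum(request_cost(100_000, 0) for _ in range(50))
warm = sum(request_cost(100_000, 95_000) for _ in range(50))
```

Under these made-up numbers the cold harness costs roughly 7x more for identical traffic, which is the shape of the "outsized strain" argument: same subscription price, very different compute bill.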

The fallout is significant. Users who relied on OpenClaw with a flat-rate subscription now face pay-as-you-go API pricing, with some users reporting cost increases of up to 50x their previous monthly spend. Anthropic offered a one-time credit equal to one month's subscription and up to 30% off pre-purchased usage bundles, but the goodwill gesture hasn't stopped the backlash. OpenClaw creator Peter Steinberger said he tried to negotiate a delay but only managed to push enforcement back one week.

This is Anthropic’s most aggressive platform-control move to date, and it signals a clear priority: keep the compute for products Anthropic controls.

Source: TechCrunch, VentureBeat, The Decoder

Claude’s MAD Bugs Program Hits 500 Zero-Days, Writes Working FreeBSD Kernel Exploit

Anthropic’s vulnerability-hunting initiative, MAD Bugs, has now surfaced more than 500 validated high-severity zero-day vulnerabilities in open-source software — and the program is still running through April. The most dramatic result: Claude Opus 4.6 autonomously wrote a working remote root exploit for FreeBSD kernel vulnerability CVE-2026-4747 in roughly four hours of compute time.

Researcher Nicholas Carlini, who leads the effort at Anthropic, demonstrated the pipeline at the [un]prompted AI security conference. He pointed Claude at Ghost, a publishing platform with 50,000 GitHub stars and no prior critical vulnerabilities in its history. Ninety minutes later, Claude had found a blind SQL injection in Ghost’s Content API that let an unauthenticated attacker compromise the admin database.
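Blind SQL injection of the kind described here generally comes down to user input being spliced directly into a query string. The snippet below is a generic illustration, not Ghost's actual code: the `search_unsafe` and `search_safe` helpers are hypothetical, using an in-memory SQLite table to contrast the vulnerable pattern with the parameterized fix.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER, title TEXT)")
conn.execute("INSERT INTO posts VALUES (1, 'hello')")

def search_unsafe(q):
    # Vulnerable pattern: user input is concatenated into the SQL text,
    # so crafted input can rewrite the query's logic.
    return conn.execute(f"SELECT title FROM posts WHERE title = '{q}'").fetchall()

def search_safe(q):
    # Parameterized query: input is bound as data and never parsed as SQL.
    return conn.execute("SELECT title FROM posts WHERE title = ?", (q,)).fetchall()

payload = "x' OR '1'='1"
```

Fed the same payload, the unsafe version matches every row because the injected `OR '1'='1'` is always true, while the parameterized version correctly matches nothing. The "blind" variant simply infers results indirectly (timing, error behavior) instead of reading them back, but the root cause is identical.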

The FreeBSD bug — a stack buffer overflow in the RPCSEC_GSS authentication module — had been patched on March 26, but Claude's exploit chain solved six distinct technical problems without human assistance to achieve remote code execution over NFS (port 2049). The AI also uncovered a Linux kernel vulnerability that had gone undetected for 23 years, plus critical bugs in Vim, Emacs, and Firefox.

The question nobody has a good answer for: who patches the vulnerabilities AI finds in abandoned or under-maintained software? Anthropic is disclosing responsibly, but the disclosure-to-patch pipeline assumes someone is on the other end.

Source: RoboRhythms, Calif.io, Futurum Group, WinBuzzer

Google Releases Gemma 4 Under Apache 2.0 — The License Matters More Than the Benchmarks

Google shipped Gemma 4 on April 2 at Cloud Next, and the headline isn't another round of benchmark numbers — it's the license. For the first time, every model in the Gemma family ships under Apache 2.0, the same permissive open-source license used across most of the software industry. No restricted-use clauses, no commercial limitations, no fine print limiting deployment.

The release includes four model sizes: Effective 2B (E2B), Effective 4B (E4B), a 26B Mixture-of-Experts model that activates only 3.8B parameters per token for fast inference, and a 31B dense model that currently ranks as the third-best open model on the Arena AI text leaderboard. All models handle video and images natively, and the smaller E2B and E4B variants also process audio input for speech recognition.

Google designed Gemma 4 for agentic workflows and edge deployment. The MoE architecture keeps the model fast on limited hardware, and a shared KV cache optimization reduces memory during long-context generation. Developers have downloaded Gemma models over 400 million times, with more than 100,000 community variants already built.
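For readers wondering how a 26B-parameter model can activate only 3.8B parameters per token: in a Mixture-of-Experts layer, a small gating network scores every expert but only the top-k actually run, so active compute scales with k rather than with the total expert count. The toy routing sketch below is a generic illustration of that idea, not Gemma's architecture; all names and sizes are made up, and real MoE layers use learned matrices per transformer block rather than standalone functions.

```python
import math
import random

random.seed(0)
D, N_EXPERTS, K = 8, 4, 2
calls = []  # records which experts actually execute

def make_expert(i):
    # Each "expert" is just a random linear map here, standing in for an FFN.
    w = [[random.gauss(0, 1) for _ in range(D)] for _ in range(D)]
    def expert(x):
        calls.append(i)
        return [sum(w[r][c] * x[c] for c in range(D)) for r in range(D)]
    return expert

experts = [make_expert(i) for i in range(N_EXPERTS)]
gate_w = [[random.gauss(0, 1) for _ in range(D)] for _ in range(N_EXPERTS)]

def moe_forward(x, k=K):
    # The gate scores all experts cheaply, but only the top-k run,
    # so per-token compute is k/N_EXPERTS of the dense equivalent.
    logits = [sum(gw[c] * x[c] for c in range(D)) for gw in gate_w]
    topk = sorted(range(N_EXPERTS), key=lambda i: logits[i])[-k:]
    m = max(logits[i] for i in topk)
    weights = [math.exp(logits[i] - m) for i in topk]  # softmax over top-k
    z = sum(weights)
    out = [0.0] * D
    for wgt, i in zip(weights, topk):
        y = experts[i](x)
        out = [o + (wgt / z) * v for o, v in zip(out, y)]
    return out

y = moe_forward([random.gauss(0, 1) for _ in range(D)])
```

With 2 of 4 experts active per token, only half the expert parameters are touched; scale the same ratio up and a 26B-total model running ~3.8B active parameters per token is plausible.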

The Apache 2.0 shift is Google’s clearest concession to the open-source AI community. Previous Gemma releases carried a custom license that, while permissive, created enough legal uncertainty to make enterprise legal teams nervous. That’s gone now.

Source: Google Blog, VentureBeat

Quick Hits

  • Take-Two guts its AI team: The GTA publisher laid off Luke Dicken, its head of AI, and an undisclosed number of team members on April 2 — barely two months after CEO Strauss Zelnick said Take-Two was “actively embracing generative AI.” The company declined to comment. Game Developer, TweakTown

  • DeepSeek V4 shapes up as the biggest open-source release yet: The trillion-parameter MoE model, with only ~37B active parameters and a 1M-token context window, is expected to ship under Apache 2.0. It’s been optimized to run on Huawei Ascend and Cambricon chips, a demonstration that frontier-scale training is possible without Nvidia hardware — and training reportedly cost just $5.2 million. QverLabs, DigitalApplied

  • Six labs now ship competitive open-weight models: Google (Gemma 4), Alibaba (Qwen 3.6 Plus), Meta (Llama 4), Mistral (Small 4), OpenAI (gpt-oss-120b), and Zhipu AI (GLM-5) all have open-weight offerings that match or approach proprietary alternatives on key benchmarks. The gap between open and closed models has shrunk to roughly three months. DigitalApplied

  • 2026 tech layoffs pass 85,000: April is on track to be the worst month yet, driven by Oracle’s 30,000-person cut and continued restructuring at Meta, Google, and Amazon. AI-native companies keep hiring while legacy enterprise firms shed roles to fund infrastructure buildouts. Bloomberg

Worth Watching

The model release calendar for Q2 is stacking up fast. OpenAI’s “Spud” (likely GPT-5.5 or GPT-6) has finished pretraining. Anthropic’s Mythos model, accidentally revealed in the March 26 data leak, is being trialed with early-access enterprise customers — prediction markets give it a 73% chance of public launch by June. DeepSeek V4’s full release under Apache 2.0 could reset expectations for what open-source models can do at scale. And behind all of it, the question of who controls compute access is becoming as important as who builds the best model. Anthropic’s OpenClaw cutoff is just the opening move.