AI News: Google Signs Pentagon AI Deal as 600 Employees Protest

Daily roundup for April 29, 2026 covering Google's classified military AI agreement, the Musk v. OpenAI trial, and critical security vulnerabilities in AI infrastructure

Top Stories

Google Signs Classified AI Deal With Pentagon, One Day After Employee Protest

Google signed a classified agreement with the U.S. Department of Defense on April 28, granting the Pentagon access to its Gemini AI models for “any lawful government purpose.” The deal includes classified data, making it a significant expansion of Google’s existing unclassified government AI contract.

The timing was striking: just one day earlier, more than 600 Google employees — including members of DeepMind — had signed an open letter urging CEO Sundar Pichai to reject exactly this kind of arrangement. The letter warned that Google’s AI could be used for “inhumane or extremely harmful” purposes, citing lethal autonomous weapons and mass surveillance as specific concerns.

Under the agreement, Google will “assist in adjusting its AI safety settings and filters at the government’s request” and has no “right to control or veto lawful government operational decision-making.” An anonymous source suggested the two sides agreed the technology should not be used for domestic mass surveillance or autonomous weapons “without appropriate human oversight and control,” though the enforceability of such verbal assurances in a classified contract remains unclear.

Google now joins OpenAI and xAI in providing AI to the Pentagon for classified work. Anthropic remains the notable exception — it was blacklisted from federal use after refusing to remove weapon and surveillance-related safeguards from its models.

Sources: Engadget, 9to5Google, Gizmodo

Musk v. OpenAI Trial Begins in Oakland

The long-awaited trial between Elon Musk and OpenAI opened on April 28 in Oakland, California. Musk took the stand on day one, telling the jury it’s “not OK to loot a charity” and claiming he conceived of OpenAI, named it, recruited its key people, and provided all initial funding — at least $44 million in its first years.

Musk’s lawyers entered OpenAI’s 2015 founding charter into evidence, which stated the organization would create “open source technology for the public benefit” and was “not organized for the private gain of any person.” The core dispute: whether OpenAI’s conversion to a for-profit structure violated its original mission and whether Musk deserves roughly $130 billion in damages returned to OpenAI’s nonprofit foundation.

OpenAI’s lead attorney Bill Savitt countered that “we are here because Mr. Musk didn’t get his way at OpenAI. He quit, saying they would fail for sure. But my clients had the nerve to go on and succeed without him.” The trial is expected to run roughly four weeks, with Sam Altman and Greg Brockman set to testify under oath.

Sources: NPR, CNBC, Washington Post

Critical RCE Vulnerability in Hugging Face’s LeRobot Remains Unpatched

Security researchers disclosed CVE-2026-25874, a critical vulnerability (CVSS 9.3) in Hugging Face’s open-source robotics platform LeRobot that allows unauthenticated remote code execution. The flaw exists in LeRobot’s PolicyServer component, which uses Python’s pickle.loads() to deserialize incoming data over gRPC endpoints without any authentication or input validation.

An attacker who can reach the PolicyServer network port can send a malicious serialized payload to execute arbitrary commands on the host machine. The vulnerability has been confirmed against LeRobot version 0.4.3 and remains unpatched, with a fix planned for version 0.6.0.
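The researchers' exact payload is not public, but the class of attack is easy to sketch: a pickle stream can name any importable callable via an object's __reduce__ method, and pickle.loads() invokes that callable during deserialization — no bug in the receiving code is required. A minimal, harmless illustration (the Exploit class and the os.getcwd stand-in are hypothetical, not taken from the advisory):

```python
import pickle
import os


class Exploit:
    """Hypothetical attacker-side class showing pickle's __reduce__ hook."""

    def __reduce__(self):
        # A real exploit would return something like (os.system, ("<cmd>",)).
        # Here we return a harmless callable to prove arbitrary invocation:
        # unpickling will call os.getcwd() with no arguments.
        return (os.getcwd, ())


# Attacker serializes the object...
payload = pickle.dumps(Exploit())

# ...and a vulnerable server does the equivalent of this with untrusted bytes.
# The callable runs *during* deserialization, before any validation could occur.
result = pickle.loads(payload)

print(type(result))  # the return value of os.getcwd(), i.e. a str
```

This is why untrusted input should never reach pickle.loads(); schema-bound formats such as JSON or protobuf messages (which gRPC already provides) do not carry executable behavior.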

The timing matters: as AI moves from software demos into physical robotics systems, deserialization vulnerabilities carry consequences beyond data loss. A compromised robot controller could cause physical harm.

Sources: The Hacker News, Resecurity

Quick Hits

  • Microsoft Entra ID patched: Microsoft fixed a privilege escalation flaw in the Agent ID Administrator role that allowed users to hijack arbitrary service principals. Silverfort found that 99% of tenants met the conditions for exploitation. The Hacker News

  • Proofpoint: 42% of orgs report AI incidents: New report finds 87% of organizations have deployed AI assistants beyond pilot, but over half say their security posture is “catching up, inconsistent or reactive.” 42% report a suspicious or confirmed AI-related incident. Proofpoint

  • FISA 702 stalls in Congress: Privacy advocates warn that without meaningful reform, Americans face increasingly invasive AI-powered analysis of communications swept up by intelligence programs. Lawmakers got only a 10-day extension while debates continue. NBC News

  • Anthropic ARR hits $30B: Up from roughly $1 billion at the start of 2025 — a 30x increase in just over a year, driven by enterprise adoption and the Google investment pipeline. CryptoIntegrat

Worth Watching

The Musk v. OpenAI trial will produce weeks of testimony from key figures including Altman and Brockman. Whatever the verdict, the discovery process alone is likely to surface internal communications that reshape public understanding of OpenAI’s early governance decisions. The Google-Pentagon deal raises immediate questions about what “appropriate human oversight” means when the contract itself is classified and Google has ceded veto power over government decision-making.