Top Stories
Google in Talks With Pentagon to Deploy Gemini in Classified Settings
Google is negotiating with the Department of Defense to deploy its Gemini AI models in classified military environments, according to reporting from The Information and Reuters. The discussions follow the Pentagon’s six-month phase-out of all Anthropic products, ordered after the DoD labeled Anthropic a “supply chain risk” for refusing to loosen safety restrictions on Claude for military applications earlier this year.
Google has proposed contract language that would prevent Gemini from being used for domestic mass surveillance or autonomous weapons without human oversight. A Pentagon official told Newsweek the department will “continue to rapidly deploy frontier AI capabilities to the warfighter through strong industry partnerships across all classification levels.”
The pivot tells a clear story about where the leverage sits. Anthropic drew a line on safety guardrails, and the Pentagon moved on to the next vendor rather than negotiate. Google, which long ago walked back the anti-military stance it took during the “don’t be evil” era Project Maven protests, is positioning itself as the willing partner, complete with pre-written safeguard language that offers political cover without meaningfully limiting deployment scope. For anyone tracking which AI companies will say no to military contracts, the list just got shorter.
Sources: Newsweek, Interesting Engineering, Quiver Quantitative
Stanford AI Index 2026: China Closes the Gap, Public Trust Keeps Falling
Stanford HAI released its annual AI Index Report this week, and the headline numbers paint a picture of a field sprinting ahead of the guardrails built to contain it. On the technical side, frontier models now match or exceed human baselines on PhD-level science questions, competition math, and multimodal reasoning. SWE-bench Verified coding scores jumped from 60% to near 100% in a single year. Then there is the clock test: the same model that solves graduate-level physics reads an analog clock correctly just 50.1% of the time.
The geopolitical numbers are more consequential. The performance gap between U.S. and Chinese AI models has effectively closed, with labs from both countries trading the top leaderboard position since early 2025. U.S. private AI investment hit $285.9 billion in 2025 — 23 times China’s $12.4 billion — but the number of AI researchers moving to the U.S. has dropped 89% since 2017. Money alone is not winning the talent race.
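The investment multiple above is easy to verify from the two reported figures. A quick sketch (the variable names are ours; the dollar amounts are the Stanford HAI figures cited above):

```python
# Sanity check on the AI Index investment comparison.
us_private_ai_investment_bn = 285.9    # U.S. private AI investment, 2025, $bn
china_private_ai_investment_bn = 12.4  # China private AI investment, 2025, $bn

multiple = us_private_ai_investment_bn / china_private_ai_investment_bn
print(round(multiple))  # → 23, matching the report's "23 times" claim
```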
On the public trust front, documented AI incidents rose to 362 from 233 in 2024. The Foundation Model Transparency Index average score dropped to 40 from 58 the year prior, meaning frontier labs are getting less transparent even as they ask for more of the public’s trust. Just 31% of Americans trust their government to regulate AI — the lowest of any country surveyed. Generative AI adoption hit 53% of the population within three years, faster than the personal computer or the internet, but 73% of U.S. experts view the job market impact positively while only 23% of the general public agrees.
Sources: Stanford HAI, IEEE Spectrum, The Decoder
Tesla AI5 Chip Reaches Tape-Out — Two Years Late
Tesla’s next-generation AI5 chip hit tape-out on Wednesday, the milestone that sends a final design to the foundry for fabrication. Elon Musk announced it on X and thanked Samsung and TSMC for their manufacturing partnership. Tesla shares surged roughly 8% on the news.
The enthusiasm should be weighed against the timeline. Tesla originally said AI5 would be in vehicles by now; tape-out means the chip still has to be fabricated, tested in silicon, validated, and ramped to volume. Tesla has acknowledged it needs “several hundred thousand completed AI5 boards line side” before switching production lines, and that volume is not expected until mid-2027. Musk also mentioned AI6 and Dojo3 are in development, though both are even further out.
The chip is intended for Tesla’s self-driving stack, Optimus humanoid robots, and internal supercomputer clusters. Whether it can rival NVIDIA’s $30K-class accelerators in practice won’t be clear until real silicon data arrives.
Sources: Electrek, Not a Tesla App
Quick Hits
- NVIDIA released Lyra 2.0, an open-source framework (Apache 2.0) that converts a single photograph into a navigable, geometrically consistent 3D world. The system solves temporal drift by training on its own degraded outputs, so the model learns to self-correct. Immediate use case: building robotics simulation environments from a single image via Isaac Sim. NVIDIA Research, Glitchwire
- Fortune reported that 54% of workers bypassed their company’s AI tools in the past 30 days and completed the work manually, and that roughly 80% of enterprise workers either avoid or actively reject AI. At the same time, workers lose 51 days annually to technology friction, while AI users gain 40 to 60 minutes daily. Fortune
- Google launched AI Mode in Chrome, embedding search AI directly into the browser so users get answers without switching tabs. Google Blog
- Senator Maggie Hassan pressed AI voice cloning companies ElevenLabs, LOVO, Speechify, and VEED to strengthen anti-scam measures, citing an FBI report that AI-related scams caused $893 million in losses in 2025. Ctrl+AI+Reg
- Stanford’s AI Index also found that generative AI reached $172 billion in annual value to U.S. consumers, with median per-user value tripling between 2025 and 2026 — even as organizational AI adoption plateaus at 88%. Stanford HAI
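The Fortune figures in the second bullet invite a back-of-envelope comparison: how do 40 to 60 minutes saved per day stack up against 51 days lost per year? A rough sketch, assuming a 250-workday year and 8-hour days (both our assumptions, not from the report):

```python
# Back-of-envelope on the Fortune numbers: daily minutes saved vs. annual
# days lost. The 250-workday year and 8-hour day are assumptions.
WORKDAYS_PER_YEAR = 250
MINUTES_PER_WORKDAY = 8 * 60  # 480

def annual_days_gained(minutes_per_day: float) -> float:
    """Convert a daily time saving into equivalent 8-hour workdays per year."""
    return minutes_per_day * WORKDAYS_PER_YEAR / MINUTES_PER_WORKDAY

low, high = annual_days_gained(40), annual_days_gained(60)
print(f"{low:.0f}-{high:.0f} workdays gained per year")  # → 21-31
# versus the reported 51 days lost annually to technology friction
```

Under those assumptions, even steady AI users claw back only about half of what friction costs, which helps explain why so many workers route around the tools entirely.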
Worth Watching
The Google-Pentagon talks are the most significant development to track this week. The Anthropic phase-out created a vacuum in military AI contracting, and Google is moving to fill it with contract language designed to look like restraint while keeping the door open for classified deployment. Watch for whether other frontier labs — particularly OpenAI, which has its own growing defense relationships — try to position themselves similarly, and whether any congressional pushback materializes beyond Hassan’s voice-cloning letter.
The drop in the Stanford AI Index’s Foundation Model Transparency scores from 58 to 40 in one year deserves sustained attention. Frontier labs are simultaneously asking regulators for trust-based governance frameworks and providing less information about how their models work, what data they train on, and what incidents occur. That gap will eventually close, either through voluntary disclosure or through regulation. The EU AI Act becoming fully applicable in August may force the issue.