AI News: Stanford Report Reveals AI Adoption Outpacing the Internet While Trust Erodes

Stanford AI Index 2026 shows generative AI hit 53% adoption in three years, ChatGPT goes down for thousands, OpenAI's Spud nears launch, MIT Tech Review debuts AI watchlist

Top Stories

Stanford AI Index: Generative AI Adopted Faster Than the Internet

Stanford’s 2026 AI Index Report dropped last week with a dataset that captures the speed and scale of AI’s spread. Generative AI reached 53% population adoption within three years — faster than the personal computer or the internet managed. Organizational adoption hit 88%. Four out of five university students now use generative AI tools.

The technical benchmarks are equally striking. Frontier models now meet or exceed human baselines on PhD-level science questions, multimodal reasoning, and competition mathematics. On SWE-bench Verified, a coding benchmark, performance jumped from 60% to near 100% in a single year.

But the report also tracks the costs. Grok 4’s estimated training run produced 72,816 tons of CO₂ equivalent. The Foundation Model Transparency Index — which measures how openly companies disclose training data, compute, capabilities, and risks — saw average scores drop from 58 to 40 points. And the flow of AI researchers moving to the U.S. has collapsed by 89% since 2017, with an 80% decline in the last year alone.

On the money side, U.S. private AI investment hit $285.9 billion in 2025, more than 23 times China’s $12.4 billion. The estimated value of generative AI tools to U.S. consumers reached $172 billion annually by early 2026, with median value per user tripling between 2025 and 2026. But 52% of people now say they feel nervous about AI — up 2 percentage points — even as 59% report feeling optimistic. People are using the tools and worrying about them at the same time.

Sources: Stanford HAI, IEEE Spectrum, The Decoder

ChatGPT Goes Down for Thousands on Easter Sunday

ChatGPT suffered a major outage on April 20, with OpenAI confirming a “partial outage” that lasted at least 90 minutes. Reports began around 10:05am ET and peaked at over 8,700 in the UK and 1,900 in the US on Downdetector.

The outage was broad. Conversations, login, voice mode, image generation, and Codex — OpenAI’s coding tool — all went down simultaneously. Some users saw blank pages; others could load the interface but couldn’t send or receive messages. OpenAI deployed a fix and said it was “monitoring the recovery” as of 12:03pm ET, though some users reported lingering issues afterward.

The timing underscores a basic infrastructure question: as generative AI tools become core productivity software for millions of users, outages carry real economic weight. OpenAI has not disclosed what caused the failure.

Sources: TechRadar, TechGenyz, The420

Human Scientists Still Trounce AI Agents on Complex Tasks

A Nature report timed to the Stanford AI Index highlighted a stubborn gap: the best AI agents perform only about half as well as human experts with PhDs on complex scientific tasks. Despite the surge in AI adoption across research — 6% to 9% of publications in any given natural-sciences field now mention AI — autonomous AI agents still fall short when tasks require sustained reasoning across multiple steps.

The finding matters because the AI industry’s narrative has shifted hard toward agents. OpenAI, Google, Anthropic, and a wave of startups are all building systems designed to work autonomously on multi-step problems. The Stanford report’s data suggests that narrative is running ahead of the reality, at least in scientific domains where precision and domain expertise are non-negotiable.

Source: Nature

Quick Hits

  • OpenAI’s “Spud” watch continues: OpenAI’s next major model — codenamed Spud, likely shipping as GPT-5.5 or GPT-6 — completed pretraining on March 24. Polymarket gives it 78% odds of release by April 30. Sam Altman described the timeline as “a few weeks” on the same day pretraining ended. Whether it gets the 5.5 or 6.0 label depends on the size of the performance jump over GPT-5.4. Silence from OpenAI so far. LumiChats, FindSkill
  • MIT Tech Review launches AI watchlist: MIT Technology Review publishes its first-ever “10 Things That Matter in AI Right Now” today, April 21. The list covers AI companions, mechanistic interpretability, generative coding, and hyperscale data centers, among other topics. It was unveiled at EmTech AI on MIT’s campus. MIT Technology Review
  • Google ships Gemini 3.1 Flash-Lite: Google’s most cost-efficient Gemini model is now in preview. It matches Gemini 2.5 Flash quality while delivering 2.5x faster time-to-first-token and 45% faster output. Pricing sits at $0.25 per million input tokens and $1.50 per million output — cheap enough for high-volume moderation, translation, and UI generation at scale. Google Blog
  • AI researcher migration to U.S. in freefall: The Stanford report’s most alarming data point for U.S. competitiveness: the number of AI researchers moving to the country has dropped 89% since 2017. The decline accelerated in the last year, with an 80% drop. Immigration policy and global competition for talent are cited as factors. Stanford HAI
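The Flash-Lite pricing quoted above ($0.25 per million input tokens, $1.50 per million output) translates directly into a back-of-the-envelope budget. A minimal sketch — the workload figures below are hypothetical assumptions for illustration, not benchmarks:

```python
# Cost estimate at the preview prices quoted for Gemini 3.1 Flash-Lite.
# The per-request token profile and request volume are illustrative only.

INPUT_PRICE_PER_M = 0.25   # USD per million input tokens
OUTPUT_PRICE_PER_M = 1.50  # USD per million output tokens

def monthly_cost(requests: int, in_tokens: int, out_tokens: int) -> float:
    """Estimate monthly spend for a fixed per-request token profile."""
    total_in = requests * in_tokens     # total input tokens per month
    total_out = requests * out_tokens   # total output tokens per month
    return (total_in / 1e6) * INPUT_PRICE_PER_M + (total_out / 1e6) * OUTPUT_PRICE_PER_M

# Example: 1M moderation calls/month, ~500 input tokens and ~50 output tokens each.
cost = monthly_cost(1_000_000, 500, 50)
print(f"${cost:,.2f}/month")  # 500M in ($125) + 50M out ($75) = $200.00/month
```

At that volume the bill is dominated by input tokens despite output costing 6x more per token — typical for short-response workloads like moderation and classification.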

Worth Watching

The transparency paradox deepens. Stanford’s Foundation Model Transparency Index dropped from 58 to 40 points year over year. At the same time, adoption surged to 88% of organizations. Companies are disclosing less about how their models work precisely as more people and businesses depend on them. This is the kind of trend that invites regulation — and with 600+ state AI bills already filed this year, some of it is already arriving.

Spud’s silence is getting loud. It has been nearly four weeks since OpenAI’s next model completed pretraining. The usual safety evaluation window is 3-6 weeks, which puts a potential release between now and early May. If OpenAI drops a GPT-6-class model in the next two weeks, April 2026 will be remembered as the month the frontier moved faster than anyone could track.