The Prediction Reckoning: What AI Leaders Promised Six Months Ago vs. What Actually Happened

Dario Amodei said 90% of code would be AI-written by September. Elon Musk said AGI would arrive in 2025. The World Economic Forum predicted 85 million jobs displaced. Time to check the receipts.

In March 2025, Anthropic CEO Dario Amodei told the Council on Foreign Relations that AI would be writing 90% of code within three to six months. That deadline passed in September.

In May 2024, Elon Musk was asked “How long until AGI?” He answered: “Next year.” That year has come and gone.

In 2020, the World Economic Forum predicted that by 2025, 85 million jobs would be displaced by automation.

It’s time to check the receipts.

The 90% Code Prediction

Amodei’s claim was specific enough to verify. By September 2025, he said, AI would be writing 90% of code.

It didn’t happen. Not even close.

Google CEO Sundar Pichai revealed that more than 25% of Google’s new code was AI-generated as of late 2024. Microsoft CEO Satya Nadella put the figure at around 30% for some Microsoft repositories as of April 2025. Those are the highest figures from any major tech company - and they’re a third of what Amodei predicted for the entire industry.

Worse, research published after his prediction found that AI coding tools actually slowed down experienced software engineers. METR ran a randomized controlled trial with experienced open-source developers and found that tasks took roughly 19% longer when AI tools were allowed. Developers spent less time writing code but substantially more time reviewing AI output and refining prompts.

Cybersecurity researchers also found that developers using AI to generate code created ten times more security vulnerabilities than those writing code manually.

In September, Amodei clarified that he was talking about code at Anthropic specifically, not the industry. But that’s not what he said in March. The original prediction was clear enough that it made headlines as an industry forecast.

The 2025 AGI Prediction

Elon Musk has predicted AGI for 2025 since at least May 2024. In December 2025 - with weeks left in the year - he moved the goalpost to 2026.

His current position: xAI’s Grok 5 has a “10% chance” of achieving AGI. The model was supposed to arrive before the end of 2025. It’s now expected in Q1 2026.

This follows a pattern. Musk predicted Tesla would have fully autonomous vehicles by 2020. That didn’t happen either.

The 85 Million Jobs Displaced

The World Economic Forum’s 2020 prediction that 85 million jobs would be displaced by 2025 also missed badly - though in a way that reveals the complexity of these forecasts.

According to Brookings, there’s no evidence of large-scale AI-driven job loss in the US or other developed economies. Unemployment rates remain relatively low in countries most aggressively adopting AI.

A National Bureau of Economic Research survey of C-suite executives found that 90% said AI had no impact on their workforce employment over the prior three years.

Of the roughly 1.2 million job cuts announced in 2025, about 55,000 mentioned AI - less than 5%. Even Sam Altman admitted that many of those are “AI washing,” where companies blame technology for cuts they would have made anyway.

The WEF’s prediction included a caveat that 97 million new jobs would be created alongside the 85 million displaced. But when neither the displacement nor the creation happened at predicted scale, the forecast looks less like foresight and more like speculation.

Why the Predictions Fail

AI 2027, a forecasting project that tracks AI progress, recently graded its own 2025 predictions. The conclusion: “In aggregate, progress on quantitative metrics is at roughly 65% of the pace that AI 2027 predicted.”

The specific misses:

  • SWE-Bench Verified (a coding benchmark) was expected to hit 85% accuracy by mid-2025. Actual performance: 74.5%.
  • AI company valuations were expected to reach $500 billion by June 2025. They hit that mark in October - four months late.
  • AI R&D productivity uplift is behind schedule due to compute bottlenecks.

The forecasters identified a key error: they assumed AI-generated code would accelerate AI research itself, creating a feedback loop. That loop hasn’t materialized because code quality matters, and AI code still requires substantial human review.

The Incentive Problem

These predictions aren’t random guesses. They come from people who benefit from inflated expectations.

When Amodei predicts 90% AI-generated code, Anthropic - which sells AI coding tools - benefits from the hype. When Musk predicts AGI, xAI attracts investment. When the WEF predicts massive job disruption, consulting firms sell transformation services.

This doesn’t mean the predictions are deliberately false. But optimism bias is built into the forecasting system. No AI CEO has ever lost funding by predicting their technology would be more transformative than it turned out to be.

What Actually Happened

The past six months saw real AI progress, just not at the predicted pace:

  • Claude 4.6 and GPT-5.3 showed meaningful improvements over prior versions
  • AI agents became more reliable, though still far from autonomous
  • Enterprise adoption grew, with most companies now running AI pilots
  • Coding assistants became standard tools, genuinely useful for some tasks

The gap isn’t between “AI works” and “AI doesn’t work.” It’s between “AI is useful” and “AI will transform everything within months.”

What This Means

The consistent pattern of failed predictions has consequences beyond embarrassment.

First, it corrodes trust. When executives repeatedly promise transformations that don’t arrive, reasonable skepticism sets in. The people most harmed are those making actual progress - their real achievements get dismissed as more hype.

Second, it misallocates resources. Companies laying off workers in anticipation of AI capabilities that don’t exist hurt both their employees and their own operations. A Harvard Business Review analysis found 55% of employers regret AI-attributed layoffs.

Third, it creates policy confusion. When predictions vary from “AGI next year” to “nothing significant is happening,” policymakers can’t calibrate their responses. The EU AI Act was shaped partly by claims about imminent capabilities that still haven’t materialized.

The Accountability Gap

None of these predictions came with stakes attached. Amodei didn’t bet his job on 90% code. Musk faces no consequences for moving AGI to next year, again. The WEF continues releasing forecasts regardless of prior accuracy.

In a functioning marketplace of ideas, repeated failed predictions would reduce someone’s credibility. In AI, it often increases it - failed predictions at least show you’re thinking big.

Until there are consequences for overpromising, expect more of the same. Six months from now, we’ll probably be grading another round of predictions that didn’t pan out.

At least now we have a baseline. When the next round of forecasts arrives - and they will, probably claiming AGI or 95% AI-generated code by 2027 - remember what happened to the last ones.

The technology is real. The progress is genuine. The timelines are fiction.