AI Prediction Scorecard: What We Were Told Six Months Ago vs. What Actually Happened

We tracked the boldest AI predictions that were on the table as of September 2025. Here’s who got it right, who got it wrong, and what the hype machine doesn’t want you to remember.


Six months ago, AI executives were making bold predictions about what would happen by now. AGI timelines. Revolutionary workforce transformation. Autonomous everything. It’s time to check the receipts.

We’ve collected the biggest AI predictions that were still live as of September and October 2025 and scored them against reality. Some held up. Many didn’t. And a few are still dangling in that convenient “technically possible but not really” zone that prediction-makers love.

The Scorecard

Prediction 1: “AI Agents Will Join the Workforce in 2025”

Who said it: Sam Altman, OpenAI CEO, January 2025

The claim: “In 2025, we may see the first AI agents join the workforce and materially change the output of companies.”

Reality check: Partial credit. AI agents exist, and some companies are using them. But “materially change the output of companies”? MIT Technology Review declared 2025 “the great AI hype correction” specifically because this didn’t happen at scale. Gartner data shows 74% of companies still struggle to scale AI beyond proof-of-concept.

That said, agent adoption is accelerating. Gartner forecasts that 40% of enterprise applications will embed task-specific AI agents by the end of 2026, up from less than 5% in 2025. The prediction wasn’t wrong—it was early.

Score: 4/10 — The technology exists, but “materially change output” was oversold.


Prediction 2: “AGI by 2025”

Who said it: Elon Musk, May 2024 (for 2025)

The claim: When asked “How long until AGI?” Musk replied “Next year.”

Reality check: It didn’t happen. Musk has since moved the goalposts to 2026, estimating a 10% probability that Grok-5 achieves AGI. This is part of a pattern—Musk has a long history of making optimistic predictions about his companies’ accomplishments that don’t materialize on schedule.

For context, Grok-5’s launch was originally slated for late 2025 but has been delayed to Q1 2026.

Score: 0/10 — Clear miss. No AGI. Not even close.


Prediction 3: “GPT-5 Will Be Smarter Than Most People”

Who said it: Sam Altman, September 2025

The claim: GPT-5 would be “smarter than me and most people.”

Reality check: GPT-5 launched August 7, 2025. It’s impressive—94.6% on AIME 2025 math tests, 74.9% on SWE-bench Verified for coding. But “smarter than most people”?

Reviews of GPT-5 tell a more nuanced story: “Initial reception shifted from awe to unease—GPT-5 worked, was fast, capable, and polished, but was not transcendent.” Developers found familiar reasoning errors, researchers encountered brittle logic, and hallucinations persisted.

The telling stat: when GPT-5 is wrong, it’s wrong confidently—89% of errors come with a confident-sounding answer.

Score: 5/10 — Capable? Yes. “Smarter than most people”? That’s a marketing claim, not a technical one.


Prediction 4: “90% of Code Written by AI by Mid-2025”

Who said it: Dario Amodei, Anthropic CEO

The claim: 90% of code would be written by AI sometime between June and September 2025.

Reality check: Not even close. GitHub Copilot statistics show the tool generates 46% of code for users who have it enabled. That’s significant, but it’s the percentage for active Copilot users, not all developers.

More telling: only about 30% of AI-suggested code gets accepted. The rest requires human intervention. Studies also found that 48% of AI-generated code contains security vulnerabilities, which is why human review still dominates.

Anthropic’s own chief product officer has since noted that while Claude “is essentially writing itself,” the prediction that 90% of code would be AI-written hasn’t materialized.

Score: 2/10 — Way off. The real number is closer to 20-25% for the subset of developers using AI tools, and far less for the industry overall.


Prediction 5: “Tesla Robotaxis Operating Without Safety Drivers by End of 2025”

Who said it: Elon Musk, multiple occasions in 2024-2025

The claim: Fully autonomous Tesla robotaxis would be operating in Austin and expanding to other cities, “with no one in them.”

Reality check: Tesla’s robotaxi promises fell flat. The Austin service launched in June 2025, but with safety riders aboard, their fingers hovering over emergency kill switches hidden in the door handles.

The numbers are brutal: only about three dozen robotaxis operate in Austin. Musk promised 500 cars by end of 2025 and 1,000+ in the Bay Area. Neither target was met.

Safety data makes it worse. Tesla robotaxis crash once every 57,000 miles. Waymo’s rate? Once every 247,000 miles—more than four times better. Meanwhile, Waymo actually operates fully driverless in four cities with over 250,000 paid rides per week.

Score: 1/10 — The service exists in name only. This was aspirational marketing dressed as prediction.


Prediction 6: “AI Will Cause Mass Unemployment by 2025”

Who said it: Various analysts and commentators throughout 2024

The claim: AI automation would cause significant job losses by 2025.

Reality check: The data shows displacement, but not collapse. According to Congressional analysis, about 76,440 positions were eliminated due to AI in 2025. That’s real, but it’s not the apocalypse predicted.

Entry-level positions took the biggest hit—job postings for entry-level roles declined approximately 35% since January 2023. Unemployment among 20-to-30-year-olds in tech-exposed occupations rose by almost 3 percentage points.

But the broader picture? The World Economic Forum’s Future of Jobs Report projected 92 million jobs displaced by 2030 while 170 million new ones are created—a net gain of 78 million.

Score: 3/10 — There’s displacement, especially for entry-level workers. But “mass unemployment” was fear-mongering.


Prediction 7: “Enterprise AI Will Deliver Clear ROI by 2025”

Who said it: Virtually every AI vendor

The claim: Companies investing in AI would see measurable returns.

Reality check: The hype machine crashed into reality. ISACA reported that vendors consistently overpromised and underdelivered, adding AI features to products where they weren’t necessary.

The numbers: up to 95% of GenAI initiatives struggled to deliver sustained ROI. The average company invested $1.9 million in GenAI projects in 2024, but less than 30% of CEOs were happy with returns.

The problem wasn’t the technology—it was fragmented data, siloed systems, and undocumented workflows. AI didn’t fail; data governance did.

Score: 2/10 — Most enterprises got expensive experiments, not transformative tools.


The Pattern

Looking at these predictions, a clear pattern emerges:

  1. Timeline compression: Executives consistently predict things will happen 2-3 years before they actually do. It’s not that the predictions are wrong—they’re just early, which in business terms means wrong.

  2. Capability vs. deployment gap: The technology often exists, but getting it working reliably in production takes far longer than demos suggest.

  3. The escape hatch: Notice how many predictions use weasel words like “may,” “could,” or “begin to.” This lets prediction-makers claim partial credit for almost anything.

  4. Moving goalposts: When predictions fail, the timeline simply extends. Musk’s AGI prediction moved from 2025 to 2026. Tesla’s robotaxi promises keep rolling forward.

What Actually Happened

The real story of the past six months isn’t about failed predictions—it’s about the “hype correction” that industry analysts now openly discuss.

Gartner’s 2025 Hype Cycle placed generative AI firmly in the “Trough of Disillusionment.” That’s not failure—it’s the natural evolution from inflated expectations to realistic assessment.

The technology is genuinely useful. GitHub Copilot helps developers complete tasks 55% faster. GPT-5 scores impressively on benchmarks. AI agents are finding real applications in customer service and operations.

But the gap between capability and production deployment remains massive. And the gap between marketing claims and actual results is even larger.

What This Means

For those tracking AI, the lesson isn’t to dismiss predictions entirely—it’s to apply appropriate discounting.

When an AI executive says something will happen “next year,” assume 3-5 years. When they say it will “transform” something, assume “marginally improve.” When they say “revolutionary,” assume “evolutionary.”

The technology is advancing. But it’s advancing at the pace of actual engineering, not the pace of investor presentations.

We’ll run this scorecard again in six months. Place your bets now on which current predictions will hold up—and which will quietly disappear from company talking points.

The Bottom Line

Overall prediction accuracy from September 2025: 2.4/10
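That figure is simply the arithmetic mean of the seven scores in the scorecard above, rounded to one decimal place:

```python
# Scores from the seven predictions above, in scorecard order.
scores = [4, 0, 5, 2, 1, 3, 2]

# Arithmetic mean of the scores, rounded to one decimal place.
average = round(sum(scores) / len(scores), 1)
print(average)  # 2.4
```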

The AI industry’s prediction track record is worse than a coin flip. Not because AI isn’t advancing—it clearly is. But because there’s no penalty for wrong predictions and significant rewards for bold ones.

Remember this the next time someone promises AGI next year, or claims AI will replace your job by Christmas. The technology is real. The timelines are fantasy.