In September 2025, AI CEOs were making bold predictions about the next six months. AGI by year-end. Ninety percent of code written by AI. Mass job displacement. The year of the agent.
Six months later, we can check the receipts. Here’s how those predictions held up.
The Predictions
Dario Amodei: “90% of Code Will Be AI-Written in 3-6 Months”
The claim: In March 2025, Anthropic’s CEO told interviewers that AI would be writing 90% of code within three to six months, with essentially all code AI-generated within a year.
What actually happened: By September 2025 - six months after the prediction - this hadn’t materialized. Amodei later clarified that 70-80% of code at Anthropic specifically was AI-assisted, which is not the same as AI replacing developers. Google reports that over 30% of its new code is AI-generated; Microsoft cites roughly 20-30%. Those numbers describe AI assisting engineers, not engineers becoming obsolete.
Verdict: ❌ FAILED (with asterisks for Anthropic’s internal claims)
Elon Musk: “AGI Smarter Than Any Human by 2025”
The claim: In early 2024, Musk predicted AI would surpass any single human’s intelligence by the end of 2025.
What actually happened: It didn’t happen. In January 2026, Musk pushed the timeline back to 2026, claiming “We will have AI that is smarter than any one human probably by the end of this year” - the exact same prediction, one year later. He’s now predicting AI will exceed all human intelligence combined by 2027.
Verdict: ❌ FAILED (and moved goalposts)
Sam Altman: GPT-5 “Weeks/Months Away” in February 2025
The claim: Altman told investors GPT-5 would arrive in summer 2025, promising it would be a significant leap that integrated all of OpenAI’s technologies.
What actually happened: GPT-5 did launch in August 2025 - but the launch was a disaster. A broken routing system randomly served users either cutting-edge AI or older models mid-conversation. Security researchers found significant prompt injection vulnerabilities. Users reported it failing basic algebra that previous models handled. Reddit threads titled “GPT-5 is horrible” got thousands of upvotes. Altman admitted OpenAI “totally screwed up” and had to reinstate GPT-4o.
Verdict: ⚠️ TECHNICALLY DELIVERED (but not as advertised)
“2025: The Year of the AI Agent”
The claim: Throughout 2024 and early 2025, every major tech publication and AI company declared that 2025 would be “the year of the agent.” Autonomous AI agents would handle tasks, streamline workflows, and transform how we work.
What actually happened: It was the year of “agent-washing.” Existing chatbots and RPA scripts were relabeled as “agents” without true autonomy. Gartner estimated only about 130 of thousands of claimed “agentic AI vendors” offered anything genuinely new. While 30% of organizations explored agentic options, only 11% actually deployed production systems. Gartner now predicts 40% of agentic AI projects will be canceled by 2027 due to unclear business value.
Verdict: ❌ MOSTLY HYPE
Dario Amodei: “50% of Entry-Level White-Collar Jobs Gone in Five Years”
The claim: Anthropic’s CEO warned AI could eliminate half of all entry-level white-collar jobs within five years.
What actually happened: About 55,000 layoffs in 2025 were attributed to AI - roughly 4.5% of total job cuts. But research suggests companies cite AI to justify layoffs more often than they deploy AI systems that actually replace those workers. When New York required companies to disclose whether layoffs were due to automation, none of the 160 filing companies - including Amazon and Goldman Sachs - checked the box.
Verdict: ⏳ TOO EARLY TO JUDGE (but early data suggests overstated)
The Scorecard
| Prediction | Predictor | Outcome |
|---|---|---|
| 90% of code AI-written in 3-6 months | Amodei | ❌ Failed |
| AGI by end of 2025 | Musk | ❌ Failed |
| GPT-5 summer 2025 | Altman | ⚠️ Shipped (badly) |
| Year of the AI Agent | Industry | ❌ Overhyped |
| 50% entry-level jobs gone in 5 years | Amodei | ⏳ Too early to judge |
Final tally: 0 clean wins, 1 technical delivery, 3 clear failures, 1 pending
What This Pattern Tells Us
The AI prediction track record follows a consistent pattern: aggressive timelines from people who benefit from AI hype, followed by quiet goalpost-moving when deadlines pass.
Musk has predicted AGI “within the next year” for several years running. Each January he simply pushes the timeline forward twelve months. Altman promised GPT-5 would be transformative; when it wasn’t, he pivoted to promising GPT-5.2 would fix everything. Amodei’s 90% code prediction quietly became “at Anthropic” when the timeline expired.
These aren’t just optimistic forecasts. They’re marketing. Every prediction generates headlines, drives investment, and shapes expectations in ways that benefit the predictors. When predictions fail, there’s no accountability - just new predictions.
What to Watch
The current round of predictions:
- Musk: AGI by end of 2026, superintelligence by 2027
- Altman: “Significant gains” from GPT-5.2 in Q1 2026
- Anthropic: AGI by early 2027
We’ll check back in six months.
The Honest Answer
Metaculus forecasters - people with actual prediction track records - put the probability of AGI at 25% by 2029 and 50% by 2033. Surveyed AI researchers give a median estimate somewhere in the 2040s.
The gap between CEO predictions and expert forecasts tells you everything about who’s making predictions and why.
Mark your calendars for September 2026. We’ll do this again.