The AI safety conversation has a favorite villain: the superintelligent system that seizes control in a single dramatic move. Paperclip maximizers. Rogue AGI. The decisive moment when humanity loses the game.
Atoosa Kasirzadeh, a philosopher at the University of Edinburgh and the Alan Turing Institute, argues this fixation on a single catastrophic event is blinding us to a more plausible path to extinction — one that’s already underway.
The Accumulative Hypothesis
In a paper now published in Philosophical Studies, Kasirzadeh introduces a framework that divides AI existential risk into two categories. The “decisive” hypothesis is the one everyone talks about — a superintelligent system outmaneuvers humanity in a definitive strike. The “accumulative” hypothesis is the one almost nobody is planning for.
The accumulative path doesn’t require superintelligence. It doesn’t require a single rogue system. It requires exactly what we have now: many AI systems, deployed across critical infrastructure, each creating manageable problems that compound over time until systemic resilience collapses.
Think of it as the boiling frog problem, applied to civilization. No single temperature increase is lethal. But the water is already warming.
How It Works
Kasirzadeh identifies three mechanisms through which AI-driven disruptions accumulate:
Local causality with systemic impact. Individual AI failures — a biased hiring algorithm here, a flawed medical diagnosis there — look contained. But they ripple across interconnected systems, and enough of them can push the whole past critical thresholds that no single failure could have breached on its own.
Selective infrastructure connectedness. AI systems are now embedded in finance, healthcare, defense, energy, and communications. They don’t need to be superintelligent to create cascading failures. They just need to be wrong in ways that propagate through connected infrastructure.
Multidirectional feedback loops. AI systems shape the social, economic, and political structures that in turn shape the next generation of AI systems. Misinformation erodes trust, which weakens governance, which produces worse AI regulation, which enables more harmful deployments. The loop tightens with each cycle.
The paper maps six specific risk domains — accountability failures, representational bias, misinformation, privacy erosion, dual-use weaponization, and systemic instability — and shows how they interact to degrade the structures that keep civilization functional.
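The interaction between these mechanisms is easier to see in a toy model than in prose. The sketch below is purely illustrative (every parameter is invented; none of it comes from Kasirzadeh's paper), but it captures the shape of the argument: small local shocks, spillover between connected sectors, and a feedback term that weakens repair as systemic damage grows.

```python
import random

# Toy model of accumulative AI risk. Every parameter here is invented
# for illustration; none of this comes from Kasirzadeh's paper.
random.seed(0)

N_SECTORS = 6        # e.g. finance, healthcare, defense, energy, comms, media
LOCAL_SHOCK = 0.02   # mean size of one "manageable" local AI failure per year
COUPLING = 0.15      # how strongly damage spills into connected sectors
FEEDBACK = 0.5       # how much existing systemic damage amplifies new damage
COLLAPSE_AT = 0.7    # systemic damage beyond which recovery fails

damage = [0.0] * N_SECTORS

for year in range(1, 31):
    systemic = sum(damage) / N_SECTORS        # average damage across sectors
    amplifier = 1 + FEEDBACK * systemic       # mechanism 3: feedback loops
    repair = 0.01 * (1 - systemic)            # healthy systems absorb shocks

    for i in range(N_SECTORS):
        local = random.uniform(0, 2 * LOCAL_SHOCK)  # mechanism 1: local failures
        spillover = COUPLING * systemic             # mechanism 2: connectedness
        damage[i] += amplifier * (local + spillover) - repair
        damage[i] = min(1.0, max(0.0, damage[i]))

    systemic = sum(damage) / N_SECTORS
    print(f"year {year:2d}: systemic damage = {systemic:.2f}")
    if systemic > COLLAPSE_AT:
        print("threshold crossed -- and no single year's failures caused it")
        break
```

The numbers are arbitrary; the point is the shape of the curve. Damage creeps, then compounds, then crosses the threshold, and at no year along the way does any individual failure look existential.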
The MISTER Scenario
To make the abstract concrete, Kasirzadeh constructs what she calls the “Perfect Storm MISTER” scenario: AI-driven Manipulation corrodes public trust. Information security erodes as AI-powered attacks overwhelm defenses. Surveillance expands under the guise of security. Trust between citizens and institutions breaks down. Economic structures destabilize as AI concentrates wealth and displaces labor. Rights protections weaken as states deploy AI against their own populations.
None of these is an existential risk on its own. Together, they hollow out the systems that would normally absorb shocks. Then comes what Kasirzadeh calls the “triggering event” — a continent-scale cyberattack, a cascading infrastructure failure, a geopolitical crisis — and the weakened system can’t recover.
The triggering event isn’t the cause of collapse. It’s the match that falls on kindling that’s been drying for years.
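The asymmetry is easy to demonstrate. In the hedged sketch below (a made-up recovery model, not anything from the paper), an identical shock hits two systems that differ only in how much resilience the years of accumulation have left them:

```python
def recovers(resilience: float, shock: float, steps: int = 100) -> bool:
    """Can a system with the given resilience absorb a one-off shock?

    Illustrative only: health regrows in proportion to resilience,
    while unrepaired damage keeps compounding. All numbers invented.
    """
    health = 1.0 - shock                       # the triggering event hits
    for _ in range(steps):
        regrowth = 0.05 * resilience * health  # repair capacity
        decay = 0.03 * (1.0 - health)          # unrepaired damage compounds
        health = min(1.0, health + regrowth - decay)
        if health <= 0.0:
            return False                       # the cascade wins
    return health > 0.9

SHOCK = 0.4  # the same continent-scale event in both runs

print("intact system   (resilience 0.9):", recovers(0.9, SHOCK))  # -> True
print("hollowed system (resilience 0.3):", recovers(0.3, SHOCK))  # -> False
```

Same match, different kindling: the intact system climbs back above full health; the hollowed one slides past the point where repair can outpace decay.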
Why This Should Worry You
The 2026 International AI Safety Report, chaired by Yoshua Bengio and backed by over 100 experts from 30 countries, found that the most pressing risks from AI “may come not from the models themselves, but from the complex systems companies build around them.” That is the accumulative hypothesis in a sentence.
An Axios investigation in December 2025 found that major AI companies’ rhetoric about existential risk “has not yet translated into quantitative safety plans, concrete alignment-failure mitigation strategies, or credible internal monitoring and control interventions.” They’re planning for the decisive scenario — the dramatic takeover — while the accumulative damage goes unaddressed.
This matters because the two risk types demand completely different responses. Decisive risk calls for alignment research and containment. Accumulative risk calls for systemic governance, infrastructure resilience, and institutional strengthening — the boring, expensive, politically difficult work that no one gets credit for until something goes very wrong.
What’s Being Done (And Why It’s Not Enough)
The AI safety field remains overwhelmingly focused on the decisive scenario. Alignment research, red-teaming, interpretability work — these are essential, but they address only half the problem. Almost no major AI governance framework treats accumulative risk as a first-class concern.
Kasirzadeh argues that effective governance requires integrating AI ethics (which focuses on near-term harms) with AI safety (which focuses on existential risk), recognizing that the former is a pathway to the latter. The gap between the two fields isn’t just academic — it’s a structural blind spot that leaves the accumulative pathway largely unmonitored.
The water is getting warmer. Whether we notice before it boils depends on whether we’re looking for the right kind of danger.