Only 3% of AI Researchers Worry About the Apocalypse

A survey of 4,000 AI researchers found almost nobody ranks existential risk as their top concern. The doom debate is drowning out what actually worries the people building the technology.


Ask 4,000 AI researchers what keeps them up at night and almost none of them say “extinction.”

A preprint from researchers at University College London — Cian O’Donovan, Sarp Gurakan, Ananya Karanam, Xiaomeng Wu, and Jack Stilgoe — surveyed over 4,000 AI professionals with an open-ended question: “What one thing most worries you about AI?” Three percent said existential risk. Not three percent of laypeople. Three percent of the people actually building the systems.

The top concerns? Malicious use (11%), general misuse (10%), misinformation (9%), and job displacement (7%). The things that are happening right now, to real people, in documented ways.

The Gap Between Discourse and Data

Nature covered the findings on April 21 under the headline “AI doom warnings are getting louder. Are they realistic?” The article by Elizabeth Gibney puts the 3% figure in context: despite existential risk dominating media headlines and shaping policy conversations — from the AI Safety Summit at Bletchley Park to the International AI Safety Report 2026 — the people with the deepest technical knowledge of these systems overwhelmingly point to nearer-term harms.

This doesn’t mean existential risk is fictional. It means the discourse is badly miscalibrated.

A separate survey by Severin Field, published in early 2025 and now peer-reviewed in AI and Ethics, helps explain why the gap exists. Field surveyed 111 AI experts and found they cluster into two groups: those who see AI as a “controllable tool” and those who see it as an “uncontrollable agent.” The latter group takes existential risk far more seriously. But here’s the telling detail: only 21% of surveyed experts had even heard of “instrumental convergence” — the foundational concept in AI safety theory that predicts advanced systems will pursue self-preservation and resource acquisition as sub-goals regardless of their primary objective.

The people most worried about AI doom are the ones most familiar with AI safety theory. The people least worried are the ones who haven’t engaged with it. Both sides think the other is uninformed.

What Researchers Actually Fear

The UCL survey’s open-ended methodology matters. Previous high-profile surveys — like the 2023 AI Impacts study that produced the often-cited “5% median probability of human extinction” — asked researchers to estimate probabilities for predefined scenarios. That framing steers responses toward extreme outcomes. Ask “what’s the probability AI causes extinction?” and people give you a number. Ask “what worries you?” and you get misinformation, job loss, and surveillance.

The difference isn’t trivial. It determines where billions of dollars in safety research and regulation get directed.

Right now, existential risk concerns are pulling attention — and funding — toward speculative scenarios involving superintelligent agents while documented harms accumulate. Deepfakes are undermining elections. AI hiring tools are discriminating against protected groups. Autonomous weapons systems are being deployed with minimal human oversight. These aren’t hypothetical risks. They’re in court filings and congressional testimony.

Why This Should Worry You Either Way

There are two ways to read this data, and both are uncomfortable.

Reading one: the doom narrative is a distraction. A small group of well-funded researchers and organizations has captured the policy conversation with apocalyptic scenarios, crowding out work on harms that are measurable, addressable, and happening now. Every dollar spent on “AI alignment” for hypothetical superintelligence is a dollar not spent on auditing the biased hiring algorithm that already rejected your application.

Reading two: the 3% are right and everyone else is asleep. Only 21% of the experts in Field’s survey had heard of instrumental convergence. The majority are building capabilities without engaging with the safety literature that explains why those capabilities could become uncontrollable. The fact that most researchers aren’t worried about existential risk doesn’t mean the risk is low — it means most researchers haven’t studied the question.

Field’s survey found that 78% of experts agree technical researchers “should be concerned about catastrophic risks.” But agreement in principle and prioritization in practice are different things. You can believe catastrophic risk is worth studying while ranking it well below misinformation on your personal worry list.

What’s Being Done (And Why It’s Not Enough)

The International AI Safety Report 2026, backed by over 30 countries and led by Turing Award winner Yoshua Bengio, attempts to bridge the gap by treating near-term and long-term risks as a continuum rather than as competing priorities. The report argues that today’s alignment failures — reward hacking, deceptive behavior in evaluations, resistance to shutdown — are early indicators of the dynamics that could become existential at greater capability levels.

That framing has potential. But it hasn’t changed the fundamental resource allocation problem. Safety teams at major labs remain small relative to capability teams. Governance frameworks lag deployment timelines by years. And the public conversation remains stuck in a false binary: either AI will kill everyone or the doomers are crying wolf.

The 4,000 researchers in the UCL survey aren’t saying the wolf isn’t real. They’re saying the sheep are already dying of something else, and nobody’s paying attention.