“We’re definitely going to build a bunker before we release AGI.”
That’s what Ilya Sutskever, then OpenAI’s chief scientist, told his team in 2023. He believed researchers would need physical protection once they achieved artificial general intelligence—the technology was too powerful not to become “an object of intense desire for governments globally.”
Karen Hao spent over two years investigating OpenAI. She interviewed 250 people. About 90 of them were current or former OpenAI employees and executives. Her book, Empire of AI, is now a New York Times bestseller. In a March 26 podcast appearance, she laid out what they told her.
It’s not a story about scrappy researchers trying to save the world. It’s a story about a company that’s gaslighting the public while building something its own leaders privately call a potential demon.
The Bunker and the Rapture
Some OpenAI employees described their colleagues’ beliefs to Hao in voices “quivering with fear.”
One researcher said: “There is a group of people—Ilya being one of them—who believe that building AGI will bring about a rapture. Literally a rapture.”
This wasn’t metaphor. Sutskever staged ceremonies at company retreats, including burning an effigy representing “deceptive AGI.” Employees watched their scientific institution transform into something resembling a religious movement centered on the imminent arrival of superhuman intelligence.
Sutskever assured colleagues the bunker would be optional: “Of course, it’s going to be optional whether you want to get into the bunker.”
How reassuring.
What the Safety Team Actually Saw
Hao’s sources described a company where safety was theater.
One safety engineer told her that “software testing for a basic calculator receives more scrutiny than OpenAI’s safety measures.” The company borrowed credible terminology—“red teaming”—while implementing superficial assessments.
When ChatGPT unexpectedly went viral, the Trust and Safety team numbered barely a dozen people. They couldn’t monitor millions of users. New products launched faster than monitoring systems could be built.
The internal culture fractured into three irreconcilable factions: Exploratory Research (pushing capabilities forward), Safety (trying to prevent catastrophic risks), and Startup (commercializing as fast as possible). The “Doomers” versus “Boomers” warfare culminated in Sam Altman’s brief firing and reinstatement.
The people who wanted to slow down lost.
The Mission Drift
OpenAI started as a nonprofit dedicated to ensuring AGI benefits humanity. That’s not what it is now.
The “capped profit” structure it adopted allows 100x returns. Someone investing $10 million could make $1 billion. That’s not a nonprofit structure. That’s a startup with good PR.
Altman himself legally owned OpenAI’s $100 million Startup Fund, an investment vehicle that should have belonged to the company itself, creating undisclosed financial ties to companies throughout OpenAI’s ecosystem.
The data practices match the structure. According to Hao’s reporting, OpenAI trained on millions of books without permission. It transcribed YouTube videos against terms of service. It scraped everything from Twitter to Pastebin—abandoning quality for scale, treating intellectual property and personal data as unowned territory to be colonized.
The People Who Actually Train the AI
While executives debate the rapture, the people doing the actual work tell a different story.
OpenAI contracted Kenyan workers at $1.46 to $3.74 per hour to moderate the worst content the internet produces. One contractor, Mophat Okinyi, spent months reviewing descriptions of child abuse. The work destroyed his mental health.
Scale AI, which provides training data for OpenAI and other companies, recruited Venezuelan workers during hyperinflation. It promised decent wages, then slashed pay from $40 per week to under $6 per week once workers were dependent.
This is the foundation of the AI revolution: traumatized contractors in the Global South making poverty wages to clean up data so the models don’t say slurs.
The Gaslighting
Hao’s central charge is that AI companies provide contradictory definitions of their technology depending on the audience.
Congress gets different promises than consumers. Investors hear different numbers than regulators. The public gets reassured that AI is helpful and harmless. The researchers get told they might need bunkers.
When Timnit Gebru was fired from Google for raising concerns about language models, the industry made clear what happens to internal critics. People see things that concern them, try to raise those concerns, and face pressure, marginalization, restrictive NDAs, and threats to their equity.
OpenAI reportedly uses a custom ChatGPT internally to identify leakers. The company that advocates for openness in AI development uses AI surveillance to suppress internal dissent.
What They Actually Believe
Dario Amodei, who left OpenAI to found Anthropic, has put his estimate of catastrophic outcomes at “somewhere between 10% and 25%.”
That’s not a small number. That’s not the odds of a startup failing. That’s the odds of something going “quite catastrophically wrong on the scale of human civilization.”
Sutskever predicted that “it’s fairly likely that it will not take too long of a time for the entire surface of the Earth to become covered with data centers and power stations.”
These are the people building the technology. They’re telling us what they actually believe when they think the cameras are off.
The Mythology
Hao makes a deeper point about how we got here.
“Artificial intelligence” itself was a 1956 marketing invention. John McCarthy needed something more compelling than “automata studies” to attract funding. He chose a name that embedded unrealistic anthropomorphic expectations—expectations that persist today and enable dangerously misplaced trust in systems that hallucinate and fabricate rather than truly understand.
The entire field was built on hype from day one. The terminology was designed to deceive. And now we have a company whose own leaders talk about building bunkers and raptures while their press releases promise to benefit all of humanity.
What This Means
Roughly ninety people from inside the company talked. Many of them put their names to their concerns. They described a culture of secrecy, of safety theater, of exploitation, of messianic delusion mixed with commercial ambition.
This is the company training the most capable AI systems in the world. The company that governments treat as a partner. The company whose technology increasingly mediates how millions of people access information.
The bunker was optional, Sutskever said. That’s the tell. They’re building something they think might require them to hide underground—and they’re going to release it anyway.