In a single week in February, the people responsible for AI safety at three major labs walked out the door. Their warnings ranged from cryptic to explicit. Their former employers responded by accelerating deployment.
Mrinank Sharma, who led Anthropic’s Safeguards Research team, published his resignation letter on X: “The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.”
The same week, Zoë Hitzig resigned from OpenAI and published an essay in The New York Times titled “OpenAI Is Making the Mistakes Facebook Made. I Quit.” Two xAI co-founders departed within 24 hours of each other.
The Pattern
Sharma’s letter was deliberately ambiguous about specifics, but one line cut through: “Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions.”
This gap between stated values and actual practice has become the recurring theme. Before leaving, Sharma worked on understanding AI sycophancy and developing defenses against AI-assisted bioterrorism. He said he had “achieved what I wanted to here” and was “especially proud of my recent efforts to help us live our values via internal transparency mechanisms.”
That last part reads differently when you know what came next. Within weeks of his departure, Anthropic faced a Pentagon ultimatum to drop ethical restrictions on military AI or lose a $200 million contract and face blacklisting. The company loosened its core safety policy “to better adapt to a fast-moving market.”
The OpenAI Problem
Hitzig’s concerns were more specific. OpenAI started testing ads in ChatGPT the same day she resigned - a coincidence she noted with barely concealed fury.
“OpenAI has the most detailed record of private human thought ever assembled,” Hitzig wrote. “Users are interacting with an adaptive, conversational voice to which they have revealed their most private thoughts. Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”
She proposed alternatives: enterprise cross-subsidization models, independent oversight bodies. OpenAI moved forward with ads anyway.
Separately, OpenAI fired Ryan Beiermeister, a safety executive, after she voiced opposition to the company’s new “adult mode” allowing pornographic content in ChatGPT. The official reason given was that she had discriminated against a male employee.
Why This Should Worry You
The timing of these departures is the point. They’re not happening during a lull - they’re happening while AI labs are racing to deploy increasingly powerful systems and integrate them into military, healthcare, and financial infrastructure.
Consider what Anthropic was navigating when Sharma left: a standoff with the Pentagon that ended with the company modifying its safety policies, a wave of users switching from ChatGPT to Claude after OpenAI’s Pentagon deal, and enterprise pressure to match competitors’ less restrictive offerings.
The people who understand AI safety most deeply are concluding that their influence inside these companies is insufficient to prevent what they see coming. And instead of pausing to address their concerns, the labs are doubling down on deployment speed.
What’s Being Done (And Why It’s Not Enough)
Sharma says he’s moving back to the UK to focus on writing, poetry, and community work. Hitzig is advocating for AI governance reform from outside the industry. The xAI founders haven’t explained their departures publicly.
Mary Inman, a legal advocate specializing in tech whistleblowers, told Rest of World that “the next AI whistleblower could come from anywhere.” The infrastructure for protected disclosure is strengthening, but it remains unclear whether warnings from the outside carry the same weight as resistance from within.
Meanwhile, the labs continue hiring replacements. Safety teams get restaffed. The work continues.
But something changes when the people who built the safeguards start saying “the world is in peril” on their way out. Either they’re wrong about the risks they spent years studying - or the organizations they left are making decisions those experts consider unconscionable.
There’s no third option where everyone is right.