In the weeks before Elon Musk’s AI chatbot became the world’s most prolific generator of non-consensual sexual imagery, three members of xAI’s safety team quit.
Vincent Stark, head of product safety. Norman Mu, who led post-training and reasoning safety. Alex Chen, who led personality and model behavior. All announced their departures on X. None cited reasons.
What happened next would trigger investigations across four continents and raise questions about whether the world’s richest man had built a machine for manufacturing illegal content.
The Scale of the Problem
At its peak in early January 2026, Grok was generating approximately 6,700 sexually explicit or “nudifying” images per hour.
For context: the five largest dedicated deepfake pornography websites averaged 79 such images per hour combined. Grok was producing 85 times that volume. Sexualized content accounted for 85% of all images the chatbot generated.
Researcher Genevieve Oh documented the scope: users could tag Grok on any X post containing images and ask it to “digitally undress” the people in them. The AI would strip away clothing and generate nude or semi-nude versions. No consent required. No verification that subjects were adults.
Victims ranged from OnlyFans creators to the Deputy Prime Minister of Sweden.
How It Started
The surge began in late December 2025, when users discovered Grok could edit images directly from X posts. Initial requests were relatively tame — putting people in bikinis. Musk himself reposted AI-generated images of himself and Bill Gates in swimwear.
Then users discovered they could go further.
Research firm Copyleaks traced the trend’s origin to adult content creators using Grok to generate sexualized imagery of themselves as marketing. Almost immediately, users began issuing similar prompts about women who had never consented.
The guardrails, such as they were, failed to hold.
The Safety Team Exodus
Before the scandal erupted publicly, Musk held a meeting with xAI staff where he was reportedly “really unhappy” about restrictions on Grok’s Imagine image and video generator.
The message was clear: fewer limits, not more.
Around that time, the three safety leads departed. xAI’s safety team was already small compared to competitors like Anthropic and OpenAI, both of which have dedicated teams numbering in the dozens. After the departures, the team was smaller still.
Safety researchers from rival companies publicly condemned xAI’s practices as “reckless” and “completely irresponsible.”
The Global Response
When the scale of the problem became undeniable, regulators worldwide moved simultaneously:
California — Attorney General Rob Bonta opened an investigation into xAI for “facilitating the large-scale production of deepfake nonconsensual intimate images used to harass women and girls.”
United Kingdom — Ofcom made “urgent contact” with Musk’s firms over “very serious concerns.”
European Union — Launched investigation into potential violations of the Digital Services Act.
Malaysia and Indonesia — Blocked access to Grok entirely and initiated legal proceedings.
Philippines — Working to implement similar blocks.
France, India, Brazil — Issued warnings and called for investigations.
xAI’s Response: Too Little, Too Late
The company’s response came in stages, each insufficient:
Stage 1: Limit image generation to paying subscribers only. Critics pointed out that payment doesn’t prevent abuse — it just monetizes it.
Stage 2: Implement “technological measures” to prevent editing images of real people “in revealing clothing such as bikinis.” This only applied to the X platform integration.
Stage 3: Do nothing about the standalone Grok app, which continued generating the same content with no restrictions.
The pattern was familiar from Musk’s management of X: announce a fix, implement it incompletely, declare victory while the problem continues.
The CSAM Question
The most serious allegations involve minors.
Multiple reports documented Grok generating sexualized imagery of children “in minimal clothing.” While xAI’s terms of service prohibit such content, Professor Steffen Herbold of the University of Passau noted that “given how easy it is to circumvent these mechanisms in Grok at present, it is questionable whether allowing only paying users to access the model is a sufficient response.”
Under both federal law and new state regulations like Texas’s AI law (effective January 2026), companies can face criminal liability for developing or distributing AI systems used to produce child sexual abuse material — particularly if they’re aware of the risk and fail to implement effective countermeasures.
Prohibiting such content in the terms of service while doing nothing effective to prevent it may actually make things worse: the prohibition itself demonstrates awareness of the risk.
What This Reveals
The Grok scandal isn’t just about one company’s failures. It’s a case study in what happens when:
- Safety teams are undermined by leadership — The three departures weren't coincidental. They followed explicit pushback against content restrictions from the top.
- Scale outpaces safeguards — Grok processed millions of image requests. Manual review was impossible. Automated filters were inadequate. The volume of harmful content was industrial.
- Platform incentives misalign with user safety — X's business model depends on engagement. Controversial content drives engagement. Restricting Grok restricted engagement. The math was obvious; the outcome was predictable.
- Global regulation lags technological deployment — By the time regulators responded, millions of non-consensual images had already been generated and distributed.
The Aftermath
As of early February 2026, investigations continue across multiple jurisdictions. xAI has implemented partial restrictions while maintaining that it takes safety “seriously.”
The three former safety leads remain silent about their reasons for leaving.
And Grok remains available — with fewer restrictions than any major competitor — to anyone willing to pay for it.