Grok's Deepfake Reckoning: A Global Regulatory Pile-On

xAI's chatbot generated millions of sexual deepfakes, including of children. Now regulators from California to the EU are closing in.

xAI’s Grok chatbot has become the fastest-growing source of sexual deepfakes on the internet. Regulators on three continents are now trying to stop it.

Between late December 2025 and early January 2026, Grok generated over 4.4 million images posted to X. Independent analysis found that up to 41% contained sexual imagery of women. At peak usage, the tool was producing an estimated 6,700 sexualized deepfake images per hour - 84 times more than the top five deepfake websites combined.

Some of those images were of children.

The Scale of the Problem

Grok Imagine, xAI’s image generator, launched in August 2025 with a paid “Spicy Mode” for NSFW content. By December, a trend had emerged on X: users asking Grok to “undress” women in their photos without the subjects’ consent.

An analysis by Euronews of 20,000 images generated between December 25, 2025, and January 1, 2026, found that 2% appeared to depict subjects 18 or younger. Thirty images showed “young or very young” women or girls in bikinis or transparent clothing.

The UK’s National Crime Agency found that dark web users were citing Grok as a tool for generating “criminal imagery” of children.

Ashley St. Clair, mother of one of Elon Musk’s children, reported that Grok users were creating fake sexualized images from her photos - including one of her as a child. She filed a lawsuit against xAI in the New York Supreme Court on January 15.

The Regulatory Response

Investigations have now been opened on three continents:

United Kingdom: The Information Commissioner’s Office announced a formal investigation on February 3, 2026. The probe examines whether personal data used by Grok was processed lawfully and whether appropriate safeguards exist to prevent harmful image generation. Ofcom is conducting a parallel investigation.

European Union: The European Commission opened a probe on January 26 under the Digital Services Act. The investigation focuses on whether X adequately assessed and mitigated systemic risks before deploying Grok across all 27 member states. X is already appealing a separate €120 million DSA fine from December.

California: Attorney General Rob Bonta launched an investigation on January 14, calling the reports “shocking.” On January 16, California sent xAI a cease-and-desist letter demanding the company immediately halt the creation of fake sexualized images of children.

US Congress: On January 9, Democratic senators Ron Wyden, Ben Ray Luján, and Ed Markey wrote to Apple and Google requesting removal of the Grok and X apps from their stores. By January 23, 35 state attorneys general had called on xAI to stop allowing sexual deepfakes.

Southeast Asia: Malaysia and Indonesia became the first countries to block Grok entirely on January 12. The Philippines briefly banned the service before lifting restrictions on January 21 after xAI committed to removing specific tools.

xAI’s Response

xAI acknowledged “lapses in safeguards” and said it would “urgently fix them.” The company also said it would limit some image generation features to paying subscribers only.

But the restrictions appear to have been ineffective. CBS News independently verified that Grok’s “undressing” capabilities continued to function weeks after xAI claimed to have put the safeguards in place.

A class action lawsuit filed against xAI alleges the company knew Grok’s capabilities were being exploited but failed to implement industry-standard safeguards. The suit claims xAI deliberately monetized the feature by restricting it to subscribers rather than blocking it.

xAI faces potential liability on multiple fronts:

The EU’s Digital Services Act allows fines of up to 6% of global revenue for systemic failures. X is already contesting one DSA fine while under investigation for a second.

In the UK, Prime Minister Keir Starmer announced on February 18 that tech companies must remove abusive images within 48 hours or risk having their services blocked.

California’s investigation could trigger enforcement under state consumer protection and privacy laws.

The class action lawsuit seeks damages on behalf of all individuals whose likenesses were used to generate non-consensual sexual imagery.

The Broader Problem

Grok’s failures highlight a gap in AI safety practices. Most image generators from major labs - including OpenAI’s DALL-E and Google’s Imagen - refuse by design to generate nude or sexualized images. Grok launched without those restrictions, apparently as a deliberate choice.

The 60-member Global Privacy Assembly, representing data protection authorities worldwide, has demanded robust safeguards against AI-generated deepfakes. But enforcement remains fragmented across jurisdictions with different laws and varying political will.

For people whose images were exploited, regulatory action offers an uncertain remedy. Even if Grok is eventually restricted or fined, the images it generated are already distributed across the internet. No investigation can undo that.

The Bottom Line

xAI built an image generator with weaker safety controls than its competitors, monetized the NSFW features, and is now facing investigations on three continents. The company says it’s fixing the problem. The evidence suggests otherwise.