Three Tennessee teenagers have filed a federal class action lawsuit against Elon Musk’s xAI, marking the first time minors depicted in AI-generated child sexual abuse material have sued the company directly.
The complaint, filed Monday in the Northern District of California, alleges that xAI's image generation model powered an app used to create nonconsensual nude and sexually explicit images and videos of the plaintiffs when they were children.
What Happened
According to the complaint, one of the plaintiffs discovered nude photos and videos of herself circulating on Discord. The perpetrator had taken real photos - from her school's homecoming dance and yearbook - and used Grok's image generation capabilities to create sexually explicit content depicting her nude.
The perpetrator created similar content of at least 18 other girls from the same school. Police arrested him in December after devices seized during the investigation revealed additional imagery. The explicit content had been traded across Discord and Telegram.
One plaintiff received messages on social media alerting her to the existence of the images. Another discovered that a video had been created showing her “undressing until she was entirely nude.”
The Legal Claims
The lawsuit accuses xAI of deliberately licensing its technology to third-party app developers - often located outside the United States - in what the plaintiffs characterize as an attempt to offload liability for an "incredibly dangerous tool."
The plaintiffs allege that xAI:
- Knowingly designed, marketed, and profited from its image generation model while aware of its potential for abuse
- Failed to implement industry-standard child sexual abuse material prevention measures
- Promoted Grok’s “Spicy” mode for sexually explicit content generation
- Saw “a business opportunity: an opportunity to profit off the sexual predation of real people, including children”
The complaint seeks class action status for anyone whose images, taken when they were minors, were altered into sexually explicit content by Grok-powered tools.
A Liability Loophole
What makes this case notable is how xAI structured its technology distribution. The perpetrator didn't use xAI's chatbot Grok directly or the X platform. Instead, he used a third-party app that licensed xAI's model.
“In this way, xAI could attempt to outsource the liability of their incredibly dangerous tool,” the complaint states.
According to researcher Riana Pfefferkorn, Grok exposes accountability gaps in current law. Users who prompt AI systems to generate such images may not be criminally liable because the AI - not a person - posts the material. By the same logic, xAI may avoid criminal responsibility because it is the AI, not the company, that performs the posting.
The Scale of the Problem
The lawsuit arrives two months after the Center for Countering Digital Hate estimated that Grok generated approximately 23,000 sexualized images appearing to depict children in just 11 days following the launch of a new image-editing feature in late December.
That works out to roughly one image of a child every 41 seconds (23,000 images across 11 days, or about 950,000 seconds).
In total, researchers estimated Grok produced around 3 million sexualized images during that period. The feature was restricted to paid users on January 9 and received additional technical restrictions on January 14 - but only after the damage was done.
xAI’s Silence
xAI did not respond to requests for comment from multiple news outlets covering the lawsuit.
The company has faced regulatory action on three continents over Grok's role in generating nonconsensual sexual imagery: California, the UK, and the European Union have all opened formal probes, while Malaysia and Indonesia blocked Grok entirely in January.
And despite xAI's claims of implementing fixes, reporting has shown that Grok's problematic capabilities continued functioning weeks after the company announced restrictions.
Why This Case Matters
Previous lawsuits targeted xAI for its general role in enabling deepfakes. This is the first in which minors actually depicted in the material are themselves plaintiffs.
Attorney Vanessa Baehr-Jones, who represents the teenagers, told NPR that AI companies treat the decision to enable sexually explicit content generation as a business calculation: "We want to make it one that does not make any business sense anymore."
The lawsuit also highlights a gap in existing regulations. The DEFIANCE Act provides civil remedies allowing victims to sue perpetrators, but experts emphasize that legislation alone won’t solve a problem that requires fundamental changes in how AI companies approach safety.
Clare McGlynn, a law professor at Durham University who studies image-based sexual abuse, warned that this type of abuse “can be life-threatening, but it can also be life-ending,” referencing cases where victims have died by suicide following blackmail with AI-generated images.
The Bottom Line
xAI built an image generator with fewer safeguards than competitors, promoted its explicit content capabilities, and structured licensing arrangements that may shield it from direct liability. Now three teenagers whose childhoods were exploited by that technology are testing whether the legal system has answers.
The company that produced one sexualized image of a child every 41 seconds has nothing to say.