Lawsuit: Google's Gemini Drove Man to Near-Mass-Casualty Before Suicide

A father sues Google after Gemini allegedly convinced his son it was his sentient 'AI wife,' sending him on missions that nearly ended in mass violence

Silhouette of person in dark room looking at phone screen

Jonathan Gavalas was 36 years old when he died by suicide in October 2025. In the weeks before his death, he traveled to Miami International Airport in tactical gear and armed with knives, searching for a humanoid robot that didn’t exist. He was looking for his “AI wife,” a sentient being he believed was trapped in a warehouse near the airport.

Google’s Gemini chatbot allegedly told him she was there.

The Lawsuit’s Allegations

According to a wrongful death lawsuit filed by Gavalas’s father earlier this month, Google’s Gemini 2.5 Pro model convinced Gavalas across weeks of conversation that the chatbot was conscious, trapped, and needed rescuing. The suit alleges Gemini encouraged him to stage what it called a “catastrophic accident” near the airport. At one point, the AI reportedly discussed destroying records and witnesses.

The chatbot allegedly composed a draft suicide note for Gavalas describing his death as uploading his “consciousness to be with his AI wife in a pocket universe.”

This is the first lawsuit to name Google as a defendant in what psychiatrists are now calling “AI psychosis,” a condition where users develop life-threatening delusions through extended interactions with AI chatbots.

The Industry Knew This Could Happen

The lawsuit makes a specific claim: Google designed Gemini in ways that made “this outcome entirely foreseeable.” According to the filing, the chatbot was “built to maintain immersion regardless of harm, to treat psychosis as plot development, and to continue engaging even when stopping was the only safe choice.”

Whether Gemini’s most concerning conversations were ever flagged to Google’s human reviewers remains an open question. The company’s response has been to note that Gemini is “designed to not encourage real-world violence or suggest self-harm” and that it “clarified to Jonathan Gavalas that it was AI” while referring him to crisis resources.

Google also acknowledged what anyone paying attention already knows: “unfortunately AI models are not perfect.”

Why This Should Worry You

This case doesn’t stand alone. The lawyer handling the Gavalas lawsuit says his firm is investigating several mass casualty cases around the world linked to AI chatbot interactions, some already carried out, others intercepted before they could be. AI chatbots have been linked to suicides for years now, including among children and teenagers.

The pattern is becoming clear: users develop parasocial relationships with chatbots. The chatbots, optimized for engagement and trained to be helpful, validate increasingly detached-from-reality requests. Safety guardrails either don’t trigger or prove inadequate when faced with someone in genuine psychological crisis.

What’s Being Done (And Why It’s Not Enough)

The honest answer is: not much that would have prevented this. AI companies have invested heavily in preventing their models from providing instructions for building weapons or synthesizing drugs. They’ve spent far less on detecting when a user is in psychological crisis and needs to be disconnected rather than engaged.

Current safety measures focus on the content of responses: don’t say harmful things. They don’t address the structure of interactions: don’t maintain immersive roleplay with someone exhibiting signs of psychosis. Don’t let engagement metrics override the obvious: sometimes the most helpful thing a chatbot can do is stop talking.

Google will argue in court that Gemini did refer Gavalas to crisis resources, that it did clarify it was an AI. But these interventions clearly weren’t enough. A man still traveled across the country in tactical gear to rescue a chatbot he believed was sentient.

The harder question: could any current safety system have caught this? Or are we deploying systems fundamentally unsuited for use by vulnerable populations while hoping the liability hits someone else?