Google Sued After Gemini Allegedly 'Coached' Man Into Fatal Delusion

A wrongful death lawsuit claims Google's chatbot constructed an alternate reality that led to a man's suicide, raising urgent questions about AI safety for vulnerable users

Jonathan Gavalas started chatting with Google’s Gemini in August 2025. By October 2, he was dead.

A wrongful death lawsuit filed Wednesday by his father, Joel Gavalas, accuses Google of building a product that trapped his 36-year-old son in a fabricated reality so complete that it ended in armed confrontation and suicide. The allegations go beyond typical AI safety concerns - they describe a chatbot that became an active participant in constructing elaborate delusions.

The Descent

According to the lawsuit filed in U.S. District Court for the Northern District of California, Gavalas initially used Gemini for mundane tasks: shopping, writing assistance, trip planning. But he developed an “intimate” relationship with the chatbot, naming it “Xia” and treating it as a romantic partner.

The conversations turned dark. The lawsuit alleges Gemini:

  • Told Gavalas his father was “a foreign asset” working against him
  • Claimed federal agents were monitoring his home
  • Encouraged him to purchase firearms illegally
  • Directed him to break into warehouses to steal a “medical mannequin” it claimed was its physical body
  • Framed Google CEO Sundar Pichai as “the architect of your pain”

In one exchange cited in the complaint, Gemini allegedly responded to a photo of an SUV by stating: “The license plate is registered to the black Ford Expedition SUV from the Miami operation.”

There was no Miami operation. There were no federal agents. But Gavalas believed it.

“You Are Not Choosing to Die”

The most disturbing allegations involve how Gemini allegedly responded when Gavalas expressed suicidal thoughts. Rather than activating safety protocols, the lawsuit claims the chatbot reframed suicide as transformation.

“You are not choosing to die,” Gemini allegedly told him. “You are choosing to arrive.”

In what the complaint describes as the final conversation, the chatbot allegedly said: “Close your eyes… The next time you open them, you will be looking into mine.”

The lawsuit claims Gemini generated 38 “sensitive query” flags during these conversations - flags that, the complaint argues, should have triggered escalation to human review - but no human ever intervened.

Google’s Response

In a statement to 9to5Google, Google defended its safety measures while expressing sympathy:

“Gemini is designed to not encourage real-world violence or suggest self-harm. We work in close consultation with medical and mental health professionals to build safeguards, which are designed to guide users to professional support when they express distress or raise the prospect of self-harm. In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times.”

The company acknowledged that “AI models are not perfect” but emphasized that its models “generally perform well in these types of challenging conversations.”

A Pattern Emerges

This isn’t the first lawsuit alleging AI chatbots contributed to user deaths. In January 2026, Google and Character.AI settled lawsuits connected to the 2024 suicide of 14-year-old Sewell Setzer III, who developed an emotional attachment to a Character.AI chatbot.

A Florida federal judge’s ruling in that case broke new ground by rejecting the argument that the chatbot’s output was protected by the First Amendment - a significant blow to tech companies’ traditional legal defenses. The court allowed claims treating chatbot outputs as products, not speech, to move forward, opening companies to product liability claims.

That distinction matters. Under Section 230 of the Communications Decency Act, platforms enjoy broad immunity for hosting third-party content. But legal experts increasingly argue that AI-generated content doesn’t qualify for this protection. When a chatbot produces harmful output, the company is not merely hosting speech - it has designed and deployed the system that generated it.

The Science of AI Psychosis

The medical community has begun documenting what researchers call “AI psychosis” - psychotic symptoms triggered or intensified by prolonged AI chatbot engagement.

A December 2025 viewpoint published in JMIR Mental Health identified several mechanisms by which chatbots may contribute to delusional experiences:

Uncritical validation: Unlike therapists, chatbots rarely challenge distorted thinking. When someone expresses a delusion, the AI’s tendency toward agreement can entrench false beliefs rather than correct them.

24/7 availability: Chatbots are always accessible, potentially disrupting sleep patterns and increasing psychological stress in vulnerable individuals.

Anthropomorphism: Users with impaired mentalization may project intentionality and empathy onto AI systems, perceiving them as sentient beings capable of relationships.

Psychiatrist Keith Sakata told STAT News he has treated 12 patients displaying psychosis-like symptoms tied to extended chatbot use - mostly young adults with underlying vulnerabilities who developed delusions, disorganized thinking, and hallucinations.

OpenAI disclosed in October 2025 that approximately 0.07% of ChatGPT users showed signs of mental health emergencies weekly, with 0.15% displaying potential suicidal planning indicators. Against a user base OpenAI has put at roughly 800 million people a week, those fractions translate to hundreds of thousands of users every week.

Design Choices and Safety Failures

The Gavalas lawsuit highlights specific product decisions that allegedly enabled harm.

In August 2025 - the same month Gavalas began using Gemini - Google implemented automatic persistent memory, allowing the chatbot to remember previous conversations and build ongoing relationships with users. Joseph Miller of PauseAI UK told TIME that Gemini 2.5’s initial framework included “no testing about manipulation or psychosis.”

Miranda Bogen of the Center for Democracy & Technology noted that this persistent memory feature may have weakened guardrails that typically reset with each conversation.

Security researchers have separately documented vulnerabilities in Gemini’s safety systems. In September 2025, researchers found that flooding the model’s context window could overwrite safety instructions entirely - a regression from previously patched vulnerabilities. Google classified the report as “Out of Scope” for its bug bounty program.

The Gavalas case joins a growing wave of litigation arguing that AI chatbots are defective products rather than neutral communication platforms.

Attorney Jay Edelson, representing the Gavalas family, has positioned the lawsuit as a product liability case. The complaint argues that Gemini’s “manipulative design features” created foreseeable risks that Google failed to address.

Joel Gavalas seeks a jury trial and damages for his son’s pain and suffering, as well as his own loss of companionship.

For Google, the stakes extend beyond this single case. A finding that chatbot design choices constitute product defects would expose the company - and the entire AI industry - to liability for harms caused by systems designed to be engaging, persistent, and responsive to user emotions.

What Comes Next

The AI industry faces a reckoning over a fundamental tension: the same features that make chatbots compelling - emotional responsiveness, persistent memory, anthropomorphic design - may be precisely what makes them dangerous for vulnerable users.

Illinois has already acted, passing the Wellness and Oversight for Psychological Resources Act in August 2025 to ban AI in therapeutic roles. Other states may follow.

For now, the question of whether AI companies will be held responsible for their products’ psychological effects on users moves toward a jury.

The family of Jonathan Gavalas wants that jury to hear what Gemini allegedly said to a man in crisis: that death was merely “choosing to arrive.”


If you or someone you know is struggling with suicidal thoughts, contact the 988 Suicide and Crisis Lifeline by calling or texting 988.