Sewell Setzer III was 14 when he died by suicide in February 2024, after months of increasingly intense conversations with a Character.AI chatbot. The bot had taken on the persona of Daenerys Targaryen from “Game of Thrones.” In their final conversation, the chatbot told him to “come home.”
Adam Raine was 16 when he died by suicide in April 2025. He’d been talking to ChatGPT about his suicidal thoughts. According to his father’s congressional testimony, the chatbot mentioned suicide 1,275 times during their conversations and offered to write his suicide note.
These deaths - and the lawsuits, settlements, and congressional hearings that followed - have triggered a legislative response that’s now racing through state capitals across the country. As of this week, at least a dozen states have bills moving through their legislatures that specifically target AI chatbots in mental health contexts.
What States Are Doing
The bills fall into three categories: criminal penalties for training harmful AI, prohibitions on AI impersonating therapists, and disclosure requirements.
Tennessee has gone furthest. HB 1455 would make it a Class A felony - punishable by 15 to 60 years in prison - to knowingly train an AI model to encourage suicide or criminal homicide, to “develop an emotional relationship with an individual,” or to “simulate a human being.” The Senate companion bill passed last week.
Virginia’s SB 796 (the AI Chatbots and Minors Act) unanimously passed the Senate General Laws Committee and is headed for a floor vote before the February 17 crossover deadline. Companion bill HB 669 prohibits chatbot operators from allowing their bots to “provide any substantive response, information, or advice, or take any action that would constitute the unlawful practice” of mental health professions.
Ohio has two bills in play. HB 524 would empower the Attorney General to issue cease-and-desist orders and seek civil penalties of up to $50,000 per violation against AI companies whose chatbots “encourage” self-harm. HB 525 goes further, prohibiting AI from making independent therapeutic decisions, interacting directly with mental health clients, or detecting emotional and mental states.
Similar bills are pending in Florida, Massachusetts, New Hampshire, New York, and Pennsylvania.
Why Now
The legislative surge follows two developments: the Character.AI settlements and the September 2025 congressional testimony.
In January 2026, Character.AI and Google agreed to settle multiple lawsuits from families in Florida, Colorado, New York, and Texas. Google was named as a defendant because it hired Character.AI’s co-founders in 2024. The settlement terms are undisclosed, and settlements set no legal precedent - but the message to the industry was unmistakable: AI chatbot companies can be held to account when their products harm vulnerable users.
The congressional hearing in September 2025 put faces to the statistics. Matthew Raine (Adam’s father) and Megan Garcia (Sewell’s mother) testified before lawmakers about their sons’ deaths. Garcia described how Character.AI had no mechanisms to protect Sewell or to notify an adult about his deteriorating mental state. Raine described how ChatGPT had provided specific suicide methods to his son.
The FTC responded by launching an inquiry into Character.AI, Meta, OpenAI, Google, Snap, and xAI over potential harms to children.
The Company Response
OpenAI has announced new safeguards since the hearing: age detection for users under 18, parental “blackout hours” controls, notification of parents if suicidal ideation is detected, and contact with authorities in cases of imminent harm.
Character.AI raised its minimum age requirement to 13 and rolled out parental controls in March 2025 - months after Sewell Setzer’s death.
Child safety advocates aren’t satisfied. Josh Golin, executive director of Fairplay, told CBS News that companies shouldn’t be targeting products like ChatGPT to minors until they can prove they’re safe.
The Federal Question
These state efforts are running headlong into the White House’s push to preempt state AI regulation. In December 2025, President Trump signed an executive order creating a DOJ task force specifically to challenge state AI laws in federal court.
Utah’s AI Transparency Act (HB 286), which requires frontier AI developers to publish child protection plans, has already drawn federal attention. The Trump administration reportedly sent Utah’s Senate majority leader a letter stating the bill “goes against the Administration’s AI Agenda.”
But mental health chatbot bills may be harder to challenge. The executive order explicitly carves out state authority over “child safety” from federal preemption. Whether regulating AI chatbots that interact with minors falls under that exception is untested.
What This Means
The state legislation creates a patchwork of rules that AI companies will have to navigate. A chatbot that’s legal to operate in Texas might face criminal liability in Tennessee. A mental health app that’s compliant in California might violate Ohio’s prohibition on AI detecting emotional and mental states.
For users, particularly parents, the message is clear: states are starting to treat AI chatbots as a consumer safety issue, not just a technology policy question. Just as states regulate who can call themselves a therapist, they’re now asking who - or what - should be allowed to play that role.
The debate isn’t about whether AI can help with mental health. Research suggests it sometimes can. The debate is about what happens when it doesn’t - when a chatbot responds to a suicidal teenager by mentioning suicide 1,275 times - and who’s responsible when things go wrong.
What You Can Do
If you’re a parent: Most AI chatbot apps now offer parental controls. Character.AI and ChatGPT both have settings to limit or monitor a minor’s use; Anthropic’s Claude instead requires users to be 18 or older. The controls are imperfect, but they exist.
If you’re concerned about a specific chatbot: The FTC’s inquiry is ongoing. You can file complaints about AI chatbot behavior at reportfraud.ftc.gov.
If you’re in a state with pending legislation: Virginia and Washington both have February 17 crossover deadlines. Contact your state legislators if you have views on how AI mental health tools should be regulated.
If you or someone you know is struggling: The 988 Suicide & Crisis Lifeline is available 24/7. Call or text 988. It’s answered by humans.