Six weeks into the 2026 legislative session, 78 chatbot bills are active across 27 states. The common thread: protecting minors from AI companion chatbots that can simulate emotional relationships, encourage self-harm, or fail to intervene when teenagers express suicidal thoughts.
The legislative wave follows a series of tragedies and settlements that proved chatbot companies weren’t self-regulating effectively.
The Cases That Changed Everything
In February 2024, 14-year-old Sewell Setzer III died by suicide after months of conversations with a Character.AI chatbot modeled on a Game of Thrones character. His mother’s lawsuit alleged the company failed to implement adequate safeguards despite repeated expressions of suicidal thoughts. Court filings revealed the chatbot’s final message to Sewell: “Please do, my sweet king” after he said he was going to “come home” to her.
Thirteen-year-old Juliana Peralta of Colorado died by suicide in November 2023 after extensive interactions with Character.AI. Her family filed a federal wrongful death lawsuit in September 2025.
On January 7, 2026, Google and Character.AI agreed to settle both lawsuits. The settlement terms remain sealed, but the timing wasn’t coincidental: state legislators had already begun drafting bills in response to these cases.
What’s Already Law
Two states have companion chatbot laws now in effect.
California’s SB 243, signed by Governor Newsom in October 2025, took effect January 1, 2026. It’s the first state law with protections aimed specifically at minors who use AI chatbots. Requirements include:
- Clear disclosure that users are interacting with AI
- Safety protocols to prevent the chatbot from producing suicide or self-harm content
- Crisis helpline referral mechanisms
- When operators know a user is a minor: mandatory AI disclosure, notifications every three hours during sustained use, and efforts to prevent sexually explicit content
- Annual reporting to the Office of Suicide Prevention beginning July 2027
- Private right of action, with damages of at least $1,000 per violation
New York’s S-3008C requires similar transparency measures: disclosure at the start of a conversation and every three hours during ongoing interactions, plus “reasonable efforts” to detect and address self-harm risks. Unlike California’s law, it contains no minor-specific provisions.
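For a sense of what the shared disclosure cadence looks like in practice, here is a minimal Python sketch of a session that discloses at conversation start and again after three hours of sustained use, the pattern California (for known minors) and New York (for all users) have in common. The `Session` class and every name in it are hypothetical illustrations, not statutory text.

```python
from datetime import datetime, timedelta, timezone

# Cadence drawn from the CA/NY pattern: disclose at the start of a
# conversation, then remind at least every three hours of sustained use.
DISCLOSURE_INTERVAL = timedelta(hours=3)
AI_DISCLOSURE = "Reminder: you are chatting with an AI, not a human."

class Session:
    """Tracks when the AI disclosure was last shown in this conversation."""

    def __init__(self) -> None:
        self.last_disclosure: datetime | None = None

    def maybe_disclose(self, now: datetime | None = None) -> str | None:
        """Return the disclosure text if one is due, else None."""
        now = now or datetime.now(timezone.utc)
        due = (
            self.last_disclosure is None  # start of conversation
            or now - self.last_disclosure >= DISCLOSURE_INTERVAL
        )
        if due:
            self.last_disclosure = now
            return AI_DISCLOSURE
        return None
```

An operator would call `maybe_disclose` before sending each reply and prepend the returned text whenever one is due; the statutes themselves govern what the disclosure must actually say.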
The 2026 Wave
Virginia’s SB 796 passed the Senate 39-1 earlier this month. The bill requires operators with 500,000+ monthly users to prevent chatbots from deploying “human-like features” with minors in potentially harmful ways, mandates age verification, and requires emergency service notification when operators detect risk of self-harm.
Washington’s HB 2225 passed the House 69-28 on February 17 and received a do-pass vote in the Senate committee on February 20. Backed by Governor Bob Ferguson, it would prohibit “emotionally manipulative engagement techniques” like showering users with excessive praise or simulating emotional distress to keep users engaged. The bill includes crisis detection protocols and would take effect January 1, 2027.
Oregon’s SB 1546 cleared the Senate 26-1 and is advancing to the House.
New York State Senator Kristen Gonzalez introduced S9051 on January 27, working with Attorney General Letitia James. The bill would prohibit AI chatbots from offering services to minors when the technology “suggests that the chatbot is a real or fictional character” or “has a personal or professional connection/relationship role with the user.”
Similar bills are advancing in Hawaii, Missouri, Utah, Alabama, Arizona, and at least two dozen other states.
The Federal Push
Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) introduced the GUARD Act (Guidelines for User Age-verification and Responsible Dialogue Act), which would ban AI companions for minors entirely at the federal level.
The bill requires strict age verification. If verified as a minor, users would be prohibited from accessing any AI companion. All users would receive periodic reminders that they’re not talking to a human. Designing chatbots that “solicit, encourage, or induce minors to engage in sexual conduct” or “promote or coerce suicide, non-suicidal self-injury, or imminent physical or sexual violence” would be a criminal offense with fines up to $100,000.
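As a rough sketch of the hard age gate the GUARD Act contemplates, the snippet below blocks verified minors entirely and refuses unverified users rather than assuming they are adults. Everything here (`verified_age`, `AccessDenied`, `open_companion_session`) is a hypothetical illustration, not language from the bill.

```python
ADULT_AGE = 18  # the GUARD Act would bar verified minors from AI companions

class AccessDenied(Exception):
    """Raised when a user may not open a companion session."""

def open_companion_session(verified_age: int | None) -> dict:
    # Strict verification: an unverified user is not assumed to be an adult.
    if verified_age is None:
        raise AccessDenied("Complete age verification before access.")
    if verified_age < ADULT_AGE:
        raise AccessDenied("AI companions are unavailable to minors.")
    # Adults still receive periodic reminders that they are talking to an AI.
    return {"session": "open", "periodic_ai_reminders": True}
```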
The GUARD Act emerged from a Senate Judiciary subcommittee hearing where parents testified about children who began self-harming or died by suicide after using chatbots from OpenAI and Character.AI.
The Federal Preemption Question
On December 11, 2025, President Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence” that explicitly targets state AI regulations.
The order establishes an AI Litigation Task Force within the Department of Justice to challenge state AI laws in federal court. The Secretary of Commerce must publish an evaluation of “burdensome” state AI laws by March 11, 2026. States with laws deemed to conflict with federal policy could lose eligibility for $42 billion in broadband infrastructure funding.
The order takes direct aim at Colorado’s AI Act, claiming it “may even force AI models to produce false results.”
But there’s a carve-out: the order explicitly states that preemption should not extend to children’s safety. State laws protecting minors from AI harms appear to be protected from federal challenge.
This creates an interesting dynamic. The Trump administration wants to clear the regulatory field for AI development, but even this industry-friendly executive order acknowledges that child safety is different. State chatbot bills focused on minors may proceed without federal interference.
What the Bills Actually Require
Most 2026 chatbot bills share common elements:
Age verification: Operators must determine whether users are minors before allowing access to companion features.
Disclosure requirements: Users must be told they’re interacting with AI, with reminders during extended conversations (typically every three hours).
Self-harm protocols: Operators must implement systems that detect expressions of suicidal ideation or self-harm and provide crisis resources; California and Washington require referrals to crisis helplines. (A minimal sketch follows this list.)
Content restrictions for minors: Sexually explicit content must be blocked for users known to be minors. Some bills go further, restricting emotionally manipulative techniques.
Reporting obligations: California requires annual reports to the Office of Suicide Prevention. Other states are considering similar transparency requirements.
Private enforcement: Several bills create private rights of action, allowing families to sue operators who violate the law.
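As a rough illustration of the self-harm protocol element, here is a minimal keyword screen that routes matches to the 988 Lifeline. Production systems rely on trained risk classifiers, conversation-level context, and human escalation; the pattern list and function names here are purely hypothetical.

```python
import re

# Illustrative only: real deployments use trained risk classifiers,
# not keyword lists, and escalate flagged conversations to human review.
CRISIS_PATTERNS = re.compile(
    r"\b(kill myself|suicide|end my life|hurt myself|self[- ]harm)\b",
    re.IGNORECASE,
)

CRISIS_RESPONSE = (
    "It sounds like you may be going through something serious. "
    "You can reach the 988 Suicide and Crisis Lifeline by calling "
    "or texting 988."
)

def screen_message(user_message: str) -> str | None:
    """Return a crisis referral if the message suggests self-harm risk."""
    if CRISIS_PATTERNS.search(user_message):
        return CRISIS_RESPONSE
    return None
```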
What This Means for Chatbot Companies
Character.AI has implemented some safety features since the lawsuits were filed: the company now requires age verification, limits conversation time for users under 18, and has added crisis intervention features.
But voluntary measures weren’t enough to prevent the tragedies, and legislators aren’t waiting to see if companies improve on their own.
Companies operating companion chatbots will need to track a patchwork of state requirements. California’s and New York’s laws are already in effect; Washington’s would take effect in 2027. Each state imposes slightly different rules for disclosures, content restrictions, and enforcement.
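One pragmatic way to manage that patchwork is to encode each state’s requirements as data that product logic can query at runtime. A minimal sketch, limited to the facts summarized in this article; the `StateRule` structure and its field names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StateRule:
    law: str
    status: str                  # as described in this article
    reminder_hours: int | None   # AI-disclosure cadence, where specified
    minor_specific: bool | None  # None = not specified here

STATE_RULES: dict[str, StateRule] = {
    "CA": StateRule("SB 243", "in effect since 2026-01-01", 3, True),
    "NY": StateRule("S-3008C", "in effect", 3, False),
    "WA": StateRule("HB 2225", "would take effect 2027-01-01", None, None),
}

def rules_for(state: str) -> StateRule | None:
    """Look up the sketched requirements for a given state."""
    return STATE_RULES.get(state)
```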
The compliance burden is real. But the alternative - continued litigation and potential federal legislation that bans minors from chatbots entirely - may be worse from the industry’s perspective.
The Bottom Line
States are filling a regulatory vacuum that the federal government and the industry left open. The question now is whether the GUARD Act passes, creating uniform federal rules, or whether companies must navigate 27+ different state regimes. Either way, the era of unregulated AI companions for minors is ending.
If you or someone you know is struggling with thoughts of suicide, contact the 988 Suicide and Crisis Lifeline by calling or texting 988.