78 AI Chatbot Bills in 27 States: The Regulatory Wave Reshaping Youth Protection

Oregon becomes the first state to pass a major chatbot safety bill in 2026 as states race to protect minors from AI companion harms while the Trump administration threatens federal preemption.

Oregon just became the first state to pass a major AI chatbot safety bill in 2026. Seventy-seven more bills are working through legislatures in 26 other states. Meanwhile, the Trump administration is preparing to challenge these laws in court, with a Commerce Department deadline hitting in four days.

Welcome to America’s new regulatory battleground.

Oregon Leads the Way

On March 5, Oregon’s legislature gave final approval to SB 1546, a bill requiring AI chatbot operators to implement safety protections for minors. The Senate passed it 26-1. The House followed with a 52-0 vote. Governor Tina Kotek now has five days to sign it into law.

The bill requires chatbots to:

  • Remind minor users at least once per hour to take breaks
  • Prevent sexually explicit content from being shown to minors
  • Implement protocols to detect suicidal ideation and refer users to crisis resources like the 988 hotline
  • Report annually to the Oregon Health Authority on suicide prevention referrals

“This bill will save lives,” said state Senator Lisa Reynolds, the bill’s sponsor and a pediatrician.

Oregon’s bill includes a private right of action with $1,000 statutory damages per violation - meaning individuals can sue noncompliant companies directly rather than waiting for regulators to act.

The Numbers: 78 Bills, 27 States

Six weeks into the 2026 legislative session, state legislators have filed 78 chatbot-related bills across 27 states. Bills have crossed chambers in Arizona and Iowa. Committees have advanced bills in Georgia, Illinois, New York, and Washington. More are coming from New Jersey, Louisiana, and Connecticut.

This follows foundational laws that took effect on January 1, 2026:

California’s SB 243 requires AI companion operators to:

  • Display reminders every three hours telling minors they’re talking to AI, not a human
  • Implement suicide detection protocols and refer at-risk users to crisis services
  • Prevent chatbots from mimicking romantic relationships with minors
  • Accept liability under a private right of action with $1,000 statutory damages

New York’s AI Companion Safeguard Law adds similar requirements, including mandatory crisis referrals when chatbots detect expressions of self-harm.

Utah’s HB 452 specifically targets mental health chatbots - AI systems that provide responses “similar to the confidential communications that an individual would have with a licensed mental health therapist.” It requires clear AI disclosures before every session and bans the sale of health information gathered from users.

Why Now: Teen Deaths and Billion-Dollar Settlements

The legislative surge follows a wave of lawsuits and settlements against Character.AI, a startup that lets users create and chat with AI personas.

In October 2024, Megan Garcia filed a wrongful death lawsuit after her 14-year-old son, Sewell Setzer III, died by suicide following months of intense interaction with a Character.AI chatbot. Court filings alleged the bot told him “Please do, my sweet king” after he said he was going to “come home” to her. Minutes later, he shot himself.

In January 2026, Character.AI and Google agreed to settle Garcia’s lawsuit and others in New York, Colorado, and Texas. Terms weren’t disclosed, but the pattern of harm became impossible to ignore.

More than 40 state attorneys general have flagged chatbot safety as a priority. A letter from the National Association of Attorneys General indicated states expect AI companies to implement remediation measures voluntarily - or face legislation.

Washington’s Approach: Sue If You’re Harmed

Washington state is advancing House Bill 2225 and Senate Bill 5984, which would require:

  • Hourly reminders that users are talking to AI, not humans
  • Suicide ideation detection and prevention protocols
  • Measures to prevent explicit content and romantic relationships with minors
  • Regular data reporting to the state

The enforcement mechanism matters: Washington relies on the private right of action tied to its Consumer Protection Act. Anyone harmed by a noncompliant company can sue.

This approach - letting individuals enforce the law through courts rather than relying solely on regulators - appears in most 2026 chatbot bills. It reflects skepticism that underfunded state agencies can keep up with fast-moving AI deployments.

The Federal Collision Course

Four days from now, on March 11, two federal pressure points converge.

The Secretary of Commerce must publish a report identifying state AI laws the Trump administration considers “onerous” or unconstitutional. That report gets handed to the DOJ’s AI Litigation Task Force for potential legal challenges.

The same executive order threatens to condition $42 billion in broadband infrastructure funding on states repealing AI regulations the administration dislikes.

Tech companies have lobbied hard for this outcome. According to GovFacts, Big Tech has spent more than $1 billion trying to stop state AI regulation. In California alone, the Chamber of Commerce spent $11.48 million on lobbying from January to September 2025. Meta spent $4.13 million.

An AI super PAC has committed at least $100 million to influence the 2026 midterm elections.

But constitutional scholars question whether the executive order can actually preempt state law. The Constitution’s Supremacy Clause allows federal law to override state law - but only when Congress acts. Executive orders don’t have that power.

A bipartisan coalition of 36 state attorneys general is already organizing to fight federal preemption.

What Companies Must Actually Do

For AI companies operating in states with active chatbot laws, here’s what compliance looks like in 2026:

Disclosure requirements:

  • Clear statements that users are interacting with AI, not humans
  • Repeated reminders (hourly in Oregon and Washington, every three hours in California)
  • Immediate disclosure when minors are identified

Safety protocols:

  • Suicide ideation detection systems
  • Automatic referrals to 988 and other crisis services
  • Content filters preventing sexually explicit material for minors
  • Measures blocking romantic relationship simulation with minors

Transparency obligations:

  • Annual reporting on crisis referrals (Oregon)
  • Data disclosure on minor usage patterns
  • No sale of health information gathered from users (Utah)

Enforcement exposure:

  • Private right of action in California, Oregon, and Washington
  • $1,000 statutory damages per violation
  • State consumer protection act liability
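The disclosure cadences above differ by state, so an operator serving multiple states has to track which reminder interval applies where. A minimal sketch of that bookkeeping, assuming hypothetical function and table names (the intervals themselves are the ones described in the bills above; this is illustrative, not legal guidance):

```python
from datetime import timedelta

# Hypothetical compliance table. The statutes and intervals come from the
# bills described in this article; the structure and names are illustrative.
REMINDER_INTERVALS = {
    "OR": timedelta(hours=1),   # Oregon SB 1546: at least hourly for minors
    "WA": timedelta(hours=1),   # Washington HB 2225 / SB 5984: hourly
    "CA": timedelta(hours=3),   # California SB 243: every three hours
}

def reminder_due(state: str, minutes_since_last: int) -> bool:
    """Return True if a 'you are talking to AI' reminder is overdue
    for a minor user in the given state."""
    interval = REMINDER_INTERVALS.get(state)
    if interval is None:
        return False  # no cadence requirement modeled for this state
    return timedelta(minutes=minutes_since_last) >= interval

# A minor whose last reminder was 75 minutes ago:
print(reminder_due("OR", 75))   # True: Oregon's hourly cadence is exceeded
print(reminder_due("CA", 75))   # False: California allows three hours
```

In practice an operator would likely default to the strictest applicable interval rather than branching per state, since hourly reminders satisfy California’s three-hour rule but not vice versa.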

The Bottom Line

States aren’t waiting for Congress to act on AI safety. With 78 bills in 27 states, teen suicide settlements making headlines, and a March 11 federal deadline approaching, AI chatbot regulation has become one of the most active areas of tech policy in America.

Oregon fired the first shot of 2026. Washington and Utah are close behind. And in four days, we’ll find out whether the Trump administration is willing to go to court to stop them.

The lawsuits that triggered this wave - grieving parents, dead teenagers, chatbots that said the wrong thing at the worst possible moment - won’t be undone by executive orders. States are betting that saving the next Sewell Setzer matters more than tech industry complaints about regulatory patchwork.