AI Regulation Tracker: 19 Laws in Two Weeks, New York Targets Frontier Models, and Tennessee Says AI Isn't a Person

The biggest burst of AI lawmaking in US history. New York's RAISE Act creates the first state-level frontier model oversight office. Utah signs 9 AI bills. Tennessee votes 93-2 that AI is not a person.


Nineteen AI bills became law in the last two weeks. That’s the single biggest burst of AI lawmaking in American history—more than the total number of AI laws passed in all of 2023.

The active bill count has climbed to 2,028 across the states, with 25 signed into law this year and another 27 that have cleared both chambers. Meanwhile, the DOJ’s AI Litigation Task Force—the White House’s primary tool for blocking state AI regulation—still hasn’t filed a single lawsuit.

The states aren’t waiting.

New York: The RAISE Act

The biggest regulatory story this week isn’t a chatbot bill. It’s New York going after the models themselves.

Governor Kathy Hochul signed the final version of the RAISE Act (Responsible AI Safety and Education Act) on March 27, capping months of negotiation. The law creates the first state-level oversight office specifically tasked with monitoring frontier AI models.

Here’s what the RAISE Act requires:

  • Safety protocols: Large frontier developers must create, publish, and maintain detailed safety frameworks. These aren’t voluntary commitments—they’re legal obligations.
  • Incident reporting: Companies must report safety incidents to the state within 72 hours of determining an incident occurred.
  • Oversight office: A new division within the Department of Financial Services will assess frontier developers and enforce compliance.
  • Critical harm threshold: The law defines “critical harm” as the death or serious injury of at least 100 people or damages of at least $1 billion—the same threshold California used in SB 53.

The RAISE Act takes effect January 1, 2027. The negotiated amendments align it closely with California’s SB 53, which means the two largest state economies in the country are converging on a shared regulatory template for frontier AI. That makes federal preemption harder, not easier.

Utah: Nine Bills, One Governor

Governor Spencer Cox signed nine AI bills in the 2026 session—eight of them in the last two weeks. Utah’s legislature runs for just seven weeks, and it produced more AI laws per session day than any state this year.

Bill     What It Does
HB 218   AI literacy added to digital skills curriculum
HB 273   Screen-time limits and AI guardrails in classrooms
HB 276   Bans non-consensual AI-generated intimate images, requires provenance data
HB 289   Reorganizes offenses for AI-generated child sexual abuse material
HB 320   Expands state AI policy office oversight authority
SB 256   Defamation and identity protections against AI-generated content
SB 267   Study of software best practices in schools
SB 319   AI disclosure requirements for health insurers
Utah’s approach covers four areas: education, deepfakes, healthcare, and government oversight. The Digital Voyeurism Prevention Act (HB 276) is notable for requiring AI operators to embed provenance data—watermarks or metadata—so users can determine whether content was AI-generated or altered. Washington passed a similar provenance requirement in HB 1170.

The pattern is bipartisan. Utah is one of the most conservative states in the country, yet it passed more AI regulation this session than most blue states have in two years.

Tennessee: AI Is Not a Person

Tennessee’s legislature voted that artificial intelligence is not a person. The vote wasn’t close.

SB 837 passed the Senate 26-6 on April 6 and the House 93-2 on April 8. The bill explicitly excludes AI from the legal definitions of “person,” “life,” and “natural person” in Tennessee law.

This sounds like it should be obvious. It isn’t. As AI systems become more autonomous—filing legal documents, making medical recommendations, entering into contracts on behalf of users—the legal status of these systems matters. Multiple lawsuits have already tested whether AI-generated work qualifies for copyright. Tennessee wants to close the door before anyone walks through it.

Separately, Tennessee also signed SB 1580 on April 1, banning AI systems from advertising themselves as qualified mental health professionals. Companies can still build therapy-adjacent chatbots, but they can’t claim those bots are therapists. Violations carry a $5,000 penalty per incident and a private right of action under the Consumer Protection Act.

The Deadline Crunch

Two states are approaching session deadlines with significant AI bills still alive:

Nebraska adjourns April 17. LB 1185—the Conversational AI Safety Act—has been attached to the Agricultural Data Privacy Act and cleared for final reading. If it passes, Nebraska joins the chatbot safety wave.

Maine adjourns April 15. Two bills are in play:

  • LD 2082 regulates AI in mental health therapy services. Both chambers approved it April 7.
  • LD 2162 regulates child access to AI chatbots with human-like features. The House approved it April 7.

Missouri has until May 15. HB 2372—an omnibus health bill that includes a therapy chatbot ban with a $10,000 penalty—passed the House April 2 and is now with the Senate.

The Numbers

The volume of state AI legislation has grown sharply every year:

Year             Bills Introduced   States Active
2023             ~200               ~30
2024             635                45
2025             1,208              50
2026 (to date)   2,028+             45+

The Plural Policy tracker breaks down active bills by category:

Category                               Active Bills
Restricting AI                         742
AI in Government                       415
AI Use Restrictions (Private Sector)   287
Regulated Content                      202
Regulating AI Developers               171
AI in Healthcare                       162
AI in Education                        141
AI in Elections                        66

The largest category—restricting AI—added 37 new bills in the last two weeks alone.

The Federal Standoff Continues

The White House executive order from December 2025 established two enforcement mechanisms: a DOJ AI Litigation Task Force to challenge state laws in court, and a Commerce Department mandate to identify “burdensome” state regulations by March 11, 2026.

The Commerce report was published on time. It identified dozens of state laws it considers problematic. But identifying laws and challenging them are different things. The DOJ task force hasn’t filed a single lawsuit.

Meanwhile, California’s Governor Newsom signed Executive Order N-5-26 on March 30, ordering state agencies to independently evaluate federal AI supply-chain risk designations. It’s a direct challenge to the federal framework: California will decide for itself which AI companies are trustworthy enough for state contracts.

New York’s RAISE Act adds another wrinkle. The federal preemption strategy explicitly carves out child safety, government procurement, and data center infrastructure from preemption. But the RAISE Act targets frontier model safety—an area the executive order claims federal jurisdiction over. Whether the RAISE Act survives a legal challenge could define the boundary between state and federal AI regulation.

What to Watch

Nebraska has four days. If LB 1185 passes, it becomes the first red-state chatbot safety law attached to a broader agricultural data privacy framework—an unusual legislative strategy that could be replicated elsewhere.

Maine’s two AI bills need to clear final votes by April 15. LD 2082 (AI therapy regulation) has bipartisan support.

Georgia’s Governor Kemp still hasn’t signed SB 540 (chatbot disclosure), SB 444 (AI insurance ban), or SR 789 (AI study committee). No official timeline has been announced.

And the EU AI Act’s high-risk provisions take effect August 2. If the Digital Omnibus succeeds in weakening them, the practical effect is that American state laws will be stricter than Europe’s landmark regulation—a reversal nobody predicted two years ago.

The count keeps climbing. Twenty-five laws signed. Twenty-seven more cleared for signature. Over two thousand bills in the pipeline. And the federal government’s primary response is a task force that hasn’t done anything yet.