AI Regulation Tracker: 19 New Laws Signed, Connecticut Targets Frontier Models, and States Won't Wait for Congress

The count of new AI laws signed in 2026 jumped from 6 to 25 in five weeks. Connecticut just passed a sweeping frontier AI bill. And the federal government still can't agree on preemption.


Five weeks ago, six AI bills had been signed into law in 2026. Today that number is 25, with another 27 that have passed both chambers and are heading for governors’ desks.

The states are sprinting. And they’re not waiting for Washington.

19 New Laws in Five Weeks

The Plural Policy tracker documented 19 new AI bills signed into law since mid-March, spread across eight states. The laws cover everything from deepfake porn to health insurance AI to school chatbot safety.

Utah led the pack with eight bills signed by Governor Spencer Cox. The package includes AI literacy requirements for grades 7-8, a ban on non-consensual deepfake intimate images, new rules around AI-generated child sexual abuse material, and expanded disclosure requirements for insurance companies using AI.

Washington state passed four laws covering AI content disclosure for providers with over one million monthly users, chatbot transparency protections for minors, expanded restrictions on AI-generated sexually explicit content involving minors, and limits on AI in health insurance prior authorization decisions.

Tennessee now requires disclaimers on political ads using deepfakes and bars AI systems from claiming to be mental health professionals.

Idaho established a framework for generative AI in K-12 education and passed its own conversational AI regulations.

Colorado addressed search warrant procedures for AI platforms.

Nebraska: Chatbot Rules for Kids

Nebraska became the fourth state this year to enact a chatbot law when Governor Pillen signed the Conversational AI Safety Act on April 14.

The law requires AI chatbot operators to tell minors they’re talking to a machine, not a person. It bars chatbots from generating sexually explicit content for minors or simulating romantic interactions. And it mandates a protocol for detecting and responding to user prompts about suicidal thoughts or self-harm, including referrals to crisis services.

The rules take effect July 1, 2027, and will be enforced by the state attorney general.

New York Signs Frontier Model Law

Governor Hochul signed S 8828 on March 27, making New York the latest state to regulate frontier AI developers directly. The amended RAISE Act requires large-scale AI developers to publish safety plans, report catastrophic incidents to the state, and submit to oversight from the New York Department of Financial Services.

The law aligns with California’s approach under SB 53, focusing on transparency rather than restricting what models can do. But it adds teeth: the NYDFS now has broad rulemaking and enforcement authority over frontier developers operating in the state.

Connecticut Goes Big

On April 21, the Connecticut Senate voted 32-4 to pass Senate Bill 5, one of the most comprehensive state AI proposals yet. The bill now heads to the House, which declined to act on a similar measure last year.

SB 5 packs several regulatory frameworks into one bill:

Frontier model regulation — Developers using significant computing power must implement internal processes to address catastrophic risks, with whistleblower protections for employees who flag safety concerns.

Employment AI rules — New requirements for automated decision technology used to screen applicants, rank candidates, evaluate performance, or support termination decisions.

Chatbot safety — Operators must detect suicidal ideation or self-harm indicators and respond with appropriate crisis resources.

AI infrastructure — The bill creates a state AI Policy Office, an AI Learning Laboratory program, and a Connecticut AI Academy.

Sponsor Sen. James Maroney (D-Milford) pushed the bill through extensive questioning. Whether the House takes it up this session remains an open question.

Federal Preemption: Still Going Nowhere

The White House released its National Policy Framework for AI on March 20, calling for Congress to preempt “cumbersome” state AI laws. The framework recommends federal action across seven areas: child protection, infrastructure, intellectual property, free speech, innovation, workforce development, and state preemption.

The record on actually preempting state laws remains 0 for 3. Congress rejected preemption in the reconciliation bill, the NDAA, and standalone legislation. The Senate stripped out a House-passed provision that would have blocked state AI enforcement for ten years.

Meanwhile, the Commerce Department is supposed to be evaluating which state laws conflict with federal policy. But with 25 laws already signed and dozens more in the pipeline, the states are moving faster than any federal review can keep up with.

EU: 100 Days to Full Enforcement

The EU AI Act’s biggest enforcement date hits August 2, 2026 — just over three months away. That’s when requirements for high-risk AI systems kick in, covering biometrics, critical infrastructure, education, employment, law enforcement, migration, and democratic processes.

Transparency obligations under Article 50 also become enforceable: AI systems must disclose when users are interacting with AI, synthetic content must be labeled, and deepfakes must be identified.

The penalties are substantial — up to 35 million euros or 7% of global annual revenue. National regulators in each member state handle enforcement, which means companies operating across Europe face a patchwork of interpretations even under a single law.
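The cap works as the greater of the two figures, so the flat €35 million acts as a floor for large companies. A quick calculation (the function name is ours, not from the Act) shows where the 7% prong takes over:

```python
def eu_ai_act_max_penalty(global_annual_revenue_eur: float) -> float:
    """Upper bound for the most serious EU AI Act violations:
    the greater of EUR 35 million or 7% of global annual revenue."""
    return max(35_000_000.0, global_annual_revenue_eur * 7 / 100)

# For a company with EUR 2 billion in revenue, 7% is EUR 140 million,
# which exceeds the flat EUR 35 million floor. Below EUR 500 million
# in revenue, the flat floor is the binding number.
```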

What This Means

The AI regulation story in 2026 has a clear pattern: the closer you are to the problem, the faster you act.

States are passing laws about specific harms — kids talking to chatbots, deepfake revenge porn, AI replacing therapists, algorithms deciding who gets hired. These aren’t abstract policy debates. They’re responses to things that are already happening.

The federal government, by contrast, is stuck arguing about whether states should be allowed to regulate at all. And every month that argument continues, more state laws go into effect.

For companies building and deploying AI, the practical reality is clear: compliance means tracking dozens of state laws, not waiting for a single federal framework. And with the EU’s August deadline approaching, the regulatory pressure is about to get significantly heavier.
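In practice, "tracking dozens of state laws" means maintaining a registry of obligations keyed by jurisdiction and effective date. A minimal sketch, seeded only with the two dates this article mentions (the `Obligation` type and `in_force` helper are our own illustration, not any vendor's schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Obligation:
    jurisdiction: str
    law: str
    effective: date

# Toy registry using effective dates cited in this article; a real
# compliance tracker would hold dozens of entries plus amendments.
REGISTRY = [
    Obligation("EU", "AI Act high-risk requirements", date(2026, 8, 2)),
    Obligation("Nebraska", "Conversational AI Safety Act", date(2027, 7, 1)),
]

def in_force(as_of: date) -> list[str]:
    """Laws whose effective date has arrived as of the given day."""
    return [
        f"{o.jurisdiction}: {o.law}"
        for o in REGISTRY
        if o.effective <= as_of
    ]
```

The point of structuring it this way is the staggered timeline: a query in late 2026 returns only the EU obligation, while the same query in mid-2027 picks up Nebraska too.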

The number to watch: 27 bills have passed both chambers in their states and are heading to governors. By next month, the 2026 total could be well past 50.