AI Regulation Tracker: EU Delays High-Risk Enforcement to 2027, States Push 78 Chatbot Bills Across 27 States

Europe votes to push back AI Act enforcement by 16 months. Meanwhile, US states keep legislating at a breakneck pace, with chatbot safety, deepfake, and worker protection bills piling up.

A row of state capitol building columns in warm afternoon light

The EU just blinked. The European Parliament voted overwhelmingly to delay the AI Act’s high-risk enforcement deadline by 16 months, pushing it from August 2026 to December 2027. And while Europe retreats to give companies more time, US states keep flooding the zone — 78 chatbot safety bills are now alive across 27 states, with new laws signed every week.

EU Votes to Delay the Hard Part

The European Parliament adopted its position on the Digital Omnibus proposal by a vote of 569 to 45, with 23 abstentions. The measure pushes back the enforcement deadline for high-risk AI systems — covering biometrics, employment screening, credit scoring, law enforcement, and healthcare — from August 2, 2026 to December 2, 2027.

AI systems embedded in regulated products like medical devices and vehicles get even more time: their deadline moves to August 2, 2028.

The delay isn’t a surprise. The European Commission proposed it back in November 2025, arguing that companies needed more time to comply with the technical requirements laid out in Articles 9 through 15. The Council’s position aligns with Parliament’s on the new dates, and a second trilogue is scheduled for April 28 to finalize the deal.

Not everything got pushed. The Parliament added a ban on AI-powered nudifier applications — tools that generate non-consensual nude images — to the list of prohibited AI practices that are already enforceable. General-purpose AI model obligations and AI literacy requirements also remain on their original timelines of August 2025 and February 2025, respectively.

The practical effect: European companies that had been scrambling to get their high-risk AI systems compliant by this summer just got a reprieve. Whether they use that time to actually comply or just delay further remains to be seen.

Tennessee Passes the CHAT Act Unanimously

Tennessee’s Curbing Harmful AI Technology Act (SB 1700) sailed through both chambers with zero opposition — 31-0 in the Senate, 90-0 in the House. The bill establishes chatbot safety rules and data privacy protections for minors.

In the same session, Tennessee approved SB 837, which addresses the question of AI personhood, by a vote of 26-6 in the Senate and 93-2 in the House. The bill clarifies that AI systems cannot hold legal personhood status in the state — a pre-emptive move against a scenario that doesn’t exist yet but that lawmakers apparently wanted to rule out.

Maryland Sends Four Bills to the Governor

Maryland’s legislature sent a batch of four AI bills to Governor Moore’s desk:

  • HB 895 bans retailers and delivery services from using consumer personal data for dynamic pricing — no more AI-powered surge pricing based on your purchase history.
  • SB 8 addresses deepfake protections.
  • SB 720 requires local school systems to develop guidance on AI use.
  • SB 141 targets deepfakes in political campaign materials.

The dynamic pricing ban is notable. It’s one of the first state laws to go beyond disclosure requirements and actually prohibit a specific commercial use of AI-powered personalization.

Georgia: Governor Has Until May 12

Georgia’s SB 540 — the chatbot disclosure and child safety bill — is still sitting on Governor Kemp’s desk. He has until May 12 to sign or veto.

The bill would require chatbots to disclose their AI nature to users every three hours, and every hour when interacting with minors. It mandates protocols for detecting suicidal ideation and contains no carve-out for chatbots embedded in larger platforms, meaning Meta and Google would need to comply.

Georgia also passed SB 444, which prohibits insurance companies from making coverage decisions based solely on AI systems.

Alabama Signs Healthcare AI Bill

Alabama became the latest state to enact an AI law when Governor Ivey signed SB 63 on April 17. The law regulates the use of AI in healthcare coverage determinations, joining a growing list of states that are pushing back against insurers using algorithms to deny claims.

Arizona Races the Clock

With a legislative adjournment deadline of April 25, Arizona had three AI bills in reconciliation:

  • HB 2133 expands unlawful image disclosure statutes to cover synthetic depictions.
  • SB 1786 requires provenance data in AI-generated video, image, and audio content.
  • HB 2592 mandates that state agencies identify AI implementation opportunities and eliminate regulations that restrict AI use — a rare pro-deployment bill in a session dominated by safety measures.

California: 40+ Bills in the Pipeline

California continues to be the most active state. The Transparency Coalition tracker counts over 40 AI bills in active committee hearings, spanning nearly every area of AI regulation.

Worker protection stands out. SB 951 would require companies to provide 90 days’ notice before AI-driven workforce displacement of 25% or more. AB 2027 would prohibit employers from using workers’ personal information to train AI systems that replace those workers. SB 947 creates broader worker protections around AI and automated systems.

Child safety bills include AB 2023 and SB 1119 (companion chatbot safety bills), the PAUSE Act (AB 1988), and SB 867, which would ban chatbots in children’s toys.

Education is getting attention too. AB 2148 would require all public school employees to be natural persons — not AI systems. SB 928 applies the same rule to CSU instructors.

Illinois reportedly has more than 50 AI-related bills in active play this session.

Connecticut SB 5: Waiting on the House

Connecticut’s sweeping AI bill passed the Senate 32-4 on April 21 and now sits with the House. As we covered last week, the 64-page, 37-section bill covers frontier model safety, employment AI disclosure, and chatbot regulations, and creates a state AI sandbox program.

The House declined to act on similar legislation last year, so its passage isn’t guaranteed. But the 32-4 Senate margin suggests strong legislative appetite.

By the Numbers

  • 25 state AI laws signed in 2026 so far
  • 27+ bills that have passed both chambers and are heading to governors
  • 78 chatbot safety bills alive across 27 states
  • 40+ AI bills in California committee hearings alone
  • 50+ AI bills in Illinois
  • 99 days until the EU AI Act’s original high-risk deadline (now delayed to December 2027)
  • 0 comprehensive federal AI laws passed by Congress

What This Means

Two stories are playing out simultaneously.

In Europe, the regulatory apparatus is slowing down. The 16-month delay on high-risk enforcement gives companies breathing room but also means that AI systems affecting hiring, healthcare, and law enforcement will operate without enforceable guardrails for longer. The nudifier ban is a positive addition, but it targets a narrow category of harm while the broader risks continue unregulated.

In the US, the opposite is happening. States are legislating faster than companies can track. The 78 chatbot bills across 27 states signal that child safety has become the consensus entry point for AI regulation — it’s the one area where bipartisan support is easy to find. Worker protection bills in California and Tennessee’s unanimous CHAT Act suggest the scope is expanding.

For companies deploying AI, the compliance picture keeps getting more complicated. No federal preemption is coming. The EU is giving you more time but not fewer requirements. And every state legislative session is producing new rules that vary in scope, definitions, and enforcement mechanisms.

The number to watch this week: 27 bills sitting on governors’ desks. By the time the next tracker publishes, that count could push the 2026 total well past 50 signed laws.