The White House wants a single national AI framework. Congress keeps failing to deliver one. And in the gap between ambition and action, states have introduced over 2,000 AI bills this year alone.
That’s the story of AI regulation in April 2026: a federal government that can’t stop the states from writing the rules, states that won’t stop writing them, and a European deadline that’s about to make everything more complicated.
The Federal Preemption Saga
The Trump administration has spent months trying to stop states from regulating AI. The record so far: 0 for 3.
Attempt one: A 10-year moratorium on state AI regulation was quietly tucked into the House budget reconciliation bill last year. The Senate voted 99-1 to strip it out after 40 state attorneys general and 260 state legislators from all 50 states pushed back. Legal scholars argued it clearly violated the Byrd Rule.
Attempt two: The White House released its National Policy Framework for Artificial Intelligence on March 20. The framework recommends that Congress block states from regulating AI model development or holding developers liable for third-party misuse. It’s not binding—it’s a wishlist. And Congress hasn’t acted on it.
Attempt three: HR 5388, the “American Artificial Intelligence Leadership and Uniformity Act,” proposes a temporary moratorium on state AI laws affecting interstate commerce. It hasn’t gained traction.
The pattern is clear. The White House and big tech want federal preemption. Congress doesn’t have the votes to deliver it. And every month that passes without a federal framework, more states fill the vacuum.
The State Bill Avalanche
How fast are states moving? The numbers tell the story:
- 2024: ~600 AI bills introduced, ~100 enacted
- 2025: ~1,200 AI bills introduced
- 2026 (through March): 1,561 AI bills introduced across 45 states—and still climbing
Since our last tracker update on April 13, the total count of laws enacted this year has hit at least 25, with another 27 that have cleared both chambers and await governors' signatures.
The 19 bills that became law in late March and early April span a remarkable range:
Health insurance AI restrictions: Indiana, Utah, and Washington all passed laws prohibiting health insurers from using AI as the sole basis for denying or modifying claims. This is one of the clearest examples of AI regulation with immediate consumer impact—if your insurance claim gets rejected, a human now has to be involved in that decision.
Chatbot laws with teeth: Oregon’s SB 1546 regulates AI companion platforms that simulate romantic or intimate relationships. Idaho’s S 1297 requires operators to give parents of users under 13 control over privacy and account settings. Both take effect January 1, 2027.
Deepfake crackdowns: Tennessee requires disclaimers on political ads using AI-generated content. Utah banned non-consensual deepfake intimate images and reorganized offenses related to AI-generated child sexual abuse material. Washington expanded restrictions on sexually explicit AI-generated depictions of minors.
AI in schools: Utah passed two bills addressing AI literacy in schools, and Idaho established a framework for generative AI use in K-12 education.
California Goes Its Own Way
Governor Newsom isn’t waiting for federal direction. On March 30, he signed Executive Order N-5-26, setting new standards for AI companies that want state contracts.
Companies seeking to do business with California must certify their policies on preventing distribution of illegal content (including CSAM and non-consensual intimate imagery), avoiding harmful bias, and protecting civil rights.
The executive order also includes a pointed provision: California can separate its AI procurement authorization from the federal government’s if needed. Translation: if the feds lower the bar, California will set its own.
The EU’s August Deadline
While the US argues about whether to regulate AI, Europe’s clock is ticking.
On August 2, 2026, the EU AI Act's high-risk system obligations take full effect. That means:
- All operators of high-risk AI systems must comply with risk management, data governance, transparency, and cybersecurity requirements
- The European Commission gains full supervision and enforcement powers over general-purpose AI model providers
- Fines kick in at serious levels: up to 35 million euros or 7% of global annual revenue for banned AI practices, 15 million euros or 3% for high-risk system violations
For any AI company doing business in Europe—which includes most major US providers—this isn’t optional. And unlike the messy patchwork of US state laws, it’s a single, comprehensive framework.
The irony isn’t lost on observers: the US tech industry has spent years lobbying against state-level regulation, arguing it needs one consistent national standard. The EU is about to provide exactly that—just not one that US companies helped write.
What This Means
Three things to watch:
The preemption fight isn’t over. The White House framework may be non-binding, but it signals where the administration wants Congress to go. For now, federal preemption is dead; if Republicans gain seats in the midterms, it could return as a serious legislative threat.
State laws are becoming the de facto national standard. When New York, California, and Washington all regulate frontier models and chatbots, companies build for compliance across the board. The lack of federal action doesn’t mean no regulation—it means regulation by the most aggressive states.
The EU deadline will force compliance regardless. Come August, any AI provider serving European customers must meet EU AI Act requirements. For companies already building to EU standards, US state laws become less burdensome by comparison. For companies that ignored both, August is going to be expensive.
What You Can Do
- If you’re building AI products: Map your obligations across the states where you operate. New York’s RAISE Act (effective January 1, 2027), California’s TFAIA, and the EU AI Act all have different but overlapping requirements. Start with the strictest.
- If you’re using AI in healthcare or insurance: Check whether your state now requires human review of AI-assisted decisions. Indiana, Utah, and Washington already do. More states will follow.
- If you’re a parent: Oregon and Idaho’s chatbot laws won’t take effect until 2027, but you can already check what AI companions your kids are using and what data those platforms collect.
- If you’re following the policy debate: Track the Transparency Coalition’s legislative tracker for real-time updates on all 2,000+ bills.