The EU Parliament voted today to ban AI “nudifier” tools outright and delay AI Act compliance deadlines. Meanwhile, states continue passing chatbot safety laws despite federal preemption threats, Colorado moves to gut its landmark AI law, and Congress introduced a new bill shielding AI data centers from environmental litigation.
EU Bans AI Nudifier Tools, Delays High-Risk Rules
The European Parliament voted today on amendments to the AI Act that add a new prohibition and extend several deadlines.
The Nudifier Ban
MEPs voted to ban “nudifier” AI systems—tools that create or manipulate images to make them sexually explicit without the depicted person’s consent. These apps have proliferated online, with victims ranging from celebrities to schoolchildren.
The ban adds nudifier systems to the AI Act’s list of prohibited practices, making them illegal to develop, deploy, or offer in the EU. Violators face fines of up to €35 million or 7% of global annual revenue, whichever is higher.
Extended Deadlines
The Parliament also backed pushing back several compliance dates:
- High-risk systems (biometrics, employment, education, law enforcement): December 2, 2027 instead of August 2, 2026
- AI content watermarking: February 2, 2027 instead of November 2, 2026
Only 8 of 27 EU member states have designated their AI Act enforcement authorities, despite a deadline that passed last August. The delays acknowledge the practical reality: neither companies nor regulators are ready.
The Council already agreed its position on March 13. Trilogue negotiations between Parliament, Council, and Commission begin soon, with a target of publishing final amendments by July.
Pennsylvania SAFECHAT Act Passes Senate 49-1
Pennsylvania’s Senate Bill 1090 cleared the Senate on March 17 with near-unanimous support.
The SAFECHAT Act (Safeguarding Adolescents from Exploitative Chatbots and Harmful AI Technology) targets companion chatbots—AI systems designed for ongoing personal interaction rather than task completion.
Requirements:
- Disclose that users are interacting with AI, not a human
- Establish protocols to prevent content related to suicide or self-harm
- Connect users showing signs of crisis with mental health resources
- Report crisis intervention incidents to state authorities
The bill now sits with the House Communications and Technology Committee. If passed, it would take effect January 1, 2027, joining Washington and Oregon’s laws that take effect the same day.
Idaho Chatbot Bill Advances
Idaho’s S 1297 passed the Senate 21-12 and was amended and filed for a second reading on March 18.
The Conversational AI Safety Act mirrors Oregon’s SB 1546 but includes a carve-out for chatbots embedded within other services. Like other chatbot bills, it requires hourly disclosure to minors that they’re talking to AI, prohibits self-harm content, and mandates crisis intervention protocols.
If passed, Idaho would be the fourth state with a chatbot safety law on the books.
Colorado Plans to Scrap Landmark AI Law
Colorado’s AI Act—the most comprehensive state AI law in the country—may not survive in recognizable form.
A state-appointed working group released a draft framework on March 17 proposing to repeal and replace the 2024 law entirely.
What’s Changing:
The original Colorado AI Act focused on how high-risk AI systems are designed, deployed, and monitored. It required:
- Risk assessments for high-risk AI systems
- A statutory duty of care for developers and deployers
- Impact assessments and discrimination prevention measures
The new framework scraps most of this in favor of transparency requirements:
- Consumer notices when AI influences “consequential decisions”
- Disclosure of what data is collected and how it’s used
- No more risk assessments or duty of care
Why the Retreat:
Industry groups called the original law “heavy-handed and unworkable.” The compliance deadline, originally February 1, 2026, was already pushed to June 30.
Governor Jared Polis, who signed the original act while expressing reservations, said the new draft will ensure residents know when AI affects decisions about their lives, but without the compliance burden that worried businesses.
What This Means:
Colorado was supposed to be the template. Other states were watching to see if comprehensive AI regulation could work at the state level. If Colorado backs down to a disclosure-only regime, it signals that detailed AI governance may require federal action, and federal action is precisely what Congress has declined to take, which is why states stepped in to begin with.
The draft bill hasn’t been introduced yet. If passed, it would take effect January 1, 2027.
Tennessee Passes AI Mental Health Ban
Tennessee’s legislature unanimously passed SB 1580 and HB 1470, prohibiting AI systems from advertising or claiming to be qualified mental health professionals.
The bills address a specific concern: AI chatbots marketing themselves as capable of providing therapy or mental health treatment. The prohibition covers both the development and deployment of such systems.
Tennessee joins a small but growing number of states restricting AI in healthcare contexts, though this is one of the narrowest approaches—targeting false advertising rather than AI use broadly.
Connecticut Employment AI Bill Advances
Connecticut’s SB 435 passed out of the Joint Committee on Labor and Public Employees in March.
The bill requires employers to:
- Notify employees when AI tools influence employment decisions
- Provide detailed disclosures about data collection and automated systems
- Document human review of AI-assisted decisions
- Conduct bias audits by state-approved independent auditors
- Report audit results to the Connecticut Department of Labor
If passed, Connecticut would have some of the strongest employee protections against algorithmic decision-making in the country.
Federal: Congress Shields AI Data Centers
On March 24, H.R. 8037, the “Protect American AI Act of 2026,” was introduced in the House.
Despite the name, the bill has nothing to do with AI safety. It shields data centers from environmental litigation by preventing lawsuits from affecting permits, licenses, or approvals already issued for data center construction.
A House Judiciary Committee hearing was scheduled for today, March 26.
The bill reflects the administration’s position that AI infrastructure must be built quickly, even if that means limiting citizens’ ability to challenge projects in court.
What’s Still Moving
Chatbot Bills Advancing:
- Arizona HB 2311 in Senate Rules Committee, with a hearing that was set for March 23
- Georgia SB 540 advancing before April 6 session end
Healthcare AI:
- Vermont passed two healthcare-related AI bills out of committees
- Colorado healthcare AI bill crossed chambers
California:
- AB 1883 passed committee 5-0 on March 19
- AB 1898 passed committee 7-0
New York:
- RAISE Act chapter amendments (S 8828) passed both chambers
What This Means
The EU is adding teeth to its AI rules with the nudifier ban while acknowledging reality by extending deadlines. States are ignoring federal preemption threats—Pennsylvania passed its chatbot bill 49-1 despite the White House framework released a week earlier.
But Colorado’s retreat is the biggest story. The state that passed America’s most ambitious AI law is preparing to gut it before it takes effect. That’s not regulatory simplification—it’s capitulation.
Chatbot safety laws are succeeding because they’re narrow: specific harms, specific requirements, specific enforcement. Comprehensive algorithmic governance remains elusive.
For companies, the immediate question is chatbot compliance. Washington, Oregon, Pennsylvania (if the House agrees), and Idaho (if passed) all target January 2027. For the EU, the nudifier ban will apply immediately upon final publication—companies running these tools in Europe need to shut them down.
For everyone else, watch Colorado. If the template state for comprehensive AI regulation abandons the model, the regulatory vacuum at the federal level becomes even more significant.