AI Regulation Tracker: EU Bans Deepfake Nudes, US Threatens State Laws, China's Fines Get Real

This week brought an EU deal to ban AI-generated sexual deepfakes, the Commerce Department's report on state AI laws, and massive new penalties in China's cybersecurity overhaul.


The EU just banned AI-generated sexual deepfakes. The US government released its report on which state AI laws it wants to kill. China’s cybersecurity overhaul went into effect with fines now reaching 5% of a company’s prior-year revenue.

It’s been a busy two weeks for AI regulation. Here’s what actually happened.

EU: Sexual Deepfakes Now Explicitly Illegal

On March 11, EU lawmakers struck a deal on the “AI Omnibus” - a package of amendments to the AI Act. The headline: an explicit ban on AI systems that generate non-consensual intimate images, including child sexual abuse material.

The ban came in response to the Grok nudification scandal. When xAI released an image-editing feature for Grok in late December, users immediately exploited it to generate sexualized images of real women and girls. Researchers at AI Forensics estimated the tool produced at least 3 million non-consensual sexual images and 20,000 child sexual abuse images in 11 days.

The European Commission admitted that existing EU law - including the AI Act as written - didn’t actually prohibit this. Hence the emergency amendment.

France, Spain, Germany, and Slovakia pushed hardest for the explicit ban, with Germany and Slovakia threatening to block the entire file unless it was included. The EU Council added the prohibition in a last-minute move on March 10. European ambassadors endorsed the common position on March 13.

Other Omnibus Changes

The deal also weakens some AI Act requirements:

AI literacy requirements gutted: Companies will no longer be required to ensure staff have sufficient AI literacy. That obligation is now just an “encouragement” from the Commission and member states.

Sensitive data processing loosened: The threshold for using sensitive personal data to detect bias drops from “strictly necessary” to just “necessary.”

High-risk deadlines extended: Some requirements for high-risk AI systems, originally due August 2026, will be pushed back - potentially to December 2027 - because standards and support tools aren’t ready.

Sector-regulated products get relief: AI systems embedded in medical devices and industrial machinery face eased compliance rules.

A committee vote is scheduled for March 18, followed by trilogue negotiations between the Parliament and the Council, with final passage likely by mid-2026.

US: The Federal Preemption Push

The Trump administration’s campaign to override state AI laws reached two major deadlines this week.

On March 11, the Department of Commerce published its comprehensive review of state AI laws, identifying which ones it considers “overly burdensome” or in conflict with federal priorities.

The same day, the FTC was directed to issue a policy statement explaining how the FTC Act applies to AI and when state laws requiring “alteration of truthful outputs” are preempted by federal law.

The administration’s argument: state AI laws that require transparency disclosures or algorithmic adjustments may violate the First Amendment and the Commerce Clause. The Attorney General has been directed to establish a task force to challenge such laws in court.

Which laws are targeted? According to legal analyses:

  • Algorithmic discrimination laws governing automated decision systems
  • Transparency requirements for generative AI models and training data
  • Political content regulations for AI-generated deepfakes
  • Reporting obligations for AI developers

Whether federal agencies can actually preempt state consumer protection laws is legally contested. Expect years of litigation.

State Legislatures Push Back

States aren’t waiting for the courts. Washington passed three AI bills on Thursday night before adjournment:

  • HB 1170: AI disclosure requirements
  • HB 2225: Chatbot safety for minors, including self-harm protocols
  • SB 5395: Restrictions on AI use in health insurance decisions

Utah passed nine AI-related bills covering schools, deepfake protection, and ensuring medical decisions are made by humans.

Oregon passed SB 1546 - an AI companion chatbot bill with a $1,000 private right of action. It requires providers to disclose when a chatbot isn’t human and implement procedures for suicidal ideation. Only two legislators voted against it.

Virginia is sending three bills to Governor Spanberger covering AI fraud, verification organizations, and social media platform duties of care for minors.

Kentucky’s HB 227 - social media age verification with algorithm restrictions for minors - passed the House 96-0.

Florida’s SB 482, an “AI Bill of Rights,” passed the Senate but stalled in the House as the session neared adjournment.

Over 100 AI-related bills remain active across state legislatures, and the federal preemption push hasn’t slowed them down.

China: Cybersecurity Law Gets Teeth

China’s amended Cybersecurity Law took effect January 1, 2026 - the first major overhaul since 2017. The changes relevant to AI:

New AI governance framework: The amendment adds provisions requiring support for foundational AI research, development of AI infrastructure, and “stronger ethical standards.” It also mandates enhanced security risk monitoring for AI systems.

Dramatically higher fines: Maximum penalties jump to CNY 50 million or 5% of prior year revenue for companies. Individuals face fines up to CNY 1 million. For serious consequences affecting critical infrastructure, fines can reach CNY 10 million.

No more warnings: Regulators can now issue immediate fines for cybersecurity failures. The previous requirement for initial warnings is gone.

Extraterritorial enforcement: New provisions allow China to pursue overseas actors for cyber activities affecting domestic networks.
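To make the penalty math above concrete, here is a minimal sketch of how the corporate fine ceiling described in this article would be computed. It assumes “CNY 50 million or 5% of prior year revenue” means whichever is greater (as in comparable EU-style penalty regimes); that reading, and the thresholds themselves, are taken from this article’s summary, not from the statute.

```python
def max_penalty_cny(prior_year_revenue_cny: float) -> float:
    """Illustrative corporate fine ceiling under the amended law,
    per this article's summary: the greater of CNY 50 million or
    5% of prior-year revenue. Not legal guidance."""
    FLAT_CAP = 50_000_000    # CNY 50 million
    REVENUE_SHARE = 0.05     # 5% of prior-year revenue
    return max(FLAT_CAP, REVENUE_SHARE * prior_year_revenue_cny)

# For a company with CNY 2 billion in prior-year revenue,
# the 5% share (CNY 100 million) exceeds the flat cap.
print(max_penalty_cny(2_000_000_000))  # 100000000.0
```

The crossover point sits at CNY 1 billion in revenue: below it the flat cap binds, above it the revenue share does, which is why the percentage component matters most for large multinationals.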

What’s Next

EU: Committee vote March 18 on the AI Omnibus. Final passage expected mid-2026. Full AI Act enforcement on high-risk systems August 2026 (or later if deadlines extend).

US: Expect legal challenges to the preemption strategy. FTC enforcement of existing consumer protection law against AI practices begins immediately. More state laws will pass despite federal threats.

China: Companies operating in China face immediate compliance requirements. The extraterritorial provisions mean foreign companies with Chinese customers need to pay attention.

The pattern: Europe regulates by amendment, the US regulates by litigation threat, and China regulates by increasing fines until compliance becomes cheaper than violation.

The Bottom Line

The Grok scandal proved the EU’s AI Act had holes. Lawmakers are patching them - while quietly weakening other protections.

The US federal government wants to preempt state AI laws but faces constitutional limits on its ability to do so. States aren’t waiting for permission.

China’s approach is simpler: make the fines big enough that companies take compliance seriously.

For anyone building or deploying AI systems, the message is clear: the compliance landscape is fragmenting, not consolidating. Plan accordingly.