The federal government is preparing to sue states over their AI laws. The Commerce Department has until March 11 to identify which state regulations are “overly burdensome.” And California’s chatbot safety rules for minors are already in effect.
Here’s where AI regulation stands as we head into March 2026.
The Big Picture: Federal vs. State Showdown
President Trump’s December 2025 executive order on AI created a two-pronged attack on state regulation. The Department of Justice established an AI Litigation Task Force in January with a single mission: challenge state AI laws in federal court.
The legal theories the task force will deploy include claims that state laws unconstitutionally burden interstate commerce, conflict with federal regulations, or compel speech in violation of the First Amendment.
The executive order specifically called out Colorado’s AI Act as an example of problematic regulation, claiming the state’s anti-discrimination requirements force AI systems to produce “false results.”
Colorado has already blinked. The state delayed implementation of SB 24-205 from February 1 to June 30, 2026 - buying time but not solving the underlying conflict.
The March 11 Deadline
The Commerce Department must publish a comprehensive review of all state AI laws by March 11, identifying two categories for potential challenge:
“Altered truthful outputs”: State laws that require AI models to modify outputs in ways the administration characterizes as forcing “false” results. Colorado’s anti-discrimination requirements are the primary target here.
“Compelled disclosures”: State transparency and reporting requirements that may raise First Amendment concerns. California’s AI training data transparency laws are potential targets.
The review will flag laws appropriate for referral to the DOJ’s litigation task force. The FTC must also issue a policy statement by the same date describing how federal law applies to AI and when state laws requiring alteration of “truthful outputs” are preempted.
The administration is also conditioning $42 billion in broadband funding on states repealing AI regulations deemed onerous - using infrastructure money as leverage against state policy.
State Response: Defiance
Governors in California, Colorado, and New York have issued statements indicating that the executive order will not stop their states from passing or enforcing AI statutes.
Their legal argument: federal preemption typically requires congressional action, not executive orders. The administration’s theory rests on the dormant Commerce Clause and existing federal regulations - grounds that courts haven’t tested against AI-specific laws.
But the uncertainty itself may be the point. Companies caught between state requirements and federal pressure may choose the path of least resistance: wait to see which laws survive rather than comply with all of them.
What’s Already in Effect
Despite the federal pushback, several state AI laws are now live:
California AB 2885 (effective January 1, 2026): AI-generated election content must be labeled as synthetic. Violators face civil penalties.
California SB 243 (effective January 1, 2026): The “Companion Chatbots Act” requires chatbots to disclose to minors that they’re AI, remind minor users every three hours to take a break, block sexually explicit content for underage users, and implement suicide prevention protocols. Families can sue developers for violations.
California’s Transparency in Frontier AI Act (effective January 1, 2026): Developers must disclose training data sources and safety testing results.
Texas’s Responsible AI Governance Act (effective January 1, 2026): Establishes guidelines for state agency AI use.
Illinois AI Video Interview Act amendments (effective January 1, 2026): Expanded disclosure requirements for employers using AI in hiring.
Legislative Activity This Week
State legislatures remain active despite the federal threats. As of mid-February, companion and consumer chatbot bills had advanced in Virginia, Washington, Utah, Arizona, and Hawaii.
The “year of the chatbot bill” continued last week, with bills crossing chambers in Oregon, Utah, Virginia, and Washington and new bills introduced in at least six additional states.
Most focus on disclosure requirements - forcing chatbots to identify themselves as AI - rather than content restrictions. This may prove strategically smart: disclosure mandates are harder to characterize as compelling “false” outputs than anti-discrimination rules.
Federal Enforcement: A Strategic Retreat
While the DOJ prepares to attack state laws, federal agencies are pulling back their own AI enforcement.
The FTC vacated its 2024 consent order against Rytr LLC in December 2025. The agency had alleged that Rytr’s AI writing assistant could generate fake testimonials and reviews. The reversal came with a policy statement: “Condemning a technology or service simply because it potentially could be used in a problematic manner is inconsistent with the law.”
The CFTC did issue its first advisory on AI-related prediction market misconduct on February 25, clarifying that insider trading rules apply to these platforms. But that’s enforcement of existing law, not new AI-specific regulation.
What This Means
The federal approach to AI regulation has crystallized into a clear strategy: block states from regulating while declining to regulate federally.
The theory is that innovation suffers when companies must navigate a patchwork of state requirements. The risk is that nobody regulates at all - at least not until something goes seriously wrong.
States see it differently. California’s chatbot safety laws exist because a teenager died after extended conversations with a companion AI. Colorado’s anti-discrimination rules exist because algorithmic bias in hiring, lending, and housing has documented real-world harms.
The constitutional questions underlying this fight - whether AI output constitutes speech, and whether states can require “truthful” AI when the federal government defines truth differently - will take years to resolve in court.
Coming Up
March 11: Commerce Department report on state AI laws; FTC policy statement on federal preemption.
June 30: Colorado’s AI Act scheduled to take effect (if it survives).
July 1, 2027: First annual reports on links between chatbot use and suicidal ideation due under California’s chatbot safety law.
The next two weeks will reveal which state laws the administration considers worth challenging and how broadly the FTC will claim federal law preempts them. The litigation that follows could define AI governance for the next decade.