On January 9, 2026, DOJ employees received an internal memo from Attorney General Pam Bondi. It announced the creation of the AI Litigation Task Force — a unit inside the Department of Justice whose explicit purpose is to sue states over their AI laws.
The task force didn’t come from nowhere. On December 11, 2025, President Trump signed Executive Order 14365, titled “Ensuring a National Policy Framework for Artificial Intelligence.” The order calls state AI regulations an obstacle to American dominance and lays out a strategy to neutralize them through litigation, funding pressure, and federal agency action.
Two months later, the consequences are playing out in real time. States are enforcing new AI laws. The federal government is threatening to block them. And neither side has a clear path to a definitive legal victory.
What the Executive Order Actually Does
Executive Order 14365 has four main mechanisms:
1. The AI Litigation Task Force. It’s led by Attorney General Bondi or her designee, with representatives from the offices of the deputy and associate attorneys general, the Solicitor General’s office, and the Civil Division. Their job: identify state AI laws that “unconstitutionally burden interstate commerce, are preempted by federal regulations, or are otherwise unlawful,” and challenge them in federal court.
2. Commerce Department review. The Secretary of Commerce has 90 days — a deadline arriving March 11, 2026 — to identify “onerous” state AI laws and refer them to the litigation task force. This creates a pipeline: Commerce flags the laws, DOJ sues over them.
3. FTC enforcement guidance. Within 90 days, the FTC must issue guidance on how the FTC Act applies to AI models. The executive order specifically names Colorado’s AI Act, claiming it forces AI systems to “produce false results in order to avoid a ‘differential treatment or impact’ on protected groups.” The FTC is directed to classify state-mandated bias mitigation as a “per se deceptive trade practice.”
4. Federal funding leverage. The order conditions state eligibility for federal assistance on alignment with the new AI policy framework. It explicitly targets $42 billion in broadband infrastructure funding under the BEAD program, requiring states to repeal AI regulations deemed “onerous” as a condition of receiving the money.
There are carve-outs. The order preserves state authority over child safety, AI infrastructure permitting, and state government procurement. But everything else is fair game.
The States That Aren’t Backing Down
Three major state AI laws took effect on January 1, 2026, just three weeks after the executive order was signed:
California’s Transparency in Frontier Artificial Intelligence Act (SB 53) requires developers of powerful AI models to publish risk-management frameworks, conduct red-teaming exercises, and report catastrophic safety incidents. Violations carry civil penalties of up to $1 million per incident. California also enacted the AI Transparency Act, requiring AI-generated content disclosures and detection tools from platforms with more than one million monthly users, though enforcement was pushed to August 2, 2026.
Texas’s Responsible Artificial Intelligence Governance Act (HB 149) prohibits AI systems designed to manipulate human behavior, intentionally discriminate based on political viewpoint, or produce child sexual abuse material. Government entities must disclose when consumers are interacting with AI and cannot use AI for biometric identification from public data without consent.
Colorado’s AI Act was originally set to take effect February 1, 2026, but the state delayed implementation to June 30. The delay came partly from industry pressure over compliance costs and partly from federal pressure — including a proposal in the “Big Beautiful Bill” for a 10-year moratorium on state AI regulation, which was later stripped out.
Colorado is the law the White House most directly targets. The executive order calls out its anti-discrimination provisions by name, arguing they force AI companies to choose between accuracy and compliance.
The Constitutional Problem
Here’s the awkward part: executive orders can’t actually preempt state law.
Federal preemption flows from congressional legislation, not presidential directives. Congress has not passed a federal AI law. Without one, the executive order is a statement of policy, not a binding override of state authority.
The DOJ knows this. That’s why the litigation strategy doesn’t rely on the executive order itself. Instead, the task force will argue that specific state laws violate the Dormant Commerce Clause — the constitutional principle that states can’t impose undue burdens on interstate commerce. The theory: AI models are inherently interstate products, and state-by-state regulation creates an unworkable patchwork that fragments the market.
This argument isn’t frivolous. Courts have struck down state regulations of internet-based services on similar grounds. But it’s not a sure thing, either. States have broad police powers to protect their residents. California’s AI safety law is explicitly framed as consumer protection. Texas’s law prohibits AI-facilitated manipulation and abuse. These are traditionally state regulatory domains.
The funding leverage is more immediately powerful. Unlike litigation, which takes years, withholding $42 billion in BEAD broadband money creates pressure right now. States that want their infrastructure funding have a concrete incentive to reconsider AI regulations — regardless of whether those regulations are constitutional.
What Companies Are Actually Doing
Legal advisors are telling companies to comply with state laws anyway. Gibson Dunn, one of the most prominent tech law firms in the country, wrote in December: “Companies are likely well-advised to continue to operate under the expectation that states will legislate — and enforce — their AI-related laws.”
The reasoning is practical. Even if the DOJ eventually wins challenges to specific state laws, those cases will take years to resolve. In the meantime, state attorneys general can and will enforce their laws. Companies that stop complying based on a presidential executive order may find themselves facing state enforcement actions without the federal protection they assumed would materialize.
This creates a strange dynamic: the White House is telling AI companies that state regulations are an unnecessary burden, while legal counsel is telling those same companies to keep complying with state regulations.
Why This Matters
The federal-state AI fight matters because it determines who sets the rules for AI’s most consequential uses — employment decisions, healthcare recommendations, criminal justice risk assessments, consumer lending, and housing.
California’s approach says: AI companies that build powerful models must test them for safety and report failures. Colorado’s approach says: AI systems that affect people’s lives must not discriminate, and companies must show their work. Texas’s approach says: AI can’t be used to manipulate people or violate their political freedom.
The federal approach, as articulated in EO 14365, says: these regulations slow innovation, fragment the market, and might force AI systems to produce less accurate results to satisfy anti-discrimination requirements.
Both sides have legitimate arguments. State-by-state regulation genuinely creates compliance complexity. But the federal government hasn’t proposed alternative protections. The executive order establishes a “minimally burdensome national policy framework” — which in practice means minimal regulation. If the DOJ succeeds in blocking state laws without Congress passing federal ones, the result is a regulatory vacuum for AI systems that make decisions about people’s jobs, credit, and freedom.
What Happens Next
Three deadlines are approaching:
March 11, 2026: The Commerce Department’s 90-day deadline to identify “onerous” state AI laws and refer them to the DOJ task force. This is when we’ll know which specific laws face federal lawsuits.
March 11, 2026: The FTC’s deadline to issue policy guidance classifying state-mandated bias mitigation as potentially deceptive. If the FTC follows through, it creates a direct federal-state regulatory conflict that could accelerate litigation.
June 30, 2026: Colorado’s delayed AI Act takes effect — unless the legislature amends it further or the DOJ obtains an injunction.
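The 90-day clocks are easy to check yourself. A quick sketch, assuming the windows run from the order’s December 11, 2025 signing date (the order itself could specify a different trigger):

```python
from datetime import date, timedelta

# Executive Order 14365 was signed December 11, 2025.
signing = date(2025, 12, 11)

# Both the Commerce review and the FTC guidance carry 90-day deadlines;
# counting from the signing date puts both on the same day.
deadline_90 = signing + timedelta(days=90)
print(deadline_90)  # 2026-03-11
```

On that assumption, the Commerce referral and the FTC guidance land on the same day — March 11, 2026 — making mid-March the moment the administration’s specific targets become public.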
Meanwhile, at least 15 other states have AI legislation pending for 2026. The fight in California, Colorado, and Texas is a preview. If the federal government succeeds in blocking their laws, other states will think twice about passing their own. If the states hold, the patchwork the White House fears becomes the regulatory reality.
What You Can Do
- Know your state’s AI laws. If you live in California, Colorado, or Texas, your state has passed protections covering AI transparency, anti-discrimination, or behavioral manipulation. These laws are in effect regardless of the executive order.
- Watch the March deadlines. The Commerce Department report and FTC guidance will reveal the administration’s specific targets. If your state’s AI law is on the list, expect federal litigation.
- Support (or oppose) the legislation. Whether you think states should regulate AI or not, the decisions being made in the next four months will set the trajectory. Contact your state legislators. Comment on proposed rules. The window for public input is open.
- Follow the money. If your state is receiving BEAD broadband funding, the administration may use that money as leverage against AI regulation. Understanding the connection between infrastructure spending and regulatory policy helps you make informed demands of your representatives.
The most important AI policy fight of 2026 isn’t about model capabilities or benchmark scores. It’s about governance — who gets to make rules, who enforces them, and what happens to the people those rules are supposed to protect when the regulators start fighting each other instead.