On April 24, the United States Department of Justice formally intervened in a lawsuit filed by Elon Musk’s xAI against the state of Colorado, joining the effort to strike down the first comprehensive state law designed to prevent AI systems from discriminating against people based on race, gender, age, or income. The DOJ’s stated reason: the law requires AI companies to “infect their products with woke DEI ideology.”
That is a direct quote from Assistant Attorney General Harmeet K. Dhillon. It is now the official legal position of the federal government.
What Colorado’s Law Actually Does
Senate Bill 205, signed into law in 2024 and set to take effect June 30, 2026, regulates what it calls “high-risk” AI systems — specifically those used for consequential decisions in mortgage lending, student admissions, and employment. The law requires AI developers and deployers to meet disclosure, reporting, and prevention requirements to guard against algorithmic discrimination: the documented phenomenon where automated systems produce biased results that systematically disadvantage people based on protected characteristics.
This is not speculative. Algorithmic discrimination has been documented in hiring tools that screen out women’s resumes, lending algorithms that charge higher interest rates to Black borrowers, and healthcare systems that deprioritize care for minority patients. Colorado’s law is an attempt to create accountability mechanisms for a well-documented problem.
The law has a notable provision: it explicitly exempts algorithms designed to advance diversity or redress historical discrimination. This carve-out is central to xAI’s legal challenge.
The Legal Argument: Grok’s Free Speech
xAI filed its complaint on April 9 in federal court in Denver, arguing that SB 205 is unconstitutionally vague, invites arbitrary enforcement, and violates the First Amendment. The specific claim is that AI chatbot outputs constitute protected speech, and that requiring those outputs to avoid discriminatory patterns amounts to compelled speech — the government forcing AI companies to say things they don’t want to say.
“Its provisions prohibit developers of AI systems from producing speech that the State of Colorado dislikes, while compelling them to conform their speech to a State-enforced orthodoxy on controversial topics of great public concern,” the complaint states.
The argument builds on Grok’s branding as an AI that pursues “disinterested truth.” xAI claims the law would force Grok to abandon that approach by requiring outputs shaped by antidiscrimination mandates rather than “accurate” results.
When the DOJ intervened fifteen days later, it escalated the rhetoric. Assistant Attorney General Brett A. Shumate argued the law “threatens national and economic security and must be stopped.” The DOJ’s position, in effect: preventing AI systems from discriminating threatens America’s standing as “the global AI leader.”
The Diversity Carve-Out Problem
The law’s exemption for algorithms that advance diversity or correct historical bias does create a legitimate constitutional question. If the law prevents AI systems from producing outputs that disadvantage protected groups but exempts outputs that advantage them, that’s differential treatment based on viewpoint. Equal protection doctrine has something to say about that.
But this is a narrow legal issue that could be addressed by modifying or removing the carve-out. xAI’s lawsuit and the DOJ’s intervention go much further than that. They are arguing that any requirement to prevent algorithmic discrimination is unconstitutional — that AI companies have a First Amendment right to deploy systems that discriminate, as long as the discrimination isn’t intentional.
That is a sweeping legal theory. If it prevails, it would effectively immunize AI systems from antidiscrimination regulation by classifying their outputs as protected speech. Every hiring algorithm that screens out disabled applicants, every lending model that redlines minority neighborhoods, every insurance system that charges more based on zip codes correlated with race: all protected expression.
Why This Should Worry You
Colorado’s law is imperfect. The diversity carve-out is legally vulnerable. Some of its requirements may be vague. These are fixable problems that normal legislative revision could address.
What’s happening instead is that the company owned by the richest person on earth is using the federal government’s legal apparatus to establish a precedent that AI systems cannot be regulated for discriminatory outcomes at all. The framing — “woke DEI ideology” versus “the disinterested pursuit of truth” — is a political argument dressed up as a constitutional one.
The International AI Safety Report 2026, produced by over 100 experts from more than 30 countries, specifically identified the gap between AI capabilities and governance as a critical risk. The report organized AI risks into three categories: malicious use, malfunctions, and systemic risks — with algorithmic discrimination falling squarely in the third category. Yoshua Bengio, who led the report, warned that the gap between technology and safeguards remains the central challenge.
Colorado responded to that challenge by passing a law. The federal government’s response is to help the AI industry tear it down.
What Happens Next
SB 205 is scheduled to take effect June 30. Colorado’s Attorney General Phil Weiser declined to comment on the active litigation. State lawmakers, including the bill’s lead sponsor Rep. Brianna Titone, called the DOJ’s intervention a “distraction” and maintained the law simply prevents discrimination.
Rep. Manny Rutinel put it more directly: the administration is “attacking the law to benefit Musk.”
Whether or not that characterization is fair, the structural dynamics are clear. The company challenging the law has a financial interest in avoiding compliance costs. The federal government joining that challenge sends a signal to every other state considering AI regulation: if you try to hold AI systems accountable for discriminatory outcomes, you will face legal action from both the industry and the DOJ.
That signal will not be lost on state legislators. And it will not be lost on the communities that algorithmic discrimination actually affects.