Policy

DOJ Joins xAI Lawsuit Against Colorado AI Bias Law in First Federal Intervention

Michael Ouroumis · 3 min read

The U.S. Department of Justice moved on Friday, April 24, to intervene in Elon Musk's xAI lawsuit challenging Colorado's algorithmic discrimination law, the first time the federal government has formally joined a court fight over a state-level AI regulation. The intervention escalates a constitutional showdown that could shape how, and whether, states may police AI systems before a comprehensive federal framework exists.

A First-of-Its-Kind Federal Move

The DOJ's filing in federal court in Colorado backs xAI's bid to block SB24-205, a 2024 statute that requires developers and deployers of "high-risk" AI systems to disclose risks and take steps to prevent algorithmic discrimination. The law covers AI used in consequential decisions across employment, housing, healthcare, mortgage lending, and student admissions, and is scheduled to take effect on June 30.

In its filing, the Justice Department argues the statute violates the Fourteenth Amendment's Equal Protection Clause. The DOJ contends Colorado is forcing AI companies to police unintentional disparate impact tied to protected characteristics such as race and sex, while carving out exemptions for practices designed to advance diversity — a structure federal lawyers say amounts to compelled race-conscious design.

What xAI Filed in April

xAI brought its original suit on April 9, arguing that designing and training an AI model is itself an "expressive act" protected by the First Amendment. The company says complying with Colorado's law would force it to retool Grok's training data and system prompts to align with the state's preferred conception of fairness, effectively dictating the model's viewpoint.

The complaint also raises preemption and Commerce Clause concerns, claiming a single state cannot impose design mandates that would, in practice, govern a nationally distributed AI product.

Why the Timing Matters

With the June 30 effective date roughly two months away, the case is on a fast track. A preliminary injunction hearing would determine whether Colorado can begin enforcing disclosure, impact-assessment, and risk-mitigation requirements against frontier AI developers. A ruling for xAI and the DOJ could chill similar bills in other states; a ruling for Colorado could embolden them.

The filing also reflects a broader Trump administration posture against state AI rules. Federal officials have argued throughout 2026 that a patchwork of state laws threatens U.S. competitiveness and that AI policy should be set in Washington — a stance echoed in recent White House efforts to preempt state AI legislation.

Implications for AI Companies

For AI developers, the intervention sharpens an already pressing question: which jurisdiction's rules apply to a model used everywhere? Colorado's framework is one of the most expansive state AI laws on the books, and several other states have taken cues from its language on disparate impact and impact assessments.

If the court enjoins SB24-205, AI labs will likely have more runway before facing state-level compliance regimes. If it doesn't, every developer touching housing, hiring, or lending decisions will need to operationalize Colorado-specific documentation, audit, and disclosure workflows before summer.

The case also tests the legal theory that AI training is constitutionally protected speech — a question with consequences far beyond Colorado. However the court rules, the dispute is now a federal-versus-state contest, not merely a private one.

