
Anthropic vs. Pentagon: Hearing Ends, Judge to Rule Within Days

By Michael Ouroumis · 3 min read

The preliminary injunction hearing in Anthropic's lawsuit against the Department of Defense wrapped up on Monday, with Judge Rita Lin of the Northern District of California indicating she would issue a ruling within days. The case, which pits one of AI's most prominent safety-focused labs against the Trump administration, could set a precedent for how the government treats AI companies that refuse military use cases.

Anthropic filed suit earlier this month after the Pentagon designated it a "supply-chain risk" and directed all federal agencies to stop using its Claude models within six months. The designation is normally applied to foreign companies suspected of posing cybersecurity or national security threats — not American firms — and the move generated significant bipartisan backlash from lawmakers and AI researchers alike.

What Each Side Argued

Anthropic argued that the designation was unconstitutional retaliation for the company's decision to set "red lines" on certain military use cases, including mass domestic surveillance and fully autonomous weapons. The company contends the government violated its First and Fifth Amendment rights, and that the executive order directing agencies to abandon Anthropic exceeded presidential authority.

The government's position, articulated by DOJ attorneys representing the Pentagon, was that Anthropic poses an "unacceptable risk to national security" — though specifics were limited in public filings. The administration argued the designation was a legitimate national security determination and that courts should defer to executive branch judgment on such matters.

Real-time reporting from the hearing, live-posted to Bluesky by Lawfare's Molly Roberts, indicated a back-and-forth that gave neither side a clear advantage. Judge Lin reportedly pressed both sides on the legal standards for preliminary injunctions, including whether Anthropic could demonstrate irreparable harm.

What's at Stake

The practical stakes are enormous. Anthropic counts the General Services Administration, the Treasury Department, the State Department, and dozens of other federal agencies among its customers. Most have already indicated, publicly or in private communications, that they plan to stop using Claude following the Trump administration's order.

Some of Anthropic's biggest private-sector clients — including Microsoft, which builds Anthropic into its Azure AI offerings — have made clear they're continuing to use Claude for non-Pentagon work. But the reputational and revenue damage from the federal pullout is already underway.

The preliminary injunction hearing is the first major legal test of whether the Trump administration can use national security mechanisms to punish AI companies for product decisions. The broader implications extend beyond Anthropic: any AI lab operating under contract with federal agencies is watching the case closely.

Industry Reaction

A group of more than 150 former federal judges filed an amicus brief supporting Anthropic's position, arguing that the designation process had been applied in an unprecedented and constitutionally dubious way. Employees from OpenAI and Google also published an open letter expressing concern about the precedent the government's actions could set for AI safety standards industry-wide.

The ruling, expected within days, will determine whether the supply-chain risk designation can remain in force while the case proceeds to a full trial.


