
Anthropic vs. Pentagon: Preliminary Injunction Hearing Set for Today

Michael Ouroumis · 2 min read

A pivotal courtroom showdown between one of Silicon Valley's leading AI safety companies and the U.S. Department of War reaches a critical milestone today, with a federal court in San Francisco hearing Anthropic's motion for a preliminary injunction in the case Anthropic PBC v. U.S. Department of War (Case No. 3:26-cv-01996-RFL).

What Triggered the Lawsuit

The dispute began after Anthropic refused to accept the Pentagon's standard "any lawful use" contractual policy — a clause the company argued was incompatible with its AI safety commitments. In response, the Secretary of War issued a Secretarial Determination designating Anthropic a "supply chain risk," prompting federal agencies to discontinue their use of Claude across government operations.

Anthropic then sued, arguing the designation violated the First Amendment, the Administrative Procedure Act, and due process guarantees.

The Government's Counter-Argument

In a 40-page opposition brief filed March 17, the Department of Justice pushed back hard. Attorneys argued that Anthropic could, in their assessment, "attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations" if the company believed its ethical red lines were being crossed.

"The Pentagon deemed that an unacceptable risk to national security," the filing states.

The government's legal team further argued that Anthropic's refusal to accept the contractual term did not constitute protected speech, and that even if a retaliatory motive were assumed, the government would have taken the same action regardless. Justice Department attorneys also challenged the company's claims of irreparable harm, arguing that Anthropic would not suffer lasting damage before a full ruling on the merits.

Broader Stakes

The case has drawn significant attention across the AI and legal communities. A coalition of 149 former federal and state judges previously filed an amicus brief in support of Anthropic's position, warning that the government's approach could have a chilling effect on principled AI governance.

The hearing before Judge Rita F. Lin — scheduled for 1:30 PM Pacific today — is expected to determine whether a preliminary injunction will pause the supply chain risk designation while the case proceeds to full litigation.

Why This Case Matters

At its core, Anthropic v. Department of War is a test of whether an AI company can maintain ethical deployment constraints when contracting with the federal government — and what happens when those constraints conflict with military objectives. The outcome could set lasting precedent for how AI safety commitments interact with national security law.

For the broader AI industry, the case raises uncomfortable questions: Can safety-focused AI companies avoid the gravitational pull of defense contracts without facing regulatory retaliation? And does the government have the authority to effectively ban AI providers that refuse to offer unrestricted access to their models?

Whatever Judge Lin decides today, the case is unlikely to end here. Both sides have signaled they are prepared to take the dispute to the Ninth Circuit if necessary.

