Policy

Former Federal Judges Back Anthropic as Trump Administration Defends Pentagon Blacklisting in Court

Michael Ouroumis · 2 min read

The legal battle between Anthropic and the U.S. government escalated this week as the Trump administration filed a court brief defending its blacklisting of the AI company, while former federal judges weighed in with filings supporting Anthropic's challenge.

The case has become the most closely watched AI policy dispute in the country, pitting national security authority against the rights of AI companies to maintain safety restrictions on their own products.

The Government's Defense

In a filing submitted on March 17, government lawyers argued that the Pentagon's designation of Anthropic as a supply chain risk was both justified and lawful. The administration contended that operational urgency justifies swift exclusion of companies from government contracts and that courts must defer to national security assessments made by defense officials.

At the core of the government's argument is a stark position: the Pentagon wants to use Anthropic's Claude AI for "all lawful purposes" and asserts it cannot allow a private company to dictate how its tools are used in a national security context. Government attorneys argued that Anthropic's safety guardrails — which prevent the AI from assisting with autonomous weapons targeting or domestic surveillance — are incompatible with military operational needs.

Former Judges Side With Anthropic

In a notable development on the same day, former federal judges submitted amicus briefs raising concerns about the Pentagon's use of the supply chain risk label against Anthropic. The former judges argued that the designation process lacked the procedural safeguards required by law, lending weight to Anthropic's claim that the government violated due process.

Background

The dispute began on March 3 when Secretary of War Pete Hegseth designated Anthropic a supply chain risk after the company refused to remove its safety guardrails. Anthropic filed suit on March 9, calling the designation "unprecedented and unlawful" and arguing it violated the company's free speech and due process rights.

Broader Implications

The case raises fundamental questions about whether AI companies can be compelled to remove safety restrictions to serve government customers. A ruling in the government's favor could set a precedent that effectively forces AI labs to choose between maintaining safety policies and accessing lucrative federal contracts.

Conversely, a ruling for Anthropic could establish that AI companies retain the right to set usage boundaries on their products, even when the customer is the U.S. military. The outcome will likely shape how every major AI lab approaches government contracts going forward.

A hearing date has not yet been set, but given the national security dimensions, legal observers expect the case to move through the courts on an expedited timeline.

