A Federal Judge Just Called the Pentagon's Anthropic Ban "Orwellian"

April 1, 2026

U.S. District Judge Rita Lin has blocked the Pentagon from labeling Anthropic a national security supply chain risk — and halted President Trump's order for every federal agency to immediately cease using Claude.

In a scathing 43-page ruling issued last Thursday, Lin called the government's actions "Orwellian" and said the broad punitive measures appeared "designed to punish Anthropic" and could "cripple" the company.

"Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," she wrote.

This is the latest turn in a legal clash we covered when it first erupted on March 19. The dispute started when Anthropic pushed to bar the military from using Claude for domestic surveillance or to power fully autonomous weapons. Defense Secretary Pete Hegseth responded by invoking a rare military authority — one previously used against foreign adversaries — to designate Anthropic as a supply chain risk and cut it off from all government work.

Anthropic sued, calling it an "unprecedented and unlawful campaign of retaliation." The judge agreed, writing that the record "strongly suggests that the reasons given for designating Anthropic a supply chain risk were pretextual and that the government's real motive was unlawful retaliation."

What the ruling does — and doesn't do

The preliminary injunction blocks the supply chain risk designation and Trump's social media directive ordering agencies to stop using Claude. But Lin was careful to note that the ruling doesn't require the Pentagon to use Anthropic's products, and doesn't prevent the government from choosing a different AI provider.

"If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude. Instead, these measures appear designed to punish Anthropic," Lin wrote.

The order is stayed for seven days to give the government time to appeal. A separate, narrower case is still pending in a federal appeals court in Washington, D.C.

Why it matters

This is a landmark ruling at the intersection of AI safety, government power, and corporate speech. A federal judge has now said, on the record, that the government cannot weaponize procurement authority to punish an AI company for having safety policies the military doesn't like.

For the broader AI industry, the implications are significant. If the ruling holds, it establishes a precedent: AI companies can set usage restrictions on their models — including restrictions the government disagrees with — without being branded as national security threats. If the government appeals successfully, it sends the opposite signal: safety guardrails that conflict with military objectives can carry existential business risk.

Either way, this case is far from over.
