U.S. District Judge Rita Lin has blocked the Pentagon from labeling Anthropic a national security supply chain risk — and halted President Trump's order for every federal agency to immediately cease using Claude.
In a scathing 43-page ruling issued last Thursday, Lin called the government's actions "Orwellian" and said the broad punitive measures appeared "designed to punish Anthropic" and could "cripple" the company.
"Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," she wrote.
This is the latest turn in a legal clash we covered when it first erupted on March 19. The dispute started when Anthropic pushed to bar the military from using Claude for domestic surveillance or to power fully autonomous weapons. Defense Secretary Pete Hegseth responded by invoking a rare military authority, one previously used against foreign adversaries, to designate Anthropic as a supply chain risk and cut it off from all government work.
Anthropic sued, calling it an "unprecedented and unlawful campaign of retaliation." The judge agreed, writing that the record "strongly suggests that the reasons given for designating Anthropic a supply chain risk were pretextual and that the government's real motive was unlawful retaliation."
What the ruling does — and doesn't do
The preliminary injunction blocks both the supply chain risk designation and Trump's social media directive ordering agencies to stop using Claude. But Lin was careful to note what the ruling doesn't do: it doesn't require the Pentagon to use Anthropic's products, and it doesn't prevent the government from choosing a different AI provider.
"If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude. Instead, these measures appear designed to punish Anthropic," Lin wrote.
The order is stayed for seven days to give the government time to appeal. A separate, narrower case is still pending in a federal appeals court in Washington, D.C.
Why it matters
This is a landmark ruling at the intersection of AI safety, government power, and corporate speech. A federal judge has now said, on the record, that the government cannot weaponize procurement authority to punish an AI company for having safety policies the military doesn't like.
For the broader AI industry, the implications are significant. If the ruling holds, it establishes a precedent: AI companies can set usage restrictions on their models, including restrictions the government disagrees with, without being branded as national security threats. If the government prevails on appeal, the opposite signal stands: safety guardrails that conflict with military objectives can carry existential business risk.
Either way, this case is far from over.
Also in the news
- SpaceX lined up 21 banks for a mega IPO codenamed "Project Apex." Expected in June at a $1.75 trillion valuation, it would be the largest stock market debut in history. Morgan Stanley, Goldman Sachs, and JPMorgan are leading.
- Microsoft open-sourced VibeVoice, a text-to-speech model built to generate long-form, multi-speaker conversational audio, continuing Redmond's push into open-source AI tooling.
- Deep-Live-Cam 2.1 hit GitHub's trending page, offering real-time face swapping and one-click video deepfakes from a single image and raising fresh questions about how easy synthetic media has become to produce.
- Coder raised a $90M Series C led by KKR for its platform that lets developers build and run code anywhere from local devices to the cloud, reflecting continued investor appetite for developer infrastructure.