Council News

Pentagon Blacklists Anthropic, First U.S. Firm Ever Tagged Supply Chain Risk

National Security · 3 sources · Mar 6

The Letter That Locked Anthropic Out

Anthropic executives opened the Pentagon's letter Thursday and learned their company had become the first U.S. firm ever branded a supply chain risk, a designation that immediately bars any contractor working with the military from using Anthropic's AI systems. The designation came after weeks of negotiations collapsed when President Trump publicly ordered all federal agencies to "stop using Anthropic" on his Truth Social platform last Friday.

The designation carries teeth that could gut the $18 billion AI company's government business. Microsoft, Anthropic's largest commercial partner, told the BBC it will continue offering Claude to corporate clients but has already pulled the technology from any projects touching the Department of Defense. Other major contractors are scrambling to understand whether existing contracts must be terminated, with one defense executive privately telling colleagues the ruling creates "a compliance nightmare" for any company that previously integrated Anthropic's models.

Why Anthropic Refused to Give the Military What It Wanted

Anthropic, founded by former OpenAI researchers who worried about AI safety, built specific restrictions into its Claude AI system that prevent certain military applications. The company refused Pentagon demands to remove those guardrails, even as rivals like OpenAI raced to sign contracts with fewer limitations.

Dario Amodei, Anthropic's chief executive, wrote Thursday evening that the company had no choice but to challenge the designation in court. "The law requires the Secretary of War to use the least restrictive means necessary," Amodei argued, claiming the Pentagon's action exceeds its legal authority. His company had been negotiating with defense officials and appeared close to a compromise last week, until Trump's social media posts and Defense Secretary Pete Hegseth's follow-up declaration that Anthropic would be "immediately" designated a supply chain risk.

The Market Rejects Washington's Warning

Despite the Pentagon's blacklist, Anthropic's consumer business is booming. More than a million people are downloading Claude daily, making it the most popular AI app in multiple countries. The disconnect highlights a growing tension between government national security concerns and consumer demand for AI tools, with investors betting that commercial markets matter more than federal contracts for AI companies' bottom lines.

Senator Kirsten Gillibrand condemned the Pentagon's move as "shortsighted, self-destructive, and a gift to our adversaries," arguing that attacking American companies for maintaining safety standards is something "we expect from China, not the United States." The criticism underscores how the designation breaks precedent: until now, only foreign companies like China's Huawei had received this label, which typically signals potential espionage risks or foreign government control.

What Happens When the Safest AI Company Becomes the Riskiest

The designation creates an immediate problem for any company that built products using Anthropic's technology. Defense contractors must now audit every system to identify and remove Claude integrations, a process that could take months and cost millions. The ruling forces some to switch to AI systems they consider less reliable.

For Anthropic, the financial impact extends beyond lost government contracts. The company had positioned itself as the responsible AI alternative to competitors willing to work with fewer restrictions. The court challenge Amodei promised could determine whether American AI companies can set their own ethical boundaries or must accept military requirements to survive.
