Council News

Trump Bans Anthropic From Government, OpenAI Seizes Pentagon Contract

Policy & Law · 36 sources · Feb 28

The Ban That Changed Everything

If you use federal services—from visa processing to military intelligence—your government just switched AI suppliers. On Friday, President Trump ordered every federal agency to immediately stop using Anthropic's technology after the AI company refused the Pentagon's demand to remove safety guardrails. Hours later, OpenAI announced it had reached an agreement to provide technology for classified networks, though no contract has been signed. Some observers say the speed of that pivot reveals what this fight was really about: control over which company gets to power America's most sensitive operations.

Anthropic, the San Francisco startup behind the Claude AI model, had drawn a line in the sand. The Pentagon demanded it allow unrestricted military use of Claude. Anthropic refused, citing concerns that such unrestricted access could enable mass surveillance of Americans and autonomous weapons systems. The Pentagon countered that it operates within legal bounds and has procedures to determine appropriate use. CEO Dario Amodei said the company "cannot in good conscience accede" to those terms. Defense Secretary Pete Hegseth responded by designating Anthropic a "supply chain risk"—a label historically used for foreign adversaries like China's Huawei. Trump then issued an order for federal agencies to stop using Anthropic's technology.

The designation matters because, according to Anthropic, it reaches far beyond the Pentagon. The company argues it bars any military contractor from using Claude for any purpose, even commercial work unrelated to defense, a scope Anthropic contends the governing statute, 10 U.S.C. 3252, does not authorize. The company has vowed to sue.

Why the Pentagon Wanted Unrestricted Access

The military's position was straightforward: once the government buys a tool, the military decides how to use it, not the company that made it. Pentagon officials, including Undersecretary Emil Michael, argued they needed the ability to deploy AI for "all lawful purposes" without having to negotiate each use case with a private company. They contended there are gray areas around what constitutes mass surveillance or autonomous weapons, and that it's unworkable to litigate individual cases.

But Anthropic and others in Silicon Valley saw it differently. The company worried that "lawful" doesn't mean safe. Anthropic was concerned the Pentagon could use AI to legally collect and analyze vast amounts of publicly available data—geolocation, web browsing, financial information from data brokers—to find patterns about Americans. Anthropic argued that such collection would amount to mass surveillance even if technically legal, and it said no.

Michael, who was steering the negotiations for the Pentagon, grew frustrated. He called Amodei a "liar" with a "God complex" who was "putting our nation's safety at risk." Trump echoed the criticism, accusing the company of poor judgment.

OpenAI Steps In—With the Same Red Lines

What happened next was extraordinary. Sam Altman, OpenAI's CEO, announced his company would draw the same red lines Anthropic had refused to abandon: no mass surveillance, no fully autonomous weapons. He wrote to staff that "regardless of how we got here, this is no longer just an issue between Anthropic and the Pentagon; this is an issue for the whole industry."

Then OpenAI struck a deal with the Pentagon. The difference was that Altman negotiated terms the Pentagon would accept. OpenAI would maintain its safety principles, including restrictions on mass surveillance and autonomous weapons, and the Pentagon agreed those principles align with law and policy, though sources indicate some uncertainty remained about whether the Pentagon would still seek certain data collection capabilities. OpenAI also wants researchers with security clearances to monitor how the military uses the technology, and it wants its models confined to cloud systems rather than edge deployments such as autonomous weapons. The Pentagon said yes.

Altman acknowledged the optics looked bad. "It may not 'look good' for us in the short term," he wrote. But OpenAI negotiated an agreement the Pentagon accepted, while Anthropic had refused the Pentagon's original demands for unrestricted access.

Silicon Valley Rallies, Then Fractures

The initial response from tech workers was unified. Hundreds of employees at Google and OpenAI signed a letter backing Anthropic's position, urging their own executives to resist Pentagon pressure. Senator Elizabeth Warren called the administration's tactics "extortion," while the administration characterized the action as necessary for national security. Democratic Representative Ro Khanna said "good for Anthropic" for refusing to bend.

But that solidarity fractured the moment OpenAI announced its deal. Critics and supporters split over whether OpenAI's approach was pragmatic dealmaking or a form of capitulation.

What Happens Next

Anthropic has six months to wind down its Pentagon operations. The military must find alternatives to Claude, which had been integrated into some classified systems and was used in recent operations including the capture of Nicolás Maduro. Palantir, the defense contractor that uses Claude for sensitive military work, will need a new supplier.

Elon Musk's xAI agreed to let the military use its Grok model for "all lawful purposes," but defense officials say it's not a like-for-like replacement for Claude. Google's Gemini is being discussed for classified systems, though hundreds of Google employees have signed the same letter backing Anthropic's position.

The real question is whether Anthropic's court challenge succeeds. The company argues the supply chain risk designation exceeds the Pentagon's legal authority under 10 U.S.C. 3252, which it reads as limiting such designations to Pentagon contracts alone. The Pentagon maintains it has the authority to protect national security by restricting how contractors use technology it deems a supply chain risk. If a judge agrees with Anthropic, the ruling could limit how far the government can reach into private companies' business decisions. If the Pentagon prevails, Anthropic loses not just its Pentagon contract, valued at up to $200 million, but access to any military contractor working on Pentagon business.

Sources (36)

Cross-referenced to ensure accuracy

NPR OpenAI announces Pentagon deal after Trump bans Anthropic
ABC News Trump orders US govt to cut ties with Anthropic; Hegseth declares supply chain 'risk'
CBS News Anthropic CEO on "retaliatory and punitive" Pentagon action
CBS News Trump orders federal agencies to stop using Anthropic's AI technology
CBS News Former Air Force Secretary Frank Kendall calls Trump's order to phase out Anthropic use "outrageous"
NBC News OpenAI strikes deal with Pentagon after Trump orders government to stop using Anthropic
Axios Pentagon approves OpenAI safety red lines after dumping Anthropic
Axios Trump moves to blacklist Anthropic AI from all government work
Axios Anthropic to take Trump's Pentagon to court over AI dispute
Axios Amazon strikes $50B OpenAI deal
Axios Sam Altman says OpenAI shares Anthropic's red lines in Pentagon fight
New York Times OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash
New York Times Trump Orders U.S. Agencies to Stop Using Anthropic AI Tech After Pentagon Standoff
New York Times Silicon Valley Rallies Behind Anthropic in A.I. Clash With Trump
The Hill Warren accuses Trump, Hegseth of trying ‘extort’ Anthropic into removing AI guardrails
The Hill Hegseth says Pentagon designating Anthropic as supply chain risk after Trump bans AI firm
The Hill Trump orders federal agencies to ‘immediately cease’ using Anthropic technology
The Hill Hundreds of Google, OpenAI employees back Anthropic in Pentagon fight
The Hill House Democrat: ‘Good for Anthropic’ in rejecting Pentagon demands
The Hill Altman says OpenAI agrees with Anthropic’s red lines in Pentagon dispute
The Hill Pentagon official: Anthropic CEO ‘has a God-complex’
Fox News Trump says he plans to order federal ban on Anthropic AI after company refuses Pentagon demands
BBC Trump orders government to stop using Anthropic in battle over AI use
The Guardian US Trump orders US agencies to stop use of Anthropic technology amid dispute over ethics of AI
France 24 Anthropic refuses to bend to Pentagon on AI safeguards
Deutsche Welle Trump bans AI firm Anthropic from federal agencies
Deutsche Welle Pentagon pressures Anthropic in escalating AI showdown
South China Morning Post Anthropic vows to sue US after Trump tells government to stop using firm’s AI
Times of India Trump 'bans' Anthropic, gives it same label that China’s Huawei has in US since year 2018
Times of India ‘Leftwing nut jobs’: Trump orders halt to Anthropic tech across US government
Times of India Anthropic to challenge Pentagon in court, hours after Trump orders ban on AI firm
Reason Anthropic CEO Refuses Pentagon Demands To Remove Safeguards on Military AI
PBS NewsHour Why the Trump administration is clashing with AI firm Anthropic
Bloomberg Pentagon Casts Cloud of Doubt Over Anthropic’s AI Business
Bloomberg Trump Tells US Agencies to Stop Using Anthropic's Tech
Bloomberg Anthropic’s Fight With Pentagon Over AI Widens Ahead of Deadline