Judge Blocks Pentagon’s Anthropic Supply Chain Designation


A US federal judge in San Francisco has granted Anthropic a significant legal victory, temporarily blocking the Pentagon’s designation of the AI company as a national security supply chain risk. The ruling also puts a hold on a presidential directive that barred federal agencies from using Anthropic’s chatbot, Claude.

Court Halts Pentagon’s “Supply Chain Risk” Designation

In an order issued on Thursday, Judge Rita Lin of the US District Court for the Northern District of California granted a preliminary injunction against the Department of Defense. The judge found that the Pentagon’s actions, supported by a February executive order from President Donald Trump, appeared “arbitrary, capricious, [and] an abuse of discretion.”

“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government,” Judge Lin wrote in her March 26 ruling, which followed a 90-minute hearing on March 24.

The core of the dispute traces to negotiations that broke down in February 2025. Anthropic, which had been in talks for a Pentagon contract to make Claude the first frontier AI model approved for classified networks, refused to agree to unrestricted military use. The company publicly stated its technology should not be employed for lethal autonomous weapons or mass domestic surveillance of Americans.

Following the collapsed talks, Defense Secretary Pete Hegseth designated Anthropic a “supply chain risk” under a relevant statute. On February 27, President Trump ordered a government-wide cessation of Anthropic product use, criticizing the company on Truth Social.

First Amendment Retaliation Claim Central to Case

Anthropic filed its lawsuit on March 9, arguing that Hegseth exceeded his authority. Judge Lin’s ruling strongly endorsed this view, stating that “Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation.”

The court’s decision preserves Anthropic’s market position during the litigation. According to a 2025 report from venture firm Menlo Ventures, Anthropic held a 32% share of the enterprise AI market at the start of the year, leading competitors like OpenAI (25%). A permanent ban would have likely caused that share to “plummet,” as the judge acknowledged.

Anthropic issued a statement following the ruling: “We are grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits.”

Screenshot from court ruling. Source: Courtlistener

Cointelegraph is committed to independent, transparent journalism. This news article is produced in accordance with Cointelegraph’s Editorial Policy and aims to provide accurate and timely information. Readers are encouraged to verify information independently. Read our Editorial Policy.
