Anthropic wins early court fight over Pentagon blacklist and Trump ban


Anthropic Secures Preliminary Injunction Against Trump Administration’s Blacklist

In a significant legal development for the AI industry, Anthropic, the developer of the Claude large language model, has obtained a preliminary injunction from a federal court in San Francisco. The ruling temporarily blocks the Trump administration from enforcing directives that blacklisted the company and restricted federal agencies from using its technology.
Judge Cites Likely First Amendment Violation

U.S. District Judge Rita Lin granted the AI startup’s request, finding that Anthropic demonstrated a likelihood of success on the merits of its core claims. According to Reuters, Judge Lin wrote that the government’s actions appeared “more punitive than security-driven.” She explicitly stated that punishing Anthropic for publicly highlighting the government’s contracting position likely constitutes illegal First Amendment retaliation.

The seven-day stay on the injunction provides the government a window to appeal the decision to the Ninth Circuit Court of Appeals. The order specifically bars the administration from implementing President Donald Trump’s February directive halting federal use of Anthropic’s technology and from advancing the Pentagon’s designation of the company as a national security supply chain risk.

Origins of the Conflict: Ethical Boundaries vs. Defense Demands

The dispute stems from negotiations between Anthropic and the Department of Defense. The company refused to remove core safety restrictions from Claude for Pentagon use, drawing a firm line against applications involving fully autonomous weapons without human supervision or mass surveillance of Americans. While open to broader collaboration, Anthropic maintained these ethical boundaries.
In late February, President Trump responded by ordering a federal procurement freeze on Anthropic’s technology. Separately, Defense Secretary Pete Hegseth designated the company a “supply chain risk,” a label that could effectively ban defense contractors from using Claude in military projects. Anthropic argued this was an unprecedented public use of such a designation against a U.S.-based AI firm.

High Stakes for AI and National Security

The case carries substantial implications. Anthropic had become a major AI vendor for the U.S. government, holding a $200 million Pentagon contract and having already deployed its models on classified Defense Department networks prior to the breakdown in negotiations. The administration used separate legal authorities for the Pentagon blacklist and the wider federal procurement ban, forcing Anthropic to litigate on multiple fronts. A related case concerning civilian agency contracts continues in Washington, D.C.

Legal experts note that this ruling highlights the tension between presidential authority over procurement and national security claims versus companies’ rights to refuse work on ethical grounds and to speak about government actions without fear of reprisal. The outcome may set a precedent for how emerging technologies are integrated into government work and the limits of executive power in regulating the AI sector.


Disclosure: This article was edited by Estefano Gomez. For more information on how we create and review content, see our Editorial Policy.
