Anthropic files lawsuit against US government over supply chain risk label


In a significant legal and ethical clash, AI safety company Anthropic has filed a lawsuit against the U.S. Department of Defense and other federal agencies. The suit challenges the Trump administration’s decision to classify the company as a “supply chain risk,” a designation that effectively bars it from doing business with defense contractors and threatens a major Pentagon contract.


The Origin of the Designation: A Clash Over AI Ethics

The conflict stems from collapsed negotiations over how Anthropic’s advanced AI models, including its Claude family, could be used by the government. According to reports from the Financial Times, Anthropic refused to agree to terms that would permit its technology to be used for mass surveillance of Americans or for the development of autonomous weapons. This principled stance prompted the government to halt adoption of its systems and initiate the supply chain risk designation, jeopardizing a potential deal with the Pentagon valued at up to $200 million.

The Pentagon’s position, as reported, is that Anthropic’s AI models must be permissible for “all lawful purposes,” a standard the company argues is overly broad and conflicts with its internal safety policies. Despite last-minute efforts by Anthropic CEO Dario Amodei to de-escalate the situation through direct talks with defense leaders, the formal blacklist process moved forward.

Legal Grounds and Business Impact

Anthropic’s lawsuit contends that the “supply chain risk” classification lacks a proper legal foundation and was applied arbitrarily. The company states that seeking judicial review is a necessary step to protect its business operations, existing customer relationships, and partnerships while it continues discussions with the government.


“Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners,” an Anthropic spokesperson explained to CNN.

Market Resilience Amidst Controversy

Despite the high-stakes government dispute, Anthropic’s direct-to-consumer business has demonstrated notable strength. Following news of the Pentagon contract termination, the Claude mobile application briefly surpassed OpenAI’s ChatGPT to claim the number one spot in Apple’s App Store productivity rankings, a first for the company. By early March, Anthropic reported that daily sign-ups for Claude had grown to more than one million.

This consumer momentum appears to have reassured key cloud partners. Google confirmed it would continue providing Anthropic’s AI technology to its cloud customers for non-defense purposes. Similarly, Microsoft and Amazon issued statements affirming their intent to offer Anthropic’s services to clients outside the scope of defense work, insulating the company’s primary commercial revenue streams from the federal designation.

The case highlights the growing tension between AI developers’ ethical guardrails and government demands for unfettered technological access, setting a potential precedent for how national security policy interacts with corporate AI safety governance.

Disclosure: This article was edited by Vivian Nguyen. For more information on how we create and review content, see our Editorial Policy.
