
Tech Giants Uphold Anthropic AI Access Amid Pentagon Designation
In a coordinated move that underscores the complex intersection of national security policy and commercial artificial intelligence, both Google and Microsoft have publicly affirmed their intent to continue offering Anthropic’s AI models to cloud customers, carving out a clear exclusion for U.S. Department of Defense work. The announcements follow the Defense Department’s formal designation of Anthropic as a supply chain risk, a move that has prompted an industry-wide review of AI partnerships.

Understanding the Supply Chain Risk Designation
The dispute stems from Anthropic’s refusal to agree to new terms, requested by the U.S. Department of Defense, governing how its AI systems may be used. Following the disagreement, President Donald Trump directed federal agencies to halt the use of Anthropic’s technology, and Defense Secretary Pete Hegseth subsequently announced that the Pentagon would phase out existing work with the company over six months. The formal “supply chain risk designation” is a specific legal determination that can trigger restrictions on federal procurement, though its direct impact on private-sector cloud platforms remains a matter of legal interpretation.
Google and Microsoft’s Stance: A Legal Review
Google clarified its position through a spokesperson on Friday. The company stated that the Defense Department’s determination “does not prevent the company from working with Anthropic on non-defense related projects.” Anthropic’s products, including the Claude models accessible through the Vertex AI platform on Google Cloud, will remain available to commercial and non-DoD public sector customers.
Microsoft issued a nearly identical statement one day earlier. A company representative confirmed that, after a review by its legal team, Microsoft concluded that Anthropic’s products “can remain available to customers other than the Department of War.” This parallel stance from two of the largest cloud infrastructure providers signals a unified industry front in separating commercial AI access from restricted federal defense contracts.

Deep-Rooted Ties: Investment and Infrastructure
Google’s commitment to Anthropic extends far beyond a simple cloud hosting agreement. The search giant is a major financial backer, having committed an additional $1 billion investment in January 2025, building on a previous $2 billion stake. Furthermore, Anthropic relies heavily on Google’s infrastructure; it uses Google Cloud to train its models and, in a significant expansion of its partnership, gained access to up to one million of Google’s custom Tensor Processing Units (TPUs), specialized AI chips.
This deep integration makes an immediate, full severance commercially and technically challenging for both companies, highlighting the practical realities behind their public statements.
Industry Ripple Effects and Legal Challenge
The Pentagon’s action has already prompted some defense technology firms to instruct employees to cease using Claude models and migrate to alternatives like those from OpenAI. However, the decisions by Google, Microsoft, and reportedly Amazon (which confirmed a similar policy on Friday) to maintain access for non-defense customers create a bifurcated market.
Anthropic’s leadership is not accepting the designation passively. CEO Dario Amodei has stated the company plans to challenge the government’s supply chain risk determination in court, setting the stage for a potential legal battle over the scope and application of the ruling.
Navigating a New AI Regulatory Landscape
This episode arrives amid broader federal efforts to establish guardrails for AI. The Trump administration’s 2025 executive order on AI, for instance, mandated the development of AI safety standards and emphasized “American leadership” in the technology. The Defense Department’s action, while specific to one company, serves as a test case for how national security concerns might reshape commercial AI ecosystems and cloud partnerships moving forward.
For now, the practical outcome is clear: the world’s leading cloud providers will keep Anthropic’s powerful AI models in their catalogs, but with a firewall explicitly drawn around the Pentagon. The legal and commercial fallout from this separation will be closely watched by every stakeholder in the AI supply chain.
Disclosure: This article was edited by Estefano Gomez.


