
Anthropic Alleges Massive Illicit Data Extraction by Rival AI Labs
Anthropic, the developer of the Claude family of AI models, has publicly accused three prominent artificial intelligence laboratories—DeepSeek, Moonshot AI, and MiniMax—of conducting large-scale, unauthorized campaigns to extract capabilities from its systems. The company asserts these operations violated its terms of service and regional access restrictions, potentially undermining safety protocols and international export controls.

The Scale and Method of the Alleged Campaigns
According to a detailed statement from Anthropic, the three labs generated over 16 million total exchanges with Claude using approximately 24,000 fraudulent accounts. The company says it identified these campaigns through a combination of IP address correlations, metadata analysis, infrastructure indicators, and corroborating evidence from industry partners.
The core technique employed, as described by Anthropic, is “distillation.” Distillation is a standard machine learning process where a smaller, more efficient model is trained to mimic the outputs of a larger, more capable model. While frontier AI labs commonly use distillation internally to create optimized versions of their own systems, Anthropic alleges the competitors used it to illicitly replicate Claude’s advanced reasoning, coding, and tool-use capabilities.
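To make the mechanism concrete, here is a minimal, purely illustrative sketch of distillation: a small "student" model is fit to the soft outputs of a "teacher" function, never seeing ground-truth labels. The teacher here is a toy scoring function standing in for a large model's responses; nothing in this sketch reflects how any of the named labs or Anthropic actually operate.

```python
import math
import random

# Toy "teacher": a fixed scoring function over 2-D inputs that emits
# soft probabilities via a sigmoid. It stands in for a large model's outputs.
def teacher(x):
    return 1.0 / (1.0 + math.exp(-(3.0 * x[0] - 2.0 * x[1])))

# "Student": a 2-weight logistic model trained to mimic the teacher's
# soft outputs -- the essence of distillation: learn from the teacher's
# responses to queries, not from labeled data.
def distill(num_queries=2000, lr=0.5, seed=0):
    rng = random.Random(seed)
    w = [0.0, 0.0]
    for _ in range(num_queries):
        x = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
        target = teacher(x)  # each "query" elicits a soft label
        pred = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1])))
        err = pred - target  # gradient of cross-entropy w.r.t. the logit
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
    return w

w = distill()
# After enough queries, the student's weights recover the sign and rough
# scale of the teacher's (3, -2), so its predictions track the teacher's.
```

The point of the sketch is that the attacker needs only query access: volume of exchanges substitutes for access to the teacher's weights or training data, which is why the per-lab exchange counts below are the central evidence in Anthropic's account.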
Breakdown of Alleged Activity by Lab
Anthropic provided specific estimates for each lab’s activity:

- DeepSeek: Reportedly conducted more than 150,000 exchanges, with a focus on reasoning tasks and eliciting detailed, step-by-step explanations to generate training data for its own models.
- Moonshot AI: Allegedly carried out over 3.4 million exchanges, targeting agentic reasoning, coding proficiency, and computer use capabilities.
- MiniMax: Accounted for the largest share, with more than 13 million exchanges. Anthropic noted it detected this activity while it was ongoing and observed distinct traffic shifts following the release of new Claude models, suggesting the extraction was responsive to Anthropic’s updates.
Safety and Security Risks of Illicit Distillation
Anthropic warns that models built through such unauthorized distillation may lack the comprehensive safety guardrails embedded in Claude. These guardrails are designed to mitigate risks in sensitive areas like cyber operations, the generation of biological threats, and other forms of misuse. The company argues that by replicating restricted capabilities, these activities could circumvent U.S. export controls aimed at limiting the proliferation of advanced AI technology to certain foreign entities.
“When a model’s capabilities are extracted without its safety training and constitutional principles, you’re effectively creating an unsecured copy of powerful functionality,” a technical brief from Anthropic’s safety team states. This creates a dual risk: the original model’s protective measures are bypassed, and the derivative model’s development lacks the rigorous oversight applied by its creator.
Anthropic’s Countermeasures and Industry Call to Action
In response, Anthropic says it has implemented several defensive measures. These include deploying new behavioral detection systems to identify anomalous usage patterns, strengthening account verification protocols, and sharing intelligence with industry peers and relevant authorities. The company is also developing product and API-level safeguards aimed at reducing the effectiveness of large-scale distillation without degrading the experience for legitimate users.
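Anthropic has not disclosed how its behavioral detection works. As a purely hypothetical illustration of the general idea, one simple signal is flagging accounts whose request volumes are statistical outliers relative to the population; the function name and data shapes below are assumptions for the sketch, not Anthropic's API.

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical detector: flag accounts whose request counts are extreme
# outliers versus the population. Real systems would combine many such
# signals (IP correlations, metadata, infrastructure indicators).
def flag_anomalous_accounts(request_log, z_threshold=3.0):
    """request_log: iterable of (account_id, timestamp) events."""
    counts = Counter(account for account, *_ in request_log)
    values = list(counts.values())
    if len(values) < 2:
        return set()
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return set()
    return {acct for acct, c in counts.items() if (c - mu) / sigma > z_threshold}

# 50 ordinary accounts making 10 requests each, one account making 1,000.
log = [(f"user{i}", t) for i in range(50) for t in range(10)]
log += [("scraper", t) for t in range(1000)]
print(flag_anomalous_accounts(log))  # → {'scraper'}
```

A single-signal threshold like this is easy to evade by spreading traffic across many accounts, which is consistent with the roughly 24,000 fraudulent accounts Anthropic describes and explains why it pairs volume heuristics with account verification and cross-industry intelligence sharing.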
However, Anthropic stresses that solving this systemic issue requires broader collaboration. “Addressing large-scale distillation will require coordinated action across AI labs, cloud providers, and policymakers,” the company stated. This call to action highlights the growing tension between open research, commercial competition, and national security concerns in the rapidly evolving AI landscape.
This report is based on Anthropic’s public statements and technical analysis. DeepSeek, Moonshot AI, and MiniMax have not yet issued formal responses to the specific allegations as of the time of publication. The situation underscores the intensifying scrutiny on data practices and intellectual property in the frontier AI sector.


