Anthropic identified industrial-scale distillation campaigns by three Chinese AI labs, DeepSeek, Moonshot, and MiniMax, which used over 24,000 fraudulent accounts to extract capabilities from its Claude model.
The labs conducted more than 16 million exchanges with Claude through these accounts, violating Anthropic’s terms of service and regional access restrictions, according to the company’s February 2026 announcement.
How distillation enables illicit model copying
Distillation involves training a smaller model on the outputs of a larger one, a legitimate practice when used internally but prohibited when targeting competitors’ models without authorization.
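The mechanism can be sketched as a minimal soft-target distillation loss: the student model is trained to match the teacher's full output distribution rather than a single correct label. This is an illustrative, self-contained sketch of the general technique only; the function names, temperature value, and logits below are hypothetical and do not represent Anthropic's systems or any lab's actual pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution, softened by temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened outputs and the student's.

    Minimizing this pushes the student to mimic the teacher's entire output
    distribution, not just its top answer -- the core idea of distillation.
    """
    p = softmax(teacher_logits, temperature)  # large "teacher" model's outputs
    q = softmax(student_logits, temperature)  # smaller "student" being trained
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical logits for one training example (illustrative values only).
teacher = [4.0, 1.0, 0.2]
student = [2.5, 1.5, 0.5]

loss = distillation_loss(teacher, student)
# A student whose outputs exactly match the teacher's incurs zero loss.
assert distillation_loss(teacher, teacher) == 0.0
```

In a real training loop this loss would be computed over millions of teacher responses, which is why the scale of the alleged extraction (16 million exchanges) matters: the more teacher outputs harvested, the more closely a student model can approximate the original.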
Anthropic warned that models built through illicit distillation lack necessary safeguards, increasing risks of misuse for bioweapon development or malicious cyber activities.
Why the White House memo escalates the dispute
A White House memo cited by the BBC corroborates Anthropic’s claims, framing the activity as mass AI theft by Chinese firms and aligning with earlier allegations from OpenAI about DeepSeek.

The memo adds weight to Anthropic’s argument that the campaigns raise national security concerns, since replicated models could shed the safety controls built into the original.
What is distillation in AI?
Distillation is a technique where a smaller AI model learns from the outputs of a larger, more capable model to achieve similar performance with fewer resources.
Why does Anthropic consider this a security risk?
Anthropic states that models created through illicit distillation are unlikely to retain the safety safeguards built into its Claude model, which are designed to prevent harmful uses like bioweapon development or cyberattacks.