Tony Kim
Feb 23, 2026 18:32
Anthropic reveals DeepSeek, Moonshot, and MiniMax ran industrial-scale distillation attacks using 24,000 fake accounts to steal Claude AI capabilities.
Anthropic dropped a bombshell Tuesday, publicly naming three Chinese AI labs (DeepSeek, Moonshot, and MiniMax) as perpetrators of coordinated campaigns to steal Claude’s capabilities through over 16 million fraudulent API exchanges.
The attacks used roughly 24,000 fake accounts to circumvent Anthropic’s regional access restrictions and terms of service. One proxy network alone controlled more than 20,000 simultaneous fraudulent accounts, mixing distillation traffic with legitimate requests to evade detection.
The Numbers Tell the Story
MiniMax led the assault with 13 million exchanges targeting agentic coding and tool orchestration. Moonshot followed with 3.4 million exchanges focused on computer-use agent development and reasoning capabilities. DeepSeek’s campaign, while smaller at 150,000 exchanges, employed notably sophisticated techniques, including prompts designed to make Claude articulate its internal reasoning step by step, essentially generating chain-of-thought training data on demand.
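Anthropic’s report doesn’t include the actual prompts, but a minimal, hypothetical sketch of the pattern might look like the following. The template wording, the injected `call_model` stub, and the JSONL output format are all illustrative assumptions, not details from the report.

```python
import json

# Hypothetical wrapper illustrating the reported technique: an ordinary task
# is framed so the teacher model spells out its reasoning, making every
# response usable as chain-of-thought training data.
COT_TEMPLATE = (
    "Solve the following task. Before giving your final answer, "
    "explain your reasoning step by step.\n\nTask: {task}"
)

def harvest_pair(task: str, call_model) -> dict:
    """Turn one exchange into a (prompt, completion) training pair.

    `call_model` is a stand-in for whatever client an operator would use;
    it is a placeholder here, not a real SDK call.
    """
    response = call_model(COT_TEMPLATE.format(task=task))
    return {"prompt": task, "completion": response}

def build_dataset(tasks, call_model, path="distilled.jsonl"):
    # Accumulate pairs in the JSONL layout commonly used for fine-tuning.
    with open(path, "w") as f:
        for task in tasks:
            f.write(json.dumps(harvest_pair(task, call_model)) + "\n")
```

Run at the scale Anthropic describes, a loop like this would turn millions of API exchanges into a ready-made supervised fine-tuning corpus.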
Anthropic traced several DeepSeek accounts directly to specific researchers at the lab through request metadata analysis.
Why This Matters Beyond Corporate Espionage
The timing here isn’t coincidental. OpenAI publicly accused DeepSeek of distilling ChatGPT just three days earlier, on February 21. Google’s Threat Intelligence Group flagged increased distillation activity on February 16, including a campaign that used over 100,000 prompts to replicate Gemini’s reasoning abilities.
What makes this particularly concerning? Anthropic argues these attacks undermine U.S. export controls on advanced chips. Foreign labs can effectively bypass innovation requirements by extracting capabilities from American models, and they need those restricted chips to run distillation at scale anyway.
“Illicitly distilled models lack necessary safeguards,” Anthropic warned, noting that stripped-out protections could enable “offensive cyber operations, disinformation campaigns, and mass surveillance” by authoritarian governments.
The Hydra Problem
Anthropic described the infrastructure enabling these attacks as “hydra cluster” architectures: sprawling networks with no single point of failure. Ban one account and another spawns immediately. The proxy services reselling Claude access made detection exponentially harder by distributing traffic across Anthropic’s API and third-party cloud platforms simultaneously.
When Anthropic released a new Claude model during MiniMax’s active campaign, the lab pivoted within 24 hours, redirecting nearly half of its traffic to capture the newest capabilities. That kind of operational agility suggests these aren’t opportunistic attacks but sustained, well-resourced operations.
Anthropic’s Countermeasures
The company outlined several defensive measures: behavioral fingerprinting techniques to detect distillation patterns, strengthened verification for educational and startup accounts (the most commonly exploited pathways), and model-level safeguards designed to degrade output quality for illicit extraction without affecting legitimate users.
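Anthropic hasn’t published how its fingerprinting works. Purely as a sketch of the idea, a toy scorer might combine a couple of traffic features like the ones below; the marker list, the 40-character prefix window, and the equal feature weights are arbitrary assumptions, not Anthropic’s method.

```python
from collections import Counter
from dataclasses import dataclass

# Phrases that tend to show up in reasoning-extraction prompts. Purely
# illustrative; a real fingerprint would use far richer features.
COT_MARKERS = ("step by step", "explain your reasoning", "think aloud")

@dataclass
class AccountTraffic:
    prompts: list[str]  # raw prompt texts observed for one account

def distillation_score(traffic: AccountTraffic) -> float:
    """Heuristic score in [0, 1]; higher means more distillation-like."""
    if not traffic.prompts:
        return 0.0
    n = len(traffic.prompts)
    # Feature 1: share of prompts carrying reasoning-extraction markers.
    cot_share = sum(
        any(m in p.lower() for m in COT_MARKERS) for p in traffic.prompts
    ) / n
    # Feature 2: template reuse. Distillation runs repeat one scaffold with
    # only the task swapped out, so a single prefix dominates the traffic.
    reuse = Counter(p[:40] for p in traffic.prompts).most_common(1)[0][1] / n
    return 0.5 * cot_share + 0.5 * reuse
```

An account whose traffic is dominated by one scaffold full of reasoning-extraction phrasing scores near 1.0, while ordinary mixed usage scores low; a production system would presumably also layer in volume, timing, and cross-account correlation to link thousands of proxy accounts to a single operation.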
Anthropic is sharing technical indicators with other AI labs, cloud providers, and government authorities. The message is clear: this requires industry-wide coordination.
For investors tracking AI infrastructure plays, this escalation adds another variable to the competitive landscape. Labs that can’t defend their models risk watching their R&D investments walk out the door, 16 million queries at a time.
Image source: Shutterstock