Elon Musk just made things very uncomfortable for Anthropic

Anthropic spent Monday positioning itself as a victim. By nightfall, Elon Musk had flipped the script entirely.

The Amazon-backed artificial intelligence company behind Claude publicly accused three Chinese labs of orchestrating a massive, coordinated operation to steal its model’s capabilities. Hours later, Musk fired back on X with a blunt counter-accusation that sent the story in a very different direction.

“Anthropic is guilty of stealing training data at massive scale and has had to pay multi-billion dollar settlements for their theft. This is just a fact,” Musk posted Monday, according to Benzinga.

What Anthropic accused the Chinese labs of doing

Anthropic’s original claim was striking. In a detailed blog post Monday, the company said DeepSeek, Moonshot AI and MiniMax conducted what it called “industrial-scale distillation attacks” on its Claude models.

The three labs allegedly created more than 24,000 fraudulent accounts and generated over 16 million interactions with Claude to extract its most advanced capabilities. Anthropic said the campaigns specifically targeted Claude’s edge in agentic reasoning, tool use, and complex coding, the areas where it commands a meaningful competitive advantage.

The operation was not clumsy. According to CNBC, the labs used commercial proxy services running sprawling networks of fraudulent accounts to evade detection, mixing distillation traffic with normal customer requests to blend in. When Anthropic released a new model during MiniMax’s active campaign, the lab pivoted within 24 hours, redirecting nearly half its traffic to siphon the latest system’s capabilities.


Anthropic called the threat urgent. “These campaigns are growing in intensity and sophistication,” the company wrote. “The window to act is narrow.”

How each Chinese lab ran its campaign against Claude

  • DeepSeek generated over 150,000 exchanges, using synchronized traffic patterns and shared payment methods to extract foundational reasoning capabilities and censorship-safe alternatives to politically sensitive queries.
  • Moonshot AI ran more than 3.4 million exchanges targeting agentic reasoning, tool use, coding, and computer vision. It used varied account types specifically to make the operation harder to detect as coordinated.
  • MiniMax was the largest operation by far, with more than 13 million exchanges aimed squarely at agentic coding and tool use. It redirected nearly half its traffic to Claude’s newest model within 24 hours of its release.
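The campaigns described above all follow the same basic pattern, known as knowledge distillation: query a stronger "teacher" model at scale, harvest its responses, and use those prompt-response pairs as supervised fine-tuning data for a cheaper "student" model. The sketch below is a minimal, hypothetical illustration of that pipeline; the `teacher_model` function stands in for what would, in the alleged campaigns, have been paid API calls to Claude, and none of the names here come from Anthropic's report.

```python
# Minimal sketch of a distillation pipeline, for illustration only.
# In the campaigns Anthropic describes, the "teacher" role was played
# by Claude's commercial API; here it is a hypothetical stand-in.

def teacher_model(prompt: str) -> str:
    # Stand-in for a frontier-model API call.
    return f"detailed step-by-step answer to: {prompt}"

def collect_distillation_data(prompts):
    """Harvest (prompt, completion) pairs to later fine-tune a student model."""
    return [{"prompt": p, "completion": teacher_model(p)} for p in prompts]

# The harvested pairs become supervised fine-tuning data, transferring the
# teacher's behavior (reasoning style, tool use, coding) without ever
# accessing its weights.
dataset = collect_distillation_data([
    "Write a Python function that merges two sorted lists",
    "Plan the steps for a multi-tool research agent",
])
print(len(dataset))
```

Scaled up to millions of carefully targeted prompts, this is why Anthropic describes the activity as "industrial-scale": the expensive part of a frontier model, its learned behavior, leaks out through ordinary-looking API traffic.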

How Musk turned the accusations back on Anthropic

Musk’s response landed fast. Alongside his post, he shared screenshots of X Community Notes alleging that Anthropic settled a $1.5 billion lawsuit in September 2025 over claims it pirated more than seven million books from shadow libraries to train Claude.

The notes also flagged that the company currently faces a separate $3 billion lawsuit from music publishers over alleged copyright infringement involving more than 20,000 songs.


“They built their models on stolen creative work, then complain when others extract their outputs,” one Community Note read, according to Benzinga. Anthropic has not publicly confirmed the settlement figures and did not immediately respond to requests for comment.

Musk also called the company “super smug, sanctimonious and hypocritical” in a follow-up post, a pointed jab at Anthropic’s well-cultivated safety-first brand identity.

Why the hypocrisy argument is gaining traction

The criticism Musk leveled is not entirely new. OpenAI, Google, Meta and most major artificial intelligence companies face ongoing lawsuits from authors, news publishers, and musicians who argue their copyrighted work was used without permission to train AI systems.

According to TechCrunch, Anthropic’s accusations also arrive at a delicate moment, as the Trump administration recently loosened export controls on advanced Nvidia chips to China, a policy Anthropic has actively lobbied against.

The legal lines around distillation remain genuinely blurry. Courts are still wrestling with whether AI model outputs qualify for copyright protection at all. The U.S. Copyright Office affirmed in January 2025 that copyright requires human authorship, which suggests Claude's machine-generated outputs may not be protectable, so Anthropic could face real challenges if it pursues legal action against the Chinese labs.

Musk’s own xAI venture has faced similar questions about how Grok was trained. The debate around data provenance touches every major lab in the industry, and that broad exposure is exactly what made his counter-punch so effective on Monday.

What this feud means for the AI industry

The stakes here go beyond a social media spat. Anthropic’s distillation claims, if proven, represent a significant threat to the economics of building premium AI models. If rivals can extract years of research and hundreds of millions in training costs through coordinated scraping, the competitive moat around frontier models shrinks considerably.

Anthropic is valued at $380 billion following its latest funding round, with Amazon as its anchor investor. xAI raised $6 billion last year. Both companies are racing toward the same destination, and the fight over data is now as fierce as the fight over compute.

Musk’s post landed the sharper punch on Monday. Anthropic came in as the accuser and left the news cycle defending itself. That shift in narrative is the real story, and it is one the entire AI industry will be watching carefully as copyright battles and data wars continue to escalate.
