Anthropic accuses three Chinese AI labs of abusing Claude to improve their own models
2026-02-23 19:29 by Daniela
Tags: Anthropic, Claude
Three Chinese artificial intelligence companies used Claude to improperly extract capabilities for improving their own models, the chatbot's creator Anthropic said in a blog post on Monday, while also making a case for export controls on chips. The announcement follows a memo by OpenAI earlier this month, in which the startup warned U.S. lawmakers that Chinese AI firm DeepSeek is targeting the ChatGPT maker and the nation's leading AI companies in order to replicate their models and use them for its own training.

DeepSeek, Moonshot and MiniMax generated more than 16 million interactions with Claude using roughly 24,000 fake accounts, in violation of Anthropic's terms of service and regional access restrictions, the company said. They used a technique called "distillation," which involves training a less capable model on the outputs of a stronger one, Anthropic said. "These campaigns are growing in intensity and sophistication. The window to act is narrow, and the threat extends beyond any single company or region."

Anthropic warned that illicitly distilled models lack necessary safeguards, creating significant national security risks. If these models are open-sourced, the risk multiplies as capabilities spread freely beyond any single government's control.

Read more -here-
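As background, the "distillation" Anthropic describes trains a student model to imitate a stronger teacher's output distribution rather than learning from raw data. A minimal sketch of the core loss computation, using NumPy and hypothetical logit values (not any real model's outputs):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher temperature yields softer targets.
    z = np.asarray(logits, dtype=float) / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between the teacher's softened distribution and the
    # student's: the quantity a distillation run minimizes per example.
    p = softmax(teacher_logits, temperature)  # soft targets from the stronger model
    q = softmax(student_logits, temperature)  # student's current distribution
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher_logits = [4.0, 1.0, 0.2]   # hypothetical teacher outputs
close_student  = [3.9, 1.1, 0.3]   # student already near the teacher
cold_student   = [0.0, 0.0, 0.0]   # untrained student, uniform guesses

# The loss shrinks as the student's distribution approaches the
# teacher's, which is what drives the imitation during training.
print(distillation_loss(close_student, teacher_logits))
print(distillation_loss(cold_student, teacher_logits))
```

In practice this loss would be computed over millions of teacher responses, which is why the article's figure of 16 million interactions is consistent with a distillation campaign.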