The US State Department has ordered a global push to bring attention to what it describes as widespread efforts by Chinese companies, including AI startup DeepSeek, to steal intellectual property from U.S. artificial intelligence labs. The cable, dated Friday and sent to diplomatic and consular posts around the world, instructs diplomatic staff to speak to their foreign counterparts about “concerns over adversaries’ extraction and distillation of US AI models.”

For those unaware, distillation is the process of training smaller AI models using output from larger, more expensive ones, as part of an effort to lower the cost of training a powerful new AI tool.

The US government order follows a complaint in which Anthropic accused three prominent Chinese AI companies of using its Claude chatbot on a massive scale to secretly train rival models. In a blog post, San Francisco–based Anthropic alleged that Chinese labs DeepSeek, Moonshot AI, and MiniMax violated corporate law by interacting with Claude, its market-reshaping vibe-coding tool.

Incidentally, among the companies that Anthropic and the US government have accused is Moonshot AI, whose CEO and founder Zhilin Yang took the stage at GTC 2026, Nvidia’s biggest annual event of the year.
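To make the distillation technique at the center of the dispute concrete, here is a minimal sketch of the core training signal: a small "student" model is scored on how closely its output distribution matches a large "teacher" model's temperature-softened outputs. All names, numbers, and the temperature value below are illustrative assumptions, not details from any lab's actual pipeline.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, optionally softened by a temperature."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened distribution and the student's.

    Minimizing this loss pushes the student to imitate the teacher's outputs,
    which is the essence of knowledge distillation.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's current predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Toy logits over three classes (purely illustrative numbers).
teacher = np.array([4.0, 1.0, 0.5])
student_close = np.array([3.8, 1.1, 0.4])  # roughly mimics the teacher
student_far = np.array([0.2, 3.0, 1.0])    # disagrees with the teacher

print(distillation_loss(teacher, student_close))
print(distillation_loss(teacher, student_far))
```

A student whose outputs track the teacher's incurs a much smaller loss, so repeatedly querying a large model and training on its responses lets a cheaper model absorb some of its capabilities, which is why the cable frames large-scale unauthorized querying as capability extraction.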
What Anthropic’s complaint said about Moonshot AI
In its blog post, Anthropic said: “We have identified industrial-scale campaigns by three AI laboratories — DeepSeek, Moonshot, and MiniMax — to illicitly extract Claude’s capabilities to improve their own models. These labs generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, in violation of our terms of service and regional access restrictions.”

The blog post also went on to describe the technique that Moonshot AI used. “The three distillation campaigns detailed below followed a similar playbook, using fraudulent accounts and proxy services to access Claude at scale while evading detection. The volume, structure, and focus of the prompts were distinct from normal usage patterns, reflecting deliberate capability extraction rather than legitimate use,” it said.

On Moonshot AI, the blog post gave the following details:

Scale: Over 3.4 million exchanges

The operation targeted:
* Agentic reasoning and tool use
* Coding and data analysis
* Computer-use agent development
* Computer vision

“Moonshot (Kimi models) employed hundreds of fraudulent accounts spanning multiple access pathways. Varied account types made the campaign harder to detect as a coordinated operation. We attributed the campaign through request metadata, which matched the public profiles of senior Moonshot staff. In a later phase, Moonshot used a more targeted approach, attempting to extract and reconstruct Claude’s reasoning traces.”
What the US State Department cable said
According to an exclusive Reuters report, the State Department cable said its purpose was to “warn of the risks of using AI models distilled from U.S. proprietary AI models, and lay the groundwork for potential follow-up and outreach by the U.S. government.” It also mentioned Chinese AI firms Moonshot AI and MiniMax.

The cable said that “AI models developed from surreptitious, unauthorized distillation campaigns enable foreign actors to release products that appear to perform comparably on select benchmarks at a fraction of the cost but do not replicate the full performance of the original system.” It added that the campaigns also “deliberately strip security protocols from the resulting models and undo mechanisms that ensure those AI models are ideologically neutral and truth-seeking.”