Tech

Anthropic Says DeepSeek Fraudulently Used Claude

By IQ TIMES MEDIA · February 23, 2026 · 4 Mins Read


Anthropic says its Chinese competitors are stealing from the AI startup to gain an edge in the global AI race.

On Monday, Anthropic said three of China’s biggest AI labs, DeepSeek, MiniMax, and Moonshot AI, were “illicitly” using Claude “to improve their own models” through a process known as distillation.

“These campaigns are growing in intensity and sophistication,” Anthropic said as part of its lengthy statement on Monday. “The window to act is narrow, and the threat extends beyond any single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers, and the global AI community.”

Anthropic said the distillation efforts were “industrial-scale campaigns” that included roughly 24,000 fraudulent Claude accounts that generated over 16 million exchanges “in violation of our terms of service and regional access restrictions.”

Distillation is the process of training a less powerful model on the output of a more powerful model. The practice is a legitimate way that many US companies use to train their models for public release. Increasingly, major US companies are also stating that their Chinese competitors are improperly using the practice to steal their work.
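The mechanics of distillation can be illustrated with a toy sketch (a hypothetical setup for illustration only, not the pipeline of any company named in this article): a small “student” model is trained to imitate the output distribution of a larger “teacher” model, rather than learning from labeled data of its own.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; temperature > 1 softens the distribution."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Stand-in "teacher": a fixed linear model producing soft output distributions.
X = rng.normal(size=(256, 8))                  # queries sent to the teacher
W_teacher = rng.normal(size=(8, 4))
teacher_probs = softmax(X @ W_teacher, temperature=2.0)  # soft targets

# "Student": trained only on the teacher's outputs, never on ground-truth labels.
W_student = np.zeros((8, 4))
lr = 0.5
for _ in range(500):
    student_probs = softmax(X @ W_student, temperature=2.0)
    # Gradient of cross-entropy between teacher and student distributions.
    grad = X.T @ (student_probs - teacher_probs) / len(X)
    W_student -= lr * grad

# After training, the student's top predictions closely track the teacher's.
agreement = np.mean(
    student_probs.argmax(axis=1) == teacher_probs.argmax(axis=1)
)
```

The key point is that the student never needs the teacher’s weights or training data; a large enough volume of query/response pairs is sufficient, which is why access at the scale Anthropic describes (millions of exchanges) is what makes the technique effective.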


In January 2025, OpenAI said DeepSeek may have “inappropriately” used OpenAI’s outputs to train its models. Earlier this month, Google disclosed it had “identified an increase in model extraction attempts or ‘distillation attacks.’”

“Competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently,” Anthropic said on Monday.

Anthropic disclosed remarkable detail about the extent to which DeepSeek, MiniMax, and Moonshot AI “illicitly” used its systems. Claude is not available for commercial access in China, though Anthropic said the rival labs found workarounds.

Among the notable findings, Anthropic said DeepSeek sought to create “censorship-safe alternatives to policy-sensitive queries.” The company also said it detected MiniMax’s campaign “while it was still active,” giving it an in-depth look at what its competitor was doing.

“When we released a new model during MiniMax’s active campaign, they pivoted within 24 hours, redirecting nearly half their traffic to capture capabilities from our latest system,” Anthropic said.

Representatives for DeepSeek, MiniMax, and Moonshot AI did not immediately respond to Business Insider’s request for comment.

Beyond cheating in the AI race, Anthropic said improper distillation poses security risks because less rigorously trained models may lack proper safeguards, such as those that prevent the development of bioweapons.

In response to such distillation campaigns, Anthropic said it has built in “behavioral fingerprinting systems,” shares data with other AI companies on what to look out for, and continues to develop additional countermeasures.

Anthropic CEO Dario Amodei recently wrote that leading models are approaching the point where, without proper safeguards, they could help direct someone in building a bioweapon.

Amodei is also an outspoken advocate of US export controls, a topic that divides some leading tech CEOs. Nvidia CEO Jensen Huang has repeatedly said that restricting US companies, including his own, from selling advanced chips to China won’t curb China’s AI advancements.

“Distillation attacks therefore reinforce the rationale for export controls: restricted chip access limits both direct model training and the scale of illicit distillation,” Anthropic said.

Anthropic has also faced allegations of using copyrighted material to train its models. In January, the Washington Post reported new details about an endeavor at the company called Project Panama, which the company reportedly described as “our effort to destructively scan all the books in the world.” Last year, Anthropic settled a class-action lawsuit brought by the authors and publishers of some of the books for $1.5 billion. As part of the settlement, the company didn’t admit any wrongdoing.




