AI

DeepSeek releases ‘sparse attention’ model that cuts API costs in half

By IQ Times Media | September 29, 2025


Researchers at DeepSeek on Monday released a new experimental model, V3.2-exp, designed to dramatically lower inference costs in long-context operations. DeepSeek announced the model in a post on Hugging Face and also linked an academic paper hosted on GitHub.

The most important feature of the new model is called DeepSeek Sparse Attention, an intricate system described in detail in the diagram below. In essence, the system uses a module called a “lightning indexer” to prioritize specific excerpts from the context window. After that, a separate “fine-grained token selection system” chooses specific tokens from within those excerpts to load into the module’s limited attention window. Taken together, these components allow the Sparse Attention models to operate over long stretches of context with comparatively small server loads.

[Figure: diagram of the DeepSeek Sparse Attention pipeline]
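The two-stage selection described above can be sketched in plain Python. This is an illustrative toy, not DeepSeek's implementation: the function names (`sparse_select`, `chunk_scores`), the chunking scheme, and the use of a simple dot-product score for the indexer are all assumptions made for clarity.

```python
# Toy sketch of two-stage sparse attention selection (not DeepSeek's code).
# Stage 1: a cheap "indexer" scores fixed-size chunks of the context.
# Stage 2: tokens inside the top-scoring chunks are ranked individually,
# and only the best few are kept for the expensive attention computation.

import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def chunk_scores(query, keys, chunk_size):
    """Score each chunk by the max query-key dot product inside it."""
    scores = []
    for start in range(0, len(keys), chunk_size):
        chunk = keys[start:start + chunk_size]
        scores.append(max(dot(query, k) for k in chunk))
    return scores

def sparse_select(query, keys, chunk_size=4, top_chunks=2, top_tokens=4):
    """Return indices of the tokens that survive both selection stages."""
    # Stage 1: keep only the highest-scoring chunks (the "indexer" role).
    scores = chunk_scores(query, keys, chunk_size)
    best_chunks = sorted(range(len(scores)), key=lambda c: -scores[c])[:top_chunks]
    # Stage 2: rank tokens inside the surviving chunks, keep the top few.
    candidates = []
    for c in best_chunks:
        for i in range(c * chunk_size, min((c + 1) * chunk_size, len(keys))):
            candidates.append((dot(query, keys[i]), i))
    candidates.sort(reverse=True)
    return sorted(i for _, i in candidates[:top_tokens])

random.seed(0)
dim, n = 8, 32
keys = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
query = [random.gauss(0, 1) for _ in range(dim)]
selected = sparse_select(query, keys)
print(f"attending to {len(selected)} of {n} tokens: {selected}")
```

The point of the design: the indexer only has to be cheap and roughly right, because the second stage refines its choices; full attention is then paid only over the small surviving set.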

For long-context operations, the benefits of the system are significant. Preliminary testing by DeepSeek found that the price of a simple API call could be reduced by as much as half in long-context situations. Further testing will be required to build a more robust assessment, but because the model is open-weight and freely available on Hugging Face, it won’t be long before third-party tests can assess the claims made in the paper.

DeepSeek’s new model is one of a string of recent breakthroughs tackling the problem of inference costs — essentially, the server costs of operating a pre-trained AI model, as distinct from the cost of training it. In DeepSeek’s case, the researchers were looking for ways to make the fundamental transformer architecture operate more efficiently — and finding that there are significant improvements to be made.
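The efficiency gain comes from a familiar asymptotic argument: full attention grows quadratically with context length, while attention restricted to a fixed budget of selected tokens grows only linearly. A back-of-the-envelope comparison (illustrative numbers; the token budget `k` is an assumption, not a DeepSeek figure):

```python
# Rough operation counts for full vs. sparse attention (illustrative only).

def full_attention_ops(n):
    return n * n          # every token attends to every other token

def sparse_attention_ops(n, k):
    return n * k          # every token attends to k selected tokens

k = 2_048                 # assumed fixed selection budget
for n in (4_096, 32_768, 131_072):
    ratio = full_attention_ops(n) / sparse_attention_ops(n, k)
    print(f"context {n:>7}: full/sparse ops ratio = {ratio:.0f}x")
```

The ratio is simply n/k, which is why the savings only become dramatic at long context lengths, matching DeepSeek's claim that the cost reduction shows up in long-context situations.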

Based in China, DeepSeek has been an unusual figure in the AI boom, particularly for those who view AI research as a nationalist struggle between the U.S. and China. The company made waves at the beginning of the year with its R1 model, trained using primarily reinforcement learning at a far lower cost than its American competitors. But the model has not sparked a wholesale revolution in AI training, as some predicted, and the company has receded from the spotlight in the months since.

The new “sparse attention” approach is unlikely to produce the same uproar as R1 — but it could still teach U.S. providers some much-needed tricks for keeping inference costs low.
