IQ Times Media – Smart News for a Smarter You
AI

Tensormesh raises $4.5M to squeeze more inference out of AI server loads

By IQ TIMES MEDIA | October 23, 2025 | 3 Mins Read


With the AI infrastructure push reaching staggering proportions, there's more pressure than ever on AI companies to squeeze as much inference as possible out of the GPUs they already have. And for researchers with expertise in a particular technique, it's a great time to raise funding.

That’s part of the driving force behind Tensormesh, launching out of stealth this week with $4.5 million in seed funding. The investment was led by Laude Ventures, with additional angel funding from database pioneer Michael Franklin.

Tensormesh is using the money to build a commercial version of the open source LMCache utility, launched and maintained by Tensormesh co-founder Yihua Cheng. Used well, LMCache can cut inference costs by as much as 10x, a capability that has made it a staple in open source deployments and drawn integrations from heavy hitters like Google and Nvidia. Now Tensormesh is planning to parlay that academic reputation into a viable business.

The core of the product is the key-value cache (or KV cache), a memory structure that stores the attention keys and values already computed for each token of an input, so the model can process related inputs without starting from scratch. In traditional architectures, the KV cache is discarded at the end of each query — but Tensormesh co-founder and CEO Junchen Jiang argues that this is an enormous source of inefficiency.
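The reuse idea can be sketched in a few lines of Python. This is an illustrative toy, not Tensormesh's or LMCache's actual code: strings stand in for real attention tensors, and the cache simply looks for the longest previously seen prefix so only new tokens need fresh computation.

```python
import hashlib

class KVCacheStore:
    """Toy illustration (not Tensormesh's actual code) of reusing a
    KV cache across queries that share a common prefix."""

    def __init__(self):
        self._store = {}  # prefix hash -> cached KV entries, one per token

    @staticmethod
    def _key(tokens):
        return hashlib.sha256(" ".join(tokens).encode()).hexdigest()

    def lookup(self, tokens):
        """Return the longest cached prefix of `tokens` and its KV entries."""
        for end in range(len(tokens), 0, -1):
            cached = self._store.get(self._key(tokens[:end]))
            if cached is not None:
                return tokens[:end], cached
        return [], []

    def save(self, tokens, kv_entries):
        self._store[self._key(tokens)] = kv_entries


def process(tokens, cache):
    """'Run' the model: compute stand-in KV entries only for tokens
    that aren't covered by a cached prefix, then cache the result."""
    prefix, kv = cache.lookup(tokens)
    new = tokens[len(prefix):]
    kv = kv + [f"kv({t})" for t in new]  # stand-in for real attention math
    cache.save(tokens, kv)
    return len(new)  # how many tokens actually had to be computed


cache = KVCacheStore()
chat = ["system:", "you", "are", "helpful"]
first = process(chat + ["user:", "hi"], cache)                   # all 6 tokens computed
second = process(chat + ["user:", "hi", "bot:", "hey"], cache)   # only the 2 new tokens
```

Without the cache, the second query would recompute all eight tokens; with it, only the two new ones are processed, which is the inefficiency Jiang is describing.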

“It’s like having a very smart analyst reading all the data, but they forget what they have learned after each question,” says Jiang.

Instead of discarding that cache, Tensormesh’s systems hold on to it, allowing it to be redeployed when the model executes a similar process in a separate query. Because GPU memory is so precious, this can mean spreading data across several different storage layers, but the reward is significantly more inference power for the same server load.
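A minimal sketch of that tiering idea (again illustrative only, not LMCache's API): keep hot entries in a small fast tier standing in for GPU memory, and spill least-recently-used entries to a larger slow tier standing in for CPU RAM or disk, rather than discarding them.

```python
from collections import OrderedDict

class TieredKVCache:
    """Illustrative two-tier cache: a small fast tier (think GPU memory)
    spills least-recently-used entries to a larger slow tier (CPU/disk)."""

    def __init__(self, fast_capacity):
        self.fast = OrderedDict()  # insertion order tracks recency
        self.slow = {}
        self.fast_capacity = fast_capacity

    def put(self, key, kv):
        self.fast[key] = kv
        self.fast.move_to_end(key)
        while len(self.fast) > self.fast_capacity:
            old_key, old_kv = self.fast.popitem(last=False)  # evict LRU entry
            self.slow[old_key] = old_kv                      # spill, don't discard

    def get(self, key):
        if key in self.fast:
            self.fast.move_to_end(key)
            return self.fast[key], "fast"
        if key in self.slow:
            self.put(key, self.slow.pop(key))  # promote back to the fast tier
            return self.fast[key], "slow"
        return None, "miss"


cache = TieredKVCache(fast_capacity=2)
cache.put("q1", "kv1")
cache.put("q2", "kv2")
cache.put("q3", "kv3")  # q1 is spilled to the slow tier, not thrown away
```

A later query matching "q1" still gets a cache hit, just from slower storage: exactly the trade Jiang describes, where a slower fetch still beats recomputing the cache from scratch.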

The change is particularly powerful for chat interfaces, since models need to continually refer back to the growing chat log as the conversation progresses. Agentic systems have a similar issue, with a growing log of actions and goals.

In theory, these are changes AI companies could make on their own, but the technical complexity makes it a daunting task. Given the Tensormesh team's deep research into the process and the intricacy of the details involved, the company is betting there will be plenty of demand for an out-of-the-box product.

“Keeping the KV cache in a secondary storage system and reused efficiently without slowing the whole system down is a very challenging problem,” says Jiang. “We’ve seen people hire 20 engineers and spend three or four months to build such a system. Or they can use our product and do it very efficiently.”



