IQ Times Media – Smart News for a Smarter You
AI

Guide Labs debuts a new kind of interpretable LLM

By IQ TIMES MEDIA · February 23, 2026 · 4 Mins Read


The challenge of wrangling a deep learning model is often understanding why it does what it does: whether it’s xAI’s repeated struggle sessions to fine-tune Grok’s odd politics, ChatGPT’s bouts of sycophancy, or run-of-the-mill hallucinations, plumbing through a neural network with billions of parameters isn’t easy.

Guide Labs, a San Francisco start-up founded by CEO Julius Adebayo and chief science officer Aya Abdelsalam Ismail, is offering an answer to that problem today. On Monday, the company open-sourced an 8 billion parameter LLM, Steerling-8B, trained with a new architecture designed to make its actions easily interpretable: Every token produced by the model can be traced back to its origins in the LLM’s training data.

That can be as simple as identifying the reference materials behind facts the model cites, or as complex as tracing how the model represents humor or gender.

“If I have a trillion ways to encode gender, and I encode it in 1 billion of the 1 trillion things that I have, you have to make sure you find all those 1 billion things that I’ve encoded, and then you have to be able to reliably turn that on, turn them off,” Adebayo told TechCrunch. “You can do it with current models, but it’s very fragile … It’s sort of one of the holy grail questions.”

Adebayo began this work while earning his PhD at MIT, co-authoring a widely cited 2018 paper showing that existing methods of understanding deep learning models were not reliable. That work ultimately led to a new way of building LLMs: developers insert a concept layer in the model that buckets data into traceable categories. This requires more up-front data annotation, but by using other AI models to help with the labeling, the team was able to train Steerling-8B as its largest proof of concept yet.

“The kind of interpretability people do is…neuroscience on a model, and we flip that,” Adebayo said. “What we do is actually engineer the model from the ground up so that you don’t need to do neuroscience.”
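The architecture Adebayo describes is in the spirit of a concept-bottleneck design. As a purely illustrative sketch (the concept names, weights, and shapes below are invented, not Guide Labs' actual architecture), a concept layer projects a hidden state onto a few human-named axes, and the output head reads only those scores, so every prediction is attributable to named concepts:

```python
# Illustrative concept-bottleneck sketch. All names and weights are
# invented for the example; this is not Guide Labs' architecture.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

CONCEPTS = ["finance", "gender", "humor"]   # human-annotated, traceable axes

# One learned projection vector per concept, one output weight per score.
concept_proj = {
    "finance": [0.9, 0.1, 0.0],
    "gender":  [0.0, 0.8, 0.2],
    "humor":   [0.1, 0.0, 0.7],
}
output_weights = {"finance": 1.0, "gender": 0.5, "humor": -0.3}

def concept_layer(hidden):
    """Bucket a hidden state into named, traceable concept scores."""
    return {c: dot(concept_proj[c], hidden) for c in CONCEPTS}

def output_head(scores):
    """The prediction sees only concept scores, never the raw hidden state."""
    return sum(output_weights[c] * s for c, s in scores.items())

scores = concept_layer([1.0, 0.5, 0.2])
# Each score lives on a named axis, so any output can be traced back to
# the concepts (and hence the annotated training data) behind it.
print(scores, output_head(scores))
```

Because the output head never touches the raw hidden state, the attribution falls out of the architecture itself rather than from after-the-fact "neuroscience."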

Image Credits: Guide Labs

One concern with this approach is that it might eliminate some of the emergent behavior that makes LLMs so intriguing: their ability to generalize to things they haven’t been trained on. Adebayo says that still happens in his company’s model: his team tracks what it calls “discovered concepts,” ones the model picked up on its own, like quantum computing.

Adebayo argues this interpretable architecture will be something everyone needs. For consumer-facing LLMs, these techniques should allow model builders to do things like block the use of copyrighted materials, or better control outputs around subjects like violence or drug abuse. Regulated industries will require more controllable LLMs, for example in finance, where a model evaluating loan applicants needs to consider things like financial records but not race. There’s also a need for interpretability in scientific work, another area where Guide Labs has developed technology. Protein folding has been a big success for deep learning models, but scientists need more insight into why their software figured out promising combinations.
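The loan example hints at why this architecture makes control tractable: if the output head reads only named concept scores, excluding a factor like race means zeroing one known score, not hunting down a billion scattered encodings. A minimal, hypothetical sketch (the concepts, weights, and applicant numbers are invented):

```python
# Hypothetical sketch of "turning a concept off" in a concept-bottleneck
# model. Concepts, weights, and values are invented for illustration.

output_weights = {"income": 1.2, "credit_history": 0.8, "race": 0.4}

def predict(scores, disabled=()):
    """Score an applicant from concept activations, with some disabled.

    Because each concept is a single named score, disabling one is a
    reliable, exhaustive switch rather than a fragile probe-and-patch.
    """
    return sum(w * scores[c] for c, w in output_weights.items()
               if c not in disabled)

applicant = {"income": 0.7, "credit_history": 0.9, "race": 0.3}
full = predict(applicant)                      # uses every concept
fair = predict(applicant, disabled={"race"})   # "race" switched off entirely
print(full, fair)
```

In an entangled model, by contrast, there is no single place where "race" lives, which is exactly the fragility Adebayo describes.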

“What this model demonstrates is that training interpretable models is no longer a sort of science; it’s now an engineering problem,” Adebayo said. “We figured out the science and we can scale them, and there is no reason why this kind of model wouldn’t match the performance of the frontier-level models,” which have many more parameters.

Guide Labs says that Steerling-8B achieves 90% of the capability of existing models while using less training data, thanks to its novel architecture. The next step for the company, which emerged from Y Combinator and raised a $9 million seed round from Initialized Capital in November 2024, is to build a larger model and begin offering API and agentic access to users.

“The way we’re currently training models is super primitive, and so democratizing inherent interpretability is actually going to be a long-term good thing for our role within the human race,” Adebayo told TechCrunch. “As we’re going after these models that are going to be super intelligent, you don’t want something to be making decisions on your behalf that’s sort of mysterious to you.”


