Tech

Why AI Chatbots Hallucinate, According to OpenAI Researchers

By IQ TIMES MEDIA | September 5, 2025 | 2 Mins Read


OpenAI researchers claim they’ve cracked one of the biggest obstacles to large language model performance — hallucinations.

Hallucinations occur when a large language model generates inaccurate information that it presents as fact. They plague the most popular LLMs, from OpenAI’s GPT-5 to Anthropic’s Claude.

OpenAI’s baseline finding, which it made public in a paper released on Thursday, is that large language models hallucinate because the methods used to train and evaluate them reward guessing over admitting uncertainty.

In other words, LLMs are being told to fake it till they make it. Some are better than others, however. In a blog post last month, OpenAI said that Claude models are more “aware of their uncertainty and often avoid making statements that are inaccurate.” It also noted that Claude’s high refusal rates risked limiting its utility.

“Hallucinations persist due to the way most evaluations are graded — language models are optimized to be good test-takers, and guessing when uncertain improves test performance,” the researchers wrote in the paper.

Large language models are essentially always in “test-taking mode,” answering questions as if everything in life were binary — right or wrong, black or white.

In many ways, they’re not equipped for the realities of life, where uncertainty is more common than certainty, and true accuracy is not a given.
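
To make that incentive concrete, here is a minimal sketch (an illustration of the argument, not code from the paper) of how an accuracy-only benchmark scores a model's choices:

```python
# Illustrative only: how an accuracy-only benchmark scores a model's options.
# A correct answer earns 1 point; a wrong answer and an "I don't know" both earn 0.

def expected_score_if_guessing(p_correct: float) -> float:
    """Expected points when the model guesses and is right with probability p_correct."""
    return p_correct * 1.0 + (1.0 - p_correct) * 0.0

def expected_score_if_abstaining() -> float:
    """Expected points when the model admits uncertainty instead of answering."""
    return 0.0

for p in (0.9, 0.5, 0.1):
    print(f"confidence {p:.0%}: guess = {expected_score_if_guessing(p):.2f}, "
          f"abstain = {expected_score_if_abstaining():.2f}")

# Even at 10% confidence, guessing scores 0.10 in expectation versus 0.00 for
# abstaining, so a model tuned against this metric learns to guess rather than
# say "I don't know".
```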


“Humans learn the value of expressing uncertainty outside of school, in the school of hard knocks. On the other hand, language models are primarily evaluated using exams that penalize uncertainty,” the researchers wrote.

The good news is that there is a fix, and it has to do with redesigning evaluation metrics.

“The root problem is the abundance of evaluations that are not aligned,” they wrote. “The numerous primary evaluations must be adjusted to stop penalizing abstentions when uncertain.”

In a blog post about the paper, OpenAI elaborated on what this type of adjustment would entail.

“The widely used, accuracy-based evals need to be updated so that their scoring discourages guessing. If the main scoreboards keep rewarding lucky guesses, models will keep learning to guess,” OpenAI said.
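
OpenAI’s post does not spell out an exact formula, but a simple, hypothetical scoring rule illustrates the idea: full credit for a correct answer, nothing for an abstention, and a penalty for a wrong answer. The penalty value in the sketch below is an assumption for illustration, not a figure from OpenAI.

```python
# Hypothetical confidence-aware grading (an assumed scheme for illustration, not
# OpenAI's exact proposal): correct = +1, wrong = -penalty, abstention = 0.
# Answering only pays off when  p * 1 - (1 - p) * penalty > 0,
# i.e. when confidence p exceeds penalty / (1 + penalty).

def expected_score(p_correct: float, penalty: float) -> float:
    """Expected points for answering when right with probability p_correct."""
    return p_correct * 1.0 - (1.0 - p_correct) * penalty

def guessing_threshold(penalty: float) -> float:
    """Confidence above which answering beats the 0 points earned by abstaining."""
    return penalty / (1.0 + penalty)

PENALTY = 3.0  # assumed value: a wrong answer costs 3 points
print(f"worth answering only above {guessing_threshold(PENALTY):.0%} confidence")
for p in (0.9, 0.5, 0.1):
    print(f"confidence {p:.0%}: answer = {expected_score(p, PENALTY):+.2f}, abstain = 0.00")

# With wrong answers penalized, a low-confidence guess now scores below abstaining,
# so the scoreboard no longer rewards lucky guesses.
```

Under a rule like this, the best strategy is to answer only when confidence clears the threshold, which is the behavior the researchers say today's accuracy-based leaderboards fail to reward.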

OpenAI did not immediately respond to a request for comment from Business Insider.


