Health

It’s not you, it’s me. ChatGPT doesn’t want to be your therapist or friend

By IQ TIMES MEDIA | August 6, 2025 | 3 Mins Read


In a case of “it’s not you, it’s me,” the creators of ChatGPT no longer want the chatbot to play the role of therapist or trusted confidant.

OpenAI, the company behind the popular bot, announced that it had incorporated some “changes,” specifically mental health-focused guardrails designed to prevent users from becoming too reliant on the technology, with a focus on people who view ChatGPT as a therapist or friend.

The changes come months after reports detailing negative and particularly worrisome user experiences raised concerns about the model’s tendency to “validate doubts, fuel anger, urge impulsive actions, or reinforce negative emotions [and thoughts].”

The company confirmed in its most recent blog post that an update made earlier this year made ChatGPT “noticeably more sycophantic,” or “too agreeable,” “sometimes saying what sounded nice instead of what was helpful.”

[Image: The logo of DeepSeek, a Chinese artificial intelligence company that develops open-source large language models, and the logo of OpenAI's artificial intelligence chatbot ChatGPT, January 29, 2025.]

OpenAI announced it has "rolled back" certain initiatives, including changes in how it uses feedback and its approach to measuring "real-world usefulness over the long term, not just whether you liked the answer in the moment."

“There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” OpenAI wrote in an Aug. 4 announcement. “While rare, we’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.”

Here’s what to know about the recent changes to ChatGPT, including what these mental health guardrails mean for users.

ChatGPT integrates ‘changes’ to help users thrive

According to OpenAI, the “changes” were designed to help ChatGPT users “thrive.”

“We also know that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress,” OpenAI said. “To us, helping you thrive means being there when you’re struggling, helping you stay in control of your time, and guiding—not deciding—when you face personal challenges.”

The company said it is "working closely" with experts, including physicians, human-computer-interaction (HCI) researchers and clinicians, as well as an advisory group, to improve how "ChatGPT responds in critical moments—for example, when someone shows signs of mental or emotional distress."

[Image: The ChatGPT website is seen on a computer at the Columbus Metropolitan Library in Columbus, Ohio.]

Thanks to recent "optimization," ChatGPT is now able to:
  • Engage in productive dialogue and provide evidence-based resources when users show signs of mental or emotional distress
  • Prompt users to take breaks from lengthy conversations
  • Avoid giving advice on "high-stakes personal decisions," instead asking questions and weighing pros and cons to help users reach a solution on their own

“Our goal to help you thrive won’t change. Our approach will keep evolving as we learn from real-world use,” OpenAI said in its blog post. “We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured? Getting to an unequivocal ‘yes’ is our work.”

This article originally appeared on USA TODAY: ChatGPT adds mental health protections for users: See what they are

