Health

It’s not you, it’s me. ChatGPT doesn’t want to be your therapist or friend

By IQ TIMES MEDIA · August 6, 2025 · 3 Mins Read


In a case of “it’s not you, it’s me,” the creators of ChatGPT no longer want the chatbot to play the role of therapist or trusted confidant.

OpenAI, the company behind the popular bot, announced that it had incorporated some “changes,” specifically mental health-focused guardrails designed to prevent users from becoming too reliant on the technology, with a focus on people who view ChatGPT as a therapist or friend.

The changes come months after reports detailing negative and particularly worrisome user experiences raised concerns about the model’s tendency to “validate doubts, fuel anger, urge impulsive actions, or reinforce negative emotions [and thoughts].”

The company confirmed in its most recent blog post that an update made earlier this year made ChatGPT “noticeably more sycophantic,” or “too agreeable,” “sometimes saying what sounded nice instead of what was helpful.”

The logo of DeepSeek, a Chinese artificial intelligence company that develops open-source large language models, and the logo of OpenAI's artificial intelligence chatbot ChatGPT on January 29, 2025.

OpenAI said it has "rolled back" certain initiatives, including changes in how it uses feedback and its approach to measuring "real-world usefulness over the long term, not just whether you liked the answer in the moment."

“There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” OpenAI wrote in an Aug. 4 announcement. “While rare, we’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.”

Here’s what to know about the recent changes to ChatGPT, including what these mental health guardrails mean for users.

ChatGPT integrates ‘changes’ to help users thrive

According to OpenAI, the “changes” were designed to help ChatGPT users “thrive.”

“We also know that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress,” OpenAI said. “To us, helping you thrive means being there when you’re struggling, helping you stay in control of your time, and guiding—not deciding—when you face personal challenges.”

The company said it is "working closely" with experts, including physicians, human-computer interaction (HCI) researchers, and clinicians, as well as an advisory group, to improve how "ChatGPT responds in critical moments—for example, when someone shows signs of mental or emotional distress."

The ChatGPT website is seen on a computer at the Columbus Metropolitan Library in Columbus, Ohio.

Thanks to recent "optimization," ChatGPT is now able to:

  • Engage in productive dialogue and provide evidence-based resources when users show signs of mental or emotional distress
  • Prompt users to take breaks during lengthy conversations
  • Avoid giving advice on "high-stakes personal decisions," instead asking questions and weighing pros and cons to help users reach a decision on their own

“Our goal to help you thrive won’t change. Our approach will keep evolving as we learn from real-world use,” OpenAI said in its blog post. “We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured? Getting to an unequivocal ‘yes’ is our work.”

This article originally appeared on USA TODAY: ChatGPT adds mental health protections for users: See what they are

