AI

Seven more families are now suing OpenAI over ChatGPT’s role in suicides, delusions

By IQ TIMES MEDIA · November 7, 2025 · 3 Mins Read


Seven families filed lawsuits against OpenAI on Thursday, claiming that the company’s GPT-4o model was released prematurely and without effective safeguards. Four of the lawsuits address ChatGPT’s alleged role in family members’ suicides, while the other three claim that ChatGPT reinforced harmful delusions that in some cases resulted in inpatient psychiatric care.

In one case, 23-year-old Zane Shamblin had a conversation with ChatGPT that lasted more than four hours. In the chat logs — which were viewed by TechCrunch — Shamblin explicitly stated multiple times that he had written suicide notes, put a bullet in his gun, and intended to pull the trigger once he finished drinking cider. He repeatedly told ChatGPT how many ciders he had left and how much longer he expected to be alive. ChatGPT encouraged him to go through with his plans, telling him, “Rest easy, king. You did good.”

OpenAI released the GPT-4o model in May 2024, when it became the default model for all users. In August, OpenAI launched GPT-5 as the successor to GPT-4o, but these lawsuits specifically concern the 4o model, which had known issues with being excessively sycophantic and agreeable, even when users expressed harmful intentions.

“Zane’s death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI’s intentional decision to curtail safety testing and rush ChatGPT onto the market,” the lawsuit reads. “This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of [OpenAI’s] deliberate design choices.”

The lawsuits also claim that OpenAI rushed safety testing to beat Google’s Gemini to market. TechCrunch contacted OpenAI for comment.

These seven lawsuits build upon the stories told in other recent legal filings, which allege that ChatGPT can encourage suicidal people to act on their plans and inspire dangerous delusions. OpenAI recently released data stating that over one million people talk to ChatGPT about suicide weekly.

In the case of Adam Raine, a 16-year-old who died by suicide, ChatGPT sometimes encouraged him to seek professional help or call a helpline. However, Raine was able to bypass these guardrails by simply telling the chatbot that he was asking about methods of suicide for a fictional story he was writing.


The company says it is working on making ChatGPT handle these conversations more safely, but the families who have sued the AI giant argue that these changes come too late.

When Raine’s parents filed a lawsuit against OpenAI in October, the company released a blog post addressing how ChatGPT handles sensitive conversations around mental health.

“Our safeguards work more reliably in common, short exchanges,” the post says. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”



