Stalking victim sues OpenAI, claims ChatGPT fueled her abuser’s delusions and ignored her warnings

By IQ TIMES MEDIA · April 10, 2026 · 6 Mins Read


After months of conversations with ChatGPT, a 53-year-old Silicon Valley entrepreneur became convinced he’d discovered a cure for sleep apnea and that powerful people were coming after him, according to a new lawsuit filed in California Superior Court in San Francisco County. He then allegedly used the tool to stalk and harass his ex-girlfriend.

Now the ex-girlfriend is suing OpenAI, alleging the company’s technology enabled and accelerated her harassment, TechCrunch has exclusively learned. She claims OpenAI ignored three separate warnings that the user posed a threat to others, including an internal flag classifying his account activity as involving mass-casualty weapons.

The plaintiff, referred to as Jane Doe to protect her identity, is suing for punitive damages. She also filed for a temporary restraining order on Friday, asking the court to force OpenAI to block the user’s account, prevent him from creating new ones, notify her if he attempts to access ChatGPT, and preserve his complete chat logs for discovery.

OpenAI has agreed to suspend the user’s account but has refused the rest, according to Doe’s lawyers. They say the company is withholding information about specific plans for harming Doe and other potential victims the user may have discussed with ChatGPT.

The lawsuit lands amid growing concern over the real-world risks of sycophantic AI systems. GPT-4o, the model cited in this and many other cases, was retired from ChatGPT in February. 

The case is brought by Edelson PC, the firm behind the wrongful death suits involving teenager Adam Raine, who died by suicide after months of conversations with ChatGPT, and Jonathan Gavalas, whose family alleges Google’s Gemini fueled his delusions, and a potential mass-casualty event, before his death. Lead attorney Jay Edelson has warned that AI-induced psychosis is escalating from individual harm toward mass-casualty events.

That legal pressure is now colliding directly with OpenAI’s legislative strategy: The company is backing an Illinois bill that would shield AI labs from liability even in cases involving mass deaths or catastrophic financial harm. 


OpenAI did not respond in time for comment. TechCrunch will update this article if the company responds.

The Jane Doe lawsuit lays out in detail how those risks played out for one woman over several months.

Last year, the ChatGPT user in the lawsuit (whose name is not included in the lawsuit to protect his identity) became convinced that he had invented a cure for sleep apnea after months of “high volume, sustained use of GPT-4o.” When no one took his work seriously, ChatGPT told him that “powerful forces” were watching him, including using helicopters to surveil his activities, according to the complaint. 

In July 2025, Jane Doe urged him to stop using ChatGPT and to seek help from a mental health professional. He instead turned back to ChatGPT, which assured him he was “a level 10 in sanity” and helped him double down on his delusions, per the lawsuit. 

Doe had broken up with the user in 2024, and he used ChatGPT to process the split, according to emails and communications cited in the lawsuit. Rather than push back on his one-sided account, it repeatedly cast him as rational and wronged, and her as manipulative and unstable. He then took these AI-generated conclusions off the screen and into the real world, using them to stalk and harass her. This manifested in several AI-generated, clinical-looking psychological reports that he distributed to her family, friends, and employer. 

Meanwhile, the user continued to spiral. In August 2025, OpenAI’s automated safety system flagged him for “Mass Casualty Weapons” activity and deactivated his account.

A human safety team member reviewed the account the next day and restored it, even though his account may have contained evidence that he was targeting and stalking individuals, including Doe, in real life. For example, a September screenshot the user sent to Doe showed a list of conversation titles including “violence list expansion” and “fetal suffocation calculation.”

The decision to reinstate is notable following two recent school shootings in Tumbler Ridge, Canada, and at Florida State University (FSU). OpenAI’s safety team had flagged the Tumbler Ridge shooter as a potential threat, but higher-ups reportedly decided not to alert authorities. Florida’s attorney general this week opened an investigation into OpenAI’s possible link with the FSU shooter.

According to the Jane Doe lawsuit, when OpenAI restored her stalker’s account, his Pro subscription wasn’t reinstated alongside it. He emailed the trust and safety team to sort it out, copying Doe on the message. 

In his emails, he wrote things like: “I NEED HELP VERY FAST, PLEASE. PLEASE CALL ME!” and “this is a matter of life or death.” He claimed he was “in the process of writing 215 scientific papers,” which he was producing so fast he didn’t “even have time to read” them. Included in those emails was a list of dozens of AI-generated “scientific papers” with titles like: “Deconstructing Race as a Biological Category_ Legal, Scientific, and Horn of Africa Perspectives.pdf.txt.”

“The user’s communications provided unmistakable notice that he was mentally unstable and that ChatGPT was the engine of his delusional thinking and escalating conduct,” the lawsuit states. “The user’s stream of urgent, disorganized, and grandiose claims, along with a concrete ChatGPT-generated report targeting Plaintiff by name and a sprawling body of purported ‘scientific’ materials, was unmistakable evidence of that reality. OpenAI did not intervene, restrict his access, or implement any safeguards. Instead, it enabled him to continue using the account and restored his full Pro access.”

Doe, who claims in the lawsuit that she was living in fear and could not sleep in her own home, submitted a Notice of Abuse to OpenAI in November.

“For the last seven months, he has weaponized this technology to create public destruction and humiliation against me that would have been impossible otherwise,” Doe wrote in her letter to OpenAI requesting the company permanently ban the user’s account.

OpenAI responded, acknowledging the report was “extremely serious and troubling” and that it was carefully reviewing the information. Doe never heard back.

Over the next couple of months, the user continued to harass Doe, sending her a series of threatening voicemails. In January, he was arrested and charged with four felony counts of communicating bomb threats and assault with a deadly weapon. Doe’s lawyers allege this validates warnings both she and OpenAI’s own safety systems had raised months earlier, warnings the company allegedly chose to ignore.

The user was found incompetent to stand trial and committed to a mental health facility, but a “procedural failure by the State” means he will soon be released to the public, according to Doe’s lawyers. 

Edelson called on OpenAI to cooperate. “In every case, OpenAI has chosen to hide critical safety information — from the public, from victims, from people its product is actively putting in danger,” he said. “We’re calling on them, for once, to do the right thing. Human lives must mean more than OpenAI’s race to an IPO.”


