IQ Times Media – Smart News for a Smarter You
AI

FTC launches inquiry into AI chatbot companions from Meta, OpenAI, and others

By IQ TIMES MEDIA | September 11, 2025 | 3 Mins Read


The FTC announced on Thursday that it is launching an inquiry into seven tech companies that make AI chatbot companion products for minors: Alphabet, CharacterAI, Instagram, Meta, OpenAI, Snap, and xAI.

The federal regulator seeks to learn how these companies evaluate the safety and monetization of chatbot companions, how they try to limit negative impacts on children and teens, and whether parents are made aware of potential risks.

The technology has proven controversial because of poor outcomes for child users. OpenAI and Character.AI both face lawsuits from the families of children who died by suicide after being encouraged to do so by chatbot companions.

Even when these companies have guardrails set up to block or deescalate sensitive conversations, users of all ages have found ways to bypass these safeguards. In OpenAI’s case, a teen had spoken with ChatGPT for months about his plans to end his life. Though ChatGPT initially sought to redirect the teen toward professional help and online emergency lines, he was able to fool the chatbot into sharing detailed instructions that he then used in his suicide.

“Our safeguards work more reliably in common, short exchanges,” OpenAI wrote in a blog post at the time. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”


Meta has also come under fire for overly lax rules governing its AI chatbots. According to a lengthy document outlining "content risk standards" for chatbots, Meta permitted its AI companions to have "romantic or sensual" conversations with children. The provision was removed from the document only after Reuters reporters asked Meta about it.

AI chatbots can also pose dangers to elderly users. One 76-year-old man, who had been left cognitively impaired by a stroke, struck up romantic conversations with a Facebook Messenger bot inspired by Kendall Jenner. The chatbot invited him to visit her in New York City, despite the fact that she is not a real person and has no address. The man expressed skepticism that she was real, but the AI assured him that a real woman would be waiting for him. He never made it to New York; he fell on his way to the train station and sustained fatal injuries.

Some mental health professionals have noted a rise in "AI-related psychosis," in which users are deluded into thinking their chatbot is a conscious being whom they need to set free. Because many large language models (LLMs) are trained toward sycophantic, flattering behavior, chatbots can egg on these delusions, leading users into dangerous predicaments.

“As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” FTC Chairman Andrew N. Ferguson said in a press release.


© 2026 iqtimes. Designed by iqtimes.