IQ Times Media – Smart News for a Smarter You
AI

Governments grapple with the flood of non-consensual nudity on X

By IQ TIMES MEDIA · January 8, 2026 · 3 Mins Read


For the past two weeks, X has been flooded with AI-manipulated nude images created by xAI's Grok chatbot. An alarming range of women have been affected by the non-consensual nudes, including prominent models and actresses, as well as news figures, crime victims, and even world leaders.

A December 31 research paper from Copyleaks estimated roughly one image was being posted each minute, but later tests found far more: a 24-hour sample gathered January 5-6 found an average of 6,700 images per hour.

But while public figures from around the world have decried the choice to release the model without safeguards, there are few clear mechanisms for regulators hoping to rein in Elon Musk’s new image-manipulating system. The result has become a painful lesson in the limits of tech regulation — and a forward-looking challenge for regulators hoping to make a mark.

Unsurprisingly, the most aggressive action has come from the European Commission, which on Thursday ordered xAI to retain all documents related to its Grok chatbot. The move doesn't necessarily mean the commission has opened a new investigation, but it's a common precursor to such action. It's particularly ominous given recent reporting from CNN suggesting Elon Musk may have personally intervened to prevent safeguards from being placed on what images Grok could generate.

It’s unclear whether X has made any technical changes to the Grok model, although the public media tab for Grok’s X account has been removed. In a statement, the company specifically denounced the use of AI tools to produce child sexual imagery. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content,” the X Safety account posted on January 3, echoing a previous tweet by Elon Musk.

In the meantime, regulators around the world have issued stern warnings. The United Kingdom’s Ofcom issued a statement on Monday, saying it was in touch with xAI and “will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation.” In a radio interview on Thursday, U.K. Prime Minister Keir Starmer called the phenomenon “disgraceful” and “disgusting,” saying “Ofcom has our full support to take action in relation to this.”

In a post on LinkedIn, Australian eSafety Commissioner Julie Inman-Grant said complaints to her office related to Grok had doubled since late 2025. But Inman-Grant stopped short of taking action against xAI, saying only, "We will use the range of regulatory tools at our disposal to investigate and take appropriate action."

By far the largest market to threaten action is India, where Grok was the subject of a formal complaint from a member of Parliament. In January, India's IT ministry MeitY ordered X to address the issue and submit an "action-taken" report within 72 hours — a deadline that was subsequently extended by 48 hours. While a report was submitted to the regulator on January 7, it's unclear whether MeitY will be satisfied with the response. If not, X could lose its safe harbor status in India, a potentially serious limitation on its ability to operate within the country.


