AI

Elon Musk teases a new image-labeling system for X… we think?

By IQ TIMES MEDIA · January 28, 2026 · 4 min read


Elon Musk’s X appears to be the latest social network to roll out a feature that labels edited images as “manipulated media,” at least if a post from Musk himself is to be believed. The company has not explained how it will make that determination, or whether the label covers images edited with traditional tools such as Adobe’s Photoshop.

So far, the only details come from a cryptic X post in which Musk wrote “Edited visuals warning” while resharing an announcement from the anonymous X account DogeDesigner. That account often serves as a proxy for introducing new X features, with Musk reposting it to share the news.

Still, details on the new system are thin. DogeDesigner’s post claimed X’s new feature could make it “harder for legacy media groups to spread misleading clips or pictures.” It also claimed the feature is new to X.

Before it was acquired and renamed X, the company then known as Twitter labeled tweets that used manipulated, deceptively altered, or fabricated media as an alternative to removing them. That policy wasn’t limited to AI; it covered things like “selective editing or cropping or slowing down or overdubbing, or manipulation of subtitles,” Twitter’s then-head of site integrity, Yoel Roth, said in 2020.

It’s unclear whether X is adopting the same rules or has made any significant changes to tackle AI. Its help documentation currently describes a policy against sharing inauthentic media, but that policy is rarely enforced, as the recent deepfake debacle in which users shared non-consensual nude images made clear. Even the White House now shares manipulated images.

Deciding what counts as “manipulated media” or an “AI image” is more nuanced than it sounds.

Given that X is a playground for political propaganda, both domestic and foreign, the company should document how it determines what is “edited,” AI-generated, or AI-manipulated. Users should also know whether there is any dispute process beyond X’s crowdsourced Community Notes.


As Meta discovered when it introduced AI image labeling in 2024, it’s easy for detection systems to go awry. Meta was found to be tagging real photographs with its “Made with AI” label even though they had not been created using generative AI.

This happened because AI features are increasingly being integrated into creative tools used by photographers and graphic artists. (Apple’s new Creator Studio suite, launching today, is one recent example.)

As it turned out, this confused Meta’s identification tools. For instance, Adobe’s cropping tool was flattening images before saving them as JPEGs, which triggered Meta’s AI detector. In another case, Adobe’s Generative Fill, used to remove small distractions like wrinkles in a shirt or an unwanted reflection, also caused real photos to be labeled “Made with AI,” even though AI had been used only for minor edits.

Ultimately, Meta changed the label to read “AI info,” so that images would not be flatly declared “Made with AI” when they had not been generated by AI.

Today, there is a standards-setting body for verifying the authenticity and provenance of digital content: the C2PA (Coalition for Content Provenance and Authenticity). Related initiatives, such as the CAI (Content Authenticity Initiative) and Project Origin, focus on attaching tamper-evident provenance metadata to media.
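
How that provenance data is carried matters here. C2PA attaches a cryptographically signed manifest of a file’s edit history inside the file itself; in a JPEG, that manifest store is embedded as JUMBF boxes inside APP11 marker segments. The Python sketch below is only an illustration under those assumptions: it shows how a platform might cheaply check whether such a manifest is present at all, and it performs no signature or claim verification, which would require a full C2PA implementation.

# Illustrative sketch only: detect whether a JPEG appears to carry an
# embedded C2PA manifest store. C2PA embeds its manifest in JPEG APP11
# (0xFFEB) marker segments as JUMBF boxes; this checks for that container
# but does NOT verify signatures or edit history.
import struct
import sys

def has_c2pa_container(path: str) -> bool:
    with open(path, "rb") as f:
        if f.read(2) != b"\xff\xd8":           # SOI marker: not a JPEG
            return False
        while True:
            marker = f.read(2)
            if len(marker) < 2 or marker[0] != 0xFF:
                return False                   # truncated or misaligned file
            if marker[1] in (0xD9, 0xDA):      # EOI, or start of scan data
                return False                   # no APP11/JUMBF segment found
            (seg_len,) = struct.unpack(">H", f.read(2))
            payload = f.read(seg_len - 2)
            # APP11 (0xFFEB) segments holding JUMBF boxes carry the C2PA
            # manifest store; "jumb"/"c2pa" are box-type and label markers.
            if marker[1] == 0xEB and (b"jumb" in payload or b"c2pa" in payload):
                return True

if __name__ == "__main__":
    print(has_c2pa_container(sys.argv[1]))

A check like this only tells a platform that provenance data exists; deciding whether the edits that data records amount to “manipulated media” is the policy question X has yet to answer.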

Presumably, X’s implementation would follow some known process for identifying AI content, but X’s owner, Elon Musk, didn’t say which one. Nor did he clarify whether he means AI images specifically, or anything that isn’t an unedited photo uploaded to X straight from a smartphone camera. It’s even unclear whether the feature is brand-new, as DogeDesigner claims.

X isn’t the only platform grappling with manipulated media. In addition to Meta, TikTok has been labeling AI content, and streaming services like Deezer and Spotify are scaling initiatives to identify and label AI-generated music. Google Photos uses C2PA to indicate how photos on its platform were made. Microsoft, the BBC, Adobe, Arm, Intel, Sony, OpenAI, and others sit on the C2PA’s steering committee, and many more companies have joined as members.

X is not currently listed among the members, though we’ve reached out to C2PA to see if that recently changed. X doesn’t typically respond to requests for comment, but we asked anyway.



