Elon Musk teases X’s new photo-tagging system… we think?

Elon Musk’s X is the latest social media platform to roll out a feature labeling edited photos as “manipulated media” — if Musk’s post is to be believed. But the company has not specified how it will make that determination, or whether the label covers images edited with traditional tools, such as Adobe’s Photoshop.

So far, the only detail about the new feature comes from a cryptic X post from Elon Musk — “Visual warning planned” — written as he re-shared the announcement from the anonymous X account DogeDesigner. That account often serves as a proxy for introducing new X features, and Musk frequently reposts it to share news.

However, details on the new system are scant. DogeDesigner’s post said the feature would make it “difficult for legacy media groups to distribute misleading clips or images,” and claimed the feature is new to X.

Before it was acquired and renamed X, the company then known as Twitter had a policy of labeling — and in some cases removing — tweets containing media that had been significantly altered, manipulated, or fabricated. Its policy was not limited to AI but included things like “selective editing or cropping,” slowing down footage, “overdubbing, or manipulating subtitles,” the site’s then head of site integrity, Yoel Roth, said in 2020.

It is unclear whether X still applies the same rules or has significantly changed them to deal with AI. The help documentation currently lists a policy against sharing synthetic and manipulated media, but it is rarely enforced — as demonstrated by the recent heated controversy over users sharing AI-generated nude images of people without their consent. In addition, even the White House is now sharing altered photos.

What gets labeled “manipulated media” versus “AI imagery” is another open question.

Given that X is a playground for political propaganda, both domestic and foreign, the company should document how it determines what is “edited” — or perhaps AI-generated or AI-assisted. Users should also know whether there is any kind of dispute process beyond X’s Community Notes.


As Meta discovered when it introduced AI image labeling in 2024, it’s easy for automated detection systems to get things wrong. In Meta’s case, the company was found to be tagging real photos with its “Made with AI” label even though they were not created using artificial intelligence.

This has happened because AI features are increasingly being integrated into the creative tools used by photographers and illustrators. (Apple’s new Creators Studio suite, introduced today, is a recent example.)

As it turns out, this confused Meta’s detection tools. In one example, Adobe’s cropping tool was flattening images before saving them as JPEGs, which triggered Meta’s AI detector. In another, Adobe’s Generative Fill — used to remove objects such as shirt wrinkles or unwanted reflections — was also causing photos to be labeled “Made with AI,” even though they were real photos that had only been retouched with AI tools.

Eventually, Meta updated its label to say “AI info,” so that it would no longer flatly declare images “Made with AI” when they weren’t.

Today, there is a body that sets standards for verifying the authenticity and provenance of digital content, known as the C2PA (Coalition for Content Provenance and Authenticity). There are also related initiatives, such as the CAI (Content Authenticity Initiative) and Project Origin, which focus on adding tamper-evident metadata to media content.
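Standards like C2PA work by embedding provenance data inside the file itself: in JPEGs, a C2PA manifest lives in application (APP11/JUMBF) segments, and IPTC metadata can declare a `digitalSourceType` of `trainedAlgorithmicMedia` for generative content. As a rough, hypothetical sketch of the kind of check a platform could run — not X’s or Meta’s actual pipeline — here is a pure-stdlib Python function that walks a JPEG’s APP segments and looks for those markers. The marker list and the flag-on-presence logic are illustrative assumptions:

```python
def iter_app_segments(data: bytes):
    """Yield (marker, payload) for each APPn segment in a JPEG byte string."""
    assert data[:2] == b"\xff\xd8", "not a JPEG"
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # malformed stream; stop scanning
        marker = data[i + 1]
        # Standalone markers (SOI, EOI, RSTn) carry no length field.
        if marker in (0xD8, 0xD9) or 0xD0 <= marker <= 0xD7:
            i += 2
            continue
        length = int.from_bytes(data[i + 2 : i + 4], "big")  # includes its own 2 bytes
        payload = data[i + 4 : i + 2 + length]
        if 0xE0 <= marker <= 0xEF:  # APP0..APP15: where metadata lives
            yield marker, payload
        if marker == 0xDA:  # start of scan: metadata segments are done
            break
        i += 2 + length

# Hypothetical markers to flag on: the C2PA JUMBF box label, and the IPTC
# digital-source-type value used for generative-AI content.
AI_MARKERS = (b"c2pa", b"trainedAlgorithmicMedia")

def looks_ai_edited(jpeg_bytes: bytes) -> bool:
    """Return True if any APP segment contains a known provenance marker."""
    return any(
        m in payload
        for _, payload in iter_app_segments(jpeg_bytes)
        for m in AI_MARKERS
    )
```

A real implementation would verify the manifest’s cryptographic signatures rather than just substring-matching — and, as Meta’s experience shows, presence of such metadata alone does not mean an image is wholly AI-generated.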

Presumably, X’s implementation will be accompanied by some kind of AI content identification process, but X owner Elon Musk did not say what that is. He also didn’t specify whether he’s talking specifically about AI photos, or about anything other than a photo uploaded to X directly from your smartphone’s camera. Nor is it clear that the feature is brand new, as DogeDesigner claims.

X is not the only shop dealing with altered media. In addition to Meta, TikTok labels AI-generated content. Streaming services like Deezer and Spotify are also scaling up AI music identification and labeling initiatives. Google Photos uses C2PA to show how the images on its platform were created. Microsoft, BBC, Adobe, Arm, Intel, Sony, OpenAI, and others sit on the C2PA steering committee, and many more companies have joined as members.

X is not currently listed as a member, though we reached out to C2PA to see whether that has recently changed. X typically doesn’t respond to requests for comment, but we asked anyway.
