Meta has officially announced changes to how its AI-edited content on Facebook, Instagram and Threads is labeled. The company has stated that the "AI Info" label will no longer appear directly below the user's name. Instead, users will have to click a menu at the top right of images or videos to see whether the content was edited or created using AI.
According to Meta, this adjustment has been made to "better reflect the extent of AI used" in content across its platforms. The new categorization follows concerns about its previous "Made with AI" label, which erroneously classified authentic photos as AI-generated. The company will begin rolling out the changes next week.
Meta shifts AI labels to hidden menus
Meta's decision to move the AI labels into the menu marks a shift in how users encounter AI-edited content. Previously, the "AI Info" option appeared immediately beneath the user's name on posts that were edited or generated with AI. Now the label will be visible only if the user actively seeks it out by clicking the menu icon. The update applies to Facebook, Instagram, and Threads, and in Meta's words, the changes are needed to better explain how AI tools are used.
The new label will still indicate when AI tools have been applied, including whether the content was fully AI-generated or only lightly modified with AI-powered features in editing tools such as Adobe Photoshop. Meta also noted that industry-shared signals, such as Content Credentials built on the C2PA standard backed by Adobe and other companies, help detect AI-edited media. This metadata discloses how much the content has been altered using AI technology.
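For readers curious how such provenance signals can surface in a file, the short Python sketch below is a hypothetical illustration, not Meta's actual detection pipeline. It simply scans an image's raw bytes for two markers commonly associated with AI provenance: the label used by embedded C2PA Content Credentials manifests and the IPTC digital source type value that flags AI-generated media. The file name and byte patterns are assumptions for the example.

```python
# Hypothetical sketch: a crude check for common AI-provenance markers in an
# image file. This is not how Meta detects AI content; it only looks for byte
# patterns associated with C2PA Content Credentials and the IPTC digital
# source type used to mark AI-generated media.
from pathlib import Path

C2PA_MARKER = b"c2pa"                        # label appearing in C2PA manifest boxes
IPTC_AI_SOURCE = b"trainedAlgorithmicMedia"  # IPTC digital source type for AI media


def provenance_hints(image_path: str) -> dict:
    """Return rough hints about embedded AI-provenance metadata."""
    data = Path(image_path).read_bytes()
    return {
        "has_c2pa_manifest": C2PA_MARKER in data,
        "declares_ai_generated": IPTC_AI_SOURCE in data,
    }


if __name__ == "__main__":
    # "example.jpg" is a placeholder; point this at any local image.
    print(provenance_hints("example.jpg"))
```

Real Content Credentials verification goes much further than byte matching: the embedded manifest is cryptographically signed, and validators check that signature and the recorded edit history rather than just the presence of a marker.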
Growing worries as Meta’s AI labels become less visible
Although Meta stresses that the adjustment is meant to address creators' concerns, some worry that the reduced visibility of AI labels could add to the confusion about what is real and what is AI-edited. As generative AI editing tools become more sophisticated, it is increasingly difficult to distinguish authentic images and videos from manipulated ones. With the "AI Info" tag tucked away in a menu or missing entirely, users may overlook information that helps them evaluate the credibility of the content they encounter.
Meta did not specify which systems it will use to detect AI-edited content, but it pointed to external tools such as Google's SynthID watermarking, which marks AI-generated content. The company reiterated that it will continue labeling AI-made content based on industry-standard signals or creators' self-disclosure, to keep the use of AI tools transparent.