December 23, 2024

Meta aims to tag AI images on Instagram and Facebook

Nick Clegg, Meta’s president of global affairs, says people ‘want clarity on where the boundary lies’ as AI-generated content proliferates

Meta is working to identify and label AI-generated images on Facebook, Instagram, and Threads as part of its effort to expose “individuals and organizations seeking to deceive others.”

While photorealistic images produced with Meta’s own AI imaging tool are already labeled as AI, the company’s president of global affairs, Nick Clegg, said in a blog post on Tuesday that Meta would also begin labeling AI-generated images created on competing platforms.

Meta’s AI images already carry metadata and invisible watermarks that can signal to other organizations that an image was created by AI. The company is also building tools to detect these markers when they are applied by other companies, including Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, in their AI image generators, according to Clegg.

“As the line between human and synthetic content becomes blurred, people want clarity on where the boundary lies,” Clegg stated.

“People are often encountering AI-generated content for the first time, and our users have expressed their appreciation for transparency regarding this new technology. Therefore, it is important for us to inform people when the photorealistic content they are viewing has been generated using AI.”

Clegg said the feature is still in development and that the labels will be rolled out in all languages in the coming months.

“We will be implementing this approach over the next year, a period that coincides with several significant elections worldwide,” Clegg stated.

He clarified that this capability is currently limited to images, and AI tools that produce audio and video do not yet have these markers. However, the company plans to enable users to disclose and add labels to such content when it is posted online.

He stated that the company plans to introduce a more noticeable label on “digitally created or altered” images, videos, or audio that pose a significant risk of materially deceiving the public on an important issue.

The company is also exploring the creation of technology to automatically identify AI-generated content, even in cases where the content lacks invisible markers or these markers have been removed.

“This effort is particularly crucial as this arena is expected to become increasingly competitive in the years to come,” Clegg explained.

“Individuals and organizations seeking to deceive others with AI-generated content will seek ways to circumvent detection safeguards. Across our industry and society at large, we must continue to seek ways to stay ahead.”

AI deepfakes have already emerged in the US presidential election cycle, including robocalls featuring what is believed to be an AI-generated deepfake of US President Joe Biden’s voice, discouraging voters from participating in the Democratic primary in New Hampshire.

Last week, Nine News in Australia was criticized for altering an image of Victorian Animal Justice Party MP Georgie Purcell to expose her midriff and change her chest in an evening news broadcast. The network blamed the alteration on “automation” in Adobe’s Photoshop software, which includes AI image tools.
