Meta plans to expand its labeling of AI-generated imagery in a year of major elections

Meta is broadening its labeling of AI-generated imagery on its social media platforms, including Facebook, Instagram, and Threads, to cover synthetic imagery created with competitors’ generative AI tools, wherever that imagery carries “industry standard indicators” that Meta’s systems can detect.

The development suggests the social media giant expects to be labeling far more AI-generated imagery on its platforms going forward. However, without data on how much synthetic versus authentic content is actually being shown to users, it is hard to judge how much this move will do to combat AI-driven disinformation and misinformation, especially in a crucial year for elections worldwide.

Meta says it can already identify and label “photorealistic images” produced with its own “Imagine with Meta” generative AI tool, which launched in December of last year. Until now, however, it has not been labeling synthetic imagery created with other companies’ tools, which is the small step it is announcing today.

In a recent blog post, Meta’s president of global affairs, Nick Clegg, announced the expanded labeling and highlighted collaborative efforts with industry partners to establish common technical standards that signal when content has been generated using AI. “The ability to identify these signals will enable us to categorize AI-generated images that users share on Facebook, Instagram, and Threads,” he wrote.

According to Clegg, Meta plans to roll out the expanded labeling in the near future and will apply the labels in all languages that each app supports.

Asked for more information, a Meta representative could not give a specific timeline or say which markets will receive the additional labels. Clegg’s post suggests the rollout will be gradual, spanning the next year, with Meta weighing election calendars in various countries to determine when and where to launch the expanded labeling in different markets.

“We will be adopting this strategy for the upcoming year, coinciding with several significant global elections,” he stated. “Throughout this period, we anticipate gaining valuable insights into the creation and dissemination of AI content, as well as understanding the preferences for transparency and the future developments of these technologies.” What Meta learns, he added, will shape how it approaches this work going forward and feed into improving industry standards.

Meta’s labeling of imagery generated by its own AI tools relies on visible marks applied by its generative AI technology, along with “invisible watermarks” and metadata embedded in the image files. Its detection technology will look for the equivalent signals produced by competitors’ AI image-generation tools. According to Clegg, Meta has been collaborating with other AI companies, through forums such as the Partnership on AI, to establish common standards and best practices for identifying generative AI content.
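
Meta has not published the details of its detection pipeline, but a rough sense of what one such “industry standard indicator” looks like can be sketched in a few lines of Python. The snippet below simply scans a file’s raw bytes for the IPTC digital source type value used to mark media created by generative AI, plus a crude hint of an embedded C2PA manifest; the marker strings and the byte-scan shortcut are illustrative assumptions, and production systems parse the metadata properly and decode invisible watermarks rather than grepping files.

```python
# Minimal sketch, not Meta's implementation: look for metadata strings that
# commonly indicate AI-generated imagery. Absence of a marker proves nothing,
# since metadata is easily stripped by screenshots or re-encoding.
from pathlib import Path

# IPTC DigitalSourceType value for media created by a generative model.
AI_SOURCE_MARKER = b"trainedAlgorithmicMedia"
# Rough hint that a C2PA / Content Credentials manifest is embedded.
C2PA_MARKER = b"c2pa"

def looks_ai_labeled(image_path: str) -> bool:
    """Return True if the file appears to carry an AI-generation indicator."""
    data = Path(image_path).read_bytes()
    return AI_SOURCE_MARKER in data or C2PA_MARKER in data

if __name__ == "__main__":
    import sys
    for path in sys.argv[1:]:
        verdict = "AI indicator found" if looks_ai_labeled(path) else "no indicator"
        print(f"{path}: {verdict}")
```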

The blog post says little about how much this goal depends on the cooperation of those other companies. According to Clegg, Meta expects to be able to identify AI-generated imagery from tools built by Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, as well as its own AI image tools, within the next year.

What about AI-generated video and audio?
According to Clegg, detecting AI-generated video and audio remains a significant challenge: marking and watermarking techniques have not been widely adopted, which limits what detection tools can do, and these signals can be stripped out through editing and further manipulation of the media.

“It is currently not feasible to detect all AI-generated content, and individuals have found methods to remove hidden markers. We are exploring various possibilities,” he wrote. “We are diligently working on developing classifiers that can assist us in automatically detecting AI-generated content, even in cases where the content does not have any visible indicators. Simultaneously, we are exploring methods to enhance the security of invisible watermarks, making them harder to tamper with or erase.”

Meta’s AI research lab, FAIR, recently unveiled research on an invisible watermarking technology it calls Stable Signature, which integrates the watermarking mechanism directly into the image-generation process for some image generators. That approach could be valuable for open source models, since the watermarking cannot simply be disabled.

Given the gap between what AI can generate and what can currently be detected, Meta is also updating its policy: users are now required to disclose when they post AI-generated video that appears “photorealistic” or audio that sounds remarkably real, and Meta reserves the right to label content that it believes has a significant potential to deceive the public on important matters.

Failure to make this manual disclosure could result in penalties under Meta’s existing Community Standards (think account suspensions, bans, and other consequences).

“Our Community Standards are applicable to all users, regardless of their location or the type of content they create, including AI-generated content,” explained Meta’s spokesperson in response to inquiries about potential consequences for users who fail to disclose information.

Amid the concern over AI-generated fakes, it is worth remembering that manipulating digital media to deceive people has been happening for a long time and does not require advanced generative AI tools. Access to a social media account and some basic media-editing skills can be enough to make a fake that goes viral.

In a related development, the Oversight Board, the content-review body Meta established, recently issued a decision on an edited video of President Biden and his granddaughter that had been manipulated to falsely suggest inappropriate touching. The Board called on the tech giant to overhaul its policies on faked videos, criticizing them as “incoherent” and specifically taking issue with the policy’s narrow focus on AI-generated content in this context.

“The current policy lacks coherence,” said Oversight Board co-chair Michael McConnell. “It restricts the use of manipulated videos that misrepresent what people actually say, but it does not forbid posts that depict someone engaging in actions they did not actually do. It only pertains to videos generated by AI, while allowing other deceptive content to go unpunished.”

When questioned about the Board’s review and whether Meta might expand its policies to cover manipulation risks that do not involve AI, the spokesperson declined to answer directly, saying only that Meta’s response to the decision would be published on its transparency center within the required 60-day window.

LLMs as a content moderation tool
Clegg’s blog post also touches on Meta’s currently “limited” use of generative AI to enforce its own policies, with the Meta president hinting that GenAI could play a larger role here, particularly during critical periods like elections, potentially by drawing on large language models (LLMs).

“Our utilization of AI technology to enforce policies has been relatively restricted, particularly when it comes to generative AI tools. However, there is a glimmer of hope that generative AI might assist us in swiftly and precisely eliminating harmful content. It could also prove to be valuable in upholding our policies during times of increased vulnerability, such as elections,” he wrote.

Meta has begun testing large language models (LLMs) trained on its Community Standards to help assess whether a piece of content violates its policies, and Clegg says initial tests suggest the LLMs can outperform existing machine learning models. The company is also using LLMs to remove content from review queues in cases where it is highly confident the content does not breach its policies, freeing human reviewers to focus on material that is more likely to violate the rules.
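
Meta has not shared implementation details, but the triage pattern described here, auto-clearing only items an LLM judges non-violating with high confidence and routing everything else to human reviewers, can be sketched roughly as below. The policy excerpt, the call_llm stand-in, and the confidence threshold are hypothetical placeholders, not Meta’s actual system.

```python
# Hypothetical sketch of LLM-assisted triage of a moderation review queue.
from dataclasses import dataclass

POLICY_EXCERPT = "Community Standards excerpt would go here."

@dataclass
class Verdict:
    violates: bool
    confidence: float  # model-reported probability, 0.0 to 1.0

def call_llm(policy: str, content: str) -> Verdict:
    # Stand-in for a real model call: a deployed system would send the policy
    # and the content to an LLM and parse its structured answer.
    return Verdict(violates=False, confidence=0.5)

def triage(queue: list[str], clear_threshold: float = 0.95) -> tuple[list[str], list[str]]:
    """Split a review queue into items safe to auto-clear and items that
    still need a human reviewer."""
    auto_cleared, needs_review = [], []
    for content in queue:
        verdict = call_llm(POLICY_EXCERPT, content)
        if not verdict.violates and verdict.confidence >= clear_threshold:
            auto_cleared.append(content)   # high confidence it's benign
        else:
            needs_review.append(content)   # uncertain or possibly violating
    return auto_cleared, needs_review
```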

Meta is now exploring the use of generative AI alongside its standard AI-powered content moderation to tackle the overwhelming amount of toxic content that burdens human content reviewers, putting their mental well-being at risk.

AI alone was unable to solve Meta’s content moderation issue, and it remains uncertain if the combination of AI and GenAI can address it. However, this move could potentially enhance the tech giant’s operational effectiveness, especially as the practice of outsourcing content moderation to underpaid workers is currently under legal scrutiny in various regions.

Clegg’s post also notes that AI-generated content on Meta’s platforms remains eligible for fact-checking by its independent partners, so a given piece of content may end up labeled as debunked as well as flagged as AI-generated or “Imagined by AI,” per Meta’s current GenAI image labels. Applying multiple signposts to some content, a single label to other content, and no label at all to the rest risks making it even harder for users to judge the credibility of what they encounter on these platforms.

Clegg does not address the glaring imbalance between the limited resources of nonprofit fact-checkers and the malicious actors armed with AI tools who, driven by a range of incentives and funders, can readily weaponize the technology to spread disinformation across social media. And Meta itself is adding to the supply of such tools by building and distributing generative AI systems for creating content.

Without concrete statistics on the ratio of synthetic versus authentic content on Meta’s platforms, and without reliable information on the actual effectiveness of its AI fake detection systems, it is difficult to draw any definitive conclusions. However, it is evident that Meta is facing significant pressure to demonstrate proactive measures, especially in a year when election-related fake content is expected to garner substantial attention.
