A deceptive seven-second video of President Biden could reshape Facebook's misinformation policies ahead of the 2024 election, but time is running short for both the platform and American voters.
On Monday, the Oversight Board, the independent advisory group Meta established to review its moderation decisions on Facebook and Instagram, issued a ruling on a misleadingly edited video of Biden that went viral on social media last year.
The original video shows the president accompanying his granddaughter, Natalie Biden, as she cast her ballot ahead of the 2022 midterm elections. In the footage, Biden places an "I Voted" sticker on his granddaughter and gives her a gentle kiss on the cheek.
In the manipulated version, the clip is trimmed to remove any visible sign of the sticker, set to a song with sexually explicit lyrics, and played on a loop, making it appear that Biden is touching the young woman inappropriately. The seven-second clip was posted to Facebook in May 2023 with a caption describing Biden as a "sick pedophile."
Meta's Oversight Board agreed to take up the case last October, after a Facebook user complained about the video and escalated the matter when the platform declined to remove it.
The Oversight Board's decision, released Monday, finds that Meta's choice to leave the video up was consistent with the platform's rules, but it criticizes the policy in question as "incoherent."
"As it stands, the policy makes little sense," said Oversight Board Co-Chair Michael McConnell. The policy bars manipulated videos that put words in someone's mouth, but it does not explicitly prohibit posts depicting people doing things they never actually did. And it applies only to video created with artificial intelligence, giving other forms of misleading content a pass.
McConnell also pointed to the policy's failure to address manipulated audio, which he called "one of the most potent forms of electoral disinformation."
The board's decision argues that Meta's rules should be guided by the harms they are meant to prevent, rather than by how a piece of content was made, and it urges Meta to implement any changes quickly in light of elections taking place around the world.
Beyond expanding the scope of its manipulated media policy, the Oversight Board recommended that Meta attach labels flagging altered videos as altered, rather than relying solely on fact-checkers, a process the board criticized as "asymmetric depending on language and market."
By labeling more content rather than removing it, the Oversight Board believes Meta can maximize freedom of expression, mitigate potential harm, and better inform users.
A Meta spokesperson said the company is reviewing the Oversight Board's guidance and will respond publicly within 60 days.
The altered video continues to circulate on X, formerly known as Twitter. Last month, a verified X account with 267,000 followers shared the video with the caption "The media simply feign ignorance towards this occurrence"; it has racked up more than 611,000 views.
The Biden video is not the first time the Oversight Board has ultimately instructed Meta to revisit its rules. The board took issue with Facebook's decision to ban former President Trump, criticizing the "indefinite" nature of the penalty as vague and inconsistent, even as it supported the suspension itself. In that case and others, the Oversight Board has consistently urged Meta to make its rules more specific and more transparent.
When it accepted the Biden "cheap fake" case, the Oversight Board noted that Meta stood by its decision to leave the altered video up, citing its manipulated media policy, which covers misleadingly altered photos and videos. That policy applies only when artificial intelligence is involved, or when a video's subject is depicted saying words they never actually said.
The manipulated media policy was designed with deepfakes in mind, covering videos that have been edited or synthesized in ways that are not readily apparent to an average person and that are likely to mislead.
Critics of Meta's approach to content moderation have dismissed the company's self-designed review board as too little, far too late.
Meta may now have a formal channel for appealing content moderation decisions, but misinformation and other harmful content spread much faster than the appeals process can move, and considerably faster than anyone anticipated during the previous two general election cycles.
As the 2024 presidential race heats up, researchers and watchdog groups are bracing for a deluge of misleading claims and AI-generated fakes. Yet even as new technologies make it easier to amplify dangerous falsehoods, social media companies have dramatically scaled back their trust and safety efforts and retreated from what was once framed as a collective push to stamp out misinformation.
"The volume of misleading content is rising, and the tools to create it are rapidly improving," McConnell said.