In its October response to the board’s request for public comment on the case, Media Matters highlighted numerous instances in which Meta failed to adequately enforce its manipulated media policies during the 2020 and 2022 election cycles.
Those instances included a digitally altered video purporting to show Biden at a dinner with a man in blackface, a video altered to make then-House Speaker Nancy Pelosi (D-CA) look and sound as though she was drunk and slurring her words during a press conference, and a manipulated, misleading video that supposedly showed Biden fumbling as he presented a veteran with a Medal of Honor. Each of these videos circulated widely on Facebook and was either inconsistently labeled or not labeled at all.
The spread of deceptive and manipulated media remains a challenge for Meta. In December 2023, the company profited from advertisements that used “behind-the-scenes footage from a short film shot in Lebanon” to spread the debunked conspiracy theory that Palestinians injured during the Israeli military’s assault on Gaza are “crisis actors” who have faked their injuries.

In January, NBC News reported that “explicit, AI-generated Taylor Swift images” had proliferated on Instagram and Facebook. According to the report, a search for “Taylor Swift AI” on Instagram and Facebook returned “sexually suggestive and explicit deepfakes of Swift” days after the images first surfaced on X.

In February, Media Matters reported that Facebook and Instagram users have been promoting a misogynistic campaign called “#dignifAI,” initially launched by 4chan users. The campaign involves using AI to manipulate images of women so that they appear more modestly dressed, then posting the original and altered images side by side.
On February 6, Meta global affairs president Nick Clegg announced in a blog post that “in the coming months” the company would begin to label “images that users post to Facebook, Instagram and Threads” when it detects “industry standard indicators that they are AI-generated.”
Meta already applies “Imagined with AI” labels to images on its platforms generated with the company’s own AI feature; the planned change would extend that labeling to images created with other companies’ AI tools.
Clegg also explained that Meta “can’t yet detect” video or audio content generated using AI tools, but that it’s “adding a feature for people to disclose when they share AI-generated video or audio” so the company can then “add a label to it.”
In the meantime, the Oversight Board says it is “concerned about [Meta’s] Manipulated Media policy in its current form ... given the number of elections in 2024.”

Given the company’s inadequate policies against election misinformation and its history of inconsistently enforcing the policies it does have, that concern is warranted. Right-wing figures and social media users continue to spread and amplify election misinformation on Meta’s platforms, as well as on several others, and some are using manipulated media to do so. It’s crucial that the company heed the board’s call to quickly expand its manipulated media policy.