Facebook and its founder Mark Zuckerberg have been boasting about the platform’s election integrity policies and enforcement, with Zuckerberg claiming the platform’s “systems performed well” during the election cycle and the company releasing a new report promoting its efforts to remove and label misleading or harmful content. But Media Matters’ extensive reporting on the spread of election misinformation across the platform, including instances of Facebook profiting from ads containing misinformation, contradicts those claims.
In October 2019, Facebook laid out initiatives “to help protect the democratic process,” including efforts to fight foreign interference, increase transparency, and reduce misinformation. The platform subsequently implemented election-related policies for content and ads, which experts warned were insufficient and contained loopholes. These policies included a ban on new political ads one week before Election Day, a ban on all political ads following Election Day, labels on posts containing misinformation, and a voter information center. As predicted, the measures did little to limit the organic reach of harmful election misinformation, much of which came from President Donald Trump himself.
The Senate Judiciary Committee held a hearing with Zuckerberg and Twitter CEO Jack Dorsey on November 17, during which both CEOs outlined their companies’ election-related policies and enforcement. Notably, Zuckerberg boasted that Facebook’s “systems performed well” and that he was “proud of the work” the company had done despite the challenges posed by the pandemic.
Two days after Zuckerberg promoted Facebook’s election-related policies to the committee, Facebook released its latest enforcement report, boasting that between March 1 and Election Day it removed more than 265,000 pieces of content in the U.S. for voter interference and labeled 180 million pieces of content that had been debunked by fact-checkers. Facebook did not report how much election misinformation it missed, but independent reports have indicated that the platform removed and labeled only a fraction of the COVID-19 misinformation it hosted, suggesting the same may be true of election misinformation. The effectiveness of the labels is also unclear: Facebook claimed that 95% of people do not click through to see what is behind a warning label, but internal data reportedly shows that labels reduce reshares of flagged content by only 8%.
Despite these findings and Zuckerberg’s praise of Facebook's performance, reporting on election misinformation on the platform tells another story. Media Matters, along with other experts and journalists, has extensively documented Facebook’s failure to stop the spread of election misinformation this year.
Since January, Media Matters has reported numerous instances of election misinformation spreading on Facebook. We’ve reported on: Facebook profiting from pro-Trump ads containing hate speech and misinformation, including voting misinformation; the spread of election misinformation and voter intimidation across the platform, particularly within public and private groups; right-wing media earning millions of interactions on election-related posts, including posts with election misinformation, while decrying censorship from tech platforms; and right-wing media using Facebook to spread talking points, conspiracy theories, fearmongering, and misinformation.
Here's a list of our reporting:
Facebook profited from Trump campaign and pro-Trump PAC ads containing misinformation and hate speech
Some of these ads contained election misinformation, such as false claims of voter fraud and attacks on mail-in ballots.
Election misinformation and intimidation, including false claims of voter fraud, spread across the platform, particularly within public and private groups
Right-wing media earned millions of interactions on election-related Facebook posts and posts with misinformation
Right-wing media used Facebook to spread their talking points, conspiracy theories, and misinformation