Update (10/13/23): This article has been updated with additional examples.
Social media companies Meta, YouTube, and Twitter are backsliding on their efforts to combat misinformation, capitulating to right-wing pressure and allowing the same sort of dangerous content that has previously led to real-world harm.
Social media platforms’ failures to moderate false and dangerous content during the 2020 election contributed to the January 6 insurrection. Following the violence at the Capitol, platforms including Facebook, Instagram, Twitter, and YouTube suspended former President Donald Trump from posting and developed new policies to curb dangerous falsehoods. The platforms have regularly struggled to enforce moderation policies, but the changes helped prevent some of the spread of harmful misinformation during the 2022 midterm elections.
Both before and after platforms made these changes, right-wing figures have repeatedly and relentlessly claimed that there is an unfair bias against them online. Media Matters and others have repeatedly debunked this claim, with a recent study showing that right-leaning Facebook pages earned more engagement than left-leaning or ideologically nonaligned pages in 2020 and 2021.
Despite this reality, platforms have actually bent their rules and given preferential treatment to right-wing media and politicians. In the lead-up to another presidential election and amid recurring election misinformation and conspiracy theories, platforms are further capitulating to right-wing pressure — loosening content moderation policies, downsizing content moderation and safety teams, and reinstating accounts known for pushing misinformation.