Every so often, Twitter rolls out a policy that sounds genuinely good, winning praise from the platform’s fans and critics alike. Its decisions to restrict the spread of false and potentially harmful COVID-19 information, crack down on disinformation about voting, label manipulated media, and ban political advertising ahead of the 2020 election were common-sense moves cheered by many.
But the problem with Twitter’s policies has always been that the company seems unable or unwilling to actually enforce them with any consistency.
Over the weekend, a video showing House Speaker Nancy Pelosi (D-CA) slurring her words during a press conference spread across social media. The video, posted on July 30 by a Facebook user named Will Allen, was captioned, “This is unbelievable, she is blowed out of her mind, I bet this gets taken down!” The video had been slowed down and altered to make it look and sound as though Pelosi was drunk. The original unedited video — from a May press conference — debunks the implication of the edited version. (And as it so happens, Pelosi doesn’t even drink.) A similarly altered video of Pelosi went viral in May 2019.
After CNN reached out to Twitter and other platforms about the latest doctored video, the platforms removed copies of it. Case closed, right? Not quite.
Though Twitter removed the clip in at least one instance, other copies remain live on the platform, highlighting the confusing and inconsistent nature of its content moderation.
One version of the video carried Twitter’s “manipulated media” label, which linked to a page explaining how the video had been altered, along with fact-checking articles. Other versions, shared by multiple users associated with the QAnon conspiracy theory, remain on Twitter, some with the manipulated-media label and some without.
Twitter’s manipulated-media policy was announced in February, but it remains fairly opaque. The blog post announcing it says moderators weigh three questions when deciding what to do about an account sharing edited and misleading videos: Has the media been significantly altered? Was it shared in a deceptive way? And is it likely to impact public safety or cause serious harm? Even then, working out what is actually supposed to happen when an account violates the policy remains unnecessarily labyrinthine, as the chart below, from Twitter’s blog post, demonstrates.
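For readers who can’t see the chart, its branching logic can be paraphrased as a short sketch. This is an illustration of the three published criteria and the chart’s escalating outcomes, not Twitter’s actual code; the function name, parameters, and outcome strings are assumptions for the sake of the example.

```python
# A rough paraphrase of the decision chart in Twitter's February 2020
# blog post on synthetic and manipulated media. Not Twitter's actual
# code: names, inputs, and outcome strings are illustrative.

def likely_enforcement(significantly_altered: bool,
                       shared_deceptively: bool,
                       likely_to_cause_harm: bool) -> str:
    """Map the three policy questions to the chart's escalating outcomes."""
    if not significantly_altered:
        return "no action"
    if likely_to_cause_harm:
        # Harm is the only factor in the chart that puts removal on the table.
        return "likely to remove" if shared_deceptively else "may remove"
    # Manipulated but not deemed harmful: labeling is the typical outcome.
    return "likely to label" if shared_deceptively else "may label"

# Example: the Pelosi clip was slowed down (altered) and presented as
# genuine (deceptive), so at minimum it should earn the label.
print(likely_enforcement(True, True, False))  # -> "likely to label"
```

Even granting that paraphrase, the chart leaves the hard part, how moderators judge deception and harm in the first place, entirely undefined, which is where the inconsistency on display this weekend creeps in.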