In a March 6 blog post, Google and Alphabet CEO Sundar Pichai laid out the company’s plans for responding to the COVID-19 pandemic. Among Pichai’s announcements was a commitment to removing YouTube videos containing misinformation about preventing the novel coronavirus. This week, YouTube went a step further, announcing that it would begin showing fact checks within search results in the U.S., something it began doing last year in India and Brazil.
On the surface, it looks as though the company is taking big, positive steps to prevent the spread of misinformation during a public health emergency. Unfortunately, YouTube’s enforcement hasn’t kept pace with its ambition, and the result contributes to the swirl of conflicting information that the World Health Organization has dubbed an “infodemic.”
Nearly two months after Pichai’s blog post, YouTube remains a cesspool of COVID-19 misinformation.
Conspiracy theory YouTube channels like The Next News Network continue to thrive despite obvious violations of YouTube’s stated ban on COVID-19 misinformation. On April 16, Next News Network posted a video suggesting the novel coronavirus was a “false flag” attack meant to force people into getting “mandatory vaccines” and microchips. Only after the video received nearly 7 million views did YouTube pull it. Even then, other channels reuploaded the video, illustrating just how fruitless the platform’s moderation efforts can be. (YouTube eventually pulled dozens of the reuploads, but only after Media Matters reported on them.)
In early April, the Tech Transparency Project published a report finding not only that some YouTube videos claimed to have the recipe for a coronavirus “cure,” but that ads were running alongside those clips, meaning creators were earning revenue from them. YouTube had previously announced that it would approach monetization of virus-related content with caution to avoid financially supporting misinformation. It wasn’t until The Guardian reported on the project’s findings that YouTube removed any of the videos, and even then, several remained live.
One popular coronavirus conspiracy theory involves Microsoft co-founder Bill Gates. According to the theory, which is based on a misrepresentation of Gates’ past remarks, Gates wants to control and depopulate the world using microchips. This conspiracy theory has been floating around major conspiracy circles for months -- including in a video posted to the High Frequency Radio YouTube page on March 17. As of May 1, High Frequency Radio’s video remains live on YouTube.
On April 15, BuzzFeed reported on a conspiracy theory-riddled YouTube video titled “Dr. SHIVA Ayyadurai, MIT PhD Crushes Dr. Fauci Exposes Birx, Clintons, Bill Gates, And The W.H.O.” Among the video’s claims is the recommendation that people treat COVID-19 with vitamin C, something YouTube CEO Susan Wojcicki specifically cited as an example of content the site was removing during an April 19 appearance on CNN’s Reliable Sources. At the time of BuzzFeed’s story, the video, which was posted on April 10, had been viewed more than 6 million times. As of May 1, it remains live on YouTube with more than 7.2 million views.
Recently ousted Fox News personalities Diamond & Silk have repeatedly spread outrageous COVID-19 conspiracy theories on YouTube. During a March 29 livestream, they claimed that the number of deaths attributed to COVID-19 was being inflated to make President Donald Trump look bad. During an April 10 livestream, the duo claimed that Gates was using the virus to push for population control and that 5G cell towers were responsible for the virus’s spread. On an April 20 livestream, they suggested that the World Health Organization could control the virus with an on/off switch. All three of those videos remain live on YouTube as of May 1.
Tech companies excel at announcing policies designed to protect their users, but they’ll likely remain a disaster at enforcing them.
YouTube has had a robust set of creator guidelines for years, but its record of enforcing those policies has been less than stellar. For instance, there’s a history of LGBTQ YouTubers having their videos wrongfully hidden, demonetized, or restricted. At the same time, the site previously determined that slurring Carlos Maza as a “lispy queer” didn’t violate its anti-harassment policies. (Maza previously worked at Media Matters.) Enforcement has been hit-or-miss, even in cases where creators are very clearly in violation of community guidelines.
More recently, YouTube has shifted toward greater reliance on automated content moderation, which comes with both benefits and drawbacks. The company announced the change as part of its plan to reduce the number of people who need to travel into a physical office during the pandemic. In a blog post, Google notes that though the company has invested heavily in automated moderation infrastructure for YouTube, “they are not always as accurate or granular in their analysis of content as human reviewers.” In other words, some content that violates YouTube’s policies will remain live, and some content that doesn’t will be mistakenly removed.
It’s not a perfect system, but it is a refreshingly honest acknowledgment of how flawed online content moderation can be. Unfortunately, honesty doesn’t change the fact that one of the world’s biggest and most consequential media platforms is fueling public confusion about such a serious topic -- the very “infodemic” the World Health Organization describes as “an over-abundance of information -- some accurate and some not -- that makes it hard for people to find trustworthy sources and reliable guidance when they need it.”
While it’s welcome news that YouTube is taking steps to direct users toward authoritative sources, it doesn’t make up for the fact that years of failing to live up to its lofty moderation goals have left the world confused and struggling to separate accurate information from inaccurate.
Writing for The Guardian, Julia Carrie Wong explains what happens when you search the term “coronavirus” on Google. Rather than ads, product recommendations, or untrustworthy websites that just happen to be optimized for that particular query -- all of which often surface in Google results -- you’ll receive links to mainstream news outlets, government announcements, and information from trusted nongovernmental organizations. “Algorithms and user-generated content are out; gatekeepers and fact checking are in,” she writes. “Silicon Valley has responded to the ‘infodemic’ with aggressive intervention and an embrace of official sources and traditional media outlets.”
Unfortunately, she notes, these steps come far too late to stem the spread of misinformation, even in this specific emergency. An entire culture of conspiratorial thinking has built up under the systems tech companies put in place, and their past reluctance to enforce their own guidelines -- or to grapple with the social consequences of not having adequate guidelines (no site wants to outright say that it welcomes hate speech, threats, or misinformation) -- has driven the public to a dangerous place. If someone has spent years watching someone like Alex Jones spread conspiracy theories about all manner of things, why would they suddenly brush him off as untrustworthy and instead take advice from a mainstream news outlet?
“[Social media companies have] built this whole ecosystem that is all about engagement, allows viral spread, and hasn’t ever put any currency on accuracy,” University of Washington professor Carl Bergstrom told The Guardian. “Now all of a sudden we have a serious global crisis, and they want to put some Band-Aids on it. It’s better than not acting, but praising them for doing it is like praising Philip Morris for putting filters on cigarettes.”
Just as one could argue that the White House failed to adequately prepare for a public health emergency, the same can be said of YouTube. There is a human cost to the commodification of misinformation, and it’ll take a lot more than scattershot video takedowns or fact check boxes to make this right.
The platform needs to reinvent itself as a place that incentivizes facts and legitimacy over engagement and other social media metrics. The COVID-19 pandemic was a crisis paired with an infodemic; the next one doesn’t have to be.