As Facebook claims it's cleaning up hate speech, hateful content is rampant on the platform
Written by Rhea Bhatnagar & Clara Martiny
Amid its recent scandals, Facebook is touting its weak attempts to enforce its community standards in a bid for positive press, boasting that it has cracked down on hate speech. But Media Matters found multiple examples of posts that seemingly violate the platform’s hate speech rules, as well as posts containing hateful speech that isn’t covered under the platform’s opaque and inadequate policy.
Facebook claims it does not allow hate speech and removes it when found, defining it as “a direct attack against people” on the basis of “protected characteristics,” including race, ethnicity, national origin, disability, religion, sexual orientation, gender identity, and more.
On October 17, Facebook Vice President of Integrity Guy Rosen published a blog post claiming that the prevalence of hate speech on the platform had fallen nearly 50% between July 2020 and June 2021. In the post, Rosen called prevalence, a measure of how often violative content is viewed, “the most important metric” related to hate speech, a claim that is itself debatable.
Rosen claimed that between April and June 2021, the prevalence of hate speech was 0.05%, meaning that there were 5 views of hate speech for every 10,000 views of content on Facebook. This number can’t be independently verified, and Facebook’s own analysts have expressed concern that the company isn’t taking down enough hateful content. In fact, internal documents from March 2021 reportedly reveal that the platform takes action on only 3% to 5% of content with hate speech, leaving 95% or more of it unaddressed.
It’s easy to find posts and comments that directly violate the platform’s hate speech policy. Groups such as “WHITE LIVES MATTER,” “Confederate live’s matter trump 2024,” and “Black Lives Matter is A Terrorist Organization” uphold white supremacist ideas and contain numerous unchecked violations, including violent comments such as threats to “kill them all.”
While Facebook’s hate speech policy is often vague, it does contain specific provisions, such as a ban on comparisons to “animals that are culturally perceived as intellectually or physically inferior.” Yet Media Matters found many posts that made such comparisons, calling Black people “swines” and “subhuman” and claiming they will “never rise in intelligence to that of all other races.”
When it comes to hateful and bigoted content about the LGBTQ community, the policy does not comprehensively protect against the kinds of attacks we found in our research, and the parts of the policy that attempt to do so are vague and ambiguous.
In a private anti-trans group of “gender critical women” that pushes trans-exclusionary radical feminism, the nearly 5,000 members are exposed to content that mocks LGBTQ families and misgenders trans and nonbinary people. One post targets singer Demi Lovato, who recently came out as nonbinary; its comment section of more than 300 comments misgenders Lovato and attacks their weight, appearance, and mental health (unsurprisingly, Facebook has a history of poorly moderating comment sections).
Other anti-LGBTQ groups, such as “Transgender sport and the annihilation of woman” and “Americans Against Pride Month Group,” continually misgender trans people and refuse to acknowledge their gender identity or preferred pronouns. Yet it is unclear whether these groups fall under Facebook’s hate speech policy, as the policy’s only direct mention of transgender or nonbinary people states that it is a violation to post “dehumanizing” content that refers to them as “it.”
Despite Rosen’s claims, Facebook is not doing nearly enough to enforce or strengthen its policy, as hateful content is still pervasive on the platform.