Hate speech and misinformation proliferate on Meta products, with 13,500 policy violations documented in the past year alone

As Meta pushes new products to shareholders next week, the company is losing control of the products it already has

Meta, the parent company of Facebook and Instagram, has repeatedly failed to keep users safe, even as its platforms have contributed to real-world violence and various other harms. Meta frequently boasts about its focus on content moderation policies, yet the company has prioritized new features like the metaverse, short-form video, and artificial intelligence-driven video recommendations, rather than addressing the issues of its current products by developing adequate policies and consistently enforcing them.

On May 25, Meta will hold its annual meeting of shareholders against a backdrop of a less active user base, its slowest revenue growth since its initial public offering, reduced earnings, and, as our latest study suggests, more risk for shareholders as its platforms continue to be plagued by issues that Meta refuses to adequately address, including misinformation, hate speech, and dangerous users.

Over the last year, Media Matters has regularly reported on Meta’s failure to enforce its community standards on Facebook and Instagram (by not identifying, labeling, or removing violations), and we have reported on its very narrow interpretations of these standards, which in some cases were not adequate to begin with.  

We compiled all of our reporting from the last year (May 1, 2021, through April 30, 2022) and found over 13,500 violations of Meta’s policies on Facebook and Instagram, with much of the violative content still active or not appropriately labeled.

Key findings of our study include:

  • In addition to former President Donald Trump’s posts that are still on Facebook despite his two-year suspension from the platform, there are nearly 10,000 posts and ads that allow Trump to evade his ban. They include ads from Trump’s joint fundraising committee, posts promoting Trump’s official statements, and posts containing livestreams of Trump’s misinformation-filled post-presidency rallies.
  • Even though Meta has fairly robust policies against COVID-19 and vaccine misinformation, Media Matters has identified nearly 1,500 violations of these policies.
  • Meta’s hate speech policy insufficiently protects transgender and nonbinary users, as well as users who speak languages other than English. Media Matters identified nearly 1,000 violations of Meta’s hate speech policy, including Instagram accounts promoting white supremacy and Facebook posts pushing anti-LGBTQ smears.
  • Meta’s labels on authoritative election information have failed to reduce election misinformation on the platform, and Media Matters has identified over 700 violations of Meta’s election policies, including entire Facebook groups dedicated to election misinformation, unlabeled posts with election misinformation, and posts promoting “Stop the Steal” and related rallies.
  • Along with inadequately addressing health and election-related misinformation, Meta has failed to properly label a wide array of other types of misinformation, including misleading or false content about Russia’s invasion of Ukraine and Judge Ketanji Brown Jackson. It has also allowed and profited from numerous ads containing this misinformation. Over the last year, Media Matters has identified nearly 500 violations of this policy.
  • Over the last year, Media Matters has identified roughly 30 violations of Meta’s dangerous individuals and organizations policy, including ban evasions on Facebook and Instagram and content promoting the QAnon conspiracy theory, which also falls under Meta’s policy.

Meta allowed Trump to maintain a presence on its platforms, despite his two-year suspension

    On June 4, 2021, Meta's Nick Clegg announced Trump's two-year suspension:

    Given the gravity of the circumstances that led to Mr. Trump’s suspension, we believe his actions constituted a severe violation of our rules which merit the highest penalty available under the new enforcement protocols. We are suspending his accounts for two years, effective from the date of the initial suspension on January 7 this year. At the end of this period, we will look to experts to assess whether the risk to public safety has receded.

    As pro-Trump rioters stormed the Capitol on January 6, Facebook removed two of Trump’s posts (for violating its community standards on dangerous individuals and organizations) and suspended his ability to post on Facebook and Instagram, publicly citing his use of Facebook “to incite violent insurrection against a democratically elected government.”

    Ultimately, Trump’s ban became a two-year suspension, even though his page had repeatedly posted content that incited violence or contained misinformation and should have been fully unpublished. In a previous study, Media Matters found multiple posts on Trump’s page that seemingly incited violence and nearly 1,000 posts with COVID-19 or election misinformation. By merely suspending Trump rather than unpublishing his page, Meta has allowed these posts to remain on the platform, where users can still interact with them, including posts containing misinformation and extreme rhetoric.

    Meta has allowed Trump to skirt the suspension and maintain a presence on Facebook via content from other pages, including posts and ads that appear to break Meta’s rule against content that is “in his voice.” Over the last year, Media Matters has identified nearly 10,000 posts and ads that evade the suspension, including:

    • Over 6,100 posts promoting Trump’s official statements by including photos of the statements, their full text, or links to his now-defunct website that housed them.

    • Over 3,600 Facebook and/or Instagram ads from the Save America Joint Fundraising Committee, which is Trump’s sole political vehicle.

    • 115 posts containing livestreams of Trump’s misinformation-filled post-presidency rallies.

    Here's a list of Media Matters content from the past year related to Trump’s suspension:

    • 3/10/22: Facebook is profiting from ads promoting Trump’s new social network, despite his suspension from the platform
    • 2/3/22: Facebook is allowing livestreams of misinformation-filled Trump rallies to earn tens of millions of views
    • 1/6/22: Trump still has a presence on Facebook one year after he was suspended for inciting violence at the Capitol
    • 11/3/21: Right-leaning Facebook pages have earned billions of interactions on election-related posts since the presidential election
    • 10/4/21: Facebook is letting Trump’s PAC run ads implying he is the “true president”
    • 8/12/21: Facebook has earned up to $220,000 in revenue by allowing Trump’s PAC to run ads, even though he is suspended from Facebook
    • 7/23/21: Facebook's responses to the Oversight Board are a sham
    • 7/6/21: Six months after the Capitol riot, Facebook is still letting Trump and his allies promote rallies filled with election misinformation
    • 6/25/21: Facebook is enabling Trump’s PAC to fundraise on the platform despite his two-year suspension for inciting violence
    • 6/21/21: Facebook still has a double standard for politicians
    • 6/4/21: Facebook’s two-year suspension of Trump proves the tech giant won't take seriously its responsibility for January 6
    • 6/4/21: Facebook should permanently ban Trump
    • 6/3/21: Trump’s failed blog shows how much he needs Facebook -- and the platform could give his account back
    • 6/2/21: Facebook keeps touting its labels, but data suggests labels actually amplified Trump’s misinformation
    • 5/11/21: Trump used Facebook to amplify Fox News and its personalities. Of course the network wants him reinstated.
    • 5/5/21: Facebook’s oversight board puts the power back in the platform’s hands to decide Trump’s fate. History tells us that’s dangerous.
    • 5/4/21: As Facebook’s oversight board decides the fate of Trump's account, he’s showing exactly which lies he’ll spread if reinstated

Meta failed to remove harmful health misinformation

    Meta’s community standards on harmful health misinformation state:

    We remove misinformation during public health emergencies when public health authorities conclude that the information is false and likely to directly contribute to the risk of imminent physical harm, including by contributing to the risk of individuals getting or spreading a harmful disease or refusing an associated vaccine.

    Meta also has additional policies for COVID-19 and vaccines, specifically noting that it removes “false information” that could be “discouraging good health practices,” including “claims about wearing a face mask” and claims “about the safety or serious side effects of COVID-19 vaccines.”

    Even though Meta has fairly robust policies against COVID-19 and vaccine misinformation, the company has failed to reliably enforce them. Media Matters has identified nearly 1,500 violations of these policies over the last year.

    Among these violations are:

    • Over 1,200 Facebook groups dedicated to sharing COVID-19 misinformation, opposing mask use, and/or promoting unsubstantiated treatments for COVID-19 like ivermectin.

    • Over 160 Facebook and Instagram posts with misinformation about COVID-19, masks, and/or vaccines, or that promoted events dedicated to opposing COVID-19 regulations.

    • At least 9 Instagram accounts for chronic vaccine misinformers.

    Media Matters content from the last year related to health misinformation on Facebook and Instagram:

    • 3/23/22: Why are anti-vaccine influencers on Instagram using the “Let’s Get Vaccinated” stickers?
    • 2/8/22: As YouTube and Google ban Dan Bongino for misinformation, Facebook profits from helping him promote the same false and sensational content
    • 1/24/22: Instagram’s link sticker feature is lining the pockets of some of the platform’s most prolific misinformers
    • 1/21/22: Various social media platforms are enabling known COVID-19 misinformers to promote an anti-vaccine march in Washington, DC
    • 11/11/21: Hundreds of Facebook groups are stoking outrage over school policies like masking and school curricula
    • 11/4/21: Facebook just gave group administrators access to more features, even as they abuse the platform and promote moderation evasion
    • 10/26/21: New data shows Facebook's groups problem goes way beyond “Stop the Steal”
    • 10/20/21: Instagram’s suggestion algorithm is promoting accounts that share misinformation
    • 9/28/21: Facebook is letting COVID-19 vaccine misinformation flourish in its comment sections
    • 9/22/21: Instagram is recommending weight loss gimmicks through the “Explore” page
    • 9/20/21: How one anti-mask activist leveraged TikTok and Facebook to spur local school protests across the country
    • 9/16/21: With its “cross check” program, Facebook knowingly allowed right-wing politicians and media personalities to abuse its platform — just to avoid bad press
    • 8/27/21: Facebook groups around the world are promoting unprescribed livestock medications for COVID-19, while the platform seemingly does nothing to stop them
    • 8/12/21: An anti-vaccine misinformation video has been viewed at least 30 million times on social media
    • 8/10/21: A new Plandemic-like misinformation video has earned tens of millions of Facebook engagements via streaming platforms
    • 8/10/21: As COVID-19 cases surge and schools assess safety measures for the new year, Facebook is allowing anti-mask groups to thrive
    • 8/9/21: Instagram Shopping is full of anti-vaccine merchandise
    • 7/29/21: A week ago Media Matters identified hundreds of anti-vaccine Facebook groups. Most of those groups are still active.
    • 7/20/21: Despite Facebook's COVID-19 promises, anti-vaccine groups are thriving

Meta failed to remove hate speech

    Meta’s community standards on hate speech state:

    We define hate speech as a direct attack against people — rather than concepts or institutions — on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease. We define attacks as violent or dehumanizing speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation. We also prohibit the use of harmful stereotypes, which we define as dehumanizing comparisons that have historically been used to attack, intimidate, or exclude specific groups, and that are often linked with offline violence.

    Meta has policies against some forms of hate speech and related policies against bullying and harassment, but these policies have clear gaps, such as requiring users to use the name on their ID in their profiles (and, consequently, forcing some trans and nonbinary users to use their deadnames). Meta also has abysmal capacity to detect hate speech in languages other than English.

    In addition to these policy gaps, the company has also struggled to enforce its existing hate speech policies, sometimes allowing bigotry, antisemitism, and white supremacy onto users’ feeds. In fact, Media Matters has identified nearly 1,000 violations of these policies over the last year.

    Among these violations are Instagram accounts promoting white supremacy and promoting harassment of teachers, and Facebook posts pushing the absurd smear that LGBTQ people are “grooming” children to be LGBTQ or to engage in sexual activity.

    Media Matters content from the last year related to hate speech on Facebook and Instagram:

    • 4/21/22: In March, “Libs of TikTok” was the top Twitter user amplifying anti-LGBTQ hate. Now, these attacks are exploding online.
    • 4/6/22: On Facebook, right-leaning pages are dominating discussion of Florida’s new anti-LGBTQ legislation
    • 2/18/22: Instagram is letting accounts promoting hate speech go unchecked
    • 2/9/22: A year ago, Facebook contributed to a military coup in Myanmar. Its response has been an appalling failure.
    • 11/9/21: Right-leaning Facebook pages earned nearly two-thirds of interactions on posts about trans issues
    • 10/29/21: As Facebook claims it's cleaning up hate speech, hateful content is rampant on the platform
    • 9/1/21: Amid the humanitarian crisis in Afghanistan, racist anti-refugee narratives are circulating on Facebook
    • 5/25/21: On YouTube, Facebook, and Twitter, The Daily Wire's Matt Walsh makes extreme attacks on trans kids
    • 5/21/21: The Daily Wire used Demi Lovato's nonbinary identity to attack gender-diverse people

Meta failed to remove or label election misinformation

    Meta labels content across Facebook and Instagram that has been rated false or partly false by a third-party fact-checker.

    During the 2020 U.S. elections, Meta relied on labels, including ones with warnings that the content was false or misleading and labels that linked to authoritative information, to combat the barrage of election misinformation on its platforms. It removed misinformation only if it was deemed to directly affect the voting process, advocate for violence related to voting and elections, or coordinate harm or interference.

    Media Matters, other researchers, and Meta’s own researchers all found that Meta’s labels were ineffective and did not prevent the spread of election misinformation. In fact, a previous Media Matters study found that Trump’s labeled posts had, on average, more than twice as many interactions as his posts overall. 

    After the January 6 insurrection, Meta committed to removing content with “Stop the Steal,” but failed to consistently enforce that policy. Over the last year, Media Matters has identified over 700 violations of this policy.

    Among these violations are Facebook groups with names promoting “Stop the Steal” or other similar language about a stolen election or alleged election fraud, unlabeled posts with election misinformation, and posts promoting “Stop the Steal” and related rallies.

    Media Matters reporting from the last year related to election misinformation and “Stop the Steal” content on Facebook and Instagram:

    • 2/8/22: As YouTube and Google ban Dan Bongino for misinformation, Facebook profits from helping him promote the same false and sensational content
    • 1/5/22: Facebook’s failures with election misinformation contributed to the deadly Capitol insurrection. A year later, the platform has failed to fix its problems.
    • 11/11/21: Facebook is allowing election misinformation about the NJ gubernatorial race to proliferate on the platform
    • 11/3/21: Right-leaning Facebook pages have earned billions of interactions on election-related posts since the presidential election
    • 11/1/21: Facebook is still profiting from ads spreading misinformation about the 2020 election
    • 10/26/21: New data shows Facebook's groups problem goes way beyond “Stop the Steal”
    • 10/8/21: Months after Facebook promised to keep election misinformation off its platform, it continues to thrive
    • 10/7/21: Facebook allowed users to share election misinformation about the sham Arizona “audit” without any fact-checking
    • 9/16/21: With its “cross check” program, Facebook knowingly allowed right-wing politicians and media personalities to abuse its platform — just to avoid bad press
    • 8/11/21: Here are the places airing Mike Lindell's elections lies at his “cyber symposium”

Meta failed to remove or label other misinformation

    Meta prohibits ads that “include content debunked by third-party fact checkers,” but for other content, its community standards state:

    We remove misinformation where it is likely to directly contribute to the risk of imminent physical harm.

    We focus on slowing the spread of hoaxes and viral misinformation, and directing users to authoritative information. As part of that effort, we partner with third-party fact checking organizations to review and rate the accuracy of the most viral content on our platforms (see here to learn more about how our fact-checking program works).

    As with Meta’s election misinformation policies, other content with misinformation is labeled rather than removed.

    Meta has failed to properly label misinformation about Russia’s invasion of Ukraine and Judge Ketanji Brown Jackson, and it has allowed ads containing such misinformation to continue gaining interactions on the platform. Over the last year, Media Matters has identified nearly 500 violations of these policies.

    Related Media Matters content from the last year:

    • 4/26/22: Tucker Carlson’s segments promoting pro-Russia talking points are spreading across Facebook
    • 3/31/22: On Facebook, right-leaning pages push smears about Supreme Court nominee Ketanji Brown Jackson
    • 3/22/22: Facebook is letting Russian and Chinese state-controlled media push the false Ukraine-US biolabs conspiracy theory
    • 3/21/22: No, actions to restrict Russia's social media access haven’t decreased right-wing engagement or inauthentic behavior
    • 3/18/22: Platforms' crackdowns on Russian propaganda continue to overlook Spanish-language content
    • 3/16/22: Facebook and Instagram have allowed and profited from advertising pushing the false Ukraine-US biolabs conspiracy theory
    • 3/3/22: Russia’s most notorious propagandist retains unrestricted access to social media while Russia invades Ukraine
    • 2/28/22: Kremlin-funded news outlet is mainlining Putin’s falsehoods to Spanish speakers
    • 2/10/22: On Facebook, ads for Tucker Carlson’s Hungary vs. Soros “documentary” have been viewed millions of times
    • 11/4/21: Facebook just gave group administrators access to more features, even as they abuse the platform and promote moderation evasion
    • 8/30/21: A Russian disinformation site outed by State Department last year is still on multiple social media platforms

Meta failed to remove dangerous individuals and organizations

    Meta’s community standards include a policy that does not allow “organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on Facebook.”

    Within this policy are specific provisions for militarized social movements, violence-inducing conspiracy networks, and groups banned for promoting hate, all of which it considers “Tier 3 entities.” The policy states:

    Tier 3 focuses on entities that may repeatedly engage in violations of our Hate Speech or Dangerous Organizations policies on-or-off the platform or demonstrate strong intent to engage in offline violence in the near future, but have not necessarily engaged in violence to date or advocated for violence against others based on their protected characteristics. This includes Militarized Social Movements, Violence-Inducing Conspiracy Networks, and individuals and groups banned for promoting hatred. Tier 3 entities may not have a presence or coordinate on our platforms.

    Pages, Groups, Events, and Profiles or other Facebook entities that are - or claim to be - maintained by, or on behalf of, Tier 3 entities are prohibited. Admins of these pages, groups and events will also be removed.

    Meta has taken action against dangerous individuals and organizations, but its actions often come too late, only after the entity has gained a following and/or spread widely across its platforms. For instance, Facebook banned content promoting the dangerous QAnon conspiracy theory only in October 2020, after interest in and interactions with QAnon content had soared and the movement had already been linked to numerous threats and acts of violence, embraced by political candidates, and even amplified by the former president.

    Over the last year, Media Matters has identified roughly 30 violations of this policy, including ban evasions on Facebook and Instagram (such as secondary pages for the banned outlet LifeSiteNews) and content promoting the QAnon conspiracy theory, which also falls under the policy.

    Media Matters content from the last year related to Meta’s dangerous individuals and organizations policy:

    • 8/11/21: QAnon shows and their hosts remain on Facebook and Instagram despite Facebook’s supposed crackdown on the conspiracy theory
    • 8/6/21: Facebook is profiting off ads promoting fringe platforms known for spreading disinformation and conspiracy theories
    • 6/22/21: Racist troll Owen Benjamin is evading bans from major social media platforms to continue spreading hate and bigotry
    • 5/7/21: Media Matters, GLAAD, Human Rights Campaign, and NARAL react to Facebook’s permanent ban of LifeSiteNews