Content warning: This article contains examples and descriptions of hate speech.
Media Matters has identified several Instagram accounts dedicated to generating hate speech that are accumulating significant followings. We reported five of these accounts to Instagram via the app’s reporting channel, and the platform informed us that four of them did not violate its community guidelines. As of publishing, the fifth account is still under review. The continued presence of these accounts shows that Instagram’s reporting channels are inadequate and that it is easy for users to circumvent its current content moderation policies. Meta, Instagram’s parent company, has once again fallen short of its promises to improve its detection and removal of extremist content.
Instagram has long been aware of the prevalence of hate speech on the platform and has introduced new features to try to address the issue. While Meta has publicly claimed it's working to expand its understanding of and policies around hate speech on its platforms, it has been slow to evolve. It was not until 2019 that the company published a blog post explaining its new understanding of the overlap between white nationalism and white supremacy and began including these topics in its policies.
Meta’s current policy on hate speech prohibits users from posting hateful content, defining hate speech as “a direct attack against people — rather than concepts or institutions — on the basis of what we call protected characteristics.” However, Media Matters has identified several accounts that violate these policies but remain active on Instagram. At least one of these accounts has accumulated upward of 55,000 followers, and some have been on the platform since 2020.
The language in Meta’s reporting policies for Instagram is predominantly centered on individual pieces of content and ignores that a narrative of hate can be created not just through an account's individual posts, but also through its overall ethos: the content it shares, the comments from other users, its bio, handle, and name. Very few of these accounts post individual content (captions, videos, or images) containing hate speech that explicitly violates Instagram’s policies. Rather, they post or repost content that develops a narrative of hate, and they encourage followers to interpret it that way. For example, one account with over 17,000 followers claims it is “dedicated to [the] showcasing and appreciation of Jewish accomplishments and prominent Jewish figures.” At first glance, the account seems to have a genuine and positive intent. But a closer look reveals that this account is satirical and deeply harmful. Each post showcases a “prominent Jewish figure,” and the account uses these posts to weave the antisemitic “puppet master” conspiracy theory.
With many of these accounts, the comment sections are typically where the most explicitly atrocious language can be found. Some accounts post screengrabs of other content, such as an article headline, a TikTok video, or a Tinder profile, with little or no caption. Followers then sound off in the comments, mocking or spewing hate at the subject of the post. For example, a post from one of these accounts that's simply a screenshot of an article about a prominent trans celebrity garnered hundreds of transphobic comments, including deeply personal attacks on the celebrity’s identity, body, and mental wellness. In some cases, the volume of comments is so high that it is virtually impossible to report all of them.
To avoid comment moderation, users often use code words or phrases, intentional misspellings, or emojis. While this is not a new phenomenon, Instagram is clearly still struggling to manage it, and users continue to find ways to manipulate the app so they can post explicit hate speech. We found several examples on these accounts of users replying to one another's comments with single letters, working together to spell out a slur. While a single letter would not seem harmful as an individual comment, the message is clearly hateful when viewed as a whole.
Comments violating Instagram’s policies are just one aspect of the problem. The accounts Media Matters identified craft their narrative in such a way that they don’t need to rely on user comments. For example, one account, which has turned off its comments section, posts exclusively about local violent crime cases. Based on the account’s individual posts and captions, it does not appear to violate Instagram’s policies. However, the account, which has accumulated over 11,000 followers, is pushing a clearly racist narrative unchecked by Instagram, as each post is centered exclusively on cases alleging that Black people have committed violent crimes against white people.
After Media Matters discovered these accounts, we followed the appropriate prompts to report five of them to Instagram under the “hate speech” category. The platform responded with notifications saying it would not take any action against four of them. As of publishing, one account is still in the “review” stage, pending a decision. One of the reported accounts has since disappeared, after remaining on the platform for at least eight months and accumulating over 23,500 followers.
The ways these accounts operate make it difficult to effectively report them, but Instagram’s reliance on user reporting is flawed to begin with. The platform has acknowledged that while it uses artificial intelligence to detect violative content, it also partially relies on users reporting such content when they come across it. While user reporting may help to address issues around harassment and bullying, it is significantly less effective when it comes to accounts that exist to foster a bigoted bubble of like-minded communities, such as the ones highlighted in this piece. While Instagram gives users features to hide, block, and report violative content, its algorithm also recommends content it believes users will like. This means the users who are most likely to report content containing offensive language or hate speech are the least likely to actually encounter it on the platform. Conversely, the users who like such content, and thus are least likely to report it, are the most likely to encounter it.
This is not the first time Instagram has been caught failing to moderate this type of content. After an incident last summer in which Black British footballers were harassed with racist emojis and other hate speech in comments, Instagram was forced to acknowledge its role in the problem. When the BBC asked platform head Adam Mosseri about it in July 2021, he stated, “The issue has since been addressed.”