Update (6/16/22): Following publication, it appears 22 of the accounts we identified as posting hateful content were removed from Instagram. However, at least 8 remain on the platform and at least 11 of the removed ones have already returned with new accounts. Additionally, some of the remaining accounts in the network promoted these new accounts, helping several regain as many followers as they previously had, if not more.
A network of hate-fueled accounts is thriving on Instagram
These accounts use the platform and its features to spread their racist ideology
Written by Camden Carter
Research contributions from Madelyn Webb
Media Matters has identified a network of Instagram accounts that collaborate and coordinate activity to remain on the platform and promote overt racism. These accounts -- some of which self-identify as “racistgram” -- coalesce around white supremacy and are fueled by the “great replacement” conspiracy theory that has inspired multiple racial attacks, including the deadly, racist mass shooting in Buffalo, New York, over the weekend.
Media Matters has reported on hate speech on Instagram before. One of the accounts identified in our previous reporting on Instagram’s poor moderation is affiliated with a website that was cited repeatedly in the alleged Buffalo shooter’s online writings. As of publishing, the account remains on Instagram -- where it has been since at least June 2020 -- with over 18,500 followers. Many of the accounts identified within this “racistgram” network follow this larger account, which pushes the “puppet master” conspiracy theory.
How these accounts operate
While the theme and tone of each node of this network vary, these accounts generally cloak their commentary in satire, irony, and in-group jokes, often to the point where the true meaning would be lost on an outside viewer. Other accounts openly discuss strategies to increase the white population, lament supposed Jewish control, and discuss the role of their Instagram posting in the larger movement.
The accounts in this “racistgram” network typically make minimal effort to tailor their content to avoid moderation by Instagram and are repeatedly removed from the platform. But the users behind these accounts are able to return to Instagram simply by making a new account and having their new handle circulated through the community, using the existing network to promote themselves and regain lost followers. One account Media Matters identified in this network claimed that it has reemerged on Instagram at least 15 times under new handles.
These accounts do not just ambiently fester on Instagram -- they use the platform and its many features with intention. Instagram Live and Stories allow for more ephemeral, and therefore harder to moderate, content. Additionally, accounts sometimes include links in their bios or use the link sticker in order to organize on other platforms, link to backup Instagram accounts, and even sell merchandise. (The link sticker is a feature that allows users to put hyperlinks in Instagram Stories, and it has a track record of misuse.)
While these features are undoubtedly helpful for creators and influencers, the app’s appeal for these racist accounts likely lies in a more fundamental component of the platform -- its recommendation algorithms. Meta claims that its “Recommendations Guidelines are designed to maintain a higher standard than our Community Guidelines,” but Media Matters has reported on how features like suggested accounts and the “Explore” page recommend violative content.
When discussing the best way to “race pill” other Instagram users, two accounts within the racistgram network noted that, while Instagram tries to prevent its algorithms from leading people to their content, they can circumvent this effort by developing connections with more palatable accounts: “The way the algorithm works, we just have to make these connections,” one said. “Once we start making these connections to them, and we basically just link ourselves to them and the algorithm. … We just pretty much have to tie ourselves into them on the algorithm, but once we do, it's gonna be a trailblazer, because it's not hard.”
Their strategy relies on using the “infrastructure” they have created through this Instagram network, and they acknowledge that “it's going to take multiple pages” to introduce other users to their ideology. They describe the need for pages that have a wide range in how they project racist sentiments, saying they all have a role in the migration of users to the most overtly racist pages in their network.
Coordinated campaigns
In addition to sharing racist and antisemitic content, these accounts work together to orchestrate coordinated campaigns through Instagram. In April, one of the accounts spearheaded an “antisemite pride month,” during which the network conducted coordinated harassment, sometimes referred to as “raids,” on social media accounts belonging to Jewish users and Jewish organizations. They also worked to spread antisemitic content and celebrated Hitler’s birthday.
Media Matters reported to Instagram the initial post announcing the racist intentions for the month -- a post which the account owner later noted was their most popular to date. Instagram reviewed the report and responded, stating that the company did not remove the post because the “photo likely doesn’t go against our Community Guidelines.”
We then requested an appeal to this decision. As of publishing, there has been no response to this appeal, and the request is still in the “received” stage.
Later in the month, Media Matters attempted to report the account leading the “antisemite pride month.” Instagram again decided not to remove the reported content.
In early May, after the “antisemite pride month” campaign had been allowed to flourish on the platform, the account finally disappeared from Instagram, though the cause of its disappearance is unclear.
The coordinated harassment, or “raids,” performed by this network is not limited to “antisemite pride month.” These users repeatedly share posts from other accounts to their stories and direct their followers to “spam” or raid the comments with racial slurs and other forms of hate speech.
The accounts also collaborated to develop content around the brief #DarkMAGA online trend, which celebrates the idea of former President Donald Trump returning to politics with more anger and violence toward enemies of his supporters.
Instagram has long claimed that it is working to address issues of hate speech and violent rhetoric on its platform. Adam Mosseri, the head of Instagram, even went as far as to say in an interview last summer, “I don't think it's good enough, for instance, for us just to remove hate speech and racism from our platform as much as possible. I think we should be a force to reduce racial inequality in the world.” And yet, the company continues to roll out new features in an attempt to maintain relevance while clearly failing to protect its users by upholding its own policies.