Instagram chief Adam Mosseri’s interview on The Breakfast Club was full of evasive half-truths
Mosseri made several eyebrow-raising statements about the platform in the interview
Written by Spencer Silva & Camden Carter

On July 27, the hosts of the popular radio show The Breakfast Club grilled Instagram head Adam Mosseri about how his platform deals with hate speech, medical misinformation, censorship, and a slate of other issues. Despite being asked specific questions, Mosseri mostly punted, offering vague truisms instead of addressing the platform’s many shortcomings. Below are some of Mosseri’s most lackluster moments from that interview.
Claim: Instagram removes COVID-19 and anti-vaccine misinformation

Mosseri’s claim that Instagram removes COVID-19 and anti-vaccine propaganda from its site is easily disprovable. Not only does anti-vaccine misinformation flourish on the platform, but you can also buy anti-vaccine merchandise there. And when the platform does take action against bad actors, they often return within days and spew the same garbage as before. Lather, rinse, repeat.
In just the past week, for instance, Sherri Tenpenny, an anti-vaccine zealot and member of the “Disinformation Dozen” who has been kicked off the platform several times, has used a ban-evading account to compare mass vaccination against COVID-19 to Nazi Germany and to imply that Congress is angling to put unvaccinated people into concentration camps.
And while it’s true that Instagram allows fact-checkers to dispute false and misleading claims and add warning labels to them, it’s not clear whether such labels are effective at stemming misinformation. Similar interventions don’t appear to have worked on Facebook, for instance.
Claim: Instagram is doing something about hate speech
In recent months, Instagram has come under fire for its inability to shield Black English footballers from a tsunami of racist abuse on its platform. In the interview, Mosseri acknowledged that Instagram has a spotty track record with hate speech. He also said, “I don’t think it’s good enough for us to erase hate speech and racism from our platform as much as possible. I think we should be a force to reduce racial inequality in the world.”
By that standard, Instagram and every other social media platform fail miserably. In fact, one recent study by the Center for Countering Digital Hate found that Facebook, Instagram, TikTok, Twitter, and YouTube failed to take action on 84% of the flagrantly antisemitic posts it reported during a two-month span.
It’s fair to question Instagram’s commitment to being a force for racial justice when it can’t even keep some of its most notorious bigots off its platform.
Claim: Instagram is transparent about its policies
In reality, Instagram’s policies aren’t clear at all. In fact, they are so unclear that Instagram has had to publicly clarify what they mean (and even then, users often remain confused). A BBC review showed that Instagram and 14 other social media sites “had policies that were written at a university reading level,” even though children as young as 13 were allowed to use those platforms.
Despite Mosseri’s talk about transparency regarding Instagram’s content moderation policies, users are often left guessing why their content has been removed. Facebook, Instagram’s parent company, has also been criticized for its vague policies. Even Facebook’s Oversight Board, whose members’ salaries are funded by Facebook itself, has called the company out for its lack of transparency.
Claim: The problems on Instagram are a result of confirmation bias
While confirmation bias is certainly “a thing,” it is not the thing driving Instagram users toward content that they are predisposed to agree with. It’s the platform’s algorithm. In Instagram’s own words, the goal of its recommendation feature is to “make recommendations that are relevant and valuable to each person who sees them. We do this by personalizing recommendations, which means making unique recommendations for each person.”
By selecting recommended content for users based on their perceived interests, the platform actively creates the echo chamber. We have already seen this dynamic play out to harmful effect in the past.
It’s disingenuous of Mosseri to suggest that this intentionally developed feature of the platform is similar in nature or impact to the human propensity for confirmation bias, especially since the subject of Charlamagne tha God’s prompt was rampant misinformation.
Claim: Instagram is too big to monitor
While offering a perfunctory gesture toward the need for platform accountability, Mosseri tacitly admitted what’s already painfully obvious to anyone who follows Facebook and its subsidiaries: No amount of new features, updated policies, or monitoring technology powered by artificial intelligence has come close to adequately keeping people safe from hate speech and disinformation on any of the company’s platforms. If Facebook and Instagram actually want to mitigate harm on these platforms, the first step would be to architect a business model that doesn’t profit from hate and misinformation.