Instagram chief Adam Mosseri's interview on The Breakfast Club was full of evasive half-truths

Mosseri made several eyebrow-raising statements about the platform in the interview

Written by Spencer Silva & Camden Carter

On July 27, the hosts of the popular radio show The Breakfast Club grilled Instagram head Adam Mosseri about how his platform handles hate speech, medical misinformation, censorship, and a slate of other issues. Despite being asked specific questions, Mosseri mostly punted, offering vague truisms instead of addressing the platform's many shortcomings. Below are some of Mosseri's most lackluster moments from that interview.
Claim: Instagram removes COVID-19 and anti-vaccine misinformation

From the July 27, 2021, edition of iHeartRadio’s The Breakfast Club
ANGELA YEE (CO-HOST): Let's discuss the election first, right, because as you know there was a lot of fake news that was going around and — so how did you guys, I know Facebook in particular — because you were at Facebook at the time — there were a lot of issues with Facebook, with fake news. And then I saw that Instagram is talking about doing, like, Instagram for kids. You know, how do you guys take responsibility for your platforms, with COVID, with fake news and things like that happening — making sure that that information doesn't get spread out?
ADAM MOSSERI (HEAD OF INSTAGRAM): So do you want to talk — I'll talk about any of them, but do you want to talk about the 2016 election or 2020 election, or both elections? Because we can go in a bunch of different directions.
YEE: Let's start with 2016.
MOSSERI: OK, so maybe backing up even a little further, social media as a technology isn't good, and it isn't bad. It just is. And social media specifically is a great amplifier in a lot of ways. And it can amplify good and it can amplify bad. So it is our responsibility to do all we can to amplify the good and mitigate the bad. And you see both spread on the platform. You saw the Black Lives Matter movement spread on social media. You also saw #MeToo spread on social media. But you also see things like misinformation spread on social media. And so we try to figure out all the different things that we can do to identify problems and address them and then also rethink the core of what we do and how we do it to create better outcomes. And that work never ends.
And so, I mean, you brought up a number of different things there. But on misinformation specifically, what we do is we work with third-party fact-checkers — so people who do this for a living, who do this for publications. And we have them — we give them access to what's shared on the platform, and they can dispute things. They can say, like, “This isn't true and here's a link to the why.” And when that happens, we reduce the spread of that, we label things, and we give people links to good information. But we don't take it off the platform entirely unless there's a safety risk. So for things like COVID-19 or vaccine-related misinformation we just take it off the platform entirely.
Mosseri’s claim that Instagram removes COVID-19 and anti-vaccine propaganda from its site is easily disprovable. Not only does anti-vaccine misinformation flourish on the platform, but you can also buy anti-vaccine merchandise there. And when the platform does take action against bad actors, they often return within days and spew the same garbage as before. Lather, rinse, repeat.
In just the past week, for instance, Sherri Tenpenny, an anti-vaccine zealot and one of the "Disinformation Dozen" who has been kicked off the platform several times, used a ban-evading account to compare mass vaccination against COVID-19 to Nazi Germany and to imply that Congress is angling to put unvaccinated people into concentration camps.
And while it’s true that Instagram allows fact-checkers to dispute claims and add warning labels to false and misleading posts, it’s not clear whether those labels are effective at stemming misinformation. Similar interventions don’t appear to have worked on Facebook, for instance.
Claim: Instagram is doing something about hate speech
From the July 27, 2021, edition of iHeartRadio’s The Breakfast Club
CHARLAMAGNE THA GOD (CO-HOST): I think social media sometimes protects bigots though. It protects bigots like in regard to racism, homophobia, because I've posted videos of, like, racists getting punched in the face, right, for blatantly being racist. And then Instagram will remove it.
ADAM MOSSERI (HEAD OF INSTAGRAM): We definitely make mistakes. We also take a lot of flak for letting people say a lot of crap that we don't necessarily agree with. In general, we're going to try and bias towards letting people say what they want to say on the platform. And we try to only take content down when there's a safety risk — so whether it’s, you know — like I said before — hate speech, violent content, even nudity. That said, we do make mistakes. Sometimes we don't take things down that we should take down. Sometimes we take stuff down that we shouldn't. We're getting better over time. Where it gets particularly hard is issues like hate speech and racism, where the context really matters. And we're not as good and we end up making more errors. But, you know, it’s tough. There's also a lot of gray area. So for instance, what someone can say about you as a public figure —
CHARLAMAGNE THA GOD: Oh I hate that.
MOSSERI: — is different than what people can say about, you know, the average person.
In recent months, Instagram has come under fire for its inability to shield Black English footballers from a tsunami of racist abuse on its platform. In the interview, Mosseri acknowledged that Instagram has a spotty track record with hate speech. He also said, “I don't think it's good enough for us to erase hate speech and racism from our platform as much as possible. I think we should be a force to reduce racial inequality in the world.”
By that standard, Instagram and every other social media platform fail miserably. In fact, one recent study by the Center for Countering Digital Hate found that Facebook, Instagram, TikTok, Twitter, and YouTube failed to take action on 84% of the flagrantly antisemitic posts it reported during a two-month span.
It’s fair to question Instagram’s commitment to being a force for racial justice when it can’t even keep some of its most notorious bigots off its platform.
Claim: Instagram is transparent about its policies
From the July 27, 2021, edition of iHeartRadio’s The Breakfast Club
ADAM MOSSERI (HEAD OF INSTAGRAM): You can't say anything about a public figure though. You can just say more, right. So you can definitely say you hate someone's music, but you can't attack them personally or call for violence against them. So the line is —
CHARLAMAGNE THA GOD (CO-HOST): It happens all the time though!
MOSSERI: Yeah. And we take content down all the time, too. Not that we get it all. But the other thing is that these policies, which is really where the most important questions are answered, are also always evolving, right. The world changes. How people communicate changes. How people spread hate changes. And so we have to — not only have to acknowledge the change but also the policies, the rules change along with it. We try to do so as best we can in public. So, all our community guidelines and community policies are public.
I'm trying to be really public not only about those rules but how we do what we do — the algorithms. Like I just did a video a few weeks ago just to try to explain, at a high level, as much as I could. Because I think that we're not going to get everything right. And as uncomfortable as it is to be out there and to be talking about what you do — because you're always going to get some serious scrutiny on both sides; some people think we take down too much, some people think we take down too little — we're going to get to a better place if we have this debate out in public and we get the scrutiny and we get the feedback.
In reality, Instagram's policies aren't clear at all. In fact, they are so unclear that Instagram has had to publicly clarify what they mean (and even then, users often remain confused). A BBC review showed that Instagram and 14 other social media sites “had policies that were written at a university reading level,” even though children as young as 13 were allowed to use the platform.
Despite Mosseri’s talk about transparency regarding Instagram’s content moderation policies, users are often left guessing why their content has been removed. Facebook, Instagram's parent company, has also been criticized for its vague policies. Even Facebook’s Oversight Board, whose members' salaries are funded by the platform, has called the company out for its lack of transparency.
Claim: The problems on Instagram are a result of confirmation bias
From the July 27, 2021, edition of iHeartRadio’s The Breakfast Club
CHARLAMAGNE THA GOD (CO-HOST): I hate QAnon, and I hate the fact that there was an attempted coup of the government on January 6. Like I really think we're headed to some Orson Welles War of the Worlds-type shit because of — nobody cares about the truth on social media or the lies or anything.
ADAM MOSSERI (HEAD OF INSTAGRAM): Well I think confirmation bias is a thing, and that's as old as time, right. People want to hear what they agree with and they don't want to hear what they don't agree with. And I'm not saying there aren't bad things that happen online. I think there very much are. I think we can do more to address those and make sure that more of the great stories actually happen. But there’s no — if success for us is you don't see anything problematic, no one says something racist, there is no, you know, conspiracy theory online, period, with Instagram, Twitter, Facebook, etc, then you're never going to be satisfied.
While confirmation bias is certainly “a thing,” it is not the thing driving Instagram users toward content that they are predisposed to agree with. It’s the platform’s algorithm. In Instagram’s own words, the goal of its recommendation feature is to “make recommendations that are relevant and valuable to each person who sees them. We do this by personalizing recommendations, which means making unique recommendations for each person.”
By selecting recommended content for users based on their perceived interests, the platform actively creates the echo chamber. We have seen this dynamic produce harmful outcomes in the past.
It’s disingenuous of Mosseri to suggest that this intentionally developed feature of the platform is similar in nature or impact to the human propensity for confirmation bias, especially since the subject of Charlamagne's prompt was rampant misinformation.
Claim: Instagram is too big to monitor
From the July 27, 2021, edition of iHeartRadio’s The Breakfast Club
CHARLAMAGNE THA GOD (CO-HOST): How liable should social media platforms be in regards to lawsuits? Like if I want to sue somebody for slander, like defamation, should I be able to name the social media platform, since you all gave them a platform?
ADAM MOSSERI (HEAD OF INSTAGRAM): Yeah, so this is actually one of the big legal debates right now, right. Here in the U.S., they talk a lot about Section 230, which gives technology platforms essentially no liability, but the people who post that content are liable. In different countries around the world that may not be the case. Actually, it isn’t the case.
CHARLAMAGNE THA GOD: Yeah in South Africa, they don't play.
MOSSERI: Yeah, no, a lot of Europe they don't play either. And so the thing that I think is that it's important that companies are held to account to take measures to keep people safe, but I don't think we can go all the way to have a social media platform be accountable for every single thing that is said on that platform by another person, because there are over a billion people on Instagram at this point. And there's no version of that where there aren't going to be people with problematic opinions, racists, etc. that are going to show up. They're not going to check that at the door when they open up Instagram. But I do think that doesn't mean we can just, you know, wipe our hands clear.
CHARLAMAGNE THA GOD: No.
MOSSERI: The question is where are we on that spectrum.
While offering a perfunctory gesture toward the need for platform accountability, Mosseri tacitly admitted what’s already painfully obvious to anyone who follows Facebook and its subsidiaries: No amount of new features, updated policies, or monitoring technology powered by artificial intelligence has come close to adequately keeping people safe from hate speech and disinformation on any of the company’s platforms. If Facebook and Instagram actually want to mitigate harm on these platforms, the first step would be to architect a business model that doesn’t profit from hate and misinformation.