This is not the only time Facebook has knowingly chosen engagement and profit over platform security. Here are four of the other instances when Facebook -- recently rebranded as Meta -- has traded its users’ safety for profit.
Facebook’s recommendation algorithms lead users down rabbit holes of extreme content
External researchers and journalists have repeatedly demonstrated that Facebook’s and Instagram’s suggestions for accounts, pages, and groups can lead users down a rabbit hole of extreme content and misinformation, even though the platform's recommendation guidelines prohibit such material. From recent whistleblower statements and leaked internal documents, we know that Facebook’s own research showed the company just how bad the problem really is. In fact, a 2016 internal presentation revealed that the company’s own researchers had found that “64% of all extremist group joins are due to our recommendation tools.”
For example, in one study conducted by Facebook, titled “Carol’s Journey to QAnon,” a researcher set up a fake user account, began interacting with conservative content, and then set out exploring the suggestions made by the platform. The researcher reported that within one week, the account was recommended “a barrage of extreme, conspiratorial, and graphic content.” Another test account was created on the platform in India, with similar if not worse results. Within three weeks, the user’s News Feed started showing “graphic photos of beheadings, doctored images of India air strikes against Pakistan and jingoistic scenes of violence.”
But Facebook has done its best to downplay the role that its platforms play in leading users toward extreme content, calling similar experiments done by journalists in the past “a stunt” and attempting to chalk up the narrow flow of extreme content to “confirmation bias.” While the company has reportedly made changes to its algorithms in response to the coronavirus pandemic and the 2020 U.S. presidential election, research from Media Matters has shown that these changes fail to sufficiently address the problem.
Facebook creates echo chambers that reinforce, exacerbate, and coordinate harmful rhetoric
Once a user has been led down a recommendation-fueled rabbit hole of misinformation and extremism, it can be difficult for them to break out of it. This dynamic is another core aspect of Facebook’s business model, and the company has chosen to ignore its negative repercussions.
As the algorithms determine what to recommend to a user, they filter out content that does not match the user’s established interests, creating an echo chamber.
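To picture that feedback loop in the abstract, here is a minimal, purely illustrative sketch of interest-based filtering. It is not Facebook’s actual ranking system; the topic labels, data structures, and function names are all hypothetical.

```python
# Minimal, illustrative sketch of interest-based filtering. This is not
# Facebook's actual ranking system; topics and field names are hypothetical.
from collections import Counter

def recommend(candidate_posts, engagement_history, top_n=5):
    """Rank candidates by overlap with topics the user already engages with."""
    # Build an interest profile from what the user has interacted with so far.
    interest_profile = Counter(post["topic"] for post in engagement_history)

    def score(post):
        # Content outside the user's established interests scores zero.
        return interest_profile.get(post["topic"], 0)

    ranked = sorted(candidate_posts, key=score, reverse=True)
    # Drop anything the profile has never rewarded, then keep the top few.
    return [post for post in ranked if score(post) > 0][:top_n]
```

Because every recommendation the user clicks is folded back into the engagement history, the set of topics that can ever be surfaced only narrows over time.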
Facebook’s own research showed that these echo chambers are dangerous. These insular communities of like-minded users reinforce each other's beliefs, increasing the chance that users’ rhetoric will escalate to the point of taking action offline. That can translate to real-world harm like the U.S. Capitol insurrection on January 6.
Facebook made a meager attempt to slow the growth of these echo chambers of misinformation leading up to the 2020 election by implementing its last-resort “‘break the glass’ measures — a list of temporary interventions to keep its platform safe.” The list included actions such as limiting the number of invites that group members could send per day and halting recommendations of political groups that spread “angry vitriol and a slew of conspiracy theories” to users after Election Day.
But as we witnessed on January 6 -- when the Stop the Steal movement erupted into a violent riot at the Capitol -- these measures were not effective. Media Matters research showed that in some cases where Facebook added election misinformation warning labels, posts actually earned more engagement -- meaning that Facebook ultimately continued to benefit from the spread of sensational content.
While company officials like Andy Stone and Sheryl Sandberg have made public statements that Facebook was not to blame for the attack on the Capitol, their own internal reports acknowledge that the company’s platform was used by “election delegitimizing movements” that “helped incite the Capitol insurrection.”
Facebook allows politicians to spread misinformation and hate speech with impunity
In addition to serving up misinformation and hate speech through its recommendations and echo chambers, Facebook was also aware that politicians were abusing loopholes on its platform to spread misinformation, according to internal documents.
According to internal research on “XCheck,” a program that exempted a secret list of high-profile users from Facebook’s policies, certain politicians were allowed to publish, without scrutiny, content that violated company rules -- including posts that contained incitements to violence. Facebook initially misled its Oversight Board about the system, which protects 5.8 million VIP users, stating that it was used only in “a small number of decisions.” Most notoriously, XCheck shielded then-President Donald Trump’s account when he posted the phrase “when the looting starts, the shooting starts” during last year’s protests over the police murder of George Floyd.
Additional research revealed that politicians also made use of Facebook’s advertising platform to spread misinformation and target vulnerable users. In internal documents, Facebook employees acknowledged that the company’s lax approach to moderating political ads generated risks for users, while its fact-checking partners were not equipped to counter the spread of political misinformation.
Facebook spokesperson Dani Lever claimed that the company rejects “any ad that violates our rules -- including from politicians.” However, Facebook has maintained a policy against fact-checking opinions from public officials even when they appear in ads, leaving users more susceptible to misinformation. Reporting from Media Matters has also shown that Facebook has profited from misleading political ads in the past and continues to do so.
Facebook is even worse at moderating hate speech outside of the US
Outside of the U.S., Facebook’s failures to moderate content on its platforms have had even worse consequences. The company’s lack of global content moderation, inadequate staffing, failure to translate its community standards for non-English speakers, and absence of machine-learning algorithms to detect hate speech in languages other than English led to an increase in hate speech and violent rhetoric in countries such as India, Ethiopia, and Myanmar. In some cases, this has contributed to the escalation of violence offline.
In India, the Facebook Papers revealed an internal research experiment in which the platform's recommendations led a test account of a 21-year-old woman in Jaipur down a rabbit hole of misinformation and hate speech within just three weeks. The author of the experiment called the result an “integrity nightmare.” The account’s spiral toward extreme content, described as “a maelstrom of fake news and incendiary images,” was exacerbated by the company’s lack of language competencies: Facebook has acknowledged in internal documents that it is not “equipped to handle most of the country’s 22 officially recognized languages” and that it has yet to develop algorithms “that could detect hate speech in Hindi and Bengali.”
Additionally, the company’s fear of pushback from the ruling Bharatiya Janata Party and its associated Rashtriya Swayamsevak Sangh group has enabled political actors to use the platform to spread anti-Muslim hate speech. In February 2020, a BJP politician shared a video on Facebook calling on supporters to remove protestors — mostly Muslims — from a road in New Delhi. Violent riots followed within hours, leaving 53 people dead.
In Ethiopia, insufficient language capabilities as well as coordinated inauthentic behavior by state and government actors opened the door for the spread of violence as political conflicts devolved into a civil war. Although Facebook had acknowledged the country as high risk last year, internal documents revealed that there were no artificial intelligence classifiers -- the algorithms the company uses to detect misinformation -- in place. (Typically, classifiers are trained on examples of harmful content and then used to screen posts that “might match the company’s definitions of content that violates its rules.”)
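For readers unfamiliar with the term, the sketch below shows the general shape of such a classifier: a model is trained on labeled examples and then used to flag new posts for review. It is only an illustration of the technique, not Facebook’s system -- the training examples, library choice (scikit-learn), and threshold are all invented.

```python
# Illustrative text classifier of the kind described above. The training data,
# threshold, and model choice are invented; this is not Facebook's classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labeled training examples: 1 = violates the rules, 0 = benign.
train_texts = [
    "example of rule-violating content",
    "ordinary benign post",
    "another rule-violating example",
    "another benign post",
]
train_labels = [1, 0, 1, 0]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_texts, train_labels)

def screen(post, threshold=0.8):
    """Flag a post that might match the platform's definition of violating content."""
    prob_violating = classifier.predict_proba([post])[0][1]
    return prob_violating >= threshold
```

The dependence on labeled examples is the point: without a body of labeled training data in Hindi, Bengali, or the other languages at issue, there is nothing for such a model to learn from.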
In an August 2020 memo, company engineers reportedly “flagged an ongoing lack of language skills, particularly in less mainstream dialects,” that hampered efforts to combat hate speech. Furthermore, a March 2021 internal report called “Coordinated Social Harm” revealed that armed militia groups in Ethiopia “were using the platform to incite violence against ethnic minorities in the ‘context of civil war.’”
Politicians and civil society groups expressed fears that the country was headed down the same path as Myanmar, where Facebook had also lacked misinformation classifiers and acknowledged that it did not do enough to stop anti-Rohingya hate speech and incitements to violence against the country’s ethnic minority. Despite assurances that it has invested in safety for the platform in Ethiopia, researchers and journalists there claim that Facebook’s efforts fall short and that employees in charge of moderation often lack the cultural context needed to understand why some content is dangerous.
Facebook’s switch to the Meaningful Social Interactions (MSI) algorithm in 2018 further increased the presence of hate speech and misinformation and the probability that such content would go viral. The algorithm was meant to increase engagement and drive users back to the platform, but internal memos revealed concerns from employees that it allowed users to capitalize on negativity. Facebook did nothing to address this.
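One rough way to picture an engagement-optimized ranking of this kind is as a weighted point system in which interactions that signal strong reactions count for more than passive ones. The sketch below is hypothetical -- the weights and field names are invented and do not reflect Facebook’s actual formula.

```python
# Hypothetical engagement-weighted scoring. The weights are invented for
# illustration and do not reflect Facebook's actual MSI values.
ENGAGEMENT_WEIGHTS = {
    "like": 1,
    "reaction": 5,   # includes "angry" reactions
    "comment": 15,
    "reshare": 30,
}

def msi_score(post):
    """Score a post by weighted engagement; higher scores rank higher in the feed."""
    return sum(ENGAGEMENT_WEIGHTS.get(kind, 0) * count
               for kind, count in post["engagement"].items())
```

Under any weighting like this, posts that provoke argument and outrage accumulate comments and reshares quickly and therefore outrank calmer content.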
The company’s more than 3 billion users worldwide remain vulnerable to dangerous content, especially in regions where there is insufficient content moderation in key languages such as Arabic, Pashto, and Dari. Facebook’s announcement of its Metaverse project, which will cost the company an estimated $10 billion in 2021 alone, raises doubts that there will be considerable investment in content moderation in the future.
While Facebook has recently disclosed the amount it has invested in platform safety and security since 2016, these reports show that its efforts continue to fall short whenever they conflict with the company's bottom line. The new report from The Washington Post is the latest example of Facebook’s long-standing pattern of choosing to ignore its own research and recommendations to limit hate speech and misinformation on its platforms.