When Facebook recently announced its much-discussed decision to uphold the suspension of former President Donald Trump’s account for two years, it also published a response to 19 policy recommendations provided by the Facebook Oversight Board after the review of Trump’s case. The company claimed it took “substantial steps” to address those recommendations and was “committed to fully implementing 15” of them. But as with many examples of policy enforcement by the company, Facebook’s definition of “fully implemented” is inconsistent, and its responses are severely lacking and seem unlikely to address the platform’s repeated failures to clearly enforce its own policies.
On July 15, Facebook provided an update on another 18 recommendations doled out by the board in response to cases evaluated prior to Trump’s. The commitments Facebook made in these instances are similar to the ones outlined below — and often rely on vague language and nonbinding timelines. The report did highlight some progress Facebook has made that was supposedly prompted by these 18 recommendations, such as providing users more information on why their content violated community guidelines. However, the document is littered with hedging phrases like “continue to assess” that don’t indicate real change is occurring or give external parties a way to validate or assess Facebook’s follow-through on these matters of great interest to the public.
Just as with Facebook’s response to previous recommendations, there are obvious problems and glaring loopholes in all the commitments made by Facebook in response to the oversight board’s specific guidance surrounding Trump’s suspension, including the 15 recommendations that the company claimed to have “fully implemented”:
Recommendation #1: “Facebook should act quickly on posts made by influential users that pose a high probability of imminent harm.”
Facebook’s response: “Facebook often quickly reviews content posted by public figures that potentially violates our policies. We will continue to do so and find ways to improve this process while accounting for the complexity of analysis that is often required for this kind of content.”
The facts: Facebook often does not quickly review content posted by public figures that violates its policies, or at least the platform has an inconsistent or inadequate definition of “public figure.” For example, roughly a quarter of Trump’s posts on Facebook in the year prior to his suspension contained misinformation, election lies, or extreme rhetoric about his critics. Not only were these posts left untouched for months, but they remain on the platform following his suspension.
Recommendation #2: “Facebook should consider the context of posts by influential users when assessing a post’s risk of harm.”
Facebook’s response: “Facebook already considers the broader context of content from public figures in the course of our review, and we will continue to do so. Our consideration includes the relevant historical significance of statements, comments on the content that show how it is being understood, and how others are receiving similar content on our platform.”
The facts: Despite its claims to consider the “broader context,” Facebook has shown time and again that it is not a good judge of what constitutes dangerous and inflammatory rhetoric. For example, Trump’s post appearing to support violence against anti-police brutality protesters is still available on the platform.
Recommendation #3: “Facebook should prioritize safety over expression when taking action on a threat of harm from influential users.”
Facebook’s response: “Facebook is, and has always been, committed to removing content where the risk of harm outweighs any public interest value. We will continue to prioritize public safety when making these judgements and will impose use restrictions and other feature blocks on accounts that violate our policies. We also quickly review content posted by public figures that potentially violates our policies so we can remove any violating content.”
The facts: Facebook claims that it is “committed to removing content where the risk of harm outweighs any public interest value,” but the company has a history of not taking action against dangerous content, including by militia groups using the site to organize. Following the events of January 6, Facebook was slow -- and, in many cases, haphazard -- in its removal of “Stop the Steal” groups and posts that had helped to organize or foment the violent insurrection.
Recommendation #4: “Facebook should suspend the accounts of high government officials, such as heads of state, if their posts repeatedly pose a risk of harm.”
Facebook’s response: “Today, we are providing information in our Transparency Center about restricting accounts by public figures during civil unrest, which we created in response to the board’s recommendations.”
The facts: Facebook's explanation of how the platform goes about suspending high-profile accounts relies heavily on the word “may.” It says the platform “may consider” factors like the accounts' past behavior but not that it will. It says that the platform “may disable any account that persistently posts violating content” but not that it will. Trump, a regular misinformer, was allowed to abuse the platform for years seemingly without consequence.
Recommendation #5: “Facebook should suspend accounts of high government officials, such as heads of state, for a determinate period sufficient to protect against imminent harm. Periods of suspension should be long enough to deter misconduct and may, in appropriate cases, include account or page deletion.”
Facebook’s response: “Today, we are providing information in our Transparency Center about restricting accounts by public figures during civil unrest, which we created in response to the board’s recommendations.”
The facts: Facebook has not actually committed to suspending high-profile accounts spreading harm; it has only reserved the right to do so in specific instances. As but one example, Facebook currently provides a platform to several high-profile white nationalists, including Stefan Molyneux.
Recommendation #6: “Facebook should resist pressure from governments to silence their political opposition and consider the relevant political context, including off of Facebook and Instagram, when evaluating political speech from highly influential users.”
Facebook’s response: “Today, we are providing information in our Transparency Center about restricting accounts by public figures during civil unrest, which we created in response to the board’s recommendations.”
The facts: Facebook did not directly respond to the recommendation about resisting pressure from governments, despite its claim that this recommendation was “fully implemented.” History has shown that Facebook frequently gives in to political pressure — as has been the case in India, where the company has repeatedly sided with the ruling political party even when it means defying Facebook’s own policies. What's more, Facebook says it is “committed to exploring ways we can improve our external accountability” without providing a timeline for results.
Recommendation #7: “Facebook should have a process that utilizes regional political and linguistic expertise, with adequate resourcing when evaluating political speech from highly influential users.”
Facebook’s response: “Facebook already considers the broader context of content from public figures in the course of our review in some instances and undertakes accelerated review for public figures with adequate staff and resources. We will continue to do so. We are committed to exploring ways we can improve our external accountability as well as incorporate additional external feedback for our evaluation of political speech from public figures in accordance with our policies and especially during high risk events. In addition, we have a robust process for reviewing government reports alleging that content on Facebook violates local law.”
The facts: In the “considerations” section of Facebook's response, the platform also claimed to “ensure that content reviewers are supported by teams with regional and linguistic expertise, including the context in which the speech is presented.” In reality, this has not been the case. In fact, Facebook is notoriously bad at moderating content in languages other than English, and this response provides little indication that this will change.
Recommendation #8: “Facebook should publicly explain the rules that it uses when it imposes account-level sanctions against influential users.”
Facebook’s response: “Today, we are providing information in our Transparency Center about restricting accounts by public figures during civil unrest, which we created in response to the board’s recommendations.”
The facts: Facebook said that the platform will inform the public of sanctions only if they are applied during times of civil unrest. However, as with many so-called transparency measures, the platform has not adequately defined “civil unrest” or the conditions under which such disclosures would be appropriate. Given that Facebook has admitted that it did not view the “Stop the Steal” campaign as a coordinated effort to undermine the election results, researchers and the public should question how effective this definition of civil unrest is and whether or not it requires Facebook to disclose any additional information.
Recommendation #9: “Facebook should assess the on-and-offline risk of harm before lifting an influential user’s account suspension.”
Facebook’s response: “Today, we are providing information in our Transparency Center about restricting accounts by public figures during civil unrest, which we created in response to the board’s recommendations.”
The facts: Facebook says it will “assess the on-and-offline” harm a user is posing and “extend the restriction for a set period of time and continue to re-evaluate” if a user appears to pose “a serious risk to public safety.” But the company does not explain who will do this or how it will be done. At a minimum, this will likely become relevant again when Trump’s two-year suspension is over.
Recommendation #10: “Facebook should document any exceptional processes that apply to influential users.”
Facebook’s response: “Our Community Standards apply around the world to all types of content and are designed so they can be applied consistently and fairly to a community that transcends regions, cultures, and languages. Today we are providing more information about our system of reviews for public figures’ content, which includes our cross check process and newsworthiness allowance, in our Transparency Center.”
The facts: Facebook claims to have considered two “exceptional processes” when reviewing these recommendations -- the “newsworthiness allowance” and the cross-check system. In keeping with its habit of relying on undefined terminology or vague wording, Facebook’s updated explanation of the newsworthiness allowance says the platform will “assign special value to content that surfaces imminent threats to public health or safety” but does not define “imminent threat.”
Recommendation #11: “Facebook should more clearly explain its newsworthiness allowance.”
Facebook’s response: “Today, we are providing more information in our Transparency Center about our newsworthiness allowance and how we apply it. Next year we will also begin providing regular updates about when we apply our newsworthiness allowance. Finally, we are removing the presumption we announced in 2019 that speech from politicians is inherently of public interest.”
The facts: Facebook's updated explanation of the newsworthiness allowance says, “We assign special value to content that surfaces imminent threats to public health or safety or that gives voice to perspectives currently being debated as part of a political process.” This latest discussion of the newsworthiness allowance does not explain what the platform considers to be an “imminent threat.” Facebook’s newsworthiness policy is still applied at the discretion of the company, as it always has been -- offering little confidence that the platform will apply it consistently going forward.
Recommendation #12: “In regard to cross check review for influential users, Facebook should clearly explain the rationale, standards, and processes of review, including the criteria to determine which pages and accounts are selected for inclusion.”
Facebook’s response: “Our Community Standards apply around the world to all types of content and are designed so they can be applied consistently and fairly to a community that transcends regions, cultures, and languages. Today we are providing more information about our system of reviews for public figures’ content, which includes our cross check process and newsworthiness allowance, in our Transparency Center.”
The facts: Facebook does explain that the cross-check process is a system of additional review for certain pages. However, as with many of its moderation and enforcement systems, the company does not provide any information on which pages qualify for cross-check or why. Facebook has shown time and again that the platform’s barometer for determining who is considered an “influential user” lacks the nuance and consistent application needed to significantly curtail the spread of harmful misinformation. What's more, the company has inconsistently applied policies to influential users in the past, and this latest commitment does nothing to prevent that from happening again.
Recommendation #13: “Facebook should report on the relative error rates and thematic consistency of determinations made through the cross check process compared with ordinary enforcement procedures.”
Facebook’s response: “We will take no further action on this recommendation because it is not feasible to track this information.”
The facts: It is very hard to believe that Facebook, one of the largest and most powerful tech companies in the world, cannot track the information recommended by the oversight board. Facebook has revealed that it often collects data that the company does not make available to the public. Even if this recommendation is truly infeasible, it remains disconcerting that Facebook is taking enforcement actions that the company itself cannot track.
Recommendation #14: “Facebook should review its potential role in the election fraud narrative that sparked violence in the United States on January 6, 2021 and report on its findings.”
Facebook’s response: “We regularly review our policies and processes in response to real world events. We will continue to cooperate with law enforcement and any US government investigations related to the events on January 6. We have recently expanded our research initiatives to understand the effect that Facebook and Instagram have on elections, including by forming a partnership with nearly 20 outside academics to study this issue.”
The facts: Facebook claims to “regularly review” policies and processes, but these reviews are entirely self-serving. Policy reviews are released at will when Facebook needs a PR boost and rarely result in any real or consistent change. In fact, a new report details how Facebook “lost” a rule on dangerous individuals for three years, calling into question the thoroughness of these supposedly regular reviews.
Recommendation #15: “Facebook should be clear in its Corporate Human Rights policy how it collects, preserves and shares information related to investigations and potential prosecutions, including how researchers can access that information.”
Facebook’s response: “We commit to reviewing our Corporate Human Rights Policy in response to this recommendation. We need time to evaluate the correct approach to data collection and preservation to facilitate lawful cooperation with diverse stakeholders given the complex legal and privacy issues in play. We will also explore how we can be more transparent about our protocols.”
The facts: Facebook says it will commit to “reviewing our Corporate Human Rights Policy” but provides no timeline or expectations for what qualifies as a “review” in this process. What’s more, the platform did not make any real commitments to improve transparency on this matter -- meaning the measures of success are opaque and determined by Facebook itself, rather than grounded in public transparency or a shared understanding of its review and conclusions.
Recommendation #16: “Facebook should explain in its Community Standards and Guidelines its strikes and penalties process for restricting profiles, pages, groups and accounts on Facebook and Instagram in a clear, comprehensive, and accessible manner.”
Facebook’s response: “Today we are publishing detailed information in our Transparency Center about our strikes and penalties. Our goal is to provide people with more information about our process for restricting profiles, pages, groups, and accounts on Facebook and Instagram.”
The facts: Facebook’s strike policy states: “If you post content that goes against the Facebook Community Guidelines, we’ll remove it and may then apply a strike to your Facebook or Instagram account. Whether we apply a strike depends on the severity of the content, the context in which it was shared and when it was posted.” But like many explanations of how the platform deals with high-profile accounts, this policy relies heavily on the word “may” to absolve the company of real responsibility. The company also does not provide any additional context for how it makes these decisions, rendering its policy neither clear, comprehensive, nor accessible. Instead, this leaves discretion entirely up to Facebook, which has been shown to wield this power inconsistently and with little transparency.
Recommendation #17: “Facebook should tell users how many violations, strikes, and penalties they have, as well as the consequences of future violations.”
Facebook’s response: “Earlier this year, we launched ‘Account Status’ on Facebook, an in-product experience to help every user understand the penalties Facebook applied to their accounts. It provides information about the penalties on a person’s account (currently active penalties as well as past penalties), including why we applied the penalty. In general, if people have a restriction on their account, they can see their history of certain violations, warnings, and restrictions their account might have, as well as how long this information will stay in Account Status on Facebook. We are committed to making further investments in this product to help people understand the details of our enforcement actions.”
The facts: Facebook claims that it has already at least partially met this recommendation, pointing to its “Account Status” feature. However, the company did not commit to providing information for users about the consequences of future strikes, as recommended by the board.
Recommendation #18: “In its transparency reporting, Facebook should include numbers of profile, page, and account restrictions, including the reason and manner in which enforcement action was taken, with information broken down by region and country.”
Facebook’s response: “We agree that sharing more information about enforcement actions would be beneficial and are assessing how best to do so in a way that is consistent and comprehensive.”
The facts: In its response, Facebook agreed that transparency would be helpful but did not outline any commitments that would actually make the company more transparent. Historically, Facebook’s transparency efforts have been lacking and focused more on garnering positive press than lasting or consistent change. For example, Facebook’s political ad archive was touted as a major effort by the company to increase transparency. In reality, it is rife with issues that complicate its use as a transparency or research tool, and has barely been updated since it was released. Meanwhile, Facebook has taken action against external parties attempting to provide this transparency outside of the platform.
Recommendation #19: “Facebook should develop and publish a policy that governs its response to crises or novel situations where its regular processes would not prevent or avoid imminent harm.”
Facebook’s response: “Facebook will develop a Crisis Policy Protocol which will be informed by various frameworks that we use to address risk, imminent harm, and integrity challenges. The protocol will focus on the threshold for when context specific policies are deployed, deactivated, and reassessed.”
The facts: Facebook did not provide substantial details for how and when this would be completed. What’s more, a robust crisis response policy would require Facebook to recognize when the platform is involved in a crisis -- something the company has historically been reticent to do. BuzzFeed previously reported on a leaked internal document detailing the company’s vast failures to identify and take seriously the organizing that Facebook enabled ahead of the January 6 insurrection, which suggests this policy will be insufficient.
Facebook’s responses to recommendations from the oversight board — an independent body created by the platform specifically to resolve the company’s continuing policy failures — are nothing but smoke and mirrors: They don’t actually provide any understanding of how Facebook will enforce its policies, and they are littered with “coulds,” “mights,” and “mays” that make it impossible to say definitively what the platform’s policies actually are.