Facebook's chaotic policy announcements are self-serving and confusing
Facebook’s habit of announcing major policy changes on a variety of platforms makes it hard to hold the company accountable
Written by Madelyn Webb
As public understanding of social media’s role in spreading harmful misinformation grows, Facebook is scrambling to improve its image. Rather than craft intentional anti-misinformation policies that truly address the myriad problems on its platform, Facebook has often prioritized releasing self-serving public relations announcements across a variety of channels. And this scattershot approach to sharing information makes it hard to keep track of shifting policies, let alone hold the platform accountable.
The company crafted new misinformation policies for the 2020 election. It established the Facebook Oversight Board, a $130 million public relations project masked as an independent content moderation system. And it frequently touts the COVID-19 preventative measures it is involved with, even as misinformation about the pandemic abounds on the platform. These PR stunts are disingenuous: the policies are weak and insufficiently enforced, and the way they are delivered is equally insidious.
Facebook policy updates can come from practically anywhere, including Twitter, press releases, and investor calls, and they often lack any accompanying collateral on the company’s site. This makes it difficult to keep up to date on what is and isn’t allowed on the platform, and what the consequences are for violative behavior. Without knowing what the rules are, it is difficult, if not impossible, to assess whether Facebook is enforcing them effectively.
In the latest example of this behavior, on May 8, the BBC podcast The Anti-Vax Files aired excerpts from an exclusive interview with Facebook’s vice president for Northern Europe, Steve Hatch. During the episode, the reporter said Hatch told her that “the company is now removing groups, pages, and accounts, including from another social media site it owns, Instagram, that deliberately discourage people from taking vaccines. That’s a new policy. Facebook is going further than just removing false claims.” The policy update was further reported on by the BBC and covered in tech reporter Casey Newton’s newsletter, Platformer.
This new policy, which would presumably apply to a considerable amount of content given the wide range of anti-vax and vaccine-skeptical material on Facebook’s platforms, was not announced via Facebook’s official blog. The current policy has been on the website since at least April, and it does say the platform will “remove certain Pages, Groups, and Instagram accounts that have shared content that violates our COVID-19 and vaccine policies and are dedicated to spreading vaccine discouraging information on platform.” There have been no substantial changes to the COVID-19 vaccine misinformation policy listed on the website, so the policy Hatch reportedly described is either not new (and Facebook is likely promoting it as new for good press), or it is new and the company has confusingly not provided a public update on its site. Facebook has occasionally removed groups and pages for spreading COVID-19 misinformation that could “lead to someone being harmed” or “contribute to physical harm,” but this rule has been weakly and inconsistently enforced, and it is nowhere near as comprehensive as the approach Hatch reportedly described during the BBC interview.
This is not the first time Facebook has announced policies in inconsistent ways, seemingly to garner positive press. Facebook recently announced, via the Facebook Newsroom Twitter account, that it would be testing a new pop-up that urges users to open and read an article if Facebook detects that they are attempting to share a link they haven’t clicked on. This announcement, which fails to mention that the feature will initially be rolled out only to 6% of Android users, was reported on by several outlets. But the tweet was not accompanied by a link, and the new feature does not appear to be explained anywhere on the company’s website.
In another example, when Facebook founder and CEO Mark Zuckerberg said Facebook would no longer be recommending political groups to users — a policy the company did not follow through on despite numerous headlines promising Facebook would end the practice permanently — the change was announced in a phone call with investors and posted to Zuckerberg’s personal Facebook page. When former President Donald Trump’s Instagram account was suspended following the January 6 attack on the Capitol, it was announced by the head of Instagram, Adam Mosseri, on his personal Twitter account.
Sporadically announcing policy changes or enforcement measures in various decentralized places makes it difficult for others to assess what behavior does or doesn’t violate Facebook’s rules, which enables the company to eschew accountability. It also poses challenges for journalists trying to report on Facebook’s operations, who have been increasingly stymied by the company’s growing opacity.
Facebook policies are rolled out in a way that emphasizes positive press over effective application. And make no mistake: the company’s intentionally vague and elusive policy updates are but one of the many techniques Facebook uses to save face while disregarding misinformation on the platform.