Update (12/20/18): This piece has been updated with additional information.
Facebook recently announced the worst data breach in the company’s history, affecting approximately 30 million users. This breach allowed hackers to “directly take over user accounts” and see everything in their profiles. The breach “impacted Facebook's implementation of Single Sign-On, the practice that lets you use one account to log into others.” Essentially, any site users signed into using their Facebook login -- like Yelp, Airbnb, or Tinder -- was also vulnerable. Hackers who have access to the sign-on tokens could theoretically log into any of these sites as any user whose data was exposed in the hack. As a precaution, Facebook logged 90 million users out of their accounts. On October 12, the company offered users a breakdown of how many people were affected and what data was exposed.
According to Facebook, the attackers first stole the access tokens of about 400,000 accounts and then used a portion of those people’s lists of friends to steal access tokens for about 30 million people. For 15 million people, attackers accessed two sets of information – name and contact details (phone number, email, or both, depending on what people had on their profiles). For 14 million people, the attackers accessed the same two sets of information, as well as other details people had on their profiles. This included username, gender, locale/language, relationship status, religion, hometown, self-reported current city, birthdate, device types used to access Facebook, education, work, the last 10 places they checked into or were tagged in, website, people or Pages they follow, and the 15 most recent searches. For 1 million people, the attackers did not access any information.
Users can find out if they were affected and what data was accessed at Facebook’s help center.
Even with the update, we still don’t know enough about the breach. We don’t know who was behind the attack. The FBI is investigating the hack, as is the European Union (via Ireland’s Data Protection Commission, Facebook’s lead privacy regulator in Europe). Multiple members of Congress have expressed concern about the breach.
What we do know is that this latest data breach is hardly the only way Facebook has failed its consumers. Media Matters has cataloged Facebook’s multitude of failures to protect its consumers since the company’s beginnings.
Data privacy
Cambridge Analytica
The public learned about Facebook’s most notorious data privacy breach on March 16 of this year. Facebook abruptly announced that it had banned Cambridge Analytica, the firm that did data targeting for Donald Trump’s presidential campaign, from using the platform for, according to The Verge, “violating its policies around data collection and retention.” The next day, The New York Times and The Observer broke the story Facebook was clearly trying to get ahead of: Cambridge Analytica had illegally obtained and exploited the Facebook data of 50 million users in multiple countries.
Christopher Wylie, Cambridge Analytica’s former research director, blew the whistle on how the firm used the ill-gotten data of Facebook’s users to target American voters in 2016. The company, founded by right-wing megadonor Robert Mercer, had political clients in the U.S. and around the world; it did work for President Donald Trump’s campaign, Ted Cruz’s presidential campaign, current national security adviser John Bolton’s super PAC, and more. Following Wylie’s exposé, more information was revealed about the firm: Its leadership was caught on camera “talking about using bribes, ex-spies, fake IDs and sex workers.” It gave a sales presentation about disrupting elections to a Russian oligarch in 2014. And the firm reached out to WikiLeaks in 2016 offering to help release then-Democratic presidential nominee Hillary Clinton’s emails. Following these revelations, Cambridge Analytica shut down (though there are serious questions about whether it spun off into a new company).
The data breach didn’t just expose Facebook user data to a political consulting firm; it exposed that data to a company backed by a right-wing billionaire whose full operations aren’t yet known. Put another way, a shady operation was offering services like entrapment to potential clients, and Facebook was the only tool it needed to do so.
Facebook continues to find more unauthorized scraping of user data. The company disabled a network of accounts belonging to Russian database provider SocialDataHub for unauthorized collection of user information. SocialDataHub had previously provided analytical services to the Russian government, and its CEO even praised Cambridge Analytica.
Advertising profits over user privacy
Facebook’s business model monetizes the personal information of its users for advertising purposes. Advertisers on Facebook pay for access to information about users in order to create better-targeted ad campaigns. But over the course of Facebook’s history, the company has continually exposed user data without users’ consent, putting profits over privacy considerations.
In 2009, Facebook was forced to settle a class action lawsuit from users and shut down its Beacon ad network, which posted users’ online purchases from participating websites on their news feeds without their permission. In 2010, Facebook was caught selling advertising companies data that could be used to identify individual users. The company has been fined in Europe multiple times for tracking non-users for the purpose of selling ads. It admitted in March that it had collected call history and text messages from users on Android phones for years.
Exposing data of Facebook employees
Facebook’s privacy failures affect its employees as well. The Guardian reported last year that a security lapse exposed the personal details of 1,000 content moderators across 22 departments to users suspected of being terrorists. Forty of those moderators worked in Facebook’s counterterrorism unit in Ireland, and at least one was forced to go into hiding for his own safety because of potential threats from terrorist groups he had banned from the platform.
Released emails show Facebook had special data agreements with partners and targeted perceived competitors
In early December, the British Digital, Culture, Media and Sport Committee released a trove of emails from Facebook officials that stemmed from a lawsuit between the platform and an app developer. The emails, sent between 2012 and 2015, showed that Facebook CEO Mark Zuckerberg and other Facebook officials considered ways to make money off of user data, maintained special “white list” agreements giving certain companies access to data that was restricted from others, and cut off data access to perceived competitors. In one 2012 email, Zuckerberg even wrote that he “can’t think if (sic) any instances where that data has leaked from developer to developer and caused a real issue for us.” The emails also showed that Facebook tried to obscure the fact that it was asking users for permission to read Android phone call logs and text messages. A staffer wrote in an email that it was “a pretty high-risk thing to do from a PR perspective, but it appears that the growth team will charge ahead and do it.”
Photo data breach
On December 14, Facebook announced a bug that had exposed users’ published photos, and even photos they had uploaded but never posted, to third-party apps that were not authorized to see them, potentially affecting up to 6.8 million users. Facebook discovered the breach in September but only disclosed it three months later, despite the European Union’s General Data Protection Regulation requiring such breaches to be disclosed within 72 hours of discovery.
Letting third parties slide into your messenger and share your secrets
The New York Times broke the story that Facebook gave companies including Microsoft, Amazon, and Spotify access to users’ personal data, including the ability to read and delete private messages. These same partners were exempted from the platform’s usual rules around data privacy, and some of them still had access to sensitive user data when the article was published. BuzzFeed also reported that apps on Android like Tinder, Grindr, and Pregnancy+ collect user data such as “religious affiliation, dating profiles, and healthcare data” and share that data with Facebook.
Misinformation
Trending Topics
In response to a Gizmodo article claiming Facebook employees were suppressing conservative outlets in its Trending Topics section, the company fired its human editors in 2016 and started relying on an algorithm to decide what was trending. Following this decision, multiple fake stories and conspiracy theories appeared in the trending section. The problems with Trending Topics continued through this year, with the section repeatedly featuring links to conspiracy theory websites and posts from figures known for pushing conspiracy theories. Facebook mercifully removed Trending Topics altogether in June 2018.
State-sponsored influence operations and propaganda
During the 2016 campaign, Russian operatives from the organization known as the Internet Research Agency (IRA) -- which is owned by a close associate of Russian President Vladimir Putin -- ran multiple pages that tried to exploit American polarization. In particular, the IRA ran ads meant to stoke tensions about the way American police treat Black people while using other pages to support the police; the organization also played both sides on immigration.
The IRA also stole the identities of Americans and created fake profiles to populate its pages focusing on “social issues like race and religion.” It then used the pages to organize political rallies about those issues. During the campaign, some Facebook officials were aware of the Russian activity yet did not take any action. In 2017, Facebook officials told the head of the company’s security team to scale back the details in a public report the team had prepared about the extent of Russian activity on the platform. It was only after media reporting suggested Facebook had missed something that the company uncovered the full extent of that activity. So far this year, Facebook has taken down accounts potentially associated with the IRA.
Facebook in August 2018 also removed a number of accounts that the company had linked to state media in Iran.
On December 17, two reports commissioned by the Senate Intelligence Committee to examine the extent of Russian disinformation on social media were released. The reports found that the Russian disinformation operations received millions of interactions on Facebook and on Instagram, which Facebook also owns. One of the reports noted that “Instagram was a significant front in the IRA’s influence operation, something that Facebook executives appear to have avoided mentioning in Congressional testimony.” In particular, the reports confirmed that the IRA extensively targeted African-Americans, exploiting issues such as police brutality, trying to suppress their vote in the 2016 election, and urging them to oppose Clinton. The reports also criticized American tech companies, including Facebook, for seeming to provide the “bare minimum” amount of data for study, leaving out data such as comments responding to disinformation and posts from other IRA-run user accounts.
Foreign networks spreading fake news and getting ad revenue
Since at least 2015, Facebook has been plagued by fake news stories originating from Macedonia that are pushed on the platform to generate clicks for ad revenue. Despite being aware of those activities during the 2016 campaign, Facebook took no action to stop them, even as locals in Macedonia “launched at least 140 US politics websites.” Since then, Facebook has claimed that it has taken steps to prevent this kind of activity. But it has continued, as Macedonian accounts used the platform to spread fake stories about voter fraud in special elections in Alabama in 2017 and Pennsylvania in 2018.
Macedonians aren’t the only foreign spammers on Facebook: A large network of users posing as Native Americans has operated on the platform since at least 2016. The network exploited the Standing Rock protests to sell merchandise, and it has posted fake stories to get ad revenue. While much of this activity has come out of Kosovo, users from Serbia, Cambodia, Vietnam, Macedonia, and the Philippines are also involved.
Facebook has also regularly struggled to notice and respond to large foreign spammer networks that spread viral hoaxes on the platform:
- The platform allowed a Kosovo-based network of pages and groups that had more than 100,000 followers combined to repeatedly push fake news. Facebook finally removed the network following multiple Media Matters reports.
- The platform allowed a network of pages and groups centered in Saudi Arabia and Pakistan that had more than 60,000 followers to publish fake stories. It was taken down following a Media Matters report.
Facebook officials have also downplayed the key role Facebook groups play in spreading fake news, even though the platform has been used regularly by people in other countries to push fake stories.
Domestic disinformation campaigns
Until just recently, Facebook did not respond to networks of pages that regularly posted false stories and hoaxes and coordinated to amplify one another’s disinformation. Facebook finally took down some of these domestic disinformation networks on October 11, right before the 2018 midterms, noting that they violated its spam and inauthentic behavior policies. But as Media Matters has documented, even this sweep missed some obvious targets.
Fake news thriving on Facebook
Facebook’s fake news problem is well illustrated by one of the most successful fake news sites on the platform, YourNewsWire. Based in California, YourNewsWire has been one of the most popular fake news sites in the United States and has more than 800,000 followers through its Facebook pages. Time and time again, hoaxes the site has published have gone viral via Facebook. Some of these fake stories have been flat-out dangerous and have been shared on Facebook hundreds of thousands of times. Facebook’s designated third-party fact-checkers had debunked the site’s stories more than 80 times before Facebook apparently took action and penalized it in the news feed, forcing the site to respond to the fact-checkers’ repeated debunks.
Fake news has also been a problem in Facebook searches: Since at least 2017, fake stories about celebrities have popped up in Facebook searches, even after some had been debunked by Facebook’s designated third-party fact-checkers. In response, Facebook has said it is trying to improve its search results.
The problem has also extended to its ads. In May 2018, Facebook launched a public database of paid ads deemed “political” that ran on the platform. A review of the database found that the platform, in violation of its own policies, allowed ads featuring fake stories and conspiracy theories.
Withholding 2016 data from researchers
After the 2016 election, researchers repeatedly urged Facebook to give them access to its data to examine how misinformation spreads on the platform. In April, the platform announced it would launch an independent research commission that would have access to the data. However, the platform has refused to allow researchers to examine data from before 2017, meaning data from the 2016 election remains inaccessible.
Misuse of Instant Articles
BuzzFeed reported earlier this year that fake news creators were pushing their content via Facebook’s Instant Articles, a feature that loads stories within the Facebook mobile app itself and from which Facebook earns a share of the revenue. In response, Facebook claimed it had “launched a comprehensive effort across all products to take on these scammers.” Yet the platform has continued to allow bad actors to use the feature for fake stories and conspiracy theories.
Problems with fact-checking
In response to the proliferation of fake news on the platform after the 2016 campaign, Facebook partnered with third-party fact-checkers to review posts flagged by users as possible fake news. Since then, some of these fact-checkers have criticized Facebook for a lack of transparency, particularly in its flagging process, for withholding data on the effectiveness of their debunks, and for failing to communicate with them properly.
In 2017, Facebook included the conservative Weekly Standard in its fact-checking program in the United States. The platform otherwise included only nonpartisan fact-checkers in its program, and it has not added any corresponding progressive outlet since. As a result, the conservative outlet fact-checked a progressive outlet over a disputed headline and penalized it in the news feed, a move that was harshly criticized.
Repeated flaws with ad policies
Multiple outlets found significant problems with Facebook’s ad policies in the lead-up to the 2018 midterm elections. CNN and The New York Times found pages running ads attacking congressional candidates in Virginia and Texas without any information on who was operating the pages. The platform also allowed a political action committee to run ads with anti-Semitic imagery attacking Florida gubernatorial candidate Andrew Gillum without disclosing its connections to a Republican ad firm. Vice News was able to submit ads using IRA material while posing as ISIS and Vice President Mike Pence without issue, and it was also able to run ads posing as every U.S. senator. ProPublica also found that multiple interest groups were able to cloak their identities while running ads on Facebook.
Human and civil rights violations
Poor policies for monitoring white supremacy and hate
This year, leaked documents showed that while Facebook’s content policies forbid hate speech arising from white supremacy, so-called white nationalist and white separatist views were considered acceptable, a policy the company is now reviewing after public scrutiny. A 2017 ProPublica investigation of Facebook’s content policies showed that white men were protected from hate speech but Black children were not. Neo-Nazis and white supremacists continue to profit by selling white supremacist clothing and products on Facebook and Instagram. Zuckerberg also defended the right of Holocaust deniers to share their conspiracy theories on the platform.
After years of pressure from civil rights groups, Facebook finally agreed to submit to a civil rights audit, but it also announced the creation of a panel to review supposed bias against conservatives the same day, equating the civil rights of its users with partisan bickering by Republicans.
After months of silence and stonewalling, Facebook finally released an update on its promised civil rights audit. In a December 18 post on Facebook’s newsroom blog, chief operating officer Sheryl Sandberg claimed that the audit was “deeply important” to her and one of her “top priorities for 2019,” but she gave no indication of what exactly that meant. The update came one day after 32 civil rights groups (including Media Matters) published an open letter calling for significant reforms to Facebook’s leadership and board, and after the NAACP called for a boycott of Facebook for allowing African-American users to be victimized by Russia’s IRA trolls.
Contributing to violence in multiple countries
Facebook in recent years has actively expanded into developing countries. Since then, the platform has been used in Myanmar and Sri Lanka to encourage hate and violence against minorities, resulting in riots and killings. In Libya, militias have used the platform to sell weapons, find their opponents, and coordinate attacks. The United Nations has issued multiple reports criticizing Facebook’s role in Myanmar, suggesting the platform “contributed to the commission of atrocity crimes” in the country. Activists and officials in those countries have also complained that Facebook had not employed moderators to monitor for hateful content, nor had it established clear points of contact for people in those countries to raise concerns.
Content sent via messaging app WhatsApp, which Facebook owns, has also caused problems. In India, hoaxes spreading through the platform have led to multiple lynchings, and the Indian government (whose supporters have themselves spread hoaxes) has pressured the company to clamp down on misinformation. In response, the platform has resorted to going on the road to perform skits to warn people about WhatsApp hoaxes. Other countries like Brazil and Mexico have also struggled with hoaxes spreading through WhatsApp, with the latter also seeing lynchings as a result.
Used by authoritarians to target opponents
Certain governments have also used Facebook as a means to target and punish their perceived opponents. In the Philippines, supporters of President Rodrigo Duterte, some of whom have been part of Duterte’s government, have spread fake content on the platform to harass and threaten his opponents. And in Cambodia, government officials have tried to exploit Facebook’s policies to target critics of Prime Minister Hun Sen.
Ad discrimination
Facebook’s ad policies have allowed advertisers to exclude groups based on race when creating a target audience for their ads, as ProPublica noted in 2016. The following year, it found that despite Facebook’s claims to have stopped such discrimination, housing ads on the platform could still exclude audiences by race, sex, disability, and other factors. In 2017, civil rights groups filed a lawsuit against the platform, and the Department of Housing and Urban Development also filed a complaint. Another investigation the same year found that the platform allowed advertisers to exclude viewers by age from seeing job ads, a potential violation of federal law. In 2018, the American Civil Liberties Union sued Facebook for allegedly allowing employers to exclude women from recruiting campaigns.
Helping anti-refugee campaign in swing states
In 2016, Facebook, along with Google, directly collaborated with an agency that was working with the far-right group Secure America Now to target anti-Muslim ads, which warned about Sharia law and attacked refugees, to Facebook users in swing states.
Online harassment
Facebook has done little to protect people who become targets of online harassment campaigns, even though most of them are likely Facebook users themselves. Time and again, Facebook has allowed itself to be weaponized for this purpose. Alex Jones and Infowars are perhaps the most famous examples of this problem. Even though Jones harassed Sandy Hook families for years, calling the school shooting a false flag, spreading hate speech, and engaging in other forms of bullying, Facebook continued to allow him free rein on its platform. The company finally banned Jones in August this year, after weeks of public pressure, including an open letter from two Sandy Hook parents, but only after Apple “stopped distributing five podcasts associated with Jones.”
Facebook has also allowed conspiracy theorists and far-right activists to harass the student survivors of the Parkland school shooting, most of whom were minors, on the platform. More recently, it allowed right-wing meme pages to run a disinformation campaign targeting professor Christine Blasey Ford, Deborah Ramirez, and other survivors who came forward during the confirmation process of now-Supreme Court Justice Brett Kavanaugh.
Still more screw-ups
Then there are the failures that defy category. In 2012, Facebook conducted psychological tests on nearly 700,000 users without their consent or knowledge. Zuckerberg had to apologize after giving a virtual reality tour of hurricane-struck Puerto Rico. Illegal opioid sales run rampant on Facebook, among other platforms, and the company has been unable to curb or stop them.
Even advertisers, the source of Facebook’s profit, haven’t been spared. Facebook’s latest political ad restrictions have created problems for local news outlets, LGBTQ groups, and undocumented immigrants seeking to buy ads. Facebook also had to admit to advertisers that it gave them inflated video-viewing metrics for the platform for over two years.
In November, it was reported that Facebook hired Definers Public Affairs, a Republican public relations firm, to do opposition research on its political opponents and place unflattering news articles about Facebook’s competitors on a fake news website affiliated with the firm. Definers’ strategist Tim Miller pushed opposition research about billionaire philanthropist George Soros to reporters and far-right influencers, including Holocaust denier Chuck C. Johnson, playing into the anti-Semitic conspiracy theories already thriving online and putting Soros and his family’s personal safety at risk.
What Facebook owes consumers
As a college student, Zuckerberg offered the personal data of Facebook’s initial users at Harvard to a friend and joked that people were “dumb fucks” for trusting him with their personal information. One hopes that Zuckerberg’s respect for his customer base has improved since then, but Facebook’s many failures suggest that it hasn’t.
BuzzFeed’s Charlie Warzel suggested that Facebook’s users simply don’t care enough about data privacy to stop using the platform. We have a slightly different theory: Users don’t leave Facebook because there’s no available alternative. Without a competitor, Facebook has no real incentive to fix what it’s broken.
The impact of Facebook’s failures compounds across society at large. As the founder of one of Facebook’s designated third-party fact-checkers told The New York Times, “Facebook broke democracy. Now they have to fix it.”