In a Los Angeles Times op-ed, professors from Harvard and Northeastern University argued that tech giants and social media platforms such as Google and Facebook are not taking the steps necessary to stem the spread of fake news online, such as displaying potential fake news stories less prominently. The op-ed also suggested that media outlets stop repeating false claims in headlines.
Facebook, whose algorithm allows fake news stories to go viral, and Google, whose advertising service continues to fund multiple fake news purveyors, have become two of the largest platforms on which fake news stories spread and their purveyors grow. Although both companies have taken some steps to address the issue, such as recruiting fact-checkers, their platforms still continue to host fake news. And media outlets, some of which Facebook has recruited to fact-check potential fake news stories, can inadvertently spread fake news by repeating dubious or false claims in their headlines -- a practice many have struggled to avoid.
In their May 8 op-ed, Harvard professor Matthew Baum and Northeastern University professor David Lazer -- who recently co-authored a report on combating fake news that made several suggestions for stemming its proliferation -- wrote that “the solutions Google, Facebook and other tech giants and media companies are pursuing aren’t in many instances the ones social scientists and computer scientists are convinced will work.” The op-ed cited research showing that “the more you’re exposed to things that aren’t true, the more likely you are to eventually accept them as true.” The professors instead urged the platforms to “move suspect news stories farther down the lists of items returned through search engines or social media feeds.” They added that while “Google recently announced some promising steps in this direction” -- in response “to criticism that its search algorithm had elevated to front-page status some stories featuring Holocaust denial” -- “more remains to be done.” The professors also called on “editors, producers, distributors and aggregators” to stop “repeating” false information, “especially in their headlines,” so that coverage leads by “debunking the myth, not restating it.” From the op-ed:
We know a lot about fake news. It’s an old problem. Academics have been studying it — and how to combat it — for decades. In 1925, Harper’s Magazine published “Fake News and the Public,” calling its spread via new communication technologies “a source of unprecedented danger.”
That danger has only increased. Some of the most shared “news stories” from the 2016 U.S. election — such as Hillary Clinton selling weapons to Islamic State or the pope endorsing Donald Trump for president — were simply made up.

Unfortunately — as a conference we recently convened at Harvard revealed — the solutions Google, Facebook and other tech giants and media companies are pursuing aren’t in many instances the ones social scientists and computer scientists are convinced will work.
We know, for example, that the more you’re exposed to things that aren’t true, the more likely you are to eventually accept them as true. As recent studies led by psychologist Gordon Pennycook, political scientist Adam Berinsky and others have shown, over time people tend to forget where or how they found out about a news story. When they encounter it again, it is familiar from the prior exposure, and so they are more likely to accept it as true. It doesn’t matter if from the start it was labeled as fake news or unreliable — repetition is what counts.
Reducing acceptance of fake news thus means making it less familiar. Editors, producers, distributors and aggregators need to stop repeating these stories, especially in their headlines. For example, a fact-check story about “birtherism” should lead by debunking the myth, not restating it. This flies in the face of a lot of traditional journalistic practice.
[...]
The Internet platforms have perhaps the most important role in the fight against fake news. They need to move suspect news stories farther down the lists of items returned through search engines or social media feeds. The key to evaluating credibility, and story placement, is to focus not on individual items but on the cumulative stream of content from a given website. Evaluating individual stories is simply too slow to reliably stem their spread.

Google recently announced some promising steps in this direction. It was responding to criticism that its search algorithm had elevated to front-page status some stories featuring Holocaust denial and false information about the 2016 election. But more remains to be done. Holocaust denial is, after all, low-hanging fruit, relatively easily flagged. Yet even here Google’s initial efforts produced at best mixed results, initially shifting the denial site downward, then ceasing to work reliably, before ultimately eliminating the site from search results.
[...]
Finally, the public must hold Facebook, Google and other platforms to account for their choices. It is almost impossible to assess how real or effective their anti-fake news efforts are because the platforms control the data necessary for such evaluations. Independent researchers must have access to these data in a way that protects user privacy but helps us all figure out what is or is not working in the fight against misinformation.
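The site-level down-ranking the professors describe can be illustrated with a brief sketch. The code below is purely hypothetical -- the domains, credibility scores, and `rank_feed` function are invented for illustration and do not reflect how Google or Facebook actually rank content -- but it shows the basic idea of discounting a story's placement by its source's cumulative track record rather than vetting each story individually:

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    source: str          # domain the story came from
    base_score: float    # relevance score from the existing ranker

# Hypothetical site-level credibility ratings (0.0 = untrustworthy, 1.0 = reliable).
# In practice these would be derived from a site's track record over time,
# not hard-coded as they are here.
SOURCE_CREDIBILITY = {
    "example-news.com": 0.9,
    "hoax-mill.example": 0.2,
}

DEFAULT_CREDIBILITY = 0.5  # unknown sources get a neutral rating

def rank_feed(stories: list[Story]) -> list[Story]:
    """Re-rank a feed by discounting relevance with source-level credibility.

    Stories from low-credibility sites are pushed down the list rather than
    removed, mirroring the down-ranking approach the op-ed recommends.
    """
    def adjusted(story: Story) -> float:
        credibility = SOURCE_CREDIBILITY.get(story.source, DEFAULT_CREDIBILITY)
        return story.base_score * credibility

    return sorted(stories, key=adjusted, reverse=True)

feed = [
    Story("Pope endorses candidate", "hoax-mill.example", base_score=0.95),
    Story("Budget bill passes committee", "example-news.com", base_score=0.80),
]

for story in rank_feed(feed):
    print(story.source, "->", story.title)
```

In this sketch, the hoax site's story starts with the higher relevance score but ends up below the reliable outlet's story once the source-level discount is applied, which is the behavior the professors argue is both faster and more reliable than evaluating stories one at a time.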