Shorenstein Report Identifies Steps For Stemming The Spread Of Fake News

A new report from the Harvard Kennedy School’s Shorenstein Center on Media, Politics, and Public Policy examines fake news and misinformation in the media ecosystem and identifies possible steps that academics, internet platforms, and media outlets could take in the short term to help stem the spread of fake news.

Fake news -- information that is clearly and demonstrably fabricated and that has been packaged and distributed to appear as legitimate news -- was a major problem during the 2016 election, and such misinformation continues to be pervasive. Websites that spread fake news, which Media Matters has dubbed fake news purveyors, have additionally become part of an ecosystem with the “alt-right” that also spreads other kinds of misinformation, such as dubious claims and conspiracy theories. Aides and allies of President Donald Trump have also pushed articles from fake news purveyors and from the “alt-right”/fake news ecosystem, helping extend their reach.

The Harvard report provides an overview of misinformation in the current media ecosystem, discusses the psychology of fake news, identifies potential areas for further research on the topic, and presents three possible approaches to addressing the problem of fake news in the short term.

Making The Fight Against Fake News Bipartisan

First, the report explains that “bringing more conservatives into the deliberation process about misinformation is an essential step in combating fake news,” adding that fake news, “for the moment at least,” is a problem on “predominantly the right side of the political spectrum.” It further notes that corrections to fake news are “most likely to be effective when coming from a co-partisan with whom one might expect to agree.” From the report:

Bringing more conservatives into the deliberation process about misinformation is an essential step in combating fake news and providing an unbiased scientific treatment to the research topic. Significant evidence suggests that fake news and misinformation impact, for the moment at least, predominantly the right side of the political spectrum (e.g., Lazer n.d., Benkler, 2017). Research suggests that error correction of fake news is most likely to be effective when coming from a co-partisan with whom one might expect to agree (Berinsky, 2017). Collaboration between conservatives and liberals to identify bases for factual agreement will therefore heighten the credibility of the endeavors, even where interpretations of facts differ. Some of the immediate steps suggested during the conference were to reach out to academics in law schools, economists who could speak to the business models of fake news, individuals who expressed opposition to the rise in distrust of the press, more center-right private institutions (e.g. Cato Institute, Koch Institute), and news outlets (e.g. Washington Times, Weekly Standard, National Review).

Fake news is not inherently a conservative phenomenon, but as the report suggests, it is currently an asymmetric political problem. As a result, the media debate over fake news has become similarly partisan. Following the 2016 election, while some in right-wing media acknowledged the problem, other figures dismissed concerns about fake news as “silly” and called fake news simply “satire.” Along with the president and his administration, they have delegitimized the term “fake news” by using it to erroneously label credible news sources, and they have attacked the fact-checking organizations that social media platforms like Facebook partnered with to fight fake news. The report’s recommendation to bring conservatives into the deliberation process -- and its suggestions of organizations that could be engaged -- could help counter this reactionary backlash to the fight against fake news.

Strengthening Reliable Information Sources And Broadening Their Reach

Second, the report says that “we need to strengthen trustworthy sources of information,” partly by “seek[ing] stronger future collaborations between researchers and the media” and “support[ing] efforts to strengthen local reporting.” It also says that “the identification of fake news and interventions by platforms” are “pretty straightforward,” suggesting that it would help to identify “the responsibilities of the platforms” where fake news spreads and get “their proactive involvement” in fighting it. From the report:

[T]he apparent concentration of circulated fake news (Lazer et al., n.d.) makes the identification of fake news and interventions by platforms pretty straightforward. While there are examples of fake news websites emerging from nowhere, in fact it may be that most fake news comes from a handful of websites. Identifying the responsibilities of the platforms and getting their proactive involvement will be essential in any major strategy to fight fake news. If platforms dampened the spread of information from just a few web sites, the fake news problem might drop precipitously overnight. Further, it appears that the spread of fake news is driven substantially by external manipulation, such as bots and “cyborgs” (individuals who have given control of their accounts to apps). Steps by the platforms to detect and respond to manipulation will also naturally dampen the spread of fake news.
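
The intervention the report describes -- dampening the spread of information from a handful of websites -- can be illustrated with a minimal sketch. The Python below assumes a hypothetical feed-ranking pipeline; the flagged domain list, the `base_score` field, and the penalty factor are all illustrative placeholders, not any platform’s actual system.

```python
from urllib.parse import urlparse

# Illustrative stand-in for the "handful of websites" the report says
# account for most circulated fake news (hypothetical domains).
FLAGGED_DOMAINS = {"example-fake-news.com", "another-purveyor.net"}

# Hypothetical penalty; real ranking systems are far more complex.
DAMPENING_FACTOR = 0.1

def domain_of(url: str) -> str:
    """Extract the hostname of a shared link, normalizing "www."."""
    return urlparse(url).netloc.lower().removeprefix("www.")

def rank_score(item: dict) -> float:
    """Down-weight an item's score if it links to a flagged domain."""
    score = item["base_score"]
    if domain_of(item["url"]) in FLAGGED_DOMAINS:
        score *= DAMPENING_FACTOR
    return score

feed = [
    {"url": "https://example-fake-news.com/story", "base_score": 9.0},
    {"url": "https://www.example-newspaper.com/article", "base_score": 7.5},
]

# After dampening, the flagged item falls below the legitimate one.
for item in sorted(feed, key=rank_score, reverse=True):
    print(item["url"], rank_score(item))
```

Because the report suggests that circulated fake news is concentrated in a few sources, even a simple domain-level penalty like this one would, in principle, affect a large share of the circulating material while leaving the rest of the feed untouched.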

Internet platforms like Facebook and Google have taken some steps to temper the spread of fake news. Facebook, for example, made an initial move to address the problems with its algorithms that allowed fake news to spread and become trending topics. Yet the platform continues to verify fake news purveyors’ Facebook pages, lending them a sense of legitimacy, and misinformation continues to be disseminated via the social network. Meanwhile, Google still allows fake news purveyors to use its advertising network, as do other ad networks.

Creating A Cooperative Infrastructure For Additional Research On Social Media And The Spread Of Misinformation

Finally, the report calls for academics to partner with companies and organizations to build a cooperative infrastructure for social media research and to help “develop datasets that are useful for studying the spread of misinformation online and that can be shared for research purposes and replicability.” The report details the value academics can bring to the study of how misinformation spreads, but notes that “accessing data for research is either impossible or difficult, whether due to platform constraints, constraints on sharing, or the size of the data”:

With very little collaboration academics can still join forces to create a panel of people’s actions over time, ideally from multiple sources of online activity both mobile and non-mobile (e.g. MediaCloud, Volunteer Science, IBSEN, TurkServer). The cost for creating and maintaining such a panel can potentially be mitigated by partnering with companies that collect similar data. For example, we could seek out partnerships with companies that hold web panels (e.g. Nielsen, Microsoft, Google, ComScore), TV consumption (e.g. Nielsen), news consumption (e.g. Parsely, Chartbeat, The New York Times, The Wall Street Journal, The Guardian), polling (e.g. Pollfish, YouGov, Pew), voter registration records (e.g. L2, Catalist, TargetSmart), and financial consumer records (e.g. Experian, Axciom, InfoUSA). Of course, partnerships with leading social media platforms such as Facebook and Twitter are possible. Twitter provides APIs that make public data available, but sharing agreements are needed to collect high-volume data samples. Additionally, Facebook would require custom APIs. With more accessible data for research purposes, academics can help platforms design more useful and informative tools for social news consumption.
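
As a small illustration of the kind of public-data collection the report mentions, here is a hedged sketch that uses Twitter’s v2 recent-search API to pull recent tweets containing links and tally the domains they point to -- the sort of raw signal a shared misinformation dataset might start from. It assumes the researcher already holds a developer bearer token; the query string and the domain tally are illustrative choices, not the report’s methodology, and high-volume collection would require the sharing agreements the report describes.

```python
import os
from collections import Counter
from urllib.parse import urlparse

import requests

# Assumes a Twitter developer bearer token is set in the environment.
BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]

# Twitter API v2 recent-search endpoint (public tweets, last 7 days).
SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"

# Illustrative query: English tweets with links, excluding retweets.
params = {
    "query": "has:links lang:en -is:retweet",
    "max_results": 100,
    "tweet.fields": "entities",  # include expanded URLs in the payload
}

resp = requests.get(
    SEARCH_URL,
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    params=params,
    timeout=30,
)
resp.raise_for_status()

# Count the domains of links shared in the returned tweets.
domains = Counter()
for tweet in resp.json().get("data", []):
    for url in tweet.get("entities", {}).get("urls", []):
        expanded = url.get("expanded_url") or url.get("url")
        if expanded:
            domains[urlparse(expanded).netloc.lower()] += 1

for domain, count in domains.most_common(10):
    print(f"{count:4d}  {domain}")
```

A one-off call like this stays within Twitter’s public data access; building the longitudinal panels the report envisions would mean running such collection continuously and joining it with the other data sources listed above.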