The Guardian: Facebook's Attempt To Combat Fake News Is A Total Disaster
Written by Alex Kaplan
According to The Guardian, Facebook’s efforts to combat fake news on its platform have been “regularly ineffective,” appear to be “having minimal impact,” and may even be backfiring. The findings, from a Guardian review published May 16, come as experts have warned that Facebook’s tactics against fake news are unlikely to work and have recommended other approaches.
The outlet reviewed “false news articles” on Facebook and interviewed fact-checkers who have partnered with the social media platform, as well as writers who produce fake news content. Under that partnership, articles that fact-checkers debunked as fake news were supposed to carry a “disputed” label when shared by other users. The Guardian found that “articles formally debunked” by those fact-checkers “frequently remain on the site without the ‘disputed’ tag warning users about the content.” Additionally, “the label often comes after the story has already gone viral and the damage has been done,” and the labeling sometimes has the opposite effect, as traffic to a flagged story can actually increase. Recently, professors from Harvard and Northeastern universities warned that Facebook’s labeling would likely be insufficient because the “more you’re exposed to things that aren’t true, the more likely you are to eventually accept them as true.” The professors also urged Facebook to disclose its data so “independent researchers” could analyze the effectiveness of its fact-checking system. But, as The Guardian reported, Facebook refused to share that “data or information” with the newspaper, leaving it “unclear to what extent the flag [by fact-checkers] actually limits the spread of propaganda.” From The Guardian’s report:
A Guardian review of false news articles and interviews with fact-checkers [who have partnered with Facebook] and writers who produce fake content suggests that Facebook’s highly promoted initiatives are regularly ineffective, and in some cases appear to be having minimal impact.
Articles formally debunked by Facebook’s fact-checking partners – including the Associated Press, Snopes, ABC News and PolitiFact – frequently remain on the site without the “disputed” tag warning users about the content. And when fake news stories do get branded as potentially false, the label often comes after the story has already gone viral and the damage has been done. Even in those cases, it’s unclear to what extent the flag actually limits the spread of propaganda.
[...]
While some of the fact-checking groups said the collaboration has been a productive step in the right direction, a review of content suggests that the labor going into the checks may have little consequence.
ABC News, for example, has a total of 12 stories on its site that its reporters have debunked as part of its Facebook partnership. But with more than half of those stories, versions can still be shared on Facebook without the disputed tag, even though they were proven false.
[...]
Facebook refused to provide data or information on the number of articles that have been tagged as disputed, how a flag impacts traffic and engagement, if there are specific websites most frequently cited and how long after publication the flags are typically added. A spokesman said “we have seen that a disputed flag does lead to a decrease in traffic and shares”, but declined to elaborate.
The Guardian’s review also found that the “disputed” label can backfire with conservative readers, prompting them to share a flagged story even more. A former fake news writer told the newspaper, “A far-right individual who sees it’s been disputed by Snopes, that adds fuel to the fire and entrenches them more in their belief.” This statement is not surprising, given that right-wing outlets have repeatedly attacked and tried to delegitimize fact-checking websites and even the term “fake news” itself. From The Guardian’s report:
When Facebook’s new fact-checking system labeled a Newport Buzz article as possible “fake news”, warning users against sharing it, something unexpected happened. Traffic to the story skyrocketed, according to Christian Winthrop, editor of the local Rhode Island website.
“A bunch of conservative groups grabbed this and said, ‘Hey, they are trying to silence this blog – share, share share,’” said Winthrop, who published the story that falsely claimed hundreds of thousands of Irish people were brought to the US as slaves. “With Facebook trying to throttle it and say, ‘Don’t share it,’ it actually had the opposite effect.”
[...]
Jestin Coler, a writer who got widespread attention for the fake news he published last year, said it was hard to imagine Facebook’s effort having any impact.
“These stories are like flash grenades. They go off and explode for a day,” said Coler, who said he is no longer publishing false news. “If you’re three days late on a fact check, you already missed the boat.”
He also noted that many consumers of fake news won’t be swayed by a “disputed” tag given their distrust of the media and fact-checkers: “A far-right individual who sees it’s been disputed by Snopes, that adds fuel to the fire and entrenches them more in their belief.”
The review of Facebook’s ongoing attempts to battle fake news on its platform comes just weeks after Harvard Kennedy School’s Shorenstein Center published a study recommending steps to fight the problem. These steps included making the effort bipartisan by engaging “center-right private institutions” and “news outlets”; strengthening reliable and credible information sources and broadening their reach; and having social media platforms such as Facebook and Google actually share their data on fake news with academics so that they can gauge the effectiveness of these companies’ efforts.