YouTube outsources truth to Wikipedia

YouTube’s solution to conspiracy theory videos? Let Wikipedia handle it. There are three big reasons that will not work.


YouTube has a conspiracy theory problem. The platform is full of conspiracy theory videos, and its recommendation algorithm moves viewers up a ladder of engagement, feeding them a daisy chain of suggested videos that becomes more radical with each new click. Last week, Zeynep Tufekci outlined this process in an op-ed for The New York Times, making the point that what “keeps people glued to YouTube” is that its “algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with — or to incendiary content in general.”

Conspiracy theory videos that correlate with news events go viral on YouTube with alarming regularity, often spreading misinformation and lies about real people in the process. Last month, YouTube was forced to remove a conspiracy theory video alleging that teenage Parkland student David Hogg was a paid crisis actor after it became the platform’s top trending video. False information about Hogg and his family spread on YouTube for days before the company removed the smears. This week, YouTube admitted that it didn’t know why an InfoWars video claiming that Antifa members are the “prime suspects” in the mysterious package bombings in Austin, Texas, appeared at the top of search results. YouTube has reportedly informed InfoWars that the site is on its second strike and dangerously close to being permanently banned from the video-sharing platform. But even if YouTube follows through with its threat, InfoWars is merely a drop in the bucket.

YouTube CEO Susan Wojcicki was asked about the problem during a panel at South by Southwest (SXSW) this week and previewed the platform’s latest attempt at a solution: information cues. YouTube will apparently keep a running list of known conspiracy theories, and videos referring to these conspiracies will include a text box underneath them with links to Wikipedia articles challenging the claims. You can see how this would look on YouTube’s platform here.

I have some bad news for Wojcicki. Adding “information cues” isn’t going to solve the problem. It might actually make it worse.

It passes the buck: Tech platforms don’t want to be held responsible for the content on their sites. Both Facebook and Twitter have made it clear that they don’t want to be “arbiters of truth.” The platforms have also pushed back hard against the idea that they are media companies, continually arguing that they’re neutral venues where individuals and media companies publish content. Yet the tech platforms seem more than willing to outsource the truth to other entities like Snopes, The Associated Press, and now Wikipedia. Determining what is and isn’t true isn’t something tech platforms should feel free to outsource, especially not to Wikipedia’s community of volunteer editors, who weren’t informed in advance, much less consulted, about the feasibility of using their website this way.

It tips off the trolls: If we’ve learned anything over the past couple of years, it’s that trolls are quite good at organizing to keep ahead of the tech platforms’ attempts to curb them. Whether it’s Russian trolls getting Americans to RSVP for events on Facebook, white nationalists attempting to flood Rotten Tomatoes with fake movie reviews, or Nazis taking on the video gaming platform Steam, there’s no denying that trolls are constantly manipulating the rules of the game. The platforms can’t keep up with things as they are, let alone plan for the next thing. And now Wojcicki’s “information cues” announcement gives trolls a heads-up. Information cues aren’t even live yet, but hostile actors, foreign and domestic, can already start planning how they’ll game the Wikipedia pages that debunk conspiracy theories. I’m sure the volunteer editors at Wikipedia are really looking forward to the onslaught!

It won’t have the desired effect: Information cues have been tried before and failed miserably. Recall Facebook’s attempt to have fact-checkers such as Snopes dispute fake news. It failed, and Facebook altered the program in December so that fact checks now show up simply as “related articles.” It turns out that flagging content as potentially untrue can backfire, further entrenching mistaken beliefs. Other research on misinformation has found similar effects. YouTube’s information cues have the potential to make its already viral conspiracy theory problem even worse.

As long as conspiracy theories are allowed to live online, they’ll continue to flourish. The trolls who disseminate them have mastered almost every platform, and they know that tech companies will take only half-steps to stop them. Meanwhile, tech companies offer no protection for the real people who become entangled in organized conspiracy theory campaigns and whose professional and personal lives can be upended as a result.