Misinformer of the Year: Facebook CEO Mark Zuckerberg

Facebook's “personalized newspaper” became a global clearinghouse for misinformation


In late August, as Hurricane Harvey crashed through the Texas coastline, millions of Americans searching for news on the crisis were instead exposed to a toxic slurry of fabrications. Fake news articles with headlines like “Black Lives Matter Thugs Blocking Emergency Crews From Reaching Hurricane Victims” and “Hurricane Victims Storm And Occupy Texas Mosque Who Refused To Help Christians” went viral on Facebook, spreading disinformation that encouraged readers to think the worst about their fellow citizens.

When Facebook set up a crisis response page a month later -- following a mass shooting in Las Vegas, Nevada, that killed dozens and injured hundreds -- the company intended to provide a platform for users in the area to confirm they were safe and to help people across the country learn how to support the survivors and keep up to date on events as they unfolded. But the page soon became a clearinghouse for hyperpartisan and fake news articles, including one that baselessly described the shooter as a “Trump-hating Rachel Maddow fan.”

In Myanmar, an ethnic cleansing of the Muslim Rohingya minority population was aided by misinformation on Facebook, the only source of news for many people in the country. In India, The Washington Post reported, “false news stories have become a part of everyday life, exacerbating weather crises, increasing violence between castes and religions, and even affecting matters of public health.” In Indonesia, disinformation spread by social media stoked ethnic tensions and even triggered a riot in the capital of Jakarta.

Throughout the year, countries from Kenya to Canada either fell prey to fake news efforts to influence their elections, or took steps they hoped would quell the sort of disinformation campaign that infected the 2016 U.S. presidential race.

Last December, Media Matters dubbed the fake news infrastructure 2016’s Misinformer of the Year, our annual award for the media figure, news outlet, or organization that stands out for promoting conservative lies and smears in the U.S. media. We warned that the unique dangers to the information ecosystem meant “merely calling out the lies” would not suffice, and that “the objective now is to protect people from the lies.” We joined numerous experts and journalists in pointing to weaknesses in our nation’s information ecosystem, exposed by the presidential election and fueled by key decisions made by leading social media platforms.

Twelve months later, too little has changed in the United States, and fake news has infected democracies around the world. Facebook has been central to the spread of disinformation, stalling and obfuscating rather than taking responsibility for its outsized impact.

Media Matters is recognizing Facebook CEO Mark Zuckerberg as 2017’s Misinformer of the Year.

He narrowly edges out Larry Page, whose leadership of Google has produced similar failures to rein in misinformation. Other past recipients include the Center for Medical Progress (2015), George Will (2014), CBS News (2013), Rush Limbaugh (2012), Rupert Murdoch and News Corp. (2011), Sarah Palin (2010), Glenn Beck (2009), Sean Hannity (2008), ABC (2006), Chris Matthews (2005), and Bill O'Reilly (2004).

Facebook is the most powerful force in journalism

“Does Even Mark Zuckerberg Know What Facebook Is?” Max Read asked in an October profile for New York magazine. Rattling off statistics pointing to the dizzying reach and breadth of a site with two billion monthly active users, Read concluded that the social media platform Zuckerberg launched for his Harvard peers in 2004 “has grown so big, and become so totalizing, that we can’t really grasp it all at once.”

Facebook’s sheer size and power make comparisons difficult. But Zuckerberg himself has defined at least one key role for the website. In 2013, he told reporters that the redesign of Facebook’s news feed was intended to “give everyone in the world the best personalized newspaper we can.” This strategy had obvious benefits for the company: If users treated the website as a news source, they would log on more frequently, stay longer, and view more advertisements.

Zuckerberg achieved his goal. Forty-five percent of U.S. adults now say they get news on Facebook -- far more than on any other social media platform -- and the share of people using the website for that purpose is rising. Facebook is the country’s largest single source of news, and indisputably its most powerful media company.

That goal of becoming its users' personalized newspaper assuredly served Facebook's interests. The company makes money from its massive user base, so the value of any particular piece of content lies in whether it keeps people engaged with the website. (Facebook reported in 2016 that users spend an average of 50 minutes per day on the platform.) But ultimately, Zuckerberg’s declaration showed that he had put his company at the center of the information ecosystem -- crucial in a democracy because of its role in shaping public opinion -- while refusing to be held accountable for the results.

That failure to take responsibility exposes another key difference between Facebook and the media companies Zuckerberg said he wanted to ape. Newspapers face a wide variety of competitors, from other papers to radio, television, and digital news products. If a newspaper gains a reputation for publishing false information, or promoting extremist views, it risks losing subscribers and advertising dollars to more credible rivals and potentially going out of business. But Facebook is so popular that it has no real competitors in the space. For the foreseeable future, it will remain a leading force in the information ecosystem.

Fake news, confirmation bias, and the news feed


Facebook’s news feed is designed to give users exactly what they want. And that’s the problem.

All content looks largely the same on the feed -- regardless of where it comes from or how credible it is -- and succeeds based on how many people share it. Facebook’s mysterious algorithm favors “content designed to generate either a sense of oversize delight or righteous outrage and go viral,” serving the site’s billions of users the types of posts they previously engaged with in order to keep them on the platform.

When it comes to political information, Facebook largely helps users seek out information that confirms their biases, with liberals and conservatives alike receiving news that they are likely to approve of and share.

A wide range of would-be internet moguls -- everyone from Macedonian teenagers eager to make a quick buck to ideological true believers hoping to change the political system -- have sought to take advantage of this tendency. They have founded hyperpartisan websites and churned out content, which they then share on associated Facebook pages. If a story is interesting enough, it goes viral, garnering user engagement that leads to the story popping up in more Facebook feeds. The sites’ owners profit when readers click the story on Facebook and are directed back to the hyperpartisan website, driving up traffic numbers and boosting advertising payouts. The more extreme the content, the more it is shared, and the more lucrative it becomes. Facebook did not create ideological echo chambers, but it has certainly amplified the effect to an unprecedented degree.

Intentional fabrications packaged as legitimate news have become just another way for hyperpartisan websites to generate Facebook user engagement and cash in, launching outlandish lies into the mainstream. Users seem generally unable to differentiate between real and fake news, and as they see more and more conspiracy theories in their news feed, they become more willing to accept them.

Facebook’s 2016 decision to bow to a conservative pressure campaign accelerated this process. That May, a flimsy report claimed that conservative outlets and stories had been “blacklisted” by the Facebook employees who selected the stories featured in its Trending Topics news section, a feature that helps make stories go viral. The notion that Facebook employees might be suppressing conservative news triggered an angry backlash from right-wing media outlets and Republican leaders, who declared that the site had a liberal bias. In an effort to defuse concerns, Zuckerberg and his top executives hosted representatives from Donald Trump’s presidential campaign, Fox News, the Heritage Foundation, and other bastions of the right at Facebook’s headquarters. After hearing their grievances over a 90-minute meeting, Zuckerberg posted on Facebook that he took the concerns seriously and wanted to ensure that the community remained a “platform for all ideas.”

While Facebook’s own internal investigation found “no evidence of systematic political bias” in the selection or prominence of stories featured in the Trending Topics section, the company announced the following week that its curators would no longer rely on a list of credible journalism outlets to help them determine whether a topic was newsworthy, thereby removing a key method of screening the credibility of stories. And in late August, as the presidential election entered its stretch run, Facebook fired its “news curators,” putting Trending Topics under the control of an algorithm. The company promised that removing the human element “allows our team to make fewer individual decisions about topics.” That’s true. But the algorithm promoted a slew of fabricated stories from bogus sources in place of news articles from credible outlets.

This confluence of factors -- users seeking information that confirms their biases, sites competing to give it to them, and a platform whose craven executives deliberately refused to take sides between truth and misinformation -- gave rise to the fake news ecosystem.

The result is a flood of misinformation and conspiracy theories pouring into the news feeds of Facebook users around the world. Every new crisis seems to bring with it a new example of the dangerous hold Facebook has over the information ecosystem.

Obfuscation and false starts for enforcement

Zuckerberg resisted cracking down on fake news for as long as he possibly could. Days after the 2016 presidential election, he said it was “crazy” to suggest fake news on Facebook played a role in the outcome. After an uproar, he said that he took the problem “seriously” and was “committed to getting this right.” Seven months later, after using his control of a majority of Facebook’s voting shares to vote down a proposal for the company to publicly report on its fake news efforts, the CEO defended the company’s work. Zuckerberg said Facebook was disrupting the financial incentives for fake news websites, and he touted a new process by which third-party fact-checkers could review articles posted on the site and mark them as “disputed” for users. This combination of small-bore proposals, halting enforcement, and minimal transparency has characterized Zuckerberg’s approach to the problem.

Under Facebook’s third-party fact-checking system, rolled out in March to much hype, the website’s users have the ability to flag individual stories as potential “false news.” A fact-checker from one of a handful of news outlets -- paid by Facebook and approved by the International Fact-Checking Network at Poynter, a nonprofit journalism institute -- may then review the story and, if the fact-checker deems it inaccurate, place an icon on the story that warns users it has been “disputed.”

This is not a serious effort to police an information ecosystem encompassing two billion monthly users. It’s a fig leaf: Facebook borrows the credibility of the outlets it has enlisted in the effort while creating a conflict of interest that limits those news organizations’ ability to scrutinize the company.

The program places the onus first on users to identify the false stories and then on a small group of professionals from third parties -- including The Associated Press, Snopes, ABC News, and PolitiFact -- to take action. The sheer size of Facebook means the fact-checkers cannot hope to review more than a tiny fraction of the fake news circulating on the website. The Guardian’s reviews of the effort have found it unclear whether the flagging process actually impedes the spread of false information: The “disputed” tag is often added only long after a story has gone viral, and other versions of the same story can circulate freely without it. The fact-checkers themselves have warned that it is impossible for them to tell how effective their work is because Facebook won’t share information about their impact.

Zuckerberg’s happy talk about the company’s efforts to demonetize the fake news economy also continues to ring hollow. According to Sheryl Sandberg, Facebook’s chief operating officer, this means the company is “making sure” that fake news sites “aren’t able to buy ads on our system.” It’s unclear whether that is true, since Facebook refuses to be transparent about what it’s doing. But whether the fake news sites buy Facebook ads or not, the same websites continue to benefit from the viral traffic that Facebook makes possible. They can even benefit from having their associated Facebook pages verified. (Facebook verifies pages for public figures, brands, and media outlets with a blue check mark to confirm it is “the authentic Page” or profile for the associated group or person, imbuing the page with what inevitably looks like a stamp of approval from the social media giant.)

Facebook’s response to criticism of its political advertising standards has been more robust. During the 2016 presidential election, the company was complicit in what the Trump campaign acknowledged was a massive “voter suppression” effort. Trump’s digital team spent more than $70 million on Facebook advertising, churning out hundreds of thousands of microtargeted “dark” ads, which appeared only on the timelines of the target audience and did not include disclosures that they were paid for by the campaign. A hefty portion of those ads targeted voters from major Democratic demographics with negative messages about Hillary Clinton, intending to dissuade them from going to the polls. Facebook employees embedded with the Trump team aided this effort, helping with targeting and ensuring the ads were approved through an automated system. But in October, the company received major blowback following the disclosure that Russian-bought ads were targeted using similar strategies. Facebook subsequently announced that in the future, election ads would be manually reviewed by an employee to ensure they meet the company’s standards. The company plans to require disclosure of who paid for political ads and to increase transparency by ensuring that users can view all ads a particular page has purchased. Those are meaningful steps, but Facebook should go further by making clear that the company opposes voter suppression and will instruct its employees not to approve ads intended for that purpose.

It’s notable, of course, that Facebook’s effort to curb abuse of its political advertising came only after U.S. senators unveiled legislation requiring stricter disclosure for online political ads. The company took action in order to preempt a meaningful federal response. With no comparable pressure over fake news, Facebook has been left to its own devices, responding only as needed to quiet public anger at its failures. At every step, experts have warned that Facebook’s efforts to push back against fake news have been insufficient and poorly implemented. The company is doing as little as it can get away with.

What can be done?


Hoaxes and disinformation have always been a part of human society, with each new generation enlisting the era’s dominant forms of mass communication in their service. But Facebook’s information ecosystem and news feed algorithm have proven particularly ripe for abuse, allowing fake news purveyors to game the system and deceive the public. Those bad actors know that user engagement is the major component in ensuring virality, and they have engineered their content with that in mind, leading to a system where Facebook turbocharges false content from disreputable sources.

Facebook could fight back against fake news by including an authority component in its algorithm, ensuring that articles from more credible outlets have a better chance of going viral than those from less credible sources. Facebook’s algorithm should recognize that real news outlets like The New York Times or CNN are more credible than websites that serve up deliberate fabrications, and respond accordingly, the way Google’s (admittedly imperfect) search engine does.
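As a rough illustration of what such an authority component could look like -- a minimal sketch under assumed numbers, not Facebook's actual (and proprietary) ranking system -- a credibility weight could simply scale an engagement-driven score. Every name and value below is hypothetical:

```python
# Minimal sketch of an authority-weighted ranking score. All names and
# numbers are hypothetical; Facebook's real news feed ranking is proprietary.

# Hypothetical credibility weights, e.g. informed by fact-checker track records.
SOURCE_CREDIBILITY = {
    "nytimes.com": 1.0,
    "cnn.com": 0.9,
    "fabricationfarm.example": 0.1,
}
DEFAULT_CREDIBILITY = 0.5  # unknown sources get a neutral weight


def engagement_score(likes: int, shares: int, comments: int) -> float:
    """Engagement-only scoring: the dynamic that currently rewards virality."""
    return likes + 2.0 * shares + 1.5 * comments


def rank_score(domain: str, likes: int, shares: int, comments: int) -> float:
    """Scale engagement by source credibility, so a fabrication farm needs
    vastly more engagement than a credible outlet to reach the same feed
    placement."""
    weight = SOURCE_CREDIBILITY.get(domain, DEFAULT_CREDIBILITY)
    return weight * engagement_score(likes, shares, comments)


# A viral hoax can still rank below a modestly shared story from a
# credible outlet:
print(rank_score("fabricationfarm.example", 5000, 4000, 1000))  # 1450.0
print(rank_score("nytimes.com", 1000, 800, 400))                # 3200.0
```

Under weights like these, virality alone no longer guarantees reach; a source’s track record becomes part of the equation.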

This will also require Facebook to stop conferring authority on pages that do not deserve it, stripping verified tags from those that regularly traffic in fake news.

Facebook also has a serious problem with bots: software that mimics human behavior and cheats the company’s algorithm, creating fake engagement and sending stories viral. The company will need to step up its efforts to identify algorithmic anomalies caused by these bots and develop heightened countermeasures, which should include minimizing known bots’ impact on users.
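One crude way to surface such anomalies -- sketched here under assumed baseline figures and hypothetical names; Facebook’s actual countermeasures are not public -- is to flag stories whose share velocity deviates sharply from the historical norm for comparable posts:

```python
# Toy sketch of engagement-anomaly detection. The function name, numbers, and
# threshold are hypothetical; Facebook's real bot countermeasures are not public.

def is_engagement_anomaly(shares_per_hour: float,
                          baseline_mean: float,
                          baseline_std: float,
                          threshold: float = 3.0) -> bool:
    """Flag a story whose share velocity sits more than `threshold` standard
    deviations above the baseline for comparable organic posts -- a crude
    signal of the coordinated fake engagement bots use to send stories viral."""
    if baseline_std <= 0:
        return False
    z_score = (shares_per_hour - baseline_mean) / baseline_std
    return z_score > threshold

# Comparable organic posts average ~100 shares/hour (std ~50); a bot-amplified
# story spiking to 4,800 shares/hour stands out immediately.
print(is_engagement_anomaly(4800, baseline_mean=100, baseline_std=50))  # True
print(is_engagement_anomaly(180, baseline_mean=100, baseline_std=50))   # False
```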

If Facebook can find a way to change its algorithm to avoid clickbait, as it has claimed, it should be able to do the same to limit the influence of websites that regularly produce fabrications.

But algorithms alone won’t be enough to solve the problem. Facebook announced it would hire 1,000 people to review and remove Facebook ads that don’t meet its standards. So why hasn’t Zuckerberg done something similar to combat fake news? Why won’t Facebook, as one of the third-party fact-checkers suggested in an interview with The Guardian, hire “armies of moderators and their own fact-checkers” to solve that problem?

Given the collapse of the news industry over the last decade, there is no shortage of journalists with experience at verifying information and debunking falsehoods. Facebook could hire thousands of them; train them; give them the actual data that they need to determine whether they are effective and ensure that their rulings impact the ability of individual stories to go viral; and penalize websites, associated Facebook pages, and website networks for repeat offenses.

If Zuckerberg wants Facebook to be a “personalized newspaper,” he needs to take responsibility for being its editor in chief.

There is a danger, of course, in having a single news outlet with that much power over the U.S. information ecosystem. But Facebook already has that power, and there are compelling arguments for limiting it, whether through government regulation or antitrust action.

What’s clear is that Facebook will only act under pressure. Earlier this month, The Weekly Standard, a conservative magazine and regular publisher of misinformation, announced it had been approved to join Facebook’s fact-checking initiative. The magazine was founded by Bill Kristol, the former chief of staff to Vice President Dan Quayle, and is owned by right-wing billionaire Philip Anschutz. Stephen Hayes, The Weekly Standard’s editor-in-chief and the author of The Connection: How al Qaeda's Collaboration with Saddam Hussein Has Endangered America, praised Facebook for the decision, telling The Guardian: “I think it’s a good move for [Facebook] to partner with conservative outlets that do real reporting and emphasize facts.” Conservatives, including those at The Weekly Standard, had previously criticized the initiative, claiming the mainstream news outlets and fact-checking organizations Facebook partnered with were actually liberal partisans. Facebook responded by trying to “appease all sides.”

Nineteen months after Facebook’s CEO sat down with conservative leaders and responded to their concerns with steps that inadvertently strengthened the fake news infrastructure, his company remains more interested in bowing to conservative criticisms than stopping misinformation.

The very people who helped build Facebook now warn that it is helping to tear the world apart.

Sean Parker, Facebook’s founding president, lamented “the unintended consequences of a network when it grows to a billion or 2 billion people” during a November event. “It literally changes your relationship with society, with each other,” he said. “It probably interferes with productivity in weird ways. God only knows what it’s doing to our children’s brains.”

Chamath Palihapitiya, who joined Facebook in 2007 and served as vice president for user growth at the company, said earlier this month that he regrets helping build up the platform: “I think we have created tools that are ripping apart the social fabric of how society works.”

Facebook’s current employees also worry about the damage the company is doing, according to an October report in The New York Times detailing “growing concern among employees.” Last week, Facebook’s director of research tried to allay some of these fears with a press release titled “Hard Questions: Is Spending Time on Social Media Bad for Us?”

Mark Zuckerberg built a platform to connect people that has become an incredibly powerful tool to divide them with misinformation, and he’s facing increasing criticism for it. But he only ever seems interested in fixing the public relations problem, not the information one. That’s why he is 2017’s Misinformer of the Year.