On October 3, Facebook Vice President of Global Affairs and Communications Nick Clegg appeared on CNN’s Reliable Sources to preempt a 60 Minutes interview with a Facebook whistleblower that aired the same day. Clegg attempted to defend Facebook’s content moderation and downplay The Wall Street Journal’s reporting on internal Facebook research revealing the company’s disregard for the issues that plague the platform. In the interview, he made a number of claims that were false or misleading -- as has been the company’s habit whenever it faces tough questions.
Claim: Facebook’s influence on January 6 was negligible to nonexistent
“I think if the assertion is that January the 6 can be explained because of social media, I just think that's ludicrous," Clegg said. "The responsibility for the violence on January the 6th and the insurrection on that day lies squarely with the people who inflicted the violence and those who encouraged them, including then-President Trump, and, candidly, many other people elsewhere in the media who were encouraging the assertion that the election was stolen. And look, I think it gives people false comfort to assume that there must be a technological or technical explanation for the issues of political polarization in the United States.”
Reality: Facebook was a central tool in organizing the events on January 6
Clegg’s claims ignore the central role Facebook played in the spread of election misinformation and sidestep the platform's well-documented use by organizers of the Stop the Steal events, including the insurrection at the U.S. Capitol on January 6.
In an internal memo leaked to BuzzFeed News in April, Facebook itself acknowledged that organizers of the Capitol insurrection used its platform to plan the event. The memo conceded that the company failed to recognize such coordination even though it had access to extensive data, while journalists and researchers had reported extensively on the presence of election misinformation and “Stop the Steal” activity on Facebook.
Clegg’s assertion that the responsibility for January 6 “lies squarely with the people who inflicted the violence and those who encouraged them, including then-President Trump,” conveniently ignores the fact that Facebook was a primary tool Trump used to sow doubt about the election and encourage violence. Between January 1, 2020, and January 6, 2021, Trump repeatedly used his Facebook page to attack his critics and spread harmful misinformation; such posts made up roughly 24% of his total posts and roughly 36% of all his interactions in that period. He also pushed election misinformation in 363 different posts.
And even though Trump is currently suspended from Facebook, the company continues to let him abuse the platform. Between June and August, Trump’s PAC paid Facebook up to $220,000 to run ads that include calls for his supporters to “defend our elections” and claims that only Trump can “save our country.”
Claim: Facebook performs a huge amount of research and shares as much as it can
“We release a huge amount of research," Clegg said. "We have a thousand Ph.D.s working in Facebook. They publish or are involved in thousands of peer reviewed academic papers and academic conferences -- I think 400 papers this year alone. We run, I think, the world's largest COVID survey in cooperation with two universities, Maryland university -- University of Maryland and Carnegie Mellon. We have an industry-leading project with a number of academics to look into how social media was used in the run-up to the U.S. elections. In fact, last week I announced that we are investing $50 million as an initial fund to fund research into augmenting virtual reality.
“So we do a huge amount of research. We share it with external researchers as much as we can. But do remember, and I'm not a researcher, but researchers will tell you that there's a world of difference between doing a peer-reviewed exercise in cooperation with other academics and preparing papers internally to provoke an informed internal discussion.”
Reality: Facebook controls access to data, lacks transparency, and penalizes researchers
Media Matters has repeatedly reported on Facebook’s habit of using research as a PR tool. And while Facebook regularly publicizes minor policy updates as a way of suggesting progress by the company, it also hides behind intentionally vague and ineffective content moderation policies to shield itself from accountability. In fact, these vague and opaque policies, along with a secret exemption program, have given preferential treatment to right-wing accounts that frequently post misinformation.
The company that supposedly “release[s] a huge amount of research” requires reporters and researchers to jump through hoops to access information and frequently takes active steps that hinder researchers’ ability to study the platform. Two years ago, Facebook removed Graph Search, a tool it had launched to make the platform more searchable and that was widely used by researchers. A report from The New York Times in July revealed that Facebook wanted to limit journalists’ and researchers’ access to CrowdTangle, a data analytics tool Facebook acquired in 2016 that allows users to search public social media posts. In the report, technology columnist Kevin Roose detailed an “internal battle over data transparency” at Facebook after researchers used CrowdTangle and repeatedly found that right-leaning pages (and misinformation) dominated on the platform. In August, Facebook cut off a team of New York University researchers’ access to the platform after their research on political ads began painting an unflattering picture of the company; Facebook claimed the researchers were breaching privacy protections.
It’s unclear what internal research Facebook maintains, but it is clear that the company does little to use that research to improve the safety of the platform. Facebook ignored its own internal data showing that its News Feed algorithm was fomenting political polarization, even while claiming to “help communities connect.” When public outcry following the 2020 election pushed the company to act, Zuckerberg reportedly authorized a change to the News Feed algorithm to increase the visibility of so-called “mainstream publishers” and reduce the visibility of ideologically aligned pages. The change, which was reported weeks after the election, didn’t have the stated effect and was quickly reversed.
The company also does not appear to be testing whether the interventions it has implemented on the platform are working. Its labeling mechanism, for example, did nothing to quell misinformation and, in fact, may have backfired. Media Matters research shows that the 506 posts from Trump that received a warning label from Facebook in 2020 and 2021 earned, on average, over two times more interactions per post than his posts overall. Yet the company has touted misinformation labels as a primary means of addressing misinformation on the platform.
Claim: Facebook uses the independent oversight board to hold the company to account
“We're doing something which no one else is doing in the industry," Clegg added, "because in a sense why should people believe in a sense our data, people don’t want us to be the judge and jury of our own performance. We are submitting that data, those reports, to an independent audit. No one else is doing that. No one else has set up an independent oversight board to hold us to account. So we accept transparency, we accept criticism, we accept where that criticism is fair, that we need to act on it.”
Reality: The Facebook Oversight Board is a sham that Facebook uses to save face over tough decisions
The Facebook Oversight Board’s model, in which a panel chosen by Facebook rules on a handful of individual pieces of content every few months, is completely incompatible with the idea of real accountability. What's more, Facebook has used the oversight board to avoid making tough decisions, including the decision on whether to permanently ban Trump from the platform after he used it to incite violence.
Facebook takes PR victory laps every time the oversight board makes a ruling, but reading the fine print reveals how little substantial change the oversight board has instigated at the company. When the oversight board issued 19 policy recommendations after reviewing Trump’s case, Facebook claimed it took “substantial steps” to address those recommendations and was “committed to fully implementing 15” of them. But as with many examples of policy enforcement by the company, Facebook’s definition of “fully implemented” is inconsistent, and its responses are severely lacking and seem unlikely to address the platform’s repeated failures to clearly enforce its own policies.
Claim: Facebook’s reliance on advertisers means the company has no economic incentive to tolerate hateful content
“In the past it is true that there was more hate speech on Facebook than there should’ve been," Clegg said. "We applied a huge amount of resources and research -- and by the way, let me give you one very simple reason why this is such a misleading analogy. The people who pay our lunch are advertisers. Advertisers don't want their content next to hateful extreme or unpleasant content. We have absolutely no commercial incentive, no moral incentive, no company-wide incentive to do anything other than try and give the maximum number of people as much of a positive experience as possible, and that is what we do day in and day out.”
Reality: Facebook has so far shunned advertiser pressure and the platform remains riddled with hateful, toxic content
Media Matters and others have repeatedly reported on ways that the platform earns revenue on misinformation, conspiracy theories, and sensational and anti-LGBTQ content. When major corporations such as Walmart, GEICO, Allstate, Kellogg’s, Kohl’s, Dell, McDonald’s, Peloton, and Ikea quietly paused Facebook advertising in July 2020 as part of the #StopHateForProfit campaign, "Zuckerberg told employees he was reluctant to bow to the threats of a growing ad boycott, saying in private remarks that 'my guess is that all these advertisers will be back on the platform soon enough.’”
Claim: Facebook research shows that only a small minority of young Instagram users were negatively impacted by use of the platform
“Well, the vast majority of teen girls and indeed boys who've been covered by some of the surveys that you referred to, say that for the overwhelming majority of them, it either makes them feel better or it doesn't make very much difference one way or the other," Clegg said. “The thing that we’re -- I think everyone is quite rightly focusing on -- and again I don't think it's intuitively surprising if you're not feeling great about yourself already, that then, you know, going on to social media can actually make you feel a bit worse.”
He added, “The research you’re referring to earlier is about -- is simply first asking teens -- so first teens were asked on, I think, a measure of 12 measures, do you suffer from anxiety, from sleeplessness, from food issues, from body image issues and so on. Then those teens who said yes to any of that were then asked and do what do you feel better, or the same, or the worse on those 12 counts when you go on to onto Instagram. And on all counts, the people who said either together it made no difference or it made them feel better outweigh those who, the minority, who said it made them feel worse. For those who made it feel worse, particularly when it came to body issues, body image issues for teenage girls, we want to understand what we can do to help them in those instances."
Reality: Facebook has no substantive argument against the Wall Street Journal’s reporting and is dismissing its own researchers
Clegg’s statements mirror other recent statements from Facebook in response to The Wall Street Journal’s reporting that Facebook “knew the harmful effects its Instagram photo-sharing app was having on teenagers.” Journalists have pushed back against Facebook’s arguments, noting that the company’s annotations on the leaked internal documents dismissed its own researchers and their research.
In the spring of 2021, Tech Transparency Project and Reset Australia reported that “Facebook allowed ads for things like alcohol, drugs, and extreme weight loss to target teens as young as 13.” Following the report, the platform announced that it would no longer allow advertisers to target users under 18 based on their interests and activity. In September, Tech Transparency Project ran an ad experiment and found that advertisers can still target teens broadly as a group. Media Matters has also found that Instagram recommends bogus weight loss and wellness tips through the “Explore” page, even though this violates its stated policy. Such lapses call into question the sincerity of Clegg’s claims that “particularly when it came to body issues, body image issues for teenage girls, we want to understand what we can do to help them in those instances.”