Mark Zuckerberg | Media Matters for America

  • Facebook has a climate-denial problem

    Blog ››› LISA HYMAS


    Melissa Joskow / Media Matters

    Facebook, well-known as a breeding ground for misinformation, has a particular problem with disseminating false and misleading messages about climate change science. The platform spreads climate-denying videos and other posts, hosts climate-denying ads, and officially partners with climate-denying media outlets and organizations.

    Climate-denier videos get millions of views on Facebook

    A recent video promoting false arguments against climate change science got more than 5 million views on Facebook, The Guardian's Dana Nuccitelli reported last week.

    The video -- posted in June by The Daily Signal, an arm of the right-wing Heritage Foundation -- is titled "Why Climate Change Is Fake News." It features Marc Morano, a longtime spokesperson and blogger for the climate-denial cause, who outlines three things that "the left gets wrong about climate change." Nuccitelli points out that all three are common and easily debunked myths.

    Nuccitelli notes that Facebook's viewership numbers are likely inflated, but the video has still reached a lot of people:

    Fortunately, the exposure to Morano’s misinformation video is not as bad as it seems at first blush. Although Facebook implies the video has been viewed over 5m times, a “view” is counted after just three seconds, and videos on the site play automatically.

    Nevertheless, the video has been shared over 75,000 times, so it has certainly reached a wide audience. Facebook needs to come to terms with the fact that there is an objective reality. Even if Marc Morano sincerely believes humans aren’t causing global warming, that belief is false, and by continuing to host his myth-filled video, Facebook is misinforming tens of thousands, perhaps even millions of its users.

    As of this writing, the Daily Signal video has been "viewed" 6.3 million times and shared 102,000 times.

    Other denier videos get traction on Facebook as well. For example, one titled "GLOBAL WARMING IS THE BIGGEST FRAUD IN HISTORY," which features a rant by a climate-denying retired businessman, has gotten at least 2 million views by Facebook's count.

    Facebook is partnering with climate-denying organizations

    In an interview with Recode published on July 18, Facebook founder and CEO Mark Zuckerberg said that Facebook shouldn't remove content just because it's wrong. Using the example of Holocaust denial, he said it's “deeply offensive,” but “I don’t believe that our platform should take that down because I think there are things that different people get wrong. I don’t think that they’re intentionally getting it wrong.”

    Zuckerberg tried to clarify his views two days later, writing, "Our goal with fake news is not to prevent anyone from saying something untrue — but to stop fake news and misinformation spreading across our services. If something is spreading and is rated false by fact checkers, it would lose the vast majority of its distribution in News Feed."

    Joe Romm at ThinkProgress pointed out that Zuckerberg's approach is a major problem when it comes to climate denial, a particularly pernicious form of disinformation.

    One of Facebook's official fact-checking partners, the conservative magazine The Weekly Standard, has at times been dismissive of climate science and the need for climate action. A piece from July 2017, headlined "Dadaist Science," cast doubt on research that found a scientific consensus around the human causes of climate change. A piece from June 2017 criticized arguments being made on behalf of the Paris climate agreement. A long feature in the magazine from 2014 lauded climate-denying scientist Richard Lindzen.

    As Romm put it, "How can Facebook stop climate misinformation when its ‘fact-checkers’ are deniers?"

    Meanwhile, Facebook is partnering with the Heritage Foundation to determine whether the platform displays liberal bias -- a persistent but blatantly false claim made by conservatives. Heritage gets funding from the Kochs and other fossil fuel interests, and it has a long history of spreading climate denial. It brought us the "Why Climate Change Is Fake News" video mentioned above.

    And the Facebook Watch initiative, in which Facebook partners with media companies to produce original videos, has teamed up with Fox News, despite the network's long history of climate denial. Last month, when Facebook Watch debuted a slate of news shows from eight news publishers, Fox got more than twice as many slots per week as any other outlet.

    Facebook hosts climate-denying ads

    Late last year, a climate-denier blogger tried to buy ads linking to his site on five social-media platforms and found that Facebook was the only one that ran them with no pushback or questions asked.

    Leo Goldstein writes a blog at DefyCCC.com that focuses on what he calls "climate realism." The CCC in the URL stands for "cult of climate change." He also writes periodically for WattsUpWithThat, a more well-known climate-denial blog. He claims that climate change is a "pseudo-scientific fraud" and that "real scientists are against climate alarmism."

    Goldstein attempted to buy ads linking to his DefyCCC site. "In November and December 2017, I experimented with distributing the climate realism message using advertising options on Google and some other platforms," Goldstein wrote in a December 31 post on WattsUpWithThat. In a follow-up post the next day, Goldstein described the outcome of his experiment. The short version: Twitter refused to run his ads. Google ran some of his ads for a period of time. Facebook ran his ads with no pushback.

    "Facebook has been acting squeaky clean," Goldstein wrote. "None of my messages have been banned for content." Facebook is the only platform that gave him no problems, he reported.

    Since then, Goldstein has continued to place ads on Facebook, often under the banner of the Science For Humans and Freedom Institute. One ad he ran on Facebook in July claimed, "CO2 is the gas of life, not a pollutant. Climate alarmism is a dangerous cult."

    Facebook's advertising policies prohibit "deceptive, false, or misleading content," but the company has still allowed Goldstein to purchase space for ads like this.

    Zuckerberg talks the talk about climate change, but doesn't walk the walk

    Zuckerberg has expressed concern about climate change, arguing last year that the U.S. should not pull out of the Paris climate agreement and noting that rising temperatures are melting the glaciers at Glacier National Park.

    But he is not using the immense power of his platform to halt misinformation about climate change. To the contrary, Facebook is enabling and disseminating climate denial on multiple fronts. In addition to the problems outlined above, the platform helps bogus climate stories to spread -- like a hugely popular climate-denial story from YourNewsWire, a fake news site that Facebook refuses to ban even though fact-checkers have debunked its stories at least 80 times. And one of Facebook's most high-profile scandals involved handing user data over to Cambridge Analytica, a shady political consultancy that has close ties to fossil fuel companies and climate deniers.

    Media Matters named Zuckerberg as its misinformer of the year in 2017 for leading a company that is spreading misinformation far and wide. In the first half of 2018, he and Facebook have not changed their ways. Rather, Facebook is currently bending over backward to cater to conservatives who falsely claim that they're discriminated against on the platform, when in fact right-leaning Facebook pages get more interactions than left-leaning ones.

    Combating fake news is key to combating climate change. As an editorial in the journal Nature Communications argued last year, "Successfully inoculating society against fake news is arguably essential" if major climate initiatives are to succeed. Facebook could be a big part of the solution. But by kowtowing to conservatives, prioritizing profits over accuracy, and maintaining open-door policies toward misinformation, Facebook is entrenching itself as a major part of the problem.

  • This data conclusively debunks the myth of conservative censorship on Facebook

    We studied Facebook pages that post content about American political news. Conservatives are not being censored -- in fact, right-wing Facebook pages are thriving.

    Blog ››› MELISSA RYAN


    Sarah Wasko / Media Matters

    Right-wing politicians, pundits, and campaigns continually claim that Facebook and other tech platforms censor conservative content online. President Donald Trump’s campaign manager, Brad Parscale, frequently makes this argument. At every congressional hearing about social media, Republican members reliably make the same accusation. The GOP-controlled House Judiciary Committee has already held one hearing on the supposed censorship, and they’re scheduled to hold a second on July 17. Conservatives believe that attacking tech companies about so-called censorship will rally their base, and they plan to continue the attacks.

    Even though those making these accusations have offered no evidence to support censorship claims, Facebook responded by announcing a conservative bias review -- retaining former Republican Sen. Jon Kyl from Arizona and his lobbying firm to advise the company. (Kyl is now also shepherding Supreme Court nominee Judge Brett Kavanaugh through confirmation hearings.)

    It’s not the first time Facebook has reacted to claims of nonexistent right-wing censorship. In May 2016, a flimsy report claimed that conservative outlets and stories were “blacklisted” from Facebook’s Trending Topics section. To great fanfare, Facebook CEO Mark Zuckerberg met with conservatives, including a representative from Trump's campaign, and made promises to be good to them. A subsequent internal investigation revealed “no evidence of systematic political bias” in the Trending Topics section. But Facebook soon gave in anyway and fired the curators of the section, resorting instead to using an algorithm that routinely promoted fabricated stories from bogus sources. Add this cravenness to existing confirmation bias and plenty of dishonest actors willing to take advantage, and Facebook became a cesspool of fake news.

    The algorithm change that was announced in January 2018 was supposed to fix the fake news problem, which existed only because of previous failures at Facebook. And now, with Facebook rolling out the welcome mat for conservatives, we’re about to begin that cycle anew.

    And once again, conservatives are pressuring Facebook with a total myth. Media Matters conducted an extensive six-month study into alleged conservative censorship on Facebook and found no evidence that conservative content is being censored on the platform or that it is not reaching a large audience.

    We identified 463 Facebook pages that had more than 500,000 likes each and regularly posted content dealing with American political news. We analyzed data from these pages, week by week, between January 1, 2018, and July 1, 2018, to observe trends in post interactions (reactions, comments, and shares) and page likes. We found two key things:

    • Partisan pages had roughly equal engagement, and they had more engagement than nonpartisan pages: Right-leaning and left-leaning Facebook pages had virtually identical average interaction rates -- measurements of a page's engagement -- at 0.18 percent and 0.17 percent, respectively, while nonaligned pages had the lowest interaction rate at 0.08 percent.
    • Right-leaning pages in total have a bigger presence on Facebook: In every week but one, right-leaning Facebook pages had a higher total number of interactions than left-leaning Facebook pages. Right-leaning pages had 23 percent more total interactions than nonaligned pages and 51 percent more total interactions than left-leaning pages. Images shared by right-leaning pages -- including memes that frequently include false and bigoted messages -- were by far the highest performing content on the Facebook pages examined.

    The data indicates something I’ve long assumed anecdotally: The right is out-organizing the left on Facebook. Even though the right-leaning pages had fewer page likes than the left-leaning pages, the rates of interaction are virtually identical. And when you look at the individual metrics, especially on image-based posts, the news gets even worse. Despite having a larger base of aligned supporters on Facebook in terms of page likes, left-leaning pages don't have as much impact with their base.
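
    To make the interaction-rate metric concrete, here is a minimal sketch of how such a figure could be computed. The formula (total interactions divided by total page likes, expressed as a percentage), the field names, and the sample numbers below are illustrative assumptions, not Media Matters' actual code or data; the full study describes the real methodology.

    ```python
    # Rough sketch, not Media Matters' actual code: average interaction
    # rate per political alignment, assumed here to be total interactions
    # (reactions + comments + shares) divided by total page likes, as a
    # percentage.
    from collections import defaultdict

    # Hypothetical records (one per page per week), with numbers chosen
    # only to illustrate the rates reported above.
    records = [
        {"alignment": "right", "interactions": 110_000, "likes": 60_000_000},
        {"alignment": "left", "interactions": 100_000, "likes": 58_000_000},
        {"alignment": "nonaligned", "interactions": 40_000, "likes": 52_000_000},
    ]

    def interaction_rates(rows):
        """Return the interaction rate (percent) for each alignment."""
        totals = defaultdict(lambda: {"interactions": 0, "likes": 0})
        for row in rows:
            totals[row["alignment"]]["interactions"] += row["interactions"]
            totals[row["alignment"]]["likes"] += row["likes"]
        return {
            alignment: round(100 * t["interactions"] / t["likes"], 2)
            for alignment, t in totals.items()
        }

    print(interaction_rates(records))
    # e.g. {'right': 0.18, 'left': 0.17, 'nonaligned': 0.08}
    ```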

    You can view the full study here. 

    It’s time to end the charade. The Trump campaign and politically aligned groups aren't going to stop advertising on Facebook. They need Facebook to reach their voters. Facebook should disband the conservative bias review and stop enabling political theater. Considering how many problems Facebook as a company is facing, it's long overdue for the company to stop wasting its time and resources on a problem that doesn't exist. Political media also need to stop giving this myth oxygen. Next time Parscale or Sen. Ted Cruz (R-TX) starts whining about bias, reporters need to ask for some actual numbers to back up their claims.

  • Facebook caves to debunked claims of right-wing censorship

    Facebook will get advice about supposed bias from a Republican lobbyist who in 2008 alleged a connection between Saddam Hussein and Al Qaeda

    Blog ››› JOHN WHITEHOUSE


    Sarah Wasko / Media Matters

    Axios reported on May 2 that Facebook will bring on lobbyist and former Arizona Republican Sen. Jon Kyl to advise the company regarding claims of conservative bias on its platform -- even though the allegations have been repeatedly debunked using Facebook’s own data. As ThinkProgress noted, the effort will not include any liberals. Additionally, Facebook executives will be receiving advice from the conservative think tank the Heritage Foundation on the supposed anti-conservative bias, according to the Axios report.

    Conservatives have been complaining about Facebook censoring them for years, and Facebook, in turn, gave in to that pressure in ways that immediately made things worse.

    In May 2016, a flimsy report claimed that Facebook employees had “blacklisted” conservative outlets and stories from the platform’s Trending Topics news section. Facebook CEO Mark Zuckerberg quickly met with conservatives, including a representative from Donald Trump's campaign, to promise that Facebook would be good to them. A subsequent internal investigation revealed “no evidence of systematic political bias” in the trending topics, but Facebook soon gave in to right-wing pressure anyway. The company fired the “news curators” of the section, instead opting to use an algorithm that routinely promoted fabricated stories from bogus sources.

    After that change in 2016, fake news increasingly flooded the site. It was only after the 2016 election that Zuckerberg committed to doing something about the problem. One of the first solutions the company implemented was to add fact checks to disputed stories. When conservatives started wrongly complaining that fact-checkers were liberal, Facebook added right-wing publication The Weekly Standard -- which has a long history of pushing debunked lies -- as a fact-checker. (Facebook has since moved away from this fact-check feature as originally conceived.)

    The conservative complaints against Facebook have grown to a fever pitch since Facebook tweaked its news feed algorithm again in January 2018. Pro-Trump personalities Lynette Hardaway and Rochelle Richardson, who go by the moniker Diamond and Silk, repeatedly appeared on Fox News in April to complain about Facebook’s supposed censorship of their page and said the company never reached out to them to address their concern.

    Zuckerberg testified before Congress in April, and right-wing sites were thrilled when Sen. Ted Cruz (R-TX) confronted him about this alleged bias against conservatives -- and downright giddy when Zuckerberg said in response that Silicon Valley is an “extremely left-leaning” place. House Republicans repeatedly asked Zuckerberg about supposed censorship of Diamond and Silk instead of asking pressing questions about Facebook’s monopolistic role in global information and violence.

    Shortly after Zuckerberg’s testimony, the entire narrative about Diamond and Silk was debunked. Judd Legum reviewed data from CrowdTangle showing that Diamond and Silk were never suppressed on Facebook and that the pair “get more video views on Facebook than Rachel Maddow, even though Maddow’s show has a much larger page and is the most popular cable news program in the country.” Erick Erickson and Andrew Kirell revealed emails from Facebook showing that contrary to Diamond and Silk’s public allegations, Facebook had tried to reach out to them regarding monetization of their videos.

    None of this made any difference in the right-wing bubble. The day after their claims were debunked, the pair appeared on Fox News and restated their claims. While hosting the duo, Neil Cavuto gave no impression that the claims had been debunked; indeed, at one point he even implied their page had been taken down by Facebook, which was never the case and had never even been alleged.

    Conservatives also rallied around Diamond and Silk, ignoring the fact that their claims have been proved untrue. Rep. Steve King (R-IA) invited the pair to testify during an April 26 congressional hearing where they made a number of demonstrably false claims while under oath. They have since continued to appear on Fox News and are scheduled to appear at a “leadership forum” during the NRA annual meeting this week.

    And right-wing claims of suppression are only growing. During a conversation with Facebook’s head of global policy management, Monika Bickert, that was hosted by the Heritage Foundation, a representative from hyperpartisan and anti-Muslim conglomerate Liftable Media asked about supposed suppression of its site Western Journalism under the new algorithm. Bickert was noncommittal, but more and more conservatives are pressing Facebook for mass distribution. Allen West, Tomi Lahren, Dan Bongino, and others have also complained on Fox News in recent days about Facebook censoring conservatives.

    None of these accusations are reflected in the data. A 2017 Newswhip report found that conservative publishers received 2.5 times the engagement that liberal sites did. (The finding mirrors internal data that Media Matters has collected.) Newswhip data for February and March 2018 show that a number of right-wing sites are among the biggest publishers on Facebook. Newswhip also noted that the top reporters on Facebook were almost all right-wing media figures.

    This right-wing complaining should sound familiar. It’s the same model that conservatives have used to take on the media for decades.

    Media Matters senior fellow Matt Gertz has previously examined the origins of right-wing animus toward the media:

    Those attacks first boiled over at the Republican National Convention in 1964, which followed weeks of vitriolic criticism against the press by Sen. Barry Goldwater (R-AZ) and his supporters. Goldwater had been widely castigated by columnists and commentators for his opposition to the Civil Rights Act, generating a backlash from activists who believed (quite accurately) that reporters had taken sides against segregation over the previous decade.

    As conservatives triumphed over the moderates who had controlled the party for decades and installed the Arizona senator as the party’s nominee, activists raged at and even assaulted the purportedly liberal press. Former President Dwight Eisenhower’s exhortation from the podium to “scorn the divisive efforts of those outside our family, including sensation-seeking columnists and commentators” drew wild applause and jeers from the crowd.

    This anti-press animus would enter the White House with Richard Nixon’s election in 1968.

    The line from the Nixon administration to modern right-wing media goes directly through Roger Ailes. Ailes produced Rush Limbaugh’s short-lived television show and later co-founded Fox News, before being given $40 million to leave following an investigation into reported sexual misconduct. The right-wing architecture that Ailes constructed and inspired was built on and dominated by attacks on the media. This culminated in Trump’s candidacy for president. Trump has constantly railed against the media, both on the campaign trail and in the White House, in unprecedented ways.

    This pressure campaign by conservatives against the media has worked. The media take conservative criticism far more seriously than they do left-wing criticism. This is reflected in the data as well: Conservatives are far more likely to be invited onto the most prominent political talk shows. The media ignore topics like climate change until Trump brings them up. Speaking truth to conservatives just makes the media think that conservatives are being bullied, even if the conservatives in question are some of the most powerful people in the world.

    Charlie Brown kept falling for Lucy’s football routine, and the media keep falling for right-wing complaints about the fake news media. We know appeasement will not work because it never has. In fact, many of the criticisms are not even made in good faith. They’re merely a strategy to assume permanent power for the far right.

    And so now, by hiring Kyl, Facebook is building its own apparatus to appease conservatives. Kyl has been working at Washington lobbying firm Covington and Burling, where one of his clients is a former member of Facebook’s board, Donald E. Graham. (Graham, the former publisher of The Washington Post, in March published an op-ed in the paper decrying attempts to regulate Facebook, worrying about potential censorship of newspapers.)

    It’s unclear what advice from Kyl will look like. Kyl has a track record of bigotry toward Muslims and once even gave an award to an anti-Muslim conspiracy theorist. Gizmodo has also noted that Kyl spread lies about Planned Parenthood while in the Senate. Kyl’s comments about the 2010 New START treaty between Russia and the U.S. to reduce nuclear arms were also “thoroughly debunked.” In 2008, Kyl even wrote a letter to The Washington Post asserting a connection between former Iraq President Saddam Hussein and Al Qaeda. This myth, which had long been debunked, was also the subject of the book The Connection: How al Qaeda’s Collaboration With Saddam Hussein Has Endangered America by Weekly Standard editor Stephen F. Hayes.

    But whatever Kyl’s advice is, it won’t work. The complaints are the point. The goal is to discredit any potential news source that undermines the right-wing narrative. If Facebook gives in to this pressure and further helps out right-wing outlets, that’s a win. If Facebook does not give in, these conservatives will threaten to push right-wing audiences to other platforms, and they'll use that threat to push for more concessions from Facebook. Nothing will ever stop the complaints. Mainstream media figures have refused to learn that lesson, but it’s not too late for Facebook.

    As America worries about whether the post-truth era it has found itself in can be reversed, Facebook should stop playing games with liars.

  • Facebook agrees to a much-needed civil rights audit in the worst possible way

    Blog ››› MELISSA RYAN

    After months of advocacy from civil rights groups, this morning Facebook announced that it will conduct a civil rights audit of its platform. Via Axios:

    The civil rights audit will be guided by Laura Murphy, a national civil liberties and civil rights leader. Murphy will take feedback from civil rights groups, like The Leadership Conference on Civil and Human Rights, and advise Facebook on the best path forward.

    Calls for a civil rights audit, led by Muslim Advocates and Color of Change, along with the Center for Media Justice, the NAACP Legal Defense and Educational Fund, the Southern Poverty Law Center, and the Leadership Conference on Civil and Human Rights, have been ongoing. As the Color of Change petition notes, “Through their data malpractice, opaque interactions with law enforcement, erasure of Black activist voices, and inability and unwillingness to tackle the rise of white supremacist organizing and hate speech on the platform, Facebook's failures have put our communities at risk.”

    The civil rights audit is a win. After months of pressure, Facebook has agreed to examine how its platform has been weaponized to spread hate and harm underrepresented communities and people of color. If Facebook takes the audit seriously and implements its recommendations, the platform could change for the better, making Facebook a safer space and creating a better overall user experience for all communities.

    But alongside this welcome announcement from Facebook came a not-so-welcome one. Also from Axios:

    To address allegations of bias, Facebook is bringing in two outside advisors — one to conduct a legal audit of its impact on underrepresented communities and communities of color, and another to advise the company on potential bias against conservative voices.

    ...

    The conservative bias advising partnership will be led by former Arizona Republican Sen. Jon Kyl, along with his team at Covington and Burling, a Washington law firm.

    The civil rights audit isn’t partisan. Hate speech, safety, and privacy aren’t political issues, but moral issues. Yet by announcing the audit at the same time as a “conservative bias advisory partnership” (presumably to address a claim that has already been debunked), Facebook is conflating the two. It suggests that the audit is meant to address criticism from the political left, not the problems underrepresented communities face on the platform every single day.

    Muslim Advocates addresses this concern in its response:

    We are concerned, however, about the pairing of this announcement with another that Facebook will be bringing on advisors responsible for determining bias against conservatives. We strongly reject the message this sends regarding the moral equivalency of hate group activities and conservative viewpoints. Despite this concern, we hope this first step is a sign that Facebook will begin to take responsibility for the hate and bigotry that has flourished on its platform.

    Safety online isn’t partisan. Facebook’s users should have the expectation that their civil rights won’t be violated while using the platform. Pairing the civil rights audit with a partisan panel on supposed conservative bias on the platform suggests that Facebook doesn’t take civil rights seriously, instead viewing it as a partisan complaint that must be appeased.

    This is not acceptable. Facebook must fully commit to ensuring its users are safe online and that their rights aren’t violated. Today’s announcement was a big misstep. It creates the impression that Facebook sees the audit as a political issue and not a moral one. The ball is in Facebook’s court to correct this.

  • Anti-abortion extremists keep crying censorship to raise money

    Blog ››› JULIE TULBERT


    Sarah Wasko / Media Matters

    If there’s one thing Republicans love more than pretending they’re being victimized by liberal elites, it’s raising money off this inaccurate claim -- a tendency demonstrated clearly during recent congressional hearings on the activities of Facebook. During these hearings, Republican members of Congress elevated various overinflated right-wing grievances against social media companies (such as claims of anti-abortion censorship and anti-Christian bias) in order to pressure the platform into allowing greater promotion of inflammatory or inaccurate content. In particular, they seized on pro-Trump YouTubers Diamond and Silk, who have actively lied about Facebook censoring them and then used the attention to raise money. As close watchers of the anti-abortion movement know, this tactic of crying censorship to garner attention and raise funds is a favorite of anti-choice actors. Here are a few that have recently employed this practice:

    Live Action

    Lila Rose, founder of the anti-abortion group Live Action, appeared on Fox News’ Tucker Carlson Tonight in June 2017 alleging that Twitter was censoring Live Action’s ads due to ideological bias. In reality, the content still appeared on Live Action’s Twitter page but could not be promoted as an advertisement to other users -- not because of bias, but because it violated several of Twitter’s content policies regarding "hate content, sensitive topics, and violence.”

    Instead of altering the organization’s content to meet Twitter’s policies, Rose appeared on Tucker Carlson Tonight and used claims of supposed censorship to raise funds for Live Action. As Rose told Carlson, “We’re actually doing a campaign right now to get people to fund Live Action and to get out the information that Twitter is trying to block using other platforms -- using Facebook, using YouTube, using the blogosphere, obviously coming on here and talking with you.”

    Live Action continued to deploy this dishonest tactic even after Rose’s Fox News appearance. Following the June 26 segment, Live Action sent a fundraising email claiming that “Live Action is being suppressed” and asking supporters “to help us strengthen our efforts against the abortion industry.” Live Action’s censorship allegations also animated other right-wing media outlets. For example, on June 29, Christian Broadcasting Network published an article promoting Live Action’s claims about Twitter’s ad policy, which stated that “Live Action has launched a campaign to compensate for their losses due to Twitter’s censoring,” and directed readers to Live Action’s fundraising page. Rose and Live Action also pushed the narrative on Twitter, using the hashtag #DontDeleteMe -- even though all of Live Action’s tweets remained publicly available on the platform.

    The group also continued to use claims of censorship to raise funds in three October 2017 emails. In one email, Live Action stated that “Twitter is STILL banning our paid ads” and asked whether members would “give a gift to Live Action today so that we can expose more people to the truth.” In another email, Live Action claimed, “While we work to pressure Twitter to lift their ban on ads for pro-life content, we must double our efforts elsewhere” and asked people to “make a gift … so that we can reach more people with the truth.” Live Action made a similar plea in another email, asking people to “consider helping us reach more Americans with the truth about abortion through our other social media platforms like Facebook, YouTube, and Instagram.”

    Operation Rescue

    The extremist anti-abortion group Operation Rescue claimed in July 2017 that Google was censoring parts of its website after its page rankings decreased in the results of searches for “abortions in US” or “abortion statistics.” The group alleged that “Google’s search engine has manipulated search parameters to dramatically reduce exposure” to Operation Rescue's web pages, which contain abortion statistics purporting to show the "truth about abortion." Operation Rescue then sent a fundraising email asking for support to "launch a massive campaign to ensure our critical abortion research and pro-life content is available, and no longer pushed down by the pro-abortion radicals at Google." Prior to the complaint, Google announced a policy change regarding how sites containing misleading or false information would be ranked.

    Susan B. Anthony List

    In October 2017, Susan B. Anthony List (SBA List) claimed that one of the organization’s Twitter ads, targeting Virginia Attorney General Mark Herring in the 2017 election, was taken down by the platform, seemingly for inflammatory language. Citing this example and other anti-abortion censorship allegations, SBA List asked people to “make a gift today to get our pro-life message past Twitter’s censorship” and to “fight back against Twitter’s censorship.”

    Following Facebook CEO Mark Zuckerberg’s testimony before Congress last week, SBA List reprised this tactic and emailed supporters to detail instances where the group claimed to have been censored by social media companies. SBA List then directed people to “please make a generous donation of $250 to help win the fight against pro-abortion Silicon Valley elites.”

    Anti-abortion outlets

    Not to be left out of the conversation about supposed anti-abortion censorship, the anti-choice news outlet Life News also sent an email after Zuckerberg’s testimony stating, “Social media companies like Facebook, Twitter, Google and YouTube are increasingly censoring pro-life voices,” and asking readers to sign a petition and to “make a donation today … so we can continue to stand up to these social media giants [and] their censorship.”

    Another anti-abortion outlet, LifeSite News, also asked for donations in light of supposed censorship by social media companies. The site posted in March 2018 about the “surprising and disturbing reason why LifeSite’s Spring campaign is struggling.” The reason, according to LifeSite News, “is an almost declared war by the globalist social media giants – Facebook, Google, Twitter and YouTube against websites, blogs and individuals who promote conservative views.” LifeSite argued that its inability to raise funds was due to censorship from Facebook and Google and pleaded with readers, writing, “To those of you who were not blocked from reading this letter, we are depending on you much more than normal to help us to reach our goal.” Unsurprisingly, the outlet provided zero evidence of the censorship it was allegedly experiencing.

    Roe v. Wade -- the movie

    The producer of an anti-abortion film about Roe v. Wade claimed that Facebook temporarily blocked his ability to post an Indiegogo crowdfunding page for the production of the film. On the Indiegogo page, the film is described as “the real untold story of how people lied; how the media lied; and how the courts were manipulated to pass a law that has since killed over 60 million Americans.” According to the film’s crowdfunding page, the film needs “support now more than ever. Facebook has banned us from inviting friends to ‘Like’ our page and from ‘Sharing’ our PAID ads.”

    Rep. Marsha Blackburn

    In October 2017, Rep. Marsha Blackburn (R-TN) announced she was running for a Senate seat by tweeting out a campaign video that included a mention of her time as chair of the House Select Investigative Panel on Infant Lives -- a sham investigation based on deceptive and disproven claims by the anti-abortion group Center for Medical Progress. The video included inflammatory language such as that Blackburn had “stopped the sale of baby body parts.” After Twitter temporarily blocked her from running the tweet as a paid ad due to its inflammatory language, Blackburn claimed censorship and made the rounds on Fox News to push this story. Blackburn also used the opportunity to tweet that the “conservative revolution won’t be stopped by @Twitter and the liberal elite,” urging people to “donate to my Senate campaign today.”

    Anti-abortion groups and outlets have found a great deal of success in crying censorship -- a lesson that wider conservative media outlets and figures appear to be taking to heart. As a recently published report from the right-wing Media Research Center (a report that was readily promoted by outlets like Life News) melodramatically framed the issue: “The question facing the conservative movement is one of survival. Can it survive online if the tech companies no longer allow conservative speech and speakers? And, if that happens, can the movement survive at all?”

  • Lack of diversity is at the core of social media's harassment problem

    Right-wing figures and far-right trolls mocked questions to Facebook's Zuckerberg about diversity. But it's crucial to understanding how platforms enable harassment.

    Blog ››› CRISTINA LÓPEZ G.


    Sarah Wasko / Media Matters

    This week, Facebook CEO Mark Zuckerberg was questioned on racial diversity within his company as he appeared before House and Senate committees to address Facebook’s handling of user data. Facebook -- and more generally, the tech industry -- has often been criticized for its lack of diversity, an issue that, as members of Congress pointed out, can hinder the platform’s ability to respond to fake news and to discrimination against African-American users.

    Rep. Yvette Clarke (D-NY) discussed the relationship between Facebook’s fake news problem and lack of diversity within the company itself.

    Sen. Cory Booker (D-NJ) asked Zuckerberg about racial discrimination enabled by Facebook and indicated a “growing distrust ... about Facebook’s sense of urgency” in addressing such discrimination.

    Rep. G.K. Butterfield (D-NC) questioned Zuckerberg on Facebook’s lack of diversity:

    REP. G.K. BUTTERFIELD (D-NC): You and your team certainly know how I feel about racial diversity in corporate America, and [Facebook Chief Operating Officer] Sheryl Sandberg and I talk about that all of the time. Let me ask you this, and the Congressional Black Caucus has been very focused on holding your industry accountable -- not just Facebook, your industry -- accountable for increasing African-American inclusion at all levels of the industry. And I know you have a number of diversity initiatives. In 2017, you’ve increased your black representation from 2 to 3 percent. While this is a small increase, it's better than none. And this does not nearly meet the definition of building a racially diverse community. CEO leadership -- and I have found this to be absolutely true -- CEO leadership on issues of diversity is the only way that the technology industry will change. So, will you commit, sir, to convene, personally convene a meeting of CEOs in your sectors -- many of them, all of them perhaps, are your friends -- and to do this very quickly to develop a strategy to increase racial diversity in the technology industry?

    MARK ZUCKERBERG: Congressman, I think that that's a good idea and we should follow up on it. From the conversations that I have with my fellow leaders in the tech industry, I know that this is something that we all understand, that the whole industry is behind on, and Facebook is certainly a big part of that issue. We care about this not just from the justice angle, but because we know that having diverse viewpoints is what will help us serve our community better, which is ultimately what we're here to do. And I think we know that the industry is behind on this.

    Right-wing media figures and far-right trolls scoffed at the idea of questioning the tech industry’s lack of diversity

    Right-wing figures and far-right trolls scoffed at these questions on different social media platforms -- including Gab, an alternative to Twitter that has been called a "haven for white nationalists" and has on occasion served as a platform to coordinate online harassment -- dismissing them as “insane” and describing efforts to increase racial diversity as discrimination “against white people.” 

    But experts have criticized Facebook and other platforms for the lack of racial diversity within their ranks and explained that diversity is at the core of social media’s harassment problems

    Members of Congress were not alone in their concern that Facebook’s racial homogeneity might diminish its capacity to create a safe environment for every user and protect user data. Bärí A. Williams, formerly a senior commercial attorney at Facebook, explained that racial diversity specifically would improve the platform’s ability to respond to data breaches, “fill blind spots,” and improve “cultural competency” through “lived experience.”

    While Zuckerberg announced Facebook’s intention to rely on Artificial Intelligence (AI) to address many of the social network’s shortcomings, Molly Wood, host of the Marketplace Tech radio show, pointed out that AI is not a substitute for a racially inclusive workforce.

    A lack of racial diversity in companies’ ranks is at the core of the harassment problem on their social media platforms, as online harassment disproportionately targets minorities of color. According to Pew, “harassment is often focused on personal or physical characteristics; political views, gender, physical appearance and race are among the most common,” with African-Americans experiencing more harassment because of their ethnicity than other groups, and women experiencing more harassment than men:

    Some 14% of U.S. adults say they have ever been harassed online specifically because of their political views, while roughly one-in-ten have been targeted due to their physical appearance (9%), race (8%) or gender (8%). Somewhat smaller shares have been targeted for other reasons, such as their religion (5%) or sexual orientation (3%).

    Certain groups are more likely than others to experience this sort of trait-based harassment. For instance, one-in-four blacks say they have been targeted with harassment online because of their race or ethnicity, as have one-in-ten Hispanics. The share among whites is lower (3%). Similarly, women are about twice as likely as men to say they have been targeted as a result of their gender (11% vs. 5%)

    During a conversation with Wired about how Silicon Valley can address harassment in social media platforms, Black Lives Matter’s Chinyere Tutashinda talked about her experiences online as a black social activist, confirming Pew’s findings by remarking on the ways that people of color are targeted disproportionately online:

    CHINYERE TUTASHINDA: I work within the social justice movement, and there’s no one, especially in the black community, who doesn’t expect harassment online. It’s just replicating what happens in the real world, right? How do we make other people know and care?

    [...]

    There is a lack of diversity in who’s creating platforms and tools. Too often it’s not about people, it’s about how to take this tool and make the most money off it. As long as people are using it, it doesn’t matter how they’re using it. There’s still profit to earn from it. So until those cultures really shift in the companies themselves, it’s really difficult to be able to have structures that are combating harassment.

    [...]

    Diversity plays a huge role in shifting the culture of organizations and companies. Outside of that, being able to broaden the story helps. There has been a lot of media on cyberbullying, for example, and how horrible it is for young people. And now there are whole curricula in elementary and high schools. There’s been a huge campaign around it, and the culture is shifting. The same needs to happen when it comes to harassment. Not just about young people but about the ways in which people of color are treated.

    Experts have weighed in on the specific implications of social media platforms lacking racial diversity among their ranks. As Alice Marwick, a fellow at the Data & Society Research Institute, pointed out on Quartz, “the people who build social technologies are primarily white and Asian men” and because “white, male technologists don’t feel vulnerable to harassment” in the same way that minorities or people of color do, they often fail to incorporate protections against online abuse in their digital designs.

    To illustrate Marwick’s point, take Twitter’s mute button, a feature that can filter unwanted content from users' timelines, making it easier for users to avoid abusive content directed at them. As Leslie Miley -- a black former engineering manager at Twitter who left the company specifically because of how it was addressing diversity issues -- told The Nation, the feature wasn’t perfected until a diverse group of people worked together to fix it:

    [Leslie] Miley was a part of a diverse team at Twitter that he says proves his point. His first project as the engineering manager was to fix Twitter’s “mute” option, a feature that allows users to filter from their timelines unwanted tweets, such as the kind of harassment and personal attacks that many prominent women have experienced on the platform.

    “Twitter released a version in the past that did not go over well. They were so badly received by critics and the public that they had to be rolled back. No one wanted to touch the project,” says Miley. So he pulled together a team from across the organization, including women and people of color. “Who better to build the feature than people who often experience abuse online?” he asks. The result was a new “mute” option that was roundly praised as a major step by Twitter to address bullying and abuse.

    The blind spots caused by racial homogeneity might also delay platforms’ responses to rampant harassment. As documented by Model View Culture magazine, far-right troll and white nationalist sympathizer Milo Yiannopoulos was allowed to rampantly harass users for years on Twitter before getting permanently banned for his “sustained racist and sexist” harassment of African-American comedian Leslie Jones. As Model View Culture points out, racial diversity could be extremely helpful in addressing the challenge social media platforms face in content moderation:

    From start to finish of the moderation pipeline, the lack of input from people who have real, lived experience with dealing with these issues shows. Policy creators likely aren’t aware of the many, subtle ways that oppressive groups use the vague wording of the TOS to silence marginalized voices. Not having a background in dealing with that sort of harassment, they simply don’t have the tools to identify these issues before they arise.

    The simple solution is adding diversity to staff. This means more than just one or two people from marginalized groups; the representation that would need to be present to make a real change is far larger than what exists in the population. Diversity needs to be closer to 50% of the staff in charge of policy creation and moderation to ensure that they are actually given equal time at the table and their voices aren’t overshadowed by the overwhelming majority. Diversity and context must also be considered in outsourcing moderation. The end moderation team, when it comes to social issues specific to location, context and identity, needs to have the background and lived experience to process those reports.

    To get better, platforms must also address how user-generated reports are often weaponized against people of color. Although there’s nothing that can be done about the sheer numbers of majority-White users on platforms, better, clearer policy that helps them question their own bias would likely stop many reports from being generated in the first place. It may also help to implement more controls that would stop targeted mass-reporting of pages and communities by and for marginalized people.

    Ultimately, acknowledging these issues in the moderation pipeline is the first step to correcting them. Social media platforms must step away from the idea that they are inherently “fair,” and accept that their idea of “fairness” in interaction is skewed simply by virtue of being born of a culture steeped in White Supremacy and patriarchy.

  • The tragedy and lost opportunity of Zuckerberg’s testimony to Congress

    Congress didn’t do nearly enough to hold Mark Zuckerberg accountable

    Blog ››› MELISSA RYAN


    Sarah Wasko / Media Matters

    Facebook CEO Mark Zuckerberg came to Washington to testify before Congress over two days of hearings. Expectations were low -- to the point of infantilization. Unsurprisingly, Zuckerberg was able to clear the extremely low bar America sets for white men in business. He showed up in a suit and tie, didn’t say anything too embarrassing, and, for the most part, the members of Congress questioning him made more news than his testimony did. Facebook’s public relations team probably considers the hearings a win. The stock market certainly did.

    Facebook’s users, however, lost bigly. Congress failed to hold Zuckerberg accountable. The Senate hearing, held jointly by the judiciary and commerce committees, devolved into Zuckerberg explaining how the Internet worked to the poorly informed senators. The House commerce committee members were more up to speed, but Republican members -- following Ted Cruz’s lead from the day before -- spent most of their time and energy grilling Zuckerberg about nonexistent censorship of right-wing content. If Facebook’s leaders are ill-prepared to handle the challenges they’re facing, Congress appears even less up to the challenge.

    The tech press had a field day on Twitter feigning outrage at Congress for its lack of tech savvy, but Congress’ lack of interest in holding Facebook accountable is far more problematic. As David Dayen noted in The Intercept:

    This willingness, against interest and impulse, to do the job of a policymaker was sorely absent throughout Tuesday’s testimony, which involved both the judiciary and commerce committees, as well as nearly half the members of the Senate. Far too many senators framed the problems with Facebook — almost unilaterally agreed, on both sides of the aisle, to be pernicious and requiring some action — as something for Zuckerberg to fix, and then tell Congress about later.

    Sen. Lindsey Graham (R-SC) was the rare exception. He was one of few members of Congress comfortable with calling Facebook a monopoly.

    Facebook’s issues with civil rights were barely covered, with a few notable exceptions. Sen. Mazie Hirono (D-HI) asked Zuckerberg if Facebook would ever assist the government in vetting immigrants (it would not in most cases), and Sen. Cory Booker (D-NJ) asked Zuckerberg to protect Black Lives Matter activists from improper surveillance (he agreed). Reps. Bobby Rush (D-IL) and G.K. Butterfield (D-NC) asked similar questions during the House hearing, and Rep. Susan Brooks (R-IN) asked about Facebook as a recruitment tool for ISIS. But not one question was asked about Facebook’s role as a recruitment tool for white supremacists and neo-Nazis.

    While the House hearing featured better questions, the majority of Republican members nevertheless managed to turn it into a circus. They repeatedly asked Zuckerberg about the supposed censorship of pro-Trump social media stars Diamond and Silk (which has since been debunked) and suggested that the biggest issue Facebook faces is the censorship of right-wing content. The concern trolling over Diamond and Silk came between questions exposing deep societal problems including opioid sales on the social media platform that are literally responsible for overdose deaths and Facebook’s role in the Rohingya genocide in Myanmar.

    The Diamond and Silk obsession derives from another one of Facebook’s societal problems: the prominence of propaganda, conspiracy theories, and misinformation on the platform. Multiple members who asked Zuckerberg about Diamond and Silk said they’d heard directly from their constituents about the matter, which they almost certainly did. Pro-Trump media lost their collective minds when the news broke. The facts are that the Diamond and Silk supposed censorship didn't actually happen and that data does not back up the claim of right-wing media being censored on Facebook. If anything, the platform is a cesspool of far-right activity.

    Not one member of Congress asked Zuckerberg about Facebook’s role in the spread of conspiracy theories and propaganda. Republicans were wasting valuable time demanding answers over a nonexistent conspiracy theory, and no one at all felt compelled to ask Zuckerberg how the hell we got to here. It is extremely telling that while this was going on, Diamond and Silk made an appearance on Alex Jones’ Infowars broadcast, another conspiracy theory site that owes its popularity in part to Facebook.

    If social media filter bubbles have split Americans into different realities, it would seem that Congress is a victim of the same problem. Research shows that the right-wing’s filter bubble influences the left’s in a way that isn’t reciprocated. Right-wing content isn’t actually being censored on Facebook. The newly minted Diamond and Silk Caucus (or the Alex Jones Caucus) in Congress was demanding that even more right-wing content show up in our feeds, sending the right-wing base even deeper into their bubble. It’s the same schtick that the same people have pulled for years with the political media.

    While many in Congress have complained about far-right conspiracy theories becoming a part of mainstream American society, it’s a shame that they didn’t hold accountable the one man who more than anyone created this reality.

  • Facebook’s latest announcements serve as a reminder that fixing the platform is a global issue

    Effective consumer pushback must be global as well.

    Blog ››› MELISSA RYAN


    Sarah Wasko / Media Matters

    A few huge updates from Facebook this week are worth paying attention to.

    First, the company announced the removal of “70 Facebook and 65 Instagram accounts — as well as 138 Facebook Pages — that were controlled by the Russia-based Internet Research Agency (IRA).” Facebook also removed any ads associated with the IRA pages. In an unusual bit of transparency, the company provided stats of what was deleted and who those pages were targeting:

    Of the Pages that had content, the vast majority of them (95%) were in Russian — targeted either at people living in Russia or Russian-speakers around the world including from neighboring countries like Azerbaijan, Uzbekistan and Ukraine.

    Facebook also provided a few samples from the pages as well as ad samples, none of which were written in English. “The IRA has consistently used inauthentic accounts to deceive and manipulate people,” the announcement said. “It’s why we remove every account we find that is linked to the organization — whether linked to activity in the US, Russia or elsewhere.”

    CEO Mark Zuckerberg reiterated the IRA’s global reach in a post on his personal page, saying, “Most of our actions against the IRA to date have been to prevent them from interfering in foreign elections. This update is about taking down their pages targeting people living in Russia. This Russian agency has repeatedly acted deceptively and tried to manipulate people in the US, Europe, and Russia -- and we don't want them on Facebook anywhere in the world.”

    Facebook also announced an updated terms of service and data policy that the company claims will be easier for users to understand. “It’s important to show people in black and white how our products work – it’s one of the ways people can make informed decisions about their privacy,” the announcement reads. “So we’re proposing updates to our terms of service that include our commitments to everyone using Facebook. We explain the services we offer in language that’s easier to read. We’re also updating our data policy to better spell out what data we collect and how we use it in Facebook, Instagram, Messenger and other products.”

    Finally, Facebook announced major changes to how third parties can interact with and collect data. The company acknowledged that the number of users whose data was being illegally used by Cambridge Analytica -- reported to be 50 million -- was actually 87 million. Facebook promised, “Overall, we believe these changes will better protect people’s information while still enabling developers to create useful experiences. We know we have more work to do — and we’ll keep you updated as we make more changes.”

    Facebook is finally responding to consumer pressure in a systematic way. These changes will curb the amount of propaganda users are exposed to, limit how third parties can interact with users on the platform, and make the rules of the road clearer for everyone.

    It’s important to note that all of these changes appear to be global, not limited to specific countries, which is good because the problems Facebook has caused are also global. Facebook has been weaponized by hostile actors seeking to manipulate users in dozens of countries. Facebook employees have admitted, on the company's Hard Questions Blog, that Facebook as a platform can be harmful to democracy. Facebook’s ability to reach people across the world is unprecedented in scale, and because of this, there’s no institution or government with the ability to regulate Facebook and protect the totality of its users.

    We have Facebook on the defensive, but it will change only as much as it’s pressured to change. Tech lawyer and privacy advocate Tiffany Li, in an op-ed for NBC News, has identified three groups of stakeholders Facebook needs to appease in order to save the company: “shareholders, policymakers, and of course, consumers.” I like her categorization but would add that Facebook needs to appease these three groups in countries across the globe, not just in the U.S., U.K., and European Union nations.

    This isn’t a problem that can be solved overnight, something Zuckerberg acknowledged when he spoke with Vox’s Ezra Klein this week, saying, “I think we will dig through this hole, but it will take a few years. I wish I could solve all these issues in three months or six months, but I just think the reality is that solving some of these questions is just going to take a longer period of time.” Generally, I’m a Zuckerberg critic, but I appreciate this comment and agree we’re in for a turbulent couple of years coming to grips with everything.

    Here’s the good news. Thanks to social media (including Facebook!) we’re more connected than ever before. Facebook’s users have an opportunity to have a global conversation about what changes are needed and take any activist campaigns or direct actions global. We can pressure multiple governments, work with civil society groups in multiple countries, and create a global consumer movement.

    Facebook still has a long way to go, and its users have 87 million (or 2 billion) reasons to be upset. The company has a lot to do before it can earn back the trust of its consumers across the globe. That said, I appreciate that Facebook is finally taking some decisive action, even as it acknowledges that curbing abuse of all kinds on the platform will be an ongoing battle. It’s a welcome correction to the company’s PR apology tour, adding action to words that would otherwise ring hollow. To be clear: Facebook was forced to take these actions thanks to global activism and consumer pressure. We have the momentum to force needed systemic changes. Let’s keep at it.

    Media Matters is calling on Facebook to ban any entity, be it the Trump campaign or any other, that is using a copy of Cambridge Analytica's data or any other data set acquired by cheating.

    Click here and join our call to action.

  • Mark Zuckerberg’s apology PR tour and why now is our best opportunity yet to push for change

    Facebook to everyone: our bad

    Blog ››› ››› MELISSA RYAN


    Sarah Wasko / Media Matters

    Facebook CEO Mark Zuckerberg is sorry. Specifically, as he told CNN, he’s “really sorry that this happened.”

    “I think we let the community down, and I feel really bad and I’m sorry about that,” he told Recode’s Kara Swisher. Facebook Chief Operating Officer Sheryl Sandberg, appearing on CNBC, also offered an apology: “I am so sorry that we let so many people down.”

    Zuckerberg and Facebook have a lot to apologize for. In addition to the numerous other problems plaguing Facebook under Zuckerberg’s watch, he allowed Cambridge Analytica to obtain and exploit the Facebook data of 50 million users in multiple countries. When the platform discovered the stolen data, it took the firm’s word that the data had been deleted (it hadn’t). Facebook made no attempts to independently verify that the data was no longer being used, nor did it notify users whose data was exploited. Even after the news broke, it took Zuckerberg and Sandberg six days to face the public and give interviews.

    In addition to offering their apologies, both Sandberg and Zuckerberg acknowledged that trust between Facebook and users had been breached. Sandberg said on CNBC, “This is about trust, and earning the trust of the people who use our service is the most important thing we do. And we are very committed to earning it.”

    What surprised me most, however, was their acknowledgment that regulation was coming and that perhaps Facebook needs to be checked. Zuckerberg in his CNN interview suggested that regulation of tech companies like Facebook might be necessary. Sandberg went even further: “It's not a question of if regulation, it's a question of what type. ... We're open to regulation. We work with lawmakers all over the world." At first this read to me like another attempt at passing the buck of responsibility onto another entity, and while that might still be partially true, there’s more to it. Facebook is responding to public outrage, including the growing calls for regulation. Facebook executives have concluded they’re not getting out of this mess without regulation, and their best path forward is to try to get the best deal they can get, given the circumstances.

    Were Zuckerberg and Sandberg forthcoming enough? No. I don’t think anyone was convinced that Facebook is telling us everything it knows, nor did the company present much of a plan for protecting consumers moving forward. But consumers have the momentum. Facebook will change only as much as its users demand. The fact that Facebook’s leadership is on a full-blown apology tour means that public pressure is starting to work. After months of bad press and user backlash, Facebook is finally acknowledging that some things need to change.

    Facebook failed to protect users from a consulting firm so shady that it bragged to a potential client about entrapping candidates for office, potentially breaking U.S. election laws to help Donald Trump win in 2016, and avoiding congressional investigations. Consumers are outraged, many to the point of quitting Facebook entirely. Cambridge Analytica probably isn’t the only problematic company that Facebook allowed to exploit user data, but from an organizing perspective, we couldn’t ask for a better villain. After months of outrage, Facebook is on the defensive. This is the best opportunity we’ll have to force it and other tech platforms to make systemic change.

    Here’s a good place to start: Media Matters is calling on Facebook to ban any entity, be it the Trump campaign or any other, that is using a copy of Cambridge Analytica's data or any other data set acquired by cheating.

    Click here and join our call to action.

  • For Zuck's sake

    Blog ››› ››› MELISSA RYAN

    Mark Zuckerberg has been sharing a lot this month. First, he posted that his “personal challenge” for 2018 is to fix the glaring and obvious problems for which he’s taken so much heat. Last week, he announced that he had directed Facebook’s product teams to change their focus from “helping you find relevant content to helping you have more meaningful social interactions.” Zuckerberg promised users that they’d see less content from “businesses, brands and media” and more content from “your friends, family and groups.” On Friday, Zuckerberg shared another major change: Facebook would improve the news that does get shared by using user surveys to crowdsource which news sources are and aren’t trustworthy.

    The first change, a return to “meaningful interaction,” is one I can get behind. I’m all for anything that discourages fake news sites from monetizing on Facebook. I’ve long suspected that part of why these sites took hold in the first place was a lack of meaningful content available on our feeds. Less sponsored content and more pictures and videos from family and friends will greatly improve my Facebook experience. I suspect I’m not the only one.

    I’m also hopeful this change will move digital advocacy away from broadcasting and back to organizing. Given how crucial Facebook groups have become to #TheResistance, I’m glad to hear they’ll be emphasized. I want to see more groups like Pantsuit Nation and the many local Indivisible groups that have formed in the last year. (Media outlets, fear not: Vox has also been building Facebook groups in addition to its pages.) Digital ads and acquisition shouldn’t be the only tools digital organizers use. Increased engagement should involve actually engaging folks rather than simply broadcasting to them.

    The second change, user surveys to determine what news people trust, is maddening. If you were going to design a system that could be easily gamed, this is how you’d do it. “Freeping” online polls and surveys is a longstanding tactic of the far right online, going back nearly 20 years. It’s in their online DNA, and they have groups of activists at the ready who live for this activity. Facebook isn’t handing authority over to its broader community but to an engaged group of users with an agenda. Even if the freeping weren’t inevitable, it’s well established that there’s no common ground when it comes to which news sources people with different political viewpoints trust.

    The crux of the problem is that Facebook desperately wants to be seen as a neutral platform, while Facebook’s users want it to keep inaccurate information off of Facebook. In his New Year’s post, Zuckerberg emphasized that he believes technology “can be a decentralizing force that puts more power in people’s hands” while acknowledging that the reality might be the opposite. There’s a tension between his core beliefs and what Facebook users currently expect from the company. My sense is that this tension is a driving force behind the attempt to pass the buck back to us.

    Facebook will only go as far as its users pressure it to go, especially in the US, where regulation from the government will be minimal. If we want Facebook to take responsibility, we have to continually hold it accountable when things go wrong or when proposed solutions don’t go far enough. Mark Zuckerberg’s personal challenge is to fix what’s broken. Ours is to keep pressing him in the right direction.

    This piece was originally published as part of Melissa Ryan's Ctrl Alt Right Delete newsletter -- subscribe here

  • Facebook’s news feed changes could elevate fake news while harming legitimate news outlets

    Blog ››› ››› ALEX KAPLAN


    Sarah Wasko / Media Matters

    Facebook’s newly announced changes, which elevate content in users’ news feeds that is shared by friends and family over content shared by news publishers, could wind up exacerbating the platform’s fake news problem.

    Over the past year, Facebook has struggled to combat the spread of fake news and misinformation on its platform. On January 11, the social media giant announced that it would change the algorithm of its news feed so that it would “prioritize what [users’] friends and family share and comment on,” according to The New York Times. Facebook CEO Mark Zuckerberg, who was named Media Matters’ 2017 Misinformer of the Year, told the Times that the shift was “intended to maximize the amount of content with ‘meaningful interaction’ that people consume on Facebook.” Additionally, content from news publishers and brands will be given less exposure on the news feed. Facebook is also weighing including some kind of authority component in its news feed algorithm so that outlets considered more credible will get more prominence in the news feed.

    In the past year or so, Facebook has attempted to employ some measures in its effort to fight fake news, including its third-party fact-checking initiative. Though these efforts have so far been largely ineffective, the new changes threaten to undercut them even more.

    At least one study has shown that Facebook users are influenced by their friends and family members’ actions and reactions on the site. Last year, New York magazine reported on a study that found that “people who see an article from a trusted sharer, but one written by an unknown media source, have much more trust in the information than people who see the same article from a reputable media source shared by a person they do not trust.” With Facebook’s new changes, as the Times noted, “If a relative or friend posts a link with an inaccurate news article that is widely commented on, that post will be prominently displayed.”

    An additional point of concern is how this will exacerbate the problem of conservative misinformation specifically. Up until now, misinformation and fake news on social media have seemingly come from and been spread more by conservatives than liberals. And according to research conducted by Media Matters, right-wing communities on Facebook are much bigger than left-wing communities and mainstream distribution networks, and engagement in right-wing circles is also higher than in left-wing circles. These changes could thus mean that peer-to-peer promotion of right-wing misinformation is more likely to push fake news toward the top of people’s news feeds.

    The changes will also likely cause real harm to legitimate news outlets by burying their stories. The head of Facebook’s news feed admitted that some pages “may see their reach, video watch time and referral traffic decrease.” Smaller, less-known outlets, especially those that do not produce content on the platform (such as live videos), could face major financial losses from the move. Facebook’s head of news partnerships, Campbell Brown, also wrote to some major publishers that the changes would cause people to see less content from “publishers, brands, and celebrities,” but that “news stories shared between friends will not be impacted,” which could suggest that fake news might get promoted over content directly from legitimate news outlets.

    It’s conceivable that adding some kind of authority component that ensures “articles from more credible outlets have a better chance of virality” could help lessen this possibility. Such a move would be a welcome development, and Media Matters has recommended that Facebook include it in its algorithm. But the possible criteria that Facebook is currently considering to determine which publishers are credible -- such as “public polling about news outlets” and “whether readers are willing to pay for news from particular publishers” -- are vague and could be problematic to enforce. And The Wall Street Journal noted that Facebook was still undecided about adding the authority component; without it, the possible negative impact of these news feed changes could be even worse.

    It is possible that Facebook’s move to include “Related Articles” next to posts that its fact-checking partners have flagged could override people’s tendency to believe what their peers share. And perhaps the algorithm that tries to stop the spread of stories the fact-checkers have flagged may decrease the spread of fake news. But it’s also possible that these new moves will undermine those initiatives, and that Zuckerberg’s aim to make users happier could also make them more misinformed.