Author Page | Media Matters for America

Melissa Ryan

  • Facebook has a long history of failing its users. The massive data breach is just the latest example.

    As Facebook continues to deal with the fallout from the largest data breach in its history, Media Matters takes a look back at some of its previous failures

    Blog ››› MELISSA RYAN & ALEX KAPLAN


    Facebook recently announced the worst data breach in the company’s history, affecting approximately 30 million users. This breach allowed hackers to “directly take over user accounts” and see everything in their profiles. The breach “impacted Facebook's implementation of Single Sign-On, the practice that lets you use one account to log into others.” Essentially, any site users signed into using their Facebook login -- like Yelp, Airbnb, or Tinder -- was also vulnerable. Hackers who have access to the sign-on tokens could theoretically log into any of these sites as any user whose data was exposed in the hack. As a precaution, Facebook logged 90 million users out of their accounts. On October 12, the company offered users a breakdown of how many people were affected and what data was exposed.

    Via Facebook:

    The attackers used a portion of these 400,000 people’s lists of friends to steal access tokens for about 30 million people. For 15 million people, attackers accessed two sets of information – name and contact details (phone number, email, or both, depending on what people had on their profiles). For 14 million people, the attackers accessed the same two sets of information, as well as other details people had on their profiles. This included username, gender, locale/language, relationship status, religion, hometown, self-reported current city, birthdate, device types used to access Facebook, education, work, the last 10 places they checked into or were tagged in, website, people or Pages they follow, and the 15 most recent searches. For 1 million people, the attackers did not access any information.

    Users can find out if they were affected and what data was accessed at Facebook’s help center.

    Even with the update, we still don’t know enough about the breach. We don’t know who was behind the attack. The FBI is investigating the hack, as is the European Union (via Ireland’s Data Protection Commission, Facebook’s lead privacy regulator in Europe). Multiple members of Congress have expressed concern about the breach.

    What we do know is that this latest data breach is hardly the only way Facebook has failed its consumers. Media Matters has cataloged Facebook’s multitude of failures to protect its consumers since the company’s beginnings.

    Data privacy

    Cambridge Analytica

    The public learned about Facebook’s most notorious data privacy breach on March 16 of this year. Facebook abruptly announced that it had banned Cambridge Analytica, the firm that did data targeting for Donald Trump’s presidential campaign, from using the platform for, according to The Verge, “violating its policies around data collection and retention.” The next day, The New York Times and The Observer broke the story Facebook was clearly trying to get ahead of: Cambridge Analytica had illegally obtained and exploited the Facebook data of 50 million users in multiple countries.

    Christopher Wylie, Cambridge Analytica’s former research director, blew the whistle on how the firm used the ill-gotten data of Facebook’s users to target American voters in 2016. The company, founded by right-wing megadonor Robert Mercer, had political clients in the U.S. and around the world; it did work for President Donald Trump’s campaign, Ted Cruz’s presidential campaign, current national security adviser John Bolton’s super PAC, and more. Following Wylie’s exposé, more information was revealed about the firm: Its leadership was caught on camera “talking about using bribes, ex-spies, fake IDs and sex workers.” It gave a sales presentation about disrupting elections to a Russian oligarch in 2014. And the firm reached out to WikiLeaks in 2016 offering to help release then-Democratic presidential nominee Hillary Clinton’s emails. Following these revelations, Cambridge Analytica shut down (though there are serious questions about whether it spun off into a new company).

    The data breach didn’t just expose Facebook user data to a political consulting firm; it exposed it to a company backed by a right-wing billionaire whose full operations aren’t yet known. Put another way, a shady operation was offering services like entrapment to potential clients, and the only tool required to do that was Facebook.

    Facebook continues to find more unauthorized scraping of user data. It disabled a network of accounts belonging to Russian database provider SocialDataHub for unauthorized collection of user information. SocialDataHub previously provided analytical services to the Russian government, and its CEO even praised Cambridge Analytica.

    Advertising profits over user privacy

    Facebook’s business model monetizes the personal information of its users for advertising purposes. Advertisers on Facebook pay for access to information about users in order to create better-targeted ad campaigns. But over the course of Facebook’s history, the company has continually exposed users’ data without their consent, putting profits over privacy considerations.

    In 2009, Facebook was forced to settle a class action lawsuit from users and shut down its Beacon ad network, which posted users’ online purchases from participating websites on their news feeds without their permission. In 2010, Facebook was caught selling data to advertising companies that could be used to identify individual users. The company has been fined in Europe multiple times for tracking non-users for the purpose of selling ads. It admitted in March that it collected call history and text messages from users on Android phones for years.

    Exposing data of Facebook employees

    Facebook’s privacy failures affect its employees as well. The Guardian reported last year that a security lapse exposed the personal details of 1,000 content moderators across 22 departments to users suspected of being terrorists. Forty of those moderators worked on Facebook’s counterterrorism unit in Ireland, at least one of whom was forced to go into hiding for his own safety because of potential threats from terrorist groups he banned on the platform.

    Misinformation

    Trending Topics

    In response to a Gizmodo article claiming Facebook employees were suppressing conservative outlets in its Trending Topics section, the company fired its human editors in 2016 and started relying on an algorithm to decide what was trending. Following this decision, multiple fake stories and conspiracy theories appeared in the trending section. The problems with Trending Topics continued through this year, with the section repeatedly featuring links to conspiracy theory websites and posts from figures known for pushing conspiracy theories. Facebook mercifully removed Trending Topics altogether in June 2018.

    State-sponsored influence operations and propaganda

    During the 2016 campaign, Russian operatives from the organization known as the Internet Research Agency (IRA) -- which is owned by a close associate of Russian President Vladimir Putin -- ran multiple pages that tried to exploit American polarization. In particular, the IRA ran ads meant to stoke tensions about the way American police treat Black people while using other pages to support the police; the organization also played both sides on immigration.

    The IRA also stole identities of Americans and created fake profiles to populate its pages focusing on “social issues like race and religion.” It then used the pages to organize political rallies about those issues. During the campaign, some Facebook officials were aware of the Russian activity, yet did not take any action. In 2017, Facebook officials told the head of the company’s security team to tamp down details in a public report it had prepared about the extent of Russian activity on the platform. It was only after media reporting suggested Facebook had missed something that the company found out the extent of that activity. So far this year, Facebook has taken down accounts potentially associated with the IRA.

    Facebook in August 2018 also removed a number of accounts that the company had linked to state media in Iran.

    Foreign networks spreading fake news and getting ad revenue

    Since at least 2015, Facebook has been plagued by fake news stories originating from Macedonia that are pushed on the platform to get clicks for ad revenue. Despite being aware of this activity during the 2016 campaign, Facebook took no action to stop it, even as locals in Macedonia “launched at least 140 US politics websites.” Since then, Facebook has claimed that it has taken steps to prevent this kind of activity. But it has continued, as Macedonian accounts used the platform to spread fake stories about voter fraud in special elections in Alabama in 2017 and Pennsylvania in 2018.

    Macedonians aren’t the only foreign spammers on Facebook: A large network of users posing as Native Americans has operated on the platform since at least 2016. The network exploited the Standing Rock protests to sell merchandise, and it has posted fake stories to get ad revenue. While much of this activity has come out of Kosovo, users from Serbia, Cambodia, Vietnam, Macedonia, and the Philippines are also involved.

    Facebook has also regularly struggled to notice and respond to large foreign spammer networks that spread viral hoaxes on the platform:

    • The platform allowed a Kosovo-based network of pages and groups that had more than 100,000 followers combined to repeatedly push fake news. Facebook finally removed the network following multiple Media Matters reports.

    • The platform allowed a network of pages and groups centered in Saudi Arabia and Pakistan that had more than 60,000 followers to publish fake stories. It was taken down following a Media Matters report.

    Facebook officials have also downplayed the key role Facebook groups play in spreading fake news, even though the platform has been used regularly by people in other countries to push fake stories.

    Domestic disinformation campaigns

    Until just recently, Facebook did not respond to networks of pages that regularly posted false stories and hoaxes and coordinated to amplify one another’s disinformation. Facebook finally took down some of these domestic disinformation networks on October 11, right before the 2018 midterms, noting that they violated its spam and inauthentic behavior policies. But as Media Matters has documented, even this sweep missed some obvious targets.

    Fake news thriving on Facebook

    Facebook’s fake news problem can be illustrated well by one of the most successful fake news sites on the platform, YourNewsWire. Based in California, YourNewsWire has been one of the most popular fake news sites in the United States and has more than 800,000 followers through its Facebook pages. Time and time again, hoaxes the site has published have gone viral via Facebook. Some of these fake stories have been flat-out dangerous and have been shared on Facebook hundreds of thousands of times. Facebook’s designated third-party fact-checkers debunked stories the site had published more than 80 times before Facebook apparently took action and penalized the site in its news feed, forcing it to respond to the fact-checkers’ repeated debunks.

    Fake news has also been a problem in Facebook searches: Since at least 2017, fake stories about celebrities have popped up in search results, even after some had been debunked by Facebook’s designated third-party fact-checkers. Facebook has said in response that it is trying to improve its search results.

    The problem has also extended to its ads. In May 2018, Facebook launched a public database of paid ads deemed “political” that ran on the platform. A review of the database found that the platform, in violation of its own policies, allowed ads featuring fake stories and conspiracy theories.

    Withholding 2016 data from researchers

    After the 2016 election, researchers repeatedly urged Facebook to give them access to its data to examine how misinformation spreads on the platform. In April, the platform announced it would launch an independent research commission that would have access to the data. However, the platform has refused to allow researchers to examine data from before 2017, meaning data from during the 2016 election is still inaccessible.

    Misuse of Instant Articles

    BuzzFeed reported earlier this year that fake news creators were pushing their content via Facebook’s Instant Articles, a feature that allows stories to load on the Facebook mobile app itself and which Facebook partly earns revenue from. In response, Facebook claimed it had “launched a comprehensive effort across all products to take on these scammers.” Yet the platform has continued to allow bad actors to use the feature for fake stories and conspiracy theories.

    Problems with fact-checking

    In response to the proliferation of fake news on the platform after the 2016 campaign, Facebook partnered with third-party fact-checkers to review posts flagged by users as possible fake news. Since then, some of these fact-checkers have criticized Facebook for a lack of transparency, particularly around its flagging process, for withholding data on the effectiveness of the debunks, and for failing to properly communicate with them.

    In 2017, Facebook included the conservative Weekly Standard in its fact-checking program in the United States. The platform otherwise included only nonpartisan fact-checkers in its program, and since then it has not included any corresponding progressive outlet. This arrangement resulted in the conservative outlet fact-checking a progressive outlet over a disputed headline and penalizing it in the news feed, a move that was harshly criticized.

    Human and civil rights violations

    Poor policies for monitoring white supremacy and hate

    This year, leaked documents showed that while Facebook’s content policies forbid hate speech arising from white supremacy, so-called white nationalist and white separatist views were considered acceptable, a policy it is now reviewing after public scrutiny. A 2017 ProPublica investigation of Facebook’s content policies showed that white men were protected from hate speech but Black children were not. Neo-Nazis and white supremacists continue to profit by selling white supremacist clothing and products on Facebook and Instagram. Facebook CEO Mark Zuckerberg also defended the rights of Holocaust deniers to share their conspiracy theories on the platform.

    After years of pressure from civil rights groups, Facebook finally agreed to submit to a civil rights audit, but it also announced the creation of a panel to review supposed bias against conservatives the same day, equating the civil rights of its users with partisan bickering by Republicans.

    Contributing to violence in multiple countries

    Facebook in recent years has actively expanded to developing countries. Since then, the platform has been used in Myanmar and Sri Lanka to encourage hate and violence against minorities, resulting in riots and killings. In Libya, militias have used the platform to sell weapons, find their opponents, and coordinate attacks. The United Nations has issued multiple reports criticizing Facebook’s role in Myanmar, suggesting the platform “contributed to the commission of atrocity crimes” in the country. Activists and officials in those countries also complained that Facebook had not employed moderators to monitor for hateful content, nor had it established clear points of contact for people in those countries to raise concerns.

    Content sent via messaging app WhatsApp, which Facebook owns, has also caused problems. In India, hoaxes spreading through the app have led to multiple lynchings, and the Indian government (whose supporters have themselves spread hoaxes) has pressured the company to clamp down on misinformation. In response, the company has resorted to going on the road to perform skits warning people about WhatsApp hoaxes. Other countries like Brazil and Mexico have also struggled with hoaxes spreading through WhatsApp, with the latter also seeing lynchings as a result.

    Used by authoritarians to target opponents

    Certain governments have also used Facebook as a means to target and punish their perceived opponents. In the Philippines, supporters of President Rodrigo Duterte, some of whom have been part of Duterte’s government, have spread fake content on the platform to harass and threaten his opponents. And in Cambodia, government officials have tried to exploit Facebook’s policies to target critics of Prime Minister Hun Sen.

    Ads discrimination

    Facebook’s ad policies have allowed people to exclude groups based on their race while creating a target audience for their ads, as ProPublica noted in 2016. The following year, it found that despite Facebook’s claims that it would stop such discrimination, housing ads on the platform could still exclude audiences by race, sex, disability, and other factors. In 2017, civil rights groups filed a lawsuit against the platform, and the Department of Housing and Urban Development also filed a complaint. Another investigation the same year found that the platform could exclude viewers by age from seeing job ads, a potential violation of federal law. In 2018, the American Civil Liberties Union sued Facebook for allegedly allowing employers to exclude women from recruiting campaigns.

    Helping anti-refugee campaign in swing states

    In 2016, Facebook, along with Google, directly collaborated with an agency working with the far-right group Secure America Now to target anti-Muslim ads -- which warned about Sharia law and attacked refugees -- to Facebook users in swing states.

    Online harassment

    Facebook has done little to protect people who become targets of online harassment campaigns, even though most of them are likely users of Facebook themselves. Time and again, Facebook has allowed itself to be weaponized for this purpose. Alex Jones and Infowars are perhaps the most famous examples of this problem. Even though Jones harassed Sandy Hook families for years, calling the school shooting a false flag, spreading hate speech, and engaging in other forms of bullying, Facebook continued to allow him free rein on its platform. The company finally banned Jones in July this year, after weeks of public pressure, including an open letter from two Sandy Hook parents, but only after Apple “stopped distributing five podcasts associated with Jones.”

    Facebook has also allowed conspiracy theorists and far-right activists to harass the student survivors of the Parkland school shooting, most of whom were minors, on the platform. More recently, it allowed right-wing meme pages to run a meme disinformation campaign targeting professor Christine Blasey Ford, Deborah Ramirez, and other survivors who came forward during the confirmation process of now Supreme Court Justice Brett Kavanaugh.

    Still more screw-ups

    Then there are the failures that defy category. In 2012, Facebook conducted psychological tests on nearly 700,000 users without their consent or knowledge. Zuckerberg had to apologize after giving a virtual reality tour of hurricane-struck Puerto Rico. Illegal opioid sales run rampant on Facebook, among other platforms, and the company has been unable to curb or stop them.

    Even advertisers, the source of Facebook’s profit, haven’t been spared. Facebook’s latest political ad restrictions have created problems for local news outlets, LGBTQ groups, and undocumented immigrants seeking to buy ads. Facebook also had to admit to advertisers that it gave them inflated video-viewing metrics for the platform for over two years.

    What Facebook owes consumers

    As a college student, Zuckerberg offered the personal data of Facebook’s initial users at Harvard to a friend and joked that people were “dumb fucks” for trusting him with their personal information. One hopes that Zuckerberg’s respect for his customer base has improved since then, but Facebook’s many failures suggest that it hasn’t.

    BuzzFeed’s Charlie Warzel suggested that Facebook’s users simply don’t care enough about data privacy to stop using the platform. We have a slightly different theory: Users don’t leave Facebook because there’s no available alternative. Without a competitor, Facebook has no real incentive to fix what it’s broken.

    The impact of Facebook’s failures compounds across society at large. As the founder of one of Facebook’s designated third-party fact-checkers told The New York Times, “Facebook broke democracy. Now they have to fix it.”

  • At Senate hearing about election interference, tech companies prove they won't do a damn thing unless they are forced

    Twitter CEO Jack Dorsey and Facebook COO Sheryl Sandberg testified before the Senate intelligence committee this morning. Here’s what you need to know.

    Blog ››› MELISSA RYAN

    This morning, the Senate intelligence committee questioned Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey on Russian interference in the 2016 election. The hearing was the culmination of a two-year investigation into Russian election interference by the committee and Congress’ best opportunity to publicly hold Facebook and Twitter accountable for their role in allowing Russian operatives to game their platforms to target Americans with propaganda. As Angelo Carusone said earlier: “The tech industry’s failure to grapple with its roles in allowing -- and sometimes even enabling -- the fake news crisis and foreign interference in American elections is a national security crisis.” Today Americans had the opportunity to hear from Sandberg and Dorsey directly what Facebook and Twitter have done to protect them since 2016.

    The first time tech executives from Facebook, Twitter, and Google testified before the Senate intelligence committee last year, committee members took a hostile posture. Committee chair Richard Burr (R-NC) and vice chair Mark Warner (D-VA) both scolded Facebook, Twitter, and Google for not taking seriously election interference or the fact that their platforms had been weaponized by foreign propagandists. At one point, Warner, frustrated by how little the tech companies claimed to know about what was happening on their own platforms, said, “Candidly, your companies know more about Americans, in many ways, than the United States government does. The idea that you had no idea any of this was happening strains my credibility.”

    Ten months later, as I watched Dorsey and Sandberg testify before the committee, it felt like relations had thawed -- perhaps not with Google, which refused to send its CEO and instead was represented by an empty chair, but certainly with Facebook and Twitter. Members of the committee continued to ask tough questions and press Dorsey and Sandberg when they weren’t forthcoming, but the atmosphere had changed. I get the sense that after nearly a year of conversations and hearings, the working relationship is perhaps in a better place.

    Of course, the tech companies have taken a beating in the press since that first hearing. We’ve since learned that Russian trolls got tens of thousands of Americans to RSVP for actual local events via Facebook. Americans have now seen the thousands of ads and organic content that Russian propagandists deployed on Facebook. Conspiracy theories about the Parkland shooting survivors, most of whom were still minors, spread like wildfire on social media. News broke that Cambridge Analytica had harvested the data of at least 50 million Facebook users. Russia is still interfering in our political conversation, and Iran is now gaming the platforms as well.

    This morning’s hearing was probably the last time we’ll hear from the tech companies or the committee before the midterm election. Here’s what we’ve learned (and what we still don’t know):

    Promises made, promises kept?

    Facebook and Twitter made a lot of promises to the committee in the 2017 hearing. Both companies promised to change their ad policies, enhance user safety, build better teams and tools to curb malicious activity, better collaborate with law enforcement and one another, and communicate more transparently with the public.

    How’d they do?

    • Updated ads policy. Both Facebook and Twitter have announced new political and issue ad policies. Both companies have also announced their support for the Honest Ads Act. During the hearing, Sen. Ron Wyden (D-OR) asked Facebook specifically about voter suppression ads, which both Russia and the Trump campaign used in 2016. Sandberg said that in the future, this kind of targeting would not be allowed, though she didn’t specify if she was talking about just foreign actors or American political campaigns as well.

    • User safety. Perhaps the most telling moment of the hearing was when Sen. Martin Heinrich (D-NM) asked Sandberg about the real harm done when real people (not just fake accounts) intentionally spread conspiracy theories. Sandberg’s solution, rather than removing the incendiary content, was to have third-party fact-checkers review potentially false content (because, according to her, Facebook isn’t the arbiter of truth), mark it as false, warn users before they share it, and present users with “alternative facts.”

    • Build better teams and tools to curb malicious activity. In her opening statement, Sandberg said: “We’re investing heavily in people and technology to keep our community safe and keep our service secure. This includes using artificial intelligence to help find bad content and locate bad actors. We’re shutting down fake accounts and reducing the spread of false news. We’ve put in place new ad transparency policies, ad content restrictions, and documentation requirements for political ad buyers. We’re getting better at anticipating risks and taking a broader view of our responsibilities. And we’re working closely with law enforcement and our industry peers to share information and make progress together.” Dorsey also highlighted Twitter’s progress in his opening statement, saying: “We’ve made significant progress recently on tactical solutions like identification of many forms of manipulation intending to artificially amplify information, more transparency around who buys ads and how they are targeted, and challenging suspicious logins and account creation.”

    • Better collaboration with law enforcement and with one another. Committee members asked Dorsey and Sandberg about this multiple times during the hearing. Both agreed that when it came to American security, Twitter and Facebook weren’t in competition and collaborated frequently. They also expressed a good relationship with law enforcement agencies, though Dorsey complained more than once about having too many points of contact.

    • Communicate more transparently to the public. Committee members pressed both Dorsey and Sandberg to be more transparent. Warner asked Dorsey if Twitter users have a right to know if the account they’re interacting with is a bot. Dorsey agreed to this, with the caveat “as far as we can detect them.” Warner suggested to Sandberg that most of Facebook’s users don’t know what data Facebook has on them or how that data is used. Further, Warner pressed Sandberg, asking if users had a right to know how much their data was worth to Facebook. Wyden pointed out that data privacy is a national security issue as Russians used our own data to target us, saying, “Personal data is now the weapon of choice for political influence campaigns.” Sen. Susan Collins (R-ME) asked Dorsey if Twitter had done enough to disclose to users that they were exposed to IRA propaganda; Dorsey admitted the platform had not.

    Questions still outstanding

    For every question Sandberg and Dorsey answered during the hearing, there were plenty that they couldn’t or wouldn’t answer. Most of the time, they promised to follow up with the committee, but here’s what we still don’t know and likely won’t get an answer to before the 2018 elections:

    • What are the tech companies doing to prepare for “deepfake” video and audio? Sen. Angus King (I-ME) asked if the companies were prepared to combat “deepfake” video and audio, content that is digitally manipulated to look and sound extremely real. Neither Sandberg nor Dorsey had a good answer, which is worrisome given that “deepfake” audio and video are just around the corner.

    • Are the tech companies keeping an archive of suspended and removed accounts, and will they make this archive available to researchers and/or the general public? Both Sens. Roy Blunt (R-MO) and James Lankford (R-OK) asked about this, which is an important question, especially for academic researchers. Neither Sandberg nor Dorsey had a clear answer.

    • What can be done about the selling of opioids online? This question came from Sen. Joe Manchin (D-WV), who also asked Sandberg and Dorsey if their companies bore any moral responsibility for deaths caused by opioid sales on social media.

    • How much did tech companies profit from Russian propaganda? Sen. Kamala Harris (D-CA) has asked Facebook this question repeatedly during both intelligence and judiciary committee hearings. The most she’s gotten in follow-up from Facebook is that the number is “immaterial.”

    What happens next?

    Burr and Warner generally close these hearings by previewing what happens next. This time there was no such preview. Given that the election is almost two months away, that’s a bit unsettling. But the reality is that with the current makeup in Congress (and the executive branch), the government isn’t going to do anything else to protect Americans. No legislation will be passed, and if social media companies are called to testify before the House again anytime soon, it will likely be another circus hearing devoted to the right’s pet issue of social media censorship. On the Senate’s part, however, holding tech companies accountable and producing reports is about as much as the intelligence committee can do right now.

    Facebook, Twitter, and the absentee Google left today's hearing with questions unresolved and problems nowhere near fixed. Beyond the Senate Intelligence Committee asking pertinent questions, Congress has shown no interest in holding social media companies to account for those issues that remain outstanding.

  • As Jon Kyl fills John McCain's Senate vacancy, Facebook needs to act

    Kyl’s appointment to the Senate should have been an opportunity for Facebook to scrap the conservative bias review and stop caving to the right

    Blog ››› MELISSA RYAN


    Former U.S. Sen. Jon Kyl has been announced to fill the vacancy left by the late Arizona Sen. John McCain.

    It’s the third gig Kyl has gotten in Republican politics this year. In July, the White House tapped Kyl to act as the “sherpa” for Supreme Court nominee Judge Brett Kavanaugh’s Senate confirmation hearing.

    Earlier, Facebook also hired Kyl and his law firm, Covington & Burling, to lead the social media company’s conservative bias review -- an effort to look for supposed bias against right-wing figures and content on social media platforms, a claim that has since been debunked. There haven’t been any updates on the review since it was announced in early May, and it’s unclear how Kyl’s newest job as a senator will affect the conservative bias review or his continued role in it.

    The Hill reports that Facebook will continue with the conservative bias review. No word on who will take over the review now that Kyl's new job will likely leave him unable to head the project.

    Kyl’s appointment gives Facebook an opportunity to reevaluate its strategy of engaging with the right on claims that have no basis in fact. Facebook should scrap the conservative bias review entirely and stop helping the Republicans rally their base at the expense of Facebook’s own integrity. Sadly it's an opportunity Facebook has chosen not to take.

    Media Matters has reached out to Facebook for a comment and will update if they respond.

  • Executives from Twitter and Facebook are testifying before Congress. Here’s what you need to know.

    The six questions that tech executives need to answer before Congress

    Blog ››› MELISSA RYAN


    Silicon Valley hikes back up to Capitol Hill this week. Twitter CEO Jack Dorsey and Facebook COO Sheryl Sandberg will testify before the Senate Select Committee on Intelligence in an open hearing on “foreign influence operations and their use of social media platforms.” Larry Page, CEO of Google parent company Alphabet, was invited to testify as well but has so far refused the invitation. The committee plans to have an empty chair at the hearing to illustrate Google’s absence.

    This will be the highest profile hearing on Russian interference on social media to date. Thus it’s Congress’ best opportunity to publicly hold Facebook and Twitter accountable for their role in allowing Russian operatives to game their platforms to target Americans with propaganda.

    I’ve been following this committee’s investigation from its first open hearing last year. I’ve watched (and often rewatched) every public hearing the committee has held and read every statement and report it’s issued. Here’s what you need to know.

    Senate intelligence: The adults in the room

    The Senate intelligence committee is tasked with overseeing the 19 entities that make up America’s intelligence community. The committee began investigating possible Russian interference in 2016 elections and collusion with the Trump campaign in January of last year, months before the special counsel’s investigation began. Committee Chairman Sen. Richard Burr (R-NC) and Vice Chairman Sen. Mark Warner (D-VA) pledged from the start to conduct the investigation in a bipartisan manner, working together to uncover the truth and produce “both classified and unclassified reports.”

    So far, Burr and Warner have stayed true to those principles, in stark contrast to their counterparts on the House committee, whose own investigation has become a dumpster fire. Whereas Rep. Devin Nunes (R-CA) and his Republican colleagues in the House seem mostly interested in giving the Trump administration cover, Burr actually seems to understand the gravity of the situation and works alongside Warner accordingly. The committee has produced two unclassified reports so far, the first intended to show election officials, political campaigns, and the general public what Russian attacks looked like in 2016, where government agencies failed in protecting us, and what actionable recommendations federal and state governments could take moving forward. The second report backed the assessment of intelligence agencies that the “Russian effort was extensive and sophisticated, and its goals were to undermine public faith in the democratic process, to hurt Secretary Clinton (Democratic candidate Hillary Clinton) and to help Donald Trump.” The committee has also produced classified reports available to federal agencies and state election officials.

    To put it another way, for the most part, the committee is acting in good faith and acknowledging reality. Members have gone out of their way to avoid political theater, give the public actionable information about election interference from Russia, and demonstrate what the future could look like. Their open hearings on election interference are the most useful source of information currently available from the U.S. government.

    Speaking of political theater, let’s talk about that other tech hearing on the same day

    In an impressive feat of counterprogramming, the Republican-led House Energy and Commerce Committee is holding a hearing on “Twitter’s algorithms and content monitoring,” also with Twitter CEO Jack Dorsey, on the same day!

    Google, Facebook, and Twitter executives are staple witnesses at congressional hearings, but most of the time we don’t learn all that much from them. This is partly because Congress overall has a severe knowledge gap when it comes to technology issues, but mostly because these hearings often become moments of political theater for members of Congress looking to create a viral moment on YouTube or a fundraising hook.

    President Donald Trump and most other elected Republicans seem wholly uninterested in holding the tech companies accountable for election interference by foreign actors, opting instead to complain about censorship of conservatives on social media that doesn’t actually exist. (Trump tweeted last week that Google is “rigged” against him after Fox Business’ Lou Dobbs reported on a sketchy study about the search engine by PJ Media.)

    There’s no data to back up the GOP’s claims of censorship. Media Matters studied six months of data from political Facebook pages and found that right-leaning Facebook pages had virtually identical engagement to left-leaning pages and received more engagement than other political pages. The methodology of the PJ Media Google study that Trump mentioned on Twitter makes no sense. And using a screenshot from the pro-Trump subreddit “r/The_Donald,” reporters were able to debunk in a matter of minutes Trump’s most recent claim that Google gave former President Barack Obama’s State of the Union special treatment on its homepage that it did not give to Trump.

    Look for Republicans outside of the intelligence committee to try to derail the Senate hearing and focus instead on riling up their base around the mythical censorship issue. The right has been fairly open about the fact that this “major line of escalated attack” is its plan. Hopefully, Republicans on the committee won’t contribute to this line of attack, wasting valuable hearing minutes that should be devoted to election and national security.

    Facebook CEO Mark Zuckerberg’s visit to Congress earlier this year is a prime example of how easy it is to derail a hearing. Zuckerberg testified over two days before House and Senate committees. The Senate hearing, held jointly by the judiciary and commerce committees, devolved into Zuckerberg explaining how the internet works to the poorly informed senators. House commerce committee members were more up to speed, but Republican members -- following Ted Cruz’s lead from the day before -- spent most of their time grilling Zuckerberg about nonexistent censorship of social media personalities Diamond and Silk.

    What tech companies will need to answer

    One thing that always comes across when you watch these hearings is the frustration that members of the committee feel toward the tech industry. Facebook has taken the most heat, but the frustration extends to Twitter and Google too. There’s a lot of blame to go around (Congress hasn’t passed one piece of legislation to protect American voters before the midterm elections), but tech companies allowed their platforms to be weaponized, missed what was happening until it was too late, and remain on the front lines of protecting Americans from attacks that game social media platforms.

    Both Facebook and Twitter made a lot of promises to the committee in a 2017 hearing. Tomorrow’s hearing will give committee members an opportunity to report back on promises kept and hold Facebook’s and Twitter’s leadership accountable for promises broken.

    In his opening statement at that 2017 hearing, Sean Edgett, Twitter’s general counsel, assured the committee, “We are making meaningful improvements based on our findings. Last week, we announced industry-leading changes to our advertising policies that will help protect our platform from unwanted content. We are also enhancing our safety systems, sharpening our tools for stopping malicious activity, and increasing transparency to promote public understanding of all of these areas. Our work on these challenges will continue for as long as malicious actors seek to abuse our system and will need to evolve to stay ahead of new tactics.”

    Facebook vice president and general counsel Colin Stretch promised that “going forward, we are making significant investments. We're hiring more ad reviewers, doubling or more our security engineering efforts, putting in place tighter ad content restrictions, launching new tools to improve ad transparency, and requiring documentation from political ad buyers. We're building artificial intelligence to help locate more banned content and bad actors. We're working more closely with industry to share information on how to identify and prevent threats, so that we can all respond faster and more effectively. And we're expanding our efforts to work more closely with law enforcement.”

    Members of the committee also pressed the tech companies to continue to share documents and relevant information with them, cross-check Russian-related accounts that the companies took down during the 2017 French election to see if they also participated in American influence operations, improve algorithms, report back on how much money they made from legitimate ads that ran alongside Russian propaganda, and confirm to the committee the total amount of financial resources they devoted to protecting Americans from future foreign influence attacks.

    Beyond what’s been promised, these companies need to answer:

    • What’s their plan to protect Americans in 2018 (and beyond)? By now, Americans know what Russian interference in 2016 looked like. We also know that Russian meddling hasn’t stopped and that other hostile foreign actors (Iran) are waging their own campaigns against us. The committee should ask Dorsey and Sandberg to walk Americans through their plan to protect their American users from foreign interference and to pledge accountability.

    • How are they combating algorithmic manipulation on their platforms? Algorithmic manipulation is at the heart of Russian interference operations. Russia weaponized social media platforms to amplify content, spread disinformation, harass targets, and fan the flames of discord. This manipulation warps our social media experience, most of the time without our knowledge. Americans need to know what the tech companies are doing to fight algorithmic manipulation and what new policies have been put in place.

    • Are their new ad policies effective? Facebook, Google, and Twitter have all rolled out changes in their advertising policies meant to curb the ability of foreign entities to illegally buy ads. It’s time for a report back on how those policies are working and whether any more changes are necessary for the midterm elections.

    • What support and resources do they need from government? As Facebook’s former chief security officer recently pointed out, “In some ways, the United States has broadcast to the world that it doesn’t take these issues seriously and that any perpetrators of information warfare against the West will get, at most, a slap on the wrist.” As hard as I’ve been on the tech companies, government’s failures to protect us and the current administration’s complete indifference to the issue are just as abysmal. Americans should know where tech executives believe government is failing and what resources they need to better fight back against foreign interference.

    • Do they have the right people in the room? Russia used America’s issues with racial resentment in its influence operations. Members of Congress have made the point in past hearings that tech companies’ lack of diversity in their staffs likely contributed to their inability to recognize inauthentic content from Russians posing as, say, #BlackLivesMatter activists online. In fact, #BlackLivesMatter activists attempted to alert Facebook about potentially inauthentic content and were ignored. Americans need to know if Facebook and Twitter have the right team of people in place to fight foreign interference and if those teams include diverse voices.

    • How are they protecting Americans’ data? Facebook’s record is particularly abysmal here. The company failed to protect user data from being exploited by Cambridge Analytica and still can’t tell us in full what data the company had or what other entities had access to it. Given how common data breaches are and that Russia used data to target Americans, we need to know what steps tech companies are taking to protect us from data theft and the resulting harm.

    Twitter and Facebook are American-born companies that make a lot of money from their American users. Having top executives testify on election interference, in an open hearing, is long overdue. As Burr and Warner warned us just a few weeks ago, time is running out. Burr invoked the famous “this is fine” meme to illustrate his point, saying that Congress is “sitting in a burning room calmly with a cup of coffee, telling ourselves ‘this is fine.’”

    As any American who uses the internet can tell you, it isn’t.

  • Pro-Trump media politicizes the murder of Mollie Tibbetts, even as her family begs for space

    Tibbetts' family should be able to grieve their daughter without becoming political props

    Blog ››› MELISSA RYAN

    As Trump associates keep getting indicted, being found guilty, and agreeing to plea deals while surrendering themselves to the FBI, pro-Trump media have seized on the murder of Mollie Tibbetts, and they are exploiting it for their own purposes. The body of Tibbetts, a 20-year-old University of Iowa student, was discovered on August 21, and her alleged killer has been charged with first-degree murder. While a lot of news coverage and social media conversation has centered on Michael Cohen, Paul Manafort, and why their dual felony convictions are disastrous news for President Donald Trump, some members of right-wing media and their supporters on social media have instead chosen to politicize Tibbetts’ death -- ignoring her family’s own grief and objections -- in an effort to distract from these bombshell stories.

    Below are just a few examples of what Tibbetts’ family members are having to deal with, just a day after her body was found:

    • Former Speaker of the House and conservative pundit Newt Gingrich emailed reporters about how Tibbetts’ death was potentially good news for Republicans in the fall, provided they could exploit it enough.

    • Turning Point USA communications director and right wing social media star Candace Owens got into an argument on Twitter with someone who says she is Tibbetts’ second cousin, accusing her of hating Trump and his supporters more than Tibbetts’ alleged murderer.

    • Fox News contributor Sebastian Gorka, Fox News contributor Tom Homan (the former acting director of ICE), Fox News guest Jonna Spilbor, CRTV host Eric Bolling, and Breitbart editor-at-large Joel Pollak cited the murder as a reason to build a wall on America’s border with Mexico. Fox News contributor Tomi Lahren and Fox News guest Mike Huckabee cited the murder as a reason to end the policy of “sanctuary cities.”

    • Mike Cernovich used the occasion to promote his involvement with Republican Senate candidate Kelli Ward’s campaign. Ward has already run a Facebook ad on the matter.

    • On a particularly dark note, users on 4chan and 8chan have been actively celebrating Tibbetts’ death. Anonymous postings on these message boards have been highlighting an old tweet of hers and claiming she got what she deserved because of a combination of her political views and her gender. Mentions of Tibbetts on these boards spiked just as the Manafort and Cohen stories were dominating news coverage. Other far-right communities have pushed the meme as well. And the neo-Nazi site Daily Stormer published a misogynist screed in the same vein.

    Tibbetts’ aunt took to Facebook the evening of August 21 and begged others not to politicize her niece’s murder, writing, “Please remember, Evil comes in EVERY color. Our family has been blessed to be surrounded by love, friendship and support throughout this entire ordeal by friends from all different nations and races. From the bottom of our hearts, thank you.”

    Grieving families shouldn’t have to make statements like this. They shouldn’t have to beg politicians and media figures not to exploit the tragic death of a loved one. They shouldn’t have to watch in real time as their loved one is defamed and dehumanized until her memory is merely a caricature to be memed on the internet in perpetuity. But that’s exactly what happens. Right-wing media exploit tragedies and rewrite biographies of victims in the blink of an eye. They have no consideration for the victims they claim to care about or the grieving families and friends they’ve left behind.

    Additional Research by Nick Fernandez, Natalie Martinez, and Katie Sullivan.

  • Trump and GOP influencers escalate the debunked claim that social media is censoring conservatives

    Blog ››› MELISSA RYAN


    Update (8:15 p.m.): In an interview with Reuters, President Trump said “I won’t mention names but when they take certain people off of Twitter or Facebook and they’re making that decision, that is really a dangerous thing because that could be you tomorrow."

    The conspiracy theory that social media companies are censoring conservatives got a boost this weekend when President Donald Trump took to Twitter to complain.

    Trump is just whining about social media censorship (on social media) to push for more favorable conditions on the platforms. As I’ve written before, realistically,  the Trump campaign isn’t going to abandon social media. It needs the platforms to reach voters via targeted advertising. The Trump campaign spent millions on digital ads in 2016, and it will do the same in 2020. But it can badger the platforms in hopes of getting more attention from sales staff and potentially discounted ad buys. The platforms, which stand to make millions from Trump and GOP ad buys in 2018 and 2020, are going to feel the pressure to keep the campaign -- and the GOP overall -- happy.

    Trump wasn’t the only GOP figure whining about censorship this weekend. House Majority Leader Kevin McCarthy (R-CA) claimed that Twitter was censoring Fox News host Laura Ingraham’s tweets and called on Twitter CEO Jack Dorsey to “explain to Congress what’s going on.” What’s going on is that McCarthy doesn’t know how Twitter settings work. McCarthy could have seen Ingraham’s tweet simply by changing a content setting in his own Twitter profile.

    Other users pointed this out to him, but as of this writing, the tweet is still live.

    Trump campaign senior adviser Katrina Pierson, who has ties to a pro-Trump fake news site, also tweeted about social media censorship and quoted Jim Hoft, founder of the far-right conspiracy site The Gateway Pundit.

    Trump campaign manager Brad Parscale retweeted Pierson.

    Fox News is also all-in on the debunked narrative. Turning Point USA spokesperson Candace Owens showed up on the network multiple times over the weekend to talk about the matter. Fox & Friends spent the weekend portraying conservatives as victims of social media.

    This morning was no different. Fox & Friends dedicated two segments to alleged social media censorship of conservatives. Mornings with Maria on Fox Business also discussed Trump’s social media rant.

    Here’s the problem. The claim of social media censoring conservatives has been pretty thoroughly debunked. Back in May 2016, a report claimed that conservative outlets and stories were “blacklisted” from Facebook’s Trending Topics section. Facebook CEO Mark Zuckerberg met with conservatives, including a representative from Trump's 2016 campaign. And a subsequent internal investigation revealed “no evidence of systematic political bias” in the Trending Topics section. When the GOP started clamouring about censorship in April of this year -- after right-wing social media personalities Diamond and Silk falsely claimed their content was being censored on Facebook -- they offered no credible evidence. In fact, a report from the social media analytics firm NewsWhip found the opposite: “There are more than three times as many conservative publishers than liberal publishers on Facebook, and they receive more than 2.5 times the engagement on the social media platform than those who push opposing viewpoints,” the report claimed. Media Matters recently released a study of six months’ worth of data conclusively debunking this claim for Facebook pages. It found that right-wing Facebook pages are thriving.

    Trump’s tweets came just days after conspiracy theorist Alex Jones begged him for help on air. Other outlets have noted that Trump’s tweets could be interpreted as a defense of Jones, who has been banned from multiple platforms.

    Meanwhile, Republicans continue to harp on the conservative censorship conspiracy theory as a way to rally their base. Trump’s tweets this weekend are part of that overall strategy. But if Trump and the GOP are going to continue crying wolf, reporters should start asking if they’re talking about conservatives or Alex Jones and his ilk of hatemongers and conspiracy theorists. If the GOP wants to stand up for Jones’ right to harass the Sandy Hook families and spread hate, the party should feel comfortable naming him as the cause for complaint.

  • Tech companies must do more to stop online harassment and online radicalization

    Women have warned about the dangers of online radicalization for years

    Blog ››› MELISSA RYAN


    BuzzFeed News’ Joseph Bernstein wrote a chilling profile of Lane Davis, a far-right conspiracy theorist known for his online videos and writing, who fatally stabbed his own father last year. Davis was a fixture in the world of radicalized online communities; he wrote and researched for Nazi-sympathizer and troll Milo Yiannopoulos and was a prolific writer for Ralph Retort, an online conspiracy theory and misinformation website. He was also a source for Bernstein on more than one occasion.

    Bernstein’s profile lays out Davis’ online activities pretty thoroughly. While Davis’ content garnered him internet fame and influence within the world of the alt-right, he never managed to make much money from all of his labor. He lived full time with his parents, unwilling or unable to support himself. According to Bernstein, Yiannopoulos really wanted Lane to join Breitbart and recommended him to the co-editor of Breitbart Tech. But Lane’s use of an explicit racial slur on a livestream was too extreme for even Breitbart, a publication with a record of coordinating content with white nationalists.

    Davis had a long history of internet conspiracy theory mongering. It was the misogynistic crusade known as Gamergate, a campaign of online harassment that led to death and rape threats against women gamers and female journalists covering the gaming industry, that put him on the map and opened up a whole new world for him and his content. Lane put “hundreds of hours” into conspiracy theory videos as he built his online profile. From the article:

    Like so many others, [Lane] had joined the late-Obama-era culture wars through Gamergate, the often radical online campaign that claimed to be concerned with ethics in gaming journalism. And he was there from the start, actively participating in a chatroom called Burgers and Fries, members of which more or less astroturfed the start of the movement through well-placed hashtags and well-timed confrontations. Here, Lane would have learned how a small group of dedicated people could compel an enormous, participatory audience by wielding an ever-expanding conspiracy theory about liberal influence.

    A lot has been written about how Gamergate was a precursor to the “alt-right” and other extremist anti-feminist movements currently dominating online communities. As reported by The Guardian, Gamergate was the launchpad for many current far-right celebrities, most notably Yiannopoulos and opportunistic troll Mike Cernovich. As Sarah Jeong wrote in The Washington Post:

    Many of the microcelebrities of the “alt-right” on the Internet built their brands during Gamergate. Mike Cernovich went from being relatively unknown to a voice for the alt-right. ... Milo Yiannopoulos, despite having never played a video game in his life, glommed onto the Gamergate phenomenon and rode it out to his benefit, using his platform at Breitbart to write long rambling “exposés” of various Gamergate targets, regardless of whether they were public figures.

    More broadly, the weaponized online harassment unleashed during Gamergate has been adopted by far-right movements across the globe. “Pizzagate” -- a conspiracy theory born online during the 2016 presidential campaign that claimed powerful celebrities and Democratic politicians were linked to a child trafficking ring operated out of a Washington, D.C., pizza parlor -- continues to attract believers long after gunman Edgar Welch was jailed for opening fire and terrorizing staff and patrons at the restaurant where he had gone to “self-investigate.”

    Women and people of color have been complaining to tech companies about online harassment for years, and for the most part their complaints have been ignored. Reading about Lane Davis, I couldn’t stop thinking about the platforms, especially YouTube and Reddit, that enabled his radicalization and extremism for years. Davis’ videos, which as of this writing are still live, spread lies and conspiracy theories that caused real harm to people.

    Conspiracy theories, Lane’s favorite topic to pontificate about online, dehumanize the people they target. Real people’s biographies get rewritten online: In the blink of an eye, the owner of a local pizza place is recast as the leader of a child sex trafficking ring, a teenage shooting survivor becomes an FBI plant, and an activist murdered by neo-Nazis is claimed to have actually died of a heart attack. The creators of these theories, the amplifiers, the sharers, and the believers don’t care about the harm they’re causing. The people involved are no longer human to them, just characters in a story. These victims are subjected to online harassment, doxxing, swatting, and, in some instances -- as in the case of Welch -- violence.

    The executives at tech companies either still don’t understand the consequences of hosting content linked to extremism on their platforms or they’re deliberately choosing to ignore reality. Just two days ago, Mark Zuckerberg, in an interview with Recode’s Kara Swisher, spoke about Holocaust deniers who spread their views on Facebook and said that while he personally found those views “deeply offensive,” he didn’t believe Facebook should remove them. Zuckerberg’s comments suggested that Holocaust denial was a simple misunderstanding: “I don’t believe that our platform should take that down because I think there are things that different people get wrong. I don’t think that they’re intentionally getting it wrong.” After public outcry, Zuckerberg was forced to clarify his remarks, explaining he wasn’t defending Holocaust deniers, and that yes, people who believe the Holocaust didn’t happen likely don’t have good intentions. But his statement failed to address Facebook’s policy of inaction regarding that brand of hateful content.

    The tragedy brought on by Lane Davis’ radicalism is just one of many reasons tech companies must face this issue head-on. From Gamergate to “Pizzagate,” Davis exploited social media’s lax policies on extremist conspiracy theories to build his platform -- and perhaps aid in the radicalization of others -- without consequence. Davis isn’t the first online extremist to turn violent offline, and he won’t be the last. He joins a growing list of young men who were radicalized online and whose “activism” became terrorism offline: the white supremacist who committed the mass shooting in Charleston, SC; the racist and misogynistic vlogger who killed six people and injured 14 more in the college town of Isla Vista, CA; and the alleged incel terrorist who killed 10 in Toronto. Tech companies have a moral obligation to protect their users from extremists and to do everything in their power to stop the radicalization that creates them in the first place. They can no longer feign ignorance about the potential for online radical content to lead to violence. It’s time for them to finally listen to the users who’ve been harmed by weaponized online harassment and stop giving men like Lane Davis a platform to spread hate and disinformation.

  • Facebook, stop hitting yourself

    Facebook’s attempts to appease the GOP over mythical conservative censorship claims have the opposite effect. It’s time for the tech giant to push back.

    Blog ››› ››› MELISSA RYAN


    Melissa Joskow / Media Matters

    Facebook has a bullying problem. No, not the one you’ve heard so much about -- that it’s the preferred tech platform of bullies. Facebook the company is being bullied by the Republican Party. And only Facebook can put a stop to it.

    The GOP -- from Trump’s campaign manager, to the Republican National Committee chairwoman, to apparently every member of the GOP House Judiciary Committee -- continually makes the claim that tech platforms censor conservatives, despite offering no data or evidence to back it up. As research from Media Matters definitively shows, there is no conservative censorship on Facebook, and I strongly suspect the same is true of Twitter and Google. The tech companies know that GOP officials aren’t being truthful when they make these claims, but instead of calling them out, they keep up a public face of working with the party as honest brokers. Facebook has gone above and beyond to address the GOP’s faux concerns, creating an anti-conservative bias review led by lobbyist and former Republican Sen. Jon Kyl and his firm.

    Yesterday, the GOP-controlled House Judiciary Committee held a second hearing devoted to supposed anti-conservative bias on the tech platforms. Unlike the first hearing, which the tech companies sat out, Facebook, Google, and Twitter all sent representatives to testify. Republicans repeatedly made the same false claims about anti-conservative bias on tech platforms. Democrats on the committee came out in force, calling their Republican colleagues out for their evidence-free claims. But the tech companies refused to stand up for themselves. As I watched the hearing, I wondered why the tech platforms had even bothered to show up. If you’re not going to stand up to the schoolyard bully, why show up at the playground at all?

    Key hearing highlights

    Rep. David Cicilline (D-RI) highlighted the repercussions of Facebook’s bowing to conservative pressure over the last two years. He pressed Facebook, in particular, on its decision to fire its human editors who reviewed content for its Trending Topics section after conservative leaders complained in 2016 that the company was biased against conservative publishers.

    Rep. Hakeem Jeffries (D-NY) pressed Facebook about its hiring of Kyl to lead the review of supposed anti-conservative bias and asked if the tech giant had “engaged any former Democratic members of the House or the Senate to participate in this exercise.” Facebook’s representative responded that “we do have conversations on both sides of the aisle” and pointed to a civil rights audit the company has also started -- implying that civil rights are a partisan issue. The representative dodged a question about Kyl’s role as the White House “sherpa” steering President Donald Trump’s Supreme Court nominee Brett Kavanaugh through the Senate confirmation process. Jeffries also noted that Facebook has engaged a “right-wing conservative-leaning organization” alongside nonpartisan outlets for its fact-checking initiative and asked whether it has engaged any “left-leaning progressive” outlets for the program. The company’s representative dodged that question as well.

    Rep. Ted Deutch (D-FL), whose district includes the city of Parkland, pressed Facebook and Google on the platforms’ failure to protect the student survivors of the February Parkland school shooting, many of whom are still minors, from conspiracy theories and misinformation spread about their personal lives, among other attacks. Deutch asked Google and Facebook representatives what it would take, in particular, for conspiracy theory outlet Infowars to be banned from their platforms.

    Rep. Jamie Raskin (D-MD) also highlighted the hollowness of conservative claims of bias at the tech platforms.

    To give you a flavor of what the other side brought to the table, Rep. Steve King (R-IA) asked Facebook about a claim by Jim Hoft of the highly disreputable conspiracy theory site Gateway Pundit that Facebook traffic to his website had decreased by 54 percent since 2016.

    As The Verge’s Casey Newton tweeted, referring to our study that debunked claims of conservative censorship on Facebook, “the most important fact to keep in mind” about the hearing is that conservative content performs really well on Facebook. Republicans should be more than happy with the engagement they’re seeing.

    But this hysteria about anti-conservative bias isn’t about the truth. Republicans keep harping on the myth because they know it rallies their base, and tech companies keep letting them peddle it. It’s long past time to end the charade. Facebook needs to stand up for the truth and call out the right-wing lie. Objective truth isn’t a partisan issue, and tech companies must do right by their users and take a stand for it. The only way to win against a bully is to stand up to them.

    It's not clear that Facebook has gotten the message.

    This post has been updated for clarity.

  • This data conclusively debunks the myth of conservative censorship on Facebook

    We studied Facebook pages that post content about American political news. Conservatives are not being censored -- in fact, right-wing Facebook pages are thriving.

    Blog ››› ››› MELISSA RYAN


    Sarah Wasko / Media Matters

    Right-wing politicians, pundits, and campaigns continually claim that Facebook and other tech platforms censor conservative content online. President Donald Trump’s campaign manager, Brad Parscale, frequently makes this argument. At every congressional hearing about social media, Republican members reliably make the same accusation. The GOP-controlled House Judiciary Committee has already held one hearing on the supposed censorship and is scheduled to hold a second on July 17. Conservatives believe that attacking tech companies over so-called censorship will rally their base, and they plan to continue the attacks.

    Even though those making these accusations have offered no evidence to support them, Facebook responded by announcing a conservative bias review -- retaining former Republican Sen. Jon Kyl of Arizona and his lobbying firm to advise the company. (Kyl is now also shepherding Supreme Court nominee Judge Brett Kavanaugh through his confirmation hearings.)

    It’s not the first time Facebook has reacted to claims of nonexistent right-wing censorship. In May 2016, a flimsy report claimed that conservative outlets and stories were “blacklisted” from Facebook’s Trending Topics section. To great fanfare, Facebook CEO Mark Zuckerberg met with conservatives, including a representative from Trump's campaign, and made promises to be good to them. A subsequent internal investigation revealed “no evidence of systematic political bias” in the Trending Topics section. But Facebook soon gave in anyway and fired the curators of the section, resorting instead to using an algorithm that routinely promoted fabricated stories from bogus sources. Add this cravenness to existing confirmation bias and plenty of dishonest actors willing to take advantage, and Facebook became a cesspool of fake news.

    The algorithm change announced in January 2018 was supposed to fix the fake news problem, which existed only because of Facebook’s previous failures. And now, with Facebook rolling out the welcome mat for conservatives, we’re about to begin that cycle anew.

    And once again, conservatives are pressuring Facebook with a total myth. Media Matters conducted an extensive six-month study into alleged conservative censorship on Facebook and found no evidence that conservative content is being censored on the platform or that it is not reaching a large audience.

    We identified 463 Facebook pages that had more than 500,000 likes each and regularly posted content dealing with American political news. We analyzed data from these pages, week by week, between January 1, 2018, and July 1, 2018, to observe trends in post interactions (reactions, comments, and shares) and page likes. We found two key things:

    • Partisan pages had roughly equal engagement, and they had more engagement than nonpartisan pages: Right-leaning and left-leaning Facebook pages had virtually identical average interaction rates -- measurements of a page's engagement -- at .18 percent and .17 percent, respectively, and nonaligned pages had the lowest interaction rates at .08 percent.
    • Right-leaning pages in total have a bigger presence on Facebook: In every week but one, right-leaning Facebook pages had a higher total number of interactions than left-leaning Facebook pages. Right-leaning pages had 23 percent more total interactions than nonaligned pages and 51 percent more total interactions than left-leaning pages. Images shared by right-leaning pages -- including memes that frequently include false and bigoted messages -- were by far the highest performing content on the Facebook pages examined.

    The data indicates something I’ve long assumed anecdotally: The right is out-organizing the left on Facebook. Even though the right-leaning pages had fewer page likes than the left-leaning pages, their rates of interaction are virtually identical. And when you look at the individual metrics, especially on image-based posts, the news gets even worse for the left. Despite having a larger base of aligned supporters on Facebook in terms of page likes, left-leaning pages don’t have as much impact with their base.
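    For readers who want to see the arithmetic behind an “interaction rate,” here is a minimal sketch. It assumes the rate is a page’s interactions (reactions, comments, and shares) divided by its page likes, expressed as a percentage and averaged across weekly observations; the study’s exact methodology may differ, and the field names and sample figures below are hypothetical rather than drawn from our data.

    ```python
    # Minimal sketch of the interaction-rate arithmetic described above.
    # Assumption (not from the study): rate = weekly interactions / page likes * 100.
    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class WeeklyPageStats:
        page_name: str
        alignment: str      # "right", "left", or "nonaligned" (hypothetical labels)
        page_likes: int
        reactions: int
        comments: int
        shares: int

        @property
        def interactions(self) -> int:
            # Post interactions as defined above: reactions + comments + shares.
            return self.reactions + self.comments + self.shares

        @property
        def interaction_rate(self) -> float:
            # Weekly interactions as a percentage of page likes.
            return 100.0 * self.interactions / self.page_likes

    def average_rate(rows: list, alignment: str) -> float:
        """Average the weekly interaction rates for pages of one alignment."""
        rates = [row.interaction_rate for row in rows if row.alignment == alignment]
        return mean(rates) if rates else 0.0

    # Hypothetical sample rows -- not real study data.
    rows = [
        WeeklyPageStats("Page A", "right", 600_000, 800, 150, 130),
        WeeklyPageStats("Page B", "left", 900_000, 1_100, 200, 230),
        WeeklyPageStats("Page C", "nonaligned", 750_000, 400, 100, 100),
    ]

    for side in ("right", "left", "nonaligned"):
        print(f"{side}: {average_rate(rows, side):.2f}%")
    ```

    Because the rate is normalized by page likes, pages of very different sizes can be compared on equal footing -- which is how a right-leaning page with fewer likes can still match or beat a larger left-leaning page’s engagement rate.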

    You can view the full study here. 

    It’s time to end the charade. The Trump campaign and politically aligned groups aren’t going to stop advertising on Facebook; they need the platform to reach their voters. Facebook should disband the conservative bias review and stop enabling political theater. Considering how many real problems the company is facing, it’s long overdue for Facebook to stop wasting time and resources on one that doesn’t exist. Political media also need to stop giving this myth oxygen. The next time Parscale or Sen. Ted Cruz (R-TX) starts whining about bias, reporters need to ask for actual numbers to back up the claims.

  • The UK’s Information Commissioner's Office just fined Facebook 500,000 pounds

    Blog ››› ››› MELISSA RYAN


    Sarah Wasko / Media Matters

    Big news from across the pond: The U.K. Information Commissioner’s Office (ICO) has completed an interim investigation report on Facebook’s data-sharing practices and fined the tech company 500,000 pounds for two breaches of the Data Protection Act 1998. Further, the report states that SCL, the parent company of Cambridge Analytica, will face criminal prosecution for not complying with an order the office issued the now-defunct company in May.

    The fine is the largest ever given out for a breach under the Data Protection Act. Facebook’s actions came before a new set of European Union data rules -- the General Data Protection Regulation -- went into effect, but had the data breach happened under GDPR, the fine could have been up to 359 million pounds.

    The ICO began investigating Cambridge Analytica after an American academic, David Carroll, asked the company to provide all of the data it had about him -- a request U.K. law required it to honor. Note that the data Carroll requested was his voter profile, which he was unable to obtain under U.S. law even though the information was used in U.S. elections.

    When Cambridge Analytica failed to supply the data, Carroll asked the ICO to enforce his request, which spurred the office to open an investigation. Just a few weeks later, the news broke that Cambridge Analytica had harvested the Facebook profiles of 50 million users (the reported number has since increased to at least 87 million). Cambridge Analytica executives were also caught on hidden camera bragging to potential customers about the company’s use of “bribes, ex-spies, fake IDs and sex workers” on behalf of its clients. Because Facebook failed to protect its users, the company became part of the ICO investigation.

    Today’s interim report doesn’t mean the investigation is over. According to The Guardian, “More than 20 different organisations, including political parties, data brokers, and social media companies, were approached by the ICO. One of the commissioner’s announcements on Wednesday was that the ICO would audit the data-processing practices of 11 political parties in the UK.” The ICO has also called on the U.K. government to “legislate a statutory code of practice under the new Data Protection Act to govern the use of data in political campaigns.”

    I appreciate the ICO’s suggestion that the U.K. needs additional legislation to protect Facebook’s users, but to be honest, that won’t be enough. Practically speaking, Facebook is too large a company for any one government to oversee. We already know that Cambridge Analytica wasn’t the only firm to exploit Facebook’s user data, and just yesterday news broke that a Russian company with Kremlin links also had access to user data, having developed “hundreds of Facebook apps” to collect data, “some of which were test apps that were not made public.”

    Facebook’s users are spread across the globe, and breaches of their data and other abuses have a global impact. The response to Facebook’s failures must be global as well. The American professor who is suing in the U.K. came up with a creative approach, and we need more of the same, as Facebook will change only in response to pressure. The more we can organize pressure campaigns with international reach and the more those campaigns utilize institutions in multiple countries, the more successful we’ll be at forcing Facebook’s hand.