Facebook has a long history of failing its users. The massive data breach is just the latest example.

As Facebook continues to deal with the fallout from the largest data breach in its history, Media Matters takes a look back at some of its previous failures.

MELISSA RYAN & ALEX KAPLAN



Facebook recently announced the worst data breach in the company’s history, affecting approximately 30 million users. This breach allowed hackers to “directly take over user accounts” and see everything in their profiles. The breach “impacted Facebook's implementation of Single Sign-On, the practice that lets you use one account to log into others.” Essentially, any site users signed into using their Facebook login -- like Yelp, Airbnb, or Tinder -- was also vulnerable. Hackers who have access to the sign-on tokens could theoretically log into any of these sites as any user whose data was exposed in the hack. As a precaution, Facebook logged 90 million users out of their accounts. On October 12, the company offered users a breakdown of how many people were affected and what data was exposed.

Via Facebook:

The attackers used a portion of these 400,000 people’s lists of friends to steal access tokens for about 30 million people. For 15 million people, attackers accessed two sets of information – name and contact details (phone number, email, or both, depending on what people had on their profiles). For 14 million people, the attackers accessed the same two sets of information, as well as other details people had on their profiles. This included username, gender, locale/language, relationship status, religion, hometown, self-reported current city, birthdate, device types used to access Facebook, education, work, the last 10 places they checked into or were tagged in, website, people or Pages they follow, and the 15 most recent searches. For 1 million people, the attackers did not access any information.

Users can find out if they were affected and what data was accessed at Facebook’s help center.

Even with the update, we still don’t know enough about the breach. We don’t know who was behind the attack. The FBI is investigating the hack, as is the European Union (via Ireland’s Data Protection Commission, Facebook’s lead privacy regulator in Europe). Multiple members of Congress have expressed concern about the breach.

What we do know is that this latest data breach is hardly the only way Facebook has failed its consumers. Media Matters has cataloged Facebook’s multitude of failures to protect its consumers since the company’s beginnings.

Data privacy

Cambridge Analytica

The public learned about Facebook’s most notorious data privacy breach on March 16 of this year. Facebook abruptly announced that it had banned Cambridge Analytica, the firm that did data targeting for Donald Trump’s presidential campaign, from using the platform for, according to The Verge, “violating its policies around data collection and retention.” The next day, The New York Times and The Observer broke the story Facebook was clearly trying to get ahead of: Cambridge Analytica had illegally obtained and exploited the Facebook data of 50 million users in multiple countries.

Christopher Wylie, Cambridge Analytica’s former research director, blew the whistle on how the firm used the ill-gotten data of Facebook’s users to target American voters in 2016. The company, founded by right-wing megadonor Robert Mercer, had political clients in the U.S. and around the world; it did work for President Donald Trump’s campaign, Ted Cruz’s presidential campaign, current national security adviser John Bolton’s super PAC, and more. Following Wylie’s exposé, more information was revealed about the firm: Its leadership was caught on camera “talking about using bribes, ex-spies, fake IDs and sex workers.” It gave a sales presentation about disrupting elections to a Russian oligarch in 2014. And the firm reached out to WikiLeaks in 2016 offering to help release then-Democratic presidential nominee Hillary Clinton’s emails. Following these revelations, Cambridge Analytica shut down (though there are serious questions about whether it spun off into a new company).

The data breach didn’t just expose Facebook user data to a political consulting firm; it exposed it to a company backed by a right-wing billionaire whose full operations aren’t yet known. Put another way, a shady operation was offering services like entrapment to potential clients, and the only tool required to do that was Facebook.

Facebook continues to find more unauthorized scraping of user data. The company disabled a network of accounts belonging to Russian database provider SocialDataHub for unauthorized collection of user information. SocialDataHub previously provided analytical services to the Russian government, and its CEO even praised Cambridge Analytica.

Advertising profits over user privacy

Facebook’s business model monetizes the personal information of its users for advertising purposes. Advertisers on Facebook pay for access to information about users in order to create better-targeted ad campaigns. But over the course of Facebook’s history, the company has continually exposed user data without users’ consent, putting profits over privacy considerations.

In 2009, Facebook was forced to settle a class action lawsuit from users and shut down its Beacon ad network, which posted users’ online purchases from participating websites on their news feeds without their permission. In 2010, Facebook was caught selling data to advertising companies that could be used to identify individual users. The company has been fined in Europe multiple times for tracking non-users for the purpose of selling ads. It admitted in March that it collected call history and text messages from users on Android phones for years.

Exposing data of Facebook employees

Facebook’s privacy failures affect its employees as well. The Guardian reported last year that a security lapse exposed the personal details of 1,000 content moderators across 22 departments to users suspected of being terrorists. Forty of those moderators worked on Facebook’s counterterrorism unit in Ireland, at least one of whom was forced to go into hiding for his own safety because of potential threats from terrorist groups he banned on the platform.

Misinformation

Trending Topics

In response to a Gizmodo article claiming Facebook employees were suppressing conservative outlets in its Trending Topics section, the company fired its human editors in 2016 and started relying on an algorithm to decide what was trending. Following this decision, multiple fake stories and conspiracy theories appeared in the trending section. The problems with Trending Topics continued through this year, with the section repeatedly featuring links to conspiracy theory websites and posts from figures known for pushing conspiracy theories. Facebook mercifully removed Trending Topics altogether in June 2018.

State-sponsored influence operations and propaganda

During the 2016 campaign, Russian operatives from the organization known as the Internet Research Agency (IRA) -- which is owned by a close associate of Russian President Vladimir Putin -- ran multiple pages that tried to exploit American polarization. In particular, the IRA ran ads meant to stoke tensions about the way American police treat Black people while using other pages to support the police; the organization also played both sides on immigration.

The IRA also stole identities of Americans and created fake profiles to populate its pages focusing on “social issues like race and religion.” It then used the pages to organize political rallies about those issues. During the campaign, some Facebook officials were aware of the Russian activity, yet did not take any action. In 2017, Facebook officials told the head of the company’s security team to tamp down details in a public report it had prepared about the extent of Russian activity on the platform. It was only after media reporting suggested Facebook had missed something that the company found out the extent of that activity. So far this year, Facebook has taken down accounts potentially associated with the IRA.

Facebook in August 2018 also removed a number of accounts that the company had linked to state media in Iran.

Foreign networks spreading fake news and getting ad revenue

Since at least 2015, Facebook has been plagued by fake news stories originating from Macedonia that are pushed on the platform to get clicks for ad revenue. Despite being aware of those activities during the 2016 campaign, Facebook took no action to stop them, even as locals in Macedonia “launched at least 140 US politics websites.” Since then, Facebook has claimed that it has taken steps to prevent this kind of activity. But it has continued as Macedonian accounts used the platform to spread fake stories about voter fraud in special elections in Alabama in 2017 and Pennsylvania in 2018.

Macedonians aren’t the only foreign spammers on Facebook: A large network of users posing as Native Americans has operated on the platform since at least 2016. The network exploited the Standing Rock protests to sell merchandise, and it has posted fake stories to get ad revenue. While much of this activity has come out of Kosovo, users from Serbia, Cambodia, Vietnam, Macedonia, and the Philippines are also involved.

Facebook has also regularly struggled to notice and respond to large foreign spammer networks that spread viral hoaxes on the platform:

  • The platform allowed a Kosovo-based network of pages and groups that had more than 100,000 followers combined to repeatedly push fake news. Facebook finally removed the network following multiple Media Matters reports.

  • The platform allowed a network of pages and groups centered in Saudi Arabia and Pakistan that had more than 60,000 followers to publish fake stories. It was taken down following a Media Matters report.

Facebook officials have also downplayed the key role Facebook groups play in spreading fake news, even though the platform has been used regularly by people in other countries to push fake stories.

Domestic disinformation campaigns

Until just recently, Facebook did not respond to networks of pages that regularly posted false stories and hoaxes and coordinated to amplify one another’s disinformation. Facebook finally took down some of these domestic disinformation networks on October 11, right before the 2018 midterms, noting they violated its spam and inauthentic behavior policies. But as Media Matters has documented, even this sweep missed some obvious targets.

Fake news thriving on Facebook

Facebook’s fake news problem can be illustrated well by one of the most successful fake news sites on the platform, YourNewsWire. Based in California, YourNewsWire has been one of the most popular fake news sites in the United States and has more than 800,000 followers through its Facebook pages. Time and time again, hoaxes the site has published have gone viral via Facebook. Some of these fake stories have been flat out dangerous and have been shared on Facebook hundreds of thousands of times. Facebook’s designated third-party fact-checkers debunked stories the site had published more than 80 times before Facebook appears to have finally taken action and penalized the site in its news feed, forcing it to respond to the fact-checkers’ repeated debunks.

Fake news has also been a problem in Facebook searches: Since at least 2017, fake stories about celebrities have popped up in Facebook searches, even after some had been debunked by Facebook’s designated third-party fact-checkers. Facebook has said in response that it is trying to improve its search results.

The problem has also extended to its ads. In May 2018, Facebook launched a public database of paid ads deemed “political” that ran on the platform. A review of the database found that the platform, in violation of its own policies, allowed ads featuring fake stories and conspiracy theories.

Withholding 2016 data from researchers

After the 2016 election, researchers repeatedly urged Facebook to give them access to its data to examine how misinformation spreads on the platform. In April, the platform announced it would launch an independent research commission that would have access to the data. However, the platform has refused to allow researchers to examine data from before 2017, meaning data from during the 2016 election is still inaccessible.

Misuse of Instant Articles

BuzzFeed reported earlier this year that fake news creators were pushing their content via Facebook’s Instant Articles, a feature that allows stories to load on the Facebook mobile app itself and which Facebook partly earns revenue from. In response, Facebook claimed it had “launched a comprehensive effort across all products to take on these scammers.” Yet the platform has continued to allow bad actors to use the feature for fake stories and conspiracy theories.

Problems with fact-checking

In response to the proliferation of fake news on the platform after the 2016 campaign, Facebook partnered with third-party fact-checkers to review posts flagged by users as possible fake news. Since then, some of these fact-checkers have criticized Facebook for a lack of transparency, particularly around its flagging process, for withholding data on the effectiveness of the debunks, and for failing to properly communicate with them.

In 2017, Facebook included the conservative Weekly Standard in its fact-checking program in the United States. The platform otherwise included only nonpartisan fact-checkers in its program, and since then it has not included any corresponding progressive outlet. This has resulted in the conservative outlet fact-checking a progressive outlet over a disputed headline -- a move that penalized the progressive outlet in the news feed and was harshly criticized.

Human and civil rights violations

Poor policies for monitoring white supremacy and hate

This year, leaked documents showed that while Facebook’s content policies forbid hate speech arising from white supremacy, so-called white nationalist and white separatist views were considered acceptable, a policy it is now reviewing after public scrutiny. A 2017 ProPublica investigation of Facebook’s content policies showed that white men were protected from hate speech but Black children were not. Neo-Nazis and white supremacists continue to profit by selling white supremacist clothing and products on Facebook and Instagram. Zuckerberg also defended the rights of Holocaust deniers to share their conspiracy theories on the platform.

After years of pressure from civil rights groups, Facebook finally agreed to submit to a civil rights audit, but it also announced the creation of a panel to review supposed bias against conservatives the same day, equating the civil rights of its users with partisan bickering by Republicans.

Contributing to violence in multiple countries

Facebook in recent years has actively expanded to developing countries. Since then, the platform has been used in Myanmar and Sri Lanka to encourage hate and violence against minorities, resulting in riots and killings. In Libya, militias have used the platform to sell weapons, find their opponents, and coordinate attacks. The United Nations has issued multiple reports criticizing Facebook’s role in Myanmar, suggesting the platform “contributed to the commission of atrocity crimes” in the country. Activists and officials in those countries also complained that Facebook had not employed moderators to monitor for hateful content, nor had it established clear points of contact for people in those countries to raise concerns.

Content sent via messaging app WhatsApp, which Facebook owns, has also caused problems. In India, hoaxes spreading through the platform have led to multiple lynchings, and the Indian government (whose supporters have themselves spread hoaxes) has pressured the company to clamp down on misinformation. In response, the platform has resorted to going on the road to perform skits to warn people about WhatsApp hoaxes. Other countries like Brazil and Mexico have also struggled with hoaxes spreading through WhatsApp, with the latter also seeing lynchings as a result.

Used by authoritarians to target opponents

Certain governments have also used Facebook as a means to target and punish their perceived opponents. In the Philippines, supporters of President Rodrigo Duterte, some of whom have been part of Duterte’s government, have spread fake content on the platform to harass and threaten his opponents. And in Cambodia, government officials have tried to exploit Facebook’s policies to target critics of Prime Minister Hun Sen.

Ads discrimination

Facebook’s ad policies have allowed people to exclude groups based on their race while creating a target audience for their ads, as ProPublica noted in 2016. The following year, it found that despite Facebook’s claims to stop such discrimination, housing ads on the platform continued to exclude target audiences by race, sex, disability, and other factors. In 2017, civil rights groups filed a lawsuit against the platform and the Department of Housing and Urban Development also filed a complaint. Another investigation the same year found that the platform could exclude viewers by age from seeing job ads, a potential violation of federal law. In 2018, the American Civil Liberties Union sued Facebook for allegedly allowing employers to exclude women from recruiting campaigns.

Helping anti-refugee campaign in swing states

In 2016, Facebook, along with Google, directly collaborated with an agency working with the far-right group Secure America Now to help target anti-Muslim ads -- which warned about Sharia law and attacked refugees -- to Facebook users in swing states.

Online harassment

Facebook has done little to protect people who become targets of online harassment campaigns, even though most of them are likely users of Facebook themselves. Time and again, Facebook has allowed itself to be weaponized for this purpose. Alex Jones and Infowars are perhaps the most famous examples of this problem. Even though Jones harassed Sandy Hook families for years, calling the school shooting a false flag, spreading hate speech, and engaging in other forms of bullying, Facebook continued to allow him free rein on its platform. The company finally banned Jones in August this year, after weeks of public pressure, including an open letter from two Sandy Hook parents, but only after Apple “stopped distributing five podcasts associated with Jones.”

Facebook has also allowed conspiracy theorists and far-right activists to harass the student survivors of the Parkland school shooting, most of whom were minors, on the platform. More recently, it allowed right-wing meme pages to run a disinformation campaign targeting professor Christine Blasey Ford, Deborah Ramirez, and other survivors who came forward during the confirmation process of now-Supreme Court Justice Brett Kavanaugh.

Still more screw-ups

Then there are the failures that defy category. In 2012, Facebook conducted psychological tests on nearly 700,000 users without their consent or knowledge. Zuckerberg had to apologize after giving a virtual reality tour of hurricane-struck Puerto Rico. Illegal opioid sales run rampant on Facebook, among other platforms, and the company has been unable to curb or stop them.

Even advertisers, the source of Facebook’s profit, haven’t been spared. Facebook’s latest political ad restrictions have created problems for local news outlets, LGBTQ groups, and undocumented immigrants seeking to buy ads. Facebook also had to admit to advertisers that it gave them inflated video-viewing metrics for the platform for over two years.

What Facebook owes consumers

As a college student, Zuckerberg offered the personal data of Facebook’s initial users at Harvard to his friend and joked that people were “dumb fucks” for trusting him with their personal information. One hopes that Zuckerberg’s respect for his customer base has improved since then, but Facebook’s many failures since suggest that it hasn’t.

BuzzFeed’s Charlie Warzel suggested that Facebook’s users simply don’t care enough about data privacy to stop using the platform. We have a slightly different theory: Users don’t leave Facebook because there’s no available alternative. Without a competitor, Facebook has no real incentive to fix what it’s broken.

The impact of Facebook’s failures compounds across society at large. As the founder of one of Facebook’s designated third-party fact-checkers told The New York Times, “Facebook broke democracy. Now they have to fix it.”
