Mark Zuckerberg | Media Matters for America

  • Anti-abortion extremists keep crying censorship to raise money

    Blog ››› JULIE TULBERT


    Sarah Wasko / Media Matters

    If there’s one thing Republicans love more than pretending they’re being victimized by liberal elites, it’s raising money off this inaccurate claim -- a tendency demonstrated clearly during recent congressional hearings on the activities of Facebook. During these hearings, Republican members of Congress elevated various overinflated right-wing grievances against social media companies (such as claims of anti-abortion censorship and anti-Christian bias) in order to pressure the platform into allowing greater promotion of inflammatory or inaccurate content. In particular, they seized on pro-Trump YouTubers Diamond and Silk, who have actively lied about Facebook censoring them and then used the attention to raise money. As close watchers of the anti-abortion movement know, this tactic of crying censorship to garner attention and raise funds is a favorite of anti-choice actors. Here are a few groups and figures that have recently employed this practice:

    Live Action

    Lila Rose, founder of the anti-abortion group Live Action, appeared on Fox News’ Tucker Carlson Tonight in June 2017 alleging that Twitter was censoring Live Action’s ads due to ideological bias. In reality, the content still appeared on Live Action’s Twitter page; it simply could not be promoted as an advertisement to other users -- not because of bias, but because it violated several of Twitter’s content policies regarding “hate content, sensitive topics, and violence.”

    Instead of altering the organization’s content to meet Twitter’s policies, Rose appeared on Tucker Carlson Tonight and used claims of supposed censorship to raise funds for Live Action. As Rose told Carlson, “We’re actually doing a campaign right now to get people to fund Live Action and to get out the information that Twitter is trying to block using other platforms -- using Facebook, using YouTube, using the blogosphere, obviously coming on here and talking with you.”

    Live Action continued to deploy this dishonest tactic even after Rose’s Fox News appearance. Following the June 26 segment, Live Action sent a fundraising email claiming that “Live Action is being suppressed” and asking supporters “to help us strengthen our efforts against the abortion industry.” Live Action’s censorship allegations also animated other right-wing media outlets. For example, on June 29, Christian Broadcasting Network published an article promoting Live Action’s claims about Twitter’s ad policy, which stated that “Live Action has launched a campaign to compensate for their losses due to Twitter’s censoring,” and directed readers to Live Action’s fundraising page. Rose and Live Action also pushed the narrative on Twitter, using the hashtag #DontDeleteMe -- even though all of Live Action’s tweets remained publicly available on the platform.

    The group also continued to use claims of censorship to raise funds in three October 2017 emails. In one email, Live Action stated that “Twitter is STILL banning our paid ads” and asked whether members would “give a gift to Live Action today so that we can expose more people to the truth.” In another email, Live Action claimed, “While we work to pressure Twitter to lift their ban on ads for pro-life content, we must double our efforts elsewhere” and asked people to “make a gift … so that we can reach more people with the truth.” Live Action made a similar plea in another email, asking people to “consider helping us reach more Americans with the truth about abortion through our other social media platforms like Facebook, YouTube, and Instagram.”

    Operation Rescue

    The extremist anti-abortion group Operation Rescue claimed in July 2017 that Google was censoring parts of its website after its page rankings decreased in the results of searches for “abortions in US” or “abortion statistics.” The group alleged that “Google’s search engine has manipulated search parameters to dramatically reduce exposure” to Operation Rescue's web pages, which contain abortion statistics purporting to show the "truth about abortion." Operation Rescue then sent a fundraising email asking for support to "launch a massive campaign to ensure our critical abortion research and pro-life content is available, and no longer pushed down by the pro-abortion radicals at Google." Prior to the complaint, Google announced a policy change regarding how sites containing misleading or false information would be ranked.

    Susan B. Anthony List

    In October 2017, Susan B. Anthony List (SBA List) claimed that one of the organization’s Twitter ads, targeting Virginia Attorney General Mark Herring in the 2017 election, was taken down by the platform, seemingly for inflammatory language. Citing this example and other anti-abortion censorship allegations, SBA List asked people to “make a gift today to get our pro-life message past Twitter’s censorship” and to “fight back against Twitter’s censorship.”

    Following Facebook CEO Mark Zuckerberg’s testimony before Congress last week, SBA List reprised this tactic and emailed supporters to detail instances where the group claimed to have been censored by social media companies. SBA List then directed people to “please make a generous donation of $250 to help win the fight against pro-abortion Silicon Valley elites.”

    Anti-abortion outlets

    Not to be left out of the conversation about supposed anti-abortion censorship, the anti-choice news outlet Life News also sent an email after Zuckerberg’s testimony stating, “Social media companies like Facebook, Twitter, Google and YouTube are increasingly censoring pro-life voices,” and asking readers to sign a petition and to “make a donation today … so we can continue to stand up to these social media giants [and] their censorship.”

    Another anti-abortion outlet, LifeSite News, also asked for donations in light of supposed censorship by social media companies. The site posted in March 2018 about the “surprising and disturbing reason why LifeSite’s Spring campaign is struggling.” The reason, according to LifeSite News, “is an almost declared war by the globalist social media giants – Facebook, Google, Twitter and YouTube against websites, blogs and individuals who promote conservative views.” LifeSite argued that its inability to raise funds was due to censorship from Facebook and Google and pleaded with readers, writing, “To those of you who were not blocked from reading this letter, we are depending on you much more than normal to help us to reach our goal.” Unsurprisingly, the outlet provided zero evidence of the censorship it was allegedly experiencing.

    Roe v. Wade -- the movie

    The producer of an anti-abortion film about Roe v. Wade claimed that Facebook temporarily blocked his ability to post an Indiegogo crowdfunding page for the production of the film. On the Indiegogo page, the film is described as “the real untold story of how people lied; how the media lied; and how the courts were manipulated to pass a law that has since killed over 60 million Americans.” According to the film’s crowdfunding page, the film needs “support now more than ever. Facebook has banned us from inviting friends to ‘Like’ our page and from ‘Sharing’ our PAID ads.”

    Rep. Marsha Blackburn

    In October 2017, Rep. Marsha Blackburn (R-TN) announced she was running for a Senate seat by tweeting out a campaign video that included a mention of her time as chair of the House Select Investigative Panel on Infant Lives -- a sham investigation based on deceptive and disproven claims by the anti-abortion group Center for Medical Progress. The video included inflammatory language, such as the claim that Blackburn had “stopped the sale of baby body parts.” After Twitter temporarily blocked her from running the tweet as a paid ad due to its inflammatory language, Blackburn claimed censorship and made the rounds on Fox News to push this story. Blackburn also used the opportunity to tweet that the “conservative revolution won’t be stopped by @Twitter and the liberal elite,” urging people to “donate to my Senate campaign today.”

    Anti-abortion groups and outlets have found a great deal of success in crying censorship -- a lesson that wider conservative media outlets and figures appear to be taking to heart. As a recently published report from the right-wing Media Research Center (a report that was readily promoted by outlets like Life News) melodramatically framed the issue: “The question facing the conservative movement is one of survival. Can it survive online if the tech companies no longer allow conservative speech and speakers? And, if that happens, can the movement survive at all?”

  • Lack of diversity is at the core of social media's harassment problem

    Right-wing figures and far-right trolls mocked questions to Facebook's Zuckerberg about diversity. But it's crucial to understanding how platforms enable harassment.

    Blog ››› CRISTINA LÓPEZ G.



    This week, Facebook CEO Mark Zuckerberg was questioned on racial diversity within his company as he appeared before House and Senate committees to address Facebook’s handling of user data. Facebook -- and more generally, the tech industry -- has often been criticized for its lack of diversity, an issue that, as members of Congress pointed out, can hinder the platform’s ability to respond both to fake news and to discrimination against African-American users.

    Rep. Yvette Clarke (D-NY) discussed the relationship between Facebook’s fake news problem and the lack of diversity within the company itself.

    Sen. Cory Booker (D-NJ) asked Zuckerberg about racial discrimination enabled by Facebook and pointed to a “growing distrust ... about Facebook’s sense of urgency” in addressing such discrimination.

    Rep. G.K. Butterfield (D-NC) questioned Zuckerberg on Facebook’s lack of diversity:

    REP. G.K. BUTTERFIELD (D-NC): You and your team certainly know how I feel about racial diversity in corporate America, and [Facebook Chief Operating Officer] Sheryl Sandberg and I talk about that all of the time. Let me ask you this, and the Congressional Black Caucus has been very focused on holding your industry accountable -- not just Facebook, your industry -- accountable for increasing African-American inclusion at all levels of the industry. And I know you have a number of diversity initiatives. In 2017, you’ve increased your black representation from 2 to 3 percent. While this is a small increase, it's better than none. And this does not nearly meet the definition of building a racially diverse community. CEO leadership -- and I have found this to be absolutely true -- CEO leadership on issues of diversity is the only way that the technology industry will change. So, will you commit, sir, to convene, personally convene a meeting of CEOs in your sectors -- many of them, all of them perhaps, are your friends -- and to do this very quickly to develop a strategy to increase racial diversity in the technology industry?

    MARK ZUCKERBERG: Congressman, I think that that's a good idea and we should follow up on it. From the conversations that I have with my fellow leaders in the tech industry, I know that this is something that we all understand, that the whole industry is behind on, and Facebook is certainly a big part of that issue. We care about this not just from the justice angle, but because we know that having diverse viewpoints is what will help us serve our community better, which is ultimately what we're here to do. And I think we know that the industry is behind on this.

    Right-wing media figures and far-right trolls scoffed at the idea of questioning the tech industry’s lack of diversity

    Right-wing figures and far-right trolls scoffed at these questions on different social media platforms -- including Gab, an alternative to Twitter that has been called a "haven for white nationalists" and has on occasion served as a platform to coordinate online harassment -- dismissing them as “insane” and describing efforts to increase racial diversity as discrimination “against white people.” 

    But experts have criticized Facebook and other platforms for the lack of racial diversity within their ranks and explained that diversity is at the core of social media’s harassment problems

    Members of Congress were not alone in their concern that Facebook’s racial homogeneity might diminish its capacity to create a safe environment for every user and protect user data. Bärí A. Williams, formerly a senior commercial attorney at Facebook, explained that racial diversity specifically would improve the platform’s ability to respond to data breaches, “fill blind spots,” and improve “cultural competency” through “lived experience.”

    While Zuckerberg announced Facebook’s intention to rely on artificial intelligence (AI) to address many of the social network’s shortcomings, Molly Wood, host of the Marketplace Tech radio show, pointed out that AI is not a substitute for a racially inclusive workforce.

    A lack of racial diversity in companies’ ranks is at the core of the harassment problem on their social media platforms, as online harassment disproportionately targets people of color. According to Pew, “harassment is often focused on personal or physical characteristics; political views, gender, physical appearance and race are among the most common,” with African-Americans experiencing more harassment because of their ethnicity than other groups, and women experiencing more harassment than men:

    Some 14% of U.S. adults say they have ever been harassed online specifically because of their political views, while roughly one-in-ten have been targeted due to their physical appearance (9%), race (8%) or gender (8%). Somewhat smaller shares have been targeted for other reasons, such as their religion (5%) or sexual orientation (3%).

    Certain groups are more likely than others to experience this sort of trait-based harassment. For instance, one-in-four blacks say they have been targeted with harassment online because of their race or ethnicity, as have one-in-ten Hispanics. The share among whites is lower (3%). Similarly, women are about twice as likely as men to say they have been targeted as a result of their gender (11% vs. 5%).

    During a conversation with Wired about how Silicon Valley can address harassment in social media platforms, Black Lives Matter’s Chinyere Tutashinda talked about her experiences online as a black social activist, confirming Pew’s findings by remarking on the ways that people of color are targeted disproportionately online:

    CHINYERE TUTASHINDA: I work within the social justice movement, and there’s no one, especially in the black community, who doesn’t expect harassment online. It’s just replicating what happens in the real world, right? How do we make other people know and care?

    [...]

    There is a lack of diversity in who’s creating platforms and tools. Too often it’s not about people, it’s about how to take this tool and make the most money off it. As long as people are using it, it doesn’t matter how they’re using it. There’s still profit to earn from it. So until those cultures really shift in the companies themselves, it’s really difficult to be able to have structures that are combating harassment.

    [...]

    Diversity plays a huge role in shifting the culture of organizations and companies. Outside of that, being able to broaden the story helps. There has been a lot of media on cyberbullying, for example, and how horrible it is for young people. And now there are whole curricula in elementary and high schools. There’s been a huge campaign around it, and the culture is shifting. The same needs to happen when it comes to harassment. Not just about young people but about the ways in which people of color are treated.

    Experts have weighed in on the specific implications of social media platforms lacking racial diversity among their ranks. As Alice Marwick, a fellow for the Data & Society Research Institute, pointed out on Quartz, “the people who build social technologies are primarily white and Asian men” and because “white, male technologists don’t feel vulnerable to harassment” in the same way that minorities or people of color do, they often fail to incorporate protections against online abuse in their digital designs.

    To illustrate Marwick’s point, take Twitter’s mute button, a feature that can filter unwanted content from users' timelines, making it easier for users to avoid abusive content directed at them. As Leslie Miley -- a black former engineering manager at Twitter who left the company specifically because of how it was addressing diversity issues -- told The Nation, the feature wasn’t perfected until a diverse group of people worked together to fix it:

    [Leslie] Miley was a part of a diverse team at Twitter that he says proves his point. His first project as the engineering manager was to fix Twitter’s “mute” option, a feature that allows users to filter from their timelines unwanted tweets, such as the kind of harassment and personal attacks that many prominent women have experienced on the platform.

    “Twitter released a version in the past that did not go over well. They were so badly received by critics and the public that they had to be rolled back. No one wanted to touch the project,” says Miley. So he pulled together a team from across the organization, including women and people of color. “Who better to build the feature than people who often experience abuse online?” he asks. The result was a new “mute” option that was roundly praised as a major step by Twitter to address bullying and abuse.

    The blind spots caused by racial homogeneity might also delay platforms’ responses to rampant harassment. As documented by Model View Culture magazine, far-right troll and white nationalist sympathizer Milo Yiannopoulos was allowed to rampantly harass users for years on Twitter before getting permanently banned for his “sustained racist and sexist” harassment of African-American comedian Leslie Jones. As Model View Culture points out, racial diversity could be extremely helpful in addressing the challenge social media platforms face in content moderation:

    From start to finish of the moderation pipeline, the lack of input from people who have real, lived experience with dealing with these issues shows. Policy creators likely aren’t aware of the many, subtle ways that oppressive groups use the vague wording of the TOS to silence marginalized voices. Not having a background in dealing with that sort of harassment, they simply don’t have the tools to identify these issues before they arise.

    The simple solution is adding diversity to staff. This means more than just one or two people from marginalized groups; the representation that would need to be present to make a real change is far larger than what exists in the population. Diversity needs to be closer to 50% of the staff in charge of policy creation and moderation to ensure that they are actually given equal time at the table and their voices aren’t overshadowed by the overwhelming majority. Diversity and context must also be considered in outsourcing moderation. The end moderation team, when it comes to social issues specific to location, context and identity, needs to have the background and lived experience to process those reports.

    To get better, platforms must also address how user-generated reports are often weaponized against people of color. Although there’s nothing that can be done about the sheer numbers of majority-White users on platforms, better, clearer policy that helps them question their own bias would likely stop many reports from being generated in the first place. It may also help to implement more controls that would stop targeted mass-reporting of pages and communities by and for marginalized people.

    Ultimately, acknowledging these issues in the moderation pipeline is the first step to correcting them. Social media platforms must step away from the idea that they are inherently “fair,” and accept that their idea of “fairness” in interaction is skewed simply by virtue of being born of a culture steeped in White Supremacy and patriarchy.

  • The tragedy and lost opportunity of Zuckerberg’s testimony to Congress

    Congress didn’t do nearly enough to hold Mark Zuckerberg accountable

    Blog ››› MELISSA RYAN



    Facebook CEO Mark Zuckerberg came to Washington to testify before Congress over two days of hearings. Expectations were low -- to the point of infantilization. Unsurprisingly, Zuckerberg was able to clear the extremely low bar America sets for white men in business. He showed up in a suit and tie, didn’t say anything too embarrassing, and, for the most part, the members of Congress questioning him made more news than his testimony did. Facebook’s public relations team probably considers the hearings a win. The stock market certainly did.

    Facebook’s users, however, lost bigly. Congress failed to hold Zuckerberg accountable. The Senate hearing, held jointly by the judiciary and commerce committees, devolved into Zuckerberg explaining how the Internet worked to the poorly informed senators. The House commerce committee members were more up to speed, but Republican members -- following Ted Cruz’s lead from the day before -- spent most of their time and energy grilling Zuckerberg about nonexistent censorship of right-wing content. If Facebook’s leaders are ill-prepared to handle the challenges they’re facing, Congress appears even less up to the challenge.

    The tech press had a field day on Twitter feigning outrage at Congress for its lack of tech savvy, but Congress’ lack of interest in holding Facebook accountable is far more problematic. As David Dayen noted in The Intercept:

    This willingness, against interest and impulse, to do the job of a policymaker was sorely absent throughout Tuesday’s testimony, which involved both the judiciary and commerce committees, as well as nearly half the members of the Senate. Far too many senators framed the problems with Facebook — almost unilaterally agreed, on both sides of the aisle, to be pernicious and requiring some action — as something for Zuckerberg to fix, and then tell Congress about later.

    Sen. Lindsey Graham (R-SC) was the rare exception. He was one of few members of Congress comfortable with calling Facebook a monopoly.

    Facebook’s issues with civil rights were barely covered, with a few notable exceptions. Sen. Mazie Hirono (D-HI) asked Zuckerberg if Facebook would ever assist the government in vetting immigrants (it would not in most cases), and Sen. Cory Booker (D-NJ) asked Zuckerberg to protect Black Lives Matter activists from improper surveillance (he agreed). Reps. Bobby Rush (D-IL) and G.K. Butterfield (D-NC) asked similar questions during the House hearing, and Rep. Susan Brooks (R-IN) asked about Facebook as a recruitment tool for ISIS. But not one question was asked about Facebook’s role as a recruitment tool for white supremacists and neo-Nazis.

    While the House hearing featured better questions, the majority of Republican members nevertheless managed to turn it into a circus. They repeatedly asked Zuckerberg about the supposed censorship of pro-Trump social media stars Diamond and Silk (which has since been debunked) and suggested that the biggest issue Facebook faces is the censorship of right-wing content. The concern trolling over Diamond and Silk came between questions exposing deep societal problems including opioid sales on the social media platform that are literally responsible for overdose deaths and Facebook’s role in the Rohingya genocide in Myanmar.

    The Diamond and Silk obsession derives from another one of Facebook’s societal problems: the prominence of propaganda, conspiracy theories, and misinformation on the platform. Multiple members who asked Zuckerberg about Diamond and Silk said they’d heard directly from their constituents about the matter, which they almost certainly did. Pro-Trump media lost their collective minds when the news broke. The facts are that the supposed censorship of Diamond and Silk didn’t actually happen and that data does not back up the claim that right-wing media are being censored on Facebook. If anything, the platform is a cesspool of far-right activity.

    Not one member of Congress asked Zuckerberg about Facebook’s role in the spread of conspiracy theories and propaganda. Republicans were wasting valuable time demanding answers over a nonexistent conspiracy, and no one at all felt compelled to ask Zuckerberg how the hell we got here. It is extremely telling that while this was going on, Diamond and Silk made an appearance on Alex Jones’ Infowars broadcast, another conspiracy theory outlet that owes its popularity in part to Facebook.

    If social media filter bubbles have split Americans into different realities, it would seem that Congress is a victim of the same problem. Research shows that the right-wing’s filter bubble influences the left’s in a way that isn’t reciprocated. Right-wing content isn’t actually being censored on Facebook. The newly minted Diamond and Silk Caucus (or the Alex Jones Caucus) in Congress was demanding that even more right-wing content show up in our feeds, sending the right-wing base even deeper into their bubble. It’s the same schtick that the same people have pulled for years with the political media.

    While many in Congress have complained about far-right conspiracy theories becoming a part of mainstream American society, it’s a shame that they didn’t hold accountable the one man who more than anyone created this reality.

  • Facebook’s latest announcements serve as a reminder that fixing the platform is a global issue

    Effective consumer pushback must be global as well.

    Blog ››› MELISSA RYAN



    A few huge updates from Facebook this week are worth paying attention to.

    First, the company announced the removal of “70 Facebook and 65 Instagram accounts — as well as 138 Facebook Pages — that were controlled by the Russia-based Internet Research Agency (IRA).” Facebook also removed any ads associated with the IRA pages. In an unusual bit of transparency, the company provided stats of what was deleted and who those pages were targeting:

    Of the Pages that had content, the vast majority of them (95%) were in Russian — targeted either at people living in Russia or Russian-speakers around the world including from neighboring countries like Azerbaijan, Uzbekistan and Ukraine.

    Facebook also provided a few samples from the pages as well as ad samples, none of which were written in English. “The IRA has consistently used inauthentic accounts to deceive and manipulate people,” the announcement said. “It’s why we remove every account we find that is linked to the organization — whether linked to activity in the US, Russia or elsewhere.”

    CEO Mark Zuckerberg reiterated the IRA’s global reach in a post on his personal page, saying, “Most of our actions against the IRA to date have been to prevent them from interfering in foreign elections. This update is about taking down their pages targeting people living in Russia. This Russian agency has repeatedly acted deceptively and tried to manipulate people in the US, Europe, and Russia -- and we don't want them on Facebook anywhere in the world.”

    Facebook also announced updates to its terms of service and data policy that the company claims will be easier for users to understand. “It’s important to show people in black and white how our products work – it’s one of the ways people can make informed decisions about their privacy,” the announcement reads. “So we’re proposing updates to our terms of service that include our commitments to everyone using Facebook. We explain the services we offer in language that’s easier to read. We’re also updating our data policy to better spell out what data we collect and how we use it in Facebook, Instagram, Messenger and other products.”

    Finally, Facebook announced major changes to how third parties can interact with users and collect data. The company acknowledged that the number of users whose data was being illegally used by Cambridge Analytica -- reported to be 50 million -- was actually 87 million. Facebook promised, “Overall, we believe these changes will better protect people’s information while still enabling developers to create useful experiences. We know we have more work to do — and we’ll keep you updated as we make more changes.”

    Facebook is finally responding to consumer pressure in a systematic way. These changes will curb the amount of propaganda users are exposed to, limit how third parties can interact with users on the platform, and make the rules of the road clearer for everyone.

    It’s important to note that all of these changes appear to be global, not limited to specific countries, which is good because the problems Facebook has caused are also global. Facebook has been weaponized by hostile actors seeking to manipulate users in dozens of countries. Facebook employees have admitted, on the company's Hard Questions Blog, that Facebook as a platform can be harmful to democracy. Facebook’s ability to reach people across the world is unprecedented in scale, and because of this, there’s no institution or government with the ability to regulate Facebook and protect the totality of its users.

    We have Facebook on the defensive, but it’s going to change only as much as it’s pressured to. Tech lawyer and privacy advocate Tiffany Li, in an op-ed for NBC News, has identified three groups of stakeholders Facebook needs to appease in order to save the company: “shareholders, policymakers, and of course, consumers.” I like her categorization but would add that Facebook needs to appease these three groups in countries across the globe, not just in the U.S., U.K., and European Union nations.

    This isn’t a problem that can be solved overnight, something Zuckerberg acknowledged when he spoke with Vox’s Ezra Klein this week, saying, “I think we will dig through this hole, but it will take a few years. I wish I could solve all these issues in three months or six months, but I just think the reality is that solving some of these questions is just going to take a longer period of time.” Generally, I’m a Zuckerberg critic, but I appreciate this comment and agree we’re in for a turbulent couple of years coming to grips with everything.

    Here’s the good news. Thanks to social media (including Facebook!) we’re more connected than ever before. Facebook’s users have an opportunity to have a global conversation about what changes are needed and take any activist campaigns or direct actions global. We can pressure multiple governments, work with civil society groups in multiple countries, and create a global consumer movement.

    Facebook still has a long way to go, and its users have 87 million (or 2 billion) reasons to be upset. The company has a lot to do before it can earn back the trust of its consumers across the globe. That said, I appreciate that Facebook is finally taking some decisive action, even as it acknowledges that curbing abuse of all kinds on the platform will be an ongoing battle. It’s a welcome correction to the company’s PR apology tour, adding action to words that would otherwise ring hollow. To be clear: Facebook was forced to take these actions thanks to global activism and consumer pressure. We have the momentum to force needed systemic changes. Let’s keep at it.

    Media Matters is calling on Facebook to ban any entity, be it the Trump campaign or any other, that is using a copy of Cambridge Analytica's data or any other data set acquired by cheating.

    Click here and join our call to action.

  • Mark Zuckerberg’s apology PR tour and why now is our best opportunity yet to push for change

    Facebook to everyone: our bad

    Blog ››› ››› MELISSA RYAN


    Sarah Wasko / Media Matters

    Facebook CEO Mark Zuckerberg is sorry. Specifically, as he told CNN, he’s “really sorry that this happened.”

    “I think we let the community down, and I feel really bad and I’m sorry about that,” he told Recode’s Kara Swisher. Facebook Chief Operating Officer Sheryl Sandberg, appearing on CNBC, also offered an apology: “I am so sorry that we let so many people down.”

    Zuckerberg and Facebook have a lot to apologize for. In addition to the numerous other problems plaguing Facebook under Zuckerberg’s watch, he allowed Cambridge Analytica to obtain and exploit the Facebook data of 50 million users in multiple countries. When the platform discovered the stolen data, it took the firm’s word that the data had been deleted (it hadn’t). Facebook made no attempts to independently verify that the data was no longer being used, nor did it notify users whose data was exploited. Even after the news broke, it took Zuckerberg and Sandberg six days to face the public and give interviews.

    In addition to offering their apologies, both Sandberg and Zuckerberg acknowledged that trust between Facebook and users had been breached. Sandberg said on CNBC, “This is about trust, and earning the trust of the people who use our service is the most important thing we do. And we are very committed to earning it.”

    What surprised me most, however, was their acknowledgment that regulation was coming and that perhaps Facebook needs to be checked. Zuckerberg in his CNN interview suggested that regulation of tech companies like Facebook might be necessary. Sandberg went even further: “It's not a question of if regulation, it's a question of what type. ... We're open to regulation. We work with lawmakers all over the world." At first this read to me like another attempt at passing the buck of responsibility onto another entity, and while that might still be partially true, there’s more to it. Facebook is responding to public outrage, including the growing calls for regulation. Facebook executives have concluded they’re not getting out of this mess without regulation, and their best path forward is to try to get the best deal they can get, given the circumstances.

    Were Zuckerberg and Sandberg forthcoming enough? No. I don’t think anyone was convinced that Facebook is telling us everything it knows, nor did the company present much of a plan for protecting consumers moving forward. But consumers have the momentum. Facebook will change only as much as its users demand. The fact that Facebook’s leadership is on a full-blown apology tour means that public pressure is starting to work. After months of bad press and user backlash, Facebook is finally acknowledging that some things need to change.

    Facebook failed to protect users from a consulting firm so shady that it bragged to a potential client about entrapping candidates for office, potentially breaking U.S. election laws to help Donald Trump win in 2016, and avoiding congressional investigations. Consumers are outraged, many to the point of quitting Facebook entirely. Cambridge Analytica probably isn’t the only problematic company that Facebook allowed to exploit user data, but from an organizing perspective, we couldn’t ask for a better villain. After months of outrage, Facebook is on the defensive. This is the best opportunity we’ll have to force it and other tech platforms to make systemic change.

    Here’s a good place to start: Media Matters is calling on Facebook to ban any entity, be it the Trump campaign or any other, that is using a copy of Cambridge Analytica's data or any other data set acquired by cheating.

    Click here and join our call to action.

  • For Zuck's sake

    Blog ››› ››› MELISSA RYAN

    Mark Zuckerberg has been sharing a lot this month. First, he posted that his “personal challenge” for 2018 is to fix the glaring and obvious problems for which he’s taken so much heat. Last week, he announced that he had directed Facebook’s product teams to change their focus from “helping you find relevant content to helping you have more meaningful social interactions.” Zuckerberg promised users that they’d see less content from “businesses, brands and media” and more content from “your friends, family and groups.” On Friday, Zuckerberg shared another major change: Facebook would improve the news that does get shared by crowdsourcing what news sources were and weren’t trustworthy via user surveys.

    The first change, a return to “meaningful interaction,” is one I can get behind. I’m all for anything that discourages fake news sites from monetizing on Facebook. I’ve long suspected that part of why these sites took hold in the first place was a lack of meaningful content available on our feeds. Less sponsored content and more pictures and videos from family and friends will greatly improve my Facebook experience. I suspect I’m not the only one.

    I’m also hopeful this change will move digital advocacy away from broadcasting and back to organizing. Given how crucial Facebook groups have become to #TheResistance, I’m glad to hear they’ll be emphasized. I want to see more groups like Pantsuit Nation and the many local Indivisible groups that have formed in the last year. (Media outlets, fear not: Vox has also been building Facebook groups in addition to its pages.) Digital ads and acquisition shouldn’t be the only tools digital organizers use. Increased engagement should involve actually engaging folks rather than simply broadcasting to them.

    The second change, user surveys to determine what news people trust, is maddening. If you were going to design a system that could be easily gamed, this is how you’d do it. “Freeping” online polls and surveys is a longstanding tactic of the far right online, going back nearly 20 years. It’s in their online DNA, and they have groups of activists at the ready who live for this activity. Facebook isn’t handing authority over to its broader community but to an engaged group of users with an agenda. Even if the freeping weren’t inevitable, it’s pretty well established that there’s already no common ground when it comes to which news sources people with different political viewpoints trust.

    The crux of the problem is that Facebook desperately wants to be seen as a neutral platform while its users want it to keep inaccurate information off of Facebook. In his New Year’s post, Zuckerberg emphasized he believes technology “can be a decentralizing force that puts more power in people’s hands” while acknowledging that the reality might be the opposite. There’s a tension between his core beliefs and what Facebook users currently expect from the company. My sense is that’s a driving force behind attempting to pass the buck back to us.

    Facebook will only go as far as its users pressure it, especially in the U.S., where regulation from the government will be minimal. If we want Facebook to take responsibility, we have to continually hold it accountable when things go wrong or when proposed solutions don’t go far enough. Mark Zuckerberg’s personal challenge is to fix what’s broken. Ours is to keep pressing him in the right direction.

    This piece was originally published as part of Melissa Ryan’s Ctrl Alt Right Delete newsletter -- subscribe here.

  • Facebook’s news feed changes could elevate fake news while harming legitimate news outlets

    Blog ››› ››› ALEX KAPLAN


    Sarah Wasko / Media Matters

    New changes announced by Facebook to elevate content on its users’ news feed that is shared by friends and family over that shared by news publishers could wind up exacerbating Facebook’s fake news problem.

    Over the past year, Facebook has struggled to combat the spread of fake news and misinformation on its platform. On January 11, the social media giant announced that it would change the algorithm of its news feed so that it would “prioritize what [users’] friends and family share and comment on,” according to The New York Times. Facebook CEO Mark Zuckerberg, who was named Media Matters’ 2017 Misinformer of the Year, told the Times that the shift was “intended to maximize the amount of content with ‘meaningful interaction’ that people consume on Facebook.” Additionally, content from news publishers and brands will be given less exposure on the news feed. Facebook is also weighing adding some kind of authority component to its news feed algorithm so that outlets considered more credible will get more prominence in the news feed.

    In the past year or so, Facebook has attempted to employ some measures in its effort to fight fake news, including its third-party fact-checking initiative. Though these efforts have so far been largely ineffective, the new changes threaten to undercut them even further.

    At least one study has shown that Facebook users are influenced by their friends and family members’ actions and reactions on the site. Last year, New York magazine reported on a study that found that “people who see an article from a trusted sharer, but one written by an unknown media source, have much more trust in the information than people who see the same article from a reputable media source shared by a person they do not trust.” With Facebook’s new changes, as the Times noted, “If a relative or friend posts a link with an inaccurate news article that is widely commented on, that post will be prominently displayed.”

    An additional point of concern is how this will exacerbate the problem of conservative misinformation specifically. Up until now, misinformation and fake news on social media have seemingly come from and been spread more by conservatives than liberals. And according to research conducted by Media Matters, right-wing communities on Facebook are much bigger than left-wing communities and mainstream distribution networks, and right-wing engagement is also higher than in left-wing circles. These changes could thus mean that peer-to-peer promotion of right-wing misinformation will more likely push fake news toward the top of people’s news feeds.

    The changes will also likely cause real harm to legitimate news outlets by burying their stories. The head of Facebook’s news feed admitted that some pages “may see their reach, video watch time and referral traffic decrease.” Smaller, less-known outlets, especially those that do not produce content on the platform (such as live videos), could face major financial losses from the move. Facebook’s head of news partnerships, Campbell Brown, also wrote to some major publishers that the changes would cause people to see less content from “publishers, brands, and celebrities,” but that “news stories shared between friends will not be impacted,” which could suggest that fake news might get promoted over content directly from legitimate news outlets.

    It’s conceivable that adding some kind of authority component that ensures “articles from more credible outlets have a better chance of virality” could help lessen this possibility. Such a move would be a welcome development, and Media Matters has recommended that Facebook include it in its algorithm. But the possible criteria that Facebook is currently considering to determine which publishers are credible -- such as “public polling about news outlets” and “whether readers are willing to pay for news from particular publishers” -- are vague and could be problematic to enforce. And The Wall Street Journal noted that Facebook was still undecided about adding the authority component; without it, the possible negative impact of these news feed changes could be even worse.

    It is possible that Facebook’s move to include “Related Articles” next to the posts that its fact-checking partners have flagged could override people’s tendency to believe what their peers share. And perhaps the algorithm that tries to stop the spread of stories the fact-checkers have flagged may decrease the spread of fake news. But it’s also possible that these new moves will undermine those initiatives, and that Zuckerberg’s aim to make users happier could also make them more misinformed.

  • Angelo Carusone explains why Facebook’s Mark Zuckerberg is the 2017 Misinformer of the Year

    Blog ››› ››› MEDIA MATTERS STAFF

    Today, Media Matters for America named Facebook CEO Mark Zuckerberg as its 2017 Misinformer of the Year. The designation is presented annually to the media figure, news outlet, or organization that is the most influential or prolific purveyor of misinformation.

    Media Matters President Angelo Carusone explained why Mark Zuckerberg is the Misinformer of the Year:

    “We selected Mark Zuckerberg as the Misinformer of the Year because Facebook's actions in 2017 have been more of a public relations campaign than a deeper systemic approach to address the underlying causes of the proliferation of fake news and disinformation.

    I know that Facebook has the talent and knows how to implement some meaningful countermeasures. Instead of utilizing that talent, Zuckerberg has spent too much time downplaying the crisis and repeating his mistakes from 2016, like continually caving to right-wing pressure. There are some very basic things that Facebook can do to make meaningful dents in this problem -- and my hope for 2018 is that Mark Zuckerberg lets his team take those steps and more.”

    Here’s more about why Mark Zuckerberg earned the Misinformer of the Year designation:

    • Not only did Mark Zuckerberg allow Facebook to be used to mislead, misinform and suppress voters during the 2016 election, but he took active steps in an attempt to assuage right-wing critics that actually made the problem worse. He subsequently downplayed concerns about Facebook’s clear impact on the 2016 election. Instead of learning from those past mistakes, Zuckerberg has repeated them, continuing to act in a way designed to inoculate against or mollify right-wing critics, despite evidence that 126 million Facebook users saw election-related propaganda in 2016.

    • Mark Zuckerberg’s inaction and half-measures illustrate either his lack of recognition of the scope and scale of the crisis or his refusal to accept responsibility. After intense public pressure made Facebook’s misinformation problem impossible to ignore, Zuckerberg announced a series of toothless policy changes that are more public relations ploys than real meaningful solutions. Notably, little effort has been made to improve its news feed algorithm so that Facebook is not turbocharging disreputable or extreme content simply because it has high engagement, or to grapple with the scores of Facebook-verified disinformation and fake news pages masquerading as news sites.

    • Facebook’s third-party fact-checking system doesn't stop fake news from going viral. Fact-checkers who have partnered with Facebook have voiced concerns about the company’s transparency and the effectiveness of its efforts, as Zuckerberg has largely refused to release Facebook’s data for independent review.

    • In yet another attempt to mollify right-wing critics, Zuckerberg’s Facebook partnered with disreputable right-wing media outlet The Weekly Standard, thus allowing an outlet that baselessly criticizes fact-checkers and undermines confidence in journalism into its fact-checking program.

    Media Matters is the nation’s premier media watchdog. Following the 2016 presidential election, Media Matters acknowledged that in order to fully carry out its mission, its scope of work must incorporate an analysis of the way digital platforms influenced the larger problem of misinformation and disinformation in the media landscape.

    Media Matters’ selection of Mark Zuckerberg as Misinformer of the Year reflects Zuckerberg’s failure to take seriously Facebook’s role as a major source of news and information, his failure to address Facebook’s role in spreading fake news, and his repetition of past mistakes that are actually making the problem worse.

    Previous Misinformers of the Year include Sean Hannity, Rush Limbaugh, Rupert Murdoch and News Corp., and Glenn Beck.

  • Misinformer of the Year: Facebook CEO Mark Zuckerberg

    Facebook's "personalized newspaper" became a global clearinghouse for misinformation

    Blog ››› ››› MATT GERTZ


    Sarah Wasko / Media Matters

    In late August, as Hurricane Harvey crashed through the Texas coastline, millions of Americans searching for news on the crisis were instead exposed to a toxic slurry of fabrications. Fake news articles with headlines like “Black Lives Matter Thugs Blocking Emergency Crews From Reaching Hurricane Victims” and “Hurricane Victims Storm And Occupy Texas Mosque Who Refused To Help Christians” went viral on Facebook, spreading disinformation that encouraged readers to think the worst about their fellow citizens.

    When Facebook set up a crisis response page a month later -- following a mass shooting in Las Vegas, Nevada, that killed dozens and injured hundreds -- the company intended to provide a platform for users in the area to confirm they were safe and to help people across the country learn how to support the survivors and keep up to date on the events as they unfolded. But the page soon became a clearinghouse for hyperpartisan and fake news articles, including one which baselessly described the shooter as a “Trump-hating Rachel Maddow fan.”

    In Myanmar, an ethnic cleansing of the Muslim Rohingya minority population was aided by misinformation on Facebook, the only source of news for many people in the country. In India, The Washington Post reported, “false news stories have become a part of everyday life, exacerbating weather crises, increasing violence between castes and religions, and even affecting matters of public health.” In Indonesia, disinformation spread by social media stoked ethnic tensions and even triggered a riot in the capital of Jakarta.

    Throughout the year, countries from Kenya to Canada either fell prey to fake news efforts to influence their elections, or took steps they hoped would quell the sort of disinformation campaign that infected the 2016 U.S. presidential race.

    Last December, Media Matters dubbed the fake news infrastructure 2016’s Misinformer of the Year, our annual award for the media figure, news outlet, or organization which stands out for promoting conservative lies and smears in the U.S. media. We warned that the unique dangers to the information ecosystem meant “merely calling out the lies” would not suffice, and that “the objective now is to protect people from the lies.” We joined numerous experts and journalists in pointing to weaknesses in our nation’s information ecosystem, exposed by the presidential election and fueled by key decisions made by leading social media platforms.

    Twelve months later, too little has changed in the United States, and fake news has infected democracies around the world. Facebook has been central to the spread of disinformation, stalling and obfuscating rather than taking responsibility for its outsized impact.

    Media Matters is recognizing Facebook CEO Mark Zuckerberg as 2017’s Misinformer of the Year.

    He narrowly edges Larry Page, whose leadership of Google has produced similar failures in reining in misinformation. Other past recipients include the Center for Medical Progress (2015), George Will (2014), CBS News (2013), Rush Limbaugh (2012), Rupert Murdoch and News Corp. (2011), Sarah Palin (2010), Glenn Beck (2009), Sean Hannity (2008), ABC (2006), Chris Matthews (2005), and Bill O'Reilly (2004).

    Facebook is the most powerful force in journalism

    “Does Even Mark Zuckerberg Know What Facebook Is?” Max Read asked in an October profile for New York magazine. Rattling off statistics pointing to the dizzying reach and breadth of a site with two billion monthly active users, Read concluded that the social media platform Zuckerberg launched for his Harvard peers in 2004 “has grown so big, and become so totalizing, that we can’t really grasp it all at once.”

    Facebook’s sheer size and power make comparisons difficult. But Zuckerberg himself has defined at least one key role for the website. In 2013, he told reporters that the redesign of Facebook’s news feed was intended to “give everyone in the world the best personalized newspaper we can.” This strategy had obvious benefits for the company: If users treated the website as a news source, they would log on more frequently, stay longer, and view more advertisements.

    Zuckerberg achieved his goal. Forty-five percent of U.S. adults now say they get news on Facebook, far more than on any other social media platform, and the percentage of people using the website for that purpose is rising. Facebook is the country’s largest single source of news, and indisputably its most powerful media company.

    That goal of becoming its users' personalized newspaper was assuredly in the best interest of Facebook. The company makes money due to its massive usership, so the value of any particular piece of content is in whether it keeps people engaged with the website. (Facebook reported in 2016 that users spend an average of 50 minutes per day on the platform.) But ultimately, Zuckerberg’s declaration showed that he had put his company at the center of the information ecosystem -- crucial in a democracy because of its role in setting public opinion -- but refused to be held accountable for the results.

    That failure to take responsibility exposes another key difference between Facebook and the media companies Zuckerberg said he wanted to ape. Newspapers face a wide variety of competitors, from other papers to radio, television, and digital news products. If a newspaper gains a reputation for publishing false information, or promoting extremist views, it risks losing subscribers and advertising dollars to more credible rivals and potentially going out of business. But Facebook is so popular that it has no real competitors in the space. For the foreseeable future, it will remain a leading force in the information ecosystem.

    Fake news, confirmation bias, and the news feed


    Sarah Wasko / Media Matters

    Facebook’s news feed is designed to give users exactly what they want. And that’s the problem.

    All content largely looks the same on the feed -- regardless of where it comes from or how credible it is -- and succeeds based on how many people share it. Facebook’s mysterious algorithm favors “content designed to generate either a sense of oversize delight or righteous outrage and go viral,” serving the website’s billions of users the types of posts they previously engaged with in order to keep people on the website.

    When it comes to political information, Facebook largely helps users seek out information that confirms their biases, with liberals and conservatives alike receiving news that they are likely to approve of and share.

    A wide range of would-be internet moguls -- everyone from Macedonian teenagers eager to make a quick buck to ideological true believers hoping to change the political system -- have sought to take advantage of this tendency. They have founded hyperpartisan ideological websites and churned out content, which they have then shared on associated Facebook pages. If the story is interesting enough, it goes viral, garnering user engagement that leads to the story popping up in more Facebook feeds. The site’s owners profit when readers click the Facebook story and are directed back to the hyperpartisan website, thereby driving up traffic numbers and helping boost advertising payouts. The more extreme the content, the more it is shared, and the more lucrative it becomes. Facebook did not create ideological echo chambers, but it has certainly amplified the effect to an unprecedented degree.

    Intentional fabrications packaged as legitimate news have become just another way for hyperpartisan websites to generate Facebook user engagement and cash in, launching outlandish lies into the mainstream. Users seem generally unable to differentiate between real and fake news, and as they see more and more conspiracy theories in their news feed, they become more willing to accept them.

    Facebook’s 2016 decision to bow to a conservative pressure campaign has accelerated this process. That May, a flimsy report claimed that conservative outlets and stories had been “blacklisted” by the Facebook employees who selected the stories featured in its Trending Topics news section, a feature that helps push stories viral. The notion that Facebook employees might be suppressing conservative news triggered an angry backlash from right-wing media outlets and Republican leaders, who declared that the site had a liberal bias. In an effort to defuse concerns, Zuckerberg and his top executives hosted representatives from Donald Trump’s presidential campaign, Fox News, the Heritage Foundation, and other bastions of the right at Facebook’s headquarters. After hearing their grievances over a 90-minute meeting, Zuckerberg posted on Facebook that he took the concerns seriously and wanted to ensure that the community remained a “platform for all ideas.”

    While Facebook’s own internal investigation found “no evidence of systematic political bias” in the selection or prominence of stories featured in the Trending Topics section, the company announced the following week that its curators would no longer rely on a list of credible journalism outlets to help them determine whether a topic was newsworthy, thereby removing a key method of screening the credibility of stories. And in late August, as the presidential election entered its stretch run, Facebook fired its “news curators,” putting Trending Topics under the control of an algorithm. The company promised that removing the human element “allows our team to make fewer individual decisions about topics.” That’s true. But the algorithm promoted a slew of fabricated stories from bogus sources in the place of news articles from credible outlets.

    This confluence of factors -- users seeking information that confirms their biases, sites competing to give it to them, and a platform whose craven executives deliberately refused to take sides between truth and misinformation -- gave rise to the fake news ecosystem.

    The result is a flood of misinformation and conspiracy theories pouring into the news feeds of Facebook users around the world. Every new crisis seems to bring with it a new example of the dangerous hold Facebook has over the information ecosystem.

    Obfuscation and false starts for enforcement

    Zuckerberg resisted cracking down on fake news for as long as he possibly could. Days after the 2016 presidential election, he said it was “crazy” to suggest fake news on Facebook played a role in the outcome. After an uproar, he said that he took the problem “seriously” and was “committed to getting this right.” Seven months later, after using his control of more than 50 percent of Facebook shares to vote down a proposal for the company to publicly report on its fake news efforts, the CEO defended the company’s work. Zuckerberg said Facebook was disrupting the financial incentives for fake news websites, and he touted a new process by which third-party fact-checkers could review articles posted on the site and mark them as “disputed” for users. This combination of small-bore proposals, halting enforcement, and minimal transparency has characterized Zuckerberg’s approach to the problem.

    Under Facebook’s third-party fact-checking system, rolled out in March to much hype, the website’s users have the ability to flag individual stories as potential “false news.” A fact-checker from one of a handful of news outlets -- paid by Facebook and approved by the International Fact-Checking Network at Poynter, a non-partisan journalism think tank -- may then review the story, and, if the fact-checker deems it inaccurate, place an icon on the story that warns users it has been “disputed.”

    This is not a serious effort at impacting an information infrastructure encompassing two billion monthly users. It’s a fig leaf that Facebook is using to benefit from the shinier brands of the outlets it has enlisted in the effort, while creating a conflict of interest that limits the ability of those news organizations to scrutinize the company.  

    The program places the onus first on users to identify the false stories and then on a small group of professionals from third parties -- including The Associated Press, Snopes, ABC News and PolitiFact -- to take action. The sheer size of Facebook means the fact-checkers cannot hope to review even a tiny fraction of the fake news circulating on the website. The Guardian's reviews of the effort have found that it was unclear whether the flagging process actually impeded the spread of false information, as the “disputed” tag is often only added long after the story had already gone viral and other versions of the same story can circulate freely without the tag. The fact-checkers themselves have warned that it is impossible for them to tell how effective their work is because Facebook won’t share information about their impact.

    Zuckerberg’s happy talk about the company’s efforts to demonetize the fake news economy also continues to ring hollow. According to Sheryl Sandberg, Facebook’s chief operating officer, this means the company is “making sure” that fake news sites “aren’t able to buy ads on our system.” It’s unclear whether that is true, since Facebook refuses to be transparent about what it’s doing. But whether the fake news sites buy Facebook ads or not, the same websites continue to benefit from the viral traffic that Facebook makes possible. They can even benefit from having their associated Facebook pages verified. (Facebook verifies pages for public figures, brands, and media outlets with a blue check mark to confirm it is “the authentic Page” or profile for the associated group or person, imbuing the page with what inevitably looks like a stamp of approval from the social media giant.)

    Facebook’s response to criticism of its political advertising standards has been more robust. During the 2016 presidential election, the company was complicit in what the Trump campaign acknowledged was a massive “voter suppression” effort. Trump’s digital team spent more than $70 million on Facebook advertising, churning out hundreds of thousands of microtargeted “dark” ads, which appeared only on the timelines of the target audience and did not include disclosures that they were paid for by the campaign. A hefty portion of those ads targeted voters from major Democratic demographics with negative messages about Hillary Clinton and was intended to dissuade them from going to the polls. Facebook employees embedded with the Trump team aided this effort, helping with targeting and ensuring the ads were approved through an automated system. But in October, the company received major blowback following the disclosure that Russian-bought ads were targeted using similar strategies. Facebook subsequently announced that in the future, election ads would be manually reviewed by an employee to ensure they meet the company’s standards. The company plans to require disclosure of who paid for political ads, and to increase transparency by ensuring that users can view all ads a particular page has purchased. Those are meaningful steps, but Facebook should go further by making clear that the company opposes voter suppression and will instruct its employees not to approve ads intended for that purpose.

    It’s notable, of course, that Facebook’s effort to curb abuse of its political advertising came only after U.S. senators unveiled legislation requiring stricter disclosure for online political ads. The company took action in order to preempt a meaningful federal response. With no such pressure on offer with regard to fake news, Facebook has been left to its own devices, responding only as needed to quiet public anger at its failures. At every step, experts have warned that Facebook’s efforts to push back against fake news have been insufficient and poorly implemented. The company is doing as little as it can get away with.

    What can be done?


    Sarah Wasko / Media Matters

    Hoaxes and disinformation have always been a part of human society, with each new generation enlisting the era’s dominant forms of mass communication in their service. But Facebook’s information ecosystem and news feed algorithm have proven particularly ripe for abuse, allowing fake news purveyors to game the system and deceive the public. Those bad actors know that user engagement is the major component in ensuring virality, and they have engineered their content with that in mind, leading to a system in which Facebook turbocharges false content from disreputable sources.

    Facebook could fight back against fake news by including an authority component in its algorithm, ensuring that articles from more credible outlets have a better chance of virality than those from less credible ones. Facebook’s algorithm should recognize that real news outlets like The New York Times or CNN are more credible than websites that serve up deliberate fabrications, and respond accordingly, the way Google’s (admittedly imperfect) search engine does.

    This will also require Facebook to stop conferring authority on pages that do not deserve it, by stripping verified badges from pages that regularly traffic in fake news.

    Facebook also has a serious problem with bots: software that mimics human behavior and cheats the company’s algorithm, creating fake engagement and sending stories viral. The company will need to step up its efforts to identify algorithmic anomalies caused by these bots and develop heightened countermeasures, which should include minimizing the impact of known bots on users.

    If Facebook can find a way to change its algorithm to avoid clickbait, as it has claimed, it should be able to do the same to limit the influence of websites that regularly produce fabrications.

    But algorithms alone won’t be enough to solve the problem. Facebook announced it would hire 1,000 people to review and remove Facebook ads that don’t meet its standards. So why hasn’t Zuckerberg done something similar to combat fake news? Why won’t Facebook, as one of the third-party fact-checkers suggested in an interview with The Guardian, hire “armies of moderators and their own fact-checkers” to solve that problem?

    Given the collapse of the news industry over the last decade, there is no shortage of journalists with experience at verifying information and debunking falsehoods. Facebook could hire thousands of them; train them; give them the actual data that they need to determine whether they are effective and ensure that their rulings impact the ability of individual stories to go viral; and penalize websites, associated Facebook pages, and website networks for repeat offenses.

    If Zuckerberg wants Facebook to be a “personalized newspaper,” he needs to take responsibility for being its editor in chief.

    There is a danger, of course, to having a single news outlet with that much power over the U.S. information ecosystem. But Facebook already has that power, and there are compelling arguments in favor of limiting it, either with government regulation or antitrust actions.

    What’s clear is that Facebook will only act under pressure. Earlier this month, The Weekly Standard, a conservative magazine and regular publisher of misinformation, announced it had been approved to join Facebook’s fact-checking initiative. The magazine was founded by Bill Kristol, the former chief of staff to Vice President Dan Quayle, and is owned by right-wing billionaire Philip Anschutz. Stephen Hayes, The Weekly Standard’s editor-in-chief and the author of The Connection: How al Qaeda's Collaboration with Saddam Hussein Has Endangered America, praised Facebook for the decision, telling The Guardian: “I think it’s a good move for [Facebook] to partner with conservative outlets that do real reporting and emphasize facts.” Conservatives, including those at The Weekly Standard, had previously criticized the initiative, claiming the mainstream news outlets and fact-checking organizations Facebook partnered with were actually liberal partisans. Facebook responded by trying to “appease all sides.”

    Nineteen months after Facebook’s CEO sat down with conservative leaders and responded to their concerns with steps that inadvertently strengthened the fake news infrastructure, his company remains more interested in bowing to conservative criticisms than stopping misinformation.


    The very people who helped build Facebook now warn that it is helping to tear the world apart.

    Founding President Sean Parker lamented “the unintended consequences of a network when it grows to a billion or 2 billion people” during a November event. “It literally changes your relationship with society, with each other,” he said. “It probably interferes with productivity in weird ways. God only knows what it's doing to our children's brains."

    Chamath Palihapitiya, who joined Facebook in 2007 and served as vice president for user growth at the company, said earlier this month that he regrets helping build up the platform: “I think we have created tools that are ripping apart the social fabric of how society works.”

    Facebook’s current employees also worry about the damage the company is doing, according to an October New York Times report detailing “growing concern among employees.” Last week, Facebook’s director of research tried to allay some of these fears with a press release titled “Hard Questions: Is Spending Time on Social Media Bad for Us?”

    Mark Zuckerberg built a platform to connect people that has become an incredibly powerful tool to divide them with misinformation, and he’s facing increasing criticism for it. But he only ever seems interested in fixing the public relations problem, not the information one. That’s why he is 2017’s Misinformer of the Year.

  • Facebook partners with conservative misinformer The Weekly Standard on fact-checking

    The Weekly Standard becomes the only partisan organization tasked with fact-checking for Facebook

    Blog ››› ››› MEDIA MATTERS STAFF

    Conservative news outlet The Weekly Standard has been approved by Facebook to partner in fact-checking "false news," a partnership that makes little sense given the outlet’s long history of making misleading claims, pushing extreme right-wing talking points, and publishing lies to bolster conservative arguments.

    The Weekly Standard’s history of publishing false claims on topics such as the 2012 attacks on diplomatic facilities in Benghazi, the Affordable Care Act, tax cuts, and the war in Iraq, among many others, raises doubts that Facebook is taking the challenge of fact-checking seriously.

    As The Guardian reports, The Weekly Standard is the first “explicitly partisan” outlet to partner with Facebook in its effort to fact-check fake news. The decision raises concerns that Facebook is giving a conservative opinion outlet with a history of misinformation unearned influence over the fact-checking process. From the December 6 report:

    A conservative news organization has been approved to partner with Facebook to fact-check false news, drawing criticisms that the social media company is caving to rightwing pressures and collaborating with a publication that has previously spread propaganda.

    The Weekly Standard, a conservative opinion magazine, said it is joining a fact-checking initiative that Facebook launched last year aimed at debunking fake news on the site with the help of outside journalists. The Weekly Standard will be the first right-leaning news organization and explicitly partisan group to do fact-checks for Facebook, prompting backlash from progressive organizations, who have argued that the magazine has a history of publishing questionable content.

    [...]

    “I’m really disheartened and disturbed by this,” said Angelo Carusone, president of Media Matters for America, a progressive watchdog group that published numerous criticisms of the Weekly Standard after the partnership was first rumored in October. “They have described themselves as an opinion magazine. They are supposed to be thought leaders.”

    Calling the magazine a “serial misinformer”, Media Matters cited the Weekly Standard’s role in pushing false and misleading claims about Obamacare, Hillary Clinton and other political stories.

  • Facebook CEO’s Immigration Reform Group Donated To Trump’s Transition To “Curry Early Favor” With Administration

    Latest Report Adds To Growing List Of Questionable Donations And Meetings With Conservatives

    Blog ››› ››› CHRISTOPHER LEWIS

    Facebook CEO Mark Zuckerberg’s immigration reform lobby, FWD.us, donated $5,000 to President Donald Trump’s transition, according to a report from Politico.

    Despite a contentious history opposing Trump’s anti-immigrant policies, the group donated to Trump “hoping to curry early favor and help shape the incoming administration.” From Politico:

    But months later the nonprofit, founded by Facebook CEO Mark Zuckerberg wrote a $5,000 check to Trump’s presidential transition — the latest indication that it’s still business as usual for the tech industry in Washington despite the revulsion many Silicon Valley engineers and executives feel toward Trump.

    Hoping to curry early favor and help shape the incoming administration, FWD.us joined a handful of tech and telecom companies like AT&T, Microsoft and Qualcomm in funding Trump’s months-long transition operation, which raked in roughly $6.5 million through Feb. 15, according to a transition disclosure report filed last weekend and obtained by POLITICO on Thursday.

    [...]

    FWD.us has had a fractious history with Trump and some of his top lieutenants, dating back to well before the election. Jeff Sessions, now the U.S. attorney general, blasted the group and its founder, Zuckerberg, in a blistering anti-immigration speech from the Senate floor in 2014. When Trump, as a candidate in 2015, detailed his immigration policy blueprint, Schulte described the approach as “just wrong.” While he didn’t mention Trump by name, the FWD.us founder took aim at “anti-immigrant voices” that seek to “forcibly expel millions of immigrants, period.”

    Facebook and CEO Mark Zuckerberg have faced increasing criticism over their efforts to reach out to conservatives. Recently, Facebook donated more than $120,000 to the American Conservative Union’s annual event, the Conservative Political Action Conference (CPAC). In 2016, Facebook met with conservative leaders to listen to their complaints of anti-conservative bias in Facebook’s trending topics feature; the company subsequently fired its human editors in August.

  • Report: Facebook Continues To Placate Conservatives By Donating To CPAC

    Blog ››› ››› MEDIA MATTERS STAFF

    The Daily Beast reports that Facebook donated more than $120,000 to the American Conservative Union’s annual event, the Conservative Political Action Conference (CPAC). Mark Zuckerberg’s donation comes after he held a meeting with conservative media personalities such as Glenn Beck and Fox’s Dana Perino following allegations that the website had been suppressing conservative views.

    During the meeting, Zuckerberg lauded President Donald Trump for having “more fans on Facebook than any other presidential candidate” and Fox News for driving “more interactions on its Facebook page than any other news outlet in the world.” Following the accusations of bias, Facebook laid off its entire editorial team and replaced it with an algorithm, a move which The Washington Post reported led to the rise and prominence of “fake news” trending on the website.

    According to The Daily Beast, Facebook continues to court conservatives with its “six-figure contribution to CPAC,” which includes a cash donation and “in-kind support.” From The Daily Beast:

    Sources with direct knowledge of the matter tell The Daily Beast that Facebook made a six-figure contribution to CPAC, the yearly conference for conservative activists which will feature President Donald Trump, White House advisor Steve Bannon, NRA president Wayne LaPierre, and other right-wing favorites.

    Facebook’s contribution is worth more than $120,000, according to our sources. Half of that is cash, and the other half is in-kind support for CPAC’s operations. Facebook will have a space at the conference for attendees to film Facebook Live videos, and will also train people on best practices for using the social network and Instagram.

    [...]

    The Wall Street Journal reported in October that Trump’s own Facebook posts fueled intense debate within the company about what kind of content was acceptable -- particularly his calls for a ban on Muslims entering the U.S. Mark Zuckerberg himself had to determine that Trump’s posts were okay, according to the paper’s report. And The New York Times reported that after Trump won the election, some company employees worried the spread of racist memes and fake news on the site may have boosted his candidacy.

    “A fake story claiming Pope Francis—actually a refugee advocate—endorsed Mr. Trump was shared almost a million times, likely visible to tens of millions,” Zeynep Tufekci, an associate professor at the University of North Carolina who studies the social impact of technology, told the Times. “Its correction was barely heard. Of course Facebook had significant influence in this last election’s outcome.”