Author Page | Media Matters for America

Melissa Ryan

  • What does Dan Scavino do all day?

    Scavino is the ambassador to Trump’s vast online army of trolls -- and diplomatic relations are strong

    Blog ››› MELISSA RYAN


    Sarah Wasko / Media Matters

    In a profile of White House social media director Dan Scavino published in The New York Times Magazine, writer Robert Draper seems puzzled at Scavino's role in the Trump campaign and administration. Throughout the piece, Draper attempts to answer the question: What does this guy do all day?

    From the article:

    Scavino was another of the “originals” on Trump’s 2016 campaign, and I saw him numerous times on the trail, but I could never quite ascertain what he was doing to further his boss’s presidential ambitions. Aggressively nondescript, Scavino could often be seen in a suit at the side of the stage, taking photos of the immense rally crowds with his iPhone and later, while scowling at his laptop aboard Trump’s 757, posting the images to Facebook. … Scavino’s sole task, from what I could tell, was to document Trump’s popularity.

    My perplexity over Scavino deepened after Inauguration Day, even as he got an official title: assistant to the president and director of social media, a position that had never existed before and one that paid him the maximum White House staff salary of $179,700. The Trump White House continued to employ an official photographer (Shealah Craighead) as well as a chief digital officer (Ory Rinat). This small digital team shared a suite across the street, in the Executive Office Building. But Scavino got an office on the ground floor of the West Wing, just down the hall from the leader of the free world.

    Draper spends the rest of the article trying to grasp Scavino’s role and why it matters. He comes to the conclusion that Scavino’s most important job is minding President Donald Trump’s Twitter account.

    The only official function Scavino filled that might justify his salary and his prime White House real estate was detailed in the lawsuit’s [over Trump blocking people on Twitter] stipulation of facts. “Scavino,” both parties to the lawsuit agreed, “assists President Trump in operating the @realDonaldTrump account, including by drafting and posting tweets to the account.” No one else, besides Trump himself, had access to the most consequential and controversial social media account in the world.

    Having access to the president’s Twitter feed isn’t a job; it’s a sign that you’re good at your job. Trump’s Twitter account is arguably his most valuable digital asset. Scavino has access because he’s a trusted member of Trump’s administration.

    Draper does eventually get around to describing Scavino’s day-to-day job duties -- reaching out to Trump’s base online and serving as the keeper of those relationships.

    More than anyone else in the White House, the director of social media spends his day online, monitoring the #MAGA congregation. “Dan talks to the base more than anybody else after the president,” one senior White House official told me. “He’s the conductor of the Trump Train, and these people know he’s true blue, and he also knows all the influencers.” A year ago, the former chief strategist Steve Bannon shared a West Wing office with Scavino. “He has his hands on the Pepes,” Bannon recalls, referring to the cartoon frog that serves as mascot to the alt-right. “He knew who the players were and who were not. He’d bring me Cernovich — I didn’t know who Cernovich was until Scavino told me.” Bannon was referring to the alt-right blogger Mike Cernovich, who has frequently promoted debunked and scurrilous conspiracy theories.

    But Draper fails to recognize either the actual labor involved in Scavino’s operation or its value to the administration and to Trump personally. And it’s important for anyone covering Trump (as well as anyone running against Trump and the GOP) to understand not just Scavino’s job but why his work matters. Draper’s profile misses both.

    Scavino isn’t just monitoring the #MAGA movement online; he’s actively cultivating relationships with that community, more than likely sharing messaging and talking points with influencers, and amplifying their content to a broader audience. More than once, user-generated content from Reddit forum “r/The_Donald” has been tweeted out by Trump himself, most notoriously when Trump tweeted this meme of himself beating up CNN. Scavino is almost certainly responsible for this Trump tweet attacking Rosie O’Donnell after a similar thread appeared on r/The_Donald as well. (Designer Mike Rundle tweeted a crude but accurate depiction of the Scavino social media pipeline.)

    Scavino’s outreach isn’t an unusual occurrence. The Obama White House devoted staff resources to the same task, as has nearly every major presidential campaign since 2004. Online outreach is a crucial part of any digital operation. Given that Trump needs to hang on to his base perhaps more than any president before him, it makes sense that Scavino’s White House role is prominent.

    I don’t write this to defend him as a person. After all, Scavino is a guy who, through his personal Twitter feed, amplifies conspiracy theories and harasses others. Scavino is not the kind of person I want paid on the taxpayer’s dime. But it’s important to understand what his job is -- and that Scavino is quite good at what he does.

    Per the profile, Scavino is the “conductor of the Trump Train.” Draper got the quote right but failed to consider what it meant, even as he described the train as a “juggernaut.” Scavino’s role isn’t just to craft tweets for Trump. He’s keeping the Trump Train’s passengers on board.

  • The tragedy and lost opportunity of Zuckerberg’s testimony to Congress

    Congress didn’t do nearly enough to hold Mark Zuckerberg accountable




    Facebook CEO Mark Zuckerberg came to Washington to testify before Congress over two days of hearings. Expectations were low -- to the point of infantilization. Unsurprisingly, Zuckerberg was able to clear the extremely low bar America sets for white men in business. He showed up in a suit and tie, didn’t say anything too embarrassing, and, for the most part, the members of Congress questioning him made more news than his testimony did. Facebook’s public relations team probably considers the hearings a win. The stock market certainly did.

    Facebook’s users, however, lost bigly. Congress failed to hold Zuckerberg accountable. The Senate hearing, held jointly by the judiciary and commerce committees, devolved into Zuckerberg explaining how the Internet worked to the poorly informed senators. The House commerce committee members were more up to speed, but Republican members -- following Ted Cruz’s lead from the day before -- spent most of their time and energy grilling Zuckerberg about nonexistent censorship of right-wing content. If Facebook’s leaders are ill-prepared to handle the challenges they’re facing, Congress appears even less up to the challenge.

    The tech press had a field day on Twitter feigning outrage at Congress for its lack of tech savvy, but Congress’ lack of interest in holding Facebook accountable is far more problematic. As David Dayen noted in The Intercept:

    This willingness, against interest and impulse, to do the job of a policymaker was sorely absent throughout Tuesday’s testimony, which involved both the judiciary and commerce committees, as well as nearly half the members of the Senate. Far too many senators framed the problems with Facebook — almost unilaterally agreed, on both sides of the aisle, to be pernicious and requiring some action — as something for Zuckerberg to fix, and then tell Congress about later.

    Sen. Lindsey Graham (R-SC) was the rare exception. He was one of few members of Congress comfortable with calling Facebook a monopoly.

    Facebook’s issues with civil rights were barely covered, with a few notable exceptions. Sen. Mazie Hirono (D-HI) asked Zuckerberg if Facebook would ever assist the government in vetting immigrants (it would not in most cases), and Sen. Cory Booker (D-NJ) asked Zuckerberg to protect Black Lives Matter activists from improper surveillance (he agreed). Reps. Bobby Rush (D-IL) and G.K. Butterfield (D-NC) asked similar questions during the House hearing, and Rep. Susan Brooks (R-IN) asked about Facebook as a recruitment tool for ISIS. But not one question was asked about Facebook’s role as a recruitment tool for white supremacists and neo-Nazis.

    While the House hearing featured better questions, the majority of Republican members nevertheless managed to turn it into a circus. They repeatedly asked Zuckerberg about the supposed censorship of pro-Trump social media stars Diamond and Silk (which has since been debunked) and suggested that the biggest issue Facebook faces is the censorship of right-wing content. The concern trolling over Diamond and Silk came between questions exposing deep societal problems including opioid sales on the social media platform that are literally responsible for overdose deaths and Facebook’s role in the Rohingya genocide in Myanmar.

    The Diamond and Silk obsession derives from another one of Facebook’s societal problems: the prominence of propaganda, conspiracy theories, and misinformation on the platform. Multiple members who asked Zuckerberg about Diamond and Silk said they’d heard directly from their constituents about the matter, which they almost certainly did. Pro-Trump media lost their collective minds when the news broke. The facts are that the Diamond and Silk supposed censorship didn't actually happen and that data does not back up the claim of right-wing media being censored on Facebook. If anything, the platform is a cesspool of far-right activity.

    Not one member of Congress asked Zuckerberg about Facebook’s role in the spread of conspiracy theories and propaganda. Republicans were wasting valuable time demanding answers about a nonexistent conspiracy, and no one at all felt compelled to ask Zuckerberg how the hell we got here. It is extremely telling that while this was going on, Diamond and Silk made an appearance on Alex Jones’ Infowars, another conspiracy theory outlet that owes its popularity in part to Facebook.

    If social media filter bubbles have split Americans into different realities, it would seem that Congress is a victim of the same problem. Research shows that the right-wing’s filter bubble influences the left’s in a way that isn’t reciprocated. Right-wing content isn’t actually being censored on Facebook. The newly minted Diamond and Silk Caucus (or the Alex Jones Caucus) in Congress was demanding that even more right-wing content show up in our feeds, sending the right-wing base even deeper into their bubble. It’s the same schtick that the same people have pulled for years with the political media.

    While many in Congress have complained about far-right conspiracy theories becoming a part of mainstream American society, it’s a shame that they didn’t hold accountable the one man who more than anyone created this reality.

  • Facebook’s latest announcements serve as a reminder that fixing the platform is a global issue

    Effective consumer pushback must be global as well.




    A few huge updates from Facebook this week are worth paying attention to.

    First, the company announced the removal of “70 Facebook and 65 Instagram accounts — as well as 138 Facebook Pages — that were controlled by the Russia-based Internet Research Agency (IRA).” Facebook also removed any ads associated with the IRA pages. In an unusual bit of transparency, the company provided stats of what was deleted and who those pages were targeting:

    Of the Pages that had content, the vast majority of them (95%) were in Russian — targeted either at people living in Russia or Russian-speakers around the world including from neighboring countries like Azerbaijan, Uzbekistan and Ukraine.

    Facebook also provided a few samples from the pages as well as ad samples, none of which were written in English. “The IRA has consistently used inauthentic accounts to deceive and manipulate people,” the announcement said. “It’s why we remove every account we find that is linked to the organization — whether linked to activity in the US, Russia or elsewhere.”

    CEO Mark Zuckerberg reiterated IRA’s global reach in a post on his personal page, saying, “Most of our actions against the IRA to date have been to prevent them from interfering in foreign elections. This update is about taking down their pages targeting people living in Russia. This Russian agency has repeatedly acted deceptively and tried to manipulate people in the US, Europe, and Russia -- and we don't want them on Facebook anywhere in the world.”

    Facebook also announced an updated terms of service and data policy that the company claims will be easier for users to understand. “It’s important to show people in black and white how our products work – it’s one of the ways people can make informed decisions about their privacy,” the announcement reads. “So we’re proposing updates to our terms of service that include our commitments to everyone using Facebook. We explain the services we offer in language that’s easier to read. We’re also updating our data policy to better spell out what data we collect and how we use it in Facebook, Instagram, Messenger and other products.”

    Finally, Facebook announced major changes to how third parties can interact with the platform and collect user data. The company acknowledged that the number of users whose data was being illegally used by Cambridge Analytica -- reported to be 50 million -- was actually 87 million. Facebook promised, “Overall, we believe these changes will better protect people’s information while still enabling developers to create useful experiences. We know we have more work to do — and we’ll keep you updated as we make more changes.”

    Facebook is finally responding to consumer pressure in a systematic way. These changes will curb the amount of propaganda users are exposed to, limit how third parties can interact with users on the platform, and make the rules of the road clearer for everyone.

    It’s important to note that all of these changes appear to be global, not limited to specific countries, which is good because the problems Facebook has caused are also global. Facebook has been weaponized by hostile actors seeking to manipulate users in dozens of countries. Facebook employees have admitted, on the company's Hard Questions Blog, that Facebook as a platform can be harmful to democracy. Facebook’s ability to reach people across the world is unprecedented in scale, and because of this, there’s no institution or government with the ability to regulate Facebook and protect the totality of its users.

    We have Facebook on the defensive, but the company will change only as much as it’s pressured to. Tech lawyer and privacy advocate Tiffany Li, in an op-ed for NBC News, identified three groups of stakeholders Facebook needs to appease in order to save the company: “shareholders, policymakers, and of course, consumers.” I like her categorization but would add that Facebook needs to appease these three groups in countries across the globe, not just in the U.S., U.K., and European Union nations.

    This isn’t a problem that can be solved overnight, something Zuckerberg acknowledged when he spoke with Vox’s Ezra Klein this week, saying, “I think we will dig through this hole, but it will take a few years. I wish I could solve all these issues in three months or six months, but I just think the reality is that solving some of these questions is just going to take a longer period of time.” Generally, I’m a Zuckerberg critic, but I appreciate this comment and agree we’re in for a turbulent couple of years coming to grips with everything.

    Here’s the good news. Thanks to social media (including Facebook!) we’re more connected than ever before. Facebook’s users have an opportunity to have a global conversation about what changes are needed and take any activist campaigns or direct actions global. We can pressure multiple governments, work with civil society groups in multiple countries, and create a global consumer movement.

    Facebook still has a long way to go, and its users have 87 million (or 2 billion) reasons to be upset. The company has a lot to do before it can earn back the trust of its consumers across the globe. That said, I appreciate that Facebook is finally taking some decisive action, even as it acknowledges that curbing abuse of all kinds on the platform will be an ongoing battle. It’s a welcome correction to the company’s PR apology tour, adding action to words that would otherwise ring hollow. To be clear: Facebook was forced to take these actions thanks to global activism and consumer pressure. We have the momentum to force needed systemic changes. Let’s keep at it.

    Media Matters is calling on Facebook to ban any entity, be it the Trump campaign or any other, that is using a copy of Cambridge Analytica's data or any other data set acquired by cheating.

    Click here and join our call to action

  • Russian trolls used my Tumblr to spread election propaganda. Here's my story.




    My name is Melissa and I was duped by Russian propagandists on Tumblr.

    It started innocently enough. In 2016, I ran a pro-Hillary Tumblr that became quite popular. I started it after searching for Hillary and Bernie memes on Tumblr and discovering just how little pro-Hillary content existed on the platform. I was also overwhelmed by the volume and tone of the anti-Hillary content there. Tumblr’s demographic skews young, so I wasn’t surprised by how much pro-Bernie content I found on the platform, but the state of Hillary’s presence on Tumblr (outside of her campaign’s own page) really shocked me. I decided to do something about it, and I Like Hillary was born.

    I didn’t put a lot of time or energy into the site, maybe 10 to 15 minutes every morning before work. I’d search the internet for new Hillary content and reblog posts from the other pro-Hillary Tumblrs I followed. But the return for my minimal effort was enormous. I’ve created popular Tumblrs before (most notably This Man Legislates), but traffic on this new one was through the roof. At its peak, I Like Hillary posts were averaging more than 200,000 engagements per week, with the top post gathering more than 71,500 notes. Given the effort I put in, that’s the best return on investment I’d ever seen on a digital project I’ve created.

    When Trump won the election, I abandoned I Like Hillary, but what I found on that initial search stuck with me. I’ve been doing digital strategy in politics and advocacy for more than 11 years, and what I saw online in 2016 didn’t make sense to me. Spend more than 10 minutes on any online platform and you’d get the sense that every American voter thought Hillary Clinton was an evil criminal, something the election results (in which Clinton won the popular vote) didn’t bear out. Hillary Clinton might have been an unpopular candidate, but she wasn’t hated by everyone. It didn’t add up.

    Of course, now we know why so much of what happened online in 2016 didn’t make sense. Russian propaganda ran rampant on all of our favorite social media sites. The Kremlin-backed Internet Research Agency reportedly ran digital influence operations on Facebook, Twitter, YouTube, Reddit, and, as it turns out, Tumblr.

    Per BuzzFeed, Russian trolls exploited the young audiences of Tumblr in their content strategy:

    Russian trolls posed as black activists on Tumblr and generated hundreds of thousands of interactions for content that ranged from calling Hillary Clinton a “monster” to supporting Bernie Sanders and decrying racial injustice and police violence in the US, according to new findings from researcher Jonathan Albright and BuzzFeed News.

    While Facebook and Twitter continue to face intense public and congressional pressure over the activity from trolls working for the Russian Internet Research Agency, Tumblr has somehow managed to escape scrutiny. But the blogging platform was in fact home to a powerful, largely unrevealed network of Russian trolls focused on black issues and activism.

    “The evidence we've collected shows a highly engaged and far-reaching Tumblr propaganda-op targeting mostly teenage and twenty-something African Americans. This appears to have been part of an ongoing campaign since early 2015,” said Albright, research director of the Tow Center for Digital Journalism at Columbia University.

    A month after this article ran, Tumblr let its users know that, yes, the platform had been infected with Russian propaganda. In a blog post, Tumblr outlined steps it was taking to correct the problem and made public a list of 84 accounts known to be run by Russian trolls. Additionally, it emailed users to let them know if they’d engaged with IRA trolls on their own Tumblr accounts.

    I received one of those emails.

    I actually already knew that Russian trolls had engaged with I Like Hillary and that I might have unknowingly reblogged IRA-created content. When I tweeted about the initial article and the Tumblr I ran, Jonathan Albright (the researcher quoted above) reached out to me. He’d taken a look at my Tumblr and it took him less than five minutes of scanning the comments to find inflammatory posts from known Russian trolls. I’d missed this entirely.

    How did Russian trolls use Tumblr specifically? As this piece in New York magazine points out, we have a pretty clear idea because the chains of reblogged posts of Russian origin still exist:

    But Tumblr also provides our best glimpse of the IRA’s actual practices, what they posted, and how these users inserted themselves into American discourse. That’s because Tumblr’s primary interaction, reblogging, requires users to duplicate another user’s post onto their own profile. User B reblogs User A, and on User B’s blog, User A’s comment remains. In essence, the structure of Tumblr is millions of users copy-pasting each other. If Tumblr were to wipe every instance of Russian activity, it would also “break the reblog chain,” wiping every user interaction that came after an IRA one. Tumblr opted against that, which means that, armed with a list of aliases and the indexing power of Google, you can find plenty of old posts from IRA trolls.

    Mostly, it appears, the IRA’s Tumblr strategy was to rip popular Twitter posts and re-upload them to Tumblr.

    Essentially, I gave Russian propagandists an outlet. I unknowingly allowed them to use something I’d created online in their active measures campaign. I was duped.

    The tech companies have been reluctant to tell users that they were exposed to Russian propaganda. Given how Facebook users reacted with anger when they were told about their own exposure, I can understand the reluctance of others. Tumblr waited too long to inform its users, but I appreciate the way the company did it, especially its decision to provide the list of account names and to leave the chains of reblogged content intact.

    No one wants to admit they were duped. I’ve long known this intellectually, but now I understand it personally. It’s embarrassing to learn that something you made became a tool for Russian propagandists. I’ve been studying all of this for more than a year, but it had never occurred to me that my own social media content might have been involved in a Russian propaganda effort. We were all duped to some degree. Russia used our own online lives against us, with the goal of pitting Americans against one another.

    It worked.

  • Tech’s moral obligation to Parkland survivors


    The Parkland student survivors behind the #MarchForOurLives are now public figures. Their social media presence is massive, they’re a fixture on TV news, and according to a poll from Public Policy Polling, they have a “56/34 favorability rating.” Their advocacy has inspired many Americans to engage (or re-engage) in the fight for gun safety. It’s also inspired a steady stream of harassment and hoaxes from the right.

    The latest attacks on the students, right after their wildly successful march, are particularly vile. They include doctored images, memes suggesting that the students support communist dictators or Nazis (apparently communism and fascism are one and the same now), and accusations that student David Hogg wasn’t actually present for the shooting (just for the record, we know Hogg was there because he recorded interviews with his fellow students during the shooting). Conspiracy theorist Alex Jones took things to another level entirely when he released videos depicting a Parkland survivor as a Hitler Youth member and transposing a Hitler speech over another’s words. The students were even mocked by Rep. Steve King (R-IA) on his Facebook page.

    To be clear, these are high school students, most of whom are still minors, being attacked for over a month by adults who should know better. And tech companies allow their platforms to be weaponized over and over again for this purpose.

    None of this should feel normal but somehow it is. It’s the circle of life on the internet: If half of social media is building you up, the other half will inevitably be tearing you down. We’ve accepted bullying and harassment as the price we pay for a more connected society, and that includes the harassment of minors advocating for their right to be safe at school. Looking over the social media landscape, it’s hard to argue that this isn’t normal. Does it have to be?

    All of the tech platforms have policies against harassment in their terms of service, but none include special protections for minors who are harassed. All terms of service prohibit hate speech or harassment based on protected classes, including age, but only when the attack is made on the basis of that characteristic. So while disseminating doctored images of Emma Gonzalez supposedly tearing up a copy of the Constitution (she wasn’t) or memes suggesting that David Hogg is a Nazi or that he gave the Nazi salute at the #MarchForOurLives (he isn’t and he did not) is beyond the bounds of human decency, it appears not to violate any one company’s terms.

    It’s understandable that tech companies would avoid taking political positions and do everything in their power to prevent the appearance that they’re censoring a political viewpoint. But doctoring images of the Parkland students and spreading false information about them and their families online isn’t expressing a political opinion; it’s harassment. People should be able to express political viewpoints without harassing minors. They should be able to disagree with the students’ views without superimposing their heads on Nazi uniforms. More important, tech companies should be able to understand the difference.

    The Parkland students survived one of the worst mass shootings in modern American history. They lost friends and classmates, and their lives were completely disrupted. Whether or not you agree with their views on gun safety, we should all be able to agree that teenagers have a right to advocate for their own safety at school without fear of weaponized social media attacks against them. It should never be acceptable to spread false information and doctored images that threaten the safety of anyone, especially if that person is still a student in high school. Tech companies shouldn’t allow their platforms to become dissemination engines for this type of attack. That’s not politics; that’s just human decency.

  • Mark Zuckerberg’s apology PR tour and why now is our best opportunity yet to push for change

    Facebook to everyone: our bad




    Facebook CEO Mark Zuckerberg is sorry. Specifically, as he told CNN, he’s “really sorry that this happened.”

    “I think we let the community down, and I feel really bad and I’m sorry about that,” he told Recode’s Kara Swisher. Facebook Chief Operating Officer Sheryl Sandberg, appearing on CNBC, also offered an apology: “I am so sorry that we let so many people down.”

    Zuckerberg and Facebook have a lot to apologize for. In addition to the numerous other problems plaguing Facebook under Zuckerberg’s watch, he allowed Cambridge Analytica to obtain and exploit the Facebook data of 50 million users in multiple countries. When the platform discovered the stolen data, it took the firm’s word that the data had been deleted (it hadn’t). Facebook made no attempts to independently verify that the data was no longer being used, nor did it notify users whose data was exploited. Even after the news broke, it took Zuckerberg and Sandberg six days to face the public and give interviews.

    In addition to offering their apologies, both Sandberg and Zuckerberg acknowledged that trust between Facebook and users had been breached. Sandberg said on CNBC, “This is about trust, and earning the trust of the people who use our service is the most important thing we do. And we are very committed to earning it.”

    What surprised me most, however, was their acknowledgment that regulation was coming and that perhaps Facebook needs to be checked. Zuckerberg in his CNN interview suggested that regulation of tech companies like Facebook might be necessary. Sandberg went even further: “It's not a question of if regulation, it's a question of what type. ... We're open to regulation. We work with lawmakers all over the world." At first this read to me like another attempt to pass the buck, and while that might still be partially true, there’s more to it. Facebook is responding to public outrage, including the growing calls for regulation. Facebook executives have concluded they’re not getting out of this mess without regulation, and their best path forward is to try to get the best deal they can, given the circumstances.

    Were Zuckerberg and Sandberg forthcoming enough? No. I don’t think anyone was convinced that Facebook is telling us everything it knows, nor did the company present much of a plan for protecting consumers moving forward. But consumers have the momentum. Facebook will change only as much as its users demand. The fact that Facebook’s leadership is on a full-blown apology tour means that public pressure is starting to work. After months of bad press and user backlash, Facebook is finally acknowledging that some things need to change.

    Facebook failed to protect users from a consulting firm so shady that it bragged to a potential client about entrapping candidates for office, potentially breaking U.S. election laws to help Donald Trump win in 2016, and avoiding congressional investigations. Consumers are outraged, many to the point of quitting Facebook entirely. Cambridge Analytica probably isn’t the only problematic company that Facebook allowed to exploit user data, but from an organizing perspective, we couldn’t ask for a better villain. After months of outrage, Facebook is on the defensive. This is the best opportunity we’ll have to force it and other tech platforms to make systemic change.

    Here’s a good place to start: Media Matters is calling on Facebook to ban any entity, be it the Trump campaign or any other, that is using a copy of Cambridge Analytica's data or any other data set acquired by cheating.

    Click here and join our call to action.

  • Facebook failed to protect consumers from Cambridge Analytica. Only systemic changes can prevent that from happening again.

    50 million reasons to be mad at Facebook

    Blog ››› ››› MELISSA RYAN


    Sarah Wasko / Media Matters

    Tech companies have repeatedly failed to protect the consumers who use their platforms, and despite the outrage that arises when news of another failure breaks, remarkably little has been done to fix the problem. Consumers have been left to deal with fake news, predatory political ads, and data breaches largely on their own without assistance from companies, government, or other institutions. We’re dealing with systemic failures of the social media ecosystem, but the solutions offered largely call on individuals to sort out their online experience for themselves.

    This past weekend, a series of stories broke that illustrate just how colossal those failures are. On Friday, Facebook abruptly announced that it had banned Cambridge Analytica, the firm that did data targeting for Donald Trump’s presidential campaign, from using the platform for “violating its policies around data collection and retention,” as The Verge described it. On Saturday, The New York Times and The Observer broke the story Facebook was clearly trying to get ahead of: Cambridge Analytica had illegally obtained and exploited the Facebook data of 50 million users in multiple countries.

    Via The New York Times:

    The firm had secured a $15 million investment from Robert Mercer, the wealthy Republican donor, and wooed his political adviser, Stephen K. Bannon, with the promise of tools that could identify the personalities of American voters and influence their behavior. But it did not have the data to make its new products work.

    So the firm harvested private information from the Facebook profiles of more than 50 million users without their permission, according to former Cambridge employees, associates and documents, making it one of the largest data leaks in the social network’s history. The breach allowed the company to exploit the private social media activity of a huge swath of the American electorate, developing techniques that underpinned its work on President Trump’s campaign in 2016.

    Carole Cadwalladr of The Observer worked with whistleblower Christopher Wylie for over a year to expose Cambridge Analytica’s practices and Facebook’s complicity in allowing them:

    Wylie oversaw what may have been the first critical breach. Aged 24, while studying for a PhD in fashion trend forecasting, he came up with a plan to harvest the Facebook profiles of millions of people in the US, and to use their private and personal information to create sophisticated psychological and political profiles. And then target them with political ads designed to work on their particular psychological makeup.

    “We ‘broke’ Facebook,” he says.

    And he did it on behalf of his new boss, Steve Bannon.

    “Is it fair to say you ‘hacked’ Facebook?” I ask him one night. He hesitates. “I’ll point out that I assumed it was entirely legal and above board.”

    It’s particularly troubling that this stolen data was used in a political campaign. Cambridge Analytica has long had a reputation for being “shady”; during the 2016 Republican primaries, many GOP consultants complained about the company’s practices and methodology. Even before this week’s revelations, Democratic data consultants had speculated that Cambridge Analytica would have had to steal data in order to do the work its team bragged about doing. Even the Trump campaign, despite having staff from Cambridge Analytica embedded in its headquarters, attempted to deny that the company had done what it claimed: used psychographic profiling to help Trump win.

    More troubling is the connection to Russia. In 2014, Wylie was asked to help Cambridge Analytica prepare a pitch to Vagit Alekperov, a Russian oligarch and the CEO of Lukoil. “It didn’t make any sense to me,” he told The Guardian. “I didn’t understand either the email or the pitch presentation we did. Why would a Russian oil company want to target information on American voters?” The eventual presentation “focused on election disruption techniques,” The Guardian reported. “The first slide illustrates how a ‘rumour campaign’ spread fear in the 2007 Nigerian election – in which the company worked – by spreading the idea that the ‘election would be rigged’. The final slide, branded with Lukoil’s logo and that of SCL Group and SCL Elections, headlines its ‘deliverables’: ‘psychographic messaging.’”

    An illegal data breach. Russian oligarchs. Psychographic profiling to manipulate voters. Social media is breaking democracy, aided by companies with shady practices and politicians who have turned a blind eye. By not disclosing the leak and by allowing Cambridge Analytica to continue using its platform, Facebook failed us. By not asking more questions and considering regulation much earlier, political leaders on two continents have failed us as well. What’s a social media user supposed to do? And remember, this is to say nothing of similar commercial practices on Facebook.

    The only recourse we consumers have is to demand systemic changes. Tech companies must feel more pressure from us. Governments and regulatory bodies must be similarly pressured to protect consumers through regulation and legislation. We need more citizens like Parsons professor David Carroll, who is mounting a legal effort against Cambridge Analytica, to explore the potential of lawsuits.

    We have 50 million reasons to be mad at Facebook. If that anger can be turned into action, the potential exists to create a global consumer movement on a scale never seen before. Social media is broken, but with the right amount of pressure we can force the tech giants, starting with Facebook, to fix themselves.

  • YouTube outsources truth to Wikipedia

    YouTube’s solution to conspiracy theory videos? Let Wikipedia handle it. There are three big reasons that will not work.

    Blog ››› ››› MELISSA RYAN


    Sarah Wasko / Media Matters

    YouTube has a conspiracy theory problem. The platform is full of conspiracy theory videos, and its algorithm moves viewers up a ladder of engagement. YouTube encourages consumption of more videos on a daisy chain of content that becomes more radical with each new suggested video. Last week, Zeynep Tufekci outlined this process in an op-ed for The New York Times, making the point that what “keeps people glued to YouTube” is that its “algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with — or to incendiary content in general.”

    Conspiracy theory videos that correlate to news events go viral on YouTube with alarming regularity, often spreading misinformation and lies about real people in the process. Last month, YouTube was forced to remove a conspiracy theory video alleging that underage Parkland student David Hogg was a paid crisis actor after it became YouTube’s top trending video. False information about Hogg and his family spread on YouTube for days before the company removed the smears. This week, YouTube admitted that it didn’t know why an “InfoWars video claiming that Antifa members are the ‘prime suspects’ in the mysterious package bombings in Austin, Texas appeared at the top of search results.” YouTube has reportedly informed InfoWars that the site is on its second strike and dangerously close to being permanently banned from the video-sharing platform. But even if YouTube follows through with its threat, InfoWars is merely a drop in the bucket.

    YouTube CEO Susan Wojcicki was asked about the problem during a panel at South by Southwest (SXSW) this week and previewed the platform’s latest attempt at a solution: information cues. YouTube will apparently keep a running list of known conspiracy theories, and videos referring to these conspiracies will include a text box underneath them with links to Wikipedia articles challenging the claims. You can see how this would look on YouTube’s platform here.

    I have some bad news for Wojcicki. Adding “information cues” isn’t going to solve the problem. It might actually make it worse.

    It passes the buck: Tech platforms don’t want to be held responsible for the content on their sites. Both Facebook and Twitter have made it clear that they don’t want to be “arbiters of truth.” The platforms have also pushed back hard against the idea that they are media companies, continually arguing that they’re neutral platforms for individuals and media companies to publish content. Yet the tech platforms seem more than willing to outsource the truth to other entities like Snopes, The Associated Press, and now Wikipedia. Determining what is and isn’t true isn’t something tech platforms should feel free to outsource, especially to an organization of volunteer editors at Wikipedia who weren’t informed in advance, much less consulted, about the feasibility of using their website in this way.

    It tips off the trolls: If we’ve learned anything over the past couple of years, it’s that trolls are quite good at organizing to stay ahead of the tech platforms’ attempts to curb them. Whether it’s Russian trolls getting Americans to RSVP for events on Facebook, white nationalists attempting to flood Rotten Tomatoes with fake movie reviews, or Nazis taking on the video gaming platform Steam, there’s no denying that trolls are constantly manipulating the rules of the game. The platforms can’t keep up with things as they are, let alone plan for the next thing. And now Wojcicki’s “information cues” announcement gives trolls a heads-up. Information cues aren’t even live yet, but hostile actors foreign and domestic can already start to plan how they’ll game Wikipedia pages that debunk conspiracy theories. I’m sure the volunteer editors at Wikipedia are really looking forward to the onslaught!

    It won’t have the desired effect: Information cues have been tried before and failed miserably. Recall Facebook's attempt to have fact-checkers such as Snopes dispute fake news. It failed, causing Facebook to alter the program in December so that fact checks now show up simply as “related articles.” It turns out that flagging content as potentially untrue can backfire, further entrenching mistaken beliefs. Other research on misinformation has found similar effects. YouTube’s information cues have the potential to make its already viral conspiracy problem even worse.

    As long as conspiracy theories are allowed to live online, they’ll continue to flourish. The trolls who disseminate them have mastered almost every platform and they know that tech companies will take only half steps to stop them. Meanwhile, tech companies offer no protection for real people who become entangled in organized conspiracy theory campaigns and whose professional and personal lives can be upended as a result.

  • Russian propaganda is rampant on Reddit. Here's why that matters.

    Blog ››› ››› MELISSA RYAN


    Sarah Wasko / Media Matters

    Russian propaganda runs rampant on the online message board Reddit, especially on the notorious Trump supporters’ subreddit r/The_Donald. A search on Reddit for Russian propaganda outlets RT (formerly Russia Today) and Sputnik News turns up well over 200 examples apiece. This week, Reddit CEO Steve Huffman (who uses the handle “spez”) admitted the obvious: Yes, Russian propagandists have been using Reddit. He also outlined some of the steps the company had taken in response.

    Reddit disclosed its efforts to combat Russian propaganda on its site in response to the news that the Senate intelligence committee had expanded its Russian interference investigation to include Reddit and Tumblr. In his post on Reddit, Huffman admitted that Russian trolls had weaponized the platform, said that the company was cooperating with the investigations as asked, and acknowledged that the misinformation problem would be difficult to solve, saying, “I wish there was a solution as simple as banning all propaganda, but it’s not that easy. Between truth and fiction are a thousand shades of grey.” Reddit is now facing the same scrutiny as Google, Twitter, and Facebook over the spread of Russian propaganda on the platform. Huffman (spez) immediately answered questions from the Reddit community in the comments section.

    Reddit is different from the other platforms Russian trolls targeted, as users play such a large role in shaping its community. Volunteer moderators build and maintain subreddits, and the company’s leaders generally respond to user questions and concerns when they make announcements. That doesn’t mean that Reddit has done a better job on issues tech platforms are facing, just that the relationship Reddit has with its user base is less top-down than those of other social networks dealing with Russian propaganda.

    The subreddit most closely associated with Russian propaganda is r/The_Donald, already known among Reddit users as a problem child or, as Gizmodo reported in 2016, “a community which, by exploiting poor enforcement of Reddit’s already limp user protections, has effectively been holding the rest of the site hostage.” Multiple Redditors in the comments section of Huffman’s post pointed out that not only had r/The_Donald been infiltrated by Russian trolls (many argued that it was little more than a front) but also that the subreddit’s continued existence was a sign that the platform wasn’t taking Russian propaganda seriously at all.

    Huffman addressed this criticism by responding in comments: “Banning them [users on r/The_Donald] probably won't accomplish what you want. However, letting them fall apart from their own dysfunction probably will. Their engagement is shrinking over time, and that's much more powerful than shutting them down outright” (link original). Redditors responded by downvoting Huffman’s comment a record-breaking 6,000 times.

    I’ve long maintained that tech platforms will change only as much as their users demand. It doesn’t matter what the issue is -- hate speech, propaganda, disinformation, et cetera -- tech companies have no incentive to do anything beyond what’s profitable unless pressured enough by their users. What strikes me is that Reddit’s community is better equipped to pressure Reddit to clean up its act than users of any other platform are. Unlike your average Facebook user, Redditors are well aware that a lot of Russian propaganda originates from and lives on their platform (*cough* The_Donald *cough*). Redditors have organized communities, and volunteer moderators are already in place. Users have a forum they can use to speak directly to the company leadership, and because that forum is public, media can cover it more easily, amplifying the conversation.

    Consumers have more potential power over tech companies than they realize, but only if they take collective action. Reddit’s unique community structure could be the birthplace of a new advocacy model -- one that could spread to communities on other tech platforms.

  • Roy Moore’s political consultants have a new side hustle: Pro-Trump media magnates

    GOP consultants go Big League

    Blog ››› ››› MELISSA RYAN


    Sarah Wasko / Media Matters

    The Daily Beast reported yesterday that Mustard Seed Media, which consulted for Republican candidate Roy Moore in last year’s Alabama Senate special election, has purchased Big League Politics, a “notable outpost of pro-Trump viewpoints and anti-liberal conspiracy theories on the internet.” Big League’s editor-in-chief is Patrick Howley, an alumnus of Breitbart News, the Washington Free Beacon, and The Daily Caller. Howley and Big League Politics are both known amplifiers of right-wing conspiracy theories.

    Mustard Seed Media isn’t the first GOP consulting firm to get into the pro-Trump media game. I’ve written previously about Dan Backer, a GOP consultant who has created pro-Trump media sites and used them to build an email fundraising list and raise money for pro-Trump super PACs he also runs. Both Backer and Mustard Seed Media have decided to skip the middleman: Why reach out to pro-Trump media when you can simply build your own media empire to talk up your political clients? It’s a lot less work than pitching pro-Trump outlets that you don’t own outright.

    Mustard Seed Media’s latest purchase illustrates this point. The company has also turned Moore’s campaign Twitter account, a valuable asset with more than 75,000 followers, into Big League Politics’ official Twitter account.

    Big League Politics’ About page does not disclose that it’s owned by Republican media consultants or the candidates they work with. Comically, the page has an ethics section where it falsely claims to be “journalist-owned” and states, “We do not belong to the Republican, Democrat, Libertarian, Green, or even Bull Moose parties. We are independent people telling independent stories and working on issues that we care about. Our only goal is to tell the truth.”

    Partisan media by itself isn’t a problem, even when it’s run by former political operatives. Certainly, there are examples on the left as well. However, running political media outlets while actively working for political candidates and party committees is another matter entirely, especially when you choose not to disclose your conflicts of interest to the readers.