Google | Media Matters for America

  • Russian trolls moved 340,000 Americans up the ladder of engagement

    Blog ››› MELISSA RYAN


    Sarah Wasko / Media Matters

    Last night, The Washington Post revealed that Russian trolls “got tens of thousands of Americans to RSVP” to local political events on Facebook. We’ve known since last September that Russian trolls employed this tactic and often created dueling events at the same location and time, probably to incite violence or increase tension within local communities. But only now are we learning the scale of that engagement. Per the Post, “Russian operatives used Facebook to publicize 129 phony event announcements during the 2016 presidential campaign, drawing the attention of nearly 340,000 users -- many of whom said they were planning to attend.”

    The new information comes via the Senate intelligence committee, which has been investigating potential Russian collusion in the 2016 U.S. elections and pressuring tech companies, especially Facebook, Twitter, and Google, to disclose more of what they know about just how much propaganda Americans saw on their platforms. Both Twitter and Facebook have agreed to let users know if they were exposed, but given that we’re still learning more about the scale of the operation, I’m skeptical that anyone knows how many Americans were exposed to Russian propaganda or how often. (If you’d like to check for yourself, I helped create a site that lets anyone check how likely they were to have been exposed on Facebook.)

    By now most Americans accept that Russian propaganda appeared in their social media feeds in 2016. What concerns me is whether they believe that they themselves were susceptible to it. The fact that nearly 340,000 people RSVP’d to events created by Russian trolls -- that they moved up the ladder of engagement from consuming content to RSVPing to an event -- should make us all reconsider our own vulnerability, especially when you consider that many of these events were created to sow discord. Russia’s goal is to destabilize U.S. democracy. Stoking racial, cultural, and political tensions in local communities across the U.S. by creating events on Facebook is a cheap and effective way for Russian trolls to do this.

    Russia’s use of social media to disseminate propaganda and stoke political tension is an ongoing problem. Last fall, Sens. Richard Burr (R-NC) and Mark Warner (D-VA), leaders of the Senate intelligence committee, issued a bipartisan warning that Russian trolls would continue their actions into the 2018 midterm elections and 2020 presidential elections to sow chaos. A ThinkProgress article on the now-defunct website BlackMattersUS.com illustrates how sophisticated propaganda operations can use content, online campaigns, offline events, and relationships with local activists to develop trust and credibility online. And as the successful dueling events demonstrate, all Americans, no matter their political persuasion, are susceptible to these influence operations.

    As Recode Executive Editor Kara Swisher pointed out on MSNBC today, we’re in an “ongoing war.” There’s no easy way to tell if the content we see on our social media feeds comes from Russian trolls or other hostile actors. There’s no media literacy course or easily available resource that can teach individuals how to identify propaganda. That’s why regulation that protects consumers, such as stricter disclosure of political ads and safeguards against fraud, is so vital to solving this problem, especially as tech companies have proven reluctant to make any real changes beyond what public pressure demands of them.

  • A fake Bruce Willis story is being monetized by Google AdSense and prominently featured on YouTube

    Blog ››› ALEX KAPLAN


    Sarah Wasko / Media Matters

    Google, through its advertising network Google AdSense, is monetizing multiple fake news websites spreading a bogus story that actor Bruce Willis wants critics of President Donald Trump to move out of the United States. Additionally, the made-up story is featured prominently on YouTube, which is owned by Google. This is just the latest example of Google floundering in its supposed efforts to fight fake news.

    On November 27, Snopes.com flagged a “made-up news story” that circulated on fake news websites alleging that actor Bruce Willis said Trump was “doing great. In fact, he just might be the best US President ever.” The fake news articles additionally claimed that Willis said Trump’s critics should “go to Canada or something.” As Snopes noted, the fake story was based on an October 2015 appearance by Willis on The Tonight Show Starring Jimmy Fallon where he dressed up as Trump.

    The fake story has gone viral, spreading to multiple fake news websites. Combined, the posts have received well over 100,000 Facebook engagements, according to social media analytics website BuzzSumo. Several of the websites running the story are using Google AdSense -- identifiable by the blue triangle in the top right corner -- to make money off of the fake story. (A previous Media Matters report found that Google AdSense was one of the most widely used advertising networks by fake news websites.) At least one of these posts with AdSense advertisements is on a website registered in Denmark.

    In addition to Google AdSense monetizing the fake story about Willis, YouTube featured the fake story in its top results when one searches for “Bruce Willis.” YouTube videos promoting the made-up story have received over 85,000 views combined. Along with its parent company Google, YouTube has also claimed it has taken steps to address fake news.

    Google has struggled to stop misinformation being spread through its platforms. Last week, a Google search featured a fake story about actor Keanu Reeves. Similarly, AdSense also featured advertisements on some of the websites pushing that made-up story. The fake story also remains the top result when Reeves’ name is typed into YouTube -- a full week after multiple outlets flagged it. Google’s search platform has also featured scams and false claims from far-right message boards, while Google AdSense has continued to monetize other fake news stories and race-based lies.

  • Google is promoting and monetizing a website’s fake story about Keanu Reeves

    Blog ››› GRACE BENNETT


    Sarah Wasko / Media Matters

    Google has a serious problem with promoting and monetizing fake news, as a recent fabricated story about actor Keanu Reeves has shown once again. The story, which originated from a fake news website, was still one of the first results for an incognito search of Reeves’ name in the news section of Google’s platform even after it was flagged as false, and the company’s advertising network, AdSense, is helping monetize the same fallacious story.

    On November 19, prominent fake news website YourNewsWire published a post headlined “Keanu Reeves: Hollywood Elites Use ‘Blood Of Babies’ To Get High” that claimed Reeves said that “Hollywood elites use ‘the blood of babies to get high’” and that “‘these people believe the more innocent the child, and the more it suffered before it died, the better the high.’”

    The story was blatantly fake, but it quickly spread. It was posted on both of YourNewsWire’s associated Facebook pages, as well as on other fake news websites, some of which are funded by ads from Google AdSense (identifiable by the blue triangle in the top right corner), one of the advertising services most widely used by fake news websites. On November 21, the story appeared on the fake news website The Weekly Observer, under the headline “Keanu Reeves warns that the elite of Hollywood drinks blood from babies.”

    Despite the blatantly false claims in The Weekly Observer’s post, Google has lent legitimacy to the story by including it in its news results. It appears as the third result in a search for "Keanu Reeves." (Results are from a search conducted in an incognito window; a user's search history and preferences may affect what they see in a normal window.)

    Not only is Google helping the story spread by presenting it prominently in its search results, the company is also allowing The Weekly Observer to monetize its fake story by permitting Google AdSense ads to appear on the page. Since 2011, advertisements placed by Google have been identifiable by a blue triangle icon and the words “AdChoices.” Several such advertisements are visible on The Weekly Observer’s post.

    In recent testimony in front of the House intelligence committee, Google’s senior vice president and general counsel claimed that the company has “taken steps” to demonetize misrepresentative websites. According to Google AdSense’s policies, “Google ads may not be placed on pages that misrepresent, misstate, or conceal information about you [or] your content.” Despite the company’s promises to prevent the monetization of misrepresentative and fallacious content, Google ads still regularly appear alongside fake stories, and Google AdSense is still profiting off of multiple fake news sites.

  • Fake news website YourNewsWire deletes bogus story about Keanu Reeves

    Facebook, YouTube, and Google all helped the story spread

    Blog ››› ALEX KAPLAN


    Sarah Wasko / Media Matters

    Prominent fake news website YourNewsWire deleted a fabricated story headlined “Keanu Reeves: Hollywood Elites Use ‘Blood Of Babies’ To Get High” after the site was called out for its lie on Twitter -- but not before the post was widely circulated on verified Facebook pages, converted into a monetized YouTube video that became the top result when searching for the actor’s name on YouTube, and posted on other fake news sites that feature ads placed by Google. This is just the latest example of tech platforms aiding in the spread of misinformation from fake news websites.

    On November 19, YourNewsWire published a post claiming Reeves said that “Hollywood elites use ‘the blood of babies to get high’” and that “‘these people believe the more innocent the child, and the more it suffered before it died, the better the high.’” The story was blatantly fake, but it quickly spread. It was posted on both of YourNewsWire’s associated Facebook pages, both of which are verified by Facebook. One of those pages is called The People’s Voice; the other, called YourNewsWire after the website, recently lost its verification under unexplained circumstances, but has since gotten it back. The false story about Reeves received more than 26,000 Facebook engagements, according to BuzzSumo. The Facebook posts have since been deleted.

    The story was also posted on other fake news websites, some of which are funded by ads from Google AdSense, one of the advertising services most widely used by fake news websites. (YourNewsWire’s article displayed ads via Revcontent, another ad service used frequently by fake news sites.)

    Additionally, a video pushing the fake story from the account Kinninigan for a time became the top result for a search of Reeves’ name on YouTube, which is owned by Google and which has struggled to not feature misinformation on its platform. It has been viewed over 114,000 times and is monetized with ads as well; Media Matters found on the video an ad for the movie Lady Bird. In effect, YouTube, Google, and Kinninigan are all potentially making money from this video claiming that Reeves said “Hollywood elites” get high from drinking baby blood. (Kinninigan’s account features a number of videos of YourNewsWire content, as well as conspiracy theory videos and videos about various celebrities such as Sofia Vergara, Angelina Jolie, and Hillary Clinton being reptilian shapeshifters.)

    After some people on Twitter promised to flag the article as fake news on Facebook following a tweet from a Media Matters researcher who had called out the fake story, YourNewsWire's owner, Sean Adl-Tabatabai, lashed out. Adl-Tabatabai, who has openly stated that he believes facts are not sacred, tweeted:

    Although YourNewsWire took down the fake story after it was called out, the damage had already been done: other websites have now picked up the story and people are still sharing it on Facebook and elsewhere, as noted by Mashable.

    The tech companies that contribute to the spread of fake news and profit from these stories are, at least in part, responsible for them. They have also, as noted by BuzzFeed's Charlie Warzel, repeatedly bungled handling the spread of misinformation. By verifying YourNewsWire’s pages, Facebook -- which claims to be committed to fighting fake news on its platform -- is implicitly indicating to its users that the website has some kind of legitimacy, which it clearly does not merit. And YourNewsWire is not alone; although Facebook appears to have removed at least one verified page for a fake news website and blocked its links, plenty of other fake news websites’ Facebook pages remain verified.

    All of these companies should be aware that YourNewsWire is a bad actor. The site, which was founded in 2014, has come under fire for repeatedly publishing fake stories like a dying former MI5 agent confessing to killing Princess Diana, former Democratic presidential nominee Hillary Clinton helping run a pedophilia ring from the basement of a D.C. family pizzeria (which in fact led a gunman to open fire in the restaurant), and actor Morgan Freeman wanting Clinton to be jailed. Some of YourNewsWire’s fake stories about Clinton and about former President Barack Obama have even been pushed by Fox News’ Sean Hannity. The website, which American and European experts have called a Russian proxy, has also published fake stories that seem to fit Russia’s anti-democratic, anti-European Union (EU), and anti-George Soros agenda. (The website has also been promoted by what appears to be a revived version of @TEN_GOP, a Russian account that was run by the Kremlin-linked Internet Research Agency.) And recently, the website published a fake story that the gunman involved in the massacre in Sutherland Springs, TX, was a member of antifa; the false story went viral and received more than a quarter million Facebook engagements, according to social media analytics website BuzzSumo.

  • Google is still profiting from a racist fake news site that promotes violence 

    Freedom Daily hosts Google ads right next to racist content that advocates for anti-Muslim violence

    Blog ››› KATIE SULLIVAN & NATALIE MARTINEZ

    Google advertising network AdSense is helping monetize racist and dangerous anti-Muslim articles from the fake news website Freedom Daily, despite clear violations of AdSense’s policies prohibiting content that "[t]hreatens or advocates for harm on oneself or others" or “incites hatred against” an individual or group on the basis of race or religion.

    Racist content is part of Freedom Daily’s DNA. Between September 15 and October 15, the site published over 100 articles about the recent NFL player protests in support of civil rights, many of which denigrated black players because they are black and falsely claimed that the protests led to anti-white attacks. And Freedom Daily’s posts about NFL players are just part of its targeted racist content, which includes articles promoting violence and hatred against Muslims.

    In recent testimony in front of the House Intelligence Committee, Google’s senior vice president and general counsel claimed that the company has “taken steps” to demonetize misrepresentative websites and has added to its policies “around or against hate speech.” According to Google AdSense’s policies, “We don’t permit monetization of dangerous or derogatory content”:

    Dangerous or derogatory content

    We believe strongly in freedom of expression, but we don't permit monetization of dangerous or derogatory content. For this reason, Google ads may not be placed on pages containing content that:

    • Threatens or advocates for harm on oneself or others;
    • Harasses, intimidates or bullies an individual or group of individuals;
    • Incites hatred against, promotes discrimination of, or disparages an individual or group on the basis of their race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, or other characteristic that is associated with systemic discrimination or marginalization.

    But Google AdSense is still profiting off of multiple fake news sites, including Freedom Daily and its related site mPolitical (mPolitical features the same posts and bylines as Freedom Daily, and the two sites share a Google ad code), by providing advertising to them in a clear contradiction of its policies (which also prohibit “misrepresentative content”). Here are 10 posts from Freedom Daily swarmed by ads from AdSense that Google might want to take a look at:

    Note: The blue arrow in the corner of an ad indicates that it was placed by AdSense.

    “Desperate Tennessee residents are frantically crying out for help, as the small town they once loved and cherished is being completely overrun by hordes of nasty Muslim migrants.”


     

    A 2017 Freedom Daily story, published at least twice, claimed "nasty Muslim migrants" from Somalia were taking over a Tennessee town and targeting Christians. Snopes looked at the July 25 version and rated it false, stating, “Based on the evidence we were able to amass, Shelbyville, Tennessee doesn’t have a Somali refugee problem; Freedom Daily, on the other hand, does” (emphasis original):

    As many Americans were left in shock and horror following the Minnesota woman being violently shot down by a Somali Muslim cop, now Somali migrants have set their sights on another American city to invade. Desperate Tennessee residents are frantically crying out for help, as the small town they once loved and cherished is being completely overrun by hordes of nasty Muslim migrants, who have now started targeting local Christians with violent Islamic crimes.

    [...]

    Somali Muslims are proving to be the most violent migrants due to the vicious jihadist-ruled country they’re plucked from before making their way to the United States. Somali Muslims are not only overtaking Shelbyville and forcing Sharia onto the populace, but the city of Minneapolis as well. In addition to the disturbing headlines of the Somali cop gunning down a white woman last week, Somali migrants completely took over a small Minnesota white neighborhood several months back and started threatening to rape and murder females who were standing in their yards. [Freedom Daily, 7/25/17,  11/1/17; Snopes, 7/31/17]

    Michelle Obama “Goes Full-Ghetto” by attending a tour of the Siena Cathedral in a “slutty” shirt.

    An October piece said former first lady Michelle Obama entered an Italian church wearing “a slutty off-the-shoulder top,” claiming she was going “full-ghetto”:

    For the past 8 years, Michelle Obama strutted around like a queen, doing very little work as the First Lady other than running her big fat anti-American mouth. While we’re thankful this piece of work has finally left our White House after making a mockery of our country, what Michelle was caught doing inside a ancient church on Tuesday proves once again just how classless this woman is, who obviously holds herself above all the rules, even in the places of worship that most people revere as the sacred and holy place of God.

    [...]

    Showing up for a tour of Siena Cathedral in Italy looking like she had been up all night partying at a strip club, many people were quite astonished at the amount of skin Michelle was showing, as flaunted a slutty off-the-shoulder top that was a strict violation of the Siena Cathedral’s rules. [Freedom Daily, 10/18/17]

    Headline calls Palestinian people “Cockroaches.”

    A June 2016 piece called Palestinians “cockroaches,” saying they were celebrating a terror attack. [Freedom Daily, 6/8/16]

    "OF COURSE….Muslim Machete Attacker Was Here On Green Card From Africa"

    [Freedom Daily, 2/23/16]

    “ISLAM Needs to Be ERADICATED From the Earth.”

    Freedom Daily published a post about an ISIS propaganda video headlined “What This Muslim Father Did to His Son Proves ISLAM Needs to Be ERADICATED From the Earth.” [Freedom Daily, 2/24/16]

    Michael Brown’s family “Hit The Ghetto Lottery.”

    A July piece described a settlement given to the family of Michael Brown -- a black teenager who was fatally shot by a white police officer in 2014 while unarmed -- as a win of “the ghetto lottery.” The post went on to attack Brown’s family for having the “audacity” to argue that “the death of the two-bit thug” would deprive the family of potential wages. From the July 2 article:

    Take note America, In today’s America if you are a minority and raise a two-bit thug who gets himself killed for intimidating people and attacking a police officer the government will reward you with so much money that you will be able to fly around to exotic locations on your very own private jet. This is what Obama must have meant by the slogan “Hope and Change.”

    [...]

    Although their son didn’t even live with them, they actually had the audacity to argue that the death of the two-bit thug that they raised deprived them of financial support through his future potential wages. Umm, you won’t make 1.5 million dollars working at McDonald’s, if he was even qualified to work there.

    [...]

    This is Obama’s America. This is what we, as a nation, got for electing a race baiting charlatan with a name like Barack Hussein Obama. We are in serious trouble when you raise a kid to be a two-bit thug and the government, out of fear of looting and riots, takes it upon its self to reward you with a 1.5 million dollar payday because society had to fix the problem you raised. [Freedom Daily, 7/2/17; The Associated Press, 6/23/17]

    Man “driving a van [that] plowed through a group of Muslims” gave them a “taste of their own medicine.”  

    A June piece said that “political correctness and liberal politicians are proving to be Islam’s biggest advocate,” while “one resident finally snapped” and attacked Muslims outside a mosque:

    Authorities have done little to prevent terror attacks from plaguing Europe. Muslim refugees continue to flood places like the U.K. and Germany, while citizens continue to be brutally victimized with terror attacks, as political correctness and liberal politicians are proving to be Islam’s biggest advocate. As the U.K. continues to be targeted in recent months, one resident finally snapped and decided to give Muslims a brutal taste of their own medicine after what he did to a group of Muslims who had gathered for prayers outside their mosque a little after midnight last night. [Freedom Daily, 6/19/17]

    “Americans aren’t the ones who should run and hide and live like victims, it’s Muslims who need to be afraid. They don’t run this show anymore.”

    An October piece cheered a Louisiana sheriff-turned-lawmaker for “sending a message” that it’s “Muslims who need to be afraid”:

    Finally, someone has come out and given an assertive directive Americans have been desperate to hear for eight years. With a new sheriff in town, this former sheriff is empowered to give blunt advice like this rather than apologies. While it may seem like an impossibility to cleans the country of Islam, it’s a lot more feasible than loving terrorist out of committing acts of terror. More importantly, it’s sending a message that Americans aren’t the ones who should run and hide and live like victims, it’s Muslims who need to be afraid. They don’t run this show anymore. [Freedom Daily, 10/15/17]

    Sesame Street is “threatening indoctrination” by showing children “a heaping spoonful of Islamic propaganda.”

    An April 2016 piece criticized the TV show Sesame Street for “having a Muslim woman teach young girls about female empowerment”:

    Instead of sticking with basic education like numbers, the alphabet, and the occasional sing-along about treating others with kindness, the adored children’s program is kicking off their near 50-year run with perhaps the most threatening indoctrination yet. One-upping controversial episodes on topics including HIV, death, and Katy Perry’s barely-there wardrobe, the puppet-laden television show just gave our children a heaping spoonful of Islamic propaganda.

    [...]

    Having a Muslim woman teach young girls about female empowerment is like having a slave teach a free people about emancipation. How can one who is still chained speak of freedom? There is no personal experience to backup their claims. Even more so, they are the definition of a bound servant who believes they are free. [Freedom Daily, 4/10/16]

  • Google AdSense is sponsoring fake news about the Texas church massacre

    Google’s AdSense, along with Revcontent and content.ad, is helping to fund fake news about the shooting

    Blog ››› ALEX KAPLAN

    Google is continuing to allow the monetization of fake news via its advertising network AdSense, this time surrounding the November 5 mass shooting in Sutherland Springs, TX. Advertising networks Revcontent and content.ad are also featuring advertisements on fake news stories about the attack.

    On November 5, a gunman opened fire and killed at least 26 people at a church in Sutherland Springs, TX. The alleged gunman, Devin Patrick Kelley, was court-martialed while in the Air Force in 2012 on charges of “assaulting his wife and child” and has been accused of stalking ex-girlfriends. Law enforcement officers are now saying that the shooting was related to “a domestic situation.”

    A Media Matters search found that Google’s AdSense supplied advertisements for many websites pushing the fake news that Kelley was a member of the antifascist group antifa, with many seeming to base their pieces on a fake news article from prominent fake news website YourNewsWire. Those websites included Real Farmacy, USN Politics, myinfonews.net, Clear Politics, SBVNews, RedStateWatcher, and TruthFeed.

    Some of the websites using AdSense, such as Clear Politics and SBVNews, also carried advertisements from content.ad, while TruthFeed also featured advertisements from Revcontent. Other websites that pushed the baseless claim but did not use AdSense, such as Conservative Fighters, The Conservative Truth, and borntoberight.com, featured advertisements from Revcontent or content.ad instead, as did the original YourNewsWire piece. (That article went viral, drawing at least 235,000 Facebook engagements within almost 24 hours of the attack, according to social media analytics website BuzzSumo, and was shared on gun parts manufacturer Molon Labe Industries’ Facebook page.)

    Another false claim about the shooting came from Freedum Junkshun, a “satire” website run by a man whose made-up stories have been used by fake news websites to misinform. It claimed that the shooter “was an atheist” on the payroll of the Democratic National Committee. That article was funded via advertisements from both AdSense and content.ad. And fake news website Freedom Daily, which has repeatedly violated AdSense’s rules against race-based incitement of hatred, published the false claim that the shooter was a Muslim convert named Samir Al-Hajeed. AdSense advertisements funded that article.

    It isn't just Google's advertising service that is struggling with how to handle fake news; among the top Google search results for Kelley’s name following the attack were tweets and a video that also baselessly claimed he was a member of antifa. YouTube, which Google owns, also prominently featured a video pushing the false claim as one of the top results for the alleged shooter’s name.

    In early November, a Google senior executive testified before Congress that the company had “taken steps” to demonetize misrepresentative websites. Yet the fact that multiple websites are using AdSense to monetize misinformation about the Texas mass shooting signals otherwise. Indeed, AdSense, Revcontent, and content.ad have generally become the advertising networks of choice for those who push fake news. And this comes amid continuing criticism of Google’s failure to keep misinformation from appearing during and after crisis events. These companies clearly have a long way to go to fix their misinformation problem.

  • Las Vegas shooting shows Facebook, Google, and YouTube's misinformation problem

    Blog ››› ALEX KAPLAN


    Google, Facebook logos

    A page set up by Facebook to keep the public up to date on the October 1 Las Vegas shooting, along with searches on Google and YouTube regarding the shooting, show the struggle these platforms still have in combating fake and dubious news.

    During the 2016 election campaign, fake news was widely shared on Facebook, including in its “trending topics” section. In response to intense criticism after the election, Facebook said it tried to take measures to limit the spread of fake news. Yet the company disclosed in September that hundreds of fake Russian accounts bought tens of thousands of dollars worth of advertisements, and reports continue to come out about Russia’s use of Facebook to interfere in the election.

    Following a shooting on October 1 at a Las Vegas, NV, concert that killed at least 58 people, Facebook created a crisis response page called “The Violent Incident in Las Vegas, Nevada,” where people in the area could confirm that they were safe and users could find ways to support the victims. The page also has an “about” section with links to articles about the shooting, which seemed to appear and then disappear after a certain period of time.

    While many of the articles on the page appeared to come from legitimate sources, some did not, and those dubious links even appeared toward the top of the page at certain points. One article that appeared on the page came from TruthFeed, a fake news purveyor that has pushed baseless conspiracy theories and other false claims. Additionally, the page at one point featured a link toward the top to an article from theantimedia.org, which was itself a reprint of an article from fringe blog Zero Hedge. Zero Hedge has a history of pushing conspiracy theories and has shared forged documents targeting then-French presidential candidate Emmanuel Macron. At another point, the Facebook page also featured, toward the top, an article from consistently inaccurate far-right pro-Trump blog The Gateway Pundit, which had already been forced to delete a post accusing the wrong man of being the Las Vegas shooter earlier that day. It also featured a link to a blog called Alt-Right News, which wrote about the shooting “from an Alt-Right perspective.”

    Facebook’s heavy use of algorithms appears to still be harming the website’s ability to block misinformation and nefarious usage of its platform. Besides its crisis page, Facebook's trending topic page for the shooting featured multiple articles from Sputnik, an outlet funded by the Russian government that is currently under investigation by the FBI for possibly violating the Foreign Agents Registration Act.

    And Facebook is not the only platform having problems following the Las Vegas shooting. Google featured in its news section a false claim from 4chan's "politically incorrect" message board (commonly referred to as "/pol/"), which Google blamed on algorithms and absurdly referred to as a "4chan story." And on YouTube, which is owned by Google, a conspiracy theory that the Las Vegas shooter was an "Anti Trump Far Left Activist" is one of the top results if the alleged shooter's name is typed into the search bar. If Facebook and Google cannot get a handle on their misinformation problem, more dubious sources will continue to roam their platforms, earning wide exposure for their misinformation.

  • These three advertising networks are powering fake news

    We looked at 100 fake news purveyors, and 84 of them use at least one of three advertising networks

    Blog ››› ››› ALEX KAPLAN


    Sarah Wasko / Media Matters

    A Media Matters review of 100 websites that publish fake news found that 84 percent use at least one of three specific advertising networks for revenue.

    Much of the public criticism about the proliferation of fake news in the past year has focused on social media platforms like Facebook and Twitter. While those platforms are vital in driving traffic to purveyors of fake news, less attention has been devoted to the series of advertising networks that help fake news websites turn those clicks into money. Creating revenue streams for websites that post this sort of content gives them an incentive to spread misinformation. For example, CNN reported in September that fake news purveyors from Macedonia, where much of this type of content originates, get their “profits … primarily from ad services such as Google’s AdSense.”

    The review found that the examined fake news purveyors use three advertising networks far more than others: Google AdSense, Revcontent, and Content.ad. AdSense appeared on 41 fake news-purveying websites, Revcontent on 40, and Content.ad on 36. The websites don’t use these networks exclusively, often employing multiple advertising networks concurrently.
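    Media Matters did not publish code for this review, but the tallying described above can be sketched as a scan of each site's HTML for the networks' known ad-tag domains. A minimal illustration follows; the marker strings, helper names, and sample pages are all illustrative assumptions, not the actual methodology:

    ```python
    # Hypothetical sketch: identify which ad networks a page uses by looking
    # for each network's known script/tag domains in the page's HTML source.
    # The marker strings below are assumptions for illustration.
    AD_NETWORK_DOMAINS = {
        "Google AdSense": ["pagead2.googlesyndication.com", "adsbygoogle"],
        "Revcontent": ["revcontent.com"],
        "Content.ad": ["content.ad"],
    }

    def detect_networks(html: str) -> set:
        """Return the set of ad networks whose markers appear in the HTML."""
        html = html.lower()
        return {
            network
            for network, markers in AD_NETWORK_DOMAINS.items()
            if any(marker in html for marker in markers)
        }

    def tally(pages: dict) -> dict:
        """Count, per network, how many pages carry it (a page may use several)."""
        counts = {network: 0 for network in AD_NETWORK_DOMAINS}
        for html in pages.values():
            for network in detect_networks(html):
                counts[network] += 1
        return counts

    # Toy input standing in for the 100 fetched front pages.
    sample_pages = {
        "example-fake-news.com": (
            '<script src="https://pagead2.googlesyndication.com/pagead/js/'
            'adsbygoogle.js"></script>'
            '<script src="https://cdn.revcontent.com/build/serve.js"></script>'
        ),
        "another-site.com": '<script src="https://api.content.ad/Scripts/widget2.aspx"></script>',
    }
    print(tally(sample_pages))
    # → {'Google AdSense': 1, 'Revcontent': 1, 'Content.ad': 1}
    ```

    Because sites often load several networks at once (as the review notes), the per-network counts can legitimately sum to more than the number of sites examined.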

    Google and Taboola (the fifth most used advertising network found in this study) told BuzzFeed in April that they accepted many websites that published false claims because “the content is intended to be satirical” -- a weak cover for what’s clearly misinformation. In fact, of the 100 websites examined in this review, none explicitly say they are satirical. And most of them have published fake stories that have been debunked by fact-checkers such as FactCheck.org, Snopes, and PolitiFact. Simply put, these websites aim to intentionally deceive.

    Some of these fake news purveyors also appear to violate these advertising networks’ terms of services. For example, Revcontent’s terms of service prohibit content that is “pornographic, hate-related or otherwise violent in content.” Yet it’s been used to monetize websites that publish fake stories such as a made-up Supreme Court decision that “pissed off every Muslim in America” ("it's about time" says the headline) or the baseless Pizzagate conspiracy theory that caused a gunman to open fire inside a Washington, D.C., pizzeria.

    The terms of service of Content.ad, another widely used advertising network found in this study, prohibit content “that contains pornographic, hate material, gambling related material or any other material deemed illegal or offensive by Content.ad.” Yet the fake news purveyors using its services published articles such as “Texas Mosque Refuses To Help Refugees: ‘Allah Forbids Helping Infidels’” (which was made up) and the fake Muslim Supreme Court story.

    Google's AdSense prohibits content that viewers are enticed to click on “under false or unclear pretenses,” but the fake news purveyors using its services published articles such as “Obama’s Tax-Skipping - Audit Shows Millions In Offshore Accounts” (false) and “Clinton Foundations Sends Water To Houston...For $7 A Bottle” (the foundation did nothing of the sort).

    The sample of websites examined shows that these three advertising networks clearly have a problem with fake news. In fact, some of these companies seem to have acknowledged fake news purveyors’ widespread use of their networks and tried to avoid the problem rather than confront it. In January, Google changed its “prohibited content” policy to no longer directly mention “fake news articles,” although it assured Media Matters that the change in wording had “not changed our misrepresentative content policy in any way.” But in May, Recode reported that Google would start removing advertisements mainly from individual web pages, as opposed to entire websites, which Recode characterized as a “more lenient” policy.

    Revcontent claimed to USA Today in August that it was “a challenge to keep up with violations when content can be changed at anytime without the company knowing” and told Digiday back in November that “it doesn’t want to be in a censorship role.” Yet it dubiously told BuzzFeed months later that it has “some of the most stringent standards out there.” Content.ad has previously refused to comment on its role in funding fake news.

    Whether they like it or not, advertising networks are playing a major role in the spread of fake news, making money for websites that spread misinformation and mislead the public.

  • Story about Houston mosque refusing to help non-Muslims is fake news

    Right-wing fake news websites run wild with bogus story using the image of a Canadian scholar and imam

    Blog ››› ››› ALEX KAPLAN

    Multiple fake news purveyors are pushing a story originating from a supposedly satirical website alleging that a Houston-area mosque is refusing to take in non-Muslim victims of Hurricane Harvey because of their religion. At least one of the fake news purveyors pushing the story is funded by Google AdSense.

    On August 31, The Last Line of Defense, a website that claims it is satirical but has been repeatedly promoted by fake news purveyors as if its stories were legitimate, published a piece claiming the “Ramashan Mosque outside of Houston” refused to take in “any non-Muslim people” impacted by Harvey “because it’s against their religion.” The next day, the website published another piece claiming “flood refugees banded together and kicked in the door to the mosque” to let themselves in. The August 31 article received more than 118,000 Facebook engagements and 1,100 Twitter engagements, and the September 1 piece received more than 1,800 Facebook engagements, according to social media analytics website BuzzSumo. The articles were also shared as seemingly real news by fringe conservative media personality Holly Henderson and by Sharon McGonigal, a former parliamentary candidate for the anti-Muslim UK Independence Party.

    Qasim Rashid, a spokesperson for Ahmadiyya Muslim Community USA, subsequently pointed out that “no such mosque exists.” In fact, mosques in Houston have been actively aiding people impacted by Harvey. One of the men pictured in the Last Line of Defense articles, Ibrahim Hindy, a Canadian imam, noted on Twitter that he had “never even been to Texas before.” (Hindy told Toronto’s CityNews that he was actually in Saudi Arabia completing an annual Muslim pilgrimage when the hurricane hit Texas.)

    Multiple fake news purveyors ran with the made-up Last Line of Defense story, including All News 4 USA (which published both pieces and the photo of Hindy), American President Donald J. Trump, and Daily Post Feed. The Daily Post Feed article received more than 9,400 Facebook engagements and at least 1,000 Twitter engagements, according to BuzzSumo, and was also shared on Reddit’s “r/The_Donald,” a forum that has previously helped far-right trolls and fake news purveyors spread misinformation. Multiple ad services are funding these fake news purveyors: the fake news articles about Harvey and the mosque carry sponsored ads from networks including Content.ad, Google AdSense, Criteo, TrustArc, and MGID. Google has previously promised to fight the use of AdSense to fund fake news.

    The fake news story comes as fake news purveyors continue to push other fake news surrounding Harvey, some of which has even reached Fox News. It is also yet another example of fake news purveyors’ rampant Islamophobia.

  • Yahoo News aggregates a right-wing fake news website

    Blog ››› ››› ALEX KAPLAN


    Sarah Wasko / Media Matters

    Yahoo News aggregated a highly misleading article containing fake news, raising the question of how the company came to treat a fake news purveyor as a legitimate news source.

    On July 5, Yahoo News aggregated an article on its website from Conservative Daily Post (CDP) headlined “U.N. Chief Makes Stunning Paris Agreement Admission: ‘President Trump Was Right.’” The Yahoo News page linked to CDP for the full article, which does not include a mention of United Nations Secretary General António Guterres saying the phrase quoted in the headline. In fact, it appears that Guterres has never said “President Trump was right” at all; on the contrary, in May he stated, “We believe it would be important for the US not to leave the Paris agreement.”

    The CDP article aggregated by Yahoo News also claimed that people opposed to Trump’s decision to withdraw from the Paris climate agreement "are going to go to any lengths to convince you, even lie to you on CNN” because they “are losing a lot of money.” Additionally, the article pushed climate denial, falsely claiming that the agreement “aims at guilt tripping and deceiving people into believing that human CO2 is responsible for rising temperatures on Earth” and that “there is a very strong case that the sun is mostly responsible for rising CO2 levels, not human beings.”

    CDP is a serial fake news purveyor. During the 2016 presidential campaign, it falsely claimed that the FBI was looking into “at least 6 members of Congress and several leaders from federal agencies that partake in" a "pedophile ring, which they say was run directly with the Clinton Foundation as a front,” citing the “alt-right”-affiliated and conspiracy-driven 4chan forum /pol/. Later that month, the website falsely claimed that Trump would seek to criminally charge those who burn the American flag. Last August, it reported as fact a satirical article claiming then-President Barack Obama would move to Canada if Trump won the presidential election. And last July, it appeared to make up a story about an undocumented immigrant being fired from McDonald’s for telling police officers, “We don't serve pigs." The website also recently pushed a dubious claim from “alt-right” troll Jack Posobiec that then-FBI Director James Comey dropped an investigation into former national security advisor Susan Rice for allegedly “requesting the ‘unmasking’ of ... identities” of “US individuals” who were connected to Trump and had been caught in surveillance. And it used photographer Laura Hunter’s name and photo as a byline for some of its articles, turning Hunter, who leans “a bit more to the liberal side,” into a “fake far-right blogger.” In response, Hunter sued the website.

    Yahoo News is a regular aggregator of other news sources, including The Associated Press and Reuters, but it would be a highly alarming and unfortunate editorial choice for Yahoo to aggregate fake news. As other platforms such as Google and Facebook continue to struggle in their fight against fake news, it is critical for major websites like Yahoo to not drive traffic and give credibility to websites that push fake news and misinformation.

  • Experts Explain Why Google, Facebook, And The Media's Approaches To Combating Fake News Are Flawed

    Blog ››› ››› MEDIA MATTERS STAFF

    In a Los Angeles Times op-ed, professors from Harvard and Northeastern University argued that tech giants and social media platforms such as Google and Facebook are not taking the steps necessary to stem the spread of fake news online, such as displaying suspect stories less prominently in search results and feeds. The op-ed also suggested that media outlets stop repeating false claims in headlines.

    Facebook, with its algorithm that allows fake news stories to go viral, and Google, with its advertising service that continues to fund multiple fake news purveyors, have become two of the largest platforms on which fake news stories and their purveyors spread and grow. Although they have taken some steps to address the issue, such as recruiting fact-checkers, the platforms still continue to host fake news. And media outlets, some of which have been recruited by Facebook to fact-check potential fake news stories, can inadvertently spread fake news by repeating dubious or false claims in their headlines -- a practice with which many have struggled.

    In their May 8 op-ed, Harvard professor Matthew Baum and Northeastern University professor David Lazer -- who recently co-authored a report on combating fake news that made several suggestions for stemming its proliferation -- wrote that “the solutions Google, Facebook and other tech giants and media companies are pursuing aren’t in many instances the ones social scientists and computer scientists are convinced will work.” The article cited research to explain that “the more you’re exposed to things that aren’t true, the more likely you are to eventually accept them as true.” Instead, they urged the platforms to “move suspect news stories farther down the lists of items returned through search engines or social media feeds.” They added that while “Google recently announced some promising steps in this direction,” such as “responding to criticism that its search algorithm had elevated to front-page status some stories featuring Holocaust denial,” “more remains to be done.” The professors also called on “editors, producers, distributors and aggregators” to stop “repeating” false information, “especially in their headlines,” in order to be “debunking the myth, not restating it.” From the op-ed:

    We know a lot about fake news. It’s an old problem. Academics have been studying it — and how to combat it — for decades. In 1925, Harper’s Magazine published “Fake News and the Public,” calling its spread via new communication technologies “a source of unprecedented danger.”

    That danger has only increased. Some of the most shared “news stories” from the 2016 U.S. election — such as Hillary Clinton selling weapons to Islamic State or the pope endorsing Donald Trump for president — were simply made up.

    Unfortunately — as a conference we recently convened at Harvard revealed — the solutions Google, Facebook and other tech giants and media companies are pursuing aren’t in many instances the ones social scientists and computer scientists are convinced will work.

    We know, for example, that the more you’re exposed to things that aren’t true, the more likely you are to eventually accept them as true. As recent studies led by psychologist Gordon Pennycook, political scientist Adam Berinsky and others have shown, over time people tend to forget where or how they found out about a news story. When they encounter it again, it is familiar from the prior exposure, and so they are more likely to accept it as true. It doesn’t matter if from the start it was labeled as fake news or unreliable — repetition is what counts.

    Reducing acceptance of fake news thus means making it less familiar. Editors, producers, distributors and aggregators need to stop repeating these stories, especially in their headlines. For example, a fact-check story about “birtherism” should lead by debunking the myth, not restating it. This flies in the face of a lot of traditional journalistic practice.

    [...]

    The Internet platforms have perhaps the most important role in the fight against fake news. They need to move suspect news stories farther down the lists of items returned through search engines or social media feeds. The key to evaluating credibility, and story placement, is to focus not on individual items but on the cumulative stream of content from a given website. Evaluating individual stories is simply too slow to reliably stem their spread.

    Google recently announced some promising steps in this direction. It was responding to criticism that its search algorithm had elevated to front-page status some stories featuring Holocaust denial and false information about the 2016 election. But more remains to be done. Holocaust denial is, after all, low-hanging fruit, relatively easily flagged. Yet even here Google’s initial efforts produced at best mixed results, initially shifting the denial site downward, then ceasing to work reliably, before ultimately eliminating the site from search results.

    [...]

    Finally, the public must hold Facebook, Google and other platforms to account for their choices. It is almost impossible to assess how real or effective their anti-fake news efforts are because the platforms control the data necessary for such evaluations. Independent researchers must have access to these data in a way that protects user privacy but helps us all figure out what is or is not working in the fight against misinformation.

  • Shorenstein Report Identifies Steps For Stemming The Spread Of Fake News

    Blog ››› ››› ALEX KAPLAN

    A new report from the Harvard Kennedy School’s Shorenstein Center on Media, Politics, and Public Policy, which examined fake news and misinformation in the media ecosystem, has identified possible steps that academics, internet platforms, and media outlets could take in the short term to help stem the spread of fake news.

    Fake news -- information that is clearly and demonstrably fabricated and that has been packaged and distributed to appear as legitimate news -- was a major problem during the 2016 election, and such misinformation continues to be pervasive. Websites that spread fake news, which Media Matters has dubbed fake news purveyors, have additionally become part of an ecosystem with the “alt-right” that also spreads other kinds of misinformation, such as dubious claims and conspiracy theories. Aides and allies of President Donald Trump have also pushed articles from fake news purveyors and from the “alt-right”/fake news ecosystem, helping spread their reach.

    The Harvard report provides an overview of misinformation in the current media ecosystem, discusses the psychology of fake news, identifies potential areas for further research on the topic, and presents three possible approaches to addressing the problem of fake news in the short term.

    Making The Fight Against Fake News Bipartisan

    First, the report explains that “bringing more conservatives into the deliberation process about misinformation is an essential step in combating fake news,” adding that fake news, “for the moment at least,” is a problem on “predominantly the right side of the political spectrum.” It further notes that corrections to fake news are “most likely to be effective when coming from a co-partisan with whom one might expect to agree.” From the report:

    Bringing more conservatives into the deliberation process about misinformation is an essential step in combating fake news and providing an unbiased scientific treatment to the research topic. Significant evidence suggests that fake news and misinformation impact, for the moment at least, predominantly the right side of the political spectrum (e.g., Lazer n.d., Benkler, 2017). Research suggests that error correction of fake news is most likely to be effective when coming from a co-partisan with whom one might expect to agree (Berinsky, 2017). Collaboration between conservatives and liberals to identify bases for factual agreement will therefore heighten the credibility of the endeavors, even where interpretations of facts differ. Some of the immediate steps suggested during the conference were to reach out to academics in law schools, economists who could speak to the business models of fake news, individuals who expressed opposition to the rise in distrust of the press, more center-right private institutions (e.g. Cato Institute, Koch Institute), and news outlets (e.g. Washington Times, Weekly Standard, National Review).

    Fake news is not inherently a conservative phenomenon, but as the report suggests, it is currently an asymmetric political problem. As a result, the media debate over fake news has become similarly partisan. Following the 2016 election, while some in right-wing media acknowledged the problem, other figures dismissed concerns about fake news as “silly” and called fake news simply “satire.” Along with the president and his administration, they have delegitimized the term “fake news” by using it to erroneously label credible news sources and have attacked the fact-checking organizations that social media platforms like Facebook partnered with to fight fake news. The report’s recommendations for conservative figures -- and ideas of organizations that could potentially be engaged -- could help serve as a counter to this reactionary backlash to the fight against fake news.

    Strengthening Reliable Information Sources And Broadening Their Reach

    Secondly, the report says that “we need to strengthen trustworthy sources of information,” partly by “seek[ing] stronger future collaborations between researchers and the media” and “support[ing] efforts to strengthen local reporting.” It also says that “the identification of fake news and interventions by platforms” appears to be “pretty straightforward,” suggesting that it would help to identify “the responsibilities of the platforms” where fake news spreads and get “their proactive involvement” in fighting it. From the report:

    [T]he apparent concentration of circulated fake news (Lazer et al., n.d.) makes the identification of fake news and interventions by platforms pretty straightforward. While there are examples of fake news websites emerging from nowhere, in fact it may be that most fake news comes from a handful of websites. Identifying the responsibilities of the platforms and getting their proactive involvement will be essential in any major strategy to fight fake news. If platforms dampened the spread of information from just a few web sites, the fake news problem might drop precipitously overnight. Further, it appears that the spread of fake news is driven substantially by external manipulation, such as bots and “cyborgs” (individuals who have given control of their accounts to apps). Steps by the platforms to detect and respond to manipulation will also naturally dampen the spread of fake news.

    Internet platforms like Facebook and Google have taken some steps to temper the spread of fake news. Facebook, for example, made an initial move to address the problems with its algorithms that allowed fake news to spread and become trending topics. Yet the website continues to verify fake news purveyors’ Facebook pages, lending them a sense of legitimacy, and misinformation continues to be disseminated via the social networking site. Meanwhile, Google is still allowing fake news purveyors to use its advertising network, as are other ad networks.

    Creating A Cooperative Infrastructure For Additional Research On Social Media And The Spread Of Misinformation

    Finally, the report calls for academics to partner with other companies and organizations to build a cooperative infrastructure for social media research and to help “develop datasets that are useful for studying the spread of misinformation online and that can be shared for research purposes and replicability.” The report details the value academics can bring to the study of how misinformation spreads, but notes that “accessing data for research is either impossible or difficult, whether due to platform constraints, constraints on sharing, or the size of the data”:

    With very little collaboration academics can still join forces to create a panel of people’s actions over time, ideally from multiple sources of online activity both mobile and non-mobile (e.g. MediaCloud, Volunteer Science, IBSEN, TurkServer). The cost for creating and maintaining such a panel can potentially be mitigated by partnering with companies that collect similar data. For example, we could seek out partnerships with companies that hold web panels (e.g. Nielsen, Microsoft, Google, ComScore), TV consumption (e.g. Nielsen), news consumption (e.g. Parsely, Chartbeat, The New York Times, The Wall Street Journal, The Guardian), polling (e.g. Pollfish, YouGov, Pew), voter registration records (e.g. L2, Catalist, TargetSmart), and financial consumer records (e.g. Experian, Axciom, InfoUSA). Of course, partnerships with leading social media platforms such as Facebook and Twitter are possible. Twitter provides APIs that make public data available, but sharing agreements are needed to collect high-volume data samples. Additionally, Facebook would require custom APIs. With more accessible data for research purposes, academics can help platforms design more useful and informative tools for social news consumption.

  • Google Is Funding Alex Jones' Harassment And Hate On YouTube

    Blog ››› ››› BRENNAN SUEN & KATIE SULLIVAN

    Alex Jones, a conspiracy theorist radio host who is one of President Donald Trump’s media sycophants, appears to be monetizing his content as part of the YouTube Partner Program even though Infowars' content regularly violates the program’s policies and guidelines for advertising. Jones’ YouTube videos and other content feature extreme anti-LGBTQ and racist commentary, and Infowars promotes conspiracy theories that have encouraged harassment of families that lost children in the Sandy Hook massacre and led to a gunman firing shots in a Washington, D.C., pizzeria.

    The YouTube Partner Program allows content creators to “monetize content on YouTube in many ways, including advertisements, paid subscriptions, and merchandise,” as long as their content is “advertiser-friendly” and meets YouTube’s “community guidelines.” Google, which owns YouTube, recently changed its advertising policies after major European corporations and the British government raised concerns over their ads being placed next to extremist content. In response, Google wrote that it was “raising the bar for our ad policies” and that it would “tighten safeguards to ensure that ads show up only against legitimate creators in our YouTube Partner Program”:

    We know advertisers don't want their ads next to content that doesn’t align with their values. So starting today, we’re taking a tougher stance on hateful, offensive and derogatory content. This includes removing ads more effectively from content that is attacking or harassing people based on their race, religion, gender or similar categories. This change will enable us to take action, where appropriate, on a larger set of ads and sites.

    We’ll also tighten safeguards to ensure that ads show up only against legitimate creators in our YouTube Partner Program—as opposed to those who impersonate other channels or violate our community guidelines. Finally, we won’t stop at taking down ads. The YouTube team is taking a hard look at our existing community guidelines to determine what content is allowed on the platform—not just what content can be monetized.

    Google’s promise to better ensure that ads appear only alongside content of “legitimate creators in our YouTube Partner Program" indicates that Jones’ channel is a partner. An online post by the Houston Chronicle also explained that a YouTube partner can be identified by “look[ing] for advertisements on the user’s pages."

    Jones’ videos, which often violate YouTube’s policies for its advertising partners, frequently appear with ads for brands such as Trivago, PlayStation, and a corporation contracted by the state of Hawaii to promote tourism. These ads are served through a targeted, automated rotation, so they may alternate or change.

    On March 19, Jones claimed that his website “Infowars got knocked off of Google ads through AdRoll, their subsidiary company they work with.” AdRoll -- which is actually a Google competitor, though it does use some Google technology -- did in fact cut ties with Infowars, citing violations of its policies, which require that a website’s content be accurate and verifiable and that it not have “derogatory content” about a political candidate. But it appears that Google, through YouTube, has not taken any similar action.

    YouTube’s Community Guidelines And Advertising Guidance Ban Threats And Harassment

    YouTube’s community guidelines include banning content creators -- and not just their advertising -- for threats, including “harassment, intimidation, invading privacy, revealing other people's personal information, and inciting others to commit violent acts.” Infowars is no stranger to harassment and threats. In addition, YouTube’s content guidelines, which apply to pages hosting advertisements, say that videos with “inappropriate language, including harassment, profanity and vulgar language” are “inappropriate for advertising.” Jones, including on his YouTube page, regularly makes vulgar and harassing comments, and his role in spreading conspiracy theories has helped incite others to commit threatening and violent acts.

    Jones played a crucial role in pushing the false “Pizzagate” conspiracy theory, which claimed that a Washington, D.C., pizzeria hid a pedophilia ring run by prominent Democratic politicians. Jones told his audience members in late November that they “have to go investigate" the conspiracy theory for themselves. Days later, a Jones listener fired his gun inside the pizzeria. After that incident, Jones scrubbed Pizzagate-related content from his YouTube page and elsewhere. In February, months after the shooting, Jones uploaded a new video breaking down the “PizzaGate pedophile cult”; an ad for LinkedIn appeared next to that video on March 23. On March 24, Jones apologized to the pizzeria and its owner for his attacks on them. An advertisement for TBS’ late-night talk show Conan appeared before the video on March 27.

    Jones also relentlessly pushed conspiracies about the 2012 Sandy Hook massacre, in which 20 children and six adults were murdered during a shooting at an elementary school. Jones has attacked the families of the victims as “actors” who helped pull off a “hoax,” and family members have said that they have repeatedly faced harassment and threats and have criticized Jones for his smears. On March 23, an advertisement for FedEx appeared on a video exploring “false narratives vs. the reality” of Sandy Hook, and an ad for PNC showed up on another video alleging that Sandy Hook conspiracy theorist Wolfgang Halbig was “stonewalled and threatened” as he investigated the massacre.

    Jones has made other threatening and violent comments. In a now-deleted YouTube video, Jones told conservative Washington Post columnist George Will to “put a .357 Magnum to your head, and blow what little is left of your brains out all over yourself.” Jones also asserted that Will is a “constitutional rapist” who is “literally mounting America, raping it in the ass, and telling us how great he is.”

    Jones also recently challenged actor Alec Baldwin to a “bare knuckle” fight, saying, “I will break your jaw, I will knock your teeth out, I will break your nose, and I will break your neck.” During the 2016 Democratic primary, Jones suggested that supporters of Democratic presidential candidate Sen. Bernie Sanders (I-VT) needed to have their "jaws broken" and their "moron heads" slapped (following criticism, Jones claimed he was speaking only “figuratively” about breaking their jaws).

    YouTube Already Pulled A High-Profile User From Its Advertising Platform For Content Violating The Guidance On “Controversial Or Sensitive Subjects”

    YouTube’s advertising guidelines also note that content “is considered inappropriate for advertising” when it includes “controversial or sensitive subjects and events, including subjects related to war, political conflicts, natural disasters and tragedies, even if graphic imagery is not shown.”

    Jones has made his name weighing in on controversial subjects and spreading conspiracy theories. He is an ardent 9/11 truther who calls the attacks an “inside job.” He has also spread conspiracy theories about the Oklahoma City bombing, the Boston Marathon bombing, a number of mass shootings, and vaccinations. A Google AdChoices advertisement appeared next to a video calling 9/11 a “false flag.”

    Jones has also made numerous disparaging comments about LGBTQ people. After more than 40 people were killed at an LGBTQ nightclub in Orlando, FL, Jones charged “the LGBT community in general with endangering America and with the blood of these 50-plus innocent men and women.” Many of Jones’ comments about the attack were uploaded to his YouTube channel. Jones also once claimed that the U.S. government is trying to “encourage homosexuality with chemicals so that people don’t have children,” adding that being gay is a “destructive lifestyle.” A static in-video advertisement and, separately, an advertisement for Wix.com appeared in a March 16 YouTube video on Jones’ page during which Infowars guest host Anthony Cumia mocked a 15-year-old transgender girl and compared her decision to transition to children deciding they want “to be a dinosaur.”

    A sponsored Funny or Die video appeared before one of Jones’ YouTube videos in which he lamented the introduction of an autistic Muppet to Sesame Street and pushed the dangerous, debunked myth that vaccines cause autism, claiming “it burns out their pancreas. It burns out their brain.” The video and the video’s summary asserted that the character’s inclusion was “an effort to normalize the epidemic of childhood mental disorders.”

    Jones also frequently makes controversial comments on race and gender, such as when he went on a racist rant against former President Barack Obama on his YouTube channel, saying he was “elected on affirmative action” and “ain’t black, in my opinion.” Jones also accused Obama of having “some big old donkey dick hard-on.”

    Jones has made other vulgar comments about politicians and their families, particularly about women. These statements include calling Obama’s mother a “sex operative” for the CIA on his radio show and calling Hillary Clinton a “lying whore” on his YouTube channel. He has also said that Chelsea Clinton looks like Mister Ed the Horse and made numerous other sexist comments about women and their looks.

    Removing Jones’ channel from the YouTube Partner Program would hardly be unprecedented. The Independent reported in February that YouTube removed user “PewDiePie from its advertising platform after anti-Semitic videos were posted to his account.” PewDiePie has more than 53 million subscribers and has been called “by far YouTube’s biggest star.” The report noted that the videos could no longer “be monetised because they are in violation of YouTube’s ‘advertiser-friendly content guidelines’, which are stricter than the normal guidelines.” The report added that YouTube’s community guidelines “include restrictions on hate speech”:

    The videos are no longer allowed to be monetised because they are in violation of YouTube's "advertiser-friendly content guidelines", which are stricter than the normal guidelines and require that people cannot feature "controversial or sensitive subjects and events, including subjects related to war, political conflicts, natural disasters and tragedies, even if graphic imagery is not shown".

    But they are still available to view on the site, where they were posted in January.

    Google requires that all videos uploaded to the site comply with its community guidelines, which include restrictions on hate speech. The guidelines specifically note that YouTube will consider the "intent of the uploader", and that videos may stay online if they are "intended to be humorous or satirical", "even if offensive or in poor taste".

    It would appear to be consistent with YouTube’s existing policies to pull advertising from Jones’ videos. If YouTube fails to take action, advertisers can request to have their ads removed from videos appearing on Jones’ channel; Google has pledged to implement “account-level controls to make it easier for advertisers to exclude specific sites and channels.”

  • Advertisers Are Fleeing YouTube To Avoid “Directly Funding Creators Of Hateful” Content

    Blog ››› ››› MEDIA MATTERS STAFF

    YouTube is losing advertisers as big-name companies pull ads from the site because, according to a report from The New York Times, “The automated system in which ads are bought and placed online has too often resulted in brands appearing next to offensive material on YouTube such as hate speech.”

    More and more major companies are abandoning the ad services of YouTube's parent company, Google, amid concerns that ads for their brands are being placed next to extremist material. On March 22, The New York Times reported that AT&T and Johnson & Johnson “were among several companies to say Wednesday that they would stop their ads from running on YouTube and other Google properties amid concern that Google is not doing enough to prevent brands from appearing next to offensive material, like hate speech.” The decision by advertisers comes as Google has struggled in its efforts to prevent websites that peddle fake news from using its online advertising services to profit. It also comes as Google and YouTube have been criticized following a BuzzFeed News report for driving revenue for conspiracy theorists who broadcast to millions and monetize conspiracy theories like “Pizzagate,” which led to an armed confrontation in a D.C. pizza shop.

    Now, The New York Times reports that “the technology underpinning YouTube’s advertising business has come under intense scrutiny” as “other deep-pocketed marketers [are] announcing that they would pull their ads from the service.” According to the Times report, the problem “is particularly jarring” for YouTube specifically, because “YouTube splits advertising revenue with its users, meaning advertisers risk directly funding creators of hateful, misogynistic or terrorism-related content.” From The Times’ March 23 report:

    YouTube is now one of the pillars of Google’s advertising business and the most valuable video platform on the internet. In recent years, advertisers, unable to ignore its massive audience, flocked to YouTube to reach younger people who have started to shun traditional broadcast television.

    But the technology underpinning YouTube’s advertising business has come under intense scrutiny in recent days, with AT&T, Johnson & Johnson and other deep-pocketed marketers announcing that they would pull their ads from the service. Their reason: The automated system in which ads are bought and placed online has too often resulted in brands appearing next to offensive material on YouTube such as hate speech.

    [...]

    That technology, known as programmatic advertising, allows advertisers to lay out the general parameters of what kind of person they want to reach — say, a young man under 25 — and trust that their ad will find that person, no matter where he might be on the internet. This approach plays to the strengths of tech giants like Google and Facebook, allowing advertisers to use automation and data to cheaply and efficiently reach their own audiences, funneling money through a complicated system of agencies and third-party networks.

    But more than 400 hours of content are uploaded to YouTube every minute, and while Google has noted that it prevents ads from running near inappropriate material “in the vast majority of cases,” it has proved unable to totally police that amount of content in real time. And that has advertisers increasingly concerned.

    [...]

    While brands have expressed concern about showing up next to unsavory photos and videos uploaded to digital platforms by users — like pornography on Snapchat — the situation with YouTube is particularly jarring. YouTube splits advertising revenue with its users, meaning advertisers risk directly funding creators of hateful, misogynistic or terrorism-related content.

    The revenue-sharing model has minted stars, some of whom gain cultlike followings for edgy and inappropriate content. Last month, the platform cut business ties with its biggest star, Felix Kjellberg, known to his 54 million subscribers as PewDiePie, after The Wall Street Journal reported on crude anti-Semitic jokes and Nazi imagery in his comedy videos. He was part of YouTube’s premium advertising product called Google Preferred — a category of popular, “brand safe” videos on YouTube.