Facebook may ban political ads in the days leading up to the election. That won't solve its problems.
Written by John Whitehouse
Research contributions from Kayla Gogarty & Media Matters Staff
Facebook is considering banning political advertisements “in the days leading up to the U.S. election in November,” according to a Bloomberg News report. Twitter, for comparison, banned political ads in late 2019.
Political ads on Facebook have caused numerous problems for the company in recent years.
As Media Matters has previously documented, multiple outlets exposed serious problems with Facebook’s ad policies during the 2018 midterm elections. CNN and The New York Times uncovered ads attacking congressional candidates that were run by pages with no information available about who was operating them. The company also allowed a political action committee to run ads with anti-Semitic imagery attacking Florida gubernatorial candidate Andrew Gillum without disclosing its connections to a Republican ad firm. Vice News was able to submit ads using content from Russia’s Internet Research Agency without issue while posing as ISIS and Vice President Mike Pence, and it was also able to run ads posing as every U.S. senator. ProPublica likewise found that multiple interest groups were able to cloak their identities while running ads on Facebook.
In 2019, Media Matters found that Facebook let President Donald Trump’s campaign and nine other Republican politicians run over 2,000 ads referring to immigration as an “invasion,” a white supremacist talking point that drew renewed scrutiny after the shooter in the El Paso, Texas, massacre cited it in a manifesto. These ads violated Facebook’s advertising policies and community standards.
Facebook’s ad policy formerly banned “misleading or false” information, but the platform narrowed that policy in October 2019 to prohibit only “ads that include claims debunked by third-party fact checkers,” and it even exempted ads from politicians from this fact-checking.
Vox’s Recode detailed the “disaster” of Facebook’s policy not to fact-check politicians in ads, noting that it had immediately spiraled out of control. Emily Stewart wrote that “Facebook’s hard-and-fast rule on political speech doesn’t seem so hard-and-fast, considering it’s already making exceptions to it.” The policy has faced widespread criticism, but Facebook has only doubled down, continuing to let Trump run misleading ads against his critics and opponents.
Later in 2019, ABC News found that Facebook had allowed Trump to run misleading ads about impeachment. In January, CNN found that the platform had allowed the Trump campaign to run hundreds more misleading ads.
In May, Media Matters found that Facebook let the Trump campaign publish at least 529 ads with false claims of voter fraud.
In June, Facebook let the Trump campaign publish ads with an inverted red triangle, an infamous Nazi symbol. After widespread outrage, Facebook removed the ads, and a spokesperson told Media Matters, “We removed these posts and ads for violating our policy against organized hate. Our policy prohibits using a banned hate group's symbol to identify political prisoners without the context that condemns or discusses the symbol.”
The newly announced proposal to ban political ads ahead of Election Day immediately raised concerns about voter outreach, specifically get-out-the-vote efforts.
Notably, this new proposal would do nothing to stop organic posts containing political misinformation from spreading on the platform -- an arena where Facebook has given a massive advantage to conservatives.
To put an even finer point on it, many of the top-performing posts about voting come directly from Trump’s Facebook page. As Media Matters’ Kayla Gogarty noted, seven of the top 10 posts from right-leaning pages are directly from Trump, including one of the posts that Facebook took no action against when Twitter labeled it as election misinformation. During the studied period between March 12 and June 12, 2020, Trump made 60 posts related to voting, which earned 7.6 million combined interactions (approximately 8% of all engagement on voting-related posts from right-leaning pages). Trump’s misleading posts about voting averaged approximately 127,000 interactions each, far more than the roughly 3,700 average for other posts about voting.
If Facebook really wants to take action against hate and misinformation, it should confront them directly. Twitter’s ban on political ads, after all, hasn’t solved misinformation or hate on that platform.
Facebook’s platform is more insular than Twitter’s, and categorical bans on certain types of ads won’t solve the problem there either. The company knows what it has to do: actually enforce the policies it already has and commit to confronting hate on its platform, whether in ads, pages, or groups. Leaking other potential actions to the press -- even ones that may help in some cases -- won’t accomplish that.
The Stop Hate For Profit campaign recommends removing the politician exemption from Facebook’s misinformation policy. That would be a better start than a pre-election ad ban.
The mere fact that Facebook is considering banning political advertising in an election year should be a warning to everyone: If the company is acknowledging that it cannot moderate paid ads effectively, that certainly does not bode well for its ability to prevent hate speech on its platform.