On CBSN, Angelo Carusone discusses TikTok promoting anti-LGBTQ hate

Video file

Citation From June 5, 2021, CBSN coverage

LANA ZAK (HOST): Popular social media app TikTok is accused of promoting homophobia and anti-transgender violence. Watchdog group Media Matters for America claims that the app's algorithm is identifying users as homophobic and then recommending anti-LGBTQ videos. We reached out to TikTok about the report but as of this newscast have not heard back. Joining us now is president and CEO of Media Matters, Angelo Carusone. Angelo, give us some examples of the hateful videos landing in people's feeds on TikTok. 

ANGELO CARUSONE (PRESIDENT, MEDIA MATTERS FOR AMERICA): So some of the videos you're getting, specifically these videos are being tailored to individuals that TikTok has identified as potentially liking them, either because they're homophobic or anti-LGBT or some other reason. And the videos sort of range: everything from burning and destroying the Pride flag, which lines up with Pride Month, to actually encouraging violence against them - talking about how trans people's pronouns really should be, you know, was/then -- a reference to them, obviously, being dead, past tense -- to responding to LGBT people talking about Pride and saying that they should be ended. So it's a whole range. Everything from encouraging violence to, sort of, encouraging an atmosphere where you're directly attacking Pride.

ZAK: So, tell us more about how TikTok's algorithm works, and can it be adjusted by users? And I guess we need to understand, is this something that is happening that TikTok is aware of and trying to put the kibosh on, or is it something that the algorithm is actively trying to promote in terms of getting people riled up - which stands in violation, actually, of their hateful use policy? 

CARUSONE: Yeah, I think that's a set of good questions. I think, you know, let's focus on what the harm is here. So the real harm is not just that the videos exist. The reason it becomes a concern for all of us is that TikTok's algorithm is so powerful that -- you sort of take content like this and you're actually actively promoting it to individuals that are not interacting with it, but that your algorithm thinks might like something like that or be susceptible to it.

And so the way that TikTok's algorithm works is, you know, a lot like these other algorithms do, but TikTok's is uniquely sophisticated. If it sort of thinks, based on recent behavior - and a whole set of different factors goes into it, specifically the kind of content you like or interact with - that you may fall into one of these categories, what it does is it will start to send you trial balloons. In this case, the way that we determined this is we ran a series of clean, brand-new profiles. We used some factors, we interacted with some mildly homophobic content, and almost immediately - across the board, in every one of those instances - did TikTok start serving these videos. And the part of it that's really dangerous is the more you liked it, the more you interacted with this kind of content, the more that they served it to your feed. So over time, it ended up becoming the majority of these accounts' feeds, simply because TikTok pegged that user as somebody that might be interested in or interact with it. 

It is a problem on TikTok. Like I said, I think the real threat here is not just that the content exists - that's bad in and of itself. But the real threat is when the tools that these platforms have, their algorithm, start organizing hateful people. And just to put a bow on it, people remember the Boogaloos, which was that violent movement that was really prolific last spring, in 2020 - a bunch of random people going out and committing violence during some of the protests. That was principally and originally organized on TikTok, because they recognized that one of the fastest ways for them to recruit and identify new people was actually to piggyback on the tools that TikTok provided. Now, TikTok shut that down once they were aware of the problem, and I think that's what will happen here, but the flaw really is in the sophistication of their algorithm and not putting in appropriate protections and countermeasures.