Content warning: This article contains racist and antisemitic imagery, videos, and language.
Racist AI-generated videos are the newest slop garnering millions of views on TikTok
Google’s Veo AI video generator is seemingly being used to generate videos depicting racist and antisemitic tropes
Written by Abbie Richards
Racist and antisemitic AI-generated videos are getting millions of views on TikTok.
These videos — seemingly created with Google’s Veo 3, a publicly accessible text-to-video generator — traffic in racist tropes, such as depicting Black people as monkeys and criminals and featuring imagery of Black people with watermelons and fried chicken.
Users are also posting misleading AI-generated videos of immigrants and protesters, including videos in which protesters are run over by cars. And in an especially dystopian nightmare, AI-generated videos are reenacting marginalized groups’ historical traumas, depicting concentration camps and Ku Klux Klan attacks on Black Americans.
TikTok’s community guidelines prohibit videos dehumanizing racial and ethnic groups as well as “threatening or expressing a desire to cause physical injury to a person or a group.” The platform has a long history of struggling to contain hate speech, violent content, and misleading AI-generated content, and the dehumanizing and sometimes violent videos that Media Matters has now identified are seemingly spreading unchecked on the social media platform.
Google’s new Veo 3 is apparently being used as a tool for creating hateful content
Veo 3 — Google’s text-to-video generative AI model — was released on May 20. The publicly available AI generator currently has a limit of 8-second videos, and Google reportedly plans to integrate the video generator into YouTube Shorts.
The videos that Media Matters has identified on TikTok all ran a maximum of 8 seconds long (or were composed of multiple clips each lasting no more than 8 seconds). They also had a “Veo” label in the corner; used hashtags or captions, or had usernames, related to Veo 3 or AI; and/or had clear signs of AI generation (such as gibberish text, distortions, or continuity errors that defy the laws of physics).
TikTok users are using AI-generated videos to spread racist and other hateful stereotypes
Much of the racist AI-generated content that Media Matters identified is explicitly anti-Black, using images of monkeys to promote stereotypes about Black people.
For instance, one video reads “Average Waffle House in Atlanta” and shows a restaurant filled with monkeys. A car crashes into the front door of the restaurant and five monkeys leap from the vehicle holding KFC-style fried chicken buckets filled with watermelon. Then the video shows monkeys running wild through the restaurant, throwing watermelon and carrying buckets of fried chicken. This video has over 600,000 views. (The video above includes viewing numbers from an earlier date, so they don’t match the final figures in this piece.)
TikTok users clearly understand the racist message of this video. One viewer commented, “all their mannerisms…to the T…,” while another said, “And Walmart, and any other stores and restaurants this demographic frequents.”
Another video reads, “i asked ai: ‘average spirit airlines experience,’” and shows monkeys climbing all over a plane while a smoke alarm chirps. It has 835,300 views.
Again, it’s evident that viewers understand and endorse this racist propaganda. A comment with over 2,000 likes reads, “Bro even AI has black fatigue.” Another user wrote, “AI got here 5 minutes ago and it already has black fatigue.”
This dehumanizing content also paints Black Americans as criminals. In one video, a monkey wearing a pink wig, long pink acrylic nails, and pink eyelashes says, “So my probation officer called. Good news — I ain’t gotta do no more community service. Bad news — that’s ’cause there’s a new warrant out for my arrest.” This video has 4 million views.

Several other videos depicted AI-generated monkeys committing crimes and labeled them as “usual suspects.” For instance, a video with 5.3 million views depicts a police chase in which a car pursued by a police cruiser crashes into a pole and a monkey runs out of the driver’s seat. It says, “ai getting out of hand bro.”
Another video, with 1 million views, depicts a white man saying, “Yup. It’s the usual suspects at it again,” followed by a shot of chimpanzees wearing gold chains running from a car while a police siren wails in the background.
Not all of the videos we found that depicted Black people as criminals relied on animal imagery to push their hateful agenda. For instance, Media Matters found a video of a Black man wearing a balaclava carrying a TV out of a looted store with broken windows and saying, “Just doing my mostly peaceful shopping today.”
These AI-generated videos leaned into other racist stereotypes as well, such as “fatherlessness,” the false narrative that Black fathers are absent from their children’s lives.
A video with 373,900 views depicts a white woman crying and saying, “Tequavius, I have something to tell you. I’m pregnant.” A monkey in a backwards hat is then seen running away into a car. Text reads, “what is AI tryna say.”
In another video, with 351,000 views, a Black woman complains to a barista about not having enough milk in her coffee. The white barista says, “Maybe your baby daddy is coming back with the milk even though it’s been years already. I can get you watermelon and fried chicken instead.”
Fried chicken and watermelon consumption — a longstanding anti-Black racist stereotype — was a repeated theme in this content.
One video with 147,600 views depicted a Black man saying, “I can’t breathe. I can’t breathe. Watermelon, please. I need watermelon.”
Another video, with 350,900 views, depicted a monkey named “Lequarius” wearing a gold chain and holding a bucket of fried chicken.
These videos also repeatedly contained themes of police violence against Black people.
One particularly viral video — with 14.2 million views — depicts two white police officers eating donuts. Then one of them says, “Look! A Black one!” and fires his gun multiple times.
In another video, with 2.1 million views, a white police officer holds a fishing rod with a watermelon as bait and says, “My numbers are low this week. A man’s gotta do what a man’s gotta do,” as a Black woman crawls over to him on her hands and knees.
Another video, with 383,300 views, shows a white police officer outside of a KFC who says he is “spawn camping these motherfuckers.” (Spawn camping is a gaming term that refers to waiting near specific spawning locations in order to kill players immediately after they spawn.) When a Black man emerges from the KFC, the white police officer says, “Bingo,” and points his gun at him.
While most of the AI videos Media Matters identified targeted Black people, other racial and ethnic groups were targeted as well.
One video with 206,900 views reads, “Average Panda Express be like,” and depicts an Asian man telling customers to “come try our sesame orange bark bark and meow meow today” as images of cats and dogs appear on the screen.
Several videos depicted South Asian-appearing people as dirty.
In one video, with 1.2 million views, a South Asian man is chased by a shower head and soap as he screams “No! No!”
In another video, South Asian men prepare food using their feet and serve bowls of mud alongside rats.
Another video depicts a crowd of Orthodox Jewish men rolling down a hill in pursuit of a piece of gold.
AI-generated videos are targeting immigrants and depicting violence against protesters
AI-generated videos are also targeting immigrants and protesters in the U.S. These videos were made in both Spanish and English.
For instance, one video with 45,500 views shows a man walking through a riot while waving a Mexican flag and saying, “I’m flying the flag of the place I don’t want to be sent back to while I’m destroying the place I don’t want to leave.”
Another video, with 25,400 views, shows a series of AI-generated Spanish speakers purporting to be from various Latin American countries, several of whom express a desire to live off the U.S. government’s resources. The first protester says in Spanish, “Hi my name is Jose Rojas from Venezuela and I don’t want to work. I only want to be supported by the government of the U.S.”
Media Matters also identified a concerning trend of AI-generated videos that seem to fantasize about enacting violence against immigrants or against protesters defending them.
In one video, an AI-generated Big Foot in a Border Patrol vest says, “I’m here to show y’all how to catch a border jumper,” and then proceeds to wrestle a man to the ground. Big Foot then puts the man in the back of his truck and says, “Taking this one to cut my grass before I deport him.”
Other videos endorsed hitting protesters with cars.
In a video with 90,000 views, the driver of a truck plows through a crowd of people. As the truck drives away, a smacking sound implies it ran over a person, and the driver says, “I love these new speed bumps. Thump, thump.”
In a different video, with 61,400 views, Big Foot and a similar creature are driving a car surrounded by protesters, and Big Foot says, “Well, Gov. Sanchez said run them over,” as the car drives into a sea of people.

AI-generated videos depict concentration camps and the Ku Klux Klan
While AI-generated historical depictions aren’t new to TikTok, they are now being used to reenact historical trauma.
One video we found depicts a man narrating his day in a Nazi concentration camp as though he is vlogging. He stands in front of a crematorium chimney and says, “It’s a bit smoky, but something smells really nice out here.” He points to a pile of shoes and says, “I wonder if there’s any Air Jordans in there,” and ends the video by saying he’s “heading for a shower now.” It has a million views.
Another video depicts Big Foot on horseback dressed in a Ku Klux Klan robe and hood chasing a Black man.

The AI propaganda revolution
Conversations around AI and disinformation frequently focus on how believable AI-generated content is, that is, how realistic it looks. But hateful messages don’t need to look real to further ingrain racist beliefs in viewers’ minds. In the U.S., there is a long history of using cartoons to convey hateful anti-Black propaganda.
As the race to create generative AI tools continues, the conversation should also consider the harms these tools can cause, with realistic and unrealistic imagery alike.