
CkyBe/Shutterstock TechSolution/Shutterstock (Licensed) remix by Jason Reed

Distorted TikTok sounds hurt marginalized creators—and AI is making it worse

Even Taylor Swift is at risk.


Thalía Menchaca


Posted on Jul 21, 2023

The first time Riley Jay Davis heard a snippet of Tank and the Bangas’ “DM Pretty,” they didn’t relate to the song.

“This boy be in my DMs—say I’m pretty,” sings Tarriona “Tank” Ball over a soft musical arrangement. Over the last few months, the poetic tune has been used in over 950,000 videos, becoming a viral audio on TikTok. But the entirety of the poem is not as well known.

“DM Pretty” explores the objectification of Black women through seemingly innocent, but in reality, insincere compliments via direct messages. When Davis listened to the rest of the poem, they cried because the context hit close to home as a fat, Black, and autistic trans person. 

“Me showing an interest in someone has always resulted in the person being made fun of, and anyone who expressed liking me was considered to have low standards,” Davis told the Daily Dot in an Instagram direct message. “I get attention in secret late-night text messages but in person? Not unless nobody sees.”

Thousands of videos on TikTok feature viral sounds. But some audios have problematic origins, and others are edited out of context, leading to misinformation on the app that could have a ripple effect on users and creators alike. 

Problematic sounds

In January, TikToker Hannah Jones (@indihannah__jones) created a sound in which she claimed to be in a lesbian relationship after having an awful ex-boyfriend. Over 7,000 videos have used the audio, and multiple people have commented on the problematic sound.

TikToker Sophia (@ubeoatmilk), who is a lesbian, stitched Jones’ video to discuss the harmful stereotypes that such homophobic sounds reinforce.

“Where’s the logic with criticizing men and their awful behavior and equating that to lesbians and sapphic relationships?” Sophia says in the video.

In a Zoom interview, Sophia said people often minimize the impact of homophobic and lesbophobic audios, ignoring the real-life implications of stereotypes.

Read the entire Psychology of TikTok series

AI-generated sounds

Even celebrities can be victims of misrepresented audio on TikTok.

In March, fans of Taylor Swift started posting AI-generated sounds of the pop star. The trend started with audio of Swift talking about her ninth studio album, Evermore. Swifties have jokingly claimed that Swift dislikes Evermore because she did not celebrate the album’s first anniversary in 2021 as she did for Folklore.

Swift later dispelled the rumors on the opening night of “The Eras Tour” in Glendale, Arizona.

Around the same time, another AI-generated sound started gaining popularity. This time, the audio featured Swift addressing the 2016 drama with Kanye West and Kim Kardashian. 

When Sinéad O’Melinn (@sineadrobbins) heard the sound for the first time, she said the resurgence of the infamous phone call between Swift and West felt disrespectful.

“She wanted to be excluded from the narrative, and people are just bringing her back into it,” O’Melinn said in a Zoom interview. “And I just feel like they’re also not fun enough to risk the bad things that could happen.”

O’Melinn has been a Swift fan since 2011 after she bought a Speak Now CD. During middle school, she had fan accounts on Instagram and Tumblr. Now on TikTok, she has called out the usage of AI-generated sounds on the platform.

“[The AI sounds] already concerned me even when it was innocent,” O’Melinn said. “I just know that Taylor doesn’t like words being put in her mouth or being set up as saying something that she didn’t say.”

In a video that has received over 114,000 views since it was posted on March 13, O’Melinn also reminded Swifties that the singer stopped using Tumblr in 2016 after people started editing posts she had liked.

Misinformation on TikTok

Whether it’s an AI-generated Taylor Swift clip or an out-of-context line from a poem, audio on TikTok has the potential to distort perceptions of reality, especially for the young people who use the app.

In February, Wallaroo Media found that 32.5% of TikTok users were between 10 and 19 years old, 29.5% were 20 to 29, and 60% were part of Gen Z, meaning users born after 1997. Meanwhile, TikTok’s recommendation algorithm has been shown to suppress content by disabled, fat, and LGBTQ creators.

A 2022 report by NewsGuard found that on TikTok, up to 20% of the search results for top news issues like COVID-19 vaccines and school shootings were videos containing misinformation. In recent months, the social media platform has cracked down on misinformation by revising its guidelines around content that’s “synthetic” or AI-produced. Such creations that remix images or videos of private individuals must be clearly labeled by the poster, the platform’s rules state. 

But while TikTok is making efforts to curb fake AI-produced images, the guidelines don’t specifically address audio or sounds. In that space, a misleading quote or snippet could lead to confusion and distortion. 

Creators say the biggest way to combat this is with media literacy and education outside of the app. 

“I have so many screenshots of TikToks for me to just research the topic more later,” Davis said. “And I love when content creators say, ‘Here is where you can read/learn more.’”

*First Published: Jul 21, 2023, 7:35 am CDT