Many of the deepfakes on the internet today—videos that use AI to put someone’s head on another person’s body—are easily identifiable.
They show Mr. Bean as Donald Trump, Steve Buscemi as Jennifer Lawrence, or Nicolas Cage as just about anybody. Yet these are just the deepfakes that use celebrity images. Deepfakes of non-famous people exist, and they are much harder to identify. Everyone wants to know how to stop the spread of deepfakes, or at least develop techniques to flag video and audio clips as fake, but there are no magic solutions yet.
It’s especially hard to figure out if a file is fake if it’s shared on a social site like Twitter or YouTube.
“There’s a lot of image and video authentication techniques that exist but one thing at which they all fail is at social media,” Matthew Stamm, an assistant professor at Drexel University, said at SXSW on Tuesday.
These techniques look for “really minute digital signatures” embedded in video files, Stamm said, but when a video is shared on social media, it is shrunk down and compressed. Those processes are “forensically destructive and wipe out tons of forensic traces,” he added.
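Stamm didn’t walk through the mechanics on stage, but the effect he describes is easy to sketch. In the toy model below, every number is invented and lossy compression is crudely approximated as quantizing pixel values to multiples of 8; a faint alternating signature is detectable in the original pixels but vanishes entirely from the “shared” copy:

```python
def embed_signature(pixels, amplitude=1):
    """Add a faint alternating pattern, standing in for the 'minute
    digital signatures' a forensic tool would look for."""
    return [p + (amplitude if i % 2 == 0 else -amplitude)
            for i, p in enumerate(pixels)]

def compress(pixels, step=8):
    """Crude model of lossy compression: quantize to multiples of step."""
    return [round(p / step) * step for p in pixels]

def detect_signature(pixels, baseline):
    """Correlate the residual against the alternating pattern."""
    return sum((p - b) * (1 if i % 2 == 0 else -1)
               for i, (p, b) in enumerate(zip(pixels, baseline)))

# Baseline pixel values (multiples of 8, chosen so the demo comes out exact).
baseline = [96, 104, 96, 88, 104, 96, 112, 96]
marked = embed_signature(baseline)

print(detect_signature(marked, baseline))            # 8: signature present
print(detect_signature(compress(marked), baseline))  # 0: wiped out
```

Real codecs are far more elaborate than a single quantizer, but the mechanism is the same: a low-amplitude forensic trace falls below the quantization step and is simply gone after sharing.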
If a video is “pristine,” experts can extract a lot of information from it that helps determine its authenticity, Stamm said, but it’s much harder to pull that information from videos found on social sites.
Stamm, who develops algorithms to figure out if images and videos are fake and how they are edited, spoke on a SXSW panel called “Easy to Fool? Journalism in the Age of Deepfakes,” which covered the recent spread of deepfakes as well as other synthetic information online. Stamm was joined by Kelly McBride, vice president at Poynter, and Paul Cheung, a director at the Knight Foundation. The panel was moderated by Jeremy Gilbert of the Washington Post.
Stamm stressed that there was not currently a “silver bullet” to combat deepfakes, while Cheung said that any techniques to combat fake audio and video will quickly become outdated as the technology rapidly develops.
Last year, Cheung said, researchers noticed that the people in deepfake videos didn’t blink. But that changed as the technology improved. “The minute…we thought we had a mechanism for detecting a deepfake,” Cheung said, “someone had out-faked the solution.”
“We’re constantly being challenged and we constantly have to figure out new solutions,” he said.
McBride thinks that news organizations need to work together to tackle the problem of doctored videos and audio clips. Big news organizations like the Associated Press and the Washington Post have the resources to research synthetic media like deepfakes, while small local newspapers do not, she said.
“Journalism itself is really going to have to ask some existential questions about creating a collective research organization,” said McBride, “that will eventually help with this problem of truth in democracy.”
Stamm, meanwhile, said that researchers are “very rapidly coming to the point to run analysis on videos and images” that can help news agencies. Stamm is also starting to look into synthetic audio, which can be detected by phase shifts in the audio file, among other techniques.
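The panel didn’t spell out how phase-based detection works, but any such analysis starts by measuring the DFT phase of short audio frames and checking that it evolves consistently from frame to frame. In the minimal sketch below (the tone, sample rate, frame length, and splice offset are all invented for illustration), a continuous tone carries its phase smoothly across frames, while a few samples of misalignment, as you might get at a splice or synthesis boundary, produce the kind of phase jump a detector could flag:

```python
import cmath
import math

def dft_phase(frame, k):
    """Phase (radians) of the k-th DFT bin of a frame of samples."""
    n = len(frame)
    bin_k = sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
    return cmath.phase(bin_k)

# A pure 1 kHz tone sampled at 8 kHz, split into 64-sample frames.
rate, freq, n = 8000, 1000, 64
tone = [math.sin(2 * math.pi * freq * t / rate) for t in range(3 * n)]
frame_a, frame_b = tone[:n], tone[n:2 * n]
spliced = tone[n + 3:2 * n + 3]  # 3-sample misalignment, as at a splice

k = freq * n // rate  # the DFT bin holding 1 kHz (bin 8 here)
print(round(dft_phase(frame_a, k), 3))  # -1.571: phase of frame A
print(round(dft_phase(frame_b, k), 3))  # -1.571: frame B is continuous
print(round(dft_phase(spliced, k), 3))  #  0.785: the splice jumps phase
```

A real detector would track these phases across many bins and frames and look for statistically unlikely discontinuities; this sketch only shows the raw measurement such a system is built on.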
But even the best fact-checking and detection techniques are irrelevant if people believe a fake is real and spread it on social media.
Tiffany Kelly is the Unclick editor at Daily Dot. Previously, she worked at Ars Technica and Wired. Her writing has appeared in several other print and online publications, including the Los Angeles Times, Popular Mechanics, and GQ.