Now more than ever, a certain level of sophistication is needed to navigate the various sources of information on the internet. Compounding the issue is the rise of the “deepfake”: extraordinarily realistic-looking videos that often feature public figures doing and saying things they never actually did. Their uses have been relatively benign so far, as far as we know, but the potential for abuse is alarming, to say the least. Here’s everything you need to know to spot a deepfake video.
I've gone down a black hole of the latest DeepFakes and this mashup of Steve Buscemi and Jennifer Lawrence is a sight to behold pic.twitter.com/sWnU8SmAcz— Mikael Thalen (@MikaelThalen) January 29, 2019
What is a deepfake?
A deepfake is a video created using a type of machine learning that picks up the facial movements of one person and grafts them onto another person making similar gestures. The term comes from the Reddit user known as “deepfakes,” who popularized the practice.
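The commonly described architecture behind early deepfakes is a shared encoder paired with one decoder per identity: the encoder learns a face-independent code for pose and expression, and a face is “swapped” by encoding person A and decoding with person B’s decoder. The sketch below illustrates only that wiring, with untrained random weights and toy linear layers; the names, sizes, and data are all invented for illustration, not drawn from any real deepfake codebase.

```python
import random

random.seed(0)

def random_matrix(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

LATENT = 4   # size of the shared pose/expression code
PIXELS = 8   # toy "image" size

encoder   = random_matrix(LATENT, PIXELS)  # shared between identities
decoder_a = random_matrix(PIXELS, LATENT)  # reconstructs person A's face
decoder_b = random_matrix(PIXELS, LATENT)  # reconstructs person B's face

def swap_face(image_of_a):
    """Encode a frame of person A, then render it with B's decoder."""
    latent = matvec(encoder, image_of_a)   # face-independent code
    return matvec(decoder_b, latent)       # B's appearance, A's expression

frame = [0.5] * PIXELS
fake = swap_face(frame)
print(len(fake))  # output has the same shape as the input frame
```

In a real system both decoders are trained against footage of their respective subjects, which is why the technique needs so much source material.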
Private citizens have less to worry about, because these algorithms require loads of material from a subject to replicate them with accuracy. But the technology has already been used to create fake celebrity porn, and fake messages or deeds attributed to a candidate could sway an election, affecting a great many people.
- Latest deepfakes show how Nicolas Cage and David Schwimmer are nearly identical
- Terrifying deepfake combines Donald Trump and Mr. Bean
Members of Congress have recognized the danger. In September 2018, U.S. Reps. Adam Schiff (D-Calif.), Stephanie Murphy (D-Fla.), and Carlos Curbelo (R-Fla.) sent a letter to the Director of National Intelligence requesting a report to Congress on “the implications of new technologies that allow malicious actors to fabricate audio, video, and still images.”
How to spot a deepfake video
Some of the prominent examples of deepfakes that have made the rounds are not without their giveaways to the discerning eye. Obviously, President Donald Trump doesn’t have Mr. Bean’s eyes.
Still, it helps to have a set of guidelines for those who don’t watch online videos as a career.
One such tell, picked up on by Siwei Lyu, a professor at University at Albany, is blinking.
Because the machine learning depends on the availability of images of a public figure—and famous people are seldom photographed with their eyes shut—deepfakes struggle to depict convincing images of people with their eyes closed.
What’s more, subjects depicted in deepfake videos often blink far less often than humans do in real life. According to Lyu, this method has a 95 percent detection rate. That said, the technology behind deepfakes and the sophistication of those who want to employ them will only improve, so the fight against the forces of public manipulation will remain an ongoing struggle.
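One widely used blink signal in this line of research is the “eye aspect ratio” (EAR): the ratio of the eye’s vertical landmark distances to its horizontal width, which collapses toward zero when the eye closes. The sketch below shows the idea on hand-made landmark coordinates; a real pipeline would get them from a facial-landmark detector, and the threshold value is only a typical figure, not one taken from Lyu’s work.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered around the eye contour."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = dist(p2, p6) + dist(p3, p5)   # two vertical spans
    horizontal = dist(p1, p4)                # corner-to-corner width
    return vertical / (2.0 * horizontal)

# Fabricated landmarks for illustration only.
open_eye   = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]

BLINK_THRESHOLD = 0.2  # common cutoff; below it, count a blink
print(eye_aspect_ratio(open_eye) > BLINK_THRESHOLD)    # True: eye open
print(eye_aspect_ratio(closed_eye) < BLINK_THRESHOLD)  # True: blink
```

Counting how often the ratio dips below the threshold over a clip gives a blink rate that can be compared against normal human behavior.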
Signs you are watching a deepfake video
Jonathan Hui, a writer who specializes in deep learning, pinpoints a few things to look for in possible deepfakes. Once a viewer slows a video down, they should look for the following things:
- Blurring evident in the face but not elsewhere in the video
- A change of skin tone near the edge of the face
- Double chins, double eyebrows, or double edges to the face
- Whether the face gets blurry when it’s partially obscured by a hand or another object
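The first two tells in the list above can be checked numerically: blending a swapped face tends to smooth it, so the face region carries less high-frequency energy than the rest of the frame. The sketch below uses a crude second-difference “sharpness” score on fabricated 1-D pixel rows; it is an illustration of the principle, not any tool’s actual method.

```python
def high_freq_energy(pixels):
    """Mean squared second difference -- a crude sharpness score."""
    diffs = [pixels[i - 1] - 2 * pixels[i] + pixels[i + 1]
             for i in range(1, len(pixels) - 1)]
    return sum(d * d for d in diffs) / len(diffs)

# Invented pixel rows: sharp background texture vs. a smoothed face blend.
background = [10, 250, 20, 240, 15, 245, 25, 235]
face_patch = [120, 125, 130, 128, 126, 124, 122, 121]

# A blurrier-than-background face region is a deepfake warning sign.
print(high_freq_energy(background) > high_freq_energy(face_patch))  # True
```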
Can artificial intelligence spot deepfakes?
Some are attempting to build safeguards against the coming proliferation of deepfakes, using artificial intelligence to combat forgeries online.
For example, the AI Foundation has developed Reality Defender, a program that runs alongside other online applications, identifying potentially fake media.
Researchers at Germany’s Technical University of Munich developed an algorithm called XceptionNet that purports to spot fake videos online so they can be flagged and removed. Of course, at the speed with which damning false reports can fly, it will take a lot of diligence from laypeople online to make sure damage isn’t done before one of these programs can out a fake.
- Deepfakes 2.0: The terrifying future of AI and fake news
- Why it’s harder to spot a deepfake once it goes viral
- Jennifer Buscemi is the deepfake that should seriously frighten you
Additionally, because social media uploads tend to compress files, these tactics lose efficacy when clips originate on Twitter or Facebook.
At a panel at South by Southwest in March, Matthew Stamm, an assistant professor at Drexel University, discussed the difficulty. “There’s a lot of image and video authentication techniques that exist but one thing at which they all fail is at social media,” he said.
These techniques look for “really minute digital signatures” that are embedded in video files, Stamm added. But when a video file is shared on social media, it’s shrunken down and compressed. These processes are “forensically destructive and wipe out tons of forensic traces,” he said.
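One way to picture why recompression is “forensically destructive”: the minute signatures these techniques rely on are smaller than a coarse quantizer’s step size, so JPEG-style quantization rounds them away entirely. The sketch below simulates that with invented numbers; it is a conceptual illustration, not Stamm’s actual forensic method.

```python
def quantize(pixels, step):
    """JPEG-style coarse quantization: snap each value to a grid."""
    return [round(p / step) * step for p in pixels]

clean = [100, 102, 98, 100, 98, 102]          # underlying pixel values
trace = [0.4, -0.4, 0.4, -0.4, 0.4, -0.4]     # subtle forensic signature
tagged = [c + t for c, t in zip(clean, trace)]

# Recompress both versions with a step larger than the trace's amplitude.
recompressed = quantize(tagged, step=2)
residual = [r - c for r, c in zip(recompressed, quantize(clean, step=2))]
print(residual)  # [0, 0, 0, 0, 0, 0] -- the sub-step trace is wiped out
```

After quantization, the tagged and untagged frames are indistinguishable, which is why detectors that work on original files fail on social media re-uploads.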
In the end, we might just be forced to rely on our own BS detectors. But if the fake news epidemic of 2016 taught us anything, it’s that we are in for a rough ride.