LBJ Library/Flickr (Public Domain)
The internet is constantly evolving, as are the people who create the content it hosts. “Deepfakes”—a term combining “deep learning” and “fake”—describe the process of superimposing the likeness of one person onto a video of another, and the technique has hit the newest stage of its evolution.
Research at Carnegie Mellon is advancing this technology with a new technique called Recycle-GAN, which can take detailed content from one video and apply it to another while keeping the style intact, according to New Atlas. And it can put words in people’s mouths.
Deepfakes have historically been synonymous with pornography, as people quickly seized on the opportunity to falsely depict celebrities engaging in sexual acts. Videos of this sort have been banned from most social media sites and from some major porn sites, including Pornhub.
The newest technique is purely visual, with no audio capabilities. However, the synchronization of facial expressions and mouth movements is incredibly on-point.
The work that Carnegie Mellon is doing builds on an AI architecture called a GAN, or generative adversarial network. A generator creates video content in the style of a chosen source video, while a discriminator scores how consistent the generated video is with the original. Pitting the two against each other is what produces the impressive results shown in the demonstration videos.
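To make the generator-versus-discriminator dynamic concrete, here is a minimal sketch of the standard adversarial losses (our own illustration, not the researchers' code): the discriminator is rewarded for scoring real frames near 1 and generated frames near 0, while the generator is rewarded for fooling it.

```python
import math

def discriminator_loss(score_real: float, score_fake: float) -> float:
    """Binary cross-entropy: push scores on real frames toward 1
    and scores on generated (fake) frames toward 0."""
    return -(math.log(score_real) + math.log(1.0 - score_fake))

def generator_loss(score_fake: float) -> float:
    """The generator 'wins' when the discriminator scores its
    output near 1, i.e. mistakes it for real footage."""
    return -math.log(score_fake)
```

A discriminator that tells real from fake apart (e.g. scores of 0.9 and 0.1) incurs a low loss, while one that is completely fooled (0.5 for both) incurs a high one; the generator's loss pushes in the opposite direction, which is the adversarial tension that drives training.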
A version of this technique converts the new content back to the style of the original as a way of assessing the quality of the conversion. This iteration, called Cycle-GAN, was compared by researchers to checking the quality of a translation from English to Spanish by translating the resulting Spanish back to English.
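The round-trip check is easy to illustrate with the translation analogy. In this toy sketch (a hypothetical illustration, not the actual system), a forward mapping stands in for the English-to-Spanish conversion and an inverse mapping for the trip back; the cycle-consistency error measures how much is lost on the way home.

```python
def forward(x: float) -> float:
    """Stand-in for the forward conversion (English -> Spanish)."""
    return 2.0 * x + 1.0

def inverse_exact(y: float) -> float:
    """A perfect back-conversion: recovers the input exactly."""
    return (y - 1.0) / 2.0

def inverse_lossy(y: float) -> float:
    """A sloppy back-conversion: slightly wrong, so the round
    trip drifts away from the original input."""
    return (y - 1.0) / 2.1

def cycle_error(x: float, back) -> float:
    """Cycle-consistency error: distance between the input and
    its round trip through forward() and back()."""
    return abs(x - back(forward(x)))
```

With the exact inverse the error is zero; with the lossy one it is not, and a Cycle-GAN uses exactly this kind of penalty to judge how faithful its conversion is.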
Obviously, this is not an ironclad method. There are imperfections in many of the resulting videos—like the strange movement of Obama's shirt in the attached video—but the progress is nonetheless impressive. Researchers are aware that their technology has the potential for nefarious uses, not just when it comes to pornography but with the frightening potential of fabricating video evidence.
They are quick to point out its potential benefits as well, of course. It can be used across the video-editing spectrum, to convert black-and-white footage to color or even, potentially, in the development of autonomous vehicles. Researchers say that hazards identified in daytime footage could theoretically be converted into more visually difficult conditions like nighttime or inclement weather for training purposes.
It is hard to say how soon this technology can be used for something so advanced, but it has far-reaching potential, according to Carnegie Mellon researchers.
H/T New Atlas
Nahila Bonfiglio reports on geek culture and gaming. Her work has also appeared on KUT's Texas Standard (Austin), KPAC-FM (San Antonio), and the Daily Texan.