Barack Obama | LBJ Library/Flickr (Public Domain)

The next generation of deepfakes is here

They can falsely put words in people's mouths.


Nahila Bonfiglio

Tech

Posted on Sep 11, 2018   Updated on May 21, 2021, 6:47 am CDT

The internet is constantly evolving, as are the people who create the content it hosts. “Deepfakes”—a term that blends “deep learning” and “fake” and refers to the process of superimposing one person’s likeness onto a video of another—have hit the newest stage of their evolution.

Researchers at Carnegie Mellon University are advancing this technology with a new technique called Recycle-GAN, which can take detailed content from one video and apply it to another while keeping the target’s style intact, according to New Atlas. And it can put words in people’s mouths.

Deepfakes have historically been synonymous with pornography, as people quickly seized on the opportunity to falsely depict celebrities engaging in sexual acts. Videos of this sort have been banned from most social media sites and some major porn sites, including Pornhub.

The newest technique is purely visual, with no audio capabilities. However, the synchronization of facial expressions and mouth movements is incredibly on-point.

Carnegie Mellon’s work builds on an AI algorithm called a GAN, or generative adversarial network. A generator creates video content in the style of a chosen source video, while a discriminator scores how consistent the new video is with the original. That back-and-forth between the two networks is what produces the impressive results you see here.
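For readers curious about the mechanics, here is a rough sketch of that generator-versus-discriminator loop, written in PyTorch. This is not the Carnegie Mellon code; the tiny networks, dimensions, and random stand-in "frames" are placeholders for the full video-scale models the researchers describe.

```python
# A minimal, illustrative GAN training step (not the researchers' code).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: turns random noise into a fake sample meant to look like the data.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim),
)

# Discriminator: scores how consistent a sample looks with the real data.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = torch.randn(32, data_dim)  # stand-in for a batch of real frames

# Discriminator step: learn to tell real samples from generated ones.
fake_batch = generator(torch.randn(32, latent_dim)).detach()
d_loss = (loss_fn(discriminator(real_batch), torch.ones(32, 1))
          + loss_fn(discriminator(fake_batch), torch.zeros(32, 1)))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# Generator step: learn to produce samples the discriminator accepts as real.
g_loss = loss_fn(discriminator(generator(torch.randn(32, latent_dim))),
                 torch.ones(32, 1))
g_opt.zero_grad()
g_loss.backward()
g_opt.step()
```

Each side improves by trying to beat the other: the discriminator gets better at spotting fakes, which forces the generator to produce ever more convincing ones.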

A version of this technique converts the new content back to the style of the original as a way of assessing the quality of the conversion. Researchers compared this iteration, called cycle-GAN, to checking the quality of an English-to-Spanish translation by translating the resulting Spanish back into English.
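In code, that round-trip check boils down to a single reconstruction penalty. The sketch below, again using stand-in networks rather than the researchers' models, shows the idea: translate, translate back, and measure how far the result drifts from the original. Recycle-GAN reportedly builds on this by also checking the round trip across consecutive frames, which is what keeps the motion consistent over time.

```python
# An illustrative cycle-consistency check in the spirit of cycle-GAN
# (a sketch under assumed toy networks, not the researchers' implementation).
import torch
import torch.nn as nn

dim = 64
G_xy = nn.Linear(dim, dim)  # stand-in translator: domain X -> domain Y
G_yx = nn.Linear(dim, dim)  # stand-in translator: domain Y -> domain X

x = torch.randn(8, dim)     # a batch of "frames" from domain X
round_trip = G_yx(G_xy(x))  # X -> Y -> X, like English -> Spanish -> English

# Cycle-consistency loss: the round trip should land back on the original.
cycle_loss = nn.functional.l1_loss(round_trip, x)
print(cycle_loss.item())
```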

Obviously, this is not an ironclad method. There are imperfections in many of the resulting videos—like the strange movement of Obama’s shirt in the attached video—but the progress is nonetheless impressive. The researchers are aware that their technology has the potential for nefarious uses, not just in pornography but also in the frightening possibility of faking video evidence.

They are quick to point out its potential benefits as well, of course. It could be used across the video-editing spectrum, from converting black-and-white footage to color to, potentially, aiding the development of autonomous vehicles. Researchers say footage of hazards captured during the day could theoretically be converted into more visually difficult conditions, such as nighttime or inclement weather, to help train those systems.

It is hard to say how soon the technology could be put to uses that advanced, but its potential is far-reaching, according to Carnegie Mellon researchers.

H/T New Atlas
