The desperate race to stop fake celebrity porn

Websites are using software to detect fake celebrity porn. Will it be enough?

Jay Hathaway

Tech

Posted on Feb 22, 2018   Updated on May 22, 2021, 12:00 am CDT

The AI-assisted face-swap videos known as deepfakes hit a wall earlier this month. The videos, which are mainly used to create convincing fake celebrity porn, were banned on several major websites, including Reddit and Twitter. The subreddits where people initially shared—and even tried to sell—deepfakes were also shut down. But the demand for deepfakes is still there, and any ban is contingent on being able to determine that a video isn’t real.

Websites are now using software to flag these controversial videos as soon as they’re uploaded. They’re fighting machine learning with machine learning, and GIF-hosting website GFYcat is leading the way.

GFYcat was the first place on the internet where deepfake creators hosted their work, but the site has since banned the videos and now blocks them through a combination of AI software and human moderation. The company is repurposing two of its existing tools, Angora and Maru, to spot deepfakes.

“We developed Angora and Maru for other purposes: Angora for improving the visual quality of content already on our platform, and Maru for recognizing celebrities and influencers in GIFs,” a GFYcat spokesperson told the Daily Dot. “When we initially identified deepfake content on our platform, we were fortunate enough to have robust pre-existing AI programs that could be adapted to the purpose of spotting deepfakes.”
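GFYcat hasn’t published how the two systems fit together, but the combination suggests a natural check: if a recognizable celebrity’s face shows up on footage the site has never indexed for that celebrity, something is off. A rough sketch of that logic in Python follows; every name, data structure, and threshold here is hypothetical, not GFYcat’s actual code.

```python
# Hypothetical sketch of how a face-recognition tool (Maru's job) and a
# content-matching tool (Angora's domain) could be combined to flag a
# suspected deepfake. None of these names or thresholds are GFYcat's code.

from dataclasses import dataclass

@dataclass
class Frame:
    face_embedding: list   # vector from a face-recognition model
    scene_hash: str        # fingerprint of the frame with the face masked out

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def flag_for_review(frame, celebrity_faces, indexed_scenes, threshold=0.8):
    """Flag when a known celebrity's face sits on footage that was
    originally indexed as someone else's."""
    for name, ref in celebrity_faces.items():
        if cosine_similarity(frame.face_embedding, ref) >= threshold:
            source = indexed_scenes.get(frame.scene_hash)
            if source is not None and source != name:
                return f"possible deepfake: {name}'s face on {source}'s footage"
    return None

# Toy example: a face that matches "celebrity_x" on a scene indexed to an
# unrelated performer gets flagged for human review.
frame = Frame(face_embedding=[0.9, 0.1], scene_hash="abc123")
print(flag_for_review(
    frame,
    celebrity_faces={"celebrity_x": [1.0, 0.0]},
    indexed_scenes={"abc123": "performer_y"},
))
```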

Humans at GFYcat are still teaching the software to spot fakes on the website. “AI flags content as likely fake, a human reviewer makes a final determination on whether it violates ToS, then it’s added back into our training data to help improve the accuracy of the model,” GFYcat explained.
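That review loop is simple to picture in code. Below is a generic human-in-the-loop pipeline matching the flow GFYcat describes, with stub classes standing in for whatever classifier and review queue the site actually runs.

```python
# A generic human-in-the-loop moderation loop: AI flags, a human decides,
# and the decision feeds back into the training set. StubModel and
# StubReviewer are placeholders, not GFYcat's real components.

class StubModel:
    def predict(self, video):
        return 0.9  # pretend probability that the upload is a deepfake

    def fit(self, videos, verdicts):
        pass        # retraining on reviewed examples would happen here

class StubReviewer:
    def review(self, video):
        return "violates_tos"  # human's final ToS determination

def moderate_upload(video, model, reviewer, training_data, threshold=0.5):
    score = model.predict(video)              # AI flags content as likely fake
    if score < threshold:
        return "published"                    # low-risk uploads go through
    verdict = reviewer.review(video)          # human makes the final call
    training_data.append((video, verdict))    # fold the decision back in
    model.fit(*zip(*training_data))           # improve the model over time
    return "removed" if verdict == "violates_tos" else "published"

training_data = []
print(moderate_upload("clip.gif", StubModel(), StubReviewer(), training_data))
# -> "removed", and training_data now holds one labeled example
```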

For now, the fakes are easy to spot. Making the faces look seamless still takes considerable time, a powerful GPU, and plenty of high-quality footage of the target. Plus, most of the faces belong to celebrities, and most of the footage comes from professional porn videos. Some of the videos even carry “FakeApp” watermarks, advertising the desktop application that made deepfakes go mainstream.
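The reason it takes so much footage and GPU time comes down to the underlying trick. FakeApp descends from the original deepfakes code, which trains one shared encoder alongside a separate decoder per face; swapping happens by decoding one person’s frames with the other person’s decoder. The PyTorch sketch below shows the shape of that idea, with illustrative layer sizes rather than the app’s real architecture.

```python
# A minimal sketch of the autoencoder trick behind FakeApp-style face
# swaps: one shared encoder learns a common face representation, and a
# separate decoder per identity learns to reconstruct that person.
# Layer sizes are illustrative, not FakeApp's actual architecture.

import torch
import torch.nn as nn

class FaceSwapper(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(        # shared across both identities
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, latent_dim),
            nn.ReLU(),
        )
        def make_decoder():
            return nn.Sequential(
                nn.Linear(latent_dim, 64 * 64 * 3),
                nn.Sigmoid(),
            )
        self.decoder_a = make_decoder()      # reconstructs identity A
        self.decoder_b = make_decoder()      # reconstructs identity B

    def forward(self, face, identity):
        z = self.encoder(face)
        decoder = self.decoder_a if identity == "a" else self.decoder_b
        return decoder(z).view(-1, 3, 64, 64)

# Training reconstructs each person through their own decoder; the swap
# at inference time runs a frame of A through B's decoder.
model = FaceSwapper()
frame_of_a = torch.rand(1, 3, 64, 64)
swapped = model(frame_of_a, identity="b")  # untrained here, so output is noise
```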

But some deepfake aficionados are talking about targeting relatively unknown women—their friends, classmates, even relatives. In a future with even better video processing and without recognizable celebrity faces and bodies, a human moderator might not always be able to tell the difference. And that’s where machine learning could become extremely powerful.

The promise of machine learning is that sufficiently trained software can spot patterns that humans can’t. Whether it’s GFYcat’s tools or something else, it should be possible for computers to notice signs of a deepfake that the human brain would miss: tiny imperfections in the face replacement, say, or a background match with an existing porn video.
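The background-match signal in particular is cheap to compute. A perceptual hash of each frame, taken with the face region blanked out, lands within a few bits of the hash of the untouched source frame no matter whose face has been pasted in. Here’s a minimal sketch using Pillow, assuming the face box comes from a separate face detector; it could supply the scene fingerprint used in the earlier sketch.

```python
# Computing a scene fingerprint: a perceptual difference-hash of the frame
# with the (possibly swapped) face blanked out, so matching keys off the
# background and body rather than the face. Uses Pillow; the face box
# would come from a separate face detector (not shown).

from PIL import Image

def dhash(image, hash_size=8):
    """Difference hash: survives re-encoding, scaling, and small edits."""
    gray = image.convert("L").resize((hash_size + 1, hash_size))
    pixels = list(gray.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (left < right)
    return bits

def masked_scene_hash(frame, face_box):
    masked = frame.convert("RGB")
    masked.paste((0, 0, 0), face_box)   # blank out the face region
    return dhash(masked)

def hamming_distance(h1, h2):
    return bin(h1 ^ h2).count("1")

# Frames whose masked hashes land within a few bits of an already-indexed
# video are strong candidates for "known face pasted onto known footage."
frame = Image.new("RGB", (640, 360), "gray")
print(hamming_distance(masked_scene_hash(frame, (200, 50, 400, 250)),
                       masked_scene_hash(frame, (210, 60, 410, 260))))
```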

It might sound like this is setting up a technological arms race between the AIs that make fake porn and the AIs that stop it, but that might not be the case. From the consumer’s perspective, the porn only has to create a half-convincing illusion. After all, guys are already cranking it to wobbly, glitchy-eyed versions of Emma Watson and Ariana Grande. Sure, the tech will get better, but there’s just not much incentive to make it too good to detect. No court case has tested the legality of deepfakes, and making the videos too convincing might actually make them less defensible as “parody.”

Plus, there’ll always be shady corners of the internet to host the videos (see 4chan, 8chan, private websites, and international BitTorrent trackers). The cache of real celebrity nudes that leaked in 2014 still hasn’t been eradicated from the internet, but you don’t see it on brightly lit social networks like Facebook or even grey-ish spaces like Reddit.

From the perspective of the people making deepfakes, AI detection isn’t much of a deterrent. It’s kind of like putting a chunky lock on a bicycle: it discourages casual thieves, but anyone determined will find a way around it.

“There’s a certain irony to discussing the idea of machine learning being used to detect uses of machine learning,” redditor derpfakes, who’s known for high-quality non-porn fakes of subjects like Nicolas Cage, told the Daily Dot.

“It would probably be somewhat effective but then it becomes a cat and mouse of ever-advancing technology—as deepfakes and the technology improves, the detection of them also has to improve,” derpfakes said. “The funny part is that that is almost the fundamental point of machine learning technology and so [the cat and mouse game] ends in something of a stalemate.”

FakeApp, which allows people to easily create their own deepfake videos, doesn’t plan on making any changes to avoid software detection.

“FakeApp will never try to implement techniques to circumvent [deepfake] detection software, because my goal has never been to pass off these creations as real videos (which I think is profoundly unethical),” the creator of the app, who wished to remain anonymous, told the Daily Dot. “Most video hosting platforms have drawn a distinction between ethical and unethical use of deepfakes and allowed the former, but if a site doesn’t want to allow deepfakes period and puts up some system to detect and ban them, I see no reason to try to fight them.”

Major porn site Pornhub announced its own ban on deepfakes shortly after GFYcat did, but a spokesperson for the site said it wasn’t using any AI measures to block the videos.

“We are using a mixture of keyword blocking and relying on our community and team of moderators to flag [deepfakes],” Pornhub told the Daily Dot. “Anyone finding themselves depicted on Pornhub without their consent or who encounters content they suspect may be nonconsensual is encouraged to make use of our content removal form so that it can promptly be removed from our site.”
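Keyword blocking is the bluntest of those instruments: essentially a blocklist check on titles and tags before an upload goes live. A minimal sketch, with an illustrative list of terms:

```python
# A minimal sketch of keyword blocking: check upload titles and tags
# against a blocklist before anything is published. The terms here are
# illustrative, not Pornhub's actual list.

BLOCKED_TERMS = {"deepfake", "deepfakes", "fakeapp", "faceswap"}

def passes_keyword_filter(title, tags):
    words = {w.lower().strip(".,!?") for w in title.split()}
    words |= {t.lower() for t in tags}
    return words.isdisjoint(BLOCKED_TERMS)

print(passes_keyword_filter("totally real video", ["celebrity"]))  # True
print(passes_keyword_filter("Emma Watson DeepFake", ["hd"]))       # False
```

The obvious weakness is that uploaders can simply stop using the words, which is why community flagging and human moderators carry most of the weight.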

The ban announcement and user flagging have apparently been enough to discourage uploaders.

The future of deepfakes is still a bit of a question mark, though. As the quality of deepfakes goes up, what’s the minimum level of security that will keep them away from popular image-hosting sites and porn spaces? Will we see mainstream demand for custom videos, or will this remain a passing curiosity that only really appeals to a niche of horny nerds? Will the law even attempt to keep up with the technology?

We were all briefly scandalized by the existence of deepfakes. But in 2018, when there’s so much scandalous news that it’s physically and emotionally impossible to care about all of it, how much effort will public or private authorities expend to stop them?

“Personally I would prefer people to use some critical thinking skills to make judgment themselves on the authenticity of what they are seeing, but then websites or other outlets are well within their rights to try to moderate what is shared on their platform,” derpfakes said. “At the end of the day, the technology isn’t going anywhere and society is going to have to adapt to it and its ever-changing capabilities and applications.”

Here’s hoping that doesn’t mean we’ll all end up starring in our own deepfakes.

Update: 5pm CT, Feb. 22: This story was updated with a quote from the creator of FakeApp.

First Published: Feb 22, 2018, 8:48 am CST