Computer scientists have developed a new way of detecting deepfake videos that they claim is more accurate than existing state-of-the-art methods.
In a paper titled “Detection and Localization of Facial Expression Manipulations,” researchers at the University of California, Riverside outlined how the new technique can detect not only face swaps but also media in which only the subject’s facial expressions have been digitally altered.
“Facial manipulations can be created by Identity swap (DeepFake) or Expression swap,” the paper states. “Contrary to the identity swap, which can easily be detected with novel deepfake detection methods, expression swap detection has not yet been addressed extensively.”
While even the best deepfakes can often still be discerned by the naked eye, readily available filters and tools designed simply to add a smile or scowl to an individual’s face can be much more difficult to detect.
The framework, known as Expression Manipulation Detection (EMD), is also designed to identify the specific areas of the face that have been manipulated. Lead researcher Ghazal Mazaheri told UC Riverside News that “facial expression recognition systems” were central to the new method’s ability to recognize alterations.
“Multi-task learning can leverage prominent features learned by facial expression recognition systems to benefit the training of conventional manipulation detection systems,” Mazaheri said. “Such an approach achieves impressive performance in facial expression manipulation detection.”
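To make the multi-task idea concrete, here is a minimal toy sketch of that architecture: a shared feature extractor feeds two heads, one classifying a face crop as real or manipulated, the other producing a coarse per-region manipulation map. The layer sizes, random weights, and the `forward` function are illustrative assumptions for this sketch, not the actual EMD architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Shared encoder: maps a flattened 8x8 face crop to a 16-dim feature vector.
# In a real system this would be a trained convolutional network.
W_shared = rng.normal(size=(64, 16))

# Task head 1: binary classification (real vs. manipulated).
W_cls = rng.normal(size=(16, 1))

# Task head 2: localization, a coarse 4x4 map of manipulated regions.
W_seg = rng.normal(size=(16, 16))

def forward(face_crop):
    """Run both task heads on one shared representation (the multi-task idea)."""
    features = relu(face_crop.reshape(1, 64) @ W_shared)
    manipulated_prob = sigmoid(features @ W_cls)[0, 0]      # scalar in [0, 1]
    manipulation_map = sigmoid(features @ W_seg).reshape(4, 4)
    return manipulated_prob, manipulation_map

crop = rng.random((8, 8))          # stand-in for a normalized face crop
prob, seg_map = forward(crop)
print(prob, seg_map.shape)
```

Because both heads read the same features, training them jointly lets cues learned for expression recognition inform manipulation detection, which is the leverage Mazaheri describes.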
After testing the method against two facial manipulation datasets, the researchers concluded that EMD detected manipulated videos with 99% accuracy.
“Experiments on two challenging datasets demonstrate our method has better classification and segmentation performance in facial expression manipulation detection in comparison to state-of-the-art results,” the paper concludes. “Also, our method is close to the state-of-the-art methods for other kinds of manipulation (identity swap) detection, thus ensuring generalizability.”
While the research undoubtedly signals a step forward in detecting manipulated media, the balance will almost certainly continue to shift back and forth as the arms race between deepfake creators and detectors advances.