The number of deepfake videos has been rapidly increasing. A deepfake, a form of fake media created using deep learning, is yet another way to spread misinformation.
New security measures consistently catch many deepfake images and videos, but unfortunately, deepfakes will only get easier to generate and harder to detect as computers become more powerful and learning algorithms become more sophisticated.
In a recent paper, Electrical and Computer Engineering master's students Apurva Gandhi and Shomik Jain of the USC Viterbi School of Engineering, Los Angeles, showed how slightly modified deepfake images could fool even the most sophisticated detectors. A team at the University of California San Diego arrived at similar conclusions about deepfake videos.
Today’s state-of-the-art deepfake detectors are based on convolutional neural networks. While these models initially seem very accurate, they have a major flaw. Gandhi and Jain showed that these detectors are vulnerable to adversarial perturbations: small, strategically chosen changes to just a few pixel values in an image.
The neural networks the two trained initially identified over 95% of ordinary deepfakes. But once the images were adversarially perturbed, the detectors caught (checks notes) zero percent of them. Under the right circumstances, this technique essentially renders our entire deepfake security apparatus obsolete.
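To get an intuition for how such an attack works, here is a minimal toy sketch of a gradient-sign (FGSM-style) perturbation. It is not the paper's actual method or model: the "detector" below is just a linear classifier standing in for a CNN, and all names and values are illustrative assumptions. The idea is the same, though: nudge each pixel a tiny amount in the direction that lowers the "deepfake" score.

```python
import numpy as np

# Toy stand-in for a deepfake detector: a linear classifier
# score(x) = w . x + b, where score > 0 means "flagged as deepfake".
# (Real detectors are CNNs; the gradient-based attack idea carries over.)
rng = np.random.default_rng(0)
w = rng.normal(size=64)  # pretend "pixel" weights
b = 0.0

def score(x):
    return float(w @ x + b)

# An input the detector correctly flags as a deepfake
# (constructed to align with w, so its score is positive).
x = w / np.linalg.norm(w)
assert score(x) > 0

# FGSM-style perturbation: for a linear model, the gradient of the
# score with respect to x is just w, so step each "pixel" by a small
# epsilon against the sign of that gradient.
epsilon = 0.2
x_adv = x - epsilon * np.sign(w)

print("original score:", score(x))      # positive -> caught
print("perturbed score:", score(x_adv))  # negative -> slips through
```

The per-pixel change is bounded by epsilon, so to a human the adversarial image can look essentially identical to the original, yet the classifier's decision flips; that is the mismatch between over-95% accuracy on clean deepfakes and zero percent on perturbed ones.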
Source: i-hls.com
