Imagine that you can dance like Bruno Mars or sing like Whitney Houston in just one day. The technology used to make anyone do or say things they have never done may seem complicated — but it's very accessible. All that's needed is some visual footage or a recording of a voice to start creating an alternative reality.
This kind of manipulated content, created with the help of AI technology, is known as synthetic content or, more widely, as a deepfake.
Deepfakes for media spoofing
One phenomenon that has kept recurring in the past year is media spoofing: someone creates a fake account in the name of a media outlet in order to fool people and perhaps even spread disinformation. The spoofers copy the profile photo and choose usernames similar to the original to make their fake accounts look as authentic as possible.
Sometimes the logo and font of a media outlet are also used to create fake content. We have analyzed some examples of this in previous fact checks.
Now such spoofing cases also occur with the help of deepfakes. In this video, a DW employee appears to be advertising an incredible investment opportunity. The creators took a clip from a DW News segment and generated a deepfake to make it look as if correspondent Benjamin Alvarez Gruber were endorsing the offer. In reality, the content is fake and leads to an investment scam.
There are a few hints that reveal we are looking at a deepfake here. Because the quality of deepfakes keeps improving, classic tips such as "watch out for unusual mouth movements" or "the video quality is bad" are not always helpful anymore.
However, there is an inconsistency between what Alvarez Gruber is saying in the deepfake and the movement of his lips. If you enlarge the video and zoom in, you can see that the words and the lip movements do not match. Sometimes the correspondent's teeth also seem to disappear at moments when his mouth is open and they should be visible. Such small errors often help us identify a fake.
We also advise checking such content against several sources. In this case, it is important to look at Benjamin Alvarez Gruber's real social media accounts and see whether the video appears there as well.
One additional step could be to include a deepfake detector in your research. You could check a suspicious video with detection software such as the one included in the InVID verification plugin, which was developed by DW and other stakeholders. Be aware, though, that these programs do not always deliver accurate verdicts, as several of them are still at a developmental stage. In the case of the investment scam, the detector rated the video as a deepfake with a probability of 94%.
What are deepfakes?
Deepfake is a term describing audio and video files that have been created using artificial intelligence, or "machine learning" to be more exact.
All sorts of deepfakes are possible: face swaps, where the face of one person is replaced with that of another; lip synchronization, where the mouth of a speaking person is adjusted to match an audio track different from the original; and voice cloning, where a voice is "copied" so it can be made to say anything.
Completely synthetic faces and bodies can also be generated, for example as digital avatars. With deepfake technology, even dead people can be brought back to life, as the Dalí Museum in Florida did with artist Salvador Dalí.
These synthetic video manipulations are produced with so-called Generative Adversarial Networks, or GANs.
A GAN is a machine learning model in which two neural networks compete with each other to become more accurate in their results.
In simple terms: One computer is telling the other computer if the digital clone it has created of you is convincing enough by comparing it with the original material. Do you move the same, do you sound the same, is your expression the same? The system improves itself with multiple attempts until it’s happy with the result.
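The feedback loop described above can be sketched numerically. The following is a deliberately tiny, hypothetical toy, not a real image-generating GAN: the "generator" is a single number, the "real" data is the constant 4.0, and the "discriminator" is a one-parameter logistic classifier. All the numbers and names here are illustrative assumptions, but the loop follows the adversarial pattern: the discriminator learns to tell real from generated, and the generator keeps adjusting until its output is hard to distinguish from the original material.

```python
import math

def sigmoid(x: float) -> float:
    """Squash a score into a 0..1 'how real does this look?' probability."""
    return 1.0 / (1.0 + math.exp(-x))

def train_toy_gan(steps: int = 5000) -> float:
    """Toy adversarial loop: a scalar generator vs. a logistic discriminator.

    Hypothetical setup for illustration only: 'real' data is the constant 4.0,
    and the generator starts far away at 0.0.
    """
    real = 4.0            # the "original material" the clone is compared against
    g = 0.0               # the generator's current output
    w, b = 0.0, 0.0       # discriminator parameters: D(x) = sigmoid(w*x + b)
    lr_d, lr_g = 0.05, 0.1

    for _ in range(steps):
        d_real = sigmoid(w * real + b)
        d_fake = sigmoid(w * g + b)

        # Discriminator step: push D(real) toward 1 and D(fake) toward 0
        # (gradient of the standard binary cross-entropy loss).
        grad_w = -(1.0 - d_real) * real + d_fake * g
        grad_b = -(1.0 - d_real) + d_fake
        w -= lr_d * grad_w
        b -= lr_d * grad_b

        # Generator step: nudge g so the discriminator scores it as "real".
        d_fake = sigmoid(w * g + b)
        grad_g = -(1.0 - d_fake) * w
        g -= lr_g * grad_g

    return g

if __name__ == "__main__":
    final = train_toy_gan()
    print(f"generator output after training: {final:.2f} (real data is 4.0)")
```

After many rounds, the generator's output settles near the real data, even though it never sees the real value directly; it only ever learns from the discriminator's verdicts. Real GANs work on the same principle but replace both players with deep neural networks and feed the generator random noise so it can produce varied faces, voices or bodies rather than a single value.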
Although this technology is continuously improving and is highly sophisticated, you can still spot deepfakes if you know where to look.
Spotting the (in)visible
You do not need to become a deepfake expert to distinguish what is real from what is fake. Here are some tips:
1. Slow down and look again. Think before you share. Ask yourself: Can this really be true? Would you expect this to happen? If you are not sure, don't share.
2. Do a quick check to see if you can find the same story or narrative from different and trustworthy sources. A brief internet search on a headline will give you leads on the real story.
3. Find another version and compare. If you do not trust a claim, an image or a video, then describe it in a Google or DuckDuckGo search, find another version, and then compare the two versions. You can use a standard internet search for this or try a reverse image search.
Detecting (almost in)visible traces in synthetic and manipulated media is a much bigger challenge. Such manipulation can be detected by looking for strange "jumps" in a video, a change of voice emphasis, low-quality audio, blurred spots, strange shapes of limbs, and other unusual inconsistencies. Trust your senses and gut feeling. Always ask yourself: Does this make sense? Could this really be true? Look carefully and always look twice. Focus on details and ask a friend or colleague for a second opinion.
4. Check for known deepfake giveaways: a perfectly symmetrical face; mismatched earrings or eyeglass frames; unusual ear, nose and tooth shapes; loss of contrast; inconsistencies in the neck area or hair; fingers that appear disconnected.
Sometimes you will need to watch a video frame by frame to detect these inconsistencies. You can do that with a local video player (for example VLC) or online with watchframebyframe.com.
5. Zoom in on mouth and lip movements and compare them with natural human behavior to detect lip synchronization. What should a mouth look like when making a certain sound?
Sharpen your senses
You have noticed that an essential aspect of verification is using your senses. And the good news is, you can train those. In this training, you will find exercises to sharpen your vision and hearing skills. Doing these exercises will make you more confident in detecting synthetic and manipulated media.
Dangers of deepfake technology
The impact of deepfake technology is profound in the domain of pornography, including so-called revenge porn. Fake porn videos and images are published widely and cause real harm to their victims, who range from celebrities to schoolchildren.
For society, the danger of deepfakes also lies in the way media is consumed nowadays. The average person is inundated with media while online and is not always certain that what they share is actually true.
In polarized societies, that behavior leaves ample opportunity to fool people into believing something, regardless of its veracity. The quality of the video isn't even all that important. It's about what you have apparently seen with your own eyes, even if it isn't true: that then-Greek Finance Minister Yanis Varoufakis gave Germany the middle finger; that David Beckham spoke nine languages; or that Mark Zuckerberg said he controls you because he controls your stolen data.
One politically motivated deepfake that went viral in the Netherlands was created by the news site "De Correspondent" and appears to show Dutch Prime Minister Mark Rutte announcing a major change in his policy: from now on, he would fully support far-reaching climate measures.
Then there is the "liar's dividend": some politicians profit from an informational environment saturated with misinformation. The mere existence of deepfake technology allows people to claim that any recording of them is a deepfake, and proving that it is actually real is extremely challenging. The best-known example is Donald Trump calling the "grab them by the p****" recording "a fake" even after initially apologizing for it.
Most disinformation is published for a reason: to create doubt, to support popular beliefs, or to loudly oppose other beliefs. It is very challenging to verify images and sound that have been stripped of context, edited or staged. Still, for now, you can train yourself to better spot a deepfake:
Remember: If you're not sure, don't share!
Edited by: Stephanie Burnett