
Explained: What are Deepfakes and How To Spot Them 2023

What is a deepfake?

Have you witnessed Barack Obama uttering derogatory words about Donald Trump, Mark Zuckerberg claiming control over stolen data from billions of individuals, or Jon Snow apologising for the controversial ending of Game of Thrones? If you have, you've encountered a deepfake. Deepfakes are the 21st-century counterpart to Photoshopping: they use a form of artificial intelligence called deep learning to fabricate convincing images, video and audio of events that never happened, hence the name "deepfake." Whether you want to put words in a politician's mouth, star in your favourite movie, or dance like a professional, deepfake technology offers the means to do so.

What are they used for?

Unfortunately, a significant portion of deepfakes is explicit in nature. In September 2019 the AI firm Deeptrace found roughly 15,000 deepfake videos online, nearly double the number from nine months earlier. An astonishing 96% of these videos were pornographic, and 99% of those mapped the faces of female celebrities onto adult-film performers. As advances in the technology make it possible for amateurs to create deepfakes from just a handful of photos, there is growing concern that such manipulated videos will spread beyond celebrities and fuel revenge porn. As Professor Danielle Citron of Boston University Law School puts it succinctly, "Deepfake technology is being weaponised against women." Beyond explicit content, deepfakes are also used for satire and humour.

Is it limited to videos?

No, deepfake technology is not confined to videos. It can generate entirely fabricated images from scratch, as with "Maisy Kinsley", a non-existent Bloomberg journalist, and "Katie Jones", who claimed to work at the Center for Strategic and International Studies; both personas are suspected to be deepfakes created for espionage purposes. Audio can be manipulated too, producing "voice skins" or "voice clones" of public figures that can be used for fraud. In one reported case, the chief executive of a UK subsidiary of a German energy firm was tricked into transferring funds by a fraudster who mimicked the voice of the parent company's CEO, possibly with the help of a deepfake. Similar scams have reportedly used manipulated WhatsApp voice messages.

How are deepfakes created?

The creation of deepfakes involves multiple steps. Initially, thousands of facial images of two individuals are processed through an AI algorithm called an encoder, which identifies and learns the shared features between the two faces, compressing the images in the process. A second AI algorithm, the decoder, is then trained to restore the faces from the compressed images. Since the faces are different, one decoder is trained to reconstruct the first person’s face, while another is trained for the second person’s face. To perform a face swap, encoded images are fed into the “wrong” decoder, effectively merging the features of one face with the expressions and orientation of the other. This process must be executed for each frame to create a convincing video. Alternatively, generative adversarial networks (GANs) can be employed to generate realistic faces of entirely non-existent individuals through the competition between two AI algorithms.
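To make the encoder/decoder idea concrete, here is a minimal sketch in PyTorch of the shared-encoder, two-decoder setup described above. It is illustrative only: the tiny fully connected networks, the 64x64 image size, and the random tensors standing in for aligned face crops are all simplifying assumptions, not how production deepfake tools are built.

```python
# Minimal sketch of the shared-encoder / two-decoder face-swap idea.
# Assumption: random tensors stand in for aligned 64x64 RGB face crops
# of person A and person B.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a face image into a small latent code, shared by both people."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face image from the latent code; one decoder per person."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * 64 * 64), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

faces_a = torch.rand(8, 3, 64, 64)  # stand-in for person A's face crops
faces_b = torch.rand(8, 3, 64, 64)  # stand-in for person B's face crops

params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

# Train each decoder to reconstruct its own person from the shared latent space.
for step in range(100):
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The "face swap": feed person A's encoded faces into the *wrong* decoder (B's),
# so A's expression and pose are rendered with B's facial identity.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

Real face-swapping tools use convolutional networks, careful face alignment and blending, and repeat the swap for every frame of the video, but the underlying trick of routing one person's encoded face through the other person's decoder is the same.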

Who is responsible for creating deepfakes?

Deepfakes are produced by a wide range of individuals and organisations, including academic researchers, visual effects studios, amateur enthusiasts and even the adult film industry. Governments may also turn to the technology for their own purposes, such as discrediting and disrupting extremist groups or making contact with targeted individuals.

What technology is required to create deepfakes?

Generating high-quality deepfakes typically demands a high-end desktop computer with a powerful graphics card, or better still cloud computing resources, which cut processing time dramatically. Expertise is also needed to touch up the finished videos and remove visual imperfections. That said, plenty of readily available tools now help people create deepfakes, some companies offer to produce them as a cloud service, and mobile apps such as Zao let users add their faces to scenes from TV shows and films on which the system has been trained.


How can you identify a deepfake?

As technology advances, detecting deepfakes becomes increasingly challenging. In the past, researchers found that deepfake faces did not blink realistically, but this flaw has since been addressed. Poorly made deepfakes may display signs like inconsistent lip syncing, uneven skin tones, flickering around transposed faces, and difficulties rendering fine details like hair or jewellery. Governments, universities, and tech companies are funding research to enhance deepfake detection methods, aiming to stay ahead of the technology. Social media platforms like Facebook have implemented policies to counter deepfakes that could mislead viewers.
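To illustrate the kind of automated check early detection research relied on, here is a rough Python sketch that counts eye blinks in a video using the classic eye-aspect-ratio heuristic. It assumes dlib, OpenCV and SciPy are installed and that the standard 68-point facial landmark model file has been downloaded; the threshold, the video filename and the approach itself are illustrative, and as noted above an unusual blink rate is at best a weak signal against modern deepfakes.

```python
# Illustrative blink-rate check of the kind early deepfake-detection research used.
# Assumptions: dlib, OpenCV and scipy are installed, and the standard
# "shape_predictor_68_face_landmarks.dat" model file is in the working directory.
import cv2
import dlib
from scipy.spatial.distance import euclidean

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

LEFT_EYE = range(42, 48)   # landmark indices for the left eye
RIGHT_EYE = range(36, 42)  # landmark indices for the right eye

def eye_aspect_ratio(p):
    """Ratio of eye height to width; it drops sharply when the eye closes."""
    return (euclidean(p[1], p[5]) + euclidean(p[2], p[4])) / (2.0 * euclidean(p[0], p[3]))

def count_blinks(video_path, closed_thresh=0.21):
    cap = cv2.VideoCapture(video_path)
    blinks, eye_closed = 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
            ear = (eye_aspect_ratio([pts[i] for i in LEFT_EYE]) +
                   eye_aspect_ratio([pts[i] for i in RIGHT_EYE])) / 2.0
            if ear < closed_thresh and not eye_closed:
                blinks += 1          # eye just closed: count one blink
                eye_closed = True
            elif ear >= closed_thresh:
                eye_closed = False   # eye open again
    cap.release()
    return blinks

# A genuine video of a talking person typically shows several blinks per minute;
# an unusually low count is only a hint, not proof of a deepfake.
print(count_blinks("suspect_clip.mp4"))  # hypothetical filename
```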

Will deepfakes have widespread consequences?

While it’s expected that deepfakes will be used for harassment, manipulation, and disinformation, the potential for major international incidents remains uncertain. It is unlikely that a deepfake of a world leader pressing a doomsday button would result in a global catastrophe. However, deepfakes could still influence stock prices, sway elections, and provoke religious tensions, posing significant challenges.

Will they undermine trust?

One of the more insidious effects of deepfakes, along with other forms of synthetic media and fake news, is the erosion of trust in society. When people can no longer reliably distinguish between genuine and fabricated content, skepticism about real-world events and incidents becomes more prevalent. This can have serious implications in various areas, including the legal system, personal security, and biometric recognition technology. Scams could potentially exploit deepfakes to mimic biometric data, making them more convincing to unsuspecting individuals.

What’s the solution?

Ironically, artificial intelligence may hold the answer to combating deepfakes. Tech companies are developing detection systems designed to flag fakes whenever they appear. Another strategy focuses on the provenance of media: digital watermarks and blockchain-style ledgers can keep tamper-evident records of videos, images and audio, so that their origins and any subsequent manipulation can be checked.
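To show the core idea behind tamper-evident provenance records, here is a small, self-contained Python sketch that fingerprints a media file with SHA-256 and chains each record to the previous one, in the spirit of a blockchain-style ledger. It is a toy illustration under stated assumptions, not any real watermarking or provenance standard, and the file names and source labels are hypothetical.

```python
# Minimal sketch of a tamper-evident provenance ledger for media files.
# Each entry stores the SHA-256 fingerprint of the file plus the hash of the
# previous entry, so altering either the file or the history is detectable.
import hashlib
import json
import time

def file_fingerprint(path):
    """SHA-256 hash of the media file's bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def append_record(ledger, path, source):
    prev_hash = ledger[-1]["entry_hash"] if ledger else "0" * 64
    record = {
        "file": path,
        "fingerprint": file_fingerprint(path),
        "source": source,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    # Hash of the record itself chains it to everything registered before it.
    record["entry_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(record)
    return record

def verify(ledger, path):
    """True only if the chain is intact and the file matches its recorded fingerprint."""
    prev = "0" * 64
    for rec in ledger:
        body = {k: v for k, v in rec.items() if k != "entry_hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev_hash"] != prev or recomputed != rec["entry_hash"]:
            return False
        prev = rec["entry_hash"]
    return any(rec["file"] == path and rec["fingerprint"] == file_fingerprint(path)
               for rec in ledger)

# Hypothetical usage: register an original clip, then check a copy later.
ledger = []
append_record(ledger, "press_conference.mp4", source="Official broadcaster")
print(verify(ledger, "press_conference.mp4"))
```

Real provenance schemes typically also sign records cryptographically and embed identifiers or watermarks in the media itself, but the verification principle is the same: if a file's fingerprint no longer matches its record, it has been altered since it was registered.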

Are all deepfakes malicious?

No, not all deepfakes are intended for harm. Some serve entertainment or educational purposes. Voice-cloning deepfakes can help people regain their voices when illness takes them away, museums and galleries use the technology to enhance visitor experiences, and in the entertainment industry it can improve the dubbing of foreign-language films or even bring deceased actors back to the screen. Shallowfakes, a related category of content manipulated with conventional editing tools rather than AI, can likewise be potent vehicles for political messaging and for shaping public perception.