Deepfakes: AI and the Election

October 31st, 2020 by Audrey Tam


As the 2020 United States presidential election rapidly approaches, the issue of foreign interference has resurfaced. U.S. officials have accused Russia, China, and Iran of attempting to manipulate election results; regardless of one’s political leaning, it is evident that election security is vital. With the rise of new technologies such as deepfakes come new concerns about their role in American democracy. How are deepfakes created? What are their uses? And perhaps most importantly, are they a threat?

Before answering these questions, it is important to know what a deepfake is. Deepfakes are artificially produced media in which one person’s likeness replaces another’s in an existing video. Unlike other forms of fake media, deepfakes use artificial intelligence to create content with great power to deceive.

How are deepfakes created?

The word “deepfake” is a portmanteau of “deep learning” and “fake”. Deep learning is a branch of machine learning that relies on neural networks, or systems loosely inspired by how the brain processes information.

Neural networks come in many varieties, each with different strengths and purposes. The network at the heart of deepfakes is the autoencoder.

- An autoencoder’s job is to reproduce its input. It does so by compressing and encoding the data, then reconstructing that compressed representation back into an approximation of the original input.

- An autoencoder consists of two parts: an encoder and a decoder. The encoder compresses the data, reducing an image to a lower-dimensional latent space. The decoder reconstructs the image from the latent representation generated by the encoder.

In a deepfake, the encoder compresses an image of a person into key aspects of their appearance, such as facial features and body posture. This latent information is then decoded and superimposed onto a target.
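The compress-then-reconstruct cycle above can be sketched with a tiny linear autoencoder in plain NumPy. This is only an illustration, not real deepfake code: the “images” here are 4-dimensional vectors that happen to lie in a 2-dimensional subspace, the encoder and decoder are single matrices (the names `W_enc` and `W_dec` are invented for the example), and training is ordinary gradient descent on reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "images": 4-D points that actually lie in a 2-D subspace,
# so a 2-D latent space can reconstruct them almost perfectly.
basis = rng.standard_normal((2, 4)) / 2.0
X = rng.standard_normal((200, 2)) @ basis        # 200 samples, 4 features each

W_enc = rng.standard_normal((4, 2)) * 0.1        # encoder: 4-D input -> 2-D latent
W_dec = rng.standard_normal((2, 4)) * 0.1        # decoder: 2-D latent -> 4-D output

def loss(X, W_enc, W_dec):
    recon = X @ W_enc @ W_dec                    # encode, then decode
    return np.mean((X - recon) ** 2)             # mean squared reconstruction error

initial = loss(X, W_enc, W_dec)
lr = 0.05
for _ in range(3000):
    latent = X @ W_enc                           # encoder output (compressed code)
    recon = latent @ W_dec                       # decoder output (reconstruction)
    err = recon - X
    # Gradients of the squared reconstruction error w.r.t. each weight matrix
    grad_dec = 2.0 * latent.T @ err / len(X)
    grad_enc = 2.0 * X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final = loss(X, W_enc, W_dec)
print(f"reconstruction error: {initial:.4f} -> {final:.4f}")
```

After training, `X @ W_enc` gives each sample’s compressed latent code. A real deepfake pipeline applies the same idea to face images, using deep convolutional encoders and decoders instead of single matrices.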


Deepfakes become even more powerful with the addition of a generative adversarial network, or GAN.

- While new images are being created from the latent representation (i.e., the compressed encoding of a person’s appearance), a GAN trains a discriminator to determine whether each image is genuine or a product of the generator.

- This adversarial relationship drives the generator to minimize defects, producing extremely realistic images. It also makes deepfakes difficult to detect, since the system is constantly improving and correcting itself.
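The adversarial loop can likewise be sketched with scalars. In this toy example (invented for illustration, nothing like a real deepfake pipeline), the generator tries to turn random noise into samples that look like draws from a normal distribution centered at 4, while the discriminator learns to tell real samples from generated ones; each network is just a pair of scalar parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Real data: samples from N(4, 1). Generator: g(z) = a*z + b with noise z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(w*x + c), the probability that x is real.
a, b = 1.0, 0.0          # generator parameters (starts out producing N(0, 1))
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for step in range(3000):
    z = rng.standard_normal(batch)
    real = 4.0 + rng.standard_normal(batch)
    fake = a * z + b

    # --- Discriminator update: push D(real) toward 1 and D(fake) toward 0 ---
    s_real, s_fake = w * real + c, w * fake + c
    g_real = sigmoid(s_real) - 1.0   # grad of -log D(real) w.r.t. its logit
    g_fake = sigmoid(s_fake)         # grad of -log(1 - D(fake)) w.r.t. its logit
    w -= lr * np.mean(g_real * real + g_fake * fake)
    c -= lr * np.mean(g_real + g_fake)

    # --- Generator update: push D(fake) toward 1 (non-saturating loss) ---
    s_fake = w * (a * z + b) + c
    g_gen = sigmoid(s_fake) - 1.0    # grad of -log D(fake) w.r.t. its logit
    a -= lr * np.mean(g_gen * w * z)
    b -= lr * np.mean(g_gen * w)

print(f"generator offset b after training: {b:.2f} (real data is centered at 4.0)")
```

As the two sides compete, the generator’s output drifts toward the real distribution, which is exactly why GAN-polished deepfakes are so hard to distinguish from authentic footage.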

What are the uses of deepfake technology?

Deepfakes have a variety of applications. While they have garnered attention for their use in adult films, typically created without the subjects’ consent, deepfakes have also made an appearance in art, feature films, and politics. Because of deepfakes’ potential to create and promote fake news, hoaxes, and fraud, Congress has taken measures to detect and limit such technology.

Examples of deepfakes used to further political agendas include:

- Separate videos in which Argentine President Mauricio Macri’s face was replaced by Adolf Hitler’s, and Angela Merkel’s face was replaced with Donald Trump’s

- In January 2019, Fox affiliate KCPQ airing a doctored video of Trump’s Oval Office address that mocked his appearance

- In May 2019, the spread of a video of Nancy Pelosi slowed down to suggest drunkenness or dementia

- In the 2020 Delhi Legislative Assembly election, the distribution of a campaign advertisement in which Manoj Tiwari’s English-language remarks were translated into Haryanvi to target Haryanvi-speaking voters

- In April 2020, a video posted to Facebook by Belgium’s Extinction Rebellion in which Belgian Prime Minister Sophie Wilmès appears to assert a link between deforestation and COVID-19

Are deepfakes a threat to the 2020 presidential election?

While politicians have equated deepfake technology to nuclear weapons and labeled it a detriment to democracy, experts hold conflicting opinions on how serious a threat deepfakes pose to the upcoming election. There is no denying that deepfakes can easily graft a false narrative onto trusted sources of authority. By manipulating information, deepfakes have the potential to distort democratic discourse and erode public trust. In addition to mixing truth with lies, deepfakes can create entirely fabricated content, which could exploit political tensions or even incite violence.

However, deepfakes do not appear to be the most prominent threat to election security. Researchers suggest that more primitive disinformation techniques, such as misleading text and photoshopped images, are not only more effective but also more accessible. Deepfakes require more resources and technical skill by comparison, yet the misinformation’s effect is the same. Some believe the bigger threat is not necessarily deepfake technology itself, but rather our reaction to its existence. Widespread awareness of fake news gives rise to a phenomenon known as the Liar’s Dividend: because people believe they can no longer trust what they hear or see, denials gain credibility and become difficult to refute, and those caught on tape can more easily convince an audience of their alleged innocence. Because of this phenomenon, experts predict that the targets of malicious deepfakes are unlikely to be the candidates themselves. Rather, the target would be Americans and their faith in the election. For example, one could misconstrue a trusted figure’s words to suggest that polling sites are closed, unsafe, or untrustworthy on election day.


Deepfake technology displays the potential power of artificial intelligence and machine learning, but malicious intent often accompanies power. That raises the question: are deepfakes or humans the true threat to democracy? As the technology advances, deepfakes will likely become even harder to detect as they enter mainstream media to influence and deceive us. However, we can also be the solution. Through progress and innovation, we can hope to create technology that effectively combats the threat of deepfakes.


Watch some not-so-political deepfakes:

- I used "deep fakes" to fix the Lion King

- Dr Phil but everyone is Dr Phil [Deepfake]

- Freddie Mercury DeepFake [VFX Breakdown]

- The Office - Identity Theft [DeepFake]

Further readings and resources:

- An Introduction to Neural Networks and Autoencoders

- Understanding the Technology Behind DeepFakes

- What Is a Deepfake? | Deepfake AI Technology Risks, Examples

- Deepfakes: A threat to democracy or just a bit of fun?

- Deepfakes' threat to 2020 US election isn't what you'd think

- Are Deepfake Videos A Threat? Simple Tools Still Spread Misinformation Just Fine

- Deepfake democracy: Here's how modern elections could be decided by fake news