Neutralising the Threat of Deepfakes

March 2020
By Carys Whomsley

Digitalis’s Carys Whomsley explores the growing threat of deepfakes, analysing the risks and explaining why monitoring the internet and your online reputation remains critical.

In the past few months, a new type of digital threat has entered the public consciousness. So-called ‘Deepfakes’ are Artificial Intelligence-generated video and audio clips, convincingly modelled on real-world human subjects. In the wrong hands, deepfakes have the potential to become a powerful technique for spreading deception and disinformation, across both the public and private sectors. And, as their sophistication increases, they will undoubtedly present an acute reputational risk to whomever or whatever their creators target.

In January, Facebook brought this threat into sharp focus when it announced it would remove certain deepfake videos to reduce the influence of misleading information in the run-up to the 2020 US election. While political disinformation is currently regarded as the most significant threat presented by deepfakes, intelligence analysts believe they will increasingly be used in a multitude of ways to target individuals and private sector organisations. This paper looks at the development of deepfakes, the risks they pose, and possible remedies and countermeasures.

Creation and Development

So far, most deepfakes posted online have grafted the faces of female celebrities, including Scarlett Johansson and Gal Gadot, onto pornographic videos. However, high-profile business leaders and politicians are also starting to become targets. For example, in response to Facebook’s refusal to take down a widely shared manipulated video of Nancy Pelosi (slowed down so that her speech appears slurred), two individuals released a deepfake of Mark Zuckerberg. The fake depicted Zuckerberg boasting about the way Facebook “owns” its users.

Deepfakes are a product of “deep learning”, a machine learning technique which teaches computers to perform tasks that come naturally to humans. They are created using neural networks, sets of algorithms loosely modelled on the way the human brain recognises patterns. The videos are refined through Generative Adversarial Networks (GANs), which pit one network, a generator producing fakes, against another, a discriminator judging their credibility.
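
To make the adversarial mechanics concrete, here is a minimal sketch of a GAN training step in Python with PyTorch (an assumed choice of tooling; the tiny fully-connected networks below stand in for the far larger face-synthesis models used in practice).

    # Minimal GAN training step (PyTorch assumed; real deepfake models use
    # far larger, face-specific networks and losses).
    import torch
    import torch.nn as nn

    latent_dim = 100

    # Placeholder generator: maps random noise to a flattened 64x64 image.
    generator = nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, 64 * 64), nn.Tanh(),
    )

    # Placeholder discriminator: scores how "real" an image looks.
    discriminator = nn.Sequential(
        nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1),
    )

    loss_fn = nn.BCEWithLogitsLoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    def training_step(real_images: torch.Tensor) -> None:
        batch = real_images.size(0)
        ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

        # 1) Teach the discriminator to separate real images from fakes.
        fakes = generator(torch.randn(batch, latent_dim)).detach()
        d_loss = (loss_fn(discriminator(real_images), ones)
                  + loss_fn(discriminator(fakes), zeros))
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

        # 2) Teach the generator to fool the discriminator.
        fakes = generator(torch.randn(batch, latent_dim))
        g_loss = loss_fn(discriminator(fakes), ones)
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()

Each pass pits the two networks against one another: the generator improves precisely by learning what the discriminator can still catch, which is why fakes keep getting harder to spot.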

Deepfakes targeting specific individuals are typically created by transposing the target’s face onto another person’s body, using computer vision techniques that allow specific facial features to be swapped. To achieve this convincingly, around 16 images of the target’s face are generally required, and these are easily obtained through public record and social media searches. The technology is developing fast, however: Samsung has recently released a GAN that requires only one photo to create a credible moving image.
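
As an illustration of how little raw material an attacker needs, the sketch below gathers cropped faces from a handful of publicly available photos, the kind of training set a face-swap model consumes. It assumes the open-source face_recognition library, and the file names are hypothetical.

    # Sketch: harvesting face crops from publicly available photos, the raw
    # material a face-swap model trains on. Uses the open-source
    # face_recognition library; file names are hypothetical.
    import face_recognition
    from PIL import Image

    def extract_faces(photo_paths):
        """Return cropped face images found in a list of photo files."""
        crops = []
        for path in photo_paths:
            image = face_recognition.load_image_file(path)
            for top, right, bottom, left in face_recognition.face_locations(image):
                crops.append(Image.fromarray(image[top:bottom, left:right]))
        return crops

    # Historically, around 16 such crops were enough to train a credible swap.
    faces = extract_faces(["profile1.jpg", "conference2.jpg"])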

The most convincing fakes tend to be produced with impersonators mimicking the target’s voice: a widely shared deepfake of Barack Obama, for example, was created with the help of comedian Jordan Peele. Yet significant advances have also been made in audio deepfake creation, and tools from Adobe and Google can now produce fakes from only 40 minutes of a target’s recorded audio, including words the target never actually spoke.

Risks

The technology used to create deepfakes will soon proliferate widely, and the ability to create convincing falsified videos could soon be open to anyone willing to make them. The risks this poses are exacerbated by the current landscape of information sharing, revolutionised by social media: videos now spread at an unprecedented rate, allowing people to feel like first-hand witnesses to events. Deepfakes could consequently distort our collective understanding of the truth and further undermine our trust in everything we see and hear online.

This risk manifested itself in Gabon in early 2019, when the nation’s President, Ali Bongo Ondimba, appeared in an unnatural-looking public address video, having not been seen in public for months due to ill health. The video, although later shown to be genuine, prompted an attempted military coup whose leaders justified their action by declaring the video a deepfake that signalled the President had either died or was incapacitated.

As trust erodes in the face of deepfakes, people are likely to become all the more partisan in choosing what to believe and what to distrust. If they watch a video of someone they dislike speaking inappropriately, they are likely to view it as confirmation of their bias. Conversely, if they see a compromising video of someone they like, they are more likely to dismiss it as fake.

High-profile public figures could exploit this in a phenomenon dubbed the ‘Liar’s Dividend’: genuine footage of controversial content is released, but the subject claims it is a deepfake. Even if the video were later proven to be real, the perception of it as a fake may linger as people hold on to their pre-existing beliefs. A loss of basic confidence in the trustworthiness of video content could serve to reinforce societal divisions and may prove enormously destructive for democracy itself.

Mitigation

Although most deepfakes published to date can be identified as false by the human eye, the technology is developing at such a pace that fakes will soon pass undetected. Studies into the impact of deepfakes and ways to mitigate the risks they pose have identified three key areas of countermeasures.

Legislative remedies

Most countries are not currently legally equipped to control the creation and spread of deepfakes. As it stands, the creation or distribution of a deepfake could constitute fraud, defamation or misappropriation of a person’s likeness, amongst other torts. However, as legal remedies are generally applied ex post facto, their efficacy in addressing the possible scale of the damage will be limited.

One solution could be to require social media companies to verify and remove deepfakes. Yet this has faced widespread criticism from journalists, who warn that platforms might start policing posts over-aggressively to avoid fines. Another would be to criminalise the making or sharing of deepfakes with intent to deceive or cause harm. This may be difficult to enforce: a deepfake’s original creator and distributor can be hard to trace, their intentions hard to establish and, even once identified, they may be outside the court’s jurisdiction.

Technological countermeasures

Analysts have so far put forward three technological solutions as possible countermeasures against deepfakes. The first is a tool that social media platforms or news outlets could use to detect and classify deepfakes by picking up on tell-tale signs of a fake, such as resolution inconsistencies. However, if published, such detection methods could be used by malicious actors to improve their fakes. And although keeping detection methods secret would keep them operational, they would have limited effect if not made available for use across media platforms. One crude example of such a check is sketched below.
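
As a hedged illustration of such a tell-tale check, the following sketch flags video frames whose face region is markedly blurrier than the frame as a whole, one crude test for the resolution inconsistencies mentioned above. It assumes OpenCV, and the threshold is purely illustrative rather than drawn from any real detector.

    # Crude resolution-inconsistency check: flag frames whose face region is
    # markedly blurrier than the frame as a whole, a common artefact of early
    # face swaps. OpenCV assumed; the threshold is purely illustrative.
    import cv2

    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def sharpness(gray_region) -> float:
        """Variance of the Laplacian: a standard blurriness measure."""
        return cv2.Laplacian(gray_region, cv2.CV_64F).var()

    def looks_resampled(frame, ratio_threshold: float = 0.5) -> bool:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frame_sharpness = sharpness(gray)
        for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
            # A blended-in face is often far smoother than its surroundings.
            if sharpness(gray[y:y + h, x:x + w]) < ratio_threshold * frame_sharpness:
                return True
        return False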

Another proposed solution would require any content recorded by a device to be tagged with a digital watermark before it could be uploaded to social media platforms. This would require cooperation between producers of recording equipment and the major social media platforms alike and, as such, is unlikely to be adopted. Significantly, such tagging could also help authoritarian regimes trace content they disapprove of back to its source.
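
A minimal sketch of the tagging idea follows, using Python’s standard hmac module and an assumed shared device key; a production scheme would rely on public-key signatures held in tamper-resistant hardware, but the verification logic is similar in spirit.

    # Sketch of capture-time tagging: a device signs each recording with a
    # device-held key and a platform verifies the tag on upload. A production
    # scheme would use public-key signatures in tamper-resistant hardware;
    # Python's standard hmac module keeps the sketch short.
    import hashlib
    import hmac

    DEVICE_KEY = b"key-embedded-in-camera-hardware"  # hypothetical

    def tag_recording(video_bytes: bytes) -> str:
        """The watermark a device would attach at capture time."""
        return hmac.new(DEVICE_KEY, video_bytes, hashlib.sha256).hexdigest()

    def verify_on_upload(video_bytes: bytes, tag: str) -> bool:
        """Platform-side check that content is unmodified since capture."""
        expected = hmac.new(DEVICE_KEY, video_bytes, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    clip = b"...raw video bytes..."
    tag = tag_recording(clip)
    assert verify_on_upload(clip, tag)             # untouched clip passes
    assert not verify_on_upload(clip + b"x", tag)  # any edit breaks the tag

Note that the same tag that authenticates a clip also binds it to a specific device, which is exactly the traceability an authoritarian regime could abuse.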

A third solution is often referred to as ‘life-logging’. Aimed at high-profile individuals, who are the most likely targets of deepfake attempts, it would record every aspect of their lives in order to prove what they were doing at any given time. Yet this practice would be invasive not only for the individual but for everyone they interacted with, creating a pervasive surveillance network implicating thousands of people.

Monitoring is key

Despite burgeoning efforts to combat deepfakes, the ability to produce fake content is evolving far faster than our ability to fight it. Given the shortcomings of the solutions above, the most effective approach may lie in measures that build societal resilience, deployed in conjunction with legal and technological countermeasures.

Individuals most at risk, such as celebrities or senior executives of large companies, can take various measures to offset the risk of a deepfake attack. Web monitoring will be crucial to identifying a fake and measuring its spread at the earliest possible stage, and conducting crisis management exercises will help limit the chances of a successful attack. Alongside this, news outlets could incorporate detection technology into their everyday work, schools could help students become more discerning consumers of online content, and governments could partner with private companies to launch national awareness campaigns while implementing strong legal deterrents. One building block of such monitoring is sketched below.
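
One plausible building block of such web monitoring is perceptual hashing, which flags near-duplicate or lightly manipulated copies of known footage as they spread. The sketch below assumes the open-source imagehash library; the file name and distance threshold are illustrative.

    # One web-monitoring building block: perceptual hashing flags re-uploaded
    # or lightly manipulated copies of known footage. Uses the open-source
    # imagehash library; file name and threshold are illustrative.
    from PIL import Image
    import imagehash

    def frame_hash(path: str) -> imagehash.ImageHash:
        return imagehash.phash(Image.open(path))

    reference = frame_hash("official_address_frame.png")  # hypothetical file

    def is_near_duplicate(candidate_path: str, max_distance: int = 10) -> bool:
        """Small Hamming distance means a visually similar frame, worth review."""
        return (frame_hash(candidate_path) - reference) <= max_distance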

While the spread of deceptive information is an inevitable product of the internet, educating people about new technologies, alongside legal and technological countermeasures, could not only help inoculate people against deepfakes, but help society gain a ‘herd immunity’ against disinformation.
