Your friend tells you they saw a video of you on social media. You look it up. The person in that video looks like you. That person even sounds like you. To make matters worse, the video shows this counterfeit version of you doing something incredibly embarrassing. You have never done what the video portrays, and yet there it is, online forever. You have just been victimized by a deepfake.

What is a Deepfake?

Deepfakes (a portmanteau of ‘deep learning’ and ‘fake’[1]) use AI systems trained in audio and video synthesis to manufacture fake videos. The underlying system is based on generative adversarial learning.[2] These systems learn how a face moves from a bank of photos supplied by the user, then superimpose that face onto someone else’s body in an unrelated video.
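For readers curious about the mechanics, the adversarial idea can be illustrated with a toy sketch. This is not code from any deepfake system; it is a minimal one-dimensional generative adversarial setup, with all numbers and names invented for illustration: a “generator” learns to imitate real data while a “discriminator” learns to tell real samples from generated ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# "Real" data the generator must learn to imitate: samples near 4.0.
def real_samples(n):
    return rng.normal(4.0, 1.0, n)

# Generator: an affine map of Gaussian noise, G(z) = w*z + b.
w, b = 1.0, 0.0
# Discriminator: logistic regression, D(x) = sigmoid(a*x + c).
a, c = 0.0, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    xr = real_samples(batch)
    z = rng.normal(0.0, 1.0, batch)
    xf = w * z + b
    dr, df = sigmoid(a * xr + c), sigmoid(a * xf + c)
    a -= lr * np.mean(-(1 - dr) * xr + df * xf)
    c -= lr * np.mean(-(1 - dr) + df)

    # Generator update (non-saturating loss): push D(fake) toward 1,
    # which drags generated samples toward the real data.
    z = rng.normal(0.0, 1.0, batch)
    xf = w * z + b
    df = sigmoid(a * xf + c)
    dxf = -(1 - df) * a  # gradient of -log D(x_fake) w.r.t. x_fake
    w -= lr * np.mean(dxf * z)
    b -= lr * np.mean(dxf)

# After training, generated samples should cluster near the real mean of 4.
fake_mean = float(np.mean(w * rng.normal(0.0, 1.0, 5000) + b))
```

Real deepfake systems replace these one-line models with deep convolutional networks trained on thousands of facial images, but the tug-of-war between generator and discriminator is the same.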

This technology is still somewhat young, but it is becoming more accessible to the general public and better at creating convincing videos.[3] It can be used for innocuous purposes, like inserting oneself into one’s favourite movie[4] or poking harmless fun at political figures.[5] However, this technology also has a dark side.

Malicious Uses

Revenge Porn:

Revenge porn is the publication of sexually explicit videos of another person with the intent to degrade or humiliate that person. Before deepfake technology, only those who had made a sex tape or shared revealing photos were at risk. With deepfake technology, all that is required is access to someone’s online photos.

Fake News:

Deepfakes can be used for political disinformation. Consider the doctored video of Nancy Pelosi, an American politician, which was slowed down to make her appear drunk.[6] The story went viral before it was ultimately revealed as fake. That video was not a deepfake, but it highlights how deepfake technology used in a similar manner could have serious implications for public perception.

Bad actors could use deepfakes to discredit politicians by producing videos aimed at destroying their reputations. They could also use this technology to falsify official news stories. For instance, bad actors could make a video in which a world leader threatens a nuclear strike. This poses a threat to the public and to domestic and foreign relations alike.[7]

Possible New Solutions

Present jurisprudence in Canada offers some possible avenues for judges to pursue.[8] Defamation will often, though not always, offer a remedy for the victims. There are also criminal sanctions imposed on those who publish fake news which causes injury or mischief.[9]

False Light Publicity:

Deepfake revenge porn presents an uncomfortable challenge for defamation law.

Real revenge porn is actionable under the tort of public disclosure of private facts. This tort requires, in part, that the “matter publicized or its publication would be highly offensive to a reasonable person”.[10] The issue with deepfake revenge porn is that no private fact has been published, because deepfake revenge porn is fake.

Publicity which places the plaintiff in a false light could be endorsed as a new privacy tort to protect against deepfake revenge porn. It belongs to the same family of privacy torts on which the courts relied to create intrusion upon seclusion[11] and public disclosure of private facts.[12] It may offer a remedy to victims of deepfake revenge porn without a finding that the matters depicted are themselves defamatory.

Blanket Prohibitions:

A blanket prohibition against deepfake technology is a possible option, but it might invite Charter challenges. Deepfakes can serve as political satire, inviting the public to draw parallels between two different people in an impactful way. The technology can be used in film special effects to create a seamless experience, or to insert oneself into videos for personal comedic or expressive purposes. As such, it is possible that a blanket prohibition on deepfake technology would be an impermissible infringement of an individual’s freedom of expression.

Indeed, the Supreme Court has held that even where expression falls away from the core of the freedom of expression, that “does not lessen the need to insist on strict justification for [its] prohibition.”[13] A tool which can create political discourse is within the expression the Charter is intended to protect. Any future legislation aimed at curbing the use of deepfake technology must balance the inherent expressive value of this tool against the serious privacy harms it can cause.


Note: A modified version of this article was originally published on the Ontario Bar Association’s Information Technology and Intellectual Property Law Section webpage.

[1] Tom Van de Weghe, “Six lessons from my deepfakes research at Stanford” (May 29, 2019), Medium, online: <>.

[2] JM Porup, “How and why deepfake videos work – and what is at risk” (April 10, 2019), CSO, online: <>.

[3] Tom Van de Weghe, “Six lessons from my deepfakes research at Stanford” (May 29, 2019), Medium, online: <>.

[4] Ryan Gilbey, “A ‘deep fake’ app will make us film stars – but will we regret our narcissism” (September 4, 2019), The Guardian, online: <>.

[5] Kaleigh Rogers, “Deepfakes of Canadian politicians emerge on YouTube” (June 20, 2019), CBC, online: <>.

[6] Drew Harwell, “Faked Pelosi videos, slowed to make her appear drunk, spread across social media” (May 24, 2019), The Washington Post, online: <>.

[7] Jon Christian, “Experts fear face swapping tech could start an international showdown” (February 1, 2018), The Outline, online: <>.

[8] Ryan J Black & Pablo Tseng, “What Can The Law Do About ‘Deepfake’?” (March 2018), McMillan, online: <>.

[9] Criminal Code, RSC 1985, c C-46, s 181.

[10] Jane Doe 72511 v Morgan, 2018 ONSC 6607 at para 99 endorsing Jane Doe 464533 v D(N), 2016 ONSC 541.

[11] Jones v Tsige, 2012 ONCA 32 at para 18.

[12] Jane Doe 72511 v Morgan, 2018 ONSC 6607 at para 65.

[13] R v Sharpe, [2001] 1 SCR 45 at para 107.