“A lie gets halfway around the world before the truth has a chance to get its pants on.”– variously attributed
As artificial intelligence (AI) advances, it creates both benefits and dangers, and few applications illustrate that fact better than the emergence of “deepfakes.” A portmanteau of “deep learning” and “fake,” deepfakes are AI-generated audio or video recordings of real people appearing to say things they never said. While, in the case of video at least, it is still reasonably easy to distinguish between a deepfake and the genuine article, the gap is narrowing, and it may not be long before seeing is no longer believing.
Most of the concern about deepfakes in the popular press centers on their more sensational (and appalling) uses, such as creating fake pornography of real women or putting scandalous words in a politician’s or CEO’s mouth, with obvious potential for electoral or share price manipulation. But while less sensationalist, deepfakes’ potential uses and misuses in the corporate world should be of concern for any business. Below is a survey of some of the legal issues that deepfakes can create and the applicable law.
In cases involving misappropriation of an image, it is natural to think of invoking intellectual property rights such as copyright and trademark. The holder of the copyright in the original video on which the deepfake is based (if it is not the deepfake creator himself) would be able to file suit to seek damages and even to prevent the use of the video itself, assuming of course that there is a practical way to enforce the judgment. Moreover, even if the deepfake creator owns the rights to the video, if the business targeted in the video sees its logo, slogan or other intellectual property used without permission, then it would also have grounds on which to bring an action for either copyright or trademark infringement. While intellectual property rights include “fair use” exceptions such as parody and satire, actual malice on the creator’s part would likely prevent them from being invoked.
While the use of manipulated images to generate celebrity endorsements is not new, the ability to have that celebrity appear to be speaking in a video creates a whole host of new concerns. Section 5 of Quebec’s Charter of Human Rights and Freedoms (the Charter) states, “Every person has a right to respect for his private life,” while article 36(5) of the Civil Code of Québec (the Civil Code) provides that the use of an individual’s name, image, likeness or voice may be considered an invasion of privacy if it is for a purpose other than the legitimate information of the public. Deepfakes could also be used by a business looking to free ride on another company’s goodwill, to imply that the other company has endorsed the business’s product. If the other company’s name or logo is used, there may be a cause of action under trademark law.
Privacy rights of individuals are protected under provincial and federal statute, as well as the common law. The question, however, is whether deepfakes trigger privacy issues at all, given that they are, technically, falsehoods that do not necessarily relate to identifiable individuals. There is, however, precedent suggesting that subjective information about an individual may still be personal information even if it is not necessarily accurate (Lawson v. Accusearch Inc., 2007 FC 125).
In terms of statutory rights, section 1(1) of the BC Privacy Act creates a statutory tort, stating that “it is a tort, actionable without proof of damage, for a person, willfully and without a claim of right, to violate the privacy of another.” The language of this tort, and of similar torts in other provinces, is broad (“privacy of another”) and may be broad enough to support an action in respect of a deepfake, as it does not explicitly require that there be an “identifiable individual.”
Similarly, those in the Province of Quebec may have recourse under the Civil Code, which as mentioned above states that it is an invasion of privacy to use a person’s name, image, likeness, or voice for a purpose other than the legitimate information of the public.
Deepfakes pose extensive risks in the employment realm, and businesses need to be aware that they can be liable if an employee maliciously creates a deepfake of a colleague. In the Province of Quebec, such activities could expose the employer to a claim under the Act Respecting Labour Standards (the “ALS”), which guarantees employees the right to a workplace free from psychological harassment. While such harassment generally involves a pattern of behaviour, the Act states that even a single incident can constitute psychological harassment if it is sufficiently serious to leave a lasting harmful effect. Such harassment may also result in workers’ compensation claims under the Act respecting industrial accidents and occupational diseases.
In addition to harassment claims, businesses must also take care when making hiring decisions or considering disciplinary sanctions. If an employer refuses to hire a candidate or takes disciplinary measures based on video evidence of bad behaviour, it will be vital to confirm that the video is, in fact, authentic. If the video turns out to be a deepfake, the result could be complaints under the ALS or Quebec’s Charter of Human Rights and Freedoms (and, in a unionized environment, grievances).
The speed at which markets react makes public companies ripe targets for attacks. An ill-intentioned trader could release a deepfake of a listed company’s CEO stating, for example, that the business will exceed or miss its profit forecast. Even if the video is quickly exposed as counterfeit and the stock price returns to its previous level, such manipulation could create a window of opportunity during which the malefactor could buy or sell into a distorted market.
Such actions would breach any province’s Securities Act – not to mention subsection 380(2) of the Criminal Code, which prohibits affecting stock prices through “deceit, falsehood or other fraudulent means.” While anyone malevolent enough to behave in this way would take measures to hide their identity, filing a report with the provincial regulator and the police will allow the authorities to track down the perpetrator by tracing the origin of the video and any unusual trading activity occurring simultaneously with its release.
Deepfakes put businesses at risk by enabling malicious individuals (such as a disgruntled employee or customer) to smear a company’s reputation, for example by showing its senior executives uttering all sorts of loathsome things. In a few minutes, your reputation with employees, clients, investors and the broader public can be irreparably damaged.
If the perpetrator can be identified, a lawsuit could be filed under the Civil Code, article 3 of which provides that every person (individuals and corporations alike) is entitled to the respect of their reputation. An action could also be possible under section 4 of the Charter, which recognizes the right to one’s “dignity, honour and reputation.” A successful plaintiff under the Charter may even be able to obtain punitive damages.
Finally, section 298 of the Criminal Code prohibits defamatory libel, being matter published without lawful justification or excuse that is likely to injure a person’s reputation by exposing them to hatred, contempt or ridicule, or that is designed to insult them.
While some bad actors may simply be looking to hurt the company, others are interested in holding it for ransom. If, instead of actually using the deepfake to harm the business, a person threatens to do so – presumably by sending the video to the company with a demand for money – then they have likely run afoul of section 346 of the Criminal Code, which prohibits extortion. Once again, a phone call to the police would be in order.
Anyone tempted to ignore the deepfake threat would be wise to review the recent case of an executive who was tricked by deepfake audio into thinking that he was receiving phone instructions from his boss to send $243,000 to a supplier abroad. Businesses simply cannot afford to ignore the reality of what exists today, and the even better technology that is around the corner.
It’s worth considering that the legal problems surrounding deepfakes arise from the danger of people thinking that they’re genuine. As people become increasingly aware of them, they are likely to become more skeptical. Before long, the problem may no longer be that people fall for fake material, but that they instead refuse to believe authentic material. Such a situation would, of course, create its own challenges. In the meantime, deepfakes with trusting audiences are upon us and every business, regardless of industry, needs to be mindful of the legal consequences they can entail.
For more information about Dentons’ technology and data expertise and how we can help, please see our Transformative Technologies and Data Strategy page and our unique Dentons Data suite of data solutions for every business.