Deepfakes: protecting your image online is the key to fighting them

23 February 2024 - 09:38 By Layckan Van Gensen
Deepfakes involve the use of artificial intelligence tools to manipulate images, videos and audio. Stock image.
Image: 123RF

Leanne Manas is a familiar face on South African television screens. Towards the end of 2023 the morning news presenter’s face showed up somewhere else: in bogus news stories and fake advertisements in which “she” appeared to promote products or get-rich-quick schemes.

It quickly emerged Manas had fallen victim to “deepfaking”.

Deepfakes involve the use of artificial intelligence tools to manipulate images, videos and audio. Creating them doesn’t require cutting-edge technical know-how: software such as FaceSwap and ZaoApp, which can be downloaded for free, means anybody can make a deepfake.

Deepfakes were initially used in the entertainment industry. For example, a French actress who was unable to film her soap opera scenes in person because of Covid-19 restrictions was still able to appear in the role thanks to deepfakes. In the health industry, the deep-learning algorithms behind deepfakes are used to detect tumours through pattern matching in images.

However, these positive applications are few and far between.

There are rising global concerns about the effect deepfakes might have on democratic elections.

Recent reports suggest deepfakes are on the rise in South Africa and that many people struggle to spot them. It is worrying that the government hasn’t yet taken legislative steps to combat deepfakes, especially with national elections scheduled for later this year.

I am a legal scholar specialising in sport law, with a particular focus on image rights. I’m especially interested in the recognition of an individual’s image right and the legal position when their likeness is misappropriated without their consent. That includes the use of deepfakes.

In my LLD thesis, I argued that a person’s image needs clear legal protection, taking into account the realities of digital media and the fact that many individuals, such as influencers, athletes and celebrities, generate an income from commodifying their image online. Promulgating legislation would create legal certainty regarding an individual’s image.

Some states in the US have taken action to deal with deepfakes, mostly in the context of elections. For example, Texas became one of the first states to criminalise the use of deepfakes, especially where the content relates to political elections.

Texas also recently passed a second bill, which targets sexually explicit deepfakes. There it is a criminal offence to create a deepfake video with the intention of injuring a political candidate or influencing an election result, or to distribute sexually explicit deepfakes without the individual’s consent and with the intention of embarrassing them.

Maryland and Massachusetts have proposed legislation that specifically prohibits the use of deepfakes. Maryland plans to target deepfakes that may influence politics and Massachusetts wants to criminalise the use of deepfakes for “criminal or tortious (wrongful) conduct”.

In 2020 California became the first US state to criminalise the use of deepfakes in political campaign promotion and advertising. The AB 730 bill makes it a crime to publish audio, imagery or video that gives a false and damaging impression of a politician’s words or actions. Though the bill doesn’t explicitly mention deepfakes, it is clear that AI-manufactured fakes are its primary concern.

In 2023, the governor of New York signed Senate Bill 1042A, which aims to prohibit the dissemination of deepfakes in general, not only in relation to elections.

At least four federal deepfake bills have been considered. These include the Identifying Outputs of Generative Adversarial Networks Act and the Deepfakes Accountability Act.

There is no recognition of image rights in South Africa’s case law or legislation. Image rights are distinct from copyright in law. The scope of protection provided by copyright alone would not be enough to tackle the problem of deepfakes in a court setting.

I argue for legal intervention which recognises individual image rights.

Once an individual’s image is properly recognised in law, it can be protected against unauthorised use. This covers not only the misappropriation of an individual’s image for commercial purposes; it will also help combat deepfakes, whether they relate to elections and politicians or to any malicious manipulation of a person’s image.

Image rights legislation is key. It can:

  • clearly define an individual’s image;
  • specify when an infringement of the image has occurred; and
  • provide the image right holder with legal remedies for unauthorised use.

This can all help regulate deepfake situations. The malicious and deceptive nature of deepfakes may cause the image right holder to suffer significant harm. It is time South Africa’s legislature addressed these situations by providing the necessary protection to individuals.

Layckan van Gensen is a junior lecturer in mercantile law, Stellenbosch University

This article was first published by The Conversation

