Deepfakes: protecting your image online is the key to fighting them
Leanne Manas is a familiar face on South African television screens. Towards the end of 2023 the morning news presenter’s face showed up somewhere else: in bogus news stories and fake advertisements in which “she” appeared to promote products or get-rich-quick schemes.
It quickly emerged Manas had fallen victim to “deepfaking”.
Deepfakes involve the use of artificial intelligence tools to manipulate images, videos and audio. Creating them doesn’t require cutting-edge technical know-how: software such as FaceSwap and ZaoApp, which can be downloaded for free, means anybody can make a deepfake.
Deepfakes were initially used in the entertainment industry. For example, a French actress who was unable to film her scenes for a soap opera in person because of Covid-19 restrictions still appeared in the role thanks to deepfake technology. In the health industry, the deep-learning algorithms behind deepfakes are used to detect tumours through pattern-matching in images.
However, these positive applications are few and far between.
There are rising global concerns about the effect deepfakes might have on democratic elections.
Recent reports suggest deepfakes are on the rise in the country and South Africans seemingly struggle to spot them. It is worrying that government hasn’t yet taken legislative steps to combat deepfakes, especially with national elections scheduled for later this year.
I am a legal scholar specialising in sport law, with a particular focus on image rights. I’m especially interested in the recognition of an individual’s image right and the legal position when their likeness is misappropriated without their consent. That includes the use of deepfakes.
In my LLD thesis, I argued a person’s image needs clear legal protection, taking into account the realities of digital media and that many individuals such as influencers, athletes and celebrities generate an income from commodifying their image online. Promulgating legislation will create legal certainty regarding an individual’s image.
Some states in the US have taken action to deal with deepfakes, mostly in the context of elections. For example, Texas became one of the first states to criminalise the use of deepfakes, especially where the content relates to political elections.
It also recently passed a second bill targeting sexually explicit deepfakes. It is a criminal offence to create a deepfake video with the intention of injuring a political candidate or influencing an election result, or to distribute sexually explicit deepfakes without the consent of the person depicted and with the intention of embarrassing them.
Maryland and Massachusetts have proposed legislation that specifically prohibits the use of deepfakes. Maryland plans to target deepfakes that may influence politics and Massachusetts wants to criminalise the use of deepfakes for “criminal or tortious (wrongful) conduct”.
In 2020 California became the first US state to criminalise the use of deepfakes in political campaign promotion and advertising. The AB 730 bill makes it a crime to publish audio, imagery or video that gives a false and damaging impression of a politician’s words or actions. Though the bill doesn’t explicitly mention deepfakes, it is clear that AI-manufactured fakes are its primary concern.
In 2023 the governor of New York signed Senate Bill 1042A, which aims to prohibit the dissemination of deepfakes in general, not only in relation to elections.
At least four federal deepfakes bills have been considered. These include the Identifying Outputs of Generative Adversarial Networks Act and the Deepfakes Accountability Act.
There is no recognition of image rights in South Africa’s case law or legislation. Image rights are distinct from copyright, and the protection copyright alone provides would not be enough to tackle the problem of deepfakes in a court setting.
I argue for legal intervention which recognises individual image rights.
Properly recognising image rights would protect an individual’s image against unauthorised use. That would cover not only the misappropriation of a person’s image for commercial purposes but also deepfakes, whether they relate to elections and politicians or to any other manipulation of a person’s image with malicious intent.
Image rights legislation is key. It can:
This can all help regulate deepfake situations. The malicious and deceptive nature of deepfakes may cause the image right holder to suffer significant harm. It is time South Africa’s legislature addressed these situations by providing the necessary protection to individuals.
• Layckan van Gensen is a junior lecturer in mercantile law, Stellenbosch University
• This article was first published by The Conversation