
Examine the legal issues that pertain to online streaming platforms.

1.0 INTRODUCTION
It is amazing how the human brain works. With the ability to be creative and innovative, humans
have become unstoppable. Speaking at the World Economic Forum, Sundar Pichai, the CEO of Google and
Alphabet, described artificial intelligence as one of the most important things humanity is working
on, stating that it is "more profound than fire or electricity."1 However, a question can be asked
as to whether an invention can be un-invented. The most logical answer is a resounding 'no': it is
impossible to uninvent an already existing technology. The question then becomes how anyone could
know, ab initio, that a technology yet to be invented would become a menace once made accessible to all.

It is evident that information technology has improved over the past decade and still has the
potential to advance at an unprecedented speed, to a level the human mind cannot fathom. AI is a
growing industry that has been projected to reach about $118 billion in annual revenue by 2025, and
it has enabled the incorporation of robots into systems across the health, tourism and foodservice
industries, among others.2

However, the world has unfortunately witnessed a rise in the misuse of technology. One notable
example is the deep fake, which is powered by AI and has become a global concern due to its
potential misuse in spreading misinformation, committing fraud and violating privacy rights.3

1 Sherisse Pham, 'Google CEO: AI is More Profound than Electricity or Fire' (CNN Business, 24 January 2018)
<https://money.cnn.com/2018/01/24/technology/sundar-pichai-google-ai-artificial-intelligence/index.html> accessed
19 March 2025.

2 Tractica, 'Artificial Intelligence Software Market to Reach $118.6 Billion in Annual Worldwide Revenue by 2025'
(Twitter, 1 May 2019) <https://x.com/tractica/status/1123244645067005954> accessed 19 March 2025

3 Shalini Mahashreshty Vishweshwar, 'Implications of Deep fake Technology on Individual Privacy and Security'
(MSc thesis, St. Cloud State University 2023) <https://repository.stcloudstate.edu/msia_etds/142> accessed 19
March 2025

A deep fake is digital content generated using machine learning and AI to overlay, replace
and blend several content types to create fake media that masks its lack of authenticity.4 There are
different types of deep fakes, including lip syncing, face swapping, voice cloning, full-body deep
fakes, AI-generated people and text-to-video deep fakes. In face swapping, one person's face is
replaced with another's; lip syncing alters the movements of the lips to match a different speech;
voice cloning replicates somebody's voice; full-body deep fakes map AI-generated body movements onto
real people; AI-generated people are entirely new persons who do not exist; and text-to-video deep
fakes create realistic video content from a text input.5 Sites and tools that generate such content
include Artbreeder, ChatGPT, Gemini, Zao and Wombo, among others.

1.1 Creation and application of deep fakes


Generative Adversarial Networks (GANs) and Artificial Neural Networks (ANNs) are the machine
learning components used in the production of deep fakes. ANNs are designed to mimic the neurons of
the human brain. They are made up of interconnected layers of artificial neurons that process and
transform incoming data to produce an output.6 GANs, on the other hand, synthesize new content. A
GAN is composed of a generator network and a discriminator network: the generator creates content,
while the discriminator inspects that content to tell whether it is real or fake. Over time, the
generator improves its output in order to deceive the discriminator, until the discriminator can no
longer detect any difference. At that point, the generator produces realistic content that cannot be
told apart from real content.7
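
To make the generator-discriminator dynamic described above concrete, the sketch below shows a minimal adversarial training loop in Python using the PyTorch library. It is an illustrative toy only: the network sizes, learning rates, batch size and the use of random noise in place of a real media dataset are assumptions made for demonstration, not a description of any production deep fake system.

# Minimal GAN sketch (illustrative only, assuming PyTorch is installed):
# a generator learns to produce samples that a discriminator can no
# longer distinguish from "real" data.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64   # assumed toy dimensions

# Generator: maps random noise to a synthetic sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Discriminator: outputs the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    # Stand-in for a batch of real images or audio features (assumption:
    # random data here; a real system would load actual media samples).
    real = torch.randn(32, data_dim)
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # 1) Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()

Over many iterations the two losses push against each other, which is the adversarial principle described above; real deep fake systems apply the same idea to high-dimensional image, video or audio data using far larger convolutional or transformer-based networks.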

4 Ibid

5 Faith Amatika-Omondi, 'The Regulation of Deep fakes in Kenya' (2022) 2(1) Journal of Intellectual Property and
Information Technology Law 145, <https://doi.org/10.52907/jipit.v2i1.208> accessed 19 March 2025

6 H Kukreja, N Bharath, C Siddesh, and S Kuldeep, 'An Introduction to Artificial Neural Network' ( 2016) IJARIIE
<https://www.researchgate.net/profile/KuldeepShiruru/publication/319903816_AN_INTRODUCTION_TO_ARTIF
ICIAL_NEURAL_NETWORK/links/59c0fe55458515af305c471a/AN-INTRODUCTION-TO-ARTIFICIAL-
NEURAL-NETWORK.pdf> accessed 19 March 2025.

7 I Goodfellow, J Pouget-Abadie, M Mirza, B Xu, D Warde-Farley, S Ozair, A Courville, and Y Bengio, 'Generative
Adversarial Networks' (2020) 63(11) Communications of the ACM 139 <https://doi.org/10.1145/3422622> accessed
19 March 2025.

1.2 Evolution of deep fakes
The manipulation of materials and information did not start with the emergence of AI and its use
in deep fakes. As far back as the eighteenth century, depictions of Marie Antoinette and her
husband Louis XVI circulated in sexually explicit cartoons.8 Though this was not photo alteration,
the narrative was manipulated to dehumanize the monarchs and turn the people against them, which
eventually contributed to their downfall. Their ruined public reputation contributed largely to
justifying their execution at the guillotine.

For as long as there have been pictures and films, they have been subject to manipulation. In
America, the head of the assassinated president Abraham Lincoln was mounted on the body of the
pro-slavery politician John Calhoun because there was no single proper and official portrait of
Lincoln. This was done by Thomas Hicks, an artist. It was only later that Stefan Lorant, an art
director, realized that the photo was a fake and investigated it. This is evidence that photo
tampering existed long before tools like Photoshop.9
With the invention of the digital camera in 1975 by Steve Sasson, an electrical engineer, it became
possible to easily edit a photo and vary its effects.10 Analysts term this the subjectification of
photography, in contrast with analogue photography, which focused on objectification.11 It has been
argued that, with the uptake of digital techniques in the 1990s, there has been an erosion of truth
in images: digital photos do not speak the truth in the way analogue photos did.12

8 Jacquelyn A. Burkell and Chandell Enid Gosse, 'Nothing New Here: Emphasizing the Social and Cultural Context
of Deep fakes' (2019) First Monday< https://doi.org/10.5210/fm.v24i12.10287 > accessed 19 March 2025.

9 Photo Tampering Throughout History <https://faculty.cc.gatech.edu/~beki/cs4001/history.pdf> accessed 19 March
2025.

10 Quora User, ‘What Was the First Commercial Digital Camera Available to the Public and Who Created It?’
(Quora, Date of Post) <https://www.quora.com/What-was-the-first-commercial-digital-camera-available-to-the-
public-and-who-created-it> accessed 19 March 2025.

11 Matthew Biro, 'From Analogue to Digital Photography: Bernd and Hilla Becher and Andreas Gursky' (History of
Photography, 2012) <https://doi.org/10.1080/03087298.2012.686242> accessed 19 March 2025

12 Ibid
The same is seen with digital videos and how easy it has become to alter them. The internet is
filled with software tools that facilitate editing according to the user's preferences. Today,
footage is often described using captions whose truthfulness is questionable. A video accompanied
by a false caption, or one that has simply been slowed down, is referred to as a cheap fake.13
However, it is now feasible to identify such alterations thanks to digital forensic tools and
expertise.

An example of a cheap fake is the infamous slowed-down video of the US House Speaker Nancy Pelosi,
which made her appear drunk and therefore incapacitated. The video earned her negative press.14

The use of apps such as TikTok and Twitter, for instance, has fueled the circulation of content at
high speed, and much of this digital content is evidently manipulated.

1.3 Conclusion
The fast development of deep fake technology calls for specific laws to handle its particular
dangers and possible negative effects. In contrast to conventional media manipulation, deep
fakes use artificial intelligence to produce incredibly lifelike fakes that may deceive the public,
sway elections, enable fraud, and violate people's rights, such as their privacy and reputation.
Existing laws on identity theft, defamation, and disinformation are frequently insufficient to
address the complex and dynamic nature of deep fake threats. Specific legislation would help
establish clear consequences for harmful use, define legal accountability, and advance AI-driven
detection systems. Furthermore, carefully designed regulatory frameworks may strike a balance
between the need for innovation in the creative sectors and protections against abuse, ensuring
that deep fake technology serves beneficial ends while reducing harm to society.

13Faith Amatika-Omondi, 'The Regulation of Deep fakes in Kenya' (2022) 2(1) Journal of Intellectual Property
and Information Technology Law 145, <https://doi.org/10.52907/jipit.v2i1.208> accessed 19 March 2025

14 Alex Kantrowitz, 'The Pelosi Video Was Altered. Why Won't Facebook Just Say That?' (BuzzFeed News, Date
of Publication) <https://www.buzzfeednews.com/article/alexkantrowitz/the-pelosi-video-was-altered-why-wont-
facebook-just-say-that> accessed 19 March 2025.

2.0. LEGAL FRAMEWORK FOR DEEP FAKES IN KENYA
In the Kenyan jurisdiction, the following laws apply to and address the issue of deep fakes:

2.1. The Constitution of Kenya 2010


Article 33(1) of the Kenyan Constitution provides for freedom of expression. According to the
article, every person enjoys this fundamental freedom, which includes the freedom to seek, receive
or impart information and ideas through any medium.15

On this constitutional footing, every citizen has the right to create deep fakes for the
dissemination of information and ideas. Educational deep fakes benefit from this constitutional
freedom, since they recreate undocumented but relevant information for contemporary use.16

The Constitution also establishes freedom of the media as a protected right under Article 34(1),17
which protects the freedom and independence of electronic, print and all other types of media.
However, Article 33(2)18 limits this freedom by restricting the particular categories of expression
mentioned in that provision. Social media platforms such as Twitter, Facebook, Instagram, TikTok,
and YouTube serve to spread deep fake videos, photographs, and other content.19 The distribution of
deep fake content on these platforms is therefore restricted where the expression falls within the
categories designated under Article 33(2).

15 The Constitution of Kenya 2010, article 33.

16 Stenbuch, Y, 'Listen to JFK Speak from Beyond the Grave' (2018) <https://nypost.com/2018/03/16/jfks-voice-
delivers-speech-he-never-gave-day-of-assassination> accessed 18 March 2025.

17 The Constitution of Kenya 2010, Article 34(1).

18 The Constitution of Kenya 2010, Article 33.

19 Communications, Elections in Africa: AI-generated deep fakes could be the greatest digital threat in 2020,
Paradigm Initiative, (6 January 2020).

2.2 The Data Protection Act 2019
Section 2 of the Data Protection Act defines data as information that is processed by means of
equipment operating automatically in response to instructions given for that purpose. It further
defines processing as any operation or set of operations performed on personal data or sets of
personal data, whether or not by automated means, such as storage, adaptation, or alteration.20 It
can therefore be concluded that photographs, videos and sound recordings form part of personal
data, and that the creation of deep fakes is one of the ways in which such data is processed.

Section 25 of the Act establishes the principles of data protection. One of these principles is
that personal data must be processed in a manner that respects the data subject's right to
privacy.21 The right to privacy is also provided for under Article 31 of the Constitution of Kenya,
which includes the right not to have information relating to one's family or private affairs
unnecessarily required or revealed.22

Deep fakes may infringe on the privacy of an individual, and the effects can be dire as far as the
emotional stability of the data subject is concerned.23 For instance, looking at the contravention
of the right to privacy from the public's perspective, when images or videos go viral, the public's
response may be undesired depending on the content. Ensuring that personal data is processed in
accordance with the privacy principle may therefore deter some negative uses of deep fakes.24

2.3. The Computer Misuse and Cybercrimes Act, 2018


The provisions of the CMCA can be used to regulate the malicious use of technology. Under
section 22 of the Act, it is a crime to intentionally publish false, misleading, or fictitious data
with the intention that it be relied on as authentic.25 Therefore, deep fakes designed to spread

20 The Data Protection Act, No. 24 of 2019, section 2.

21 Data Protection Act 2019, section 25.

22 The Constitution of Kenya 2010, Article 31.

23 Danielle K. Citron & Robert Chesney, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National
Security, 107 California Law Review 1753 (2019).

24 Faith Amatika-Omondi, The Regulation of Deep fakes in Kenya, (2022).

25 The Computer Misuse and Cybercrimes Act 2018, section 22(1).


misinformation, defame individuals, or incite public unrest could be prosecuted under this
section. The section further provides the penalty for the crime, which may be a fine of not more
than five million Kenya shillings, imprisonment for not more than two years, or both.

Further, under section 23, a person who knowingly publishes false information in print, in
broadcast, as data, or over a computer system, where that publication is calculated to cause or
results in panic, chaos, or violence among citizens of the Republic, or is likely to discredit the
reputation of a person, commits an offence and is liable on conviction to a fine not exceeding five
million Kenya shillings, to imprisonment for a term not exceeding ten years, or to both.26 This
means that the creation and publication of deep fakes serving the purposes outlined above is a
crime punishable as stated.

2.4. The Defamation Act 1970


As regards civil defamation, the legal definition of defamation in Kenya, which stems from English
common law, incorporates two distinct forms: libel, through written public statements, and slander,
through spoken false statements.

Deep fake content that portrays someone in a false and injurious light could amount to libel under
Kenya's defamation law where it damages their public image. The plaintiff must prove that the
publisher distributed the false material by sharing it, that the content identifies them either
directly or by implication, that the information is untrue, and that it damages their public
standing.

Regarding deep fakes, ‘words’ under section 2 of the Act include pictures, visual images,
gestures, and other methods of signifying meaning. This implies that photos and other visual
images created using deep fakes or other forms of manipulation are subject to the Act.27

Whether there is malice or not, any publication that is prohibited by law does not fall under the
qualified privilege envisioned by the Act. 28 Hence, the publication of deep fake content that is

26 The Computer Misuse and Cybercrimes Act 2018, section 23.

27 The Defamation Act 1970, section 2.

28 The Constitution of Kenya 2010, Articles 33 & 34 and the Data Protection Act 2019, section 25.
outlawed, for instance because it has the potential to damage the reputation of an individual, is
not privileged and is actionable at the suit of the subject.

2.5. The Copyright Act 2001


Copyright protects all creative works fixed in a tangible format, for instance photographs, videos
and sound recordings, including derivative works, among others.29 One of the ways in which
copyright regulates deep fakes is by invoking the moral rights of copyright holders. Under the
Copyright Act, the author of a copyrighted work has the right to object to the mutilation or
adaptation of the work in a way that is prejudicial to their honor or reputation.30

Image rights can be enforced in Kenyan courts: a person whose image was used in an advertisement
without their consent has been awarded damages. In the case of Jessica Clarise Wanjiru v Davinci
Aesthetics & Reconstruction Centre & 2 others [2017], the court stated that image or personality
rights are generally considered to consist of two types of rights: the right of publicity, that is,
the right to keep one's image and likeness from being commercially exploited without permission or
contractual compensation, which is similar to the use of a trademark; and the right to privacy, or
the right to be left alone and not have one's personality represented publicly without one's
permission.31

Therefore, where deep fakes are used to create an individual’s image and the image is
subsequently used in an advert, such an individual is entitled to compensation if their consent
was not sought.

2.6. The Penal Code Chapter 63, 1930


Through section 66(1), the Penal Code forbids the circulation of alarming publications and false
information likely to cause public fear or to disturb the public peace.32 However, no liability
arises where the publisher demonstrates that they took measures to verify the truth of the
information before publication. This means that

29 The Copyright Act 2001, section 22.

30 The Copyright Act 2001, Section 32.

31 Jessica Clarise Wanjiru v Davinci Aesthetics & Reconstruction Centre & 2 others [2017] eKLR.

32 The Penal Code 1930, section 66.


the publication of deep fake content that is likely to cause panic or disturb the peace is
prohibited.

3.0. ENFORCEMENT AND PRACTICAL CHALLENGES IN KENYA


The existence of deep fakes has been a major contributor to the rising cases of misinformation in
Kenya and to the rapid spread of false information, which quickly makes it to the headlines of many
social media pages that are considered the main sources of information in the 21st century.

The evolution of this technology over the years has posed a great challenge, since many recipients
of misinformation subscribe to the narrative of 'seeing is believing', or rather 'we believe what
we see'. Yet it has repeatedly been shown that what you see is not necessarily true, because the
information may have been interfered with or altered through the use of deep fake technology.33

Donald MacKenzie, a scholar in the sociology of science, once asked whether it was possible to
uninvent the bomb. It turns out that this is practically impossible, because once a technology has
been invented, it is close to impossible to uninvent it.34

The same reasoning applies to any suggestion that deep fakes could be uninvented: it would simply
not work. Although the technology is helpful, it has many disadvantages, which have led to numerous
human rights violations, including abuses connected to the freedoms of speech and expression. This
section identifies some of the challenges that make it difficult, if not impossible, to regulate
deep fakes in Kenya, despite the several laws and statutes put in place to govern their use.

Before delving into the possible challenges of regulation, it is important to highlight real-life
cases or incidents involving deep fakes in Kenya, to show how the misuse of deep fakes has affected
the country in practice.

33 Jarameel Kevins, 'Deep Fakes In Kenya: The Intersection Of Technology, Disinformation, And National
Security' The Mount Kenya Times, 26 February 2025

34 Faith Amatika-Omondi, ‘Regulation of Deep Fakes in Kenya’ (2022) 2(1) Journal of Intellectual Property and
Information Technology Law 145, at 170

A report by the Mozilla Foundation highlighted how the use of deep fakes negatively impacted
Kenya's 2022 general election by fabricating false information and pouring it out to the public.35
Though the situation did not get out of hand, such instances demonstrate how synthetic media like
deep fakes can be used as a divide-and-conquer mechanism to change the political environment and
cause chaos among voters and citizens at large.36

Despite the legal frameworks put in place to address the challenges of deep fakes, they appear
insufficient. One challenge in the regulation of deep fakes is that, as a technological invention,
the technology is prone to change and alteration from time to time.37 That is to say, if
legislation is put in place today, it may fail to address the unique changes and complex inventions
in deep fake technology that are developed over time. Yet, with reference to Lon Fuller's
principles of law, a good law must not change so frequently that its subjects cannot rely on it;38
frequent amendment may render the law dysfunctional.

Secondly, the admissibility of evidence with regard to deep fakes might also prove a challenge.
Section 78 of the Evidence Act, Cap 80, requires that photographic evidence is admissible only if
it is accompanied by a certificate from the office of the Director of Public Prosecutions.39
However, in cases dealing with deep fakes, the office of the DPP may be unable to ascertain whether
a photo is genuine or a deep fake, posing a complex challenge to the admissibility of digital or
electronic evidence.40

35 Odanga Madung, 'Opaque and Overstretched, Part II: How Platforms Failed to Curb Misinformation during the
Kenyan 2022 Election' (Mozilla Foundation) <https://foundation.mozilla.org/en/campaigns/opaque-and-
overstretched-part-ii>

36 Victoria Miyandazi and Lucianna Thuo, ‘Navigating the Nexus of Elections, Technology and Democracy amid
Escalating Disinformation and Misinformation Challenges in Kenya’ Cambridge University Press

37 Faith Amatika-Omondi, ‘Regulation of Deep Fakes in Kenya’ (2022) 2(1) Journal of Intellectual Property and
Information Technology Law 145, at 170

38 The Eight Principles of Law by Lon L. Fuller

39 Evidence Act Cap 80 Laws of Kenya, Section 78

40 Faith Amatika-Omondi, ‘Regulation of Deep Fakes in Kenya’ (2022) 2(1) Journal of Intellectual Property and
Information Technology Law 145, at 170
