Deepfake Data
Amelia O’Halloran
April 2021
Advisor
Timothy Edgar
Second Reader
James Tompkin
Acknowledgements
Thank you to my thesis advisor, Timothy Edgar, for offering expert in-
sights into privacy law and technology policy. When I was a freshman, I attended
a lecture you gave on surveillance and privacy that ignited my interest in tech-
nology policy. Two years later, my focus on digital sexual privacy originated in
your Computers, Freedom, and Privacy class. We formally started this thesis a
year ago, but it really originated almost four years ago in your lecture when you
showed me what it meant to work at the intersection of technology and civil
liberties.
Thank you to Danielle Citron for setting the standard for sexual privacy
research. My passion for this topic stemmed from reading your papers, and I
consider you to be my biggest role model in the fight for sexual privacy.
Finally, thank you to my parents for the constant encouragement and for
asking about my thesis on every phone call. Thank you to my mom for sending
me every single deepfake-related news item she could find online, and to my dad
for reading drafts and offering suggestions. Thank you to my two older brothers,
Jack and Cormac, for motivating all of my academic endeavors. Everything I
have ever done is because you two showed me how. Thank you.
Contents
Introduction
1 Societal Impacts
2 Deepfake Creation
3 Technical Detection of Deepfakes
4 Potential Legal and Policy Solutions
Conclusion
References
“The intensity and complexity of life. . . have rendered necessary some retreat
from the world. . . so that solitude and privacy have become more essential to
the individual; but modern enterprise and invention have, through invasions
upon his [or her] privacy, subjected him [or her] to mental pain and distress,
far greater than could be inflicted by mere bodily injury.”
– Samuel Warren and Louis Brandeis, “The Right to Privacy”

“No woman can call herself free who does not own and control her own body.”
– Margaret Sanger
Introduction
The intention of this paper is to bridge the knowledge gap between tech-
nologists and policymakers regarding the issue of deepfake pornography. This
knowledge gap exists in all technology policy spheres; in 2012, the Secretary
of Homeland Security, in charge of assessing and protecting against national
cyberthreats, declared at a conference: “Don’t laugh, but I just don’t use email
at all.” In 2013, United States Supreme Court Justice Elena Kagan revealed
the same was true for eight of the nine Supreme Court Justices.5 Those decid-
ing what is legal, pertinent, or threatening in the digital sphere are not always
educated on the technical aspects. As Supreme Court Justice Louis Brandeis
once said, “The greatest dangers to liberty lurk in insidious encroachment by
men of zeal, well-meaning but without understanding.”6 Two branches of “well-
meaning but without understanding” may exist here: technologists are often
1 Danielle Citron and Robert Chesney, “Deep Fakes: A Looming Challenge for Privacy,
Democracy, and National Security,” California Law Review 107, no. 6 (December 1, 2019):
1776.
2 Samantha Cole, “Deepfakes Were Created As a Way to Own Women’s Bodies—We Can’t
Forget That,” Vice, 2018.
4 Drew Harwell, “Fake-Porn Videos Are Being Weaponized to Harass and Humili-
ate Women: ‘Everybody Is a Potential Target,’” The Washington Post, December 30,
2018, https://www.washingtonpost.com/technology/2018/12/30/fake-porn-videos-are-being-
weaponized-harass-humiliate-women-everybody-is-potential-target/.
5 P.W. Singer and Allan Friedman, “Cybersecurity and Cyberwar: What Everyone Needs
to Know” (Oxford University Press, 2014).
well-meaning in their novel technical innovations but lacking an understanding
of the societal harms. Meanwhile, policymakers may attempt to ameliorate the
harms of the internet, but without an understanding of the technical aspects,
proposed laws may have unintended consequences or lack the coverage of future
implications for the technology. This paper attempts to mitigate part of this
danger by closing this knowledge gap in technology policy related to deepfake
pornography.
1 Societal Impacts
body. Using the same dataset of Facebook photos, these photographs eventually
led to doctored pornographic videos featuring Martin. After being informed of
the deepfake videos in an email, Martin recalled watching the videos: “I watched
as my eyes connected with the camera, as my own mouth moved. It was con-
vincing, even to me.”10 Martin’s connection with her body was altered as she
watched herself performing falsified sexual actions and was convinced of their
veracity. These videos not only took away Martin’s power over how others per-
ceived her body, but also her own understanding and acceptance of her bodily
autonomy. She describes the numbing, helpless, and persistent feeling of the
exploitation: “I would contact the site administrators and request to get things
deleted. Sometimes they would delete the material and then a month later, it
would reappear. Sometimes they simply didn’t respond. . . One site host told
me they would only delete the material if I sent them nude photos of myself
within 24 hours. I had to watch powerless as they traded content with my face
doctored into it. . . I had spent my entire adult life watching helplessly as my
image was used against me by men that I had never given permission to of any
kind.”11 Noelle stated that these videos not only affected her mental state upon
viewing, but she also has anxieties surrounding the lasting effects of these videos
on her life, saying that she fears that her future children and future partner will
see these videos.12
The stories of Kristen Bell and Noelle Martin are not isolated by any
means. As machine learning capabilities grow, their abuses follow suit. In
September 2019, Deeptrace Labs produced a report detailing the current state
of deepfake videos, finding that the total number of deepfake videos online at the
time was 14,678 – almost double the number found in December 2018.13 This
swift growth reflects the rapidly increasing accessibility of the sophisticated
machine learning algorithms (which I discuss in the Deepfake Creation section
of this paper) used to create deepfake videos. Though people may have heard
of deepfake videos in the context of doctored political videos14 or resurrection
of deceased celebrities in films,15 the vast majority are used for a different pur-
pose; of the deepfake videos online, 96% feature pornographic content.16 The
majority of these videos are found on deepfake-specific pornographic websites.17
Despite these websites being relatively new, with the earliest deepfake-specific
10 Daniella Scott, “Deepfake Porn Nearly Ruined My Life,” Elle, June 2, 2020,
https://www.elle.com/uk/life-and-culture/a30748079/deepfake-porn/.
11 Ibid.
12 Vox, The Most Urgent Threat of Deepfakes Isn’t Politics.
13 Ajder et al., “The State of Deepfakes,” 1.
14 Supasorn Suwajanakorn, Steven M. Seitz, and Ira Kemelmacher-Shlizerman, “Synthesiz-
ing Obama: Learning Lip Sync from Audio,” ACM Transactions on Graphics 36, no. 4 (July
20, 2017), https://doi.org/10.1145/3072959.3073640.
15 “Deepfake Tech Drastically Improves CGI Princess Leia in Rogue One — GeekTyrant.”
pornographic website being registered in February 2018, the total number of
video views across the top four dedicated deepfake pornography websites as of
September 2019 was over 134 million.18
18 Id., 1.
19 Danielle Keats Citron, “Sexual Privacy,” Yale Law Journal 128, no. 7 (May 1, 2019),
https://digitalcommons.law.yale.edu/ylj/vol128/iss7/2, 1915.
20 Id., 1909.
21 Id., 1914.
22 Id., 1917.
23 Sam Havron et al., “Clinical Computer Security for Victims of Intimate Partner Violence.”
24 Citron, “Sexual Privacy,” 1874.
25 Samuel D. Warren and Louis D. Brandeis, “The Right to Privacy,” Harvard Law Review
4, no. 5 (1890).
ery of autonomy can be viewed as unethical through a plethora of moral lenses,
there may be none as obvious as the Kantian Humanity Formula, an ethical
proposal stating that we should never act in such a way that we treat hu-
manity, whether in ourselves or in others, as a means only, but always as an end
in itself.29 Utilizing people’s photos as a means of self-pleasure is an obvious
example of treating a person as a means instead of an end. To understand a
person as an end within herself, we should be focusing on her humanity and
autonomy, as opposed to using her as a self-indulgent tool.
For many women, sexual privacy and the ability to create one’s own iden-
tity is a crucial aspect of building one’s career. Sexual and pornographic content
online may cause women to suffer stigmatization in the workforce, lose their
jobs, and have difficulty finding new ones33 – even if the videos are fake. Citron
cites a Microsoft study that states that “nearly eighty percent of employers use
search results to make decisions about candidates, and in around seventy per-
cent of cases, those results have a negative impact. As another study explained,
employers often decline to interview or hire people because their search results
featured ‘unsuitable photos.’”34 Explicit images posted online without consent
can have negative impacts on one’s job search, and doctored explicit images or
videos take that a step further, as the one whose likeness appears in the photo
may not be aware of its existence, and therefore unaware that her job prospects
are being hindered by false images or videos.
In 1869, John Stuart Mill argued for a new system of equality for both
sexes, writing that the subordination of one sex to another is one of the chief
hindrances to human improvement.35 More than 150 years ago, he understood
29 Robert Johnson and Adam Cureton, “Kant’s Moral Philosophy,” The Stan-
ford Encyclopedia of Philosophy (Spring 2021 Edition), ed. Edward N. Zalta,
https://plato.stanford.edu/archives/spr2021/entries/kant-moral/
30 Citron, “Sexual Privacy,” 1875, 1889.
31 Id., 1888.
32 Id., 1924.
33 Id., 1875.
34 Citron, “Sexual Privacy,” 1927.
35 John Stuart Mill, “The Subjection of Women,” in Feminism: The Essential Historical
Writings, ed. Miriam Schneir (New York: Vintage Books, 1972), 166.
that the imbalance of power between the sexes was dangerous for the progress
of humanity. In order to fully understand the harm caused by deepfake pornog-
raphy, we must place it in this context: deepfake pornography is an explicit tool
to subordinate women and create a dangerous power dynamic. According to
the Deeptrace Labs report, 100% of the pornographic deepfake videos found in
September 2019 were created by imposing images of women onto the videos,
while none were created from images of male faces (the report does not break
down gender further than male/female).36 This solidifies what most would
assume about any new trend in exploitative online behavior: the victims of
these videos are almost exclusively female. While the report does not break
down victims by race, it does note that about two-thirds of the subjects in
deepfake pornography videos are from Western countries and about a third
are from non-Western countries, with South Koreans alone making up about
25% of the victims.
36 Ajder et al., “The State of Deepfakes,” 2.
37 Citron and Chesney, “Deep Fakes: A Looming Challenge,” 1773.
2 Deepfake Creation
The earliest instances of video editing were performed in the 1950s with ra-
zor blades and videotapes. Video editors would splice and piece together frames
of a videotape recording, a process that required “perfect eyesight, a sharp razor
blade, and a lot of guts.”38 Now, video editing is much more sophisticated and
accessible; the process to create an accurate, lifelike doctored video only requires
access to a computer and the ability to gather photos and videos of subjects.
The underlying mechanism for deepfake creation consists of deep learning mod-
els such as autoencoders and Generative Adversarial Networks.39 I detail the
technical aspects of deepfakes below.
Neural Networks
Neural networks are essential for deepfake videos because they give com-
puter models the ability to learn about the faces of the subjects, which allows
them to perform the face-swapping function to place one subject’s face onto
38 Robert G. Nulph, “Edit Suite: Once Upon a Time: The History of Videotape Editing,”
1996.
another’s body without having to do a frame-by-frame copy-and-paste of one
face on top of the other.
Encoder-Decoder Pairs
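A common design described in the deepfake literature pairs a single shared
encoder with one decoder per identity: the encoder learns identity-agnostic
features such as pose, expression, and lighting, while each decoder learns to
render one subject’s face. The following is a minimal sketch of that idea – not
the code of any particular tool – assuming PyTorch; faces_a and faces_b are
hypothetical placeholders standing in for aligned face crops of the two subjects.

    # Minimal sketch of the shared-encoder / per-identity-decoder design.
    # Assumes PyTorch; faces_a and faces_b are hypothetical placeholders
    # (random tensors) standing in for aligned 64x64 face crops.
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        def __init__(self, latent=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),   # 64x64 -> 32x32
                nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),  # 32x32 -> 16x16
                nn.Flatten(),
                nn.Linear(64 * 16 * 16, latent),        # identity-agnostic code
            )
        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        def __init__(self, latent=256):
            super().__init__()
            self.fc = nn.Linear(latent, 64 * 16 * 16)
            self.net = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),    # 16 -> 32
                nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),  # 32 -> 64
            )
        def forward(self, z):
            return self.net(self.fc(z).view(-1, 64, 16, 16))

    encoder = Encoder()                  # ONE encoder, shared by both subjects
    dec_a, dec_b = Decoder(), Decoder()  # one decoder PER subject
    params = (list(encoder.parameters()) + list(dec_a.parameters())
              + list(dec_b.parameters()))
    opt = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.MSELoss()

    faces_a = torch.rand(8, 3, 64, 64)   # placeholder batch for subject A
    faces_b = torch.rand(8, 3, 64, 64)   # placeholder batch for subject B

    for step in range(200):              # toy training loop
        opt.zero_grad()
        # Each decoder reconstructs its own subject from the shared code.
        loss = (loss_fn(dec_a(encoder(faces_a)), faces_a)
                + loss_fn(dec_b(encoder(faces_b)), faces_b))
        loss.backward()
        opt.step()

    # The swap: encode a frame of A, decode with B's decoder.
    swapped = dec_b(encoder(faces_a))

Because the encoder is shared, encoding a frame of subject A and then decoding
it with subject B’s decoder yields B’s face carrying A’s pose and expression –
the swap at the heart of a deepfake video.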
Generative Adversarial Networks
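A Generative Adversarial Network trains two models against each other: a
generator that forges samples from random noise, and a discriminator that
tries to tell the forgeries apart from real data; each improves by exploiting the
other’s mistakes. The sketch below illustrates only this adversarial loop,
substituting a toy two-dimensional distribution for face images; it assumes
PyTorch, and all names and sizes are illustrative.

    # Bare-bones GAN loop illustrating the adversarial dynamic. Assumes
    # PyTorch; the "real" data is a toy 2-D Gaussian, not face images.
    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))  # noise -> fake
    D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # sample -> logit

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()
    ones, zeros = torch.ones(32, 1), torch.zeros(32, 1)

    for step in range(1000):
        real = torch.randn(32, 2) * 0.5 + 2.0   # toy "real" distribution
        noise = torch.randn(32, 16)

        # Discriminator step: push real toward 1, forgeries toward 0.
        opt_d.zero_grad()
        fake = G(noise).detach()                # freeze G for this step
        d_loss = bce(D(real), ones) + bce(D(fake), zeros)
        d_loss.backward()
        opt_d.step()

        # Generator step: try to make D score forgeries as real (1).
        opt_g.zero_grad()
        g_loss = bce(D(G(noise)), ones)
        g_loss.backward()
        opt_g.step()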
2.2 Coding a Deepfake Model
quires some patience.45 There are even applications that will perform these
tasks for you to create a deepfake,46 though applications’ outputs will likely be
less realistic than those coded using the steps above, as their process is likely
simplified.
45 James Vincent, “I Learned to Make a Lip-Syncing Deepfake in Just a Few Hours (and You
Can, Too),” The Verge.
3 Technical Detection of Deepfakes
3.1 Overview
In 2018, Korshunov and Marcel stated that most of the research being
done on deepfakes was about advancing the creation of the face-swapping tech-
nology, trying to make them more and more realistic. The detection of deepfakes
took a backseat to the augmentation of their creation. Keep in mind that the
original usage of deepfakes was deepfake pornography, so the fact that research
was primarily performed on augmentation of creation as opposed to detection
speaks to the lack of socially-conscious judgement applied by many deepfake
researchers. However, in 2018, researchers started to work on databases and
detection methods.47 Research is still being done on the generation of more
sophisticated and realistic deepfakes, so detection methods are fighting an up-
hill battle. This relationship may seem like Sisyphus’s endless stone-rolling
attempts, or even an arms race between creation and detection – and it is. As
detection gets more sophisticated, creators know what to look for and how to
enhance their videos. Progress in detection forces video generators to be more
creative in their realism, and progress in creation realism requires detectors to
be more creative in how to spot minute discrepancies or tampering artifacts.
Although this paper offered an overview of deepfake creation above, I focus
more on detection than creation in order to contribute to what I hope will be
the future of deepfake research: one that emphasizes detection and socially just
uses of the technology.
Figure 2: Issues present in first generation deepfakes
48 Ruben Tolosana et al., “DeepFakes Evolution: Analysis of Facial Regions and Fake Detec-
tion Performance,” ArXiv:2004.07532 [Cs], July 2, 2020, http://arxiv.org/abs/2004.07532.
precursor to deepfake videos was photoshopped (or otherwise doctored) images,
so image forensics have had much longer to gain sophistication and accuracy.
However, traditional image forensics often do not translate accurately to videos
due to video compression (which often occurs in the storage and posting of
videos in order to save space) that strongly degrades the data, making it more
difficult to find evidence of tampering.51
Finally, before diving into the technicalities of the papers listed below, I
find it important to note that of the eleven papers listed, only four mention
the dangers of deepfakes in pornography, whereas six mention the danger of
deepfakes in politics. Thus, even those interested in the detection of deepfakes
and creating literature surrounding this important issue are misunderstanding
(or miscommunicating) the proportional reach of the issue. Papers that discuss
the political impacts of deepfakes while leaving out the sexual privacy issues are
focusing on an important but small percentage of deepfakes and disregarding
the most common deepfake exploitation. This may distort the research, as the
nature of the activities performed in political deepfakes versus pornographic
deepfakes differ – for example, while measuring lip synching may be an effec-
tive method for campaign speeches, it may not be as relevant for pornographic
videos. This is a clear example of biased datasets that may have real con-
sequences on pornographic deepfake detection, once again demonstrating that
sexual privacy, and especially the sexual autonomy of women, is not brought to
the forefront in the issue of deepfakes. To those researching deepfake generation
or deepfake detection, I urge you to think not only of the political implications,
but also of the crucial importance that your work has on victims of sexual
privacy invasions.
51 Darius Afchar et al., “MesoNet: A Compact Facial Video Forgery Detection Network,”
in 2018 IEEE International Workshop on Information Forensics and Security (WIFS), 2018,
1–7, https://doi.org/10.1109/WIFS.2018.8630761.
3.2 Measurements of Technical Detection Methods
– True Positive (TP): Items that are categorized as true and are true (in
this context, items that are categorized as deepfakes and are, in fact,
deepfakes).
– True Positive Rate (TPR) is the percentage of true items that are
marked as true – in mathematical terms, TP/(TP + FN) – and in our
context, the percentage of deepfakes that are indeed marked as deepfakes.
– True Negative (TN): Items that are categorized as false and are false (in
this context, items that are categorized as real videos and are, in fact, real
videos).
– False Positive (FP): Items that are categorized as true but are false (in
this context, items that are categorized as deepfakes but are actually real
videos).
– False Negative (FN): Items that are categorized as false but are true (in
this context, deepfakes that are incorrectly categorized as real videos).
– False Positive Rate (FPR) is the percentage of false items that are marked
as true – in mathematical terms, FP/(FP + TN) – and in our context,
the percentage of real videos that are incorrectly marked as deepfakes.
– Area Under the Curve (AUC): Models can be optimized to create the
highest TPR or lowest FPR. For example, if we had a model that marked
all videos automatically as deepfakes (a trivial and non-helpful model),
using this model on a dataset of 99 deepfakes and one real video would
output a TPR of 100% and an FPR of 100%, which may seem misleadingly
promising if the researcher is only focusing on TPR. Instead of looking at
only one, we can create a curve with axes consisting of the TPR and FPR;
the area under this curve correlates to the correctness of the categorization
when not optimized solely for one or the other, but rather at many different
points of optimization decisions. Figure 3 shows an example of this curve,
displaying points on the curve at two different decision thresholds.52 Thus,
the higher the AUC, the better the model performs, as it performs closest
to a maximal TPR and a minimal FPR.

Figure 3: A curve of TPR versus FPR (an ROC curve)
52 “Classification: ROC Curve and AUC — Machine Learning Crash Course,” Google Devel-
opers, https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc.
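To make these measurements concrete, the following sketch computes TPR/FPR
pairs across decision thresholds, along with the resulting AUC, for a hypothetical
deepfake classifier; it assumes scikit-learn, and the labels and scores are invented
for illustration (1 marks a deepfake, 0 a real video).

    # Sketch: TPR/FPR at each threshold, plus AUC, for invented scores.
    # Assumes scikit-learn. Label 1 = deepfake, 0 = real video.
    from sklearn.metrics import roc_auc_score, roc_curve

    y_true = [1, 1, 1, 0, 1, 0, 1, 0]
    y_score = [0.9, 0.8, 0.35, 0.4, 0.7, 0.1, 0.6, 0.55]

    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    for f, t, th in zip(fpr, tpr, thresholds):
        print(f"threshold={th:.2f}  TPR={t:.2f}  FPR={f:.2f}")

    # Area under the TPR-vs-FPR curve: closer to 1.0 is better.
    print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")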
“Convolutional neural networks are very good at picking up on patterns in the
input image, such as lines, gradients, circles, or even eyes and faces. . .
Convolutional neural networks contain many convolutional layers stacked
on top of each other, each one capable of recognizing more sophisticated
shapes. With three or four convolutional layers it is possible to recognize
handwritten digits and with 25 layers it is possible to distinguish human
faces.”54
54 “Convolutional Neural Network,” DeepAI, May 17, 2019, https://deepai.org/machine-
learning-glossary-and-terms/convolutional-neural-network.
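As a hedged illustration of the stacked-layer structure the quotation describes,
the sketch below assembles a small convolutional network that maps a face crop
to a single real-versus-deepfake score; it assumes PyTorch, and the layer sizes
are illustrative rather than drawn from any of the detection papers surveyed
below.

    # A small convolutional network: each conv/pool stage can pick up
    # progressively larger patterns (edges, textures, face parts), ending
    # in a single real-vs-deepfake logit. Assumes PyTorch; sizes are
    # illustrative only.
    import torch
    import torch.nn as nn

    cnn = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        nn.Flatten(),
        nn.Linear(64 * 8 * 8, 1),        # logit for "this frame is fake"
    )

    frame = torch.rand(1, 3, 64, 64)     # placeholder 64x64 RGB face crop
    prob_fake = torch.sigmoid(cnn(frame))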
3.3 Technical Detection Methods
57 Falko Matern, Christian Riess, and Marc Stamminger, “Exploiting Visual Artifacts to
Expose Deepfakes and Face Manipulations,” in 2019 IEEE Winter Applications of Computer
Vision Workshops (WACVW), 2019, 83–92, https://doi.org/10.1109/WACVW.2019.00020.
58 Xin Yang, Yuezun Li, and Siwei Lyu, “Exposing Deep Fakes Using Inconsistent Head
Poses,” in ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and
Signal Processing (ICASSP), 2019, https://doi.org/10.1109/ICASSP.2019.8683164.
4. Agarwal et al. (2019) created a model that distinguishes individuals
based on the presence and strength of twenty specific action units, or
facial motion features, such as inner brow raise, outer brow raise, lip
tighten, nose wrinkle, cheek raise, head rotation, mouth stretch, and more.
Each individual is given a “motion signature” based on the frequency and
strength of these twenty features; any suspected deepfake is then tested
against this motion signature to see the extent to which the two match.
Videos whose motion signatures do not match are classified as fake. This
study had a very promising AUC of up to 99% for face-swap deepfakes;
however, it can only be used in instances where we have a robust set of
videos of the individual in order to create his or her motion signature. This study
was performed on Barack Obama, Donald Trump, Bernie Sanders, Hillary
Clinton, and Elizabeth Warren; it could only be performed accurately on
other politicians or celebrities with sufficiently available video data.59
5. Jung et al. (2020) analyzes blinking patterns of subjects in a video to de-
termine the veracity of the video.60 The paper contends that this method
achieved accurate results in 7 of the 8 videos tested (87.5%); however, I
note that blinking is an aspect of deepfake generation that has, after com-
ing under scrutiny in detection methods, been considered more heavily
in the creation phase – meaning that more and more deepfake generators
ensure that the blinking patterns are stable in their videos. Thus, this
method may not be a long-lasting one.
6. Afchar et al. (2018) propose two methods for detection, both at the meso-
scopic (intermediate stage between microscopic and macroscopic) level.
The methods vary slightly, but generally alternate layers of convolution
(filtering) and pooling (reducing the amount of parameters) before being
input into a classification network. These methods achieved accuracies
of 96.9% and 98.4% in deepfake video classification.61
7. Zhou et al. (2018) uses a two-stream CNN to classify deepfakes: the first
stream is a face classification stream that is meant to detect tampering
artifacts (such as blurred areas on foreheads, sharp edges near lips, et
cetera) in the faces of subjects, and the second stream focuses on features
such as camera characteristics and local noise residuals. This two-pronged
approach achieved a strong AUC of 92.7% after testing on a different
dataset than the model was trained on.62
62 Peng Zhou et al., “Two-Stream Neural Networks for Tampered Face Detection,” in 2017
IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017,
https://doi.org/10.1109/CVPRW.2017.229.
8. Nguyen et al. (2019) propose the use of a capsule network as opposed to
a CNN for deepfake detection. Whereas CNNs may just check if features
exist on a face (such as a nose, mouth, and eyes), capsule networks de-
termine if they exist in the correct spatial relationships with each other
(the nose is above the mouth, and so on), meaning that capsule networks
often outperform CNNs on object classification tasks. Using a capsule net-
work requires fewer parameters than CNNs, thus requiring less computing
power, and it also can often be more accurate as it takes positionality, not
just existence, into stronger consideration. The accuracy of this method
for detecting deepfakes was about 92.17%.63
9. Dang et al. (2019) utilizes an attention map (a feature map of the section
of the image that the model is using the most “attention” to look at) to
highlight the regions of the image that most influence the CNN’s decision
as to whether or not the image is fake. This can be used to further guide
the CNN as to which features are most discriminative and significant in
determining veracity, making the CNN stronger. This method achieved
an AUC of 98.4% on data from the same dataset the model was trained
on, and an AUC of 71.2% on data from a different dataset.64
The disparity among results using different datasets implies that training
on political deepfakes may not have strong results for verifying porno-
graphic deepfakes. Thus, the training dataset used should encompass the
scope of the issue – in this case, should include pornographic deepfakes.
10. Wang and Dantcheva (2020) compare three 3D CNN models used
for action recognition and analysis. These methods perform well on deep-
fakes that use deepfake generation techniques that they were trained on
(accuracies ranging from 91% to 95%), but less well on deepfake generation
techniques that were not included in the training data.65
11. Güera and Delp (2018) offer a detection system based on exploiting three
key weaknesses of deepfakes: multiple camera views or differences in light-
ing causing swapped faces to be inconsistent with the colors, lighting, or
angle of the rest of the scene; boundary effects due to a seamed fusion be-
tween the swapped face and the rest of the scene; and the frame-by-frame
encoding nature of the deepfake development that may cause color and
lighting inconsistencies among different frames of the same face, causing
a “flickering” effect. Their model obtains a set of features for each frame
from the CNN, and the concatenation of the features of multiple frames
63 Huy H. Nguyen, Junichi Yamagishi, and Isao Echizen, “Use of a Capsule Net-
work to Detect Fake Images and Videos,” ArXiv:1910.12467 [Cs], October 29, 2019,
http://arxiv.org/abs/1910.12467.
64 Hao Dang et al., “On the Detection of Digital Face Manipulation,” ArXiv:1910.01717
[Cs], October 23, 2020, http://arxiv.org/abs/1910.01717.
is passed into their detection model to determine the likelihood that the
sequence is a deepfake.66
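As a loose sketch of the recurrent approach in item 11 – per-frame CNN features
passed in sequence to a temporal model that can notice frame-to-frame
“flickering” – the following assumes PyTorch; the sizes are illustrative, not those
of the cited paper.

    # Loose sketch of CNN-features-into-an-RNN detection (cf. item 11).
    # Assumes PyTorch; sizes are illustrative.
    import torch
    import torch.nn as nn

    frame_cnn = nn.Sequential(           # per-frame feature extractor
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),  # 64 -> 16
        nn.Flatten(),
        nn.Linear(16 * 16 * 16, 128),
    )
    temporal = nn.LSTM(input_size=128, hidden_size=64, batch_first=True)
    head = nn.Linear(64, 1)

    video = torch.rand(1, 20, 3, 64, 64)             # placeholder: 20 frames
    b, t = video.shape[:2]
    feats = frame_cnn(video.view(b * t, 3, 64, 64)).view(b, t, 128)
    _, (h_n, _) = temporal(feats)                    # summary of the sequence
    prob_fake = torch.sigmoid(head(h_n[-1]))         # temporal inconsistency score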
The strength and breadth of these varied models show a promising future for
deepfake detection. However, successful deepfake detection requires not only
the initial detection of deepfake videos, but also the continuous detection of
the distribution of these videos. Thus, once a video has been detected as a deepfake
pornography video, online platforms should create a fingerprint for the video
that is stored and checked against in the future. For example, if a data storage
provider such as Google or Microsoft detects a deepfake pornography video
being shared, it should add a hash of said video to an internal database. When
users share or distribute videos in the future, hashes can be checked against
those in the database to ensure that the video is not shared further. Ideally,
this database of hashes (which cannot be reversed to recover the videos, so no
deepfake pornography videos would be accessible through the database) would
be shared among a variety of companies in a data-sharing agreement to impede
the spread of deepfake pornography. Catching all deepfake pornography this
way is not yet feasible, as videos could be compressed or slightly edited
to change the hash in order to evade the database checks, but it would assist
in catching deepfake pornography that hasn’t been further altered since its last
distribution, and future innovations to this technology may assist in catching
compressed videos as well.
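The sketch below illustrates this fingerprinting scheme with a plain SHA-256
digest of a file’s bytes; as noted above, an exact hash changes under compression
or editing, so a deployed system would more plausibly rely on a perceptual hash.
The function names and the in-memory “database” are hypothetical.

    # Sketch of the hash-database idea. Assumes Python's standard library;
    # the set below stands in for a shared, company-spanning database.
    import hashlib

    known_deepfake_hashes = set()

    def fingerprint(path):
        """SHA-256 of the file's bytes; it cannot be reversed into video."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def register_detected_deepfake(path):
        """Called once a video has been detected as deepfake pornography."""
        known_deepfake_hashes.add(fingerprint(path))

    def allow_upload(path):
        """Check an incoming share against the database before hosting it."""
        return fingerprint(path) not in known_deepfake_hashes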
66 David Güera and Edward J. Delp, “Deepfake Video Detection Using Recurrent Neural
Networks,” in 2018 15th IEEE International Conference on Advanced Video and Signal Based
Surveillance (AVSS), 2018, https://doi.org/10.1109/AVSS.2018.8639163.
4 Potential Legal and Policy Solutions
While the technical detection methods offered in the previous section may
engender optimism, we must realize that detection without removal offers little
solace to those exploited by deepfake pornography. Detection is an essential
initial step, but now we must move on to the steps toward removal.
I offer three paths towards progress: civil liability in the form of tort
law and copyright law, personal criminal liability, and platform liability. Civil
liability allows for victims to apply tort law violations to the creators of deep-
fake pornography. Criminalization of deepfake pornography allows for a more
codified prohibition of the creation and distribution of deepfake pornography
through state or federal laws. Platform liability can exist in the form of user
policy restrictions – either created by the platforms themselves or in conjunction
with legislative bodies – that disallow distribution of deepfake pornography.
Intrusion upon Seclusion
This tort, commonly used in privacy cases, covers cases in which the
defendant “intrudes, physically or otherwise, upon the solitude or seclusion of
another or his private affairs or concerns. . . if the intrusion would be highly
offensive to a reasonable person.”74 This may sound applicable at first read
as the creation of deepfake videos may be considered to be an intrusion on
the plaintiff’s private affairs, but this tort is typically only used in the case of
physical intrusions on one’s affairs, such as opening one’s mail, tapping one’s
telephone wires, forcing entry to one’s hotel room, or examining one’s private
bank account.75 Since there is no actual physical intrusion to the plaintiff’s
personal affairs, this tort is not viable for deepfake video cases.
Public Disclosure of Private Facts

online can be considered “giving publicity,” and the sexual behaviors of the
plaintiff can certainly be considered his or her private life, which is why this
tort is commonly used in cases regarding revenge porn.77 However, it would not
be a viable option for a deepfake video case, as the deepfake videos’ doctored
nature means they are not considered statements of fact. Since the tort provides
liability only for publicity given to true statements of fact,78 it is likely not
applicable here.
Defamation
The defamation tort can be invoked if the plaintiff can prove that the
communication (in this case, the deepfake video) “harm[s] the reputation [of
the plaintiff] as to lower him in the estimation of the community or to deter
third persons from associating or dealing with him.”79 Thus, in order to invoke
this tort, one must prove that the deepfake video harmed her reputation and
caused others’ estimations of that plaintiff to lower. In the Societal Impacts
section of this paper, I show the harms suffered by a plaintiff in deepfake porn
cases, including harms to one’s career and mental health, as well as abuse from
third parties and/or ex-partners. However, one caveat complicates the usage of defamation
in a deepfake case: the truth defense. Truth is a complete defense against
defamation,80 so if a watermark appears on the video stating that the video is
doctored, the defendant may be able to utilize this to show truth and negate
the possibility of a defamation claim.
False Light
The tort of false light covers the defendant “giv[ing] publicity to a mat-
ter concerning another that places the other before the public in a false light,”
requiring that the false light is “highly offensive to a reasonable person” and
was done with either disregard to or knowledge of the “falsity of the publicized
matter and the false light in which the other would be placed.”81 In false light
cases, the plaintiff can receive damages for the emotional harm suffered due to
the falsehood.82 This may be more applicable than a defamation claim, as a
77 Caroline Drinnon, “When Fame Takes Away the Right to Privacy in One’s Body: Revenge
Porn and Tort Remedies for Public Figures,” William & Mary Journal of Race, Gender, and
Social Justice 24, no. 1 (November 2017): 220.
78 Restatement (Second) of Torts § 652D Special Note (1977).
79 Restatement (Second) of Torts § 559 (1977).
80 “Defamation vs. False Light: What Is the Difference?,” Findlaw, December
5, 2018, https://www.findlaw.com/injury/torts-and-personal-injuries/defamation-vs–false-
light–what-is-the-difference-.html.
81 Restatement (Second) of Torts § 652E (1977).
82 “False Light,” LII / Legal Information Institute, accessed December 17, 2020,
https://www.law.cornell.edu/wex/false light.
defamation claim may be hindered by the requirement of falsity of the communi-
cation. As stated above, if the maker of the video adds a watermark stating
that the video is doctored, then the defendant could claim that the video is not
false. However, even if the video is not false, it could still place the plaintiff in
a false light, making this claim applicable.
Overall Viability of Tort Liability
Copyright Law
In the discussion of how to mitigate against revenge porn, Citron offers the
potential invocation of copyright law. She claims that victims could file notice
and takedown requests with content platforms after registering the copyright of
their images or videos; once they have been registered, content platforms would
then have to take down the pornography promptly or face monetary damages
under the Digital Millennium Copyright Act.89 While Citron argues that this
may not be entirely effective for revenge porn as some platforms may ignore
requests due to victims’ lacking money and/or desire to sue, it seems to be even
less effective against deepfake pornography. The “performance” in deepfake
pornography is not owned by the victim, but rather by pornographic actors
who took part in the original video (or the agency that created it). Even the
face transplanted onto the “performance” is usually created from a dataset of
publicly available images. Thus, copyright damages would likely not hold for
deepfake pornography videos.
4.2 Criminalization of Deepfake Pornography
90 “Current Numbers,” Rutgers Center for American Women and Politics, June 12, 2015,
https://cawp.rutgers.edu/current-numbers.
91 Karl Kurtz, “Who We Elect: The Demographics of State Legislatures,” National Confer-
ence of State Legislatures, accessed February 5, 2021, https://www.ncsl.org/research/about-
state-legislatures/who-we-elect.aspx.
recent attention surrounding election misinformation. Since media coverage is
proportionally less for pornographic deepfakes, it follows that elected officials
would pay less attention to deepfake pornography when creating deepfake laws.
unauthorized acts involved in the action,” and any other available relief.96
The deepfake pornography laws in Virginia and California are both un-
doubtedly significant steps. The path taken by Virginia seems to be the simplest
route to the criminalization of deepfake pornography for the majority of states.
Currently, 46 states and DC have laws regarding non-consensual pornography,
though the states’ laws are of varying classifications and punishments. The
only states without non-consensual pornography laws as of February 2021 are
Massacushetts, Mississippi, South Carolina, and Wyoming.97 States with pre-
existing non-consensual pornography laws can simply add text similar to the
Virginia addendum in order to ensure that non-consensual pornography covers
the realm of deepfake pornography as well.
Massachusetts and New York have both proposed deepfake laws that are
not specific to either pornography or politics. Massachusetts’s pending legisla-
tion criminalizes the use of deepfakes in conjunction with already “criminal or
tortious conduct.”100 Thus, since extortion is illegal in Massachusetts,101 using
a deepfake to commit extortion would itself be illegal. New York’s pending
deepfake law focuses on a different aspect of
96 Ibid.
97 “46 States + DC + One Territory NOW Have Revenge Porn Laws — Cy-
ber Civil Rights Initiative,” Cyber Civil Rights Initiative, accessed February 5, 2021,
https://www.cybercivilrights.org/revenge-porn-laws/.
98 David Ruiz, “Deepfakes Laws and Proposals Flood US,” Malwarebytes Labs, January
2020.
deepfakes – specifically, the rights to an individual’s digital likeness up to forty
years after his or her death.102 This will likely be used in the case of actors and
performers, whose likenesses can be registered by their family members in order
to control whether deepfakes will be allowed in posthumous productions.
(Stat. 1150; Date: 12/23/2020), text from: Congress.gov, accessed February 8, 2021,
https://www.congress.gov/bill/116th-congress/senate-bill/2904/text.
4.2.3 Impacts to Free Speech
Some may read the above section regarding the criminalization of deepfake
pornography and find themselves growing wary of the potential impacts to free
speech. The banning and/or censoring of any mode of expression, especially
one that may be perceived as “creative,” is sure to raise concern. In fact, the
California deepfake law AB 602 has already been criticized by the California
News Publishers Association and the ACLU for its potential harmful impacts to
free speech.105 We must address these concerns if we are to craft an equitable,
just response to deepfake pornography.
105 K.C. Halm et al., “Two New California Laws Tackle Deepfake Videos in Politics and
Porn — Davis Wright Tremaine,” Davis Wright Tremaine LLP, accessed February 5, 2021,
https://www.dwt.com/insights/2019/10/california-deepfakes-law.
106 Victoria L. Killion, “The First Amendment: Categories of Speech,” Congressional Re-
search Service, 2019.
dangerous simply because of the pornographic aspects of it or because of its
effects on the viewers, but because of the severe exploitation of the victim’s
bodily autonomy and invasion of her sexual privacy with no consent whatsoever
– for the act itself, for the creation of a video, for the use of her likeness, or for
the distribution.
Thus, I argue that impacts to the First Amendment are not applicable
in the ban of deepfake pornography, as deepfake pornography should not be
considered protected speech following the precedent set by child pornography.
However, suppose for the sake of argument that deepfake pornography were
protected speech. In that case, being protected simply means
that the government cannot infringe upon the speech, not that companies can-
not censor or ban the expression. This brings us to our next section concerning
online platforms’ roles in the censorship of deepfake pornography.
4.3 Platform Liability
Pornography Websites
Pornography websites are generally divided into two types: large aggregators
and smaller, usually more specific and niche platforms.
113 https://help.twitter.com/en/rules-and-policies/intimate-media.
114 Monika Bickert, “Enforcing Against Manipulated Media,” About Facebook, January 6,
2020, https://about.fb.com/news/2020/01/enforcing-against-manipulated-media/.
MindGeek is the
dominant online pornography company, consisting of subsidiaries that include
mainstream pornography websites such as PornHub, RedTube, and YouPorn.115
MindGeek has been criticized as having a monopoly over the digital pornogra-
phy industry, as it owns three of the top ten pornography sites116 and in 2018,
brought in over 115 million visitors to its websites every day.117 “Every day,
roughly 15 terabytes worth of videos get uploaded to MindGeek’s sites, equiv-
alent to roughly half of the content available to watch on Netflix.”118 However,
even with the vast amounts of video content added each day, the number of
people moderating the content is few, especially compared to that of YouTube,
Facebook, and other platforms. The Financial Times reported that the site
only had a “couple of dozen” people screening the content uploaded to the site,
though MindGeek refuted that claim without providing an alternate figure.119
How can porn aggregators get away with less moderation (and thus, ostensi-
bly more abusive content) than social media platforms even though they both
have a wide following? The taboo of sexual topics has “allowed MindGeek and
other distributors to fall under the radar of regulators. While Google-owned
YouTube has been dragged in front of politicians for failing to spot and remove
copyrighted videos or outright illegal content of people getting harmed, criticism
of MindGeek and other tube sites has been muted.”120
However, even if MindGeek faces less media and political attention than
large social media platforms, it still bears more responsibility and regulation
than smaller, deepfake-specific pornography websites. “Mr. Tucker, who has
worked with online porn companies since the late 1990s and counts MindGeek as
a client, says large and well-established porn sites such as PornHub are ‘the most
responsible [of pornography websites]... they have too much risk not to adhere to
laws.’”121 PornHub, one subsidiary of MindGeek, has enacted corporate policies
to create a less abusive online pornography space: prohibiting non-verified users
from uploading videos,122 banning deepfakes, and not returning any results upon
searches for “deepfake,” “deep fake,” or any plural version.123 This is due in
part to the fact that PornHub has much more to lose than smaller porn sites, as
it garners large amounts of money and views that give it a significant
115 Patricia Nilsson, “MindGeek: The Secretive Owner of Pornhub and RedTube,” Financial
Times.
116 Jess Joho, “The Best Alternatives to Pornhub and Xvideos,” Mashable, December 15,
2020, https://mashable.com/article/pornhub-alternatives-free-porn-paid-porn/.
117 Nilsson, “MindGeek: The Secretive Owner of Pornhub and RedTube.”
118 Ibid.
119 Ibid.
120 Ibid.
121 Ibid.
122 Nicholas Kristof, “Opinion: An Uplifting Update, on the Terrible World
of Pornhub,” The New York Times, December 10, 2020, sec. Opinion,
https://www.nytimes.com/2020/12/09/opinion/pornhub-news-child-abuse.html.
123 Pornhub. Accessed Feb 5, 2021.
place in the public eye. PornHub’s willingness to bow to public pressure was
made obvious in December 2020 when Nicholas Kristof published a piece in the
New York Times exposing the amount of revenge porn and child pornography
present in non-verified videos.124 The decision to ban non-verified users’ videos
occurred only a few days after the report was posted, an obvious response to
the public outcry stemming from the article.
Section 230
124 Nicholas Kristof, “Opinion: The Children of Pornhub,” The New York Times, December
4, 2020, sec. Opinion, https://www.nytimes.com/2020/12/04/opinion/sunday/pornhub-rape-
trafficking.html.
internet has not flourished at expunging itself of privacy violations, including
violations of users’ sexual privacy. The internet has not flourished at cleansing
itself of hate speech, cyberbullying, and incitements of violence. The internet
has by no means flourished at creating a just, non-abusive, non-exploitative,
non-racist, non-sexist environment. When Section 230 states that the internet
has flourished to the “benefit of all Americans,” we must understand that at
this point, “all Americans” does not really mean all Americans – only those who
have not and could not potentially be abused online.
Section 230 goes on to detail the United States’ policies regarding the in-
ternet, stating that the United States will continue to promote the growth of the
internet and “preserve the vibrant and competitive free market that presently
exists for the Internet and other interactive computer services, unfettered by
Federal or State regulation.”127 It then creates a shield for platforms that has
been widely contested in the years following, stating: “No provider or user of an
interactive computer service shall be treated as the publisher or speaker of any
information provided by another information content provider.”128 This declares
that providers of interactive computer services, such as social media platforms
or pornography websites, are not liable for any inappropriate or illegal content
posted by a user.
Section 230 has been an extremely controversial statute. I will soon ex-
plain the dangers of the section, but it would be unrepresentative to leave out
a discussion of the benefits of the section as well. The initial purpose was actu-
ally to create a safer digital environment by encouraging internet moderation.
Before Section 230, websites that displayed offensive or illegal content with no
moderation could not be sued for that offensive content, as they were only dis-
tributing the content and had no expectations of knowing the obscenity levels
of everything they published. However, when websites began to moderate some
offensive content, they were no longer simply distributors, but were in con-
trol of the content they were distributing. An internet service that provided
some moderation was liable for that content, whereas one that provided no
moderation was not.129 This caused an obvious problem, as it disincentivized
moderation altogether, so Congress wanted to create a law to encourage
moderation, stating that just because websites moderate the content on their
pages does not mean they are liable for that content. This has given websites
broad discretion to moderate content that is illegal and even content that is
not necessarily illegal, such as Facebook banning all manipulated media – also
known as deepfakes – even though they are currently legal.
127 Ibid.
128 Ibid.
129 Adi Robertson, “Why Section 230 Exists and How People Are Still Getting It Wrong,” The
Verge, June 21, 2019, https://www.theverge.com/2019/6/21/18700605/section-230-internet-
law-twenty-six-words-that-created-the-internet-jeff-kosseff-interview.
However, the unintended impact that Section 230 has on the inability to
hold platforms liable for their content is dangerous at best. Citron and Wittes
analogize that any business that knowingly allows for sexual abuses to happen in
their physical space would not be able to easily avoid liability, yet simply because
certain businesses operate in a digital environment means that they are free
from liability.130 Shielding actions from liability solely because they occur
online (where, as we know, a plethora of sexual abuses do occur) is dangerous
and not aligned with the original intention of the Section. Citron and Wittes
summarize the dangers of the current interpretation of Section 230 as follows:
“[Section 230] has produced an immunity from liability that is far more sweeping
than anything the law’s words, context, and history support. Platforms have
been protected from liability even though they republished content knowing it
might violate the law, encouraged users to post illegal content, changed their
design and policies for the purpose of enabling illegal activity, or sold dangerous
products.”131
While this free, unregulated nature of the internet may have been helpful
in its fetal stages, it is far from necessary in the full-fledged internet environment
in existence today. Citron and Wittes summarize this as follows: “If a broad
reading of the safe harbor embodied sound policy in the past, it does not in the
present—an era in which child (and adult) predation on the internet is rampant,
cybermobs terrorize people for speaking their minds, and designated foreign ter-
rorist groups use online services to organize and promote violent activities.”132
The internet no longer needs to be hand-held.
130 Danielle Keats Citron and Benjamin Wittes, “The Internet Will Not Break: Denying Bad
Samaritans § 230 Immunity,” Fordham Law Review 86 (2017).
is discussed in the Criminalization of Deepfake Pornography section, but there
is another aspect of censorship that is important to discuss: that by private
companies as opposed to the government. In our solutions below, some consist
of private companies deciding what to moderate and what to censor from their
platform. Some may read this and argue that this is a violation of the users’
freedom of speech, so I am taking this space to clarify that the First Amend-
ment protects against government infringement on protected speech, not private
companies or private individuals’ decisions to infringe on any speech, protected
or not. In fact, it would be a First Amendment issue to not allow a platform
to censor its users (such as Twitter deciding to ban Donald Trump’s account in
January 2021) because a social media company has the right to exclude speak-
ers that it chooses not to make available on its platform. Though some could
argue that there should be limits to this power, particularly where platforms
have monopoly power, those limits almost certainly do not apply to the ability
to censor deepfake pornography. Thus, even in the hypothetical situation that
deepfake pornography is protected speech, that does not mean that PornHub
necessarily has to host it, nor do other companies have to provide technical
support to websites that do host it.
Citron and Wittes give two paths for updating Section 230: an interpre-
tive shift and a legislative shift. The current interpretations for Section 230 are
based on state and lower federal court decisions, which have interpreted Sec-
tion 230 to be construed broadly, whereas the Supreme Court has declined to
interpret the meaning of Section 230.133 However, this over-broad interpreta-
tion supported by the state and lower courts is dangerous. Citron and Wittes
argue that the interpretive solution to this challenge is to interpret Section 230
in “a manner more consistent with its text, context, and history” by limiting
its application to true Good Samaritans, defined in their paper as “providers or
users engaged in good faith efforts to restrict illegal activity.”134 Finally, they
argue that “[t]reating abusive website operators and Good Samaritans alike de-
values the efforts of the latter and may result in less of the very kind of blocking
that the CDA in general, and § 230 in particular, sought to promote.”135 In
the distinction between abusive website operators and Good Samaritans, one
could easily argue that deepfake-specific pornographic websites are in the for-
mer group. Thus, the path to update Section 230 via interpretation is for lower
courts to begin to interpret Section 230 in this way (as it was originally intended)
133 Id., 407.
134 Id., 415-416.
135 Ibid.
or for the Supreme Court to hear a case regarding the Section and create a more
firm interpretation.
Aside from interpretation, Section 230 could also be updated via legisla-
tion. One such example is a proposal by the National Association of Attorneys
General to amend Section 230 to exempt state criminal laws.136 If this be-
comes adopted, then the state laws I discuss in the Criminalization of Deepfake
Pornography section would allow for platforms to be liable for those crimes as
well. Another alternative would be an amendment to the Section that eliminates
immunity for the worst actors online, such as “sites that encourage destructive
online abuse or that know they are principally used for that purpose.”137 Citron
proposes the following amendment:
She and Wittes explain the benefit of her proposal: “With this revision, plat-
forms would enjoy immunity from liability if they could show that their re-
sponse to unlawful uses of their services was reasonable. Such a determination
would take into account differences among online entities. ISPs and social net-
works with millions of postings a day cannot plausibly respond to complaints
of abuse immediately, let alone within a day or two.”140 Thus, either of these
two amendment proposals would allow for a more accurate, specific interpreta-
tion of Section 230 that would mitigate the challenges created by its overbroad
interpretation. This would eliminate the platform immunity that currently ex-
ists, making social media websites and pornography websites accountable for
136 Id., 418.
137 Id., 419.
138 Ibid.
139 Ibid.
140 Ibid.
the content they publish on their websites, incentivizing them to quickly delete
and ban deepfake pornography.
Platform Agreements
1. Internet service providers such as Verizon and Comcast can refuse to re-
solve the domains of websites that publish deepfake pornography, returning
a DNS error when end users attempt to access those websites.
2. App stores such as the Google Play Store and the iOS App Store could
agree to not approve applications that create deepfake videos unless the
application is built to detect and disallow the creation of pornographic
videos.
3. Social media websites such as Twitter and Reddit could disallow links
leading to known deepfake pornography websites (and work with content
moderators to keep the list of websites updated).
4. Search engines such as Google and Bing could decide to not index known
deepfake pornography websites, thus leading to them not showing up in
searches.
5. Cloud hosting services such as Amazon Web Services and Microsoft Azure
could enact policies that disallow the use of their services by websites
that host deepfake pornography, in a similar manner to Ama-
zon Web Services’ decision to halt the hosting of Parler, citing Parler’s
“violent content” violating Amazon Web Service’s Terms of Service.141
Their Terms of Service could extend not only to prohibiting content that
incites violence, but also prohibiting deepfake pornography as well.
6. Browsers such as Google Chrome, Firefox, and Safari could agree to block
users from entering known deepfake pornography websites.
141 Annie Palmer, “Amazon Drops Parler from Its Web Hosting Service, Citing Violent
Posts,” CNBC, January 9, 2021, https://www.cnbc.com/2021/01/09/amazon-drops-parler-
from-its-web-hosting-service.html.
8. Payment-based companies such as PayPal, Mastercard, Visa, and Amer-
ican Express could refuse to provide payment services for websites that
host deepfake pornography (PayPal has already suspended cooperation
with PornHub,142 with Mastercard and Visa reviewing their ties with the
streaming service).143
9. Companies with data storage services such as Google and Microsoft can
agree to the deepfake pornography fingerprinting technical mitigation dis-
cussed in the Technical Detections section.
These are only nine examples of a wide range of potential voluntary platform
agreements. Since these are voluntary, they would not be required legally, but
could be encouraged by lobbyists, activists, and the general public. We have
seen how public pressure towards PornHub has led them to enact stricter policies
on the content allowed on their website. Directing energy towards large pornog-
raphy aggregators is the obvious first step in purging the internet of deepfake
pornography, but this should be extended to companies that indirectly host
and enable deepfake pornography as well, including internet service providers,
cloud hosting services, payment services, search engines, app stores, social me-
dia websites, browsers, and many more technical providers. Even with these
agreements, deepfake pornography would likely still exist online – it may move
further onto the dark web, for example – but would be less easily accessible,
and these agreements would allow it to be taken down once detected. This is a strong step
in impeding the growth and distribution of deepfake pornography in order to
form a less exploitative and abusive internet environment.
Conclusion
References
Agarwal, Shruti, Hany Farid, Yuming Gu, Mingming He, Koki Nagano, and
Hao Li. “Protecting World Leaders Against Deep Fakes.” In CVPR Workshops, 2019.
Ajder, Henry, Giorgio Patrini, Francesco Cavalli, and Laurence Cullen. “The
State of Deepfakes: Landscape, Threats, and Impact.” Deeptrace, Septem-
ber 2019.
Citron, Danielle Keats, and Benjamin Wittes. “The Internet Will Not Break:
Denying Bad Samaritans § 230 Immunity.” Fordham Law Review 86 (2017).
Citron, Danielle Keats. “Sexual Privacy.” Yale Law Journal 128, no. 7 (May 1,
2019). https://digitalcommons.law.yale.edu/ylj/vol128/iss7/2.
Citron, Danielle, and Robert Chesney. “Deep Fakes: A Looming Challenge for
Privacy, Democracy, and National Security.” California Law Review 107,
no. 6 (December 1, 2019): 1753.
Cyber Civil Rights Initiative. “46 States + DC + One Territory NOW Have
Revenge Porn Laws — Cyber Civil Rights Initiative.” Accessed February
5, 2021. https://www.cybercivilrights.org/revenge-porn-laws/.
Dang, Hao, Feng Liu, Joel Stehouwer, Xiaoming Liu, and Anil Jain. “On the
Detection of Digital Face Manipulation.” ArXiv:1910.01717 [Cs], October
23, 2020. http://arxiv.org/abs/1910.01717.
DeepAI. “Convolutional Neural Network,” May 17, 2019. https://deepai.
org/machine-learning-glossary-and-terms/convolutional-neural-
network.
Drinnon, Caroline. “When Fame Takes Away the Right to Privacy in One’s
Body: Revenge Porn and Tort Remedies for Public Figures.” William &
Mary Journal of Race, Gender, and Social Justice 24, no. 1 (November
2017): 220.
Goodfellow, Ian J., Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-
Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. “Generative
Adversarial Networks.” ArXiv:1406.2661 [Cs, Stat], June 10, 2014. http:
//arxiv.org/abs/1406.2661.
Güera, David, and Edward J. Delp. “Deepfake Video Detection Using Re-
current Neural Networks.” In 2018 15th IEEE International Conference
on Advanced Video and Signal Based Surveillance (AVSS), 2018. https:
//doi.org/10.1109/AVSS.2018.8639163.
Halm, K.C., Ambika Kumar Doran, Jonathan Segal, and Caesar Kalinowski IV.
“Two New California Laws Tackle Deepfake Videos in Politics and Porn.”
Davis Wright Tremaine LLP. Accessed February 5, 2021. https://www.
dwt.com/insights/2019/10/california-deepfakes-law.
Harwell, Drew. “Fake-Porn Videos Are Being Weaponized to Harass and
Humiliate Women: ‘Everybody Is a Potential Target.’” The Washing-
ton Post, December 30, 2018. https://www.washingtonpost.com/
technology/2018/12/30/fake-porn-videos-are-being-weaponized-
harass-humiliate-women-everybody-is-potential-target/.
Havron, Sam, Diana Freed, Nicola Dell, Rahul Chatterjee, Damon McCoy, and
Thomas Ristenpart. “Clinical Computer Security for Victims of Intimate
Partner Violence.” In 28th USENIX Security Symposium (USENIX Security 19), 2019.
Johnson, Robert and Adam Cureton. “Kant’s Moral Philosophy.” The Stan-
ford Encyclopedia of Philosophy (Spring 2021 Edition). Edited by Edward
N. Zalta. https://plato.stanford.edu/archives/spr2021/entries/
kant-moral/.
Joho, Jess. “The Best Alternatives to Pornhub and Xvideos.” Mashable, Decem-
ber 15, 2020. https://mashable.com/article/pornhub-alternatives-
free-porn-paid-porn/.
Khalid, Amrita. “Deepfake Videos Are a Far, Far Bigger Problem for Women.”
Quartz. Accessed December 17, 2020. https://qz.com/1723476/
deepfake-videos-feature-mostly-porn-according-to-new-study-
from-deeptrace-labs/.
Kristof, Nicholas. “Opinion — The Children of Pornhub.” The New York Times,
December 4, 2020, sec. Opinion. https://www.nytimes.com/2020/12/04/
opinion/sunday/pornhub-rape-trafficking.html.
Krose, Ben, and Patrick van der Smagt. “An Introduction to Neural Networks,”
November 1996. http://citeseerx.ist.psu.edu/viewdoc/download;
jsessionid=1DC5C0944BD8C52584A8C2310AE64181?doi=10.1.1.18.493&
rep=rep1&type=pdf.
Kurtz, Karl. “Who We Elect: The Demographics of State Legisla-
tures.” National Conference of State Legislatures. Accessed February
5, 2021. https://www.ncsl.org/research/about-state-legislatures/
who-we-elect.aspx.
Leetaru, Kalev. “DeepFakes: The Media Talks Politics While The Public Is In-
terested In Pornography.” Forbes, March 16, 2019. https://www.forbes.
com/sites/kalevleetaru/2019/03/16/deepfakes-the-media-talks-
politics-while-the-public-is-interested-in-pornography/.
LII / Legal Information Institute. “False Light.” Accessed December 17, 2020.
https://www.law.cornell.edu/wex/false_light.
Matern, Falko, Christian Riess, and Marc Stamminger. “Exploiting Visual Ar-
tifacts to Expose Deepfakes and Face Manipulations.” In 2019 IEEE Win-
ter Applications of Computer Vision Workshops (WACVW), 83–92, 2019.
https://doi.org/10.1109/WACVW.2019.00020.
Nguyen, Huy H., Junichi Yamagishi, and Isao Echizen. “Use of a Capsule Net-
work to Detect Fake Images and Videos.” ArXiv:1910.12467 [Cs], October
29, 2019. http://arxiv.org/abs/1910.12467.
Nguyen, Thanh Thi, Cuong M. Nguyen, Dung Tien Nguyen, Duc Thanh
Nguyen, and Saeid Nahavandi. “Deep Learning for Deepfakes Creation
and Detection: A Survey.” ArXiv:1909.11573 [Cs, Eess], July 28, 2020.
http://arxiv.org/abs/1909.11573.
Palmer, Annie. “Amazon Drops Parler from Its Web Hosting Service, Citing
Violent Posts.” CNBC, January 9, 2021. https://www.cnbc.com/2021/
01/09/amazon-drops-parler-from-its-web-hosting-service.html.
“Public Law 116-258: Identifying Outputs of Generative Adversarial Networks
Act.” (Stat. 1150; Date: 12/23/2020). Text from: Congress.gov. Accessed
February 8, 2021. https://www.congress.gov/bill/116th-congress/
senate-bill/2904/text.
“Public Law 116-92: National Defense Authorization Act for Fiscal Year 2020.”
(Stat. 1198; Date: 12/20/2019). Text from: Congress.gov. Accessed
February 8, 2021. https://www.congress.gov/bill/116th-congress/
senate-bill/1790/text.
Robertson, Adi. “Why Section 230 Exists and How People Are Still Getting
It Wrong.” The Verge, June 21, 2019. https://www.theverge.com/2019/
6/21/18700605/section-230-internet-law-twenty-six-words-that-
created-the-internet-jeff-kosseff-interview.
Rutgers Center for American Women and Politics. “Current Numbers,” June
12, 2015. https://cawp.rutgers.edu/current-numbers.
Singer, P.W., and Allan Friedman. Cybersecurity and Cyberwar: What Every-
one Needs to Know. Oxford University Press, 2014.
Tolosana, Ruben, Sergio Romero-Tapiador, Julian Fierrez, and Ruben Vera-
Rodriguez. “DeepFakes Evolution: Analysis of Facial Regions and Fake
Detection Performance.” ArXiv:2004.07532 [Cs], July 2, 2020. http://
arxiv.org/abs/2004.07532.
Vox. The Most Urgent Threat of Deepfakes Isn’t Politics, 2020. https://www.
youtube.com/watch?v=hHHCrf2-x6w&feature=emb_logo.
Wang, Yaohui, and Antitza Dantcheva. “A Video Is Worth More than 1000
Lies. Comparing 3DCNN Approaches for Detecting Deepfakes,” June 9,
2020. https://hal.inria.fr/hal-02862476.
Warren, Samuel D., and Louis D. Brandeis. “The Right to Privacy.” Harvard
Law Review 4, no. 5 (1890). https://doi.org/10.2307/1321160.
Yang, Xin, Yuezun Li, and Siwei Lyu. “Exposing Deep Fakes Using Inconsistent
Head Poses.” In ICASSP 2019 - 2019 IEEE International Conference on
Acoustics, Speech and Signal Processing (ICASSP), 8261–65, 2019. https:
//doi.org/10.1109/ICASSP.2019.8683164.
Zhou, Peng, Xintong Han, Vlad I. Morariu, and Larry S. Davis. “Two-Stream
Neural Networks for Tampered Face Detection.” In 2017 IEEE Conference
on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017.
https://doi.org/10.1109/CVPRW.2017.229.