DEEPFAKES AND CHEAP FAKES
THE MANIPULATION OF AUDIO AND VISUAL EVIDENCE
Britt Paris
Joan Donovan
CONTENTS
02 Executive Summary
05 Introduction
10 Cheap Fakes/Deepfakes: A Spectrum
17 The Politics of Evidence
23 Cheap Fakes on Social Media
25 Photoshopping
27 Lookalikes
28 Recontextualizing
30 Speeding and Slowing
33 Deepfakes Present and Future
35 Virtual Performances
35 Face Swapping
38 Lip-synching and Voice Synthesis
40 Conclusion
47 Acknowledgments
Author: Joan Donovan, director of the Technology and Social Change Research
Project, Harvard Kennedy School; PhD, 2015, Sociology and Science Studies,
University of California San Diego.
This report is published under Data & Society’s Media Manipulation research
initiative; for more information on the initiative, including focus areas,
researchers, and funders, please visit https://datasociety.net/research/
media-manipulation
EXECUTIVE SUMMARY
INTRODUCTION
In June 2019, artists Bill Posters and Daniel Howe posted a fake video of Mark Zuckerberg to Instagram.1 Using a pro-
1 https://www.instagram.com/p/ByaVigGFP2U/
2 Charlie Warzel, “He Predicted the 2016 Fake News Crisis. Now He’s Worried About An
Information Apocalypse.,” BuzzFeed News (blog), February 11, 2018, https://www.buzz-
feednews.com/article/charliewarzel/the-terrifying-future-of-fake-news; Franklin Foer,
“The Era of Fake Video Begins,” The Atlantic, April 8, 2018, https://www.theatlantic.
com/magazine/archive/2018/05/realitys-end/556877/; Samantha Cole, “AI-Assisted
Fake Porn Is Here and We’re All Fucked,” Motherboard (blog), December 11, 2017,
https://motherboard.vice.com/en_us/article/gydydm/gal-gadot-fake-ai-porn; Joshua
Rothman, “In the Age of A.I., Is Seeing Still Believing?,” The New Yorker, November 5,
2018, https://www.newyorker.com/magazine/2018/11/12/in-the-age-of-ai-is-seeing-
still-believing; Jennifer Finney Boylan, “Will Deep-Fake Technology Destroy Democ-
racy?,” The New York Times, October 19, 2018, sec. Opinion, https://www.nytimes.
com/2018/10/17/opinion/deep-fake-technology-democracy.html.
3 Warzel, “He Predicted the 2016 Fake News Crisis. Now He’s Worried About An Informa-
tion Apocalypse”; James Vincent, “US Lawmakers Say AI Deepfakes ‘Have the Potential
to Disrupt Every Facet of Our Society,’” The Verge, September 14, 2018, https://www.
theverge.com/2018/9/14/17859188/ai-deepfakes-national-security-threat-lawmak-
ers-letter-intelligence-community; Daniel Funke, “A Potential New Marketing Strategy
for Political Campaigns: Deepfake Videos,” Poynter, June 6, 2018, https://www.poynter.
org/news/potential-new-marketing-strategy-political-campaigns-deepfake-videos;
Boylan, “Will Deep-Fake Technology Destroy Democracy?”
However, there are two related phenomena that are truly new
today. First, deepfakes are being applied to the digitization
of bodies, including one’s voice and likeness in increasingly
routine ways. These are techniques that require training data
of only a few hundred images. With thousands of images of
many of us online, in the cloud, and on our devices, anyone
with a public social media profile is fair game to be faked.
And we are already seeing that the unbounded use of tools
4 Jean Baudrillard et al., In the Shadow of the Silent Majorities, trans. Paul Foss et
al. 1979. (Los Angeles: Cambridge, Mass: Semiotext, 2007); Jean Baudrillard,
Simulacra and Simulation. 1980, trans. Sheila Faria Glaser, 14th Printing edition (Ann
Arbor: University of Michigan Press, 1994); John Fiske and Kevin Glynn, “Trials of
the Postmodern,” Cultural Studies 9, no. 3 (October 1, 1995): 505–21, https://doi.
org/10.1080/09502389500490541; Ian Hacking, “Prussian Numbers 1860-1882,”
in The Probabilistic Revolution, Volume 1 (Cambridge, MA: MIT Press, 1987), 377–94;
Herbert Schiller, Information Inequality, 1 edition (New York, NY: Routledge, 1995); Jean
Baudrillard, The Gulf War Did Not Take Place. 1993. (Indiana University Press, 1995); Tal
Golan, Laws of Men and Laws of Nature: The History of Scientific Expert Testimony in
England and America (Cambridge, MA; London: Harvard University Press, 2007); Sarah
E. Igo, The Averaged American: Surveys, Citizens, and the Making of a Mass Public
(Cambridge, MA: Harvard University Press, 2008); Randall C. Jimerson, Archives Power:
Memory, Accountability, and Social Justice (Society of American Archivists, 2009);
Saloni Mathur, “Social Thought & Commentary: Museums Globalization,” Anthropological Quarterly 78, no. 3 (2005): 697–708.
5 Jessie Daniels, Cyber Racism: White Supremacy Online and the New Attack on Civil
Rights (Lanham, MD: Rowman & Littlefield Publishers, 2009); Safiya Umoja Noble, Algo-
rithms of Oppression: How Search Engines Reinforce Racism, 1 edition (New York: NYU
Press, 2018); Mary Anne Franks, “Unwilling Avatars: Idealism and Discrimination in Cy-
berspace,” Columbia Journal of Gender and Law 20, no. (2011) (May 9, 2019), https://
cjgl.cdrs.columbia.edu/article/unwilling-avatars-idealism-and-discrimination-in-cyber-
space?article=unwilling-avatars-idealism-and-discrimination-in-cyberspace&post_
type=article&name=unwilling-avatars-idealism-and-discrimination-in-cyberspace;
Franks, “The Desert of the Unreal: Inequality in Virtual and Augmented Reality,” U.C.D.
L. Rev., January 1, 2017, 499; Danielle Citron, “Addressing Cyber Harassment: An
Overview of Hate Crimes in Cyberspace,” Journal of Law, Technology, & the Internet 6,
no. 1 (January 1, 2015): 1.
6 Recently Facebook’s Zuckerberg made a statement that Facebook’s suite of plat-
forms, of which WhatsApp is one, should follow WhatsApp’s mode of encryption and
private messaging.
7 Trevor J. Pinch and Wiebe E. Bijker, “The Social Construction of Facts and Artefacts:
Or How the Sociology of Science and the Sociology of Technology Might Benefit Each
Other,” Social Studies of Science 14, no. 3 (1984): 399–441.
8 Antonio García Martínez, “The Blockchain Solution to Our Deepfake Problems,” Wired,
March 26, 2018, https://www.wired.com/story/the-blockchain-solution-to-our-deep-
fake-problems/; Siwei Lyu, “The Best Defense against Deepfake AI Might Be... Blinking,”
Fast Company, August 31, 2018, https://www.fastcompany.com/90230076/the-best-
defense-against-deepfakes-ai-might-be-blinking; Tristan Greene, “Researchers Devel-
oped an AI to Detect DeepFakes,” The Next Web, June 15, 2018, https://thenextweb.
com/artificial-intelligence/2018/06/15/researchers-developed-an-ai-to-detect-deep-
fakes/; Facebook, “Expanding Fact-Checking to Photos and Videos,” Facebook
Newsroom, September 13, 2018, https://newsroom.fb.com/news/2018/09/expand-
ing-fact-checking/; TruePic, “Photos and Videos You Can Trust,” accessed December
10, 2018, https://truepic.com/about/; Sam Gregory and Eric French, “OSINT Digital
Forensics,” WITNESS Media Lab (blog), accessed June 20, 2019, https://lab.witness.
org/projects/osint-digital-forensics/; Matt Turek, “Media Forensics,” Defense Advanced
Research Projects MediFor, 2018, https://www.darpa.mil/program/media-forensics.
THE DEEPFAKES/CHEAP FAKES SPECTRUM

This spectrum charts specific examples of audiovisual (AV) manipulation that illustrate how deepfakes and cheap fakes differ in technical sophistication, barriers to entry, and techniques. From left to right, the technical sophistication of the production of fakes decreases, and the wider public’s ability to produce fakes increases. Deepfakes—which rely on experimental machine learning—are at one end of this spectrum. The deepfake process is both the most computationally reliant and also the least publicly accessible means of manipulating media. Other forms of AV manipulation rely on different software, some of which is cheap to run, free to download, and easy to use. Still other techniques rely on far simpler methods, like mislabeling footage or using lookalike stand-ins.

The spectrum runs from DEEPFAKES (more expertise and technical resources required) to CHEAP FAKES (less expertise and fewer technical resources required).

TECHNOLOGIES: Recurrent Neural Networks (RNN), Hidden Markov Models (HMM), and Long Short Term Memory models (LSTM); Generative Adversarial Networks (GANs); Video Dialogue Replacement (VDR) models; FakeApp / After Effects; Adobe After Effects and Premiere Pro; Sony Vegas Pro; free real-time filter applications; in-camera effects; free speed alteration applications; relabeling and reuse of extant video.

TECHNIQUES: Virtual performances (page 35); voice synthesis (page 38); face swapping (page 35); lip-synching (page 38); face swapping via rotoscope; face altering/swapping; speeding and slowing (page 30); lookalikes (page 27); recontextualizing (page 28).

EXAMPLES: Suwajanakorn et al., “Synthesizing Obama”; Mario Klingemann: AI Art; Deepfakes: Gal Gadot and Rana Ayyub (not pictured); Face2Face; Jordan Peele and BuzzFeed: Obama PSA; Posters and Howe: Mark Zuckerberg; Huw Parkinson: Uncivil War; Paul Joseph Watson: Acosta Video; SnapChat filters; Amsterdam Fashion Institute; Belle Delphine: Hit or Miss Choreography; Unknown: BBC NATO newscast.
9 Graphics processing units (GPUs) are found in chips on a computer’s motherboard, alongside the CPU, or as a plug-in unit. A GPU renders images, animations, and video for the computer’s screen, and it can perform parallel processing and pixel decoding much faster than a CPU. The primary benefit is that it decodes pixels quickly enough to render smooth 3D animations and video.
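To make the parallelism concrete, the following is a minimal sketch (assuming PyTorch is installed; the frame data is a random stand-in, not real footage) of applying one per-pixel operation to a whole batch of frames, running it on a GPU when one is available:

```python
# Minimal sketch (assuming PyTorch) of why GPUs matter here: a per-pixel operation
# applied to a whole batch of frames runs as one parallel kernel on the GPU instead
# of a pixel-by-pixel loop on the CPU.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

frames = torch.rand(32, 3, 256, 256, device=device)  # 32 RGB frames, random stand-ins
brightened = (frames * 1.2).clamp(0.0, 1.0)          # one elementwise op over ~6M pixels
print(brightened.shape, brightened.device)
```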
10 Jensen Huang, “Accelerating AI with GPUs: A New Computing Model,” The Official
NVIDIA Blog, January 12, 2016, https://blogs.nvidia.com/blog/2016/01/12/acceler-
ating-ai-artificial-intelligence-gpus/; Moor Insights and Strategy, “A Machine Learning
Landscape: Where AMD, Intel, NVIDIA, Qualcomm And Xilinx AI Engines Live,” Forbes,
accessed June 30, 2018, https://www.forbes.com/sites/moorinsights/2017/03/03/a-
machine-learning-landscape-where-amd-intel-nvidia-qualcomm-and-xilinx-ai-engines-
live/; “Intel, Nvidia Trade Shots Over AI, Deep Learning,” eWEEK, accessed June 30,
2018, https://www.eweek.com/servers/intel-nvidia-trade-shots-over-ai-deep-learning.
11 Machine learning is a set of complicated algorithms used to train machines to perform
a task with data, such as sorting a list of names in alphabetical order. The first few
times, the lists would be mostly correct, but with a few ordering errors. A human would
correct these errors and feed that data back into the system. As it sorts through
different hypothetical lists of names, the machine does it faster with fewer errors. Ma-
chine learning requires a human for error correction. Deep learning, on the other hand,
is a subset of machine learning that layers these algorithms, called a neural network,
to correct one another. It still uses complicated algorithms to train machines to sort
through lists of names, but instead of a human correcting errors, the neural network
does it.
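The distinction the footnote describes can be made concrete with a toy example. Below is a minimal sketch (numpy only, illustrative rather than code from any deepfake tool) of a two-layer network whose layers correct one another through backpropagation, with no human in the error-correction loop:

```python
# Minimal sketch (numpy only) of the "layers correcting one another" idea: a tiny
# two-layer network learns XOR by propagating its own errors backward through the
# layers, with no human correcting individual mistakes.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    hidden = sigmoid(X @ W1)          # first layer
    output = sigmoid(hidden @ W2)     # second layer
    error = output - y                # the network's own error signal
    delta2 = error * output * (1 - output)
    delta1 = (delta2 @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * (hidden.T @ delta2)   # each layer adjusts using the error passed back to it
    W1 -= 0.5 * (X.T @ delta1)

print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))  # should approach [0, 1, 1, 0]
```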
12 Economist Special Report, ”From Not Working to Neural Networking,” The Economist,
June 25, 2016, https://www.economist.com/special-report/2016/06/25/from-not-
working-to-neural-networking.; Jon Markman, “Deep Learning, Cloud Power Nvidia,”
Forbes, November 22, 2016, https://www.forbes.com/sites/jonmarkman/2016/11/22/
deep-learning-cloud-power-nvidia-future-growth/; Mario Casu et al., “UWB Mi-
crowave Imaging for Breast Cancer Detection: Many-Core, GPU, or FPGA?,” ACM
Trans. Embed. Comput. Syst. 13, no. 3s (March 2014): 109:1–109:22, https://doi.
org/10.1145/2530534.
13 Supasorn Suwajanakorn, Steven Seitz, and Ira Kemelmacher-Shlizerman, “Synthesizing
Obama: Learning Lip Sync from Audio,” in ACM Transactions on Graphics, vol. 36. 4, Ar-
ticle 95 (SIGGRAPH 2017, New York, NY: Association for Computing Machinery, 2017),
http://grail.cs.washington.edu/projects/AudioToObama/.
14 Aayush Bansal et al., “Recycle-GAN: Unsupervised Video Retargeting,” ArXiv:
1808.05174 [Cs], August 15, 2018, http://arxiv.org/abs/1808.05174.
15 In 2011, NVIDIA began partnering with deep learning computer scientists at leading
research universities to develop image and pattern recognition technology. https://
blogs.nvidia.com/blog/2016/01/12/accelerating-ai-artificial-intelligence-gpus/
16 Suwajanakorn, et al., “Synthesizing Obama: Learning Lip Sync from Audio.”
17 Justus Thies, Michael Zollhfer, Marc Stamminger, Christian Theobalt, and Matthias
Neissner, “Face2Face: Real-Time Face Capture and Reenactment of RGB Videos,” ac-
cessed June 27, 2019, https://cacm.acm.org/magazines/2019/1/233531-face2face/
fulltext.
18 Gregory Barber, “Deepfakes Are Getting Better, But They’re Still Easy to Spot,” Wired,
May 26, 2019, https://www.wired.com/story/deepfakes-getting-better-theyre-easy-
spot/.
19 Artists have also used deep learning methods to produce abstract images that sell for large sums and are exhibited in renowned galleries across the world; for example, new media artist Mario Klingemann’s work, shown in the typology above, has been shown at MoMA and Le Centre Pompidou. Rama Allen, “AI Will Be the Art Movement
of the 21st Century,” Quartz, March 5, 2018, https://qz.com/1023493/ai-will-be-
the-art-movement-of-the-21st-century/; “Artificial Intelligence and the Art of Mario
Klingemann,” Sothebys.com, February 8, 2019, https://www.sothebys.com/en/articles/
artificial-intelligence-and-the-art-of-mario-klingemann.
20 “FakeApp 2.2.0 - Download for PC Free,” Malavida, accessed June 1, 2018, https://
www.malavida.com/en/soft/fakeapp/.
21 deepfakes, Deepfakes/Faceswap, Python, 2019, https://github.com/deepfakes/
faceswap.
22 iperov, DeepFaceLab Is a Tool That Utilizes Machine Learning to Replace Faces in Vid-
eos. Includes Prebuilt Ready to Work Standalone Windows 7,8,10 Binary (Look Readme.
Md), Iperov/DeepFaceLab, Python, 2019, https://github.com/iperov/DeepFaceLab.
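The open source tools cited in notes 20–22 broadly follow a shared-encoder, per-identity-decoder autoencoder design. The following is a heavily simplified structural sketch of that idea (assuming PyTorch; it is not the projects’ actual code, and the tensors are random stand-ins for aligned face crops):

```python
# Structural sketch (assuming PyTorch; heavily simplified) of the face-swap
# autoencoder idea: one shared encoder learns a common face representation, and a
# separate decoder is trained for each identity. Swapping encodes face A and decodes
# it with B's decoder.
import torch
import torch.nn as nn

def make_decoder():
    return nn.Sequential(nn.Linear(128, 64 * 64 * 3), nn.Sigmoid())

encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 128), nn.ReLU())
decoder_a = make_decoder()   # would be trained only on images of person A
decoder_b = make_decoder()   # would be trained only on images of person B

face_a = torch.rand(1, 3, 64, 64)               # stand-in for an aligned face crop of A
latent = encoder(face_a)                        # shared "face space" representation
swapped = decoder_b(latent).view(1, 3, 64, 64)  # rendered with B's appearance
print(swapped.shape)
# Training (omitted here) reconstructs A with decoder_a and B with decoder_b, so the
# shared encoder captures pose and expression while each decoder captures identity.
```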
23 Rana Ayyub, “I Was the Victim of a Deepfake Porn Plot Intended to Silence Me,”
HuffPost UK, November 21, 2018, https://www.huffingtonpost.co.uk/entry/deep-
fake-porn_uk_5bf2c126e4b0f32bd58ba316; Rana Ayyub, “In India, Journalists Face
Slut-Shaming and Rape Threats,” The New York Times, May 22, 2018, https://www.
nytimes.com/2018/05/22/opinion/india-journalists-slut-shaming-rape.html.
24 Timothy McLaughlin, “How WhatsApp Fuels Fake News and Violence in India,” Wired,
December 12, 2018, https://www.wired.com/story/how-whatsapp-fuels-fake-news-
and-violence-in-india/.
25 Gregory and French, “OSINT Digital Forensics.”
declared that “AI could set us back 100 years when it comes to how we consume news.”27 The Atlantic characterized the
26 Warzel, “He Predicted the 2016 Fake News Crisis. Now He’s Worried About an Informa-
tion Apocalypse.”
27 Jackie Snow, “AI Could Send Us Back 100 Years When It Comes to How We Consume
News,” MIT Technology Review, November 7, 2017, https://www.technologyreview.
com/s/609358/ai-could-send-us-back-100-years-when-it-comes-to-how-we-
consume-news/.
28 Foer, “The Era of Fake Video Begins.”
29 Joshua Rothman, “In the Age of A.I., Is Seeing Still Believing?,” The New Yorker, Novem-
ber 5, 2018, https://www.newyorker.com/magazine/2018/11/12/in-the-age-of-ai-is-
seeing-still-believing.
30 Tal Golan, Laws of Men and Laws of Nature: The History of Scientific Expert Testimony
in England and America (Cambridge, MA: Harvard University Press, 2007).
31 John Durham Peters, “Witnessing,” Media, Culture & Society 23, no. 6 (November 1,
2001): 707–23, https://doi.org/10.1177/016344301023006002.
32 Sarah E. Igo, The Averaged American: Surveys, Citizens, and the Making of a Mass
Public (Cambridge, MA: Harvard University Press, 2008); Matthew S. Hull, Government
of Paper: The Materiality of Bureaucracy in Urban Pakistan, First edition (Berkeley:
University of California Press, 2012); Durba Ghosh, “Optimism and Political History: A
Perspective from India,” Perspectives on History 49, no. 5 (2011): 25–27; Donald Fran-
cis McKenzie, Oral Culture, Literacy & Print in Early New Zealand: The Treaty of Waitangi
(Victoria University Press, 1985); Harold Innis, Empire and Communications (Toronto,
Canada: Dundurn Press Limited, 1948); Marshall McLuhan, The Gutenberg Galaxy,
1962. Centennial Edition (Toronto: University of Toronto Press, Scholarly Publishing
Division, 2011); Walter J. Ong, Orality and Literacy, 1980. 30th Anniversary Edition
(London ; New York: Routledge, 2012).
33 Golan, Laws of Men and Laws of Nature.
At left, taking an X-ray image, late 1800s, published in the medical journal “Nouvelle Iconographie
de la Salpetrière.” Center, a hand deformity; at right, the same hand seen using X-ray technology.
(Wikimedia Commons)
36 Brian Stonehill, “Real Justice Exists Only in Real Time: Video: Slow-Motion Images
Should Be Barred from Trials.,” Los Angeles Times, May 18, 1992, http://articles.
latimes.com/1992-05-18/local/me-15_1_real-time.
37 Meredith D. Clark, Dorothy Bland, Jo Ann Livingston, “Lessons from #McKinney: Social
Media and the Interactive Construction of Police Brutality,” (McKinney, 2017), https://
digital.library.unt.edu/ark:/67531/metadc991008/; Kimberlé Crenshaw, “Mapping the
Margins: Intersectionality, Identity Politics, and Violence against Women of Color,”
Stanford Law Review 43, no. 6 (July 1991): 1241–99. Both of these pieces are foundational to the first author’s thinking on how evidence is wielded by law enforcement and by groups pushing for police accountability, and they frame how evidence, from official statistics to images online, is used to reify structural inequality.
38 Jean Baudrillard, The Gulf War Did Not Take Place. 1993. (Indiana University Press,
1995).
39 Baudrillard, 42–45.
were real images. What was manipulative was how they were
contextualized, interpreted, and broadcast around the clock
on cable television.
Gulf War coverage on CNN in January 1991 featured infrared night time reporting of missile fire,
heightening the urgency of US operations in the Persian Gulf. (Wikimedia Commons)
40 Baudrillard, 71–73; Robert Fisk, The Great War for Civilisation: The Conquest of the Middle East, Reprint edition (New York: Vintage, 2007); “DCAS Reports – Persian Gulf War Casualties – Desert Storm,” accessed April 9, 2019, https://dcas.dmdc.osd.mil/dcas/pages/report_gulf_storm.xhtml.
41 Baudrillard, 78–83.
This history shows that evidence does not speak for itself.
That is, it is not simply the representational fidelity of
media that causes it to function as evidence. Instead, media
requires social work for it to be considered as evidence. It
requires people to be designated as expert interpreters. It
requires explicit negotiations around who has the status to
interpret media as evidence. And because evidence serves
such a large role in society – it justifies incarcerations, wars,
and laws – economically, politically, and socially powerful
actors are invested in controlling those expert interpre-
tations. Put another way, new media technologies do not
inherently change how evidence works in society. What
they do is provide new opportunities for the negotiation of
expertise, and therefore power.
CHEAP FAKES ON
SOCIAL MEDIA
Today, social media is experiencing a sped-up version of
the cycles of hype, panic, and closure that still and moving
images went through in the last 200 years. The camera
function became a competitive feature of mobile phones in
the mid-2000s. At the same time, social media platforms,
43 Nathan Jurgenson, The Social Photo: On Photography and Social Media (New York:
Verso, 2019).
44 Henry Jenkins, “Photoshop for Democracy,” MIT Technology Review, June 4, 2004,
https://www.technologyreview.com/s/402820/photoshop-for-democracy/.
Photoshopping
45 Richard Abel, Encyclopedia of Early Cinema (Taylor & Francis, 2004); John Pultz, The
Body and the Lens: Photography 1839 to the Present, First edition (New York: Harry N
Abrams Inc., 1995).
46 Clare McGlynn, Erika Rackley, and Ruth Houghton, “Beyond ‘Revenge Porn’: The Contin-
uum of Image-Based Sexual Abuse,” Feminist Legal Studies 25, no. 1 (April 1, 2017):
25–46, https://doi.org/10.1007/s10691-017-9343-2.
47 McGlynn, Rackley, and Houghton, “Beyond ‘Revenge Porn’”; Danielle Keats Citron and
Mary Anne Franks, “Criminalizing Revenge Porn,” SSRN Scholarly Paper (Rochester,
NY: Social Science Research Network, May 19, 2014), https://papers.ssrn.com/
abstract=2368946.
48 McGlynn, Rackley, and Houghton, “Beyond ‘Revenge Porn,’” 33.
pictured loses agency over their own likeness and the “portrayal of their sexual self.”49
Lookalikes
have been used with female journalists who call out abuses of power, as in the case of Rana Ayyub, mentioned earlier, who critiqued the nationalist Bharatiya Janata Party (BJP) that ran the Indian government in Gujarat, which was responsible for extrajudicial murders following the Gujarat riots in 2002.54
53 “De Lima on Sex Video: It Is Not Me,” philstar.com, accessed March 29, 2019, https://
www.philstar.com/headlines/2016/10/06/1630927/de-lima-sex-video-it-not-me.
54 Rana Ayyub, Gujarat Files: Anatomy of a Cover Up (India: CreateSpace Independent
Publishing Platform, 2016).
55 Ayyub, “I Was The Victim of a Deepfake Porn Plot Intended To Silence Me.”
Recontextualizing
56 Martin Coulter, “BBC Issues Warning after Fake News Clips Claiming NATO and Russia
at War Spread through Africa and Asia,” London Evening Standard, accessed April 25,
2018, https://www.standard.co.uk/news/uk/bbc-issues-warning-after-fake-news-
clips-claiming-nato-and-russia-at-war-spread-through-africa-and-a3818466.html;
“BBC Forced to Deny Outbreak of Nuclear War after Fake News Clip Goes Viral,” The
Independent, April 20, 2018, https://www.independent.co.uk/news/media/bbc-forced-
deny-fake-news-nuclear-war-viral-video-russia-nato-a8313896.html.
57 Chris Bell, “No, the BBC Is Not Reporting the End of the World,” April 19, 2018, https://
www.bbc.com/news/blogs-trending-43822718.
58 Daniel Kreiss, “The Fragmenting of the Civil Sphere: How Partisan Identity Shapes the
Moral Evaluation of Candidates and Epistemology,” American Journal of Cultural Sociol-
ogy 5, no. 3 (October 2017): 443–59, https://doi.org/10.1057/s41290-017-0039-5;
Santanu Chakrabarti, Lucile Stengel, and Sapna Solanki, “Fake News and the Ordinary
Citizen in India,” Beyond Fake News (London: British Broadcasting Corp., November 12,
2018), http://downloads.bbc.co.uk/mediacentre/duty-identity-credibility.pdf.
59 Jason Koebler, Derek Mead, and Joseph Cox, “Here’s How Facebook Is Trying to Moder-
ate Its Two Billion Users,” Motherboard (blog), August 23, 2018, https://motherboard.
vice.com/en_us/article/xwk9zd/how-facebook-content-moderation-works; Sarah T.
Roberts, Behind the Screen, Yale University Press (New Haven, CT: Yale University
Press, 2019), https://yalebooks.yale.edu/book/9780300235883/behind-screen.
64 Sarah Sanders, "We Stand by Our Decision to Revoke This Individual’s Hard Pass.
We Will Not Tolerate the Inappropriate Behavior Clearly Documented in This
Video," Twitter, accessed November 8, 2019, https://twitter.com/PressSec/sta-
tus/1060374680991883265.
65 Mike Stuchbery, "Oh Lord, in His Fevered Attempts to Claim He Didn’t Edit the Acosta
Video, @PrisonPlanet Posted a Picture of His Vegas Timeline — That Shows He Added
Frames...," Twitter, accessed November 8, 2019, https://twitter.com/MikeStuchbery_/
status/1060535993525264384; Rafael Shimunov, "1) Took @PressSec Sarah Sanders’ Video of Briefing 2) Tinted Red and Made Transparent over CSPAN Video 3) Red
Motion Is When They Doctored Video Speed 4) Sped up to Make Jim Acosta’s Motion
Look like a Chop 5) I’ve Edited Video for 15+ Years 6) The White House Doctored
It," Twitter, accessed November 8, 2019, https://twitter.com/rafaelshimunov/sta-
tus/1060450557817708544.
66 Paul Joseph Watson, "Sarah Sanders Claims Jim Acosta ’Placed His Hands on
a Woman.’ Jim Acosta Says This Is a ‘Lie’. Here’s the Video. You Be the Judge,"
Twitter, accessed November 8, 2019, https://twitter.com/PrisonPlanet/sta-
tus/1060344443616800768.
67 Paris Martineau, “How an InfoWars Video Became a White House Tweet,” Wired,
November 8, 2018, https://www.wired.com/story/infowars-video-white-house-cnn-jim-
acosta-tweet/.
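The dispute documented in notes 64–67 turns on a very simple kind of manipulation: changing when frames play. The sketch below (assuming OpenCV is installed; "input.mp4" is a hypothetical clip) shows how keeping every other frame at the original frame rate roughly doubles the apparent speed of any gesture in the footage:

```python
# Minimal sketch (assuming OpenCV and a hypothetical "input.mp4") of re-timing a
# clip: writing every other frame at the original frame rate roughly doubles the
# apparent speed of motion, the kind of cheap fake at issue in the Acosta dispute.
import cv2

cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
out = cv2.VideoWriter("sped_up.mp4", fourcc, fps, (width, height))

frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_index % 2 == 0:   # keep only every other frame
        out.write(frame)
    frame_index += 1

cap.release()
out.release()
```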
69 From January 2018 to June 2019, we collected and analyzed over 70 examples of
deepfake videos and images from YouTube, Instagram, Twitter, TikTok, PornHub, Arxiv,
and Mr. Deepfakes.
70 We completed eleven semi-structured interviews between September and December
2018.
71 This term is drawn from sociological and anthropological literature on knowledge production and refers to groups of individuals who organize both on- and offline to experiment, learn, and professionalize around practice-based activities. Jean Lave and Etienne Wenger,
Situated Learning: Legitimate Peripheral Participation (Cambridge University Press,
1991); Line Dubé, Anne Bourhis, and Réal Jacob, “The Impact of Structuring Character-
istics on the Launching of Virtual Communities of Practice,” Journal of Organizational
Change Management, April 1, 2005, https://doi.org/10.1108/09534810510589570.
Virtual Performances
72 This observation from our interviews is backed up in interviews from news coverage,
for example, in Megan Farokhmanesh, “Is It Legal to Swap Someone’s Face into
Porn without Consent?” The Verge, January 30, 2018, https://www.theverge.
com/2018/1/30/16945494/deepfakes-porn-face-swap-legal.
One – highlighting just how close free and open source tools can get to the “expensive and labor intensive” ones used by Hollywood.73
Face Swapping
73 Insight gleaned from exchange with derpfakes on November 14, 2018, and derpfake
video comments on YouTube and Reddit.
74 Samantha Cole, “AI-Assisted Fake Porn Is Here and We’re All Fucked,” Motherboard,
December 11, 2017, https://motherboard.vice.com/en_us/article/gydydm/gal-gadot-
fake-ai-porn.
75 Quote from Reddit user Quiet Horse, now deleted from Reddit, in Megan Farokhmanesh,
“Is It Legal to Swap Someone’s Face into Porn without Consent?,” The Verge, January
30, 2018, https://www.theverge.com/2018/1/30/16945494/deepfakes-porn-face-
swap-legal.
76 McGlynn, Rackley, and Houghton, “Beyond ‘Revenge Porn’”; Franks, “Unwilling Avatars”;
Citron, “Addressing Cyber Harassment”; Citron and Franks, “Criminalizing Revenge
Porn.” While this is generally true in online porn communities, pornographic practice
is by no means monolithic. The production of pornography by and for women, LGBTIQ
individuals, and people of color highlights that these groups have agency in producing
and enjoying their own counter-hegemonic pornographic content. The higher degree
of visibility and access to sexual culture for these groups might be described as
liberatory, or more radically democratic. In this vein, Feona Attwood suggests that
altporn in the form of SuicideGirls and Nerve.com represent “participatory cultures
which serve corporate and community needs” which can be understood as radically
democratic. (Feona Attwood, “No Money Shot? Commerce, Pornography and New
Sex Taste Cultures,” Sexualities 10, no. 4 (October 1, 2007): 441–56, https://doi.
org/10.1177/1363460707080982.)
78 Craig Silverman, “How To Spot a DeepFake Like The Barack Obama-Jordan Peele Video,”
accessed April 18, 2018, https://www.buzzfeed.com/craigsilverman/obama-jordan-
peele-deepfake-video-debunk-buzzfeed.
79 James Vincent, “Watch Jordan Peele Use AI to Make Barack Obama Deliver a
PSA about Fake News,” The Verge, April 17, 2018, https://www.theverge.com/
tldr/2018/4/17/17247334/ai-fake-news-video-barack-obama-jordan-peele-buzzfeed.
80 Cade Metz, “A Fake Zuckerberg Video Challenges Facebook’s Rules,” The New York
Times, June 11, 2019, sec. Technology, https://www.nytimes.com/2019/06/11/tech-
nology/fake-zuckerberg-video-facebook.html.
81 The Association for Computing Machinery is a professional organization that supports
SIGGRAPH, a special interest group on computer graphics and interactivity.
82 “Canny AI: Imagine World Leaders Singing" Fxguide, accessed June 24, 2019, https://
www.fxguide.com/featured/canny-ai-imagine-world-leaders-singing/.
CONCLUSION
The problems wrought by audiovisual manipulation are
many and difficult to remedy, but we can work to reduce risk
and mitigate harm. This report has shown that individual
women, journalists, and others who are antagonistic to those
who hold economic and political power are going to be the
first to confront the politics of evidence in a “post-truth”
world. In these scenarios, questions of evidence are key:
Who should we trust, and on what basis?
83 Hamza Shaban, “A Google App That Matches Your Face to Artwork Is Wildly Popular.
It’s Also Raising Privacy Concerns.,” Washington Post. January 17, 2018, https://
www.washingtonpost.com/news/the-switch/wp/2018/01/16/google-app-that-
matches-your-face-to-artwork-is-wildly-popular-its-also-raising-privacy-concerns/;
Cole and Maiberg, “People Are Using AI to Create Fake Porn of Their Friends and
Classmates”; Natasha Singer, “Facebook’s Push for Facial Recognition Prompts Privacy
Alarms,” The New York Times, July 11, 2018, sec. Technology, https://www.nytimes.
com/2018/07/09/technology/facebook-facial-recognition-privacy.html.
84 Pavel Korshunov and Sebastien Marcel, “DeepFakes: A New Threat to Face Recog-
nition? Assessment and Detection,” ArXiv:1812.08685 [Cs], December 20, 2018,
http://arxiv.org/abs/1812.08685; Owen Hughes, “Is FaceApp Safe? Don’t Be so Quick
to Share Your Face Online, Warn Security Experts,” International Business Times UK,
April 27, 2017, https://www.ibtimes.co.uk/faceapp-safe-dont-be-so-quick-share-
your-face-online-warn-security-experts-1618975; Robert Chesney and Danielle
Keats Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National
Security,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, July
14, 2018), https://papers.ssrn.com/abstract=3213954; M. C. Elish and danah boyd,
“Situating Methods in the Magic of Big Data and AI,” Communication Monographs 85,
no. 1 (January 2, 2018): 57–80, https://doi.org/10.1080/03637751.2017.1375130.
85 Instagram Scraper and the Chrome extension DownAlbum are among many open source
tools that allow anyone to easily download images from publicly available Facebook or
Instagram accounts to do many things, including to generate training data for fakes.
86 Cole and Maiberg, “People Are Using AI to Create Fake Porn of Their Friends and Class-
mates.”
87 “Bella Thorne Steals Hacker’s Thunder, Publishes Nude Photos Herself,” Naked Security
(blog), June 18, 2019, https://nakedsecurity.sophos.com/2019/06/18/bella-thorne-
steals-hackers-thunder-publishes-nude-photos-herself/; “Beware the Fake Facebook
Sirens That Flirt You into Sextortion,” Naked Security (blog), March 23, 2018, https://
nakedsecurity.sophos.com/2018/03/23/beware-the-fake-facebook-sirens-that-flirt-
you-into-sextortion/.
88 BBC News, ”Fake Voices ‘Help Cyber-Crooks Steal Cash,’” BBC, July 8, 2019, https://
www.bbc.com/news/technology-48908736; Stacey Colino, “Don’t Fall Victim to the
Grandparents Scam,” AARP, April 18, 2018, http://www.aarp.org/money/scams-fraud/
info-2018/grandparent-scam-scenarios.html.
89 Drew Harwell, “Fake-Porn Videos Are Being Weaponized to Harass and Humiliate
Women: ‘Everybody Is a Potential Target,’” Washington Post, accessed April 5, 2019,
https://www.washingtonpost.com/technology/2018/12/30/fake-porn-videos-are-be-
ing-weaponized-harass-humiliate-women-everybody-is-potential-target/; Samantha
Cole and Emanuel Maiberg, “People Are Using AI to Create Fake Porn of Their Friends
and Classmates,” Motherboard (blog), January 26, 2018, https://motherboard.vice.
com/en_us/article/ev5eba/ai-fake-porn-of-friends-deepfakes.
90 “47 U.S. Code § 230 - Protection for Private Blocking and Screening of Offensive Mate-
rial,” Legal Information Institute, accessed March 1, 2019, https://www.law.cornell.edu/
uscode/text/47/230.
91 Franks, “The Desert of the Unreal”; Citron and Franks, “Criminalizing Revenge Porn.”
92 47 U.S. Code § 230 - Protection for Private Blocking and Screening of Offensive
Material,”; “Section 230 Protections,” Electronic Frontier Foundation, August 25, 2011,
https://www.eff.org/issues/bloggers/legal/liability/230.; Danielle Citron and Benjamin
Wittes, “The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity,” Ford-
ham Law Review 86, no. 2 (November 1, 2017): 401; Citron and Franks, “Criminalizing
Revenge Porn.”
93 Donald Mackenzie, “Uninventing the Bomb?” Medicine and War 12, no. 3 (October 22,
2007): 202–11, https://doi.org/10.1080/13623699608409285.
94 Natasha Bernal, “Journalists at Wall Street Journal to Be Taught to Identify
‘Deepfakes,’” The Telegraph, November 15, 2018, https://www.telegraph.co.uk/
technology/2018/11/15/journalists-wall-street-journal-taught-identify-deepfakes/;
Suzanne Sunne, “What to Watch for in the Coming Wave of “Deep Fake" Videos,” Global
Investigative Journalism Network, May 28, 2018, https://gijn.org/2018/05/28/what-
to-watch-for-in-the-coming-wave-of-deep-fake-videos/; Siwei Lyu, “The Best Defense
against Deepfake AI Might Be ... Blinking,” Fast Company, August 31, 2018, https://
www.fastcompany.com/90230076/the-best-defense-against-deepfakes-ai-might-be-
blinking.
95 Martínez, “The Blockchain Solution to Our Deepfake Problems”; Lyu, “The Best Defense
against Deepfake AI Might Be... Blinking”; Greene, “Researchers Developed an AI to
Detect DeepFakes”; “Exposing Fake Videos.”
96 Facebook, “Expanding Fact-Checking to Photos and Videos | Facebook Newsroom,”
Facebook, September 13, 2018, https://newsroom.fb.com/news/2018/09/expand-
ing-fact-checking/.
97 TruePic, “Photos and Videos You Can Trust.”
98 Matt Turek, “Media Forensics,” Defense Advanced Research Projects MediFor, 2018,
https://www.darpa.mil/program/media-forensics.
99 H. R. Hasan and K. Salah, “Combating Deepfake Videos Using Blockchain and Smart
Contracts,” IEEE Access 7 (2019): 41596–606, https://doi.org/10.1109/AC-
CESS.2019.2905689.
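Provenance proposals like the one cited in note 99 build on a simple primitive: a cryptographic fingerprint of a media file registered at capture time, against which later copies can be compared. A minimal sketch of that primitive (Python standard library only; file names are hypothetical, and this is not the cited system itself):

```python
# Minimal sketch of content fingerprinting, the building block that blockchain-based
# provenance proposals rely on: register a hash when a clip is captured, then compare
# any later copy against it. Illustrative only; not the scheme from the cited paper.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a media file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# "original.mp4" and "copy_found_online.mp4" are hypothetical file names.
registered = fingerprint("original.mp4")          # stored at capture time
candidate = fingerprint("copy_found_online.mp4")  # file under investigation
print("match" if candidate == registered else "file differs from registered original")
```

Because any re-encoding changes the digest, a mismatch signals only that the copy differs from the registered original, not what was altered.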
100 Yvette Clarke, “H.R.3230 Defending Each and Every Person from False Appearances by
Keeping Exploitation Subject to Accountability Act of 2019,” webpage, June 24, 2019,
https://www.congress.gov/bill/116th-congress/house-bill/3230; Ben Sasse, “S.3805
Malicious Deep Fake Prohibition Act of 2018,” webpage, December 21, 2018, https://
www.congress.gov/bill/115th-congress/senate-bill/3805/text; Tim Grayson “AB-1280
Crimes: Deceptive Recordings.,” webpage, April 22, 2019, https://leginfo.legislature.
ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB1280. Bryan Hughes “SB 751,”
webpage, February 11. 2019. https://leginfo.legislature.ca.gov/faces/billTextClient.
xhtml?bill_id=201920200AB1280; In addition, Virginia has expanded existing revenge
porn law to cover fake nude images. Marcus Simon, “HB 2678 Amendment” webpage,
February 11, 2019. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?-
bill_id=201920200AB1280
101 Literature in both science and technology studies (STS) and media studies discusses
how evidence in the form of formalized and informal data and discourses around these
types of evidence are shaped by social, political and cultural structures that reflect
and reify the status quo. At the same time, evidence can be mobilized by less powerful
entities to make justice-based arguments about the distribution of harms and power in
society. But even then, community-based counter-data is always judged according to
rules set by dominant discourses around truth that exist to protect the status quo of
privilege. The authors have published collaborative work on the politics of evidence as
they relate to community-based counter-data practices. Britt S. Paris et al., “Pursuing
a Toxic Agenda,” September 2017, https://100days.envirodatagov.org/pursuing-tox-
ic-agenda/; Morgan Currie, Joan Donovan, and Britt S. Paris, “Preserving for a More
Just Future: Tactics of Activist Data Archiving,” in Data Science Landscape: Towards
Research Standards and Protocols, ed. Usha Mujoo Munshi and Neeta Verma, Studies
in Big Data 38 (Singapore: Springer, 2018), 67–78; Britt S. Paris and Jennifer Pierre,
“Naming Experience: Registering Resistance and Mobilizing Change with Qualitative
Tools,” InterActions: Journal of Education and Information Studies 13, no. 1 (2017),
https://escholarship.org/uc/item/02d9w4qd; Britt S. Paris and Jennifer Pierre, “Bad
Data — Cultural Anthropology,” Cultural Anthropology Field Insights, April 28, 2017,
https://culanth.org/fieldsights/1107-bad-data.
102 Sarah T. Roberts, Behind the Screen (New Haven, CT: Yale University Press, 2019),
https://yalebooks.yale.edu/book/9780300235883/behind-screen.
103 Danielle Keats Citron, Hate Crimes in Cyberspace, Reprint edition (Cambridge, MA:
Harvard University Press, 2016); Danielle Keats Citron and Helen Norton, “Intermedi-
aries and Hate Speech: Fostering Digital Citizenship for Our Information Age,” Boston
University Law Review 91 (2011): 1435.
104 Danielle Citron and Benjamin Wittes, “The Internet Will Not Break: Denying Bad Samar-
itans § 230 Immunity,” Fordham Law Review 86, no. 2 (November 1, 2017): 401; Mary
Anne Franks and Danielle Citron, “Criminalizing Revenge Porn,” Wake Forest Law Review
49 (2014): 345–92; Mary Anne Franks, “Drafting an Effective ‘Revenge Porn’ Law: A
Guide for Legislators” (Cyber Civil Rights Initiative, 2016), https://www.cybercivilrights.
org/guide-to-legislation/; Christopher Zara, “The Most Important Law in Tech Has a
Problem | Backchannel,” Wired, January 3, 2017, https://www.wired.com/2017/01/the-
most-important-law-in-tech-has-a-problem/.
105 Donald Mackenzie, “Uninventing the Bomb?” Medicine and War 12, no. 3 (October 22,
2007): 202–11, https://doi.org/10.1080/13623699608409285, 120
ACKNOWLEDGMENTS
We are grateful to everyone at Data & Society Research Institute for their generous support,
guidance, and helpful feedback through the duration of this project. In particular, we’d like to
thank Patrick Davison for his thoughtful editing and revisions. We are forever thankful for the
collaboration and ongoing discussions with members of Data & Society’s Media Manipulation
Initiative, past and present. Thanks to interviewees for graciously taking time to provide
background information on this work. Special thanks to external reviewers Fenwick McKelvey,
Tiziana Terranova, Aayush Bansal, and Samuel Gursky for their generous feedback on the draft.
A big thank you to our friends, family, and partners for supporting us in so many ways through this
project and during everything else.
www.datasociety.net
@datasociety
Illustration by Jim Cooke
BEYOND ILLUSIONS
UNMASKING THE THREAT OF SYNTHETIC MEDIA FOR LAW ENFORCEMENT
JUNE 2024
Table of Contents
Foreword ................................................................................................................................................ 4
Acknowledgement ................................................................................................................................. 5
Executive Summary ................................................................................................................................ 6
1. Introduction ................................................................................................................................ 7
2. Types of Synthetic Media and the Technology Behind ............................................................. 7
2.1 Deepfakes ................................................................................................................................. 7
2.2 Synthetic Audio ........................................................................................................................ 9
2.3 Generated Text ....................................................................................................................... 10
2.4 Synthetic IDs ........................................................................................................................... 12
3. Enabling and Enhancing Law Enforcement Capabilities ............................................................. 13
3.1 Montage Generation .............................................................................................................. 13
3.2 Undercover Operations and Covert Surveillance .................................................................. 13
3.3 Training and Public Engagement ........................................................................................... 14
4. Potential Challenges for Law Enforcement.............................................................................. 14
4.1 Evidence Authentication ........................................................................................................ 14
Challenge to Legal Proceedings ................................................................................................... 14
4.2 Victim Identification ............................................................................................................... 15
Impersonation .............................................................................................................................. 15
4.3 Misinformation and Propaganda ........................................................................................... 16
4.4 Privacy Concerns .................................................................................................................... 17
4.5 Forensics in the Synthetic Media Era..................................................................................... 17
5. Investigative Techniques and Forensic Analysis ...................................................................... 18
5.1 Collection of Evidence ................................................................................................................ 18
Source Verification ....................................................................................................................... 18
Metadata Analysis ........................................................................................................................ 18
Reverse Image Searching ............................................................................................................. 18
Linguistic Analysis......................................................................................................................... 18
5.2 Emerging Technology Solutions for Synthetic Media Detection .......................................... 19
Deep Learning Models ................................................................................................................. 19
Explainable AI ............................................................................................................................... 19
File Structure Analysis .................................................................................................................. 20
Biological Signals .......................................................................................................................... 20
Statistical Analysis at a Pixel Level .............................................................................................. 21
Geometrical and Behavioral Analysis .......................................................................................... 21
AI-generated Speech Analysis ...................................................................................................... 21
6. Regulatory and Policy Considerations ..................................................................................... 22
6.1 Intellectual Property .............................................................................................................. 22
6.2 Explainable AI ......................................................................................................................... 23
7. Recent Reports on Synthetic Media......................................................................................... 24
7.1 Deepfakes ............................................................................................................................... 24
7.2 Synthetic Audio ...................................................................................................................... 24
7.3 Generated Text ....................................................................................................................... 25
7.4 Synthetic IDs ........................................................................................................................... 25
8. INTERPOL’s Role in Synthetic Media Forensics .......................................................................... 25
9. Conclusion and Recommendations ............................................................................................ 26
Foreword
In the ever-advancing landscape of technology, particularly with the rise of Artificial Intelligence (AI),
synthetically generated media has emerged as a highly consequential development for law enforcement.
Although not a new technology, the recent rise of generative AI platforms and deepfake-as-a-service
offerings has led to a surge in synthetic media variations, now easily produced and accessible online
worldwide.
While synthetically generated media offers numerous positive applications across society, it also
presents significant challenges. The accessibility and affordability of various AI platforms have enabled
criminals to exploit this technology, leveraging synthetic media to propagate and perpetuate criminal
activities. This poses considerable challenges for the global law enforcement community. Recognizing
the diverse forms of synthetic media and its associated challenges, INTERPOL is committed to
exploring this dynamic landscape to support our member countries in addressing both current and
future threats posed by synthetic media.
Developing robust forensic and investigative capabilities is crucial for detecting and differentiating
between genuine and manipulated media. This requires a multi-stakeholder approach involving
member countries, industry stakeholders, and academic institutions to pool our knowledge and share both expertise and advancing technologies. Only through our collective efforts can we mitigate the adverse
effects of synthetic media and ensure the sustained integrity and reliability of global law enforcement
operations.
To support this endeavor, INTERPOL has developed this background paper to offer a comprehensive
analysis of this technology, delving into the potential opportunities offered by it and the challenges it
presents for law enforcement.
I trust that this background paper will serve as a cornerstone for INTERPOL in further exploring the
synthetic media landscape and assisting member countries in advancing and innovating policing.
Jürgen Stock
Acknowledgement
The successful completion of this effort was made possible through extensive collaboration. There
were many collaborators involved in the creation of this background paper on synthetic media and its
implications for law enforcement. First and foremost, INTERPOL would like to thank the participants of
the INTERPOL Metaverse Expert Group that inspired the formation of the synthetic media expert
group to discuss current and upcoming threats and support the creation of this document.
INTERPOL would like to extend a special thanks to Mark Evenblij, Brandon Epstein, Paul Warren-Tape, Matthew Adams, Martino Jerian and Manon den Dunnen for actively contributing to the structure and content of this document and helping to shape this paper. I would like to also thank Ananya Das, Christopher Church, Fabio Bruno, Julie Tomaszewski, Janani Nair, Priscilla Cabuyao, Toshinobu Yasuhira, Wookyung Jung, Lindeberg Leite, Mike Price, Parya Lotfi, Mark Nutall, Scott Landman, Jan Collie, Giorgio Patrini and Julia Absalyamova, who peer reviewed this document; their invaluable contributions and additional insights filled in knowledge gaps and were instrumental to this background paper. I extend sincere gratitude to all the experts for their invaluable insights and contributions.
INTERPOL stands ready to continuously explore the opportunities and challenges that synthetic media
presents to law enforcement and support member countries in understanding, investigating and
applying digital forensics to this emerging threat.
Madan Oberoi
INTERPOL Executive Director of Technology and Innovation
Executive Summary
With the rapid advancement of AI technology, synthetic media is becoming an influential mode of
content. Its ease of availability has enabled its exploitation by criminals for malicious
activities. Currently, synthetic media files possess the ability to deceive human perception, making it
challenging to determine their authenticity. As AI continues to evolve, these artificially generated
media will become increasingly sophisticated, posing significant challenges for law enforcement
agencies in conducting forensic analyses of such content.
This background paper on synthetic media and its implications for law enforcement offers a thorough introductory understanding of this emerging technology. Based on research and analysis of the current synthetic media landscape, the main findings that this paper puts forward for law enforcement officers include:
1. Synthetic media is a new and emerging field and tackling the threats posed by it requires
dynamic research and analysis of the landscape to understand the technology and find
detection solutions.
2. To combat this threat effectively, it is crucial for law enforcement agencies to gain a
comprehensive understanding of the multifaceted nature of synthetic media, including its
creation, distribution, and potential impact.
3. Law enforcement requires a holistic solution to investigate digital media files efficiently and
effectively in this synthetic content era.
4. The rise of AI-generated content and the increasing ease of producing it call for a collaborative approach between global law enforcement, the private sector, and academia to stay at the forefront of synthetic media forensics and investigations through ongoing vigilance, continuous technological advancement, and international collaboration.
In order to assist law enforcement in member countries and enhance collaboration between law
enforcement, industry and academia, INTERPOL has drafted this background paper to explore and
understand the synthetic media landscape and tackle the threats posed by it. Based on the findings,
INTERPOL will continue to work closely with various stakeholders in this endeavor.
1. Introduction
Synthetic media refers to the type of media content, such as images, videos, audio, or text that has
been completely or partially generated or manipulated using artificial intelligence (AI) algorithms.
These technologies allow for the creation of highly realistic and convincing media that can be difficult
to distinguish from human-generated content.
Synthetically generated media has a multitude of positive applications across society, law
enforcement, and the private sector. It serves as valuable training material, enables the
anonymization of individuals, and consistently delivers high-quality content. While synthetic media
content is not a new technology, the recent proliferation of user-friendly generative AI platforms and
deepfake-as-a-service offerings has resulted in a surge in synthetic media variations and their use.
These can now be produced with minimal effort and are accessible to a broad internet user base
worldwide. This sudden rise of synthetic content on the internet has emerged as a significant concern
requiring global law enforcement attention.
Addressing the threats of synthetic media requires a multi-faceted approach. For instance, it may be
difficult to detect and assess the veracity and authenticity of legal documents or frameworks drafted
by AI platforms. Law enforcement agencies need to adapt their investigative approaches to detect
and verify the authenticity of media content, as well as collaborate with experts in the field of AI and
digital forensics to combat the misuse of synthetic media effectively.
2. Types of Synthetic Media and the Technology Behind

Various categories of synthetic media have emerged as a transformative force in the digital realm,
altering how we generate and engage with content. These diverse forms of artificially created or
modified media span a broad spectrum of innovative uses, each with its own unique potential and
impact. Whether it is in the domain of images, sound, videos, or text, these creations rely on advanced
technologies for their generation. With the capacity to imitate and, in some cases, surpass human-
produced content, synthetic media represents a dynamic frontier with a potential for both positive
and concerning implications. Experts believe that 90% of the content available online will be
synthetically sourced by 20251. Some of the key types of synthetic media discussed in this paper are
outlined below.
2.1 Deepfakes
Representing a specific type of synthetic media, deepfakes are created using sophisticated AI
algorithms to depict individuals in highly realistic, and often deceptive, situations that never occurred.
These advanced AI models are capable of creating highly convincing visual and audio content, often
featuring individuals in fabricated scenarios or making statements they never actually made.2
1
Giardina, C. (2023, January 8). CES: Could 90 Percent of Content Be AI-Driven by 2025? The Hollywood Reporter.
https://www.hollywoodreporter.com/movies/movie-news/ces-ai-sag-aftra-1235290431/
2
Goodfellow, I. J., et al. (2014). Generative Adversarial Nets. Advances in Neural Information Processing Systems
(https://arxiv.org/pdf/1406.2661.pdf)
7|Page
The emergence of deepfakes has generated profound implications across various domains, raising
both interest and concern in equal measure. On the positive side, deepfakes have unlocked new horizons in the realm of special effects, enriching the entertainment industry with lifelike CGI3 and offering unparalleled creative potential. Conversely, the nefarious use of deepfakes for
misinformation, identity theft, and manipulation has raised significant ethical and security concerns,
underscoring the urgent need for safeguards and countermeasures.
3. CGI – Computer Generated Imagery: a type of visual effect in which scenes, effects, and images are synthetically generated using computer software.
4. Luca Guarnera et al. (2023). Level Up the Deepfake Detection: A Method to Effectively Discriminate Images Generated by GAN Architectures and Diffusion Models. https://arxiv.org/pdf/2303.00608.pdf
5. Aditya Ramesh et al. (2022). Hierarchical Text-Conditional Image Generation with CLIP Latents. https://arxiv.org/pdf/2204.06125.pdf
6. https://www.ibm.com/topics/autoencoder
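To make the underlying mechanics more concrete, the sketch below illustrates the shared-encoder, two-decoder autoencoder idea that many face-swap deepfake tools build on (cf. the autoencoder reference in footnote 6). It is a minimal, illustrative PyTorch sketch: the layer sizes, image resolution, and variable names are assumptions for demonstration, not a description of any specific tool, and no training loop is shown.

```python
# Minimal sketch (illustrative only) of the shared-encoder / two-decoder
# autoencoder idea behind many face-swap deepfakes.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

# One shared encoder learns a person-independent face representation;
# decoder_a and decoder_b each learn to reconstruct one identity.
encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()

# Training (not shown) reconstructs faces of A with decoder_a and faces of B
# with decoder_b. At swap time, faces of A are pushed through decoder_b.
faces_a = torch.rand(8, 3, 64, 64)        # placeholder batch standing in for person A
swapped = decoder_b(encoder(faces_a))     # A's pose and expression rendered as B
print(swapped.shape)                      # torch.Size([8, 3, 64, 64])
```

In practice the two decoders are trained for many epochs on aligned face crops of each identity; the ease of assembling such training data from publicly available photos is a large part of what makes the technique so accessible.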
2.2 Voice Synthesis
Voice synthesis is a distinct niche within the synthetic media landscape that has emerged with the rise
of generative AI. It involves the creation of lifelike speech patterns and audio content, often mimicking
the vocal characteristics of specific individuals, or generating a totally new tone or style of speech. In
its constructive application, voice synthesis has revolutionized the accessibility of technology, aiding
individuals with speech impairments, and enhancing the capabilities of virtual assistants with more
human-like interactions.
On the other hand, voice synthesis could be misused for manipulation of audio recordings,
impersonations, and other deceptive practices. These nefarious activities underscore the critical need for
security measures and ethical guidelines to safeguard against misuse. The ability to generate
convincing synthetic voices has deepened concerns about identity theft, fraud, and the credibility of
audio evidence, necessitating vigilance and countermeasures to uphold the integrity of auditory
content in a digitally evolving world.
8. Image contributed by DuckDuckGoose (duckduckgoose.ai).
Some of the main AI algorithms utilized for creating synthetic audio are recurrent neural networks (RNNs)9, generative adversarial networks (GANs)10, and variational autoencoders (VAEs)11.
2.3 Text Generation
Text generation represents a distinct area of synthetic media, enabling the automated creation of
written content with remarkable authenticity. It is the task of generating text with the goal of appearing indistinguishable from human-written text, more formally known as Natural Language Generation (NLG). Text generation plays a pivotal role in the dynamic landscape of synthetic media, contributing to the creation of diverse and engaging content. The most advanced current methods for text generation, including platforms like ChatGPT or Bard AI, enable the generation of human-like and contextually relevant text across various domains. In the realm of synthetic media, these models empower content creators, for example, to produce realistic and coherent narratives for virtual characters, dialogues for interactive storytelling, and even new articles or blog posts with minimal human intervention.
9. Robin M. Schmidt (2019). Recurrent Neural Networks (RNNs): A Gentle Introduction and Overview. https://arxiv.org/pdf/1912.05911.pdf
10. Chris Donahue et al. (2018). Adversarial Audio Synthesis. https://arxiv.org/pdf/1802.04208.pdf
11. Antoine Caillon et al. (2021). RAVE: A Variational Autoencoder for Fast and High-Quality Neural Audio Synthesis. https://arxiv.org/pdf/2111.05011.pdf
The principal algorithms used in the generation of text are recurrent neural networks (RNNs)12, including multiplicative RNNs (mRNNs)14, and transformer-based models15 such as GPT-416.
12. Ilya Sutskever et al. (2011). Generating Text with Recurrent Neural Networks. https://www.cs.toronto.edu/~jmartens/docs/RNN_Language.pdf
13. Vanishing Gradient Problem - a phenomenon that occurs during the training of deep neural networks, where the gradients used to update the network become extremely small or "vanish" as they are backpropagated from the output layers to the earlier layers. (engati.com)
14. mRNN - a Multiplicative RNN is a type of recurrent neural network with multiplicative connections. (paperswithcode.com)
15. Li Gong et al. (2019). Enhanced Transformer Model for Data-to-Text Generation. https://aclanthology.org/D19-5615.pdf
16. OpenAI (2023). GPT-4 Technical Report. https://cdn.openai.com/papers/gpt-4.pdf
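As an illustration of how little effort such text generation now requires, the minimal sketch below uses the open-source Hugging Face transformers library with the small, freely available GPT-2 model. It is a demonstration under those assumptions only and does not depict any of the commercial platforms named above.

```python
# Minimal sketch: generating text with an off-the-shelf language model.
# Requires the "transformers" package; GPT-2 is used here only because it is
# small and freely available, not because any platform above relies on it.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "The suspect was last seen near the harbour, and witnesses reported"
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])   # fluent continuation written by the model
```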
2.4 Synthetic IDs
Synthetic IDs are artificially generated images that resemble genuine identity documents. These IDs are designed to replicate the appearance of legitimate
identification papers, including driver's licenses, passports, or other official forms of identification.
The primary aim of generating synthetic IDs is to support the volume of data available for training
deep neural networks, specifically for the purpose of detecting and preventing fraudulent identity
documents. This approach assists in enhancing fraud detection mechanisms while simultaneously
addressing concerns related to privacy and the sensitive nature of personal identity documents.17
17. Daniel Benalcazar et al. (2022). Synthetic ID Card Image Generation for Improving Presentation Attack Detection. https://arxiv.org/pdf/2211.00098.pdf
18. Yiheng Xu et al. (2019). LayoutLM: Pre-training of Text and Layout for Document Image Understanding. https://arxiv.org/abs/1912.13318
Figure 2: Synthetic IDs: (A) a driver's license; (B) a passport.19 These synthetic IDs look very close to authentic IDs and can often be mistaken for originals or even pass identity verification tests.
Synthetic media offers a range of benefits for law enforcement, revolutionizing various aspects of their
operations and strategies.
Synthetic media may also be used to translate eyewitness descriptions into visual representations of potential suspects through the analysis and interpretation of provided details, such as facial features or
distinct physical traits. This aids law enforcement investigations by providing leads and helping to
identify and locate potential suspects involved in criminal activities. By generating accurate and
detailed suspect images, synthetic media significantly supports law enforcement efforts, streamlining
investigations for more effective outcomes.20
Synthetic media significantly enhances law enforcement capabilities through its roles in undercover
operations and covert surveillance. Primarily, in undercover operations, this technology serves as a
cornerstone for crafting and adapting personas of undercover officers. By leveraging synthetic media,
law enforcement agents can swiftly modify appearances, incorporating fictional injuries, scars,
tattoos, or other alterations necessary to sustain cover identities. This adaptability ensures the
maintenance of plausible and dynamic personas critical for successful undercover assignments.
Moreover, in the realm of covert surveillance, synthetic media plays a key role in supporting law
enforcement by assisting in the creation of realistic backstories for undercover officers. This
technology contributes to the development of believable and authentic personas for undercover
19. Image contributed by IDVerse.
20. https://www.researchgate.net/publication/342978037_Suspect_Face_Generation
operatives, enhancing the officers’ capabilities to operate incognito within diverse environments. By
providing the means to establish credible backstories, synthetic media augments the effectiveness
and adaptability of covert surveillance strategies employed by law enforcement agencies.21
Synthetic media plays a significant role in augmenting law enforcement capabilities, particularly in
training, public engagement, and officer anonymity. Within the domain of training, law enforcement
agencies leverage synthetic media to create highly realistic and immersive scenarios. By simulating
authentic training environments, officers can enhance their preparedness to handle diverse and
complex situations they might encounter in the field. This enables them to refine their skills, decision-
making, and response strategies in a controlled yet realistic setting.
Furthermore, synthetic media finds utility in event reconstructions, aiding law enforcement in
recreating crime scenes or incidents for investigative purposes or public appeals. By recreating
scenarios using synthetic media, law enforcement can potentially gather valuable evidence, seek
public assistance, and increase community engagement in solving cases.
The rise of synthetic media brings forth significant challenges for evidence authentication in
investigations. Determining the authenticity of media content becomes a complex endeavor due to
the sophisticated manipulation capabilities. Traditional methods of visual and audio analysis may no
longer be sufficient to identify manipulated content, thereby requiring specialized expertise and the
utilization of advanced technologies for forensic analysis. This places additional burdens on
investigative bodies and legal authorities to keep pace with the evolving landscape of synthetic media
and its potential implications for the justice system.
21. Kelly W. Sundberg and Christina M. Witt (2019). Undercover Operations: Evolution and Modern Challenges. Journal of the Australian Institute of Professional Intelligence Officers (informit.org).
The rise of synthetic media requires a closer examination of how legal systems adapt to address these emerging
threats, ensuring that the pursuit of truth and fairness remains paramount.
The proliferation of synthetic media poses a significant challenge when it comes to proving the
authenticity of evidence in legal proceedings. In an environment where fabricated media can be
virtually indistinguishable from genuine content, the burden of verifying the veracity of audio, video,
or images becomes increasingly complex. This challenge is compounded by the emergence of the "Liar's Dividend," a phenomenon where defendants claim that genuinely authentic media has been
artificially generated, creating a climate of doubt and uncertainty. As a result, judicial systems face the
pressing need to adapt to this evolving landscape, developing robust methods and standards for
evidence authentication to uphold the integrity of legal proceedings.
The emergence of synthetic media has given rise to a distressing concern regarding non-consensual
explicit content. Malicious actors can exploit this technology to create compromising content
featuring individuals without their knowledge or consent. In such scenarios, identifying and providing
support to the victims becomes notably intricate, as the line between reality and synthetic fabrication
blurs. Addressing these issues requires heightened vigilance, legal frameworks, and support systems
for potential victims to ensure their well-being and protection in an increasingly digital and
interconnected world.
Impersonation
Synthetic media enables highly convincing impersonations of individuals, presenting a concerning
challenge. For instance, an individual's voice can be synthesized with remarkable accuracy, making it
exceptionally challenging to distinguish between a genuine and manipulated phone call or audio
recording. This capability can be exploited for a range of malicious purposes, from carrying out
extortion schemes to disseminating false information, thereby amplifying concerns about identity
verification and trust in various aspects of communication and digital interactions. In an age where
authenticity is increasingly fragile, addressing the potential for impersonation is a crucial aspect of
managing the risks associated with synthetic media.
• AI Voice Scams
AI voice scams ingeniously leverage technology to replicate the voice of a victim's family member,
friend, or even impersonate customer service representatives, all with the intent to deceive.
Perpetrators utilize these fabricated voices to manipulate victims into revealing sensitive personal
information or coercing them into sending money. The scam operates by exploiting trust and
familiarity, leading unsuspecting individuals to believe they are interacting with someone they know
or a legitimate service provider, amplifying the success of the ruse.
Active liveness checks can also be defeated by real-time deepfakes, because real-time deepfakes can faithfully reproduce the facial landmark movements of the attacker. Hence, even if facial traits (e.g., eye shape and color, the height of the cheekbones, the shape of the mouth) do change, the attacker can move their head left and right and smile, easily following the active liveness instructions.
Know Your Customer (KYC) is the practice of verifying an individual’s identity in compliance with laws
and regulations, primarily for anti-money laundering purposes. In response to the pronounced
inability to detect and prevent online fraud, the KYC process has been introduced to reduce instances
of illegal transactions. Due to its sensitivity for businesses, KYC is regulated by respective national and
international government agencies. KYC procedures are necessary when opening an account with a
bank or when conducting financial transactions of any sort. The possibility of bypassing KYC with the aid of synthetic media poses a serious threat to online liveness check procedures, as it creates opportunities for impersonation and thus compromises the reliability of the identity verification process.
In a recent incident, it was reported that a multinational corporation fell victim to a sophisticated
deepfake scam, resulting in a financial loss of approximately USD 25.6 million. With the use of advanced synthetic
media technology, perpetrators digitally replicated high-ranking company officials, including the Chief
Financial Officer, within a simulated video conference setting. The deepfake representations
convincingly mimicked the appearance, voice, and mannerisms of genuine personnel, fostering an
illusion of authenticity. Under the guise of legitimacy, the victim was deceived into executing a series
of 15 transfers totaling millions of dollars to multiple bank accounts. Despite initial skepticism, the
victim's compliance was facilitated by the seamless interaction and seemingly genuine appearance of
the deepfake personas.22
Synthetic media's capacity to generate convincing news articles, videos, or social media posts raises
substantial concerns related to the dissemination of false information and the manipulation of public
opinion. This creates formidable challenges for law enforcement agencies, which heavily depend on
accurate and reliable information to conduct effective investigations and ensure public safety. The
proliferation of synthetic media exacerbates the complexity of discerning fact from fiction,
underscoring the pressing need for robust mechanisms to verify the authenticity of digital content and
safeguard against the potential consequences of misinformation and propaganda in an information-
driven society.
Simultaneously, the rise in the use of text generation tools also raises serious concerns. These range from questions of plagiarism and intellectual property rights to ethical concerns, as it becomes crucial to ensure responsible use to prevent misinformation, deepfakes, or manipulation of public opinion. Large Language Models (LLMs) are widely used to generate content across social media. Unsupervised use of this content may lead to the spread of misinformation and propaganda. Social media is a breeding ground for such false information, allowing malicious content to reach a larger audience.23
22. https://economictimes.indiatimes.com/industry/tech/hong-kong-mnc-suffers-25-6-million-loss-in-deepfake-scam/articleshow/107465111.cms?from=mdr
23. https://www.cpahq.org/media/ivih25ue/handbook-on-disinformation-ai-and-synthetic-media.pdf
4.4 Privacy Concerns
Synthetic media's capability to create explicit or compromising content, featuring individuals who may
have never engaged in such behavior, escalates profound privacy concerns. This technological
potential can be exploited for malicious purposes, including harassment, blackmail, or causing
significant harm to an individual's reputation. The ease with which synthetic content can be produced
and disseminated amplifies the need for robust privacy protection measures and legal safeguards to
mitigate the potential harm that could be inflicted upon individuals.
Synthetic IDs, which blend legitimate personal data with fabricated information, allow criminals to create false identities that can be exploited by fraudsters to secure credit lines, apply for government benefits, intercept tax returns, and participate in various financial activities, resulting in significant monetary losses for financial institutions and government agencies.24
A key element of the challenge facing forensic experts lies in the need to understand the AI media
generation process and integrate this advanced knowledge into the forensic identification steps.
While automated tools offer valuable assistance, they are not entirely adequate. To conduct effective
forensic analysis amid the ever-evolving landscape of AI-generated media, it is essential to combine
the use of these tools with rigorous analysis provided by experienced forensic professionals. This
balanced approach ensures an unbiased and interpretable examination.
24. https://legal.thomsonreuters.com/en/insights/articles/synthetic-identity-fraud
5. Investigative Techniques and Forensic Analysis
Investigative techniques and forensic analysis are vital in combating the growing volume of unlawful synthetic media content. For a seized evidence file, the following investigative techniques can be applied to check its authenticity.
Source Verification
This technique involves verifying the origin and authenticity of a particular piece of synthetic media.
Investigators assess the credibility of the source, the techniques used to create the content, and any
potential motives for dissemination. This multifaceted approach requires technical expertise,
adaptability to evolving methods, and advanced technologies to detect manipulated content and
mitigate harm.
Metadata Analysis
Metadata analysis aids synthetic media investigations by supplying vital contextual information. This includes details about the source, creation date, and editing history,
providing investigators with the means to trace the origins and authenticity of potentially deceptive
or harmful content. However, it is important to note that in certain scenarios, particularly when
the media content has been transmitted over various applications, the metadata may be
compromised or incomplete. This inherent limitation highlights the need for a multi-faceted
approach, combining metadata analysis with other investigative techniques to ensure a
comprehensive examination of synthetic media.
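As a minimal illustration of this step, the sketch below reads whatever EXIF metadata survives in an image file using the Pillow library. The file name is hypothetical, and real casework would typically rely on dedicated forensic tooling (for example, ExifTool) that recovers far more fields.

```python
# Minimal sketch: reading EXIF metadata from an image with Pillow.
# Capture device, timestamps, and software fields can support (or undermine)
# a claimed provenance; as noted above, metadata is often stripped in transit.
from PIL import Image, ExifTags

def dump_exif(path):
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found (possibly stripped in transit).")
        return
    for tag_id, value in exif.items():
        tag = ExifTags.TAGS.get(tag_id, tag_id)   # map numeric tag IDs to readable names
        print(f"{tag}: {value}")

dump_exif("evidence_photo.jpg")   # hypothetical file name
```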
Linguistic Analysis
Delving into the linguistic aspects and writing style embedded within synthetic media is a strategic
approach for unraveling its true origins. The unique language patterns, cultural allusions, and even
grammatical anomalies can serve as valuable clues in discerning the authenticity of the content.
Experts employ linguistic analysis to ascertain whether the media in question is a product of
manipulation or an authentic piece. This examination of language serves as a linguistic fingerprint that
can help investigators determine whether the content aligns with the expected linguistic traits of the
purported source, thereby aiding in the authentication of multimedia materials.
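A small, illustrative piece of such an analysis is sketched below: a few toy stylometric features (average sentence length, vocabulary richness, function-word rate) that can be compared between a questioned text and reference writing from the purported source. The feature set and example texts are assumptions for demonstration; validated forensic stylometry uses far richer features and statistical testing.

```python
# Minimal sketch: toy stylometric features for comparing a questioned text
# against reference writing from the purported author.
import re
from collections import Counter

FUNCTION_WORDS = {"the", "and", "of", "to", "a", "in", "that", "is", "it", "for"}

def stylometric_profile(text):
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    counts = Counter(words)
    total = len(words) or 1
    return {
        "avg_sentence_len": total / max(len(sentences), 1),
        "type_token_ratio": len(counts) / total,          # vocabulary richness
        "function_word_rate": sum(counts[w] for w in FUNCTION_WORDS) / total,
    }

questioned = "The parcel is in the depot. It is ready for collection."
reference = "Honestly, I never write like that. My sentences ramble on and on."
print(stylometric_profile(questioned))
print(stylometric_profile(reference))
```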
5.2 Emerging Technology Solutions for Synthetic Media Detection
Some of the solutions currently used to detect artificially created or manipulated media content are outlined below.
In addition to identifying anomalies, digital forensic techniques are applied to analyze the metadata
of images and videos, examining format, compression, and noise distribution. Generative AI typically
leaves distinct digital fingerprints that forensic tools can detect. To stay ahead, adversarial training exposes detection models to the latest AI-generated forgeries, enabling them to learn and adapt to new and evolving generative techniques.
Further, combining multiple AI models to analyze diverse aspects of the data, such as visual and temporal features, enhances detection rates. For instance, a deepfake video might display realistic visuals but lack synchronization between facial and verbal expressions, a discrepancy detectable by an AI trained to recognize it. Moreover, AI systems can cross-reference content with known
databases of authentic images, videos, or documents, revealing discrepancies that could signify
fraudulent activity. This proves particularly effective in document verification, where details are cross-
checked against official records, enhancing the precision of synthetic media and synthetic ID detection
methods.
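One simple building block for such cross-referencing is perceptual hashing, which assigns similar fingerprints to visually similar images. The sketch below assumes the open-source imagehash package and a hypothetical folder of known-authentic reference scans, and reports the closest reference match for a questioned image; it is illustrative only, not a complete verification workflow.

```python
# Minimal sketch: cross-referencing a questioned image against a folder of
# known-authentic reference images using perceptual hashes.
# Requires the "imagehash" and "Pillow" packages.
from pathlib import Path
from PIL import Image
import imagehash

def nearest_reference(questioned_path, reference_dir, threshold=8):
    q_hash = imagehash.phash(Image.open(questioned_path))
    best = None
    for ref in Path(reference_dir).glob("*.jpg"):
        distance = q_hash - imagehash.phash(Image.open(ref))   # Hamming distance
        if best is None or distance < best[1]:
            best = (ref.name, distance)
    if best and best[1] <= threshold:
        print(f"Near-duplicate of {best[0]} (distance {best[1]})")
    else:
        print("No close match in the reference set.")

nearest_reference("questioned_id_scan.jpg", "reference_documents/")   # hypothetical paths
```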
Explainable AI
The differentiation between authentic and synthetic media is becoming critically important. Deepfake
detection leveraging AI presents a promising solution to manage the increasing volume of synthetic
content. However, the inherent "black box25" nature of these AI systems hampers the transparency of
their decision-making processes, raising the question, "Why is this image classified as fake?" As the
average individual's ability to distinguish between sophisticated synthetic images and reality
diminishes, the necessity for Explainable AI (XAI) in this domain grows. Some of the explainable
deepfake detection approaches include the following.
25. Black Box - a system whose behavior has to be observed entirely through inputs and outputs. Even if the internal structure of the application under examination can be understood, the tester chooses to ignore it. Black box testing assesses a system solely from the outside, without the operator or tester knowing what is happening within the system to generate responses to test actions. (techtarget.com)
Method: Deepfake Detection Typology
Description: Deepfake detection typology helps identify the deepfake models used, which can provide insights into origins or creator identity, akin to tracing cybercriminal trends.
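One simple way to move beyond a bare yes/no output is to produce a saliency map showing which regions of an image drive the "fake" score. The sketch below uses occlusion sensitivity for that purpose; predict_fake_prob is a hypothetical stand-in for a trained detector, so the example illustrates the explanation technique rather than any particular product.

```python
# Minimal sketch: occlusion-based explanation for a deepfake classifier.
# predict_fake_prob() is a placeholder for a trained detector that returns the
# probability an image is synthetic; it is an assumption, not a real API.
import numpy as np

def predict_fake_prob(image):
    # Placeholder detector: in practice this would be a trained neural network.
    return float(image.mean() > 0.5)

def occlusion_map(image, patch=16):
    h, w = image.shape[:2]
    baseline = predict_fake_prob(image)
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.5   # grey out one patch
            # A large drop means this region was important for the "fake" call.
            heatmap[i // patch, j // patch] = baseline - predict_fake_prob(occluded)
    return heatmap

image = np.random.rand(64, 64, 3)        # placeholder input frame
print(occlusion_map(image).round(2))
```

Because occlusion sensitivity only queries the detector's inputs and outputs, it can be applied even to "black box" systems, which is one reason such post-hoc methods are attractive for forensic reporting.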
Biological Signals
Biological signals, such as photoplethysmography (PPG) cells26, can assist in discerning between authentic and synthetic media. These signals exhibit patterns that help reveal where videos may have been altered, by highlighting deviations from genuine biological data. They are harnessed to power a classification network, enabling the detection of the generative models employed in videos.
The intricate spatiotemporal patterns within these biological signals serve as a crucial representation
of residuals. These patterns effectively unveil manipulation artifacts by segregating them from the
authentic biological signals. Given that biological signals remain absent in deepfakes, their absence
yields distinct signatures discernible in the generative noise. Consequently, these biological signals
serve as an indispensable projection, representing the residuals within a known dimension. This
26. Photoplethysmography (PPG) - measures the amount of light absorbed or reflected by human tissues. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8950880/
projection allows for an exploration of unique signatures per model, enabling the identification and
differentiation of synthetic media from authentic content.27
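The intuition can be sketched in a few lines: average the green-channel intensity of a face region frame by frame and inspect the spectrum of the resulting signal for a plausible heart-rate peak. The example below assumes a fixed face region and a hypothetical video file; the cited work uses far richer spatiotemporal "PPG cell" maps and a trained classification network, so this is only an illustration of the underlying signal.

```python
# Minimal sketch: extracting a crude remote-PPG signal from a face video.
# The face region is assumed to be a fixed box; real systems track the face
# and learn from spatiotemporal "PPG cell" maps rather than a single trace.
import cv2
import numpy as np

cap = cv2.VideoCapture("questioned_clip.mp4")          # hypothetical file
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
signal = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:200, 150:250, 1]                    # green channel of the assumed face box
    signal.append(roi.mean())
cap.release()

signal = np.asarray(signal) - np.mean(signal)
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
band = (freqs > 0.7) & (freqs < 3.0)                    # plausible heart-rate range (42-180 bpm)
print("Energy in heart-rate band vs. total:",
      spectrum[band].sum() / (spectrum.sum() + 1e-9))
# Genuine footage tends to show a clear periodic component in this band;
# its absence or distortion is one cue (among many) of synthetic content.
```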
Statistical analysis of decoded video pixels, or of their transformations, aims to uncover telltale signs of synthetic generation. Anomalies in statistical patterns,
unusual fluctuations, or irregularities in pixel transformations may signify the manipulation or artificial
generation of the media. These irregularities can range from subtle variations to distinct patterns that
deviate from what would be expected in naturally captured or authentic content.
For instance, when examining shadows and reflections within synthetic media, an in-depth analysis
focuses on their consistency and adherence to natural laws. Synthetic content may inadvertently
introduce inconsistencies in how shadows fall or how reflections interact with the environment,
deviating from the expected behavior seen in authentic visual content. By systematically analyzing
these elements, anomalies such as incorrect lighting angles or inconsistent shadow lengths can be
identified, indicating potential manipulation.
Similarly, behavioral analysis involves studying the behavior of objects or entities depicted in the
media. Synthetic media might exhibit discrepancies in behavioral patterns that depart from expected
real-world behaviors. This examination involves assessing how objects move, interact, or behave
within the digital environment. Any unnatural or irregular movements, unnatural physics, or peculiar
interactions within the synthetic content can signal potential artificial manipulation.
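Returning to the statistical cues described above, one widely studied example lies in the frequency domain (see the Fourier Transform note below): the upsampling operations inside many generators leave periodic artifacts that become visible in an image's averaged power spectrum. The sketch below computes a simple azimuthally averaged spectrum that can be compared between questioned and known-authentic material; it is illustrative only and not a standalone detector.

```python
# Minimal sketch: azimuthally averaged power spectrum of a grayscale image.
# Many generative models leave characteristic high-frequency artifacts that
# show up when such spectra are compared against authentic footage.
import numpy as np
from PIL import Image

def radial_spectrum(path, bins=50):
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)                 # distance from spectrum centre
    r_norm = (r / r.max() * (bins - 1)).astype(int)
    profile = np.bincount(r_norm.ravel(), weights=spectrum.ravel())
    counts = np.bincount(r_norm.ravel())
    return profile / np.maximum(counts, 1)               # mean power per radius bin

print(radial_spectrum("questioned_frame.png")[-5:])      # high-frequency tail (hypothetical file)
```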
27. Umur Aybars Ciftci et al. (2020). How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via Interpreting Residuals with Biological Signals. https://arxiv.org/pdf/2008.11363.pdf
28. Fourier Transform - an important image processing tool used to decompose an image into its sine and cosine components. The output of the transformation represents the image in the Fourier or frequency domain, while the input image is the spatial domain equivalent. (homepages.inf.ed.ac.uk)
Synthetic audio detection typically relies on acoustic features, such as spectrograms, that capture the frequency and time characteristics of the audio, enabling a detailed representation of speech patterns. These features serve as inputs to deep learning models, such as convolutional neural networks (CNNs) or recurrent neural networks (RNNs), which are adept at learning intricate patterns and variations in speech. By leveraging these diverse feature sets, such models can power synthetic speech detection systems.
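A minimal sketch of that pipeline is shown below: the open-source librosa library computes a log mel-spectrogram from an audio clip, which is then passed through a small, untrained convolutional network of the kind such detectors are built on. The file name, feature settings, and network are illustrative assumptions only.

```python
# Minimal sketch: mel-spectrogram features feeding a small CNN, the basic
# shape of many synthetic-voice detectors (requires librosa and torch;
# the network here is untrained and purely illustrative).
import librosa
import torch
import torch.nn as nn

audio, sr = librosa.load("questioned_call.wav", sr=16000)     # hypothetical clip
mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=64)
log_mel = librosa.power_to_db(mel)                            # shape: (64, frames)

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d((1, 1)), nn.Flatten(),
    nn.Linear(8, 2),                                          # real vs. synthetic logits
)
x = torch.tensor(log_mel, dtype=torch.float32)[None, None]    # shape: (1, 1, 64, frames)
print(model(x))                                               # untrained scores, shape (1, 2)
```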
One example is deepfake technology, which refers to the use of AI to create realistic synthetic
media, often used to manipulate videos or images of individuals. In many countries, laws regarding
privacy and defamation can be applied to deepfakes, holding creators accountable for violating privacy
rights, spreading false information, or defaming individuals.
The evolution of synthetic media poses significant challenges in copyright ownership. With AI-
generated content often blurring the lines of original authorship, determining rightful ownership
becomes intricate. Identifying the creator or owner of manipulated content in synthetic media raises
questions about copyright claims and protections. The complexities involved in establishing ownership
rights in AI-generated works call for a reassessment and potential adaptation of copyright laws to
address the novel aspects of content creation in the digital age.
The concept of fair use undergoes scrutiny in the context of synthetic media. Fair use exceptions are
subject to debate when assessing the transformative nature of AI-generated content. Differentiating
between legitimate transformative works and potential copyright infringement becomes increasingly
challenging. The interpretation of fair use laws needs refinement to accommodate the transformative
aspects of AI-generated content, ensuring the balance between creative freedom and copyright
protection.
The responsible use of AI in law enforcement prioritizes the alignment of policing principles, ethical
standards, and human rights compliance. With this intent, INTERPOL, in partnership with the United
Nations Interregional Crime and Justice Research Institute (UNICRI), published the Toolkit for
Responsible AI Innovation in Law Enforcement29 in June 2023. This toolkit comprises seven
resources aimed at guiding law enforcement agencies in designing, developing, procuring, and
deploying AI technologies in a responsible manner.
Policymakers face the task of adapting existing intellectual property regulations to effectively
encompass the nuances presented by synthetic media. Establishing guidelines and standards that
govern AI-generated content is imperative to ensure compliance with intellectual property laws.
Crafting regulatory frameworks that address the intricate issues of copyright, licensing, trademark
29. https://www.interpol.int/en/How-we-work/Innovation/Artificial-Intelligence-Toolkit
protection, and fair use within the context of synthetic media creation is essential to safeguard the
interests of creators and copyright holders while fostering innovation.
Memo
In many countries, the exploration and utilization of generative AI, such as ChatGPT and Gemini
(previously Bard AI), have triggered intricate intellectual property debates under existing legislation.
There are countries whose domestic laws permit "information analysis", allowing the use of copyrighted materials in the training of generative AI models.
6.2 Explainable AI
"Explainable AI" is emerging as a crucial factor in AI applications, especially in digital forensics, for
detecting synthetic or deepfake media and processing extensive data. To be considered "Explainable AI," it is important for these tools to adhere to the principles outlined in NIST30 Interagency Report (NISTIR) 8312. This
entails providing clear, accurate, and understandable explanations for AI-generated outputs, including
confidence levels, beyond mere yes/no responses.
Memo
In some countries, within legal contexts, authenticating digital evidence traditionally involves
witness testimony. However, AI tools cannot testify, leading to requirements for expert
witnesses familiar with AI technology to authenticate evidence in court. Yet, challenges persist in
explaining AI processes adequately for authentication purposes. Moreover, some standards
assessing the admissibility of novel scientific testimony demand that AI techniques have been
tested, published, demonstrated error rates, follow standards, and enjoy wide scientific acceptance.
Transparency in AI algorithms becomes pivotal as a lack of explainability may hinder the
admissibility of AI-generated results.31
30. NIST - National Institute of Standards and Technology
31. https://medexforensics.com/2023/10/30/evaluating-the-use-of-ai-in-digital-evidence-and-courtroom-admissibility/
7. Recent Reports on Synthetic Media
With the rise of generative AI, the ease of producing synthetic media has significantly increased, which poses a serious concern in terms of its widespread creation and dissemination.
7.1 Deepfakes
• Two men have been found guilty of using AI to generate exploitative images of children, marking a significant instance of such cases. The court's ruling established an important precedent by recognizing that sexually abusive content encompasses AI-generated imagery with a "high level" of realism that can resemble real children and minors.32
• An investigation has been launched into the use of AI to manipulate images of young girls, altering their clothing and circulating the edited photos. It was revealed that multiple girls were affected, and some of the perpetrators have been identified by the police. Authorities stressed the severity of the situation, calling for those responsible to cooperate and expressing fears
that the images might be disseminated on websites. This incident highlights the rising problem
of digital violence and the pressing need to address it.33
• In Quebec, Canada, a man was found guilty of producing synthetic videos of child sexual
abuse using deepfake technology. The judge highlighted the detrimental impact of these
synthetic images, not only fueling the market for child sexual abuse material but also
complicating police investigations and endangering the safety of children online whose
identities could be exploited.34 A Purple Notice35 was issued to alert INTERPOL member
countries on the use of deepfake technology to defraud and extort victims through
impersonation scams, online sexual blackmail and investment fraud.
• In 2023, the National Center for Missing & Exploited Children (NCMEC)36 received
approximately 5000 reports of AI-generated child exploitation media files. NCMEC’s
CyberTipline 2023 Report37 highlights the concern around such AI-generated Child Sexual Abuse Material (AIG-CSAM), created by bad actors based on real children or computer-generated
children. The report also states that more than 70% of these AIG-CSAM files were from
conventional online platforms38, thus highlighting the inadequate content tracking protocols
on these platforms. However, in April 2024, industry leaders such as OpenAI, Amazon, Anthropic, Civitai, Google, Meta, Metaphysic, Microsoft, Mistral AI, and Stability AI joined an initiative led by Thorn to adopt "Safety by Design" principles to prevent the creation and proliferation of AIG-CSAM39.
32. https://edition.cnn.com/2023/09/27/asia/south-korea-child-abuse-ai-sentenced-intl-hnk/index.html
33. https://edition.cnn.com/2023/09/20/europe/spain-deepfake-images-investigation-scli-intl/index.html
34. https://www.cbc.ca/news/canada/montreal/ai-child-abuse-images-1.6823808
35. INTERPOL Purple Notice: to seek or provide information on modus operandi, objects, devices and concealment methods used by criminals.
36. The National Center for Missing & Exploited Children is a private, non-profit corporation based in the United States.
37. https://www.missingkids.org/cybertiplinedata
38. https://www.missingkids.org/blog/2024/generative-ai-csam-is-csam
39. https://openai.com/blog/child-safety-adopting-sbd-principles
7.2 Synthetic Audio
• A woman faced a harrowing ordeal when she received a call from someone who sounded exactly like her grandson, claiming to be in jail and in need of bail money. Believing it to be her grandson, she withdrew the money from the bank and was about to send it when a bank manager intervened. She soon realized the caller had impersonated her grandson and that she had been scammed. This incident reflects a growing trend of impersonation scams
aided by advancements in voice-generating AI technology. In recent years, such scams have
been on the rise, often targeting vulnerable individuals and resulting in substantial financial
losses. AI-driven voice-generating software can replicate a person's voice with a short audio
sample, enabling scammers to convincingly mimic trusted voices.40
• A recent alert highlights the potential misuse of AI in the form of ChatGPT.
While ChatGPT has legitimate applications, it can also be exploited by scammers to generate
human-like summaries and stories, making it easier to craft well-written scam emails.
Criminals can abuse this technology to increase the volume of phishing attacks, potentially
leading to users clicking malicious links or divulging personal information. This approach
diverges from traditional phishing indicators, which often involve poorly written texts and
emails. With the rise of AI technologies aiding content creation, it's increasingly difficult to
distinguish phishing attempts from legitimate communication.41
• A woman, known to the immigration authorities as an illegal immigrant, utilized two fake
passports and IDs to reside and work in a foreign country for seven years. Employing these
synthetic IDs, she secured a tenancy in a house, opened bank accounts and obtained
employment.42
• Analysis of emerging synthetic media trends, tools, and techniques to provide critical insights
into the evolution of threats and opportunities, enabling proactive and targeted responses.
• Forensic authentication to identify methodologies and standards for authenticating media
content. These measures will assist investigative efforts, helping to distinguish real content
from manipulated material.
40. https://www.washingtonpost.com/technology/2023/03/05/ai-voice-scam/
41. https://abc7chicago.com/what-is-chatgpt-google-chatbot-ai-online-scams/12952645/
42. https://www.dailymail.co.uk/news/article-12990911/namibian-mother-known-home-office-illegal-immigrant-fake-passport-lived-four-bed-house.html
• Multi-stakeholder threat assessment of the criminal landscape to provide an understanding
of its potential misuse as well as already evidenced misuse of this emerging technology by
criminal actors.
• Capacity Building and Training for law enforcement officers on the ever-evolving synthetic
media threat landscape and innovative solutions that will help identify, detect and analyze
synthetic media files to streamline synthetic media investigations. Since the field of synthetic
media forensics is closely related to AI due to the significant role AI plays in both the creation
and detection of synthetic media, INTERPOL aims to encourage the responsible use of AI
technologies across its member countries.
Reflecting the needs and ambitions of member countries in this area, a dedicated INTERPOL
Responsible AI Lab (I-RAIL) has been established. I-RAIL aims to be the focal point for matters related
to the responsible use of AI by law enforcement. In addition to facilitating general AI awareness,
I-RAIL provides practical support to law enforcement in member countries with regard to the responsible use of AI, including knowledge development and exchange, agency assessment
support and tailored capacity building and training.
Shallow Fakes
What happens to truth and trust given the fact that so much of everyday life takes place in a space that is marked by constant, casual deception?
This Article defines shallow fakes and explains their centrality to the
social media ecosystem. It then turns normative, assessing the costs of
shallow fakes, which often slip through the hard and soft law that govern
other kinds of public information sharing, like advertising and journalism.
We end with prescriptions, chief among them a need for more
transparency around how the platforms operate.
Table of Contents
I. INTRODUCTION
II. WHAT ARE SHALLOW FAKES?
   A. The Core Elements
      1. Superficial
      2. Commonplace
      3. Online
      4. The Self
   B. Examples of Shallow Fakes
      1. The Filter
      2. The Crop
      3. The Mislabel
      4. The Product Endorsement
   C. Distinguishing Deepfakes
III. PLATFORMS FOR SHALLOW FAKERY
   A. The Arms Race
   B. Deception in the Algorithm
   C. Platform Policies on Deception
IV. THE PROBLEM WITH SHALLOW FAKES
   A. Gendered Harms
      1. Body Dysmorphia
      2. Depression and Anxiety
      3. Pressure to Sexualize
      4. "Real-Life Filtered Look"
      5. Reinforcing Traditional Gender Roles
   B. Racialized Harms
      1. Blackfishing and Other Forms of Appropriation
      2. Whitewashing and Exclusion
   C. Democratic Harms
      1. The Erosion of Expertise
      2. The Erosion of Public Discourse
V. PLATFORM REGULATION
   A. Transparency Reforms
   B. Deceptive and Unfair Trade Practices by the Platforms
   C. Other Initiatives
VI. CONCLUSION
I. INTRODUCTION
Social media is awash in fakery.1 Scholars and policymakers have
become especially worried about malicious disinformation tools like
“deepfakes”—hyper-realistic fake videos made with artificial
intelligence.2 But the problem of deception online is both subtler and more
endemic than a series of bad actors engaged in information warfare. Most
of today’s online fakes are actually quite shallow—superficial tweaks to
one’s self-presentation.3 Every day on social media, users place filters on
their selfies, post photos out of context, and otherwise present a digitally-
enhanced version of their lives. Unlike deepfakes, these superficial tweaks
to one’s self-presentation—which we term “shallow fakes”—are enabled
and encouraged by the platforms.
The ability to curate a better-than-real image is the sine qua non of
social media platforms. Instagram, for example, owes its start to the filter:
1. See, e.g., Soroush Vosoughi et al., The Spread of True and False News Online,
359 SCIENCE 1146, 1148–49 (2018) (examining 12 years of Twitter data and showing that
“[f]alsehood reached more people” than the truth and that users were 70% more likely to
share fake news than real news); CAILIN O’CONNOR & JAMES OWEN WEATHERALL, THE
MISINFORMATION AGE: HOW FALSE BELIEFS SPREAD 147–67 (2019) (explaining how social
networks are particularly fertile breeding grounds for misinformation); RICHARD L. HASEN,
CHEAP SPEECH: HOW DISINFORMATION POISONS OUR POLITICS—AND HOW TO CURE IT
(2022) (explaining the particular impact social media networks and fake news can have on
democratic elections and proposing legal reforms); Peter Suciu, Social Media Is Full of
Fakes—As In Fake Followers New Study Finds, FORBES (Nov. 17, 2021, 1:09 PM),
https://bit.ly/3ooJbUB (describing how over a third of top influencers' followers are fake
accounts); Hunt Allcott & Matthew Gentzkow, Social Media and Fake News in the 2016
Election, 31 J. ECON. PERSPECTIVES 211, 219–23 (2017) (discussing data that shows the
pervasiveness of fake news on social media in the leadup to the 2016 presidential election).
2. See Bobby Chesney & Danielle Citron, Deep Fakes: A Looming Challenge for
Privacy, Democracy, and National Security, 107 CALIF. L. REV. 1753, 1757 (2019)
(describing the problem of “[t]echnologies for altering images, video, or audio . . . in ways
that are highly-realistic and difficult to detect”). There are many different definitions of
deepfakes, but they all emphasize the use of sophisticated technology, specifically artificial
intelligence (AI) and deep learning, to manipulate content and deceive consumers. Indeed,
the word “deepfake” is a “portmanteau of ‘deep learning’”—a reference to a kind of AI
algorithm—and “fakes.” James Vincent, Why we need a better definition of ‘deepfake’,
THE VERGE (May 22, 2018, 2:53 PM), https://bit.ly/3WltoCB. Deepfakes originated in
pornography, when, in 2017, one anonymous Reddit user uploaded pornographic videos
featuring celebrities; most of the celebrities were female. See Russell Spivak,
“Deepfakes”: The Newest Way to Commit One of the Oldest Crimes, 3 GEO. L. TECH. REV.
339, 345–46 (2019). For one example among many of how advocates are responding to the
deepfake problem, see Prepare, Don’t Panic: Synthetic Media and Deepfakes, WITNESS
MEDIA LAB, https://bit.ly/43sz1Sj (last visited June 24, 2023).
3. This term is distinct from “Shallowfakes,” which is defined as “videos that have
been manipulated with basic editing tools or intentionally placed out of context.” HENRY
AJDER ET AL., THE STATE OF DEEPFAKES: LANDSCAPE, THREATS, AND IMPACT 11 (2019),
https://bit.ly/3BH6Se4. Both terms address low-technology edits, but that is where their
similarities end. We are not concerned with whether the user intends to deceive or whether
it is malicious. Our use of “shallow” is meant to both imply the surface level changes we
focus on along with their perceived unimportance.
4. See SARAH FRIER, NO FILTER: THE INSIDE STORY OF INSTAGRAM xxi (2020)
(“[B]ecause of filters that initially improved our subpar mobile photography, Instagram
started out as a place for enhanced images of people’s lives. Users began to accept, by
default, that everything they were seeing had been edited to look better.”); see also Amelia
Tait, How Instagram changed our world, THE GUARDIAN (May 3, 2020, 6:00 AM),
https://bit.ly/3Olir20 (“When Instagram launched, it offered filters that people could use to
make their photos—and by extension, their lives—look more appealing.”).
5. In some ways, Snapchat’s early success can be attributed to the playful filters that
allowed users, but especially young users, a way to be silly and rebel from the pressure to
conform to the kind of look one found on Instagram; eventually, though, that would change.
See FRIER, supra note 4, at 114, 179–207.
6. See id. at 113.
7. See Andrea Navarro, Snapchat’s “Pretty” Filters Allegedly Make You Whiter,
TEEN VOGUE (May 16, 2016), https://bit.ly/41R0Fqu (describing how filters like “flower
crown” do more than they first appear, including whitening and smoothing skin, thinning
the jawline, and more).
8. See Abby Ohlheiser, TikTok changed the shape of some people’s faces without
asking, MIT TECH. REV. (June 10, 2021), https://bit.ly/43aQSwn (describing users
discovering that TikTok applied beauty filters to users who had all filters turned off).
Additionally, Tristan Harris, the Executive Director of the Center for Humane Technology, states:
Unless the government acts, the competition between technology businesses’
never-ending interest in capturing human attention, will irreversibly dismantle
the information environment, accelerate polarization leading towards civil war,
degrade the mental health of a generation of children and teenagers, and break
down the basis for trust itself, leading to market collapse and near permanent
civil disorder.
Optimizing for Engagement: Understanding the Use of Persuasive Technology on Internet
Platforms: Hearing Before the Subcomm. on Commc’n, Tech., Innovation, and the
Internet, of the S. Comm. on Commerce, 116 CONG. 50, 58 (2019) (Statement of Tristan
Harris, Exec. Dir. Ctr. for Humane Tech.).
9. We are addressing here the advertising-driven social media platforms—typified by
Facebook, Instagram, YouTube, TikTok, and Snapchat. There are smaller platforms, like
many dating apps, that are not advertising-driven.
10. See TIM WU, ATTENTION MERCHANTS: THE EPIC SCRAMBLE TO GET INSIDE OUR
HEADS 5 (2016) (“Over the last century, . . . we have come to accept a very different way
of being, whereby nearly every bit of our lives is commercially exploited to the extent it
can be.”).
11. See Amanda Reaume, How Does Instagram Make Money for Facebook (Meta
Platforms), SEEKING ALPHA (Dec. 1, 2021, 9:00 AM), https://bit.ly/3pVgKhE.
12. See id. (describing the different ways that ads can be presented to users).
13. As one recent description explains:
Social media influencers are people with extensive social media followings who
share content on Instagram, TikTok, Twitter, Facebook, and other social media
applications . . . . Influencers receive money from brands to promote various
products to their followers. An influencer’s ability to earn money from
promotions correlates with their number of followers.
Stasia Skalbania, Comment, Advising 101 For the Growing Field of Social Media
Influencers, 97 WASH. L. REV. 667, 669–70 (2022) (identifying four categories of influencers based on their number of followers, which begin at "nano" and end at "mega" influencers).
14. See Influencer Ad Disclosure on Social Media: A Report Into Influencers’ Rate
of Compliance of Ad Disclosure on Instagram, ADVERTISING STANDARDS AUTHORITY
REPORT 4 (Mar. 18, 2021), https://bit.ly/43eeb8A.
15. See 93% of Top Celebrity Social Media Endorsements Violate FTC Guidelines,
MEDIAKIX (May 31, 2017), https://bit.ly/3Q9o4kF.
16. See Lili Levi, A “Faustian Pact”? Native Advertising and the Future of the Press,
57 ARIZ. L. REV. 647, 665 (2015) (arguing that the “entire raison d’être [of native
advertising] is precisely to disable consumers from being able to distinguish between
editorial content and commercial propaganda—to trick consumers and end-run ad
avoidance”); see also FRIER, supra note 4, at 138 (describing the power of the unlabeled
paid post on Instagram: “[s]ince consumers are much more likely to be swayed to buy
something if friends or family recommend it, as opposed to advertisements or product
reviews, these ambiguous paid posts were effective”).
17. Take Facebook’s stated goal: “to give people the power to build community and
bring the world closer together.” Andy Wu, The Facebook Trap, Technology and
Analytics, HARVARD BUSINESS REVIEW (Oct. 19, 2021), https://bit.ly/3oi3QK1. Of course,
that is not the only goal. Connecting users and increasing their engagement with each other
“directly drive advertising revenue, the predominant mode by which Facebook captures
value, i.e., monetizes the user base that otherwise uses Facebook for free.” Id.
18. See FRIER, supra note 4, at xxi (“Users began to accept, by default, that everything
they were seeing had been edited to look better. Reality didn’t matter as much as aspiration
and creativity.”).
19. See Sarah Fielding, 90% of Women Report Using a Filter on Their Photos,
VERYWELL MIND (Mar. 15, 2021), https://bit.ly/3MFN7Zx.
20. See Connie Loizos, The maker of popular selfie app FaceTune just landed $135
million at unicorn valuation, TECHCRUNCH (July 31, 2019, 7:00 AM),
https://bit.ly/3Co4CII.
21. See Alexandra J. Roberts, False Influencing, 109 GEO. L.J. 81, 84 (2020)
(describing how “[a]uthenticity lies at the core of the [influencer] advertising model” which
“creates an exceptionally fertile breeding ground for deception and consumer harm”); see
also Georgia Wells et al., Facebook Knows Instagram is Toxic for Teen Girls, Company
Documents Show, WALL ST. J. (Sept. 14, 2021, 7:59 AM), https://on.wsj.com/3MHDfQ7
(“The features that Instagram identifies as most harmful to teens [things like trying to live
a perfect life online, having a perfect body, only sharing one’s best moments] appear to be
at the platform’s core.”).
22. See Sofia P. Caldeira et al., Exploring the Politics of Gender Representation on
Instagram: Self-representations of Femininity, 5 DIGEST. J. DIVERSITY & GENDER STUD.
23, 25 (2018) (“[T]here is still a sense of optimism surrounding the political potential of
self-representation on apps such as Instagram.”).
23. See, e.g., JULIE E. COHEN, CONFIGURING THE NETWORKED SELF: LAW, CODE, AND
THE PLAY OF EVERYDAY PRACTICE 53–58 (2012) (arguing for “a renewed appreciation for
the play of everyday practice” and the centrality of the idea of “play” to mature conceptions
of the networked self and information law).
24. See generally HANY FARID, FAKE PHOTOS (2019) (describing a long history of
photographic manipulation, including by leaders like Stalin and Mussolini, who regularly
doctored photographs to achieve political ends).
25. See The Facebook Files, WALL ST. J. (2021), https://bit.ly/3PiJeg1 (last visited June
24, 2023) (collecting news reports from the fall of 2021 that rely on leaked internal
Facebook documents to describe that “platforms are riddled with flaws that cause harm,
often in ways only the company fully understands”); see Reed Albergotti, Frances Haugen
took thousands of Facebook documents: This is how she did it, WASH. POST (Oct. 26, 2021,
12:00 PM), https://bit.ly/3rIqGvS (“For nearly a month, Haugen has made headlines for
her decision to blow the whistle on Facebook, testifying in front of Congress, appearing on
“60 Minutes” and on the cover of Time Magazine. Her revelations have created a firestorm.
And Facebook is reportedly considering a name change.”).
26. See Rosalind Gill, Changing the perfect picture: Smartphones, social media and
appearance pressures, CITY UNIVERSITY OF LONDON 5 (2020), https://bit.ly/3q01Mr4.
27. See infra Sections III.A and III.B.
28. See Wells et al., supra note 21. There were counter-studies—the internal research
was not all negative—but there was enough information that was worrying. See Instagram
Press Release, What Our Research Really Says About Teen Well-Being and Instagram
(Sept. 26, 2021), https://bit.ly/41Tk9ef.
Internal company research found that Instagram made body image issues worse for "one in three teen girls."29 The pressure to conform to traditional norms can also
exclude nonbinary users.30
The social media ecosystem is not only deeply gendered, it is also
racialized. Reports of blackfishing are common, in which white users go
to extreme lengths, including darkening their skin, to appear Black.31
There are also simultaneous reports of whitewashing, in which filters
whiten the skin of non-white users and perpetuate various forms of digital
exclusion.32 With race, as with gender, digital tools appear to cheapen and
flatten user diversity.
Then there are epistemic and democratic concerns. What happens to
truth in a world where so much of everyday life is marked by constant
deception? And what are the implications for democracy and public
discourse? As people spend more time in online spaces where deception
is the norm, what happens to democratic deliberation? Blame for the
erosion of public trust and political polarization is often pinned on digital
echo chambers, foreign influence campaigns, or both.33 But we propose
that some share of the blame belongs to the fact that so much of everyday
life takes place in a space that is marked by sustained, but subtle,
deception.34
This Article is the first to fully consider the effects of these shallow
fakes. There is little scholarship on this type of deception. That, we
suspect, has to do with the fact that the problem is “shallow”—dealing
with surface-level aesthetics and thus seen as superficial and frivolous—
not to mention that much of the harm is felt by already-marginalized
communities. The closest analog is a small literature on influencer
marketing.35 This gap stands in stark contrast to deepfakes, where there
has been a considerable scholarly and policy response.36 In 2020, Congress
passed two bills—the U.S. National Defense Authorization Act
(“NDAA”) and the Identifying Outputs of Generative Adversarial
Networks (“IOGAN”) Act—with provisions aimed at addressing the
deepfake problem.37 Social media platforms have also responded. In the
last three years, Twitter,38 Facebook, TikTok, Snapchat, and YouTube
have updated their terms of service to explicitly address deepfakes.39
Deepfakes are undoubtedly a serious problem. But focusing solely on
deepfakes—which are now banned under most platforms’ inauthentic
content policies—provides the false sense that what remains is authentic.40
We aim to remedy the gap in scholarship and policy, which is
especially notable given that there are several ways the law could
the province of swoony liberal elites, but it does, in fact, blossom at both cultural poles.”).
A similar point has also been made in discussing the specific practice of “stealth
marketing,” in which Ellen Goodman argues that it “harms . . . by degrading public
discourse and undermining the public’s trust in mediated communication.” Ellen P.
Goodman, Stealth Marketing and Editorial Integrity, 85 TEX. L. REV. 83, 87 (2006).
35. See, e.g., Roberts, supra note 21; Skalbania, supra note 13, at 669–70; Annamarie
White Carty, Cancelled: Morality Clauses in Influencer Era, 26 LEWIS & CLARK L. REV.
565 (2022); Megan K. Bannigan & Beth Shane, Towards Truth in Influencing: Risks and
Rewards of Disclosing Influencer Marketing in the Fashion Industry, 64 N.Y.L. SCH. L.
REV. 247 (2019/2020).
36. See Chesney & Citron, supra note 2.
37. See William M. (Mac) Thornberry National Defense Authorization Act for Fiscal
Year 2021, Pub. L. No. 116-283, 134 Stat. 3388 (2021); Identifying Outputs of Generative Adversarial Networks (IOGAN) Act, Pub. L. No. 116-258, 134 Stat. 1150 (2020) (to be
codified in scattered sections of the U.S. Code). The 2021 NDAA was passed over the
President’s veto, while the President signed the IOGAN Act. See Scott Briscoe, U.S. Laws
Address Deepfakes, TODAY IN SECURITY (Jan. 12, 2021), https://bit.ly/3MnzjTj.
38. Although Twitter now goes by the name X, we will continue to refer to the firm
as Twitter given that this is the commonly used name for the firm’s service.
39. Most of these are outright bans on any manipulated content that could lead to
harm. For an example, see Vanessa Pappas, Combating misinformation and election
interference on TikTok, TIKTOK NEWSROOM (Aug. 5, 2020), https://bit.ly/3OskL7y (“Our
Community Guidelines prohibit misinformation that could cause harm to our community
or the larger public, including content that misleads people about elections or other civic
processes, content distributed by disinformation campaigns, and health misinformation.”).
But rather than taking down “synthetic and misleading media,” Twitter will often label
Tweets “to help people understand their authenticity and to provide additional context.”
Synthetic and manipulated media policy, TWITTER, https://bit.ly/42OrNrm (last visited
June 25, 2023).
40. See infra Section IV.C.
intervene.41 The most obvious and most pressing need is for more
transparency from the platforms.42 There is a growing demand for national
transparency legislation, and we explain how such legislation would
alleviate some of the concerns raised here. Because the platforms are
advertising networks, we also explain how existing rules promulgated by the FTC that prohibit deception in the marketplace can apply more broadly to
social media fakery.43 Finally, there are other measures, like industry
norms and multistakeholder initiatives, that could help. Indeed,
multistakeholder initiatives have had success with revising social media
policies in related areas, especially with regard to violent and extremist
content.44
The Article proceeds in four parts. Parts II and III are descriptive,
outlining the many ways in which platforms provide the tools for users to
engage in shallow fakery. Part II provides a taxonomy of different types
of shallow fakes, and Part III explains how platforms promote and
encourage them, regardless of user preference. In Part IV, the Article turns
normative, assessing the costs of shallow fakes, in addition to possible
benefits. Part V looks ahead to implications for regulators, scholars, and,
ultimately, users.
1. Superficial
Shallow fakes are superficial edits to observable characteristics. They
are meant to improve the user’s physical appearance. For this reason, they
are typically seen as innocuous. The platforms themselves describe this
kind of enhancement—the use of filters, lighting, and crops—as harmless.
Instagram’s policy for deceptive material, for example, only applies to
edits that are “beyond adjustments for clarity or quality,” which leaves
considerable room for image enhancement.48 Clarifying the platform’s
stance on deepfakes, one platform spokesperson explained that content is
regularly manipulated “often for benign reasons” and that this content is
considered authentic.49 The seemingly benign nature of this widespread
media manipulation is why Instagram can describe itself as “an authentic
and safe place for inspiration and expression.”50 The paradox of shallow
46. Significantly, we are not critiquing the users who deploy the techniques we
describe. In Part III, we argue that the reason there is so much fakery on platforms is the
result of deliberate choices by the social media platforms.
47. In addition to the “front regions” and “back regions” that Erving Goffman identifies as crucial to social performance, he also identifies “the outside.” GOFFMAN,
supra note 45, at 134–35. Online, there is no “back region” or “outside”—the audience is
only privy to the front regions, which raises a set of issues specific to social media.
48. Monika Bickert, Enforcing Against Manipulated Media, META (Jan. 6, 2020),
https://bit.ly/3qZ6Kob.
49. Id.
50. Community Guidelines, INSTAGRAM, https://bit.ly/3piTOsJ. The description of
Community Guidelines continues: “Remember to post authentic content, and don’t post
anything you’ve copied or collected from the Internet that you don’t have the right to post.”
fakes is that they are subtle and superficial, which makes them less
suspicious and, in turn, gives them enormous reach.51
2. Commonplace
Making changes to one’s appearance is the norm on today’s social
media platforms. A survey conducted in the United Kingdom found that
90% of women aged 18–30 reported using a filter before posting online
photos.52 The purpose of these filters is to physically alter one’s
appearance—“to even out skin tone, reshape [the] jaw or nose, shave off
weight, brighten or bronze skin, and whiten teeth.”53 Unsurprisingly, the
same survey found that women feel “bombarded” and “overwhelmed” by
the pressure to look as good as the other filtered images they see online.54
One person explained, “[I]t is everywhere, all the time, and social pressure
to look a certain way is very real.”55
An indication of just how commonplace digital image distortion has
become is the success of photo editing apps such as FaceTune,56 an
extraordinarily popular app which claims to “effortlessly enhance every
selfie.”57 Its effects are palpable: “FaceTuning your jawline [has become]
the Instagram equivalent of checking your eyeliner in the bathroom of the
bar.”58 Within a year of its release, FaceTune was the number-one
downloaded app in the “photo and video” category on Apple’s platform in
120 countries.59 By 2018, it had spent four years as the most-downloaded
paid app worldwide and across all of Apple’s app categories.60 Lightricks,
the company that makes FaceTune, is valued at $1 billion, having
recently raised $135 million in Series C funding.61 The app has been
downloaded 180 million times.62
3. Online
Another key component of our definition of shallow fakes is that they
occur online. Online deception is a different, and more difficult, problem
than offline deception for at least two reasons. First, online deception is
harder to identify. When one sees an airbrushed billboard of a celebrity or
a model, one understands at some level that the image is not a true
representation of a real person. Also, the billboard is clearly understood to
be an advertisement because all billboards are advertisements. Online, the
space between the “real” and the “fake” is much narrower. This is both
because of who is engaging in the deception and the context in which it
occurs. While comparing social media filters to airbrushing in magazines,
one survey respondent explained, “What’s different about social media is
these aren’t just celebrities and supermodels, these are people you know.
The feeling of ‘why isn’t that me’ becomes even stronger and more
significant.”63
Second, the context is less clearly defined, which means that casual,
online deception seeps into all aspects of one’s life. A magazine is a
highly stylized product, and it is something you can pick up and put down.
Even if you access a magazine digitally, it has some distance from your
everyday life and you know and expect it to be an idealized version of real
life. Online airbrushing, however, is something everyone—including your
friends, classmates, and colleagues—does all the time. In the aggregate,
we start to live in a world where it can be hard to separate fact from fiction.
4. The Self
Shallow fakes concern how individuals present themselves and their lives online in a deceptive way; they are not deception about others or about the world at large. This self-focused deception can take many forms, but the basic idea is that people, intentionally or not, curate the images they post in ways that distort their lives rather than merely capturing a moment in time. As one of the interviewees in a study
addressing online deception noted about the kinds of things people post to
social media, “[T]his isn’t a realistic perception of everyday life. Things
go wrong but we only want other people to see the perfect bits. You can
so easily make people think you lead this picture perfect life when for most
people this is not the case.”64
This is not gross deception. It is designed to be slight—shallow fakes
are meant to be small enough to be believable. This is why, for example,
a British study of teenage girls’ use of social media found that teens feel
pressure to post images that are perfect—which requires using filters—but
not so perfect that the images appear wholly unrealistic. As one subject
said, “I don’t want my pictures to look too fake . . . . I want it to look as
natural as possible, even though I’m wearing makeup. I want it to look like
I haven’t put a filter on.”65
This deception in the presentation of one’s self, or one’s life, is what
distinguishes shallow fakes from other forms of fakery, like fake news,
that have received so much attention. Indeed, one of the most widely
discussed frameworks for identifying and distinguishing different types of
fake news does not even mention the subtle deception in self-presentation
of the kind we describe here.66
1. The Filter
Perhaps the most widespread deception today is the use of the filter:
a tool that digitally enhances the appearance of a person or object. Filters
can be basic, like changing the light and color in a photo. But they can also be more sophisticated, increasingly relying on artificial intelligence to
change the physical appearance of an image. For example, it is common
for people to use filters that change their jawline, eye shape, skin tone, and
more.68 These beautification filters are effective primarily because they are
subtle. As one beauty blogger said, “If done properly, it should be hard to
tell you’ve used it.”69 They are both understated and ubiquitous.
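To make concrete what even a “basic” filter does, the following is a minimal, illustrative sketch of a simple beautification-style edit: it brightens an image, warms its color, and blurs fine skin texture. It is not drawn from any platform’s actual code; the library calls are standard Pillow operations, and the parameter values are arbitrary assumptions.

```python
# Illustrative only: a toy "beautification" filter built with the Pillow library.
# Real platform filters are far more sophisticated (often AI-driven), but even
# these basic steps visibly alter how a person appears in a photo.
from PIL import Image, ImageEnhance, ImageFilter

def basic_filter(path_in: str, path_out: str) -> None:
    img = Image.open(path_in)
    # Lift overall brightness slightly (1.0 = unchanged).
    img = ImageEnhance.Brightness(img).enhance(1.15)
    # Warm up color saturation a touch.
    img = ImageEnhance.Color(img).enhance(1.10)
    # Blur fine texture, a crude stand-in for "skin smoothing."
    img = img.filter(ImageFilter.GaussianBlur(radius=1.5))
    img.save(path_out)

# Hypothetical usage:
# basic_filter("selfie.jpg", "selfie_filtered.jpg")
```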
2. The Crop
Another way that online media can deceive is by allowing a user to
present an image decontextualized from any surrounding facts. What is cropped out of the frame is additional information, such as a broader context or another perspective. Cropping photos as a means of deception has a long history.70
Today, the cropped photo is standard fare on social media. Cropping can
be done to zoom in and make an image more visible, but it can also be
done to deceive, or to hide a broader setting. One classic example of the
way a crop can alter one’s surroundings is the sandbox masquerading as a
beach.71 On social media, it is also common to crop out ring lights, staging
equipment, and other tools used to create a highly constructed, yet
seemingly natural, photograph.72 This is the reason for the increasingly
popular “behind the scenes” shot, which exposes the “ridiculous reality”
behind many Instagram posts.73
3. The Mislabel
Content can be misleading not only because it is cropped, or taken
out of context, but also because it is mislabeled. For example, a filtered
image accompanied by the label “#nofilter” implies that the user applied no filter when in fact they may have. Researchers studying the use of the “#nofilter” label found that about 12% of images so labeled had a filter applied to them.74
75. See Deryn Strange et al., Photographs Cause False Memories for the News, 136
ACTA PSYCHOLOGICA 90, 90–94 (Jan. 2011) (reporting the results of an experiment that
found people were more likely to remember false news reports and with more confidence
if those reports were accompanied by photographs).
76. Yiyi Li & Ying Xie, Is a Picture Worth a Thousand Words? An Empirical Study
of Image Content and Social Media Engagement, 57 J. MKTG. RSCH. 1, 1 (Nov. 18, 2019),
https://bit.ly/45C427W.
77. See Matthew Johnston, How Does Facebook (Meta) Make Money?,
INVESTOPEDIA (Jan. 10, 2023), https://bit.ly/45u6ld7 (“Meta Platforms (META), the
company that owns Facebook, primarily makes money by selling advertising space on its
various social media platforms. Major competitors include Apple (AAPL), Alphabet
(GOOGL) Google and YouTube, Tencent Music Entertainment Group (TME), Amazon
(AMZN), and X Corp (formerly Twitter).”).
78. It is difficult to find data on this topic. Model and influencer Emily Ratajkowski
explains the terms of one such exchange the following way:
A large hotel conglomerate had just opened a new luxury resort in the Maldives.
The hotel cost $400 million to build . . . . The hotel group needed to generate
awareness, and having me visit and tag their account and the location was
valuable to them. For this kind of advertisement, I was able to make a shit ton of
money just by vacationing here for five days and posting the occasional picture.
EMILY RATAJKOWSKI, MY BODY 87 (2021).
79. See Influencer Ad Disclosure on Social Media, supra note 14, at 4.
across a three-week period and found that nearly 3,800 of those were paid
advertisements with no label.80
C. Distinguishing Deepfakes
Finally, distinguishing shallow fakes from the better-known—and
more regulated—concept of deepfakes is analytically useful. The policy
responses to deepfakes are also important to briefly consider in describing
the phenomenon, given how they exacerbate the lack of attention paid to
shallow fakes and normalize the presence of shallow fakes online.
The core definition of a deepfake is that it uses artificial intelligence
to create images that trick the eye—the kind of computer-generated
images that once were available only to a well-resourced movie studio.81
Shallow fakes, however, do not turn in any meaningful way on the use of
high technology.82 As our examples above show, consumers of online
content can be misled by low-tech manipulations, like showing only part
of an interaction, either from one angle or from one perspective.83
Also, deepfakes are described as something that is done to images of
another—not to oneself. The illustrations provided by Bobby Chesney and
Danielle Citron in their foundational article on deepfakes generally feature
a sophisticated actor using digital tools to manipulate an image of someone
else.84 Shallow fakery, on the other hand, is something we do to ourselves.
Where deepfake scholars are worried about “the creation of realistic
impersonations out of digital whole cloth,”85 shallow fakes are people
merely impersonating better versions of themselves. The problem then is
not that someone else might be falsely portrayed as endorsing a product,
service, idea, or politician;86 it is that people are presenting false versions of themselves every day.
The discussion on deepfakes further frames the problem of online
deception around bad actors with an intent to deceive. But much of online
80. See id. at 3; see also Keisha Phippen, Are You Influencing Responsibly?, NAT’L
L. REV. (Apr. 7, 2021), https://bit.ly/43v3oXT (reporting the results of a study by the
British Advertising Standards Authority).
81. See Chesney & Citron, supra note 2, at 1763 (“As the volume and sophistication
of publicly available deep-fake research and services increase, user-friendly tools will be
developed and propagated online, allowing diffusion to reach beyond experts.”).
82. Although they can involve more sophisticated technology. See discussion infra
Part III.
83. One well-documented problem is when someone posts police body camera
footage that is taken out of context. See Emmeline Taylor & Murray Lee, The Camera
Never Lies?: Police Body-Worn Cameras and Operational Discretion, in POLICE
VISIBILITY: PRIVACY, SURVEILLANCE, AND THE FALSE PROMISE OF BODY-WORN CAMERAS
80–95 (Bryce Clayton Newell ed., 2021) (describing how police-worn camera video clips
are often taken out of context).
84. See Chesney & Citron, supra note 2, at 1776.
85. Id. at 1758.
86. See id. at 1774.
90. Tristan Harris used the term “arms race” to describe the pressure the platforms
experience to “beautify” their users. See ELISE HU, FLAWLESS: LESSONS IN LOOKS AND
CULTURE FROM THE K-BEAUTY CAPITAL 132 (2023) (quoting Tristan Harris).
91. In chronicling the rise of Instagram, Sarah Frier notes how it affected everyone
on the platform, becoming “a tool for crafting and capitalizing on a public image, not just
for famous figures but for everybody.” FRIER, supra note 4, at 128.
92. Id. at 233.
93. Id. at 173.
94. Gill, supra note 26, at 30.
95. FRIER, supra note 4, at 114.
96. See id. (explaining that “they would often delete pictures if they didn’t get 11
likes,” which “was the number of likes that would turn a list of names below an Instagram
post into a number—a space-conserving design that had turned into a popularity tipping
point for young people”). In a hall-of-mirrors sort of way, teens would have separate
accounts called “finstas,” short for “fake Instagram” accounts where they could post
images that were more realistic and unedited. Id. at 182–83. These accounts were, for the
most part, private. Id. This is also the reason for the widespread meme known as “feeling cute, might delete later.” See Feeling Cute, Might Delete Later, KNOW YOUR MEME,
https://bit.ly/3sRUHda (last visited Sept. 12, 2023).
97. See, e.g., FRIER, supra note 4, at 278–79 (“Instagram isn’t designed to be a neutral
technology, like electricity or computer code. It’s an intentionally crafted experience, with
an impact on its users that is not inevitable, but is the product of a series of choices by its
makers about how to shape behavior.”).
108. See Paresh Dave, Facebook buys Masquerade, app company that competes with
Snapchat’s lenses, L.A. TIMES (Mar. 9, 2016, 10:24 AM), https://bit.ly/43mDaH4.
109. See Loizos, supra note 20.
110. See Ohlheiser, supra note 8.
111. See Jamie Luguri & Lior Jacob Strahilevitz, Shining a Light on Dark Patterns, 13 J. LEGAL ANALYSIS 43 (2021) (reporting the results of studies that illustrate how dark
patterns work).
112. See Clodagh O’Brien, How Do Social Media Algorithms Work?, DIGIT. MKT.
INST. (Jan. 19, 2022), https://bit.ly/3BRGXAa (“Algorithms are used on social media to
sort content in a user’s feed. With so much content available, it’s a way for social networks
to prioritize content they think a user will like based on a number of factors.”).
113. Aaron Smith et al., Many Turn to YouTube for Children’s Content, News, How-
To Lessons, PEW RESEARCH CENTER (Nov. 7, 2018), https://bit.ly/3MBz2vU. One of the
findings shows that 28% of the “unique videos” recommended to users “were
recommended more than once over the study period, suggesting that the recommendation
algorithm points viewers to a consistent set of videos with some regularity. In fact, a small
number of these videos (134 in total) were recommended more than 100 times.” Id.
Similarly, “regardless of whether the initial video was chosen based on date posted, view
count, relevance or user rating,” the YouTube algorithm “consistently suggested more
popular videos.” Id.
114. See Ben Smith, How TikTok Reads Your Mind, N.Y. TIMES (Dec. 5, 2021),
https://bit.ly/3qcm5RC.
115. See id. (“[T]he app is shockingly good at reading your preferences and steering
you to one of its many ‘sides,’ whether you’re interested in socialism or Excel tips or sex,
conservative politics or a specific celebrity. It’s astonishingly good at revealing people’s
desires even to themselves.”).
116. See Joe Pinsker, The Hidden Economics of Porn, THE ATLANTIC (Apr. 4, 2016),
https://bit.ly/3OzykSD.
117. Id. Shira Tarrant, author of THE PORNOGRAPHY INDUSTRY, elaborates: “If you
are interested in something like double oral, and you put that into a browser, you’re going
to get two women giving one guy a blowjob . . . you’re not likely to get two men or two
people giving a woman oral sex.” Id.
118. Id.; see also AMIA SRINIVASAN, THE RIGHT TO SEX 67–68 (2021) (describing
how “free online porn doesn’t just reflect preexisting sexual tastes” given the ways that
companies use algorithms).
the community standards policy, but not the fake accounts or hate speech
policy, and vice versa. There is no consistent definition for what counts as
“misleading.”
Then there is the one-off narrowing of the type of deception that is
targeted. Some deception, for example, is banned only when it is
monetized, and only when it is monetized in certain ways. These rules
further confuse the kind of deception that is prohibited. For example,
Facebook openly acknowledges that “[a] lot of the misinformation that
spreads on Facebook is financially motivated.”124 To that end, the firm’s
Instagram Content Monetization Policies do not allow “[m]isinformation,”
which they define as “content that has been rated false by a third-party fact
checker.”125 But that kind of information is implicitly allowed as long as it is not “monetized.”126 Moreover, this presumably means that the user cannot monetize the misleading content, but Facebook can: the firm can still drive engagement (and therefore advertising sales) on its platform by allowing, and even algorithmically recommending, deceptive content.127
Unsurprisingly, then, enforcement is inconsistent and unpredictable.
The sweeping scope of these policies means that platforms have a great
deal of discretion in deciding how to implement them. Indeed, platforms
have a record of selectively intervening to manage content they deem
deceptive.128 The mechanisms the platforms rely on for identifying
misinformation are partly at fault. Identifying misinformation on Instagram depends largely on reports from users and verification by a “third-party fact checker,”129 which limits enforcement of the misleading and
deceptive content rules to a tiny range of deception on the platform. The
majority of content will therefore not be flagged as deceptive—and,
ironically, the more deceptive it is, the less likely it is to be flagged, given
that the deception will be well-hidden. Of the content that is flagged,
action will be taken only if and when such content is reviewed by a third
party. Content moderation is thus limited to what can be verified as
130. This problem pervades the wellness industry, the success of which has been
buoyed by online platforms. See discussion infra Section V.C.1. Goop, a company founded
by Gwyneth Paltrow, broke off its relationship with Condé Nast given the latter’s
requirement to separate ad content from informative content. See Taffy Brodesser-Akner,
How Goop’s Haters Made Gwyneth Paltrow’s Company Worth $250 Million, N.Y. TIMES
MAGAZINE (July 25, 2018), https://bit.ly/46psZ70 (explaining that “they weren’t allowed
to use the magazine as part of their ‘contextual commerce’ strategy” even though Goop
“wanted to be able to sell Goop products (in addition to other products, just as they do on their site)” and treat the “magazine customer [as] also a regular customer” of Goop).
131. See Roberts, supra note 21, at 120.
132. Of course, an influencer might be invited to promote a product they already used
and loved; but failing to disclose the payment for promoting a product is still deceptive.
133. Bickert, supra note 48.
134. The academic conversation around deepfakes similarly asserts that shallow
fakes are harmless and widespread. For example, Chesney and Citron define the problem
posed by deepfakes by distinguishing them from the general deception that takes place
online all the time: “[i]nnocuous doctoring of images—such as tweaks to lighting or the
application of a filter to improve image quality—is ubiquitous.” Chesney & Citron, supra
note 2, at 1759.
135. See Eric Abent, Adobe expands Content Authenticity Initiative tools to fight
misinformation, SLASHGEAR (Oct. 26, 2021, 8:12 AM), https://bit.ly/46eL7jE; see also You
decide what content to trust, VERIFY, https://bit.ly/3r0ouzm (last visited Sept. 12, 2023).
136. See Bella M. DePaulo et al., Lying in Everyday Life, 70 J. PERSONALITY AND
SOC. PSYCH. 979, 991 (1996) (“Participants in the community study, on the average, told a
lie every day; participants in the college student study told two.”).
137. See GOFFMAN, supra note 45, at 35 (discussing “the tendency for performers to
offer their observers an impression that is idealized in several different ways”).
138. See, e.g., Naramol (Jaja) Pipoppinyo, Queer Identity Online: The Importance of
TikTok and Other Media Platforms, MEDIUM (Dec. 1, 2020), https://bit.ly/42XhjoW
(describing “Gay TikTok” and its repudiation of “the mainstream, allowing [users] to
embrace what it truly means to be queer”). But see Daniel Kershaw, LGBTQ and Online
Identities, MEDIUM (Dec. 19, 2013), https://bit.ly/3NMan9N (noting the ways that gender
play arises in online communities but also finding that “the community sometimes finds
this form of gender [play] deceptive” and having a “fluid identity” can become overly
idealistic).
the problem, but it is also a reason why shallow fakes are so persistent
and problematic.
A. Gendered Harms
There is a distinctly gendered nature to the harms produced by
everyday deception online. When Frances Haugen, the Facebook
whistleblower, testified before Congress in October of 2021, she revealed
that the firm doggedly pursued teenage users, particularly girls, despite the
fact that the firm’s own research suggested a range of harms for those
users.139 As the Wall Street Journal explained in describing the firm’s
internal research, the firm repeatedly found “that Instagram is harmful for
a sizable percentage of [young users], most notably teenage girls.”140
But the range of gendered harms is broader and goes much deeper.
Shallow fakes are harmful to all marginalized genders. The political
economy of seeking clicks, engagement, and an online audience reinforces
traditional gender norms and excludes those who do not fit into the narrow
binary presented. This is a complicated critique to undertake given that
female influencers have gained particular traction in these online spaces,
often by capitalizing on traditionally feminine roles, and have succeeded
in monetizing activities that are otherwise excluded from the market.141
The harms we identify are not the commercialization of previously
“intimate” spaces. Rather, the harms follow from how these spaces get
1. Body Dysmorphia
People who regularly spend time on social media sites have higher
rates of body dysmorphia, which is defined as “a mental health condition
in which you can’t stop thinking about one or more perceived defects or
flaws in your appearance.”142 As one researcher explained, “Smartphones,
together with the cosmetics industry, are producing significant shifts in
young women’s visual literacies of the body—particularly the face—such
that they quite literally see themselves and others differently from previous
generations.”143 Facebook’s research notes that this is especially true for
the kinds of filtered images that are shared on Instagram, where it found
that “[s]haring or viewing filtered selfies in stories made people feel
worse.”144 These findings are not a niche concern, as Facebook’s own research reveals that Instagram makes “body image issues worse for one in three teen girls.”145
Contributing to the problem is simply the physical reality of taking a
selfie. That is, “the angle and close distance at which selfies are taken may
distort facial features and lead to dissatisfaction.”146 But it is made worse
by online tools that allow people to manipulate their image with endless
tweaks and filters. For example, one study found that adolescent girls who
spent more time manipulating their photos reported higher levels of body
dysmorphia than those who spent less time doing so.147 This is merely a
correlation; it is possible that the causal story is that girls with higher rates
of body dysmorphia are likely to spend more time filtering their photos.148
That would still be problematic, though, because the filtering tools seem
to create a feedback loop for a slice of the population that is already
susceptible to body dysmorphia.149
There is enough additional evidence to support a credible claim that
the very existence of the filtering applications contributes to the incidence
of dissatisfaction with one’s body. Another study found that 32% of
teenage girls said that when they felt bad about their bodies, Instagram
made them feel worse.150 In the words of one study participant, the feeling
of seeing highly edited images of other women on Instagram is “like stick
thin women with the most amazing butt and the most amazing long hair,
and I’m just like, this isn’t me, and why am I constantly seeing this? And
it does make you feel abnormal, sometimes, and you are normal.”151
Another said, “[W]hen it comes to my skin, I know in my head that is
normal. But when you see the content, it’s like, it does make you feel
almost abnormal because it’s showing you that it shouldn’t be that way.”152
The prevalence of shallow fakery redefines what is considered normal.
3. Pressure to Sexualize
There is intense pressure, even for very young users, to present
themselves as sexual beings. A majority of women surveyed about social
media representations mentioned sexualization without being prompted.156
Health Practices, Social Media Use, and Mental Well-Being Among Teens and Young
Adults in the U.S., PROVIDENCE ST. JOSEPH HEALTH DIGIT. COMMONS 15 (2018), but this
does not minimize the harms caused by the shallow fakes found online.
149. See generally Gill, supra note 26.
150. See Wells et al., supra note 21.
151. Gill, supra note 26, at 19.
152. Id.
153. See Wells et al., supra note 21.
154. Id.
155. Gill, supra note 26, at 19.
156. See id., at 42.
Online photographs are being edited not just to beautify but to sexualize.
Instagram is, in the words of one study, “a visually centered social media that
involves the presence of sexualized imagery,” which, in turn, has a
negative impact on mental health.157 Seventy-five percent of all young
women in a recent survey said that they feel pressure to receive “likes” on
their social media posts.158 Another study found that “sexualized photos
garnered more likes on Instagram,” suggesting that teens feel pressure to
post sexualized versions of themselves.159 This leads to lower self-esteem and a worse body image.160 Such findings are widespread and are
consistent with longstanding concerns about sexualization in traditional
media and self-esteem problems, especially in younger people.161
To be clear, the harm we identify is the pressure to conform to the
images presented online, a pressure which functions as a one-way ratchet.
Some individuals might feel empowered to present sexualized images of
themselves and therefore not experience it as a “harm” at all.162 Our
concern is not with these users but with the baseline the platforms create.
All users are pushed to participate in this dynamic, which makes it less of
a choice and more like the price of admission.163 These platforms
157. Francesca Guizzo et al., Instagram Sexualization: When Posts Make You Feel
Dissatisfied and Wanting to Change Your Body, 39 BODY IMAGE 62, 62 (Dec. 2021).
158. Gill, supra note 26, at 24.
159. Laura Ramsey & Amber L. Horan, Picture This: Women’s Self-Sexualization in
Photos on Social Media, 133 PERSONALITY AND INDIVIDUAL DIFFERENCES 85, 85 (2018);
see also Kun Yan et al., A Sexy Post a Day Brings the “Likes” Your Way: A Content
Analytic Investigation of Sexualization in Fraternity Instagram Posts, 26 SEXUALITY &
CULTURE 685, 685 (2022) (finding a positive association between the degree of
sexualization in a post and the traffic and likes it received).
160. See id.
161. See Marika Skowronski et al., Predicting Adolescents’ Self-Objectification from
Sexualized Video Game and Instagram Use: A Longitudinal Study, 84 SEX ROLES 584, 585
(2021) (reporting the results of a longitudinal study involving 660 German adolescents and
concluding that “sexualization in video games and on Instagram can play an important role
in increasing body image concerns among adolescents”); see also Thomas Plieger et al.,
The Association Between Sexism, Self-Sexualization, and the Evaluation of Sexy Photos on
Instagram, 12 FRONTIERS IN PSYCH. 1, 1 (Aug. 2021) (reporting results of a survey of 916
participants and finding that “there were substantial correlations between appropriateness
and attractiveness evaluations of the presented photos and the self-sexualizing posting
behavior and enjoyment of sexualization of female users”).
162. See, e.g., Emily Ratajkowski, Emily Ratajkowski Explores What It Means to Be
Hyper Feminine, HARPER’S BAZAAR (Aug. 8, 2019), https://bit.ly/3WEQMLw.
Ratajkowski explains that:
Despite the countless experiences I’ve had in which I was made to feel extremely
ashamed and, at times, even gross for playing with sexiness, it felt good to play
with my feminine side then, and it still does now. I like feeling sexy in the way
that makes me, personally, feel sexy. Period.
Id.
163. One response to this harm, as to all the harms we identify, is to remove oneself
from social media entirely. While that might be a solution for some individuals, that is not
directly responsive to the harms caused by social media platforms—those harms are what
our piece seeks to expose, with the aim of beginning a conversation on how to remedy
them.
164. See, e.g., RATAJKOWSKI, supra note 78, at 5. Ratajkowski writes:
In many ways, I have been undeniably rewarded by capitalizing on my sexuality.
I became internationally recognizable, amassed an audience of millions, and
have made more money through endorsements and fashion campaigns than my
parents (an English professor and a painting teacher) ever dreamed of earning in
their lifetimes. I built a platform by sharing images of myself and my body
online, making my body and subsequently my name recognizable, which, at least
in part, gave me the ability to publish this book. But in other, less overt ways,
I’ve felt objectified and limited by my position in the world as a so-called sex
symbol . . . Whatever influence and status I’ve gained were only granted to me
because I appealed to men.
Id.
165. Others, however, do. For a recent review of different feminist takes—negative
and positive—on porn and its meteoric rise online, see SRINIVASAN, supra note 118, at 33–
71.
166. See, e.g., Emily Witt, A Hookup App for the Emotionally Mature, THE NEW
YORKER (July 11, 2022), https://bit.ly/3C2Y8Pt (describing the author’s experience on
Feeld, a dating app that “is popular with nonbinary and trans people, married couples trying
to spice up their sex lives, hard-core B.D.S.M. enthusiasts, and ‘digisexuals,’ who prefer
their erotic contact with others mediated by a screen”).
167. See Tolentino, supra note 58. Tolentino describes the “Instagram Face”:
It’s a young face, of course, with poreless skin and plump, high cheekbones. It
has catlike eyes and long, cartoonish lashes; it has a small, neat nose and full,
lush lips . . . . The face is distinctly white but ambiguously ethnic—it suggests a
National Geographic composite illustrating what Americans will look like in
2050 . . . .
Id.
168. American Academy of Facial Plastic Surgery, AAFPRS Announces Annual
Survey Results: Demand for Facial Plastic Surgery Skyrockets As Pandemic Drags On,
PR NEWSWIRE (Feb. 10, 2022), https://bit.ly/4518R9L [hereinafter AAFPRS Survey].
on apps are designed to identify a female face and make it more slender,
while they make a face identified as male broader.188 These same filters
try to make women’s bodies thinner and men’s more muscular.
There are counterexamples, to be sure. A recent report described how
Instagram can be a “lifeline” for nonbinary people “struggling to find
others just like them.”189 The same report noted, however, that the space
is filled with risks for nonbinary people, who post pictures of themselves
only to be criticized and bullied for not conforming to gender
stereotypes.190 One five-year study of queer youth of color found that
social media platforms like Facebook were dangerously heteronormative
and created spaces of “default publicness,” resulting in offline harms like
being disowned by one’s family.191
The marketplace for clicks and sponsored content pushes individuals
into more traditional roles if they want to succeed. Content online exists
not only in a market for likes but in an actual market where platforms are
seeking payment, and individual users are seeking endorsements. Many
brands use algorithms to help identify which individuals they should
contact to sponsor their products online. Studies of these algorithms reveal
how certain terms and activities are categorized in ways that marginalize
already-marginalized users. For example, Peg, a UK-based tool that
enables brands to identify possible marketers, codes the use of the term
“queer” as profanity, making individuals who employ that term less
attractive to brand partnerships and thereby excluding them from the
market and from the ability to influence.192 Similarly, YouTube creators
who identify as LGBTQ+ have a difficult time monetizing their content
given that such content is often age-restricted and marked as “not being
‘advertiser and family friendly.’”193
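To illustrate the kind of mechanism these studies describe, consider the following minimal, purely hypothetical sketch of a keyword-based “brand safety” scorer. The blocklist, function names, and scoring rule are our own assumptions for illustration; they are not drawn from Peg, YouTube, or any actual tool.

```python
# Hypothetical sketch of a naive keyword-based "brand safety" scorer.
# It shows how a crude blocklist that treats an identity term ("queer")
# as profanity would systematically downrank the creators who use it.
BLOCKLIST = {"damn", "hell", "queer"}  # assumed, illustrative terms only

def brand_safety_score(posts: list[str]) -> float:
    """Return the share of posts containing no blocklisted term (1.0 = 'safest')."""
    if not posts:
        return 1.0
    flagged = sum(any(term in post.lower() for term in BLOCKLIST) for post in posts)
    return 1.0 - flagged / len(posts)

# A creator who self-describes as queer scores lower than one who does not,
# even if the content is otherwise identical.
print(brand_safety_score(["proud queer artist", "new video up!"]))  # 0.5
print(brand_safety_score(["proud artist", "new video up!"]))        # 1.0
```

A real influencer-vetting tool is far more elaborate, but the basic dynamic is the same: whatever lands on the blocklist is scored as a liability, regardless of context.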
188. See Sage Anderson, Snapchat’s ‘gender-swap’ filter exposes the internet’s
casual transphobia, MASHABLE (May 16, 2019), http://bitly.ws/F59W.
189. Wortham, supra note 30.
190. See id. (interviewing one user who “doesn’t identify as any gender” who spoke
to the possibilities that social media creates of rendering gender nonconforming lives
visible and of challenging “mainstream perceptions of gender,” while also noting the
violence such individuals face, explaining, “I still receive daily hate mail from people of
all genders telling me that my body hair is ugly & that I need to shave to be more ‘real’ &
‘beautiful’”) (internal quotation marks omitted).
191. See Alexander Cho, Default Publicness: Queer Youth of Color, Social Media,
and Being Outed By The Machine, 1 NEW MEDIA & SOC. 3183, 3184, 3187 (2017)
(contrasting Facebook to Tumblr, and showing that queer young people preferred Tumblr,
which has less of a default public character).
192. See Sophie Bishop, Influencer Management Tools: Algorithmic Cultures, Brand
Safety, and Bias, 7 SOCIAL MEDIA & SOCIETY 1, 8 (March 30, 2021), http://bitly.ws/F5ar
(explaining “[w]hile queer does have roots as a homophobic slur, it is a term widely used
in activism and LGBTQ+ communities, in addition to within deconstructivist theory to
recognize that sexualities are ‘unstable, fluid, and constructed’”) (citation omitted).
193. Zoë Glatt & Sarah Banet-Weiser, Productive Ambivalence, Economies of
Visibility and the Political Potential of Feminist YouTubers, ASSOCIATION OF INTERNET
RESEARCHERS (AOIR) VIRTUAL CONFERENCE (2020); see also Ari Ezra Waldman,
Disorderly Content, 97 WASH. L. REV. 907, 910–11 (2022) (arguing that content
moderation maintains and reifies “social media as ‘straight spaces’ that are hostile to queer,
nonnormative expression”).
194. “The death of the mom blog has something to do with shifts in how people
consume and create on the Internet. Blogging on the whole has fizzled as audiences and
writers have moved to other platforms.” Sarah Pulliam Bailey, What ever happened to the
mommy blog?, CHICAGO TRIBUNE (Jan. 29, 2018), https://bit.ly/45duS5x. Those platforms
include Instagram, which “is built for beauty (its filters make your life look better), not for
rawness.” Id.
195. Id. (“The shift to shorter posts and an emphasis on likes and hearts has changed
the tone and content of what moms find online: more pictures, fewer words, less grit.”).
196. Kelly D. Harding et al., #sendwine: An Analysis of Motherhood, Alcohol Use
and #winemom Culture on Instagram, 15 SUBSTANCE ABUSE: RESEARCH AND TREATMENT
1, 4–5 (2021).
197. Pamela Tick (@pamelatick), INSTAGRAM (Dec. 17, 2021), http://bitly.ws/F5fN.
198. See id.
199. Id.
B. Racialized Harms
As the discussion of gender makes clear, users construct their images
online in ways that implicate race. In this section, we identify further racial
harms that include blackfishing, whitewashing, and other forms of
appropriation and exclusion. To a certain extent, of course, this reflects the
interests of users—but the platforms enable such practices by offering
specific filters that amplify and propagate each of the harms we identify.
207. See id. at 7 (discussing how “wine mom culture” on social media leads to “a
continued reproduction of white, middle-upper class, neoliberal values”).
208. Id. The study explains: “It is notable that only one image corresponded to a
visible woman of colour, with all other images aligning with predominantly white women
who present themselves as being affluent.” Id.
209. See Stevens, supra note 31, at 1 (defining blackfishing as “a practice in which
cultural and economic agents appropriate Black culture and urban aesthetics in an effort to
capitalize on Black markets”). For a nuanced take on how race can be considered mutable,
see Deepa Das Acevedo, (Im)mutable Race?, 116 NW. U. L. REV. ONLINE 88 (2021).
210. Faith Karimi, What ‘Blackfishing’ means and why people do it, CNN (July 8,
2021, 8:37 AM), https://bit.ly/3MlOfRI.
211. Zawn Villines, What to know about blackfishing, MED. NEWS TODAY (Nov. 9,
2021), https://bit.ly/3OA5d1p.
also apparently successful: three of the top ten Instagram earners are
routinely accused of blackfishing.212
Examples abound. Emma Hallberg is a Swedish model and
influencer. She has over half a million followers on Instagram and she is
known for her makeup tutorials, which showcase her skin’s bronzed
glow.213 Most people assumed that Emma was Black, but, it turns out,
she identifies as white. As she told BuzzFeed, “I do not see myself as
anything else than white,” and “I get a deep tan naturally from the sun.”214
Emma was accused, along with other white Instagram models, of
“adopting what some have called digital blackface, altering their
appearance with makeup and using Afrocentric hairstyles.”215 The purpose
in doing so is “to build their personal brand and secure lucrative brand
endorsements”—which, for many users, is the reason for being on social
media.216
Appropriation can take place not only in terms of physical appearance
but also in terms of activity. Addison Rae, a famous TikToker, has
performed and popularized a series of dances authored by Black creators,
without always giving them credit.217 The scale of blackfishing on TikTok
became significant enough that in 2021, Black TikTok creators staged a
strike.218
Blackfishing is, in many ways, only the latest chapter in a very “old
story about white people profiting off of black aesthetics to project a sense
212. The top-earners are Kim Kardashian, Ariana Grande, and Kylie Jenner. See
Donna Tang, How Much Do Instagram Influencers Make, CREDITDONKEY, (Apr. 12,
2022), https://bit.ly/3q4NVPW; Stevens, supra note 209, at 1 (listing Kim Kardashian and
Ariana Grande as being accused of blackfishing); Ryan Schocket, Kylie Jenner is Being
Accused of Blackfishing And the Twitter Reactions Say It All, BUZZFEED (Oct. 23, 2021),
https://bit.ly/45jYVsF.
213. See Tanya Chen, A White Teen Is Denying She Is “Posing” As A Black Woman
On Instagram After Followers Said They Felt Duped, BUZZFEED NEWS (Nov. 13, 2018,
5:05 PM), https://bit.ly/3Wt26dN.
214. Id.
215. Stevens, supra note 31, at 1.
216. Id.
217. Addison Rae received backlash for performing dances on Jimmy Fallon and not
giving credit to the choreographers. She does not claim otherwise and lists the creators on
her YouTube channel. Describing them, she has said, “They’re all so talented and I
definitely don’t do them justice.” Joe Price, Addison Rae Under Fire for Not Crediting
Black TikTok Creators While Performing Challenges on ‘Fallon’ (Update), COMPLEX
(Mar. 29, 2021), https://bit.ly/3pWoqjy.
218. See Sharon Pruitt-Young, Black TikTok Creators are on Strike to Protest a Lack
of Credit For Their Work, NAT. PUB. RADIO (July 1, 2021, 11:00 PM),
https://bit.ly/43cJQr8. TikTok claims it is taking steps to reduce the ability to engage in
cultural appropriation, but Black users say there has been little change. See Vanessa Pappas
& Kudzi Chikumbu, A message to our Black community, TIKTOK (June 1, 2020),
https://bit.ly/45teQoA; Kalhan Rosenblatt, Months after TikTok apologized to Black
creators, many say little has changed, NBC NEWS (Feb. 9, 2021, 5:11 AM),
https://bit.ly/3Ixcmf6.
219. Spencer Kornhaber, How Ariana Grande Fell Off the Cultural-Appropriation
Tightrope, ATLANTIC (Jan. 23, 2019), https://bit.ly/43kPG9R.
220. Stevens, supra note 31, at 6.
221. Charlie Duffield, Instagram ‘choco skin’ filter is form of brownface, says
teacher and activist, EVENING STANDARD (Sept. 1, 2020), https://bit.ly/43gV5iR.
222. Sarah Lee, Instagram Filters: ‘Our skin is for life, not for likes’, BBC NEWS
(Oct. 19, 2020), https://bit.ly/3oG4Yav.
223. See Robinson Meyer, The Repeated Racism of Snapchat, ATLANTIC (Aug. 13,
2016), https://bit.ly/3WMsT4P; see also Sam Levin, Snapchat faces backlash over filters
that promotes racist stereotypes of Asians, THE GUARDIAN (Aug. 10, 2016, 2:04 PM),
https://bit.ly/43jmdxp.
224. Rihanna’s decision to wear Chinese couture at the 2015 Met Gala is a good
example; she used her platform to respectfully honor Chinese designs, and she gave credit
to those designers. See Jenni Avins & Quartz, The Dos and Don’ts of Cultural
Appropriation, THE ATLANTIC (Oct. 20, 2015), https://bit.ly/45F0kdE. The setting was also
relevant: it was a costume ball at the Met Museum, and she chose a designer whose work
was on display at the Museum. See id.
filter which made darker skin tones appear lighter.225 The app’s filter did
not promise to make users whiter, but rather to make them “hotter,” which,
according to the platform, meant having whiter skin.226 The same is true
for Snapchat’s “flower crown” filter, which one would expect to merely
add a flower crown to images but which also whitens one’s skin tone
considerably.227 Similarly, Instagram’s “Attraction” filter—which has
been used in over 143,000 videos—pushes people towards European
standards of beauty.228
Whitewashing and blackfishing are not as paradoxical as they might
initially appear. The problem with blackfishing is that it allows individuals
to selectively choose which aspects of Black culture to claim, in a world
that still holds mainstream white beauty as the norm.229 Indeed, the
baseline, “unfiltered” look is to whiten and lighten. The cumulative effect
of these filters is that they regularly exclude people of color.230 For
example, the “Glow” filter on TikTok, which is designed to make a face
look more beautiful, simply does not work on some people of color.231 As
one TikTok creator put it, “my first reaction was like, ‘Oh, great, another
one of those beauty filters that changes our features to make us cater to the
European so-called beauty standards.’”232 The Glow filter has been used on
over 3 million TikTok videos.233 As one Myanmarese TikTok user
explained, “You have to be a white woman. You have to have darker skin
almost, but in the, bronze-y, ‘white woman with a tan’ way rather than
like, actually working for people with different skin tones.”234 Even though
the user base is diverse, the tools offered by the platforms are not.235
C. Democratic Harms
We have not meant to imply in our assessments of the harms that
users are naïve and unsuspecting victims. In fact, because everything is
being faked all the time, many of us view and engage in the online sphere
with a certain cynicism. “That can’t be real,” or “they’re just trying to sell
something,” are typical everyday reactions to a medium in which everyone
is searching for clicks and scrolls. This reaction is heightened given that
all of these interactions take place against a background of
commercialization.236 Knowing that “goods”—a product, a lifestyle, a
better version of one’s self—are constantly being peddled on social media
means that people react skeptically to posts. The corollary worry about
widespread deception is that no one trusts anyone about anything.
This skepticism manifests itself in a number of important ways that
threaten a healthy, functioning democratic society. We identify two: the
erosion of expertise and the inability to engage in informed public
discourse. While others have raised these concerns in the context of fake
news, their relationship with shallow fakes has not been considered. We
do that here. In addressing the erosion of expertise, we consider the rise of
the wellness industry—which has truly taken off on social media
platforms—as a case study. In considering the impoverishment of our
ability to engage in productive dialogue, we examine the role that constant,
casual, and unchecked deception plays on the internet.
238. See Brian Kennedy et al., Americans’ Trust in Scientists, Other Groups
Declines, PEW RSCH. CTR. (Feb. 15, 2022), https://bit.ly/3C8W1tG.
239. See Dominik Andrzej Stecula et al., How trust in experts and media use affect
acceptance of common anti-vaccination claims, MISINFORMATION REV. (Jan. 14, 2020),
https://bit.ly/3MOTD05.
240. Shallow fakes are of a piece with the obsession over self-improvement and self-
care. See JIA TOLENTINO, TRICK MIRROR: REFLECTIONS ON SELF-DELUSION 80 (2019) (“Old
requirements, instead of being overthrown, are rebranded. Beauty work is labeled as ‘self-
care’ to make it sound progressive.”).
241. Young, supra note 34.
242. See Peter Dahlgren, Commentary, Public Sphere Participation Online: The
Ambiguities of Affect, 12 INT’L J. OF COMMC’N 2052, 2065 (2018) (“Today, in the viral
world of online information . . . what we feel—is clearly on the rise. Truth becomes
reconfigured as an inner subjective reality with an affective leap and thus becomes the
foundation for validity claims about reality.”).
243. See Brodesser-Akner, supra note 130.
through self-care.244 It sells products like the “$66 ‘Jade Egg,’” a stone
which is meant to be inserted into the vagina and which Goop claims
“could balance hormones, regulate menstrual cycles, prevent uterine
prolapse, and increase bladder control.” None of these claims are
supported by scientific evidence.245 As a result of these and other
assertions, ten California district attorneys’ offices filed suit for false and misleading advertising in violation of the state Business and
Professions Code, section 17500.246 Goop settled for $145,000.247
These deceptive marketing practices might be dismissed as the kind
that happen in all industries. But the wellness industry is particularly rife
with deceptive and unfair trade practices and is part of the rising tide of
online misinformation.248 As one report set forth, “[a] surge in
misinformation has grown with the internet, making wellness strategies
appear to have scientific foundations when instead they’re fueling baseless
and sometimes harmful theories.”249 To take another related example,
sales of vitamins, which are heavily promoted on social media, jumped 40% from 2019 to 2020, despite little medical evidence that they improve health or wellness.250
Of course, traditional media also sells beauty, glamour, and the
products that help achieve both. But traditional outlets must adhere to journalistic standards that are wholly absent in the virtual arena. In 2017, Condé
Nast, a global mass media company that owns Vogue, a fashion magazine
marketed to women,251 decided to partner with Goop to deliver content to
Vogue’s readers.252 The deal soon fell apart. The problem was twofold.
First, Vogue publishes a magazine, not a catalog, which means that it must
enforce a separation between content and product placement—and
therefore a separation between reader and consumer—which Goop does
not do.253 Second, Vogue requires fact-checking and support for scientific
claims made.254 Goop, however, understands that support to be
unnecessary, given that “they’re never asserting anything like a fact” but
rather “just asking unconventional sources some interesting questions.”255
Facts, which remain relevant to traditional media, are of diminishing
importance to social media.
G.P. would say, then what is science, and is it all-encompassing and altruistic
and without error and always acting in the interests of humanity? These questions
had been plaguing Goop for a while – not just what is a fact, or how important is
a fact, but also what exactly is Goop allowed to be suggesting?
Id.
255. Id.
256. In discussing the more specific practice of “stealth marketing,” Ellen Goodman
argues that it “harms . . . by degrading public discourse and undermining the public’s trust
in mediated communication.” Goodman, supra note 34, at 87.
257. See PARISER, supra note 33; see also Khiara M. Bridges, Language on the Move:
“Cancel Culture,” “Critical Race Theory,” and the Digital Public Sphere, 131 YALE L.J.
FORUM 767, 770–71 (2022). Bridges writes:
On social media, rational debate – the hallmark of the civic deliberations that
took place in the Habermasian public sphere – is not a dominant presence. When
one dares to open the Twitter app, one is more likely to encounter abusive speech,
ad hominem attacks, and wildly fact-free and logic-free statements than rational
argumentation.
Id.
258. See Bill McCarthy, Misinformation and the Jan. 6 insurrection: When ‘patriot
warriors’ were fed lies, POLITIFACT (June 30, 2021), https://bit.ly/3MxY4w0.
should expect that people will distrust what they see, and that distrust will
spill over into the other kinds of digital information they consume.
Consider again the “Moon Juice” peddler, Amanda Chantal Bacon.
Her clients include the likes of “Gwyneth Paltrow, Emma Roberts,
and Shailene Woodley.”259 She is, famously, a critic of “Western
medicine.”260 Chantal Bacon proudly notes that she has “never paid
influencers.”261 Instead, “[s]ocial media, and specifically Instagram, has
always been important to me.”262 That is, the very nature of the social
media platforms allows her company to flourish. Instagram provides her
with a space to promote her products for profit, by positioning herself as
an expert on “‘wellness and longevity’” and on leading a “‘holistic
lifestyle.’”263
Consider now Alex Jones, who, like Chantal Bacon, is a salesman. He
has a website, InfoWars, where he offers “organic fair-trade coffee” that
“can be purchased in an ‘Immune Support’ variety that includes cordyceps
and reishi mushroom extracts.”264 He also markets probiotics and a “Super
Female Vitality” supplement. These are made from the exact same extracts
that Moon Juice sells.265 Jones is perhaps best known as a right-wing radio
host and conspiracy theorist, who was instrumental in propagating the
false narrative that the 2020 election was stolen from Donald Trump.266
He spoke at a rally in DC on January 5, and was subpoenaed to discuss his
knowledge of, and involvement in, the January 6 attack.267
The link between Chantal Bacon, Jones, and January 6, is not as far-
fetched as it might initially seem. Chantal Bacon and Jones both traffic in,
and profit from, misinformation, and their business models depend on a
distrust of facts presented by sources external to themselves. The
background conditions of the platforms make it so that truth becomes a
commodity—with real world consequences that include harming our most
time-worn democratic institutions.268 This entwinement of profit and
259. Elana Lyn Gross, How Moon Juice’s Founder Built Her Wildly Popular
Wellness Brand, FORBES (Aug. 13, 2018), https://bit.ly/3QyIDr5.
260. Id.
261. Gross, supra note 259.
262. Id.
263. Id.
264. See Young, supra note 34.
265. See id.
266. See Frontline, What Conspiracy Theorist Alex Jones Said in the Lead Up to the
Capitol Riot, PBS (Jan. 12, 2021), https://bit.ly/3oyZK0n.
267. See Benjamin Siegel, Conspiracy theorist Alex Jones reveals he appeared
before Jan. 6 committee, ABC NEWS (Jan. 25, 2022, 6:54 PM), https://bit.ly/3oGYhVB.
268. See David Remnick, The Devastating New History of the January 6th
Insurrection, THE NEW YORKER, (Dec. 22, 2022), https://bit.ly/47t5ASt (describing the
attack as “a deliberate, coordinated assault on American democracy that could have easily
ended with the kidnapping or assassination of senior elected officials, the emboldenment
of extremist groups and militias, and, above all, a stolen election, a coup”).
politics on social media disrupts the possibility of creating any shared civic
discourse.269
The distrust of institutions that leads one to rely on the self—which
must be improved, immunized, and reinforced with various supplements
to withstand whatever may come—spans the political spectrum as well as
socioeconomic class. As individuals’ focus is pulled into
themselves and their interests, it is pulled away from the creation of a
collective reality or of a shared community.270 The result is that people feel
more alone and more siloed and therefore less engaged as members of
society. As people spend more of their lives online, and occupy spaces that
are rife with fakery, they will be less inclined to engage in a common civic
life.271
V. PLATFORM REGULATION
We have argued for a more robust assessment of the costs of shallow
fakes in both scholarship and policy. What does that mean in terms of
regulation? The first, and most obvious, regulatory move is to demand
greater transparency from social media platforms. Relatedly, the FTC
should sharpen and expand its guidelines around deception on social
media. Finally, we think there is room for voluntary initiatives by social
media firms, akin to the work being done in countering violent extremism
and child sexual abuse, though we note that some of the dominant policy
proposals today—especially antitrust policies aimed at greater
competition—are likely to be unhelpful in this context and may instead
make the problem worse.
Our focus in this Part is on the platforms, not the users, because it is
the platforms that are best situated to address the problem. They create the
market for shallow fakes, they have the most information about what is
happening on their services, and, crucially, they control users’ experiences
by incentivizing them to engage in shallow fakery. While platforms enjoy
broad immunity for much of what their users do under Section 230 of the
269. As researcher Peter Dahlgren has noted in his work on social media and
democracy, “from the standpoint of users, even if our intentions are civic or political, we
are still addressed by and embedded in dominant online consumerist discourses.” Dahlgren,
supra note 242, at 2060–61 (“These discourses offer us subject positions mostly as
consumers, rarely as citizens.”).
270. See TOLENTINO, supra note 240, at 30 (“Facebook’s goal of showing people only
what they were interested in seeing resulted, within a decade, in the effective end of shared
civic reality.”).
271. Jia Tolentino’s interview of author Naomi Klein discusses the consequences of
maintaining this hierarchy of priorities. Klein explains: “[T]he amount of labor we are
putting into optimizing our bodies, our image, our kids, is robbing from the work that needs
to be done to preserve the habitability of the planet, to preserve our humanity in the face
of those spasms.” Jia Tolentino, Naomi Klein Sees Uncanny Doubles in Our Politics, THE
NEW YORKER INTERVIEW (Sept. 10, 2023), https://bitly.ws/Uthu.
A. Transparency Reforms
Before we are ready to prescribe platform regulations with any detail,
we need to know much more about the platforms’ internal operations. How
are they offering filters and who are they targeting? How much are they
filtering by default, without giving users adequate notice? Relatedly, how
much user data are they tracking and towards what ends? We should also
know more about user behavior and how it is shaped by the platforms’
dark patterns. All of this is to say: we simply do not know enough about what
social media platforms are doing and how people are using them.
For many of the most important questions, only the social media
platforms have the answers or have access to data that could provide
answers. Unfortunately, the platforms have not been terribly transparent.
As Facebook insider Frances Haugen testified, “I came forward because I
recognized a frightening truth: almost no one outside of Facebook knows
what happens inside Facebook.”274 Despite repeated calls to remedy this
problem, Facebook has refused to share its internal research. As the Wall
Street Journal reported last year, “Facebook has consistently played down
the app’s negative effects on teens, and hasn’t made its research public or
available to academics or lawmakers who have asked for it.”275 Tellingly,
the first time most of the public and regulators became aware of the scope
of the mental health crisis on social media was when a Facebook employee
blew the whistle and leaked internal research.276
277. See, e.g., Mathew Ingram, Facebook “transparency report” turns out to be
anything but, COLUM. JOURNALISM REV. (Aug. 26, 2021), https://bit.ly/43dqz8Q.
278. For example, Meta’s recently created “Widely Viewed Content Report” is
supposed to bring transparency into the way information goes viral on the platform, yet it
provides such a high-level overview, it gives only a snapshot of “what a typical feed looks
like,” and does not give granular information about how information spreads. See, e.g.,
Widely Viewed Content Report: What People See on Facebook Q1 2023 Report, META,
https://bit.ly/3OoHlxm (last visited Sept. 12, 2023).
279. See Steven Levy, It’s Time to Talk About Facebook Research, WIRED (Sept. 17,
2021), https://bit.ly/42Whv8H.
280. See Craig Timberg, Facebook made big mistake in data it provided to
researchers, undermining academic work, WASH. POST (Sept. 10, 2021),
https://wapo.st/3WjbWi2.
281. See Taylor Hatmaker, Facebook cuts off NYU researcher access, prompting
rebuke from lawmakers, TECHCRUNCH (Aug. 4, 2021, 8:17 PM), https://bit.ly/3DrXoEs.
282. See Ethan Zuckerman, Facebook has a misinformation problem, and is blocking
access to data about how much there is and who is affected, THE CONVERSATION (Nov. 2,
2021, 8:27 AM), https://bit.ly/41MtJPW.
283. See Senator Coons Press Release, Coons, Portman, Klobuchar Announce
Legislation to Ensure Transparency at Social Media Platforms (Dec. 9, 2021),
terrible.288 These are helpful, to be sure. But these guidelines do not apply
to the vast majority of the deception we describe, in which people are not
actively promoting a specific product. Even in the narrow situations they
are meant to cover, Alexandra Roberts has found that “[f]alse advertising
claims based on the use of editing software to improve people’s
appearance” are unlikely to succeed because the FTC looks for explicitly
misleading statements about a product’s efficacy.289 These rules are also
poorly enforced.290 As such, one study estimated that only 7% of all
sponsored influencer posts comply with FTC rules.291
Because the FTC has not been especially active in this space, some
scholars have turned to the Lanham Act as a promising option, given that
it allows for causes of action to be privately enforced.292 But making out a
successful claim is still difficult. In Lokai Holdings, LLC v. Twin Tiger
USA, LLC, Twin Tiger alleged, among other things, that competitor
Lokai’s “failure to disclose that it compensates certain influencers,
celebrities, and media outlets for their endorsement of Lokai products in
online and social media advertising is likely to deceive reasonable
consumers.”293 Yet the district court denied the claim, concluding that “the
Lanham Act does not impose an affirmative duty of disclosure.”294
Accordingly, “failure to disclose compensation to celebrities and
influencers for promoting its products is not actionable under the Lanham
Act.”295
These examples address only one small slice of the fakery taking
place online, and the focus is always on individual users or brands, as
opposed to the platforms or their policies as a whole. Our reforms are
aimed at the platforms themselves. Specifically, we call upon the FTC to
promulgate rules in this context. While we are not the first to do so,296 our
288. Id. at 6.
289. Roberts, supra note 21, at 114. Roberts writes:
[T]hose that equate to a false or misleading statement about the product’s
efficacy—like photoshopping whiter teeth in an influencer ad for a tooth
whitening product or longer lashes in an ad for a lash-lengthening mascara—
seem more likely to be fair game.
Id.
290. See id. at 120.
291. See MEDIAKIX, supra note 15.
292. See Roberts, supra note 21, at 86–88 (noting that the FTC “lacks the resources
and perhaps the authority to enforce industry-wide change” but that the Lanham Act allows
for private companies to sue one another and arguing in favor of “private actors . . . us[ing]
the Lanham Act to challenge competitors’ false influencing”).
293. Lokai Holdings, LLC v. Twin Tiger USA, LLC, 306 F. Supp. 3d 629, 639
(2018).
294. Id. at 640.
295. Id.
296. At least four student notes have identified this problem along with several
practitioners. See, e.g., Lauryn Harris, Too Little, Too Late: FTC Guidelines on “Deceptive
and Misleading” Endorsements by Social Media Influencers, 62 HOW. L.J. 947 (2019);
Laura E. Bladow, Worth the Click: Why Greater FTC Enforcement Is Needed to Curtail
Deceptive Practices in Influencer Marketing, 59 WM. & MARY L. REV. 1123 (2018); Tisha
James, The Real Sponsors of Social Media: How Internet Influencers Are Escaping FTC
Disclosure Laws, 11 OHIO ST. BUS. L.J. 61 (2017); Christopher Terry et al., Throw the Book
at Them: Why the FTC Needs to Get Tough With Influencers, 29 J. L. & POL’Y 406 (2021).
297. The problem has been framed as finding plaintiffs, see Roberts, supra note 21,
at 87–88, but we are ultimately concerned with identifying the correct defendant. There
are, of course, certain users, like influencers, and especially those with particularly large
followings, who should be held accountable. The problems we have identified, however,
are with platforms’ policies, and our concern here is therefore with regulating platform
behavior.
298. See generally WU, supra note 10.
299. See 15 U.S.C. § 45(a).
300. See F.T.C. v. Colgate-Palmolive Co., 380 U.S. 374, 384–85 (1965) (describing
how the Federal Trade Commission Act was significantly amended in 1938 to include a
prohibition on “deceptive acts or practices in commerce”).
301. See Daniel J. Solove & Woodrow Hartzog, The FTC and the New Common Law
of Privacy, 114 COLUM. L. REV. 583, 586 (2014) (describing how the FTC has built up an
extensive body of regulations regarding privacy, which “is functionally equivalent to a
body of common law”).
302. See Roberts, supra note 21, at 84–85 (noting that “[o]mitting sponsorship
disclosure enables paid content to masquerade as organic buzz and peer-to-peer
testimonial, rendering misrepresentations even more persuasive,” and so “the majority of
influencers and brands go out of their way to obscure the nature of their relationship”).
C. Other Initiatives
In the absence of new regulations, industry reforms are a second-best
solution. While there are good reasons to be skeptical of voluntary industry
initiatives, there are several compelling precedents of industry-wide norms
developed by social media firms. For example, the Global Internet Forum
to Counter Terrorism (“GIFCT”) allows firms to harmonize their
efforts to combat extremist imagery, along with other counter-terrorism
steps. In 2017, the firms that make up the GIFCT created an industry
database of “perceptual hashes of known images and videos produced by
305. David Cohen, Global Internet Forum to Counter Terrorism Expands Scope of
Its Database, ADWEEK (July 26, 2021), https://bit.ly/44YHtsG (last visited Jul. 9, 2023).
306. See Anirudh Krishna, Note, Internet.gov: Tech Companies as Government
Agents and the Future of the Fight Against Child Sexual Abuse, 109 CAL. L. REV. 1581,
1602 (2021).
307. See id. at 1603.
308. See, e.g., Vanessa Friedman, Airbrushing Meets the #MeToo Movement. Guess
Who Wins, N.Y. TIMES (Jan. 15, 2018), https://nyti.ms/3MI6ulM (describing plans by CVS
and other firms to stop “materially altering” imagery in advertisements); see also Eric
Pfanner, A Move to Curb Digitally Altered Photos in Ads, N.Y. TIMES (Sept. 27, 2009),
https://bit.ly/3IrJzbG (describing a French law to ban airbrushing).
309. See Becky Bargh, Olay pledges to stop airbrushing advert campaigns,
COSMETICS BUS. (Feb. 24, 2020), https://bit.ly/3BKC7ov.
have been significantly altered.310 This labeling would allow users to better
distinguish between real and fake images and give them the tools to more
accurately interpret the posts.
This kind of initiative, which is ambitious on its own, would still not
remedy many of the problems described here. Users could still voluntarily
edit their photos using native photo-editing software and then upload them
to social media platforms, which likely would not be able to identify
whether the image was doctored. Even sophisticated face-recognition
software depends on a training image; if someone only posts edited images
of their face, that would be the face identified by the software. And many
of the problems described here—like a photo simply being taken out of
context—do not only concern editing software.
Any initiative, therefore, must also include media literacy training.
For more robust protection against trickery, social media users need to be
critical consumers who are versed in identifying fakes and, more
importantly, who treat what they see with a healthy (rather than
democracy-defeating) dose of skepticism, so that they do not assume
everything is fake but can instead tell fact from fiction. Educating
users to be critical will not fundamentally change the nature of social
media platforms,311 but it will enable users to make a distinction between
the digital and the physical world in ways that give users more control and
more information about what exactly they are consuming.
One final way that industry norms could help would be to cultivate a
diversity of images across users’ visual fields. Just as people speak of
needing a balanced “information diet” to combat the filter bubble,
platforms could ensure that users receive a balanced diet of images—
filtered and unfiltered.312 This would go some way towards combatting the
unrealistic standards that are presented as the norm on social media
platforms. Being exposed to images of different body types would also put
the brakes on the common experience of tunneling down a filter bubble
where one is fed only one particular kind of image.
VI. CONCLUSION
There is a great deal of talk about the “metaverse,” a digital world
where we will be able to leave our bodies behind.313 The foundations for
310. See, e.g., Vanessa Friedman, supra note 308 (describing how CVS will mark
images that are not significantly digitally altered).
311. See TOLENTINO, supra note 240, at 19 (“The internet is engineered for this sort
of misrepresentation; it’s designed to encourage us to create certain impressions rather than
allowing those impressions to arise ‘as an incidental by-product of [our] activity.’”).
312. See Steven Leckart, Balance Your Media Diet, WIRED (Jul. 15, 2009),
https://bit.ly/45bAvS6.
313. See Eric Ravenscraft, What Is the Metaverse, Exactly?, WIRED (Nov. 25, 2021),
https://bit.ly/3MpVGHs.
that world are being built on today’s digital platforms.314 But today’s
platforms are awash in deceit, through deepfakes and shallow fakes alike.
Ours is a visual field marked by intense pressure to conform to a particular
ideal of the perfect self, one that can only ever exist in a fake world.315 For
most of us, though, our online images are not our reality. Let us keep it
that way.
314. See Katherine Fung, Facebook Changes Company Name to ‘Meta’ in Rebrand,
Social Network Name Will Stay, NEWSWEEK (Oct. 28, 2021, 2:38 PM),
https://bit.ly/44DLyD5 (“The metaverse is the next evolution of social connection. Our
company’s vision is to help bring the metaverse to life, so we are changing our name to
reflect our commitment to this future.”).
315. Consider @lilmiquela who is “a 19-year-old Robot living in LA.” See
@lilmiquela, INSTAGRAM, https://bit.ly/3Iwl4dI (last visited June 26, 2023). Lucy
Blakiston explains:
Miquela’s ‘life’ is the definition of unrealistic, because she’s literally an
amalgamation of a perfectly selected bunch of pixels. But between Photoshop,
Facetune, filters and all the other bullshit that exists these days to help us airbrush
our online lives, Miquela’s façade really isn’t that different from any other
influencer. And maybe that’s the point.
Lucy Blakiston, Lil Miquela and the rise of the robot influencer, THE SPINOFF (July 15,
2021), https://bit.ly/3r72ByA.
TLP:CLEAR
Target audience of this guide
This guide is addressed to persons over the age of 18 and aims to raise awareness and understanding of the use of generative artificial intelligence (AI) tools to create video, audio, or image content, as well as of how to detect the use of such tools.
Deepfake, a definition
A deepfake is a digital manipulation of a video or audio recording or of an image, carried out with the help of artificial intelligence (AI) or other specialized software.
An example: There are many cases in which a video shows a public figure giving financial advice. In the video, the person insistently recommends investing in a particular company or business, promising high returns and minimal risk.
The details matter: The video is professionally produced, with high image and sound quality. The public figure appears sincere and convincing, using accessible language and persuasive arguments.
The purpose of the campaign: Such a deepfake could be used to manipulate the public into investing in an illicit business (a scam [1]) owned or controlled by the manipulators.
Impact: The video is distributed rapidly on social networks, with enormous potential to be viewed by millions of people. Many of those who see it could be persuaded to invest in the recommended business, later risking the loss of significant sums of money.
Why it is important to understand deepfakes
Understanding deepfakes is crucial, as they have significant implications for society, politics, cybersecurity, and the manipulation of public trust.
Impact on truth and trust: Deepfakes can distort reality by creating extremely realistic video or audio content that can be hard to distinguish from authentic material. This can undermine trust in the media and in the information we rely on to make informed decisions.
Security and fraud: Deepfake technology can be used to create false recordings that implicate people in activities they never carried out, or to produce false evidence in legal contexts. It can also be used in fraud attempts, such as faking someone's identity in phone or video calls in order to gain unauthorized access to information or financial resources.
Manipulation and disinformation: In politics and other fields, deepfakes can be used to manipulate public opinion, discredit opponents, or create confusion. They can influence elections and international relations, and they can fuel conspiracy theories.
Social and ethical impact: The unethical use of deepfake technology to create compromising material or to harass people online raises serious issues of ethics and respect for individual rights.
Preparing for the future: As the technology continues to advance, deepfakes are likely to become ever more sophisticated and harder to detect. Understanding how they work and how to identify them is essential for developing technologies and strategies to counter their malicious use.
Education and awareness: Educating the public about the existence and capabilities of deepfakes can help mitigate their impact by increasing healthy skepticism toward dubious content and by encouraging the verification of information from multiple sources.
For all these reasons, it is important to develop a collective understanding of deepfakes and of their potential to affect society. This will allow us to navigate an increasingly complex digital world with discernment and greater caution.
[1] A general way of referring to the various types of fraud that users encounter when using the Internet. Those who carry out such activities use technical tools or social engineering techniques with the intent of obtaining financial gain.
The technology behind deepfakes
Deepfakes are created using a combination of AI and machine learning (ML) techniques. The key technologies involved are:
1. Convolutional neural networks (CNNs): Artificial neural networks specialized in analyzing images and videos. They are trained on large datasets of real images and videos to learn facial features, expressions, body movements, and other visual details.
2. Generative adversarial networks (GANs): Artificial neural networks that can generate new, realistic content similar to the data they were trained on. In the context of deepfakes, GANs are used to generate fake images and videos that closely resemble real ones (see the sketch after this list).
3. Machine learning (ML): Used to train deepfake algorithms to identify and manipulate specific elements of images and videos, such as facial expressions, lip movements, audio synchronization, etc.
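To make the generator-versus-discriminator idea concrete, below is a minimal, illustrative sketch in Python (PyTorch) of the adversarial training loop that GAN-based generators build on. It is not the pipeline of any particular deepfake tool: real systems use much larger convolutional networks, face alignment, and huge datasets, and the tiny models, sizes, and placeholder data here are assumptions made purely for illustration.

```python
# Minimal GAN training loop (illustrative sketch only).
# Assumes PyTorch is installed; 'real_faces' is a random stand-in for a real dataset.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 32 * 32  # tiny grayscale "faces" for illustration

generator = nn.Sequential(          # maps random noise -> fake image
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(      # maps image -> probability "real"
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    real_faces = torch.rand(16, img_dim) * 2 - 1   # placeholder for real training images
    noise = torch.randn(16, latent_dim)
    fake_faces = generator(noise)

    # 1) Train the discriminator to separate real images from generated ones.
    d_loss = loss_fn(discriminator(real_faces), torch.ones(16, 1)) + \
             loss_fn(discriminator(fake_faces.detach()), torch.zeros(16, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    g_loss = loss_fn(discriminator(fake_faces), torch.ones(16, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The two losses pull against each other: as the discriminator gets better at spotting fakes, the generator is pushed to produce ever more realistic output, which is what makes mature deepfakes so hard to judge by eye.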
The process of creating a deepfake
Data collection: To create a convincing deepfake, a large amount of training data is needed, including images and videos captured from different angles and in various situations. The more varied and higher-quality the data, the more realistic the deepfake will be.
Model training: Using the collected data, AI and ML algorithms are trained to identify and learn the target person's unique characteristics, such as facial features, expressions, and the way they move or speak. The goal of this phase is to allow the model to reproduce these details as precisely as possible.
Deepfake generation: Once the model is sufficiently trained, it is used to create falsified content in which the target person says things or performs actions that never happened. This stage involves generating artificial but extremely realistic images or video sequences using the capabilities of the trained model.
Fine-tuning the details: To increase the authenticity and credibility of the deepfake, fine details are adjusted meticulously. This includes synchronizing lip movements with the falsified audio, correcting facial expressions to match the generated context, and polishing other minor details that contribute to the overall realism of the content.
Distribution: The finished deepfake can be distributed online via social networks, video-sharing platforms, or even messaging apps. The intent behind the distribution may be to deceive, to discredit a person, or to manipulate public opinion.
Real examples of deepfake use
The National Bank of Romania (BNR) has warned the public about a scam that uses deepfake technology to create fake videos of the BNR governor. In these videos the governor appears to promote an investment platform, but the BNR has stated that they are fake. The scam uses AI to alter the governor's voice and image in order to mislead the public into participating in fraudulent investments, promising quick and easy financial gains.
Figure 1: SCAM | Deepfake of BNR Governor Mugur Isărescu promoting an investment platform
A retired woman from Vaslui was the victim of a scam posted online on YouTube. The scammers created a fake video in which a well-known banker and other public figures recommended an investment platform. The promise of quick profits convinced the pensioner to invest 52,000 lei, savings accumulated over 20 years of work. Even though the woman reported the fraud to the authorities, experts consider the chances of recovering the lost funds to be minimal [i].
Malicious actors are becoming ever more inventive. They can now imitate the voice of a loved one perfectly in order to trick you. They may call you pretending to be a family member who urgently needs money. We recommend always verifying the caller's identity, even if it appears to be a relative or a known friend. Never send money in haste, and notify the authorities immediately if you have suspicions. Only through vigilance and staying informed can you protect yourself from this form of fraud [ii].
In 2019, an audio deepfake was used to defraud a CEO of 220,000 euros. The chief executive of a UK-based energy firm believed he was speaking on the phone with the chief executive of the German parent company when he followed orders to immediately transfer 220,000 euros to the bank account of a Hungarian supplier. In fact, the voice belonged to a fraudster using AI voice technology to imitate the German executive [iii].
The danger of deepfakes in the electoral context
In today's digital era, in which the boundary between reality and fiction is constantly blurring, the electoral process goes beyond a simple confrontation of ideologies and political promises and becomes a complex ideological battleground. Deepfake technologies, capable of realistically synthesizing images and voices, can significantly influence voters' opinions and votes during election campaigns, with a major impact on the democratic process, for the following reasons:
Impact on politicians: A politician's reputation can be seriously damaged by fabricated deepfake videos, which can discredit their image and hurt their chances of winning an election. Spreading disinformation through deepfakes can manipulate public perception of a politician's character and platform, negatively affecting their political career.
Impact on political parties: Deepfakes can be used as a strategic tool to discredit rival parties by creating fake material that casts them in a negative light. Promoting one's own party's agenda through deepfakes can be an effective way to influence public opinion and attract voters. Improper use of deepfakes by political parties can lead to scandals and to a loss of public trust in the political system.
Impact of interest groups: Deepfakes can be used by interest groups within a country to manipulate public opinion on civic issues. Creating falsified content adapted to local contexts can intensify or calm discontent about particular issues or events. The use of deepfakes by interest groups can lead to the polarization of society and to an erosion of democratic public discourse.
AI/deepfake techniques used to influence election campaigns:
• Realistic video forgery: Advanced AI algorithms create videos that are nearly impossible to distinguish from real ones, depicting fake scenes with politicians or invented events.
• Voice cloning and audio fakes: A politician's voice is reproduced with astonishing precision, generating fake messages wrongly attributed to that person.
• Synthetic text generation: Credible texts are created that imitate the style and tone of a political figure or institution, spreading disinformation.
• Face swapping and morphing: The technology allows faces in videos to be altered, creating false scenarios that never happened, or not in the context presented.
• Behavioral prediction: AI analyzes online behavior to predict the response to certain messages, making it easier to create targeted deepfakes aimed at influencing specific segments of voters.
These techniques can be used by malicious actors to discredit candidates, manipulate public opinion, and undermine trust in the electoral process. Raising awareness of the dangers of deepfakes and implementing measures to combat disinformation and protect the integrity of elections are essential.
Signs for identifying a deepfake
Protecting against deepfakes is an ongoing challenge, because the technology is constantly improving, making manipulated content ever harder to distinguish from real content. Nevertheless, there are certain clues that can give a deepfake away; here is what to look for:
• The surrounding environment (for example, missing shadows, overly strong reflections, blurry areas)
• Facial imperfections (unrealistic moles, unsynchronized blinking, distortions inside the mouth such as missing teeth or a missing tongue, teeth that are far too perfect, etc.)
• Mismatches between speech/sound and lip movement, for example caused by a sneeze
Mismatches between speech/sound and lip movement are most noticeable when the letters b, m, and p are pronounced. Sometimes grayish pixels appear at the edges of the modified elements. A forgery can also be spotted when the person in the recording is viewed from a different angle. If photographs of the person taken from different angles were not used to create the deepfake, the algorithm cannot infer the person's appearance from another angle, resulting in distortions.
The Massachusetts Institute of Technology (MIT) is one of the organizations developing ML tools that can identify whether a piece of content is authentic or a deepfake, and it also offers an educational quiz to teach the public how to make this distinction [iv].
A classic detector of fake visual content relies on detecting the errors that result from processing. Most often this involves analyzing pixels that the human eye cannot see, because manipulating an image leaves the edges of the modified element with special characteristics. One such algorithm is a hybrid of Long Short-Term Memory (LSTM, a type of neural network) and an encoder-decoder. It works by analyzing, in parallel, each individual pixel and/or the entire compressed image or video. Finally, the results of the two branches are compared, and if both point to the same region, the material is considered to have been modified.
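The LSTM/encoder-decoder hybrid mentioned above is beyond the scope of this guide, but the underlying idea, exposing processing traces the eye cannot see at the edges of an edited region, can be illustrated with a much simpler classic technique, error level analysis (ELA). The sketch below is a rough illustration under stated assumptions (Python with the Pillow library installed, a JPEG input named suspect.jpg), not a production detector; bright, sharply bounded patches in its output are only a hint that a region deserves closer inspection.

```python
# Error level analysis (ELA) sketch: highlights regions whose JPEG compression
# history differs from the rest of the image, a common sign of local editing.
# Assumes Pillow is installed and 'suspect.jpg' is the image to inspect.
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    original.save("_resaved.jpg", "JPEG", quality=quality)  # recompress once
    resaved = Image.open("_resaved.jpg")

    diff = ImageChops.difference(original, resaved)           # per-pixel error
    max_diff = max(ex[1] for ex in diff.getextrema()) or 1    # avoid division by zero
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

if __name__ == "__main__":
    error_level_analysis("suspect.jpg").save("suspect_ela.png")
    # Inspect suspect_ela.png: regions edited after the original compression
    # tend to stand out against the rest of the image.
```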
There are also different types of detectors. The Intel detector [v] relies on observing subtle cues, invisible to the human eye, to verify the authenticity of content. By analyzing signs such as pupil dilation or changes in the color of blood vessels in step with the heartbeat, it can determine whether the content is authentic [vi].
Tools and techniques for detecting deepfakes
Detecting deepfakes is a constantly evolving challenge, as the AI technologies behind their creation become increasingly sophisticated. In response, researchers and developers are working on new tools and techniques to identify these forgeries. Here are some of the most promising approaches to deepfake detection:
Behavioral analysis: This method relies on identifying small imperfections or anomalies in the behavior or physical movements of the subject in the video.
Lighting consistency: Detecting inconsistencies in lighting is another effective technique. Algorithms analyze shadows, reflections, and the way light falls on different surfaces of the face to determine whether the image has been manipulated.
Skin texture analysis: Deepfake techniques often smooth the skin texture or introduce texture anomalies. Detecting these changes, which are often subtle and hard to notice with the naked eye, can help identify manipulations.
Compression artifact detection: Videos and images manipulated with AI can exhibit unusual compression artifacts resulting from the generation and compression process. Analyzing these artifacts can provide clues that the material has been altered.
Metadata examination: Although deepfakes themselves can be convincing, the metadata associated with a video or image file (such as the creation date, camera type, etc.) can be contradictory or suspicious, suggesting manipulation.
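As a small illustration of metadata examination, the sketch below reads an image's EXIF tags with Python and the Pillow library. The file name and the tags singled out are assumptions made for the example; in practice, a missing EXIF block, an editing tool recorded in the Software tag, or a creation date that contradicts the claimed event are the kinds of inconsistencies this technique looks for.

```python
# Read EXIF metadata from an image and print fields worth a closer look.
# Assumes Pillow is installed and 'suspect.jpg' is the file under review.
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_metadata(path: str) -> dict:
    exif = Image.open(path).getexif()
    readable = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

    if not readable:
        print("No EXIF metadata: it may have been stripped, which is itself a clue.")
    for key in ("Software", "DateTime", "Make", "Model"):
        if key in readable:
            print(f"{key}: {readable[key]}")   # compare against the claimed source and date
    return readable

if __name__ == "__main__":
    inspect_metadata("suspect.jpg")
```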
Checking breathing and pulse consistency: Some advanced techniques include analyzing tiny variations in facial color or shading that can reveal the heartbeat and breathing. Changes in these patterns can indicate the presence of a deepfake.
Using convolutional neural networks: CNNs are used to analyze videos frame by frame, learning the characteristics that distinguish authentic videos from fake ones. This method can be extremely effective but requires a large amount of training data.
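A rough sketch of this frame-by-frame workflow follows, assuming Python with OpenCV and NumPy installed. The classifier itself is a hypothetical stub (score_frame), since this guide does not name a specific model; the point is only to show how a video is decoded frame by frame, scored, and aggregated into an overall verdict.

```python
# Frame-by-frame scoring sketch: decode a video and aggregate per-frame
# "fake probability" scores. The classifier is a hypothetical stand-in.
# Assumes OpenCV (cv2) and numpy are installed; 'suspect.mp4' is illustrative.
import cv2
import numpy as np

def score_frame(frame: np.ndarray) -> float:
    """Hypothetical stand-in for a trained CNN returning P(frame is fake)."""
    return 0.0  # replace with a real model's prediction

def score_video(path: str, every_nth: int = 10) -> float:
    capture = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_nth == 0:          # sample frames to keep the pass cheap
            scores.append(score_frame(frame))
        index += 1
    capture.release()
    return float(np.mean(scores)) if scores else 0.0

if __name__ == "__main__":
    print("Average fake score:", score_video("suspect.mp4"))
```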
Commercial applications and services: There are also commercial tools available, as well as solutions offered by start-ups specializing in deepfake detection, which combine the technologies mentioned above into turnkey solutions for organizations or individuals.
Challenges and limitations: It is important to note that as detection technologies become more sophisticated, so do the methods for creating deepfakes. This means that detection tools require constant updating and improvement. In addition, many of the techniques can produce false positives or false negatives, which requires additional checks and continuous refinement of the algorithms.
Deepfake detection is a dynamic battlefield between the creators of falsified content and those trying to protect the authenticity of information. Continued research and development in this field is crucial to keep pace with the rapid evolution of the technology.
Tips to avoid being fooled by deepfakes
Avoiding deception by deepfakes requires a combination of healthy skepticism, attention to detail, and the use of verification tools. Here are some useful tips:
• Don't believe everything you see online! The Internet is a vast source of information, but not all of it is truthful. It is important to develop healthy skepticism and to carefully analyze any video or photo content before accepting it as real.
• Look for signs of manipulation: Deepfakes can be very sophisticated, but they can often be identified through certain clues. Pay attention to lighting discrepancies, alignment errors, irregularities in the skin, or problems with the synchronization of lips and sound.
• Check the source: Where does the video or image come from? Is it shared on a trustworthy platform? Look for confirmation of the information from credible sources or directly from the entities or people involved.
• Use verification tools: There are numerous organizations and online tools that can help you check whether a piece of information is real. Use them to investigate the authenticity of suspicious content.
• Don't rely on a single source: Look for confirmation from several credible sources. A single video or image is not enough to verify a piece of information.
• Learn about deepfakes: The better you understand how this technology works, the more capable you will be of identifying fakes. There are many online resources that explain the principles of deepfakes and the methods for detecting them.
By applying these tips, you can reduce the risk of being deceived by deepfake content and help promote a culture of verification and responsibility online.
Education in combating deepfakes
Informing the public: Education is essential for raising awareness of deepfakes and the dangers associated with them. People need to be informed about how deepfakes can be used for manipulation and disinformation in order to reduce their negative impact.
Developing critical thinking: Educational programs can help develop critical thinking skills. People need to learn how to evaluate information sources, recognize the signs of falsified content, and verify information before sharing it.
Digital security: Education on online security and privacy can help individuals better protect their own information and be aware of the potential abuses of deepfake technology.
Media literacy: Understanding how the media works and how content is produced can help the public better discern deepfakes.
The combination of technological efforts and solid educational programs can build a strong and effective response to the challenges posed by deepfakes. This dual approach is essential for minimizing the negative impact of deepfakes on society, politics, and citizens' private lives.
What to do if you are the victim of a deepfake
If you find yourself in the unpleasant situation of being the victim of a deepfake, it is important to act quickly and effectively to minimize the damage. Here are some steps you can take:
• Document the abuse: Save copies of the deepfake content, including URLs, screenshots, or any other form of evidence that might be relevant. This is essential for any subsequent legal action or reports.
• Report the content: Most social media platforms and websites have strict policies against deepfakes and manipulated content. Users can easily flag suspicious content using the platform's built-in reporting function.
• Contact the authorities: In serious cases, where the deepfake content violates laws on defamation, harassment, or the non-consensual distribution of pornographic material, it may be necessary to contact the local authorities or other law-enforcement bodies.
• Seek legal help: Consult a lawyer to assess the legal options available to you. These may include legal action against those who created or distributed the deepfake content.
• Use online reputation management services: There are companies that specialize in improving online presence and in removing or mitigating the impact of negative content. These services can be useful for protecting your image in the long term.
• Communicate transparently: If the deepfake has the potential to affect your career or personal relationships, consider speaking openly about the situation with your employer, colleagues, or those close to you. Providing context and your side of the story can help reduce the negative impact.
• Protect your personal information: After a deepfake incident, it is important to be more cautious about online security. Check your privacy settings on social networks, change your passwords, and monitor your accounts' activity for signs of unauthorized access.
• Emotional support: The psychological trauma suffered by the victim of a deepfake can be significant. Do not hesitate to seek support from friends, family, or mental health professionals.
• Education and awareness: Help raise awareness of the dangers of deepfakes by sharing your experience, if you feel comfortable doing so. This can help inform and protect others.
By acting promptly and decisively, you can get through the challenges associated with being the victim of a deepfake and begin the process of recovering and protecting your reputation.
The Artificial Intelligence Regulation adopted by the European Parliament
The European Parliament has adopted Regulation COM/2021/206 on AI, which aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability. The regulation classifies AI systems according to their level of risk (unacceptable, high, limited, minimal) and prohibits certain uses, such as the cognitive-behavioral manipulation of vulnerable persons, social scoring systems, and real-time biometric identification (with exceptions). The regulation also imposes obligations regarding transparency, traceability, risk assessment, and quality assurance.
The regulation also addresses the problem of deepfakes, defining them as techniques for manipulating images or videos to create the illusion that a subject is saying or doing something that did not take place, or not in the context presented. Deepfakes posing an unacceptable risk are prohibited, high-risk deepfakes must be identifiable as such, and those with limited or minimal risk are not subject to specific restrictions. Providers of deepfakes must ensure that users are aware of the artificial nature of the content, and online platforms must take measures to prevent the spread of harmful deepfakes [vii].
Conclusions
As technology advances, the ability to create deepfakes becomes ever more accessible, increasing the need for effective detection and awareness strategies.
Development of detection algorithms: Researchers are working on AI-based algorithms that can identify deepfakes by analyzing details that are difficult for the human eye to detect, such as anomalies in blinking or in the natural movement of the skin.
Content authentication techniques: Technologies such as blockchain and digital watermarking can help establish the origin and authenticity of content, providing a way to verify original video and audio sources.
Verification tools at scale: Social media platforms and technology companies are developing automated tools to scan for and remove deepfakes from their sites, helping to reduce the spread of disinformation.
Cross-industry partnerships: Collaboration between the technology sector, government, and research organizations can accelerate the development and deployment of deepfake detection solutions.
This publication is licensed under CC-BY 4.0: "Unless otherwise specified, the reuse of this document is authorized under the Creative Commons Attribution 4.0 International (CC BY 4.0) licence (https://creativecommons.org/licenses/by/4.0/). This means that reuse is permitted, provided that appropriate credit is given and any changes are indicated."
TLP:CLEAR may be used when the information carries minimal risk of misuse, in accordance with the applicable rules and procedures for publication. Subject to standard copyright rules, TLP:CLEAR information may be shared without restriction.
The information and opinions contained in this document are provided "as is" and without warranties. Reference in this document to any specific commercial products, processes, or services by trade name, trademark, manufacturer, or otherwise does not constitute or imply their endorsement, recommendation, or favoring by the National Cyber Security Directorate (DNSC), and this guidance shall not be used for advertising or product-endorsement purposes.
[i] https://www.digi24.ro/stiri/cum-a-fost-lasata-o-pensionara-fara-52-de-mii-de-lei-am-intrat-pe-youtube-unde-ascult-rugaciuni-si-am-dat-de-un-videoclip-cu-tiriac-2714127
[ii] https://consumer.ftc.gov/consumer-alerts/2023/03/scammers-use-ai-enhance-their-family-emergency-schemes
[iii] https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/
[iv] https://detectfakes.kellogg.northwestern.edu/
[v] https://www.intel.com/content/www/us/en/newsroom/news/intel-introduces-real-time-deepfake-detector.html
[vi] https://www.cert.hr/wp-content/uploads/2023/12/zloupotreba_umjetne_inteligencije.pdf
[vii] https://www.europarl.europa.eu/news/ro/press-room/20240308IPR19015/legea-privind-inteligenta-artificiala-pe-adopta-un-act-de-referinta
Shallow Fakes
Source: IE
Recently, a viral video that appeared to show US Vice President Kamala Harris making
irrational remarks pointed toward the threat society faces from shallow fakes.
Shallow fakes, or cheap fakes, are pictures, videos, and voice clips created without the help
of Artificial Intelligence (AI) technology, using either conventional editing or other simple
software tools.
Deepfakes are synthetic media that use AI to manipulate or generate visual and audio
content, usually intending to deceive or mislead someone.
These are created using a technique called generative adversarial networks (GANs),
which involve two competing neural networks: a generator and a discriminator.
The Global Risks Report 2024, published by the World Economic Forum (WEF), also
highlighted AI-powered misinformation and disinformation as the most severe
risks over the next two years.
Cheapfakes
Dren Gërguri
University of Prishtina “Hasan Prishtina”, Kosovo, [email protected]
Accepted for publication in: Nai, A., Grömping, M., & Wirz, D. (Eds.), Elgar Encyclopaedia
of Political Communication. Edward Elgar Publishing.
Abstract
Cheapfakes are fake audiovisual content. These manipulations are carried out with free
programs, which is why the words cheap and fake are used. Cheapfakes remain one of the
most common and successful types of visual disinformation, and visual recontextualization is
the most widespread cheapfake type. Despite their potential to harm and undermine public
discourse, cheapfakes have received relatively little attention from researchers. This entry
aims to address this knowledge gap by exploring the definition, types, detection, and
implications of cheapfakes, and it also distinguishes them from deepfakes. By examining the
ways in which cheapfakes can be created, disseminated, and detected, we can better
understand their impact on society. Moreover, the entry recognizes the ongoing academic
discussions and open-ended concerns about the credibility and impact of visual
disinformation, highlighting the necessity of further research to comprehensively address
these issues.
Cheapfakes are audiovisual manipulations produced without the involvement of artificial
intelligence (Gërguri, 2022). At a time when greater use of deepfakes was expected,
especially in the 2020 presidential elections in the USA, cheapfakes received greater
attention during the election campaign because more cheapfakes than deepfakes were
circulating online (Schick, 2020). The main reason cheapfakes were used more is the ease
of production and the difficulty of detecting them by technological means (Fazio, 2020).
The term ‘cheapfakes’ is a combination of two words, cheap (free) and fake, referring to the
use of free programs to create false audio or visual content. Such manipulations have been
very frequent in recent years. However, during 2024 we may see a rise in the use of deepfake
techniques due to the rapid advancements in generative AI.
This entry aims to explain the concept of cheapfakes, their differences from deepfakes, and
their implications for society. The lack of research on cheapfakes is a significant gap in our
understanding of the impact of this kind of audiovisual manipulation on society; this entry
therefore explores the definition, types, detection, and implications of cheapfakes,
highlighting their potential to spread disinformation and undermine public debate.
Cheap manipulation
Whether a cheapfake is harmless or not depends on the content. When a cheapfake is produced for political
purposes, there is often an intention to harm or attack a leader, a party, or an institution.
The main forms of cheapfakes are: intervention in the context of images, intervention in the
context of videos, speeding up video, raising audio, slowing down video, and lowering audio
(Paris & Donovan, 2019; Gërguri, 2022). The most prevalent across different platforms is
intervention in the context of images or videos (Brennen et al., 2021).
Intervention in the context of images is the process of editing a photo's content or context in
order to fabricate a story, for instance by altering an image's background to depict people in a
different setting, or by adding or deleting elements to change the image's meaning. For
example, a Reuters photo from 1999 [1] showed an Albanian woman entering North
Macedonia with other refugees during the Kosovo war. However, nearly two decades later, a
manipulated version of the image with a changed background emerged, falsely claiming to
show a Serbian survivor of the NATO bombing of Yugoslavia. The manipulated photo was
used in a Russian TV show [2]. Such attempts to change reality by intervening in photographs
are numerous, and cheapfakes of this type are made on various topics and for various
purposes, ranging from manipulating information about war, e.g., manipulated photos from
the Russian invasion of Ukraine, to sports events or protests during the pandemic (Reuters,
2020).
The other type of cheapfake is intervention in the context of a video. This technique, similar
to picture manipulation, changes a video's content or context. To deceive viewers about the
events represented, this might involve adding or deleting components, altering the
environment, or modifying the content. One such case was a cheapfake published in 2021 in
which the climate change activist Greta Thunberg, in a clip from an MSNBC interview,
seemed to be saying that climate change does not exist [3]. This is a cheapfake of the
video-cropping type, taking words out of context.
[1] Reuters, 30 years of Reuters Pictures: Part one. https://widerimage.reuters.com/story/30-years-of-reuters-pictures-part-one
[2] Centar za geostrateške studije, РИА Новости Рат у Украјини 2014. и у Југославији 1999., https://www.youtube.com/watch?v=6OLu4EYqEjk&t=146s
[3] Reuters Fact Check, Isolated clip of Greta Thunberg saying ‘climate change does not exist’ is misleading, https://www.reuters.com/article/idUSL2N2O52US/
Video acceleration is another type of cheapfake. It involves increasing the speed of a video to
make events appear to unfold more quickly than they did. This can be used to distort the
perception of time and influence the viewer's understanding of a situation. One widely
discussed case of video-acceleration cheapfakes occurred in November 2018 in the United
States, when CNN White House correspondent Jim Acosta was made to appear "aggressive"
toward a White House intern at one of former President Trump's press briefings. In fact,
Acosta's reaction was not unusual: he tells the intern, "Excuse me, ma'am," and continues his
question, but the manipulation of the video, speeding it up, makes it appear that the journalist
struck her with his elbow. The Trump administration used this to suspend Acosta's
accreditation, but the suspension was withdrawn after CNN decided to take the case to
court [4].
A year later, another cheapfake went viral: a manipulated video of Nancy Pelosi that depicts
the Democratic politician as drunk [5]. In Acosta's case, the interference was speeding up the
video, whereas here the opposite was done, slowing the video down and creating the
impression that Pelosi was drunk during the appearance. Slowing down a video can be used
to exaggerate and emphasize certain actions or events; it can create a false sense of drama or
make subtle movements appear more significant than they are in reality.
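To underline how "cheap" these speed manipulations are, the sketch below rewrites a clip's frames at a new frame rate with Python and OpenCV, which on its own is enough to make events appear faster or slower than they really were (audio handling is omitted). The file names and the speed factor are illustrative assumptions; the point is only to show readers and fact-checkers how little effort the manipulations behind the Acosta and Pelosi videos require.

```python
# Changing a video's apparent speed by rewriting its frames at a new frame rate.
# This is the core of "speeding and slowing" cheapfakes (audio is ignored here).
# Assumes OpenCV is installed; file names and the factor are illustrative.
import cv2

def change_speed(src: str, dst: str, factor: float) -> None:
    reader = cv2.VideoCapture(src)
    fps = reader.get(cv2.CAP_PROP_FPS)
    width = int(reader.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT))

    # Writing the same frames at factor * fps makes playback 'factor' times faster.
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(dst, fourcc, fps * factor, (width, height))

    while True:
        ok, frame = reader.read()
        if not ok:
            break
        writer.write(frame)
    reader.release()
    writer.release()

if __name__ == "__main__":
    change_speed("briefing.mp4", "briefing_fast.mp4", 1.5)   # 1.5x faster
    change_speed("speech.mp4", "speech_slow.mp4", 0.75)      # 25% slower
```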
Different tools, such as Google Lens, InVID, etc., can be useful for identifying a cheapfake,
and there are studies (Qian, 2023) showing that when people are aware of those tools, they
develop a habit of verifying cheapfakes. Disinformers can disseminate a cheapfake by
changing a video's or image's context, so it is important to develop verification techniques
and tools among journalists, fact-checkers, and society in general. Detecting a manipulation
requires critical judgment about the media content one is exposed to, and various models can
help people maintain those critical skills. One such model is the three basic questions
developed by Gërguri (2022) during the pandemic: 1. Who is the author? 2. Have you
checked the story against multiple sources? 3. Is the information evidence-based and backed
by reliable and official sources? The model was devised in extraordinary circumstances but
remains appropriate in the post-pandemic information environment. It highlights the
importance of knowing authors and their credibility, the value of consulting multiple sources
to gain a wider perspective on the information, and the role of supporting sources in the
news.
[4] Forbes, https://www.forbes.com/sites/laurenaratani/2018/11/08/altered-video-of-cnn-reporter-jim-acosta-heralds-a-future-filled-with-deep-fakes/?sh=430def513f6c
[5] CBS Morning, https://www.youtube.com/watch?v=EfREntgxmDs
Cheapfakes' implications for society
Cheapfakes can have various effects on society, undermining public debate or causing
misunderstandings, misconceptions, and the propagation of false narratives. Visual
disinformation, including cheapfakes, may seem convincing (Weikmann & Lecheler, 2023);
it can therefore be used to manipulate public opinion on important issues, particularly in
political contexts (Schick, 2020), but also to blackmail or defame people (Paris & Donovan,
2019).
Another potential consequence of cheapfakes for individuals and societies is their effect on
attitudes and perceptions. People are more exposed to cheapfakes than to deepfakes (Diya,
2024), and studies show that cheapfakes can go undetected more often than deepfakes
(Hameleers, 2024). This can create confusion and incorrect assumptions about important
topics, or increase skepticism towards reliable sources (Gërguri, 2022). Hence, an additional
unanticipated consequence is that individuals may become so confused by the existence of
visual deception that they begin to think everything is manipulated and lose trust in all
visuals, even when they are real (Weikmann & Lecheler, 2023).
Visual disinformation such as cheapfakes can contribute to social and political polarization even more than non-visual disinformation (Dan et al., 2021). It can be used to reinforce preexisting prejudices and opinions, strengthen echo chambers, and so on.
Although researchers have examined the consequences of various forms of visual disinformation, such as cheapfakes and deepfakes, there are still unanswered questions and areas of scholarly disagreement. For instance, scholars disagree on the credibility of cheapfakes compared to deepfakes. Some argue that cheapfakes are more widespread and more credible (Fazio, 2020) because they may appear more authentic to viewers, while others argue that people are more likely to believe that deepfakes are accurate and are more inclined to spread deepfakes than cheapfakes (Ahmed & Chua, 2023).
Scholarly discussion also continues about the degree to which individuals distinguish the credibility of visual disinformation (cheapfakes and deepfakes) from that of non-visual disinformation. Hameleers (2024) compared the effects of cheapfakes, deepfakes, and non-visual disinformation in a political context and concluded that neither deepfakes nor cheapfakes were seen as more believable, or had stronger effects, than the same disinformation presented in textual form. This contrasts with an earlier study, which concluded that visual disinformation is more believable than text (Hameleers et al., 2020).
Therefore, further research with larger samples is needed to fully understand the differential effects of these two forms of visual disinformation, also in comparison with textual disinformation.
These types of audiovisual manipulation also pose a serious danger to journalism. The crisis of trust in media and journalists, together with changing information-seeking habits in which social media is often the main source of information in countries worldwide (Newman et al., 2023), has created fertile ground for those who aim to shape people's perceptions and beliefs through cheapfakes. The pervasive use of cheapfakes weakens the credibility of information providers. People's confidence in institutions, journalism, and even interpersonal communication may decline as they grow less certain about the veracity of media content (Kavanagh & Rich, 2018). Training journalists to detect cheapfakes is important, but the mission is not fulfilled if the public is not educated about the phenomenon. These trends make extending digital media literacy across society much more important, and studies show that digital media literacy interventions have improved people's ability to distinguish between true and false news (Guess et al., 2020).
Conclusion
In the digital era, where technological breakthroughs make tools for modifying information readily available, cheapfakes are a worrying phenomenon. As we have seen, these deceptive techniques come in a variety of forms, including picture and video manipulation, slowed-down footage, and video acceleration. The examples given, which include image alterations, manipulated videos, and false statements, highlight the wide variety of cheapfakes that can appear. Cheapfakes are harmful when intended to influence people's opinions, although they can be entertaining when created for that purpose. As technology develops further, people need to take care to confirm the legitimacy of the content they encounter. Awareness of the possibility of manipulation and of the many strategies used enables people to use digital media more responsibly. Because cheapfakes can negatively influence public opinion and trust, it will be important to stay informed and adjust our media consumption habits as technology advances.
Future research on cheapfakes should investigate the long-term consequences of exposure to cheapfakes for individuals' trust in institutions and the media, as well as their impact on social and political polarization. It is also crucial to expand the scope of research beyond the political domain, as cheapfakes exist in other areas such as health and the economy. Understanding how social media platforms and other communication channels can be used to disseminate cheapfakes is likewise essential to grasping the full extent of their negative consequences. Furthermore, exploring how individuals process cheapfakes through systematic and heuristic processing styles can provide valuable insights for developing effective countermeasures. By examining these aspects, future studies can shed light on the complex dynamics of cheapfakes and their far-reaching implications.
References
Hameleers, M. (2024). Cheap versus deep manipulation: The effects of cheapfakes versus deepfakes in a political setting. International Journal of Public Opinion Research, 36(1). https://doi.org/10.1093/ijpor/edae004
Hameleers, M., Powell, T. E., Van Der Meer, T. G., & Bos, L. (2020). A picture paints a thousand lies? The effects and mechanisms of multimodal disinformation and rebuttals disseminated via social media. Political Communication, 37(2), 281–301. https://doi.org/10.1080/10584609.2019.1674979
Kavanagh, J., & Rich, M. D. (2018). Truth decay: An initial exploration of the diminishing
role of facts and analysis in American public life. Rand Corporation.
Newman, N., Fletcher, R., Eddy, K., Robertson, C.T., & Nielsen, R.K. (2023). Digital News
Report 2023, Reuters Institute. https://static.poder360.com.br/2023/06/Digital-News-
Report-Reuters-2023.pdf
Paris, B., & Donovan, J. (2019). Deepfakes and cheapfakes: The manipulation of audio and visual evidence. Data & Society Report. https://datasociety.net/library/deepfakes-and-cheap-fakes/
Qian, S., Shen, C., & Zhang, J. (2023). Fighting cheapfakes: Using a digital media literacy intervention to motivate reverse search of out-of-context visual misinformation. Journal of Computer-Mediated Communication, 28(1). https://doi.org/10.1093/jcmc/zmac024
Schick, N. (2020). Don't underestimate the cheapfake. MIT Technology Review. https://www.technologyreview.com/2020/12/22/1015442/cheapfakes-morepolitical-damage-2020-election-than-deepfakes/
Weikmann, T., & Lecheler, S. (2023). Visual disinformation in a digital age: a literature
synthesis and research agenda.
Deepfake Video
• Face Swap – replacing one person's face with another person's face in a video. Examples include using AI to put a celebrity's face on someone else's body.
• Lip Sync – manipulating a person's lips in a video so that they appear to say something other than what they actually said.
• Face Reenactment – modifying a person's facial expressions in a video in order to change their emotional reactions.
Video deepfake applications outside social media:
1. The film and entertainment industry – for dubbing actors, digitally recreating deceased persons, or rejuvenating characters (e.g., the technology used in Star Wars for Leia and Tarkin).
2. Advertising and marketing – creating advertisements with digital ambassadors or portraying personalities without real filming.
3. Financial scams and fraud – video deepfakes are used to impersonate company executives or public figures in attacks
Deepfake Audio – imitating a person's voice using AI
The study involved several groups, including high school students, teachers, school principals, and university students, to assess their ability to distinguish between authentic videos and deepfakes.
The results showed that between 27% and 50% of participants failed to correctly identify whether the videos were authentic, indicating a significant vulnerability to deepfakes across various segments of the population.
https://www.nature.com/articles/s41598-023-39944-3
A 2024 survey by the company Regula shows that financial losses caused by deepfake fraud vary considerably from country to country. [Chart: deepfake-related financial losses, by country]
Several studies define SYNTHETIC MEDIA as any type of content created or modified with the help of artificial intelligence or machine learning (AI/ML).
Misreport: synthetic media is a general term that refers to any type of media content generated or modified with the help of artificial intelligence.
ChatGPT Plus has over 12 million paying subscribers at USD 20 per month, up from 10 million subscribers in 2024. Subscriber retention is remarkably strong: 89% of users keep their subscription after one quarter and 74% remain subscribed after three quarters.
Global reach:
Bans: ChatGPT is banned in 15 countries, including Italy (the first to ban it), China, Iran, and Russia.
https://www.demandsage.com/chatgpt-statistics/
More than "deepfakes": "synthetic media" and disinformation
SYNTHETIC MEDIA becomes a problem when it is used to generate content that can mislead the public, that is, content that disinforms.
INTERPOL has also issued other types of notices, such as Blue and Red Notices, which focus on identifying persons and locating international criminals. The addition of the Purple Notice reflects the organization's continuing adaptation to new technological challenges...
EUROPOL: In the document "Facing Reality? Law Enforcement and the Challenge of Deepfakes" (2023), Europol notes that synthetic media refers to "media generated or manipulated using artificial intelligence," highlighting its uses both in gaming and in improving services or quality of life.
The Europol report on Serious and Organised Crime Threats in the EU 2025 is a reference document produced every four years, based on data collected from police forces across Europe. This report will play a crucial role in guiding EU security policies in the coming years.
CNTI defines synthetic media as media content (audio, images, text, and video) created or manipulated with the help of artificial intelligence (AI).
CNTI experts note that the number of deepfakes online increased tenfold from 2022 to 2023. https://sumsub.com/fraud-report-2024/
CNTI experts emphasize that, although image-manipulation techniques have existed for more than 150 years, deepfakes represent a new level of realism thanks to advances in AI, with a significant impact on public trust and the integrity of information.
https://innovating.news/article/synthetic-media-deepfakes/
In lieu of conclusions
1.0. ABSTRACT
Throughout history, philosophers have argued that things are not always as they appear. With the rise of
Machine Learning, we are faced with the proliferation of highly convincing forgeries known as Deep
Fake and Shallow Fake. Indeed, this trend raises concerns about the consequences as the line between
reality and fabrication becomes increasingly blurred. Unfortunately, the legal system, which should act as the bulwark of justice against these practices, has itself fallen victim to Deep Fake and Shallow Fake evidence being passed off in courts.
This paper endeavours to analyse the effect of Deep Fakes and Shallow Fakes on the evidential litigation system in common law countries. It focuses on evidential principles in litigation, analysing Deep Fake and Shallow Fake technology, discussing the dangers these fakes pose in court, and proposing recommendations to address the issue.
Key Words: Deep Fake, Shallow Fake, Evidence, Technology, Machine Learning, Artificial Intelligence
2.0. INTRODUCTION
A feeling of certainty or conviction, even one of great conviction, may be misleading. – David Lund (Philosopher)
In 2017, a seemingly harmless Redditor with a subreddit account1 began a trend that would question
reality as the world sees it. The Redditor, with the account name Deep Fake, started to post
pornographic videos that swapped the faces of porn actors with those of celebrities and well-known
persons.2 This trend became widespread and notoriously practised across different platforms despite
several attempts to shut it down. Shallow Fakes have been around for a while. They are manipulated or
doctored audio-visual representations used for deceptive purposes through simple, intentional editing.3
1 Subreddits are niche communities within Reddit, a social news website and forum. These communities have their own rules, subscribers, and posts, depending on the subject matter and as indicated by the URL structure. < https://www.shopify.com/ng/blog/how-to-use-reddit#:~:text=Subreddits%20are%20niche%20communities%20within,%2C%20Controversial%2C%20and%20Top%20submissions. > accessed 21 September 2022
2 'Deep Fake' (Dictionary.com, January 2020) < https://www.dictionary.com/e/tech-science/deepfake/ > accessed 21 September 2022
3 Kalev Leetaru, 'The Real Danger Today is Shallow Fakes and Selective Editing not Deep Fakes' (Forbes, 26 August 2019) < https://www.google.com/amp/s/www.forbes.com/sites/kalevleetaru/2019/08/26/the-real-danger-today-is-shallow-fakes-and-selective-editing-not-deep-fakes/amp/ > accessed 21 September 2022
Although this was not the first instance of technological face-adjustment techniques, Deep Fake and Shallow Fake schemes have moved past the malicious use of AI for pornography; they have spread into various aspects of society and become an issue of concern for several stakeholders. With the widespread use of Deep Fakes and Shallow Fakes, experts have posited that audio-visual technology may become a tool of deceit and disinformation capable of setting back, by several years, the legal system's efforts to incorporate electronic video and audio evidence.
Ironically, for techniques hinged on deceit, with potentially dastardly consequences in the wrong hands, the average person knows very little about the intricate methods of Deep and Shallow Fakes. This may be why people have become so susceptible to them. Deep Fake involves audio-visual manipulation using machine learning algorithms. The technique leverages Artificial Intelligence to create realistic impersonations by tweaking the words and actions in video and audio recordings of actual people to simulate reality.4
Deep Fake's technical workings are relatively simple; the primary technique is the Generative Adversarial Network (GAN). A GAN contains a generator and a discriminator. The generator creates a new fake image (or audio sample) drawing on real examples and passes it to the discriminator. The discriminator checks the generated sample against real data and rates how authentic it appears, and that feedback is used to refine the generator. The cycle continues until the generated output is so close to the real data that the discriminator can no longer distinguish the genuine entry from the fake one.5
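To make the generator-discriminator cycle concrete, the following is a minimal, illustrative sketch in Python using PyTorch, with an assumed flattened image size; it is not a working Deep Fake face-swap pipeline, which would add encoders, face alignment, and far larger models. The discriminator is trained to separate real from generated samples, and the generator is trained to fool it.

```python
# Minimal GAN sketch (PyTorch): a generator learns to fool a discriminator.
# Illustrative only; real deepfake pipelines use far larger, specialised models.
import torch
import torch.nn as nn

LATENT, IMG = 100, 64 * 64          # noise size and flattened image size (assumed)

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),          # outputs a fake "image" vector
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),         # probability the input is real
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_batch: torch.Tensor) -> None:
    batch = real_batch.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator: score real samples high, generated ones low.
    noise = torch.randn(batch, LATENT)
    fake_batch = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_batch), real_labels) + \
             loss_fn(discriminator(fake_batch), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator: its "feedback" is the discriminator's verdict,
    #    pushed towards "real" until the two become hard to tell apart.
    noise = torch.randn(batch, LATENT)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Example call with a random stand-in for a batch of real images:
training_step(torch.rand(16, IMG) * 2 - 1)
```

Repeated over many batches, this adversarial loop is what drives the generated output towards samples the discriminator, and eventually a human viewer, struggles to tell from the real thing.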
On the other end of the spectrum are Shallow Fakes. Unlike Deep Fakes, Shallow Fakes do not require
complex AI and Machine Learning Algorithms. Instead, creators of Shallow Fakes use image and video
editing software to effect minimal or subtle changes to the media concerned.6 There can be very little
4 Bobby Chesney & Danielle Citron, 'Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security' (2019) 107:1753 California Law Review < https://bit.ly/2C78jnQ > accessed 18 September 2022
5 Jeffrey Westling, 'Are Deep Fakes a Shallow Concern? A Critical Analysis of the Likely Societal Reaction to Deep Fakes' (The 47th Research Conference on Communication, Information and Internet Policy, USA, 25 July 2019)
6 Arnold, 'What is the Difference between a Deep Fake and Shallow Fake?' (DeepFakeNow, 21 April 2020) < https://deepfakenow.com/what-is-the-difference-between-a-deepfake-and-shallow-fake/ > accessed 18 September 2022
difference between Deep and Shallow Fake save for the techniques involved in causing the changes in
the audio-visual representations.7
Surprisingly, Deep and Shallow Fakes have some legitimate uses; they can be used for works of art, education, entertainment, business, and, in some instances, investigations.8 Sadly, their malicious uses far outweigh the legitimate ones, and this has caused increasing worry, disinformation, and widespread panic. Deep Fake and Shallow Fake technologies are becoming more accessible and are advancing considerably, with ever more realistic results. These technologies currently deceive people nearly 30% of the time.9 Moreover, internet accessibility makes sharing this material faster and more far-reaching.10
The detrimental effect of these fakes is felt even more strongly in the legal sphere. They threaten the rights to privacy, democracy, and other personal rights and may occasion irreversible injustice. Hence, their effects will be considered in subsequent sections.
A party proffers evidence in a suit to prove that the existence of a fact is more or less probable.11 This proof can take any form - documents, testimony, electronic recordings, and other tangible objects.12 Every country's legal system has a set of laws or principles that governs its receipt of evidence. These systems vary in form and structure, but this paper gives attention to only one: the common law system of evidence.
Under the common law system of evidence, which still serves as the inspiration for the law of evidence in many jurisdictions, evidence must satisfy specific rules before it is admissible in a court proceeding. Although several countries have moved away from the strictures of common law and adopted an independent
7 Ibid
8 Supra, 3
9 Kyle Wiggers, 'Carnegie Mellon Researchers Create the Most Convincing Deep Fakes Yet' (The Machine, 16 August 2018) < https://venturebeat.com/ai/carnegie-mellon-researchers-create-the-most-convincing-deepfakes-yet/ > accessed 20 September 2022
10 Supra, 2
11 'Evidence' Cornell Law School Wex < https://www.law.cornell.edu/wex/evidence > accessed 20 September 2022
12 'Basic Principles and Rules of Law of Evidence' (Law Corner, 27 July 2021) < https://lawcorner.in/basic-principles-and-rules-of-law-of-evidence/ > accessed 24 September 2022
system of law, there are similarities in rules and principles of evidence such as Authenticity, Standard of
Proof, and Best Evidence.13
4.1. Authenticity
Authenticity is a condition precedent to the admissibility of evidence in the common law system and is this paper's crux. It involves satisfying the court, through sufficient evidence, that an item of evidence is what its proponent claims it to be. If a judge is satisfied that there is a possibility, however small, that evidence is authentic, s/he is bound to advance its admissibility.14 While there are rules for ensuring that the evidence (oral and documentary)15 tendered is authentic, the court may struggle with audio-visual representations. This issue is compounded when one considers that, unlike the civil law system, the common law does not allow inquisitorial action by the court. The court cannot evaluate the weight of the evidence through its own inquiry; that option is available only through tactical cross-examination by knowledgeable counsel. There is also no leeway to subject such evidence, if admitted, to appeal for factual errors.16
The standard of proof in common law requires the accusing party to establish that the facts they allege are true.17 A recognised principle of evidence in common law is that facts admitted need not be proven.18 The party adducing evidence directs its proof to confirm that the act alleged was committed beyond all reasonable doubt, at least in a criminal proceeding. This principle raises the question: what happens if a litigant submits an incriminating Deep Fake video without the knowledge of the court?
13 Caslav Pejovic, 'Civil Law and Common Law: Two Different Paths Leading to the Same Goal' (2001) 32 VUWLR pp. 817-841
14 See R v Randall (2003) UKHL 69 @ 20. Rosemary Pattenden, 'Authenticating Things in English Law: Principles for Adducing Tangible Evidence in Common Law Jury Trials' (2009) 12 The International Journal of Evidence & Proof < https://ssrn.com/abstract=1704625 > accessed 23 September 2022
15 Documentary evidence here should be taken to be written documents, as some common law jurisdictions define electronic audio-visual evidence as documentary; see section 258 of the Evidence Act, 2011, Laws of the Federation of Nigeria 2004
16 Supra, 13
17 Kevin Clermont and Emily Sherwin, 'A Comparative View of Standards of Proof' (2002) 50:2 The American Journal of Comparative Law < https://scholarship.law.cornell.edu/facpub/222 > accessed 25 September 2022
18 Supra, 12
Would the court recognise that as proof beyond a reasonable doubt? One party is incriminated, and proof that the evidence was tampered with may not readily come to light.
The best evidence rule was directly inspired by early common law. The principle requires that the original, highest, or best evidence be presented to the court; the essence is to place legible and concrete evidence before the court.19 The rule can be traced directly to 1745, when Lord Hardwicke stated that no evidence was admissible unless it was "the best that nature will allow."20 If anything is clear, it is that the very nature of Deep and Shallow Fakes runs against the principles of evidence above. This clash has dire consequences for the legal system, which will be analysed below.
5.0. THE EFFECTS OF DEEP AND SHALLOW FAKES ON THE LEGAL EVIDENTIAL SYSTEM
Previously, the legal system had to rely heavily on witnesses and circumstantial evidence to prove an act or omission. Indeed, in the early eighteenth century the go-to method of proof was primarily written evidence.21 The reliance on oral and written evidence continued well into the nineteenth and twentieth centuries and was riddled with confusion, controversy, hearsay, and falsehood.22 One can only imagine the ease audio-visual evidence has brought to the judicial system. Various experts have regarded the paradigm shift to audio-visual technological evidence as reasonable and necessary for the legal profession.23
Sadly, while audio-visual technologies have set the steps of justice in the right direction, the danger of evidential deceit arising from fake audio-visual material is becoming quite evident. The legal system has become
19 Osaro Eghobamien, Samuel Ehiwe, and Chike Okpaleke, 'Revisiting the Best Evidence Rule: Matters Arising' (Mondaq, 16 August 2022) < https://www.mondaq.com/nigeria/contracts-and-commercial-law/1221804/revisiting-the-best-evidence-rule-matters-arising#:~:text=The%20best%20evidence%20rule%20is,the%20truth%20of%20its%20content > accessed 23 September 2022
20 See Omychund v Barker (1745) 26 ER 15
21 John Langbein, 'Historical Foundations of the Law of Evidence: A View from the Ryder Sources' (1996) 96:1168 Columbia Law Review
22 Thomas Gallanis, 'The Rise of Modern Evidence Law' (1999) 84 Iowa Law Review p. 513
23 Molly Mullen, 'A New Reality: Deep Fake Technology and the World Around Us' (2022) 48:1 Mitchell Hamline Law Review p. 219
swamped with technological evidence of all kinds and has been left to grapple with receiving and authenticating it.
Experts argue that the danger of fakes in the court system goes far deeper than the passing-off of phoney representations. The proliferation of Deep Fakes and Shallow Fakes has made it relatively easy to create and forge counterfeit evidence. Moreover, technological advancement has made it genuinely challenging to spot the differences between a fake and an accurate audio-visual representation.
Evidence tampering is one of the foremost concerns regarding the malicious use of fakes in court systems. The danger is evident in the mistrust it breeds: it encourages the mistake of dismissing what is real and cripples the basis of evidence authenticity. Herein lies the dilemma: a judicial officer with limited insight into Machine Learning and Artificial Intelligence has to determine whether the evidence presented is genuine enough to be admitted into the court's records.
The practical effect of phonies on the legal system was seen in a custody case in the United Kingdom, where one of the parties presented a Deep Fake of the other party threatening and harassing the children. It took the admission of falsehood by the erring party's lawyer for the court to order an in-depth analysis of the recording's metadata.24 Without the vehement denial by the accused party and the vigilance of his counsel, the court would not have realised that the evidence had been forged.
This incident and others call into question the authenticity of evidence admitted by the courts and the courts' capacity to sift the originality and authenticity of electronic audio-visual representations. It is clear that while it is impossible to stop the menace of fakes completely, their effects on the legal system can be limited through sifting and authentication processes. Unfortunately, many court systems have shallow standards for authenticating electronic evidence.25
Moreover, even where authentication processes are initiated, they can be prohibitively expensive, a cost many litigation systems cannot absorb. Perhaps the court could use testimony to validate the
24 Bari Weinberger, 'Looking Out for Manipulated Deep Fake Evidence in Family Law Cases' (New Jersey Law Journal, 22 February 2021) < https://www.law.com/njlawjournal/2021/02/22/looking-out-for-manipulated-deepfake-evidence-in-family-law-cases/?slreturn=20220523104512 > accessed 23 September 2022
25 George Blum, 'Authentication of Social Media Records and Communications' (2019) 70:1 ALR
audio-visual evidence, but that may make little difference, as Deep Fakes and Shallow Fakes have implications for oral testimony as well; a study has shown that falsified videos can induce false testimony.26
Deep and Shallow Fakes also call the burden of proof before the court into question: can an artificial but indiscernible audio-visual representation presented as evidence be taken as proof beyond a reasonable doubt? The answer is glaringly negative, and the same would be true if the issue of best evidence arises.
6.0. RECOMMENDATIONS
Several attempts have been made to develop technologies that detect Deep and Shallow Fakes and combat their widespread use. Sadly, technology is fluid; the closer we come to an efficient fake-detection technique, the further the existing fake technologies are developed. The available Deep Fake detection technologies are expensive and often not effective enough. Regardless, the cruel effect of Deep and Shallow Fakes cannot be ignored, especially in the legal system. Therefore, the question on everyone's mind is: what is the best way to combat Deep and Shallow Fake representations?
The menace of Deep and Shallow Fakes has been shown to cut across the board; the effects are evident for individuals, law enforcement agencies, governments, and the legal system. Fake-detection techniques have never caught up with these phonies, partly because of stakeholders' lack of collaboration and incentive. Collaboration among stakeholders is therefore the most prominent and reasonable solution. Collaboration would mean increased interest, funding, and development of detection technologies. The combined strength of crime labs, private-sector actors, academic research institutions, public companies, and all forensic-interested persons would surely advance Deep and Shallow Fake detection technology.27
Authentication methods that affirm the chain of custody and validity of an audio-visual representation are a good way to begin a new regime of authenticity. Times have changed; traditional authentication measures may no longer be enough to test the reality of fakes, as these fakes can mislead even a witness. As mentioned earlier, part of the solution is to reinforce authentication techniques in court, especially for electronic audio-visual evidence.
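One low-cost building block for such an authentication regime is cryptographic hashing of an exhibit at the moment it enters the chain of custody. The sketch below, using only the Python standard library and a hypothetical file name exhibit_video.mp4, records a SHA-256 fingerprint and later verifies that the file produced in court is bit-for-bit identical to the one originally logged. It does not prove the recording was truthful when captured, only that it has not been altered since logging, which is precisely the chain-of-custody question.

```python
# Minimal sketch: fingerprinting an exhibit for chain-of-custody purposes.
# Standard library only; "exhibit_video.mp4" is a hypothetical file name.
import hashlib, json, datetime

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def log_custody_entry(path: str, handler: str, ledger_path: str = "custody_log.json") -> dict:
    """Append a timestamped custody record; in practice the ledger itself must be tamper-evident."""
    entry = {
        "file": path,
        "sha256": fingerprint(path),
        "handled_by": handler,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with open(ledger_path, "a") as ledger:
        ledger.write(json.dumps(entry) + "\n")
    return entry

def verify_exhibit(path: str, logged_sha256: str) -> bool:
    """True only if the file is bit-for-bit identical to the logged version."""
    return fingerprint(path) == logged_sha256

if __name__ == "__main__":
    record = log_custody_entry("exhibit_video.mp4", handler="Investigating Officer")
    print("unchanged since logging:", verify_exhibit("exhibit_video.mp4", record["sha256"]))
```

A ledger of this kind only helps if the exhibit is hashed as early as possible and the log itself is kept beyond the reach of the parties, which is an institutional rather than a technical problem.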
26 Kimberly Wade, et al, 'Can Fabricated Evidence Induce False Eyewitness Testimony?' (2010) Applied Cognitive Psychology 24:7 < https://onlinelibrary.wiley.com/doi/10.1002/acp.1607 > accessed 22 September 2022
27 Federick Dauer, 'Law Enforcement in the Era of Deep Fakes' (Police Chief Online, 29 June 2022) < https://www.policechiefmagazine.org/law-enforcement-era-deepfakes/ > accessed 24 September 2022
Lastly, one must uphold the importance of a rigorous regulatory approach to the use and passing-off of Deep and Shallow Fakes. This approach would include educating the general public and legal officials on the dangers and on recognising mid-level fakes that can be discovered with relative ease. Legislatures in different jurisdictions may also update electronic authentication mechanisms to include an identification mark or watermark that distinguishes an original from a counterfeit. A final measure would be to criminalise the use and passing-off of fakes within the court's jurisdiction.
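As a simplified illustration of the watermarking idea, and not of any particular standard, the sketch below, assuming Python with Pillow and NumPy and a hypothetical frame original_frame.png, hides a short identifier in the least significant bits of an image and checks for it later; an absent or corrupted mark flags a file that is not the registered original. Production systems would instead rely on robust watermarks or cryptographically signed provenance metadata, since simple LSB marks do not survive recompression.

```python
# Simplified least-significant-bit (LSB) watermark: embed and verify a short
# identifier in an image. Illustrative only; real provenance schemes use robust
# watermarks or signed metadata. Assumes Pillow and NumPy are installed.
import numpy as np
from PIL import Image

def embed_mark(src: str, dst: str, mark: str) -> None:
    """Hide `mark` (as bits) in the least significant bits of the image's pixels."""
    pixels = np.array(Image.open(src).convert("RGB"))
    flat = pixels.flatten()
    bits = np.unpackbits(np.frombuffer(mark.encode("utf-8"), dtype=np.uint8))
    if bits.size > flat.size:
        raise ValueError("mark too long for this image")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits   # overwrite LSBs only
    Image.fromarray(flat.reshape(pixels.shape)).save(dst, format="PNG")  # lossless

def extract_mark(path: str, length: int) -> str:
    """Read back `length` characters hidden by embed_mark."""
    flat = np.array(Image.open(path).convert("RGB")).flatten()
    bits = flat[: length * 8] & 1
    return np.packbits(bits).tobytes().decode("utf-8", errors="replace")

if __name__ == "__main__":
    court_id = "EXHIBIT-2024-017"                      # hypothetical identifier
    embed_mark("original_frame.png", "registered_frame.png", court_id)
    recovered = extract_mark("registered_frame.png", len(court_id))
    print("mark intact:", recovered == court_id)
```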
7.0. CONCLUSION
The bitter truth is that Deep Fakes and Shallow Fakes are here to stay; every development in AI and Machine Learning brings a simultaneous growth in these phonies. One cannot overstate the danger they present to a rigid, often archaic profession like law, especially under the common law system. Regardless, this author advises that the legal system acknowledge the menace of Deep Fakes and Shallow Fakes and not wait until there are consistently ill-delivered judgments brought about by the admission of phoney evidence produced by these technologies.
Course: Ethics and professional integrity online
Specialization: Multimedia Production, Year II
Test II. Deepfake content and informational integrity in the era of artificial intelligence
Topics:
1. Ethical challenges online: deepfake technologies
- What a deepfake is. Deepfakes – dangerous artificial realism
- Cheap fakes – simple manipulation, big impact
- Shallow fakes – distortion through context
- The differences between deepfakes, cheap fakes, and shallow fakes.