A Systematic Review on Deepfake Technology
Abstract. The 21st century has seen technology become an integral part of
human survival. Living standards have been affected as the technology
continues to advance. Deepfake is a deep learning-powered application that
recently appeared. It allows the creation of fake images and videos that
people cannot discern from real ones by swapping one identity for another
within a single video. Deepfakes affect a wide range of domains, including
communities, organizations, security, religions, democratic processes, and
personal lives. In the case of
images and videos being presented to an organization as evidence, if the images
or videos are altered, then the whole truth will be transformed into a lie. It is
inevitable that with every new and beneficial technology, some of its adverse
effects will follow, causing world problems. There have been several instances
in which deep fakes have alarmed the entire network. During the last few years,
the number of altered images and videos is growing exponentially, posing a
threat to society. This paper explores where deepfakes are used, the impacts
they have made, and the challenges and difficulties associated with
deepfakes in this rapidly developing society.
1 Introduction
Deepfake was originally conceived by a Reddit user of the same name in late
2017 [2]. Users of this news aggregation website shared indecent videos made
with open-source face-swapping technology on the user's space. Because
deepfake creation requires the production of large amounts of video footage,
celebrities and politicians were initially the most vulnerable targets.
Researchers from the University of Washington used deepfakes to circulate
fake footage of President Barack Obama on the Internet [3], demonstrating
how such technology could be abused. Another falsified video of House
Speaker Nancy Pelosi made it appear as if she was drunkenly stumbling over
her words, and President Trump retweeted it as genuine. Although the video
had been slowed to create the effect, it was widely believed to be a true
portrayal. In another example of malicious use, two British artists created
a deepfake video of Facebook CEO Mark Zuckerberg discussing the "truth of
Facebook and who controls the future" in the style of a CBS News broadcast
[4]. The video spread rapidly worldwide on Instagram. Such deepfake trickery
is not only credible but also accessible.
Fake content is becoming more credible. Trust in photography has been
declining ever since the development of image-editing technology, yet the
proliferation of deepfakes has kept increasing over the past few years [5].
Although most deepfakes found today are derived from the original code, each
one is an entertaining thought experiment, but none is reliable [6].
This article focuses on an overview of deepfakes: what they are, how they
employ artificial intelligence to replace the likeness of one person with
another in video and other media, how they work, and the challenges and
difficulties they raise. Concerns have been raised in particular about the
use of deepfakes to create fake news and misleading, fraudulent videos.
2 Background study
Deepfakes: what are they and how do they work? A deepfake is the most widely
used and most efficient face-swapping technique for altering videos. Rather
than changing the actual face of the subject, the face of another person is
substituted while keeping the original expressions and background. There are
many different methods for creating deepfake software, but generally these
machine learning algorithms are designed to generate content based on the
personal data they are given. For instance, if a program is asked to create
a new face or replace part of a person's face, the algorithm must first be
trained. In simple words, when a person appears to commit an action they
never committed, a program is imitating their actions, speech, and emotions.
Almost all of these programs are fed enormous amounts of data, which they
use to create new data of their own. They are primarily autoencoders and
sometimes generative adversarial networks (GANs) [7].
are better than by manual methods [12]. Models using GANs require a lot of
training data and are hard to work with; they also take longer to create
images than other techniques. A GAN consists of a generator and a
discriminator. The generator produces realistic images, while the
discriminator uses convolutional layers to distinguish fake images from real
ones. GAN models are good for synthesizing images, but not for making
videos: it is hard for them to retain temporal consistency, that is, to keep
images aligned from frame to frame. In addition, the best-known audio
"deepfakes" do not employ GANs. Dessa, a Canadian AI company owned by Square,
used the voice of a talk show host to utter sentences he never said without using
GANs [12]. Today's deepfakes are primarily made by using a constellation of AI and
non-AI algorithms in combination.
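The generator-discriminator interplay described above can be sketched numerically. The following toy example illustrates the adversarial idea only, not an actual deepfake model: a one-parameter generator is trained against a logistic discriminator on one-dimensional data. All parameter names, learning rates, and step counts are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Discriminator D(x) = sigmoid(a*x + c); generator G(z) = w*z + b.
a, c = 1.0, 0.0
w, b = 1.0, 0.0
lr = 0.05

for step in range(3000):
    x_real = rng.normal(4.0, 1.0, size=32)   # "real" data: N(4, 1)
    z = rng.normal(0.0, 1.0, size=32)        # generator noise input
    x_fake = w * z + b

    # Discriminator update: ascend log D(real) + log(1 - D(fake)).
    d_real = sigmoid(a * x_real + c)
    d_fake = sigmoid(a * x_fake + c)
    a += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator update: ascend log D(fake) (non-saturating loss).
    d_fake = sigmoid(a * x_fake + c)
    g_x = (1 - d_fake) * a        # gradient of log D(fake) w.r.t. x_fake
    w += lr * np.mean(g_x * z)
    b += lr * np.mean(g_x)

print(f"generator output mean {b:.2f} vs real mean 4.0")
```

After training, the generator's output distribution drifts toward the real one, because moving its samples is the only way to keep fooling the discriminator; this is the same pressure that makes GAN-generated faces photorealistic.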
Architecture: One of the many applications of deep learning is the reduction
of dimensionality in high-dimensional data, a capability that has been
widely applied to image compression through deep autoencoders. In a deepfake
pipeline, an image is decomposed and a new image is produced based on the
characteristics of a source image [13]. Using this technique, the model can
learn latent features based on a cycle loss function. To put it another way,
the model does not require that the source and target images be related in
order to learn their features (see Fig. 1).
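The shared-encoder idea behind this architecture can be sketched with a linear autoencoder: one encoder is trained jointly with two identity-specific decoders, and a "swap" is performed by encoding a face of identity A and decoding it with B's decoder. The toy data, dimensions, and learning rate below are invented for illustration; real systems use convolutional networks on face crops.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, latent = 16, 4  # toy "image" size and latent code size

# Toy vectors standing in for face crops of two identities A and B.
X_A = rng.normal(0.0, 1.0, size=(200, dim))
X_B = rng.normal(0.5, 1.0, size=(200, dim))

E = rng.normal(0, 0.1, size=(dim, latent))     # shared encoder
D_A = rng.normal(0, 0.1, size=(latent, dim))   # decoder for identity A
D_B = rng.normal(0, 0.1, size=(latent, dim))   # decoder for identity B

lr = 0.01
for _ in range(500):
    for X, D in ((X_A, D_A), (X_B, D_B)):
        Z = X @ E                      # encode with the SHARED encoder
        err = Z @ D - X                # reconstruction error
        D -= lr * (Z.T @ err) / len(X)       # in-place decoder update
        E -= lr * (X.T @ (err @ D.T)) / len(X)  # in-place encoder update

# The "swap": encode a face of A, then decode with B's decoder.
face_a = X_A[0]
swapped = (face_a @ E) @ D_B
```

Because the encoder is shared, the latent code captures identity-agnostic structure (pose, expression), while each decoder re-renders it as its own identity; that asymmetry is what makes the decode-with-the-other-decoder step a face swap.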
3. The researcher proposes in paper [14] a method to detect deepfake images
and videos and explains the advantages of GANs over autoencoders in the
detection of deepfakes. In reviewing several papers, the authors found that
the SSTNet model achieved the highest accuracy, approximately 90% to 95%.
4. For the purpose of detecting fake images, the author reviewed in this
article the technology behind deepfakes and the methods used to create and
detect them. To detect fake images, the author used 600,000 real images and
10,000 fake images during training, while 10,000 of each type were used
during testing. The proposed Recurrent Convolutional Network (RCN) model,
which covers 1,000 videos and includes GAN-generated content, shows
promising results on the FaceForensics++ dataset, facilitating the detection
of deepfakes in these cases. The results indicate that the proposed method
can detect fake videos with promising performance, which can be further
enhanced by incorporating dynamic blink patterns. Before digital media
forensics results can be used in court, they need to be proven reliable.
5. The paper [15] discusses how deepfake videos can be detected and how
detection is used in digital media forensics. It gives detailed explanations
of the methods in the first category, based on what they mean and whether
deepfake videos exhibit physical or physiological inconsistencies. It
explains the limitations of deepfakes and forecasts that future
breakthroughs will make fake videos even more realistic and effective.
6. The author presented in this work [16] a deep learning-based method for
distinguishing AI-generated fake videos from real videos. A CNN classifier
was trained on an immense amount of real and deepfake images, and the
approach was shown to capture deepfake videos effectively on two deepfake
video datasets using deep neural networks. In Table 1 additional related
works are listed.
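Several of the works reviewed above rely on physiological cues such as blink patterns. One standard heuristic from the facial-landmark literature for measuring blinks, not necessarily the method used in the papers above, is the eye aspect ratio (EAR), sketched below; the threshold and frame-count values are illustrative.

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks p1..p6 around one eye, as in the common
    68-point landmark convention. The ratio drops sharply when the eyelid
    closes, because the two vertical distances collapse."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_series, threshold=0.21, min_frames=2):
    """Count blinks as runs of at least min_frames consecutive frames
    whose EAR falls below the threshold."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

A detector can compare the blink rate recovered this way against normal human rates; early deepfakes blinked rarely or not at all, although newer generators have learned to mimic blinking, which is why such cues are combined with other evidence.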
2. 2019, Wagner & Blewer [18], "'The Word Real Is No Longer Real':
Deepfakes, Gender, and the Challenges of AI-Altered Video" - The article
rejects inescapability arguments in favor of a critical approach to
deepfakes, arguing that visual information literacy and womanist approaches
to building artificial intelligence can avert the dissemination of violently
misogynistic deepfakes. It offers a critical examination of how deepfakes
perpetuate gendered disparities in visual information.
3. 2019, Silbey & Hartzog [19], "The Upside of Deep Fakes" - Three
traditional institutions - education, journalism, and representative
democracy - may have deep flaws that need strengthening as a result of
deepfakes. The paper discusses some of these deep problems and how they may
be neutralized by addressing them with the help of deepfakes; by doing so,
innovative solutions to other sticky social and political problems can be
unlocked.
This article illustrates how deepfakes are used to deceive and exploit
individuals, through examples that paint a dark picture of society.
Unfortunately, that is often where a technology is first used: technological
progress tends to bring forth both the good and the bad in people, pulling
us in both directions at once [32]. As a result, deepfake technology can
have both short-term and long-term consequences, and the following discusses
how this unethical practice can impact our society and those who live in it
and are threatened by it. However, by using a proper deepfake detection
algorithm, such videos or images can be identified, minimizing the impact on
the public [33]. Several studies have shown that deepfakes can influence the
sentiments and perceptions of the people around us, posing a real threat to
societies, organizations, and governments. A fake video or image circulating
on social media or other online channels can affect voting patterns during a
state or central government election.
4 Deepfake Challenges
Deepfakes are being distributed increasingly widely, yet there is no
standard for evaluating deepfake detection. The number of deepfake videos
and images online has nearly doubled since 2018. MIT analyzed 126,000
stories distributed by 3,000,000 users over 10 years and concluded that fake
news reaches 1,500 people about six times faster than real news [35].
Deepfakes can be used to hurt individuals and societies alike. Examples
include jokes made to embarrass a coworker, identity theft, incitement to
violence, pornographic videos created for someone's gratification, and
political distress. Special techniques are also used for faking terrorism
events, blackmail, and defaming individuals.
i. Deep fakes and similar technologies pose grave challenges for international
diplomacy. First, the risks posed by emerging technologies like deep fakes
elevate the importance of diplomacy.
ii. Secondly, deep fakes might be used so frequently that individuals and
governments eventually grow weary of being bombarded with manipulated videos
and images. If one of these supposedly fake videos turns out to be authentic
and authorities fail to respond quickly, it becomes a problem.
iii. As a third issue, analysts who analyze data and identify trends will be
affected because they now have to put in so much more time and effort to
even verify that something is true, leaving them with fewer resources to
actually analyze.
Several efforts have been made to increase the visual quality of deepfakes,
but a number of challenges still remain. This section discusses a few of
them.
a) Abstraction: It is often not possible to convincingly generate deepfakes
of a specific victim because the data available to train the model is
insufficient. Programs that create deepfakes use generic models based on the
data they are trained on [36]. For audiovisual content, the training process
takes hours. Driving content data is readily available, but finding
sufficient data for a specific victim can prove difficult, which requires a
generalized model to account for multiple targets that were not seen during
training or for which fewer examples were available. Moreover, each specific
target identity requires a separate retraining of the model.
d) Fraudulent use of social media: Social networks such as Twitter,
Facebook, and Instagram are the main online platforms for disseminating
audio-visual content on a global scale. To save on bandwidth or to protect
users' privacy, these platforms routinely re-compress content and strip
metadata, a manipulation known as social media laundering [39]. This
obfuscation removes clues about underlying frauds and eventually leads to
false positive detections.
7 Solution(s) to Deepfakes?
1. Discovering deepfakes is difficult. Poorly made deepfakes can of course
be spotted with the naked eye, and some detection tools can even spot the
faulty characteristics discussed previously. However, artificial
intelligence is improving continuously, and soon we will have to rely on
dedicated deepfake detectors to find them [40]. Since deepfakes can spoof
common movements such as blinks and nods, companies and providers need to
use facial authentication software that offers certified liveness detection.
In addition, authentication processes will need to adapt to guide users
through a less predictable range of live actions.
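The "less predictable range of live actions" can be pictured as a simple challenge-response check: the verifier issues a random sequence of actions and accepts only if they are performed in that order, so a pre-recorded or pre-rendered deepfake cannot anticipate the challenge. The sketch below is illustrative only; the action names, key handling, and protocol shape are assumptions, and real liveness systems additionally analyze the video itself.

```python
import hashlib
import hmac
import secrets

# Hypothetical action vocabulary and server secret, invented for this sketch.
ACTIONS = ["blink", "turn_left", "turn_right", "nod", "smile"]
SERVER_KEY = secrets.token_bytes(32)

def issue_challenge(n=3):
    """Issue an unpredictable sequence of liveness actions, plus a MAC so
    the client cannot forge or replay a challenge of its own choosing."""
    seq = [secrets.choice(ACTIONS) for _ in range(n)]
    tag = hmac.new(SERVER_KEY, ",".join(seq).encode(), hashlib.sha256).hexdigest()
    return seq, tag

def verify_response(seq, tag, observed):
    """Accept only if the challenge tag is authentic and the observed
    actions match the challenged sequence exactly, in order."""
    expected = hmac.new(SERVER_KEY, ",".join(seq).encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag) and observed == seq
```

The unpredictability is the point: a deepfake pipeline that can only replay or interpolate existing footage has no way to produce the right actions in the right order within the response window.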
9 References
1. W. Zhang, C. Zhao, and Y. Li, “A Novel Counterfeit Feature Extraction Technique for
Exposing Face-Swap Images Based on Deep Learning and Error Level Analysis,” Entropy,
vol. 22, p. 249, Feb. 2020, doi: 10.3390/e22020249.
2. S. Maurya, T. Mufti, D. Kumar, P. Mittal, and R. Gupta, “A Study on Cloud Computing:
A Review,” presented at the ICIDSSD 2020, Jamia Hamdard, New Delhi, India, Mar.
2021. doi: 10.4108/eai.27-2-2020.2303253.
3. A. Dertat, “Applied Deep Learning - Part 3: Autoencoders,” Medium, Oct. 08, 2017.
https://towardsdatascience.com/applied-deep-learning-part-3-autoencoders-1c083af4d798
(accessed May 05, 2022).
4. T. Mufti and D. Kumar, “BIG DATA: TECHNOLOGICAL ADVANCEMENT IN THE
FIELD OF DATA ORGANIZATION,” p. 4.
5. T. Mufti, N. Saleem, and S. Sohail, “Blockchain: A detailed survey to explore innovative
implementation of disruptive technology,” EAI Endorsed Transactions on Smart Cities,
vol. 4, no. 10, Art. no. 10, 2020, doi: 10.4108/eai.13-7-2018.164858.
6. J. Sharma and S. Sharma, “Challenges and Solutions in DeepFakes,” arXiv:2109.05397
[cs], Sep. 2021, Accessed: May 02, 2022. [Online]. Available:
http://arxiv.org/abs/2109.05397
7. P. Dey, “DEEP FAKES One man’s tool is another man’s weapon,” International Journal
of Scientific Research and Management, vol. 05, p. 7, Jul. 2021.
8. D. Citron and R. Chesney, “Deep Fakes: A Looming Challenge for Privacy, Democracy,
and National Security,” California Law Review, vol. 107, no. 6, p. 1753, Dec. 2019.
9. B. U. Mahmud and A. Sharmin, “Deep Insights of Deepfake Technology : A Review,” p.
12.
10. A.-C. Guei and M. Akhloufi, “Deep learning enhancement of infrared face images using
generative adversarial networks,” Appl. Opt., AO, vol. 57, no. 18, pp. D98–D107, Jun.
2018, doi: 10.1364/AO.57.000D98.
11. T. Nguyen, C. M. Nguyen, T. Nguyen, T. Duc, and S. Nahavandi, “Deep Learning for
Deepfakes Creation and Detection: A Survey,” Sep. 2019.
12. S. Lyu, “DeepFake Detection: Current Challenges and Next Steps,” arXiv:2003.09234
[cs], Mar. 2020, Accessed: May 02, 2022. [Online]. Available:
http://arxiv.org/abs/2003.09234
13. C. Vaccari and A. Chadwick, “Deepfakes and Disinformation: Exploring the Impact of
Synthetic Political Video on Deception, Uncertainty, and Trust in News,” Social Media +
Society, vol. 6, p. 205630512090340, Feb. 2020, doi: 10.1177/2056305120903408.
14. D. Gamage, J. Chen, P. Ghasiya, and K. Sasahara, “Deepfakes and Society: What lies
ahead?,” 2022.
15. A. Almars, “Deepfakes Detection Techniques Using Deep Learning: A Survey,” Journal
of Computer and Communications, vol. 09, pp. 20–35, Jan. 2021, doi:
10.4236/jcc.2021.95003.
16. M. Masood, M. Nawaz, K. Malik, A. Javed, and A. Irtaza, Deepfakes Generation and
Detection: State-of-the-art, open challenges, countermeasures, and way forward. 2021.
17. “Deepfakes, explained | MIT Sloan.”
https://mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained (accessed May 08,
2022).
18. J. Mach, “Deepfakes: The Ugly, and The Good,” Medium, Dec. 02, 2019.
https://towardsdatascience.com/deepfakes-the-ugly-and-the-good-49115643d8dd
(accessed May 05, 2022).
19. M. Albahar and J. Almalki, “DEEPFAKES: THREATS AND COUNTERMEASURES
SYSTEMATIC REVIEW,” no. 22, p. 9, 2005.
20. J. Kietzmann, L. Lee, I. McCarthy, and T. Kietzmann, “Deepfakes: Trick or treat?,”
Business Horizons, vol. 63, Dec. 2019, doi: 10.1016/j.bushor.2019.11.006.
21. Y. Li and S. Lyu, “Exposing DeepFake Videos By Detecting Face Warping Artifacts,”
arXiv:1811.00656 [cs], May 2019, Accessed: May 02, 2022. [Online]. Available:
http://arxiv.org/abs/1811.00656
22. S. Bian, W. Luo, and J. Huang, “Exposing Fake Bit Rate Videos and Estimating Original
Bit Rates,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 24, no.
12, pp. 2144–2154, Dec. 2014, doi: 10.1109/TCSVT.2014.2334031.
23. J. Botha and H. Pieterse, “Fake News and Deepfakes: A Dangerous Threat for 21st
Century Information Security,” Mar. 2020.
24. “How Deepfake Technology Impact the People in Our Society? | by Buzz Blog Box |
Becoming Human: Artificial Intelligence Magazine.” https://becominghuman.ai/how-
deepfake-technology-impact-the-people-in-our-society-e071df4ffc5c (accessed May 08,
2022).
25. “(PDF) A Study on Combating Emerging Threat of Deepfake Weaponization.”
https://www.researchgate.net/publication/344990492_A_Study_on_Combating_Emerging
_Threat_of_Deepfake_Weaponization (accessed May 02, 2022).
26. “(PDF) Deep fake : An Understanding of Fake Images and Videos.”
https://www.researchgate.net/publication/351783734_Deep_fake_An_Understanding_of_
Fake_Images_and_Videos (accessed May 02, 2022).
27. C. Jayathilaka, “(PDF) Deep Fake Technology Raise of a technology that affects the faith
among people in the society -A Literature Review,” ResearchGate.
https://www.researchgate.net/publication/351082194_Deep_Fake_Technology_Raise_of_
a_technology_that_affects_the_faith_among_people_in_the_society_-
A_Literature_Review (accessed May 02, 2022).
28. J. Yang Hui, “Preparing to counter the challenges of deepfakes in Indonesia,” Dec. 2020.
29. S. M. A. K. Chowdhury and J. I. Lubna, “Review on Deep Fake: A looming Technological
Threat,” in 2020 11th International Conference on Computing, Communication and
Networking Technologies (ICCCNT), Jul. 2020, pp. 1–7. doi:
10.1109/ICCCNT49239.2020.9225630.
30. N. Veerasamy and H. Pieterse, “Rising Above Misinformation and Deepfakes,” Mar.
2022.
31. Y. Mirsky and W. Lee, “The Creation and Detection of Deepfakes: A Survey,” ACM
Computing Surveys, vol. 54, pp. 1–41, Jan. 2021, doi: 10.1145/3425780.
32. A. Ruiter, “The Distinct Wrong of Deepfakes,” Philosophy & Technology, vol. 34, Dec.
2021, doi: 10.1007/s13347-021-00459-2.
33. M. Westerlund, “The Emergence of Deepfake Technology: A Review,” TIm Review, vol.
9, no. 11, pp. 39–52, Jan. 2019, doi: 10.22215/timreview/1282.
34. Shakil et al., “The Impact of Randomized Algorithm over Recommender System,”
Procedia Computer Science, vol. 194, pp. 218–223, Jan. 2021, doi:
10.1016/j.procs.2021.10.076.
35. “The legal implications and challenges of deepfakes.”
https://www.dacbeachcroft.com/en/gb/articles/2020/september/the-legal-implications-and-
challenges-of-deepfakes/ (accessed May 08, 2022).
36. J. M. Silbey and W. Hartzog, “The Upside of Deep Fakes,” Social Science Research
Network, Rochester, NY, SSRN Scholarly Paper 3452633, Sep. 2019. Accessed: May 02,
2022. [Online]. Available: https://papers.ssrn.com/abstract=3452633
37. T. Wagner and A. Blewer, “‘The Word Real Is No Longer Real’: Deepfakes, Gender, and
the Challenges of AI-Altered Video,” Open Information Science, vol. 3, pp. 32–46, Jan.
2019, doi: 10.1515/opis-2019-0003.
38. “What Are Deepfakes and How Are They Created?,” IEEE Spectrum, Apr. 29, 2020.
https://spectrum.ieee.org/what-is-deepfake (accessed May 05, 2022).
39. “What is a deepfake? Everything you need to know about the AI-powered fake media |
Business Insider India.” https://www.businessinsider.in/tech/how-to/what-is-a-deepfake-
everything-you-need-to-know-about-the-ai-powered-fake-media/articleshow/
80411144.cms (accessed May 08, 2022).
40. “Words We’re Watching: ‘Deepfake.’” https://www.merriam-webster.com/words-at-
play/deepfake-slang-definition-examples (accessed May 08, 2022).