
A Systematic Review on Deepfake Technology

Abstract. In the 21st century, technology has become an integral part of human life, and living standards continue to change as it advances. Deepfake is a recently emerged, deep-learning-powered technique that produces fake images and videos that people cannot distinguish from real ones, for example by swapping one identity for another within a single video. Deepfakes affect a wide range of domains, including communities, organizations, security, religion, democratic processes, and personal lives. If images or videos presented to an organization as evidence have been altered, the truth they convey becomes a lie. Every new and beneficial technology inevitably brings adverse effects that cause problems in the world, and there have been several instances in which deepfakes have alarmed entire networks. Over the last few years, the number of altered images and videos has grown exponentially, posing a threat to society. This paper explores where deepfakes are used, the impacts they have made, and the challenges and difficulties associated with them in this rapidly developing society.

Keywords: Deepfake, Deepfake videos, Artificial Intelligence.



1 Introduction

Advancements in technology have changed individuals' lives. Today's society, especially the younger generation, places firm trust in the digital world, largely through visual media. With modern tools, images, videos, and audio can be altered easily without any noticeable error. Using visual content for personal or commercial gain has become a ubiquitous practice in our digital society, and it is among the most controversial topics. In particular, a deepfake can be described as a video edited to replace the person in the original video with someone else in a way that makes it look authentic [1]. The term "deepfake" combines "deep learning" and "fake" and has become increasingly popular with the rise of artificial intelligence. Deepfakes are also called synthetic media: images, sound, or video that appear to have been produced through traditional methods but were actually composed by a sophisticated tool.

The term deepfake was originally coined by a Reddit user of the same name in late 2017 [2]. Users of this news-aggregation website shared indecent videos, created with open-source face-swapping technology, in the user's space. Because deepfakes require the production of large amounts of video footage, celebrities and politicians were initially the most vulnerable to such charades. Researchers from the University of Washington used deepfakes to circulate fake footage of President Barack Obama on the Internet, and the president was forced to explain its purpose [3]. Such technology can clearly be abused. Another falsified video of House Speaker Nancy Pelosi made it appear as if she was drunkenly stumbling over her words; President Trump retweeted it as genuine, and although the video had been manipulated to create the effect, it was widely believed to be a true portrayal. In another example of illegal use, two British artists created a deepfake video of Facebook CEO Mark Zuckerberg discussing the "truth of Facebook and who controls the future" in the style of a CBS News segment [4]. The video spread rapidly worldwide on Instagram. Such deepfake trickery is not only credible but also accessible.

Fake content is becoming more credible. Although trust in photography has declined ever since image-editing technology emerged decades ago, the proliferation of deepfakes has kept increasing over the past few years [5]. Most deepfakes found today derive from the same original code; each one is an entertaining thought experiment, but none is reliable [6].
In this article, the researcher focuses on deepfakes: what they are, how they work, how they employ artificial intelligence to replace the likeness of one person with another in video and other media, and the challenges and difficulties they present. Concerns have been raised about the use of deepfakes to create fake news and misleading, fraudulent videos.

2 Background study

Deepfakes: what are they and how do they work? A deepfake is the most widely used and most efficient face-swapping technique for altering videos. Rather than changing the actual face of the subject, the face of another person is substituted while the original expressions and background are kept. There are many different methods for creating deepfake software, but in general these machine learning algorithms are designed to generate content from personal data supplied as input. For instance, if a program is asked to create a new face or replace part of a person's face, the algorithm must first be trained. Put simply, a program mimics a person's actions, speech, and emotions, making it appear that the person committed an action they never actually did. Almost all of these programs are fed enormous amounts of data, which they use to create new data of their own. They are primarily autoencoders and sometimes generative adversarial networks (GANs) [7].

Autoencoders: An autoencoder is an unsupervised neural-network technique that trains the network to ignore signal noise in order to learn efficient data representations (encodings). Denoising images, compressing images, and generating images are just some of its applications [8]. An autoencoder produces output similar to its training data but not exactly identical to the input. An autoencoder can be used in a deepfake generator in two ways: by training it on the facial features of the actor, or on the target image [9]. The encoder is also used for image compression and, sometimes, even image generation. Autoencoders are not the whole deepfake generation pipeline, but they are its most crucial part. There is no limit to how deep an autoencoder architecture can be, and the number of nodes per layer typically decreases as the layers get deeper [10]. Autoencoders are very popular in introductory deep-learning courses as a dimensionality-reduction technique and are often used as teaching material. The loss function can be either mean squared error or binary cross-entropy. Training requires many iterations, sometimes over 50,000, and without a powerful GPU it can take days; cloud-based services are therefore beneficial.
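To make the encode-compress-decode idea concrete, the following NumPy sketch trains a tiny linear autoencoder with a mean-squared-error loss. It is a toy stand-in for the deep convolutional autoencoders used in real deepfake tools; all dimensions, learning rates, and iteration counts are illustrative assumptions.

```python
import numpy as np

# Toy autoencoder: an encoder compresses each input into a smaller latent
# code, a decoder reconstructs the input from that code, and training
# minimizes the mean-squared reconstruction error.

rng = np.random.default_rng(0)

n_in, n_latent = 16, 4                          # input dim > latent dim (compression)
W_enc = rng.normal(0, 0.1, (n_in, n_latent))    # encoder weights
W_dec = rng.normal(0, 0.1, (n_latent, n_in))    # decoder weights
X = rng.normal(size=(64, n_in))                 # toy stand-in for image data
lr = 0.01

def forward(X):
    Z = X @ W_enc        # encode: project into the latent space
    X_hat = Z @ W_dec    # decode: reconstruct from the latent code
    return Z, X_hat

initial_loss = np.mean((forward(X)[1] - X) ** 2)

for step in range(2000):
    Z, X_hat = forward(X)
    err = X_hat - X                                  # reconstruction error
    W_dec -= lr * (Z.T @ err) / len(X)               # gradient step on the decoder
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)   # gradient step on the encoder

final_loss = np.mean((forward(X)[1] - X) ** 2)
print(f"reconstruction MSE: {initial_loss:.3f} -> {final_loss:.3f}")
```

Because the latent dimension is smaller than the input dimension, the network is forced to learn a compressed representation, which is exactly the property deepfake generators exploit when they decode one person's latent code with another person's decoder.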

Generative Adversarial Networks (GANs): The creation and distribution of fake images was disrupted by a newer class of neural network, the Generative Adversarial Network (GAN) [11]. GANs were introduced by Ian Goodfellow in 2014 as a relatively new module for generating fake images. With GANs, the fake-image creation process is automated, and the results are better than those of manual methods [12]. GAN models require a lot of training data, are hard to work with, and take longer to create images than other techniques. A GAN consists of a generator and a discriminator: the generator produces realistic images, while the discriminator uses convolutional layers to distinguish fake images from real ones. GAN models are good at synthesizing images but not at making videos; they struggle to retain temporal consistency, that is, to keep images aligned from frame to frame. In addition, the best-known audio "deepfakes" do not employ GANs: Dessa, a Canadian AI company owned by Square, used the voice of a talk-show host to utter sentences he never said without using GANs [12]. Today's deepfakes are primarily made using a constellation of AI and non-AI algorithms in combination.
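The generator/discriminator interplay described above can be sketched structurally in a few lines. This is a single-step NumPy illustration of the opposing binary cross-entropy losses in the min-max game, not a working training run; all network shapes and the toy data are illustrative assumptions.

```python
import numpy as np

# Structural GAN sketch: a generator maps random noise to candidate samples,
# a discriminator scores samples as real (1) or fake (0), and the two sides
# optimize opposing binary cross-entropy losses.

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

W_g = rng.normal(0, 0.5, (8, 2))   # generator: 8-d noise -> 2-d sample
W_d = rng.normal(0, 0.5, (2, 1))   # discriminator: 2-d sample -> logit

def generator(z):
    return z @ W_g

def discriminator(x):
    return sigmoid(x @ W_d).ravel()    # probability that x is real

real = rng.normal(loc=3.0, size=(32, 2))     # toy "real" data
fake = generator(rng.normal(size=(32, 8)))   # generated samples

clip = lambda p: np.clip(p, 1e-7, 1 - 1e-7)  # keep log() well-defined
p_real, p_fake = clip(discriminator(real)), clip(discriminator(fake))

# The discriminator wants D(real) -> 1 and D(fake) -> 0 ...
d_loss = -np.mean(np.log(p_real) + np.log(1.0 - p_fake))
# ... while the generator wants D(fake) -> 1 (to fool the discriminator).
g_loss = -np.mean(np.log(p_fake))

print(f"discriminator loss: {d_loss:.3f}, generator loss: {g_loss:.3f}")
```

In a full GAN both weight matrices would be deep networks updated alternately from these two losses; the discriminator pushing one way and the generator the other is what drives the generated samples toward realism.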

Architecture: One of the many applications of deep learning is dimensionality reduction of high-dimensional data, a capability widely applied to image compression through deep autoencoding. In deepfake generation, an image is decomposed and a new image is produced based on the characteristics of a source image [13]. Using this technique, the model learns the latent features via a cycle loss function; in other words, the source and target images need not be related for the model to learn their features (see Fig. 1).

1. An autoencoder model consists of two neural networks: an encoder and a decoder. The output is created in three phases: encoder, latent space, and decoder. The encoded image is passed into the latent space, where patterns and relationships between the data points are learned. Finally, the decoder reconstructs the image from its latent-space representation, recreating it as closely as possible to the original [14].
2. A conventional GAN model consists of two neural networks: a generator and a discriminator, trained with the min-max method: the minimum (0) represents a false output, while the maximum (1) represents a correct output. GANs are well suited to generating new image and video data because the discriminator must be driven as close to the maximum value as possible to yield a realistic-looking deepfake [14].
Fig. 1. Architecture Model of Deep Fake

3. The researchers in paper [14] propose a method to detect deepfake images and videos and explain the advantages of GANs over autoencoders in deepfake detection. Reviewing several papers, the authors found that the SSTNet model achieved the highest accuracy, approximately 90% to 95%.
4. For the purpose of detecting fake images, the author reviewed the technology behind deepfakes and the methods to create and detect them. To detect fake images, 600,000 real images and 10,000 fake images were used during training, and 10,000 of each type during testing. The proposed Iterative Convolution (RCN) model, which covers 1,000 videos and includes a GAN, shows promising results on the FaceForensics++ dataset, facilitating deepfake detection in these cases. The results indicate that the proposed method can detect fake videos with promising performance, which can be enhanced by incorporating dynamic blink patterns. Before digital media forensics results can be used in court, they need to be proven reliable.
5. The paper [15] discusses how deepfake videos can be detected and how detection is used in digital media forensics. The methods in the first category are explained in detail, based on what they mean and on whether deepfake videos exhibit physical or physiological cues. The paper explains the limits of deepfakes and forecasts that future breakthroughs will make fake videos even more realistic and effective.
6. In work [16] the author presents a deep-learning-based method for distinguishing AI-generated fake videos from real ones, trained on an immense number of real and deepfake images. A CNN classifier was trained on this profusion of real and deepfake depictions. Evaluated on two sets of deepfake video datasets, the work shows that deepfake videos can be effectively captured by deep neural networks. Additional related works are listed in Table 1.

Table 1. Literature Survey

1. Zhang et al., 2020 [17]. "A Novel Counterfeit Feature Extraction Technique for Exposing Face-Swap Images Based on Deep Learning and Error Level Analysis."
 Using deep learning and error level analysis (ELA) to identify counterfeit features, a new technique for extracting fake facial features was developed to separate DeepFake-generated images from real ones.
 The proposed technique obtains accurate counterfeit features, enabling it to outperform direct detection techniques in simplicity and efficiency.

2. Wagner & Blewer, 2019 [18]. "'The Word Real Is No Longer Real': Deepfakes, Gender, and the Challenges of AI-Altered Video."
 The article shuns inevitability arguments in favor of a critical approach: visual information literacy and womanist approaches to building artificial intelligence can avert the dissemination of abusive deepfakes.
 It critically examines how deepfakes perpetuate gendered disparities in visual information.

3. Silbey & Hartzog, 2019 [19]. "The Upside of Deep Fakes."
 Three traditional institutions (education, journalism, and representative democracy) may have deep flaws that need strengthening as a result of deepfakes. The paper discusses these deep problems and how they may be neutralized by addressing them with deepfakes in mind, unlocking innovative solutions to other sticky social and political problems.

4. Westerlund, 2019 [20]. "The Emergence of Deepfake Technology: A Review."
 Based on 84 online news articles, this work inspects what deepfakes are, who makes them and where they come from, their advantages and disadvantages, what examples exist, and how to combat them.
 Deepfakes are examined comprehensively, and entrepreneurs with expertise in cybersecurity and artificial intelligence can leverage these techniques.

5. Ruiter, 2021 [21]. "The Distinct Wrong of Deepfakes."
 Deepfakes raise ethical concerns about blackmail, intimidation, and sabotage that can result in ideological incitement, with broader implications for trust and accountability.
 A deepfake's moral status can be determined by three factors: (i) whether the deepfaked person objects to how they are presented; (ii) whether it deceives viewers; and (iii) the intention behind its creation.

6. Mirsky & Lee, 2021 [22]. "The Creation and Detection of Deepfakes: A Survey."
 Provides a detailed look at how deepfakes work and how to detect them.
 Presents the reader with a comprehensive view of: (1) current research on creating and detecting deepfakes; (2) technological advancements in this area; (3) issues with existing defense solutions; (4) potential improvements to current solutions; and (5) domains that need further research.

7. Veerasamy & Pieterse, 2022 [23]. "Rising Above Misinformation and Deepfakes."
 Examines the governance, societal, and technical issues surrounding deepfakes.
 Uses five factors to identify measures to reduce and mitigate the spread of deepfakes; together, these measures can help reduce the threat and provide a holistic understanding of the risks and impact of deepfakes.

8. Chowdhury & Lubna, 2020 [24]. "Review on DeepFake: A Looming Technological Threat."
 An end-to-end deepfake video detection program can train on and interpret frame sequences with an LSTM structure.
 Calls for more effective solutions to counter deepfakes and stop data frame-ups.

9. Jayathilaka, 2021 [25]. "Deep Fake Technology: Rise of a Technology That Affects the Faith Among People in the Society - A Literature Review."
 Deepfake technology alters face swaps, edits videos, and manipulates audio and video.
 Cybersecurity measures can protect against thieves and provide security and privacy.

10. Botha & Pieterse, 2020 [26]. "Fake News and Deepfakes: A Dangerous Threat for 21st Century Information Security."
 Emphasizes the generation and detection of in-depth fake news and counterfeits.
 Describes hoax news and deepfakes as well as the technologies and tools that can be used to generate and identify them.

11. Bian et al., 2014 [27]. "Exposing Fake Bit Rate Videos and Estimating Original Bit Rates."
 To expose fake-bit-rate videos and estimate their native bit rates, a 16-D feature vector based on the first-digit law is used to expose statistical artifacts, including requantization artifacts in the frequency domain (12-D) and the spatial domain (4-D).
 The method was evaluated on video sequences across four different sizes and two typical compression schemes (MPEG-2 and H.264/AVC) and shown to be effective.

12. Li & Lyu, 2019 [28]. "Exposing DeepFake Videos By Detecting Face Warping Artifacts."
 Convolutional neural networks (CNNs) can effectively recognize the artifacts left in a video because DeepFake can only produce images of limited resolution.
 Such artifacts can be simulated via simple image-processing operations on an image, rendering it a negative training example.
 Because these visual artifacts appear in DeepFake videos from a variety of sources, the method is more robust.

13. Kietzmann et al., 2019 [29]. "Deepfakes: Trick or Treat?"
 As a strategic approach to deepfake risk management, R.E.A.L. emphasizes documenting original content, exposing deepfakes as early as possible, and protecting legal rights.
 The R.E.A.L. framework helps organizations understand the risks and opportunities associated with deepfakes.

14. Lyu, 2020 [30]. "DeepFake Detection: Current Challenges and Next Steps."
 With continuing technology advances, fake videos will improve in quality and their generation will become more efficient. DeepFake videos are created by artificial intelligence, and their detection techniques belong to digital media forensics. Several recent studies have demonstrated that GAN models can recover facial details successfully.
 Video synthesized from realistic samples produced in one tool can resemble real videos ever more closely.

15. Masood et al., 2021 [31]. "Deepfakes Generation and Detection: State-of-the-Art, Open Challenges, Countermeasures, and Way Forward."
 With open-source trained models in TensorFlow or Keras, affordable computing infrastructure, and the rapid evolution of deep-learning (DL) methods, especially Generative Adversarial Networks (GANs), creating deepfakes has become increasingly accessible.
 The study examined existing tools and machine learning (ML) approaches for the generation and detection of audio and video deepfakes.

16. Sharma & Sharma, 2021 [32]. "Challenges and Solutions in DeepFakes."
 Models were trained to discriminate real from fake faces on a dataset of 140k faces: 70k real faces collected from Flickr by Nvidia and 70k fake faces sampled from 1 million faces generated by StyleGAN.
 The ResNet50 model achieved 95.90% accuracy and the VGG16 model 87.50%, making ResNet50 notably more effective at detecting deepfakes.
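Several of the surveyed detectors (e.g. the CNN-based methods in entries 12 and 16 above) frame detection as training a binary classifier on large sets of real and fake face images. As a minimal stand-in for a CNN, the sketch below trains a logistic-regression classifier on two synthetic feature clusters representing "real" and "fake" frames; the data, dimensions, and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Minimal binary real-vs-fake training loop: logistic regression on toy
# feature vectors standing in for CNN features of real and fake frames.

rng = np.random.default_rng(2)

real = rng.normal(loc=+1.0, size=(200, 10))   # toy features of real frames
fake = rng.normal(loc=-1.0, size=(200, 10))   # toy features of fake frames
X = np.vstack([real, fake])
y = np.concatenate([np.ones(200), np.zeros(200)])   # 1 = real, 0 = fake

w, b, lr = np.zeros(10), 0.0, 0.1

for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(real)
    w -= lr * (X.T @ (p - y)) / len(y)       # cross-entropy gradient step
    b -= lr * np.mean(p - y)

pred = (X @ w + b) > 0                        # decision threshold at P = 0.5
acc = float(np.mean(pred == y))
print(f"training accuracy: {acc:.3f}")
```

The surveyed systems replace the hand-made clusters with learned CNN features and report accuracy on held-out test frames, but the training objective is the same binary cross-entropy shown here.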

3 Deepfake Impact on Individuals, Organizations, and Governments

This section illustrates, through examples, how deepfakes are used to deceive and exploit individuals, painting a dark picture of society; unfortunately, that is often where a technology is first used. Technological progress seems to bring forth both the good and the bad in people, pushing us in both directions at once [32]. Deepfake technology therefore has the potential to cause both short-term and long-term consequences, and this section describes how this unethical practice can affect our society and those threatened by it. With a proper deepfake detection algorithm, however, such videos or images can be identified, minimizing the impact on the public [33]. Several studies have shown that deepfakes can influence people's sentiments and perceptions, posing a real threat to societies, organizations, and governments. A fake video or image circulating on social media or other online channels can, for example, affect voting patterns during a state or central government election.

Among the systemic risks, deepfakes can manipulate civil discourse, interfere with elections and national security, and undermine public trust, eventually eroding public faith in journalists and public institutions. The harms they cause to individuals and companies include false endorsements, fraudulent submission of documentary evidence, extortion, harassment, and reputational damage. The economic harm is no less noteworthy, ranging from false advertisements and fraud to clients' loss of creative control. In short, as well as posing systemic risks to social and political institutions, deepfakes can sabotage elections and erode trust in government in general [34].

4 Deepfake Challenges:

Deepfakes are being distributed at an increasing rate, yet there is no standard for evaluating deepfake detection. The number of deepfake videos and images online has nearly doubled since 2018. MIT analyzed 126,000 stories spread by 3,000,000 users over 10 years and concluded that false news reached 1,500 people about six times faster than real news [35]. A deepfake can be used to hurt individuals as well as societies. Examples include jokes meant to embarrass a coworker, identity theft, incitement to violence, pornographic videos created for someone's gratification, and political distress. Special techniques are also used to fake terrorism events, blackmail, and defame individuals.
i. Deep fakes and similar technologies pose grave challenges for international
diplomacy. First, the risks posed by emerging technologies like deep fakes
elevate the importance of diplomacy.

ii. Secondly, deepfakes might become so frequent that individuals and governments grow weary of being bombarded with manipulated videos and images. If a video dismissed as fake then turns out to be authentic and authorities fail to respond quickly, that becomes a problem.

iii. Third, analysts who analyze data and identify trends will be affected: they must now invest far more time and effort simply to verify that something is true, leaving fewer resources for the analysis itself.

5 Generating Deepfakes: Challenges

Several efforts have been made to increase the visual quality of deepfakes, but a number of challenges remain. This section discusses a few of them.
a) Abstraction: Often it is not possible to convincingly generate deepfakes for a specific victim because the data available to train the model is insufficient. Programs that create deepfakes use generic models based on their training data [36]. For audiovisual content, the training process takes hours. Driving-content data is readily available, but finding sufficient data for a specific victim can prove difficult, requiring a generalized model to account for targets that were not seen during training or for which few examples exist. Moreover, each specific target identity requires retraining the model.

b) Identifier condensation: In reconstruction tasks driven by a source identity, preserving the target identity is difficult when source and target are clearly mismatched, especially when matching assumes a single identity but training draws on multiple identities [36].

c) Training pairs: Producing high-quality output from a trained supervised model requires paired data, that is, matched examples in which similar inputs are aligned with the outputs they should yield.

d) Occlusion: Occlusion occurs when facial features of the source or victim are obscured by hands, hair, glasses, or any other object, and it is a major challenge in deepfake generation. The generated image can be distorted by the hidden object or by occlusion of the face and eye regions.

e) Artificially realistic audio: Despite improvements, audio quality still needs work. False emotions, unnatural pauses, breathiness, and the lack of a natural voice in the target make audio deepfakes challenging to pull off [37].

6 Detecting Deepfakes: Challenges

Despite impressive advances in deepfake detection techniques, several challenges remain; this section discusses the main ones.

a) Dataset standard: A major factor in the development of deepfake detection techniques is the availability of large databases of deepfakes. In these databases, various artifacts can be observed, such as temporal flickering during speech, blurriness around facial regions, over-smoothed texture or lack of detail in the facial texture, limited head rotation, or the absence of face-obscuring objects. These artifacts result from imperfect manipulation steps. Low-quality manipulated content is barely persuasive and creates little real impact, so even if detection approaches succeed on such datasets, there is no guarantee they will perform well on convincing deepfakes in the wild [38].

b) Analyzing performance: Today's deepfake detection methods are formulated as binary classification problems in which each sample is either real or fake. However, videos can be edited in other ways as well, so content that is not detected as manipulated cannot be assumed genuine with certainty. Moreover, deepfake content can be altered along several dimensions at once (e.g., audio and video), so a single label may not always be accurate. To meet real-world scenarios, a multiclass/multi-label and local classification/detection scheme should be applied to visual content containing multiple faces, where usually one or more of them are manipulated over an entire segment of frames.
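The multi-label idea above can be sketched as independent per-aspect scores instead of one global real/fake bit. The aspect names, logit values, and 0.5 threshold in this sketch are illustrative assumptions, not taken from any specific detector.

```python
import numpy as np

# Each aspect of a clip (visual track, audio track, individual faces) gets
# its own manipulation score, so a clip with a fake video track but genuine
# audio is labeled precisely rather than with a single real/fake tag.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

aspects = ["visual", "audio", "face_0", "face_1"]
logits = np.array([2.1, -1.5, 3.0, -0.2])   # toy per-aspect detector outputs

scores = sigmoid(logits)                     # per-aspect P(manipulated)
flags = {a: bool(s > 0.5) for a, s in zip(aspects, scores)}
print(flags)   # visual and face_0 flagged; audio and face_1 not
```

A binary detector would have to collapse these four judgments into one bit, losing exactly the localization that the challenge described above calls for.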

c) Unfairness and distrust: Existing deepfake datasets have been found to be biased, containing imbalanced data across races and genders, and the detection techniques built on them inherit that bias. Very little work has yet been done to fill this gap, so it is urgent for researchers to develop methods that improve the data and make detection algorithms fairer.

d) Fraudulent use of social media: Social networks such as Twitter, Facebook, and Instagram are the main online platforms for disseminating audio-visual content on a global scale. To save bandwidth or protect users' privacy, these platforms routinely re-process uploads, a practice known as social media laundering [39]. This obfuscation removes clues about underlying fraud and eventually leads to false positive detections.

7 Solution(s) to Deepfakes?

1. Discovering deepfakes is difficult. Simple, poorly made deepfakes can of course be spotted by the naked eye, and some detection tools can spot the faulty characteristics discussed previously. However, artificial intelligence is improving continuously, and soon we will have to rely on deepfake detectors to catch what we cannot [40]. Since deepfakes can spoof common movements like blinks and nods, companies and providers need facial authentication software that offers certified liveness detection. In addition, authentication processes will need to guide users through a less predictable range of live actions.

2. Active methods can be used with passive methods for high-risk/high-value


transactions, such as money transfers or changes to account information.
Active methods include two-factor authentication using a one-time token or
verification through an alternate channel like SMS.

3. Account holders' identities may also need to be reconfirmed at various points after their accounts have been established. For instance, some companies request identity verification every time a dormant account is reactivated, for high-value transactions, or whenever passive analytics indicate high fraud potential.

8 Conclusion and Future Scope

 Although "sticks and stones" might suggest that words are harmless, lies have always been capable of causing significant harm to individuals, organizations, and society as a whole. From that perspective, deepfakes appear to be just a technology-powered twist on an age-old problem. Although new developments in deepfake generation keep improving the quality of fake videos, people are becoming distrustful of online media content as a result. Several techniques for manipulating images and videos have emerged with the development of artificial intelligence. While several legal or technological measures could mitigate the threat, none will completely remove it. This paper has focused on the real challenges of generating deepfakes, on detection methods, and on solutions to deepfakes. Introducing robust, scalable, and generalizable methods needs to be the focus of future research. This is a difficult problem with no easy solutions.
 The proposed approach may have three stages. First, a method focused primarily on the altered faces in a video tests the visual track. Once the video has been processed by this classifier, it is passed to a second classifier that tests for audio changes. Based on the results of both classifiers, we can determine whether the clip is fake. Because today's fakes are edited in both audio and video, a technique is needed that can detect and test both. Many scholarly papers and articles about this technology remain to be covered to gain a deeper understanding of deepfake generation techniques; we examined the articles only briefly and tried to present the important points, whereas an extensive study would provide a more comprehensive analysis. Deeper analysis of this technology will provide more opportunities for research in this area.
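The first two stages of the approach above can be sketched as a simple score-fusion rule; the classifier scores and the 0.5 threshold are illustrative assumptions.

```python
# A visual classifier scores the face track and an audio classifier scores
# the voice track; the clip is flagged as fake if either modality looks
# manipulated. Scores are probabilities of manipulation in [0, 1].

def fuse(p_video_fake: float, p_audio_fake: float, threshold: float = 0.5) -> bool:
    return max(p_video_fake, p_audio_fake) > threshold

print(fuse(0.9, 0.1))   # True: visual manipulation caught
print(fuse(0.2, 0.8))   # True: audio manipulation caught
print(fuse(0.2, 0.3))   # False: both modalities look clean
```

A real system would replace the two scalar inputs with the outputs of trained video and audio classifiers, and might learn the fusion rule itself rather than fixing a threshold.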

