DF Report
SEMINAR-2 REPORT
Submitted by
MADHUJNA VALLURU (RA2111003020124)
DEEPIKA SENTHIL KUMAR (RA2111003020139)
SAI RISHITHA BADDALA (RA2111003020152)
CINTILLA JEBASEN (RA2111003020163)
Under the guidance of
Mrs. J. Juslin Sega
Mrs. G. Gangadevi
(Assistant Professors, Department of Computer Science and Engineering)
In partial fulfillment for the award of the degree
of
BACHELOR OF TECHNOLOGY
in
COMPUTER SCIENCE AND ENGINEERING
of
COLLEGE OF ENGINEERING AND TECHNOLOGY
BONAFIDE CERTIFICATE
SIGNATURE SIGNATURE
Submitted for the Seminar-2 Viva Voce Examination held on.............................at SRM
Institute of Science and Technology, Ramapuram, Chennai-600089.
EXAMINER 1 EXAMINER 2
ABSTRACT
TABLE OF CONTENTS
ABSTRACT
LIST OF FIGURES
LIST OF ACRONYMS AND ABBREVIATIONS
1 INTRODUCTION
2 PROJECT DESCRIPTION
3 DESIGN
6 REFERENCES
LIST OF FIGURES
LIST OF ACRONYMS AND ABBREVIATIONS
AI - Artificial Intelligence
CNN - Convolutional Neural Network
RNN - Recurrent Neural Network
GAN - Generative Adversarial Network
LSTM - Long Short-Term Memory
ResNet - Residual Network
Chapter 1
INTRODUCTION
The practice of swapping faces in photographs has a rich history dating back over one hundred
and fifty years. Over time, film and digital imagery have exerted a profound influence on
individuals and societal discourse. In the past, creating convincing fake images or tampering
with videos required specialized knowledge or expensive computing resources. However, with
the emergence of deepfake technology, the landscape has dramatically shifted. Deepfakes
represent a new frontier in media manipulation, capable of producing incredibly convincing face-
swapped videos. What makes Deepfakes particularly alarming is that they can be generated using
consumer-grade hardware like a GPU and readily available software packages. This accessibility
has led to a surge in their popularity, both for harmless parody videos and for malicious purposes
such as targeted attacks on individuals or institutions. The widespread availability of tools to
create deepfakes has underscored the urgent need for automated detection methods. While digital
forensics experts can analyze individual videos for signs of manipulation, the sheer volume of
videos uploaded to the internet and social media platforms daily makes manual scrutiny
impractical. As technology continues to advance, particularly in image, video, and audio editing
capabilities, the potential for creating and controlling sophisticated content grows exponentially.
Deepfakes, in particular, have garnered attention for their ability to seamlessly replace a person's
face in a video with another, creating hyper-realistic digital imagery. While they have
applications in entertainment, art, and education, they also pose significant challenges, especially
when used to spread misinformation or cause harm on social media platforms.
In this context, our project aims to address the growing concerns surrounding deepfakes by
developing automated detection methods. By leveraging advancements in deep learning and
artificial intelligence, we seek to create a system capable of identifying and flagging deepfake
videos, thereby helping to mitigate the risks associated with their proliferation on digital
platforms.
Furthermore, the solution must prioritize real-time processing capabilities to enable timely
identification and mitigation of DeepFake threats across various online platforms and
communication channels.
Overall, the goal is to devise a comprehensive DeepFake detection framework that can reliably
discern synthetic media from authentic content, thereby safeguarding the integrity of digital
information and protecting individuals and organizations from the detrimental impacts of
misinformation and deception.
2. Computer Vision:
Facial recognition: DeepFake detection involves analyzing facial features and
expressions to identify inconsistencies or manipulations that indicate synthetic
media.
Image and video processing: Techniques such as frame analysis, optical flow
estimation, and motion detection are essential for identifying anomalies in DeepFake
videos.
3. Multimedia Analysis:
Audio processing: DeepFake detection may involve analyzing audio tracks to detect
anomalies in speech patterns, voice characteristics, and background noise.
Metadata analysis: Examining metadata associated with multimedia files can provide
valuable insights into their authenticity and origin.
4. Cybersecurity:
Digital forensics: DeepFake detection often overlaps with forensic analysis
techniques used to uncover evidence of tampering or manipulation in multimedia
content.
Adversarial attacks: Understanding potential vulnerabilities in DeepFake detection
models and developing defenses against adversarial attacks is crucial for ensuring the
reliability of detection systems.
Chapter 2
PROJECT DESCRIPTION
3. Capsule-Forensics:
This method employs capsule networks, an alternative to traditional convolutional neural
networks (CNNs), for detecting deepfakes. Capsule networks are designed to better
capture hierarchical relationships in data, which can improve detection accuracy.
4. XceptionNet:
This deep learning-based approach focuses on the subtle artifacts and inconsistencies left
behind in deepfake images. It utilizes an XceptionNet architecture, a type of
convolutional neural network, to detect these anomalies.
5. Audio-Visual Approaches:
Some methods combine both visual and auditory cues for deepfake detection. By
analyzing both video and audio streams simultaneously, these approaches aim to improve
detection accuracy and robustness.
6. Feature-based methods:
Feature-based methods analyze specific features or patterns in images or videos to detect
deepfakes. These features may include facial landmarks, eye blinking patterns, or
inconsistencies in lighting and shadows.
7. GAN-based Detection:
Some approaches leverage generative adversarial networks (GANs), the same technology
used to create deepfakes, to detect them. By training a discriminator network to
distinguish between real and fake samples, these methods can effectively identify
deepfake content.
8. Lip-Sync Detection:
Deepfakes often struggle with accurately synchronizing lip movements with
accompanying audio. Lip-sync detection methods exploit this weakness by analyzing
discrepancies between lip movements and speech in videos.
9. Ensemble methods:
Ensemble methods combine multiple detection algorithms to improve overall detection
accuracy and robustness. By leveraging the strengths of different approaches, ensemble
methods can achieve better performance than any single method alone.
3. Title: DeepFake Detection for Human Face Images and Videos
   Authors: Asad Malik, Minoru Kuribayashi, Sani M. Abdullahi, Ahmad Neyaz Khan
   Techniques: Deep Neural Networks, AI, ML
1. Limited Coverage:
Some deepfake detection websites may focus on specific types of deepfakes or use cases,
potentially missing others. For example, a platform might be more adept at detecting
face-swapping deepfakes but less effective at identifying voice manipulation or other
types of synthetic media.
2. Accuracy Concerns:
While many deepfake detection websites employ advanced algorithms, they are not
infallible. False positives or false negatives may occur, leading to incorrect identifications
or missed detections. The effectiveness of detection can vary depending on the
sophistication of the deepfake and the quality of the analysis.
3. Resource Intensive:
Deepfake detection algorithms can be computationally intensive, requiring significant
processing power and time to analyze media files. This can lead to delays or limitations
in the number of files that can be analyzed simultaneously on a website, affecting user
experience, especially during periods of high demand.
4. Privacy Risks:
Users submitting media to deepfake detection websites may expose sensitive information
or personal data. While most platforms prioritize user privacy and data security, there is
always a risk that uploaded content could be mishandled or accessed by unauthorized
parties.
5. Scalability Challenges:
As the volume and complexity of deepfakes continue to grow, scalability becomes a
significant concern for detection websites. Ensuring timely and accurate analysis of a
large number of media files requires robust infrastructure and ongoing optimization
efforts.
6. Adversarial Evasion:
Deepfake creators actively work to evade detection algorithms by exploiting
vulnerabilities or weaknesses in existing methods. As a result, detection websites must
continuously adapt and improve their algorithms to stay ahead of emerging threats.
2.4 Software Requirements
Chapter 3
DESIGN
3.1 Proposed System
The proposed deepfake detection system leverages techniques such as analyzing facial
inconsistencies, detecting unnatural eye movements, examining inconsistencies in audio and
visual elements, utilizing blockchain for tamper-proofing, and employing machine learning
algorithms trained on large datasets of both real and fake content to differentiate between them.
These approaches aim to enhance the robustness and reliability of deepfake detection systems.
Recent advancements in deepfake detection have focused on developing sophisticated algorithms
and techniques to combat the proliferation of AI-generated synthetic media. One proposed
approach involves leveraging convolutional neural networks (CNNs) to analyze subtle
inconsistencies in facial features and movements that are often characteristic of deepfake videos.
Additionally, researchers are exploring the integration of biometric authentication systems to
verify the authenticity of individuals appearing in multimedia content. Moreover, advancements
in natural language processing (NLP) have enabled the detection of synthetic voices and textual
inconsistencies, further strengthening deepfake detection capabilities. By combining these
approaches and continually refining detection algorithms through large-scale training datasets,
the research community aims to stay ahead of evolving deepfake techniques and safeguard
against their malicious use.
3.2 Architecture Diagram
The above Fig 3.1 illustrates the architecture of the DeepFake detection model, showing the
components and flow of data within the system designed to identify manipulated media.
Given the complexity and diversity of approaches to deepfake detection, a
generalized model is described that incorporates several key components commonly found in
deepfake detection systems. This model uses a combination of convolutional neural networks
(CNNs) for feature extraction and analysis, supplemented by other techniques to enhance
detection capabilities. The architecture might also show parallel paths for processing different
aspects of the data and include external components like databases for storing training data or
logs.
3.3 Design Phase
The design phase of a deepfake detection model is critical, as it lays the foundation for the
development and effectiveness of the system. This phase involves several key steps, from
understanding the problem space to detailing the specific components that will be built. The
processes involved are:
1. Problem Definition and Scope:
Identify the specific type of deepfakes to be detected (e.g., facial manipulation,
voice synthesis).
Understand the constraints such as computational resources, real-time processing
requirements, and privacy concerns.
1. Data Flow Diagram
2. Use Case Diagram
If we have an input of size W x W x D and D_out kernels with a spatial size of F, stride S,
and amount of padding P, then the size of the output volume can be determined by the
following formula:

W_out = (W - F + 2P) / S + 1
where W_k^l and b_k^l are the weight vector and bias term of the kth filter of the lth layer, respectively.
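The output-size formula above can be checked numerically. The following is a small sketch (the function name is our own, not from the report); the example values correspond to the first convolutional layer of ResNet50, a model used later in this report.

```python
def conv_output_size(w, f, p, s):
    """Output spatial size of a convolution on a square w x w input with
    kernel size f, padding p, and stride s: W_out = (W - F + 2P) / S + 1."""
    return (w - f + 2 * p) // s + 1

# A 224x224 input through a 7x7 kernel with padding 3 and stride 2
# (ResNet50's first layer) produces a 112x112 output.
print(conv_output_size(224, 7, 3, 2))  # 112
```

Note that (W - F + 2P) must be divisible by S for the kernel placements to tile the input exactly; otherwise frameworks typically floor the result, as the integer division above does.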
3.7 GANs Background
Generative Adversarial Networks (GANs) are a class of artificial intelligence algorithms used in
unsupervised machine learning, implemented by a system of two neural networks contesting
with each other in a zero-sum game framework. This technique was introduced by Ian
Goodfellow and his colleagues in 2014 and has since been an active area of research and
development. One network, the generator, creates outputs (like photos) that are as realistic as
possible; the other, the discriminator, judges them. The discriminator's goal is to determine
whether the output it reviews is "real" (drawn from actual data) or "fake" (produced by the
generator). The generator's objective is to increase the error rate of the discriminator, essentially
fooling it into believing that the outputs it generates are authentic. This dynamic pushes both
networks to improve their methods over time, leading to the production of extremely realistic
synthetic outputs. GANs are used widely in various applications, including art creation, photo
realistic images, and even video game character creation, demonstrating their broad potential
across fields.
Generative Model: A key element responsible for creating fresh, accurate data in a Generative
Adversarial Network (GAN) is the generator model. The generator takes random noise as input
and converts it into complex data samples, such as text or images. It is commonly depicted as a
deep neural network.
Through training, the layers of learnable parameters in its design capture the training
data’s underlying distribution. As the generator is trained, it uses backpropagation to
fine-tune its parameters, adjusting its output to produce samples that closely mimic real data.
The generator’s ability to generate high-quality, varied samples that can fool the discriminator
is what makes it successful.
Generator Loss: The objective of the generator in a GAN is to produce synthetic samples that
are realistic enough to fool the discriminator. The generator achieves this by minimizing its
loss function J_G. The loss is minimized when the log probability is maximized, i.e., when the
discriminator is highly likely to classify the generated samples as real. The loss is given by:

J_G = -(1/m) * sum_{i=1}^{m} log D(G(z_i))
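The generator loss above can be evaluated directly from the discriminator's scores on a batch of generated samples. The sketch below is our own illustration (not from the report); the function name is hypothetical.

```python
import math

def generator_loss(d_scores):
    """Generator loss J_G = -(1/m) * sum_i log D(G(z_i)), where d_scores
    holds the discriminator's outputs D(G(z_i)) for m generated samples."""
    return -sum(math.log(s) for s in d_scores) / len(d_scores)

# When the discriminator assigns high "real" probability to generated
# samples, the loss is small: the generator is successfully fooling it.
low = generator_loss([0.9, 0.8, 0.95])   # generator fooling the discriminator
high = generator_loss([0.1, 0.2, 0.05])  # generator being caught
```

Minimizing this loss pushes the generator toward samples the discriminator scores as real, which is the dynamic described in Section 3.7.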
3. Normalization:
Purpose: Normalizes pixel values to a standard range, typically [0, 1] or [-1, 1].
This helps in stabilizing the learning process and leads to faster convergence
during model training.
Techniques used: Pixel values, originally in the range of [0, 255] for standard
RGB images, are scaled down by dividing by 255. Alternatively, mean
subtraction and division by standard deviation (standard scaling) can be used to
normalize the data.
5. Quality Enhancement:
Purpose: Enhances the quality of the input data if it is degraded, which is
common in real-world scenarios where data might come from various sources
with different quality levels.
Techniques used: Denoising, super-resolution, and sharpening filters can be
applied to improve the clarity and details of the images, potentially aiding in more
accurate deepfake detection.
The design of the input preprocessing module can significantly impact the performance of the
entire deepfake detection system. It must be carefully tailored to the specific characteristics of
the data and the requirements of the downstream analysis modules.
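The normalization step described above can be sketched in a few lines. This is a minimal illustration (the function name is our own); it shows both scaling to [0, 1] and standard scaling.

```python
import numpy as np

def normalize_frame(frame, method="scale"):
    """Normalize an RGB frame (uint8, values in [0, 255]).
    'scale'    -> values scaled into [0, 1] by dividing by 255
    'standard' -> mean subtraction and division by standard deviation"""
    frame = frame.astype(np.float32)
    if method == "scale":
        return frame / 255.0
    mean, std = frame.mean(), frame.std()
    return (frame - mean) / (std + 1e-8)  # epsilon avoids division by zero

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
scaled = normalize_frame(frame)                    # values in [0, 1]
standardized = normalize_frame(frame, "standard")  # approximately zero mean
```

Either variant stabilizes training; the choice usually follows whatever the pre-trained backbone expects.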
Techniques used: Features such as texture patterns, edges, and color histograms.
Other specialized features might include frequency domain characteristics (using
FFT or DCT), motion consistency in videos, and facial landmarks.
2. Learned features through Deep Learning:
Purpose: With the advent of deep learning, models have been designed to
automatically learn to extract features that are most effective for the task of
distinguishing between real and fake content.
Techniques used:
o Convolutional Neural Networks: Widely used for image and video
analysis due to their ability to capture spatial hierarchies in data. CNNs
can automatically learn the features from raw pixels during training.
o Recurrent Neural Networks or Long Short-Term Memory Networks:
Useful for capturing temporal inconsistencies in videos by analyzing
sequences of frames over time.
o Autoencoders: Sometimes used for anomaly detection by learning a
compact representation of normal images or frames and detecting
deviations in new inputs.
3. Hybrid Features:
Purpose: Combines handcrafted and learned features to leverage both domain
expertise and the power of machine learning.
Techniques used: This approach might involve using a CNN to extract deep
features followed by statistical analysis or additional processing of these features
to enhance detection capabilities.
4. Feature Engineering:
Purpose: Enhance the discriminative power of the features. This process involves
selecting the most relevant features and possibly transforming them to improve
the effectiveness of the classification or anomaly detection.
Techniques used:
o Dimensionality Reduction: Techniques like Principal Component
Analysis (PCA) or t-Distributed Stochastic Neighbor Embedding (t-SNE)
to reduce the number of dimensions without losing critical information.
o Feature Selection: Statistical tests or model-based selection methods to
retain the most informative features and discard irrelevant or redundant
ones.
o 3D CNNs: Process both spatial and temporal dimensions simultaneously
by working on several contiguous frames.
o Fusion Model: Combine features extracted independently from spatial
and temporal analysis for a comprehensive understanding.
6. Implementation Considerations:
Scalability: The feature extraction process must be efficient enough to handle
large volumes of data without excessive computation, especially important for
applications requiring real-time detection.
Adaptability: Features should be robust against various manipulation techniques
and generalizable across different contexts and datasets.
The effectiveness of the feature extraction module significantly determines the overall success of
a deepfake detection model. A well-designed feature extraction phase that captures both obvious
and subtle indicators of manipulation can significantly improve detection rates and contribute to
more reliable media verification tools.
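As a concrete example of a handcrafted feature mentioned above, a per-channel color histogram can serve as a simple, size-invariant feature vector. This is our own sketch, not a method from the report.

```python
import numpy as np

def color_histogram(frame, bins=16):
    """Handcrafted feature: a per-channel color histogram, normalized so
    the feature does not depend on frame size. Returns 3 * bins values."""
    feats = []
    for c in range(3):  # R, G, B channels
        hist, _ = np.histogram(frame[..., c], bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())
    return np.concatenate(feats)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
feat = color_histogram(frame)
print(feat.shape)  # (48,)
```

Such vectors can be fed to a classical classifier on their own, or concatenated with CNN embeddings in the hybrid-feature approach described above.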
with autoencoders fall into this category, where the model learns to
reconstruct normal data and any significant reconstruction error signals an
anomaly.
5. Hybrid Models:
Purpose: Combines multiple anomaly detection techniques to improve accuracy
and robustness against various types of manipulations.
Techniques used:
o Ensemble Methods: Use multiple models or algorithms to make
individual decisions, which are then aggregated to improve detection
reliability.
o Multi-modal Analysis: Incorporates different types of data (e.g., audio
and video) to spot discrepancies between modalities, such as mismatches
between lip movements and spoken words.
6. Implementation Considerations:
False Positives and Negatives: Striking a balance between sensitivity and
specificity is essential to minimize both false alarms and missed detections.
Scalability and Real-Time Processing: Especially important in applications that
require live streaming or quick feedback.
Adaptability and Learning: The ability of the system to adapt to new types of
deepfakes as they evolve, possibly using online or continual learning approaches.
The anomaly detection module is where the "decision-making" happens in a deepfake detection
system, analyzing whether the features extracted indicate typical or manipulated content. This
module's effectiveness largely determines the overall success and reliability of the deepfake
detection model, making its design and implementation critical to the system's performance.
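The autoencoder-based anomaly decision mentioned above reduces to thresholding a reconstruction error. The sketch below assumes the feature vectors and their reconstructions are already computed; names and values are hypothetical.

```python
import numpy as np

def flag_anomalies(features, reconstructed, threshold):
    """Anomaly decision by reconstruction error: an autoencoder trained only
    on authentic content reconstructs it well, so manipulated content tends
    to produce a large error. Flags rows whose MSE exceeds the threshold."""
    errors = np.mean((features - reconstructed) ** 2, axis=1)
    return errors > threshold

# Hypothetical feature vectors and their autoencoder reconstructions:
real = np.array([[0.10, 0.20], [0.11, 0.19]])
recon = np.array([[0.10, 0.20], [0.50, 0.90]])  # second row reconstructs poorly
flags = flag_anomalies(real, recon, threshold=0.01)  # second frame flagged
```

The threshold itself is usually calibrated on a validation set to balance the false positives and false negatives discussed under implementation considerations.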
2. Classifier-Based Decisions:
Machine Learning Classifiers: Techniques such as Support Vector Machines
(SVM), Decision Trees, or Neural Networks can be trained on labeled datasets to
distinguish between real and fake content. The decision module might use one or
more of these classifiers to make a final determination.
Ensemble Methods: Multiple classifiers are used, and their outputs are combined
(e.g., via voting or averaging) to improve the accuracy and robustness of the
decision. This method helps to mitigate the weaknesses of individual classifiers.
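The voting and averaging combinations described above are straightforward to express in code. This is a minimal sketch with toy inputs (labels and scores are hypothetical).

```python
from collections import Counter

def majority_vote(predictions):
    """Hard ensemble: combine 'real'/'fake' labels from several classifiers
    by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

def average_score(scores):
    """Soft ensemble: average the per-classifier fake probabilities and
    threshold the mean at 0.5."""
    avg = sum(scores) / len(scores)
    return ("fake" if avg >= 0.5 else "real"), avg

print(majority_vote(["fake", "real", "fake"]))  # fake
label, avg = average_score([0.9, 0.4, 0.7])     # ('fake', ~0.667)
```

Soft averaging generally preserves more information than hard voting, since a confident classifier can outweigh two uncertain ones.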
Techniques used: Implementing feedback loops where the system’s predictions
are reviewed and, if necessary, corrected. These corrections can then be used as
new training data to refine the models.
7. Implementation Considerations:
False positives and negatives: The decision module must carefully manage the
trade-off between false positives (labeling genuine content as fake) and false
negatives (failing to detect actual fakes), which can have significant implications
depending on the application.
Real-Time Processing: For applications that require real-time analysis, such as
live streaming, the decision module must be optimized for speed without
sacrificing accuracy.
Scalability: The module should efficiently handle varying volumes and types of
content, maintaining performance as the system scales.
The decision module is a critical component of any deepfake detection system, where all prior
analyses converge into a final verdict. Its design directly impacts the usability and effectiveness
of the system, influencing how reliably deepfakes can be identified and mitigated.
1. Result Interpretation:
Purpose: Converts the raw outputs of the decision module into more
understandable and actionable forms for the user.
Techniques used:
o Confidence Scores: Presenting a confidence level or probability that
indicates how likely it is that the content is a deepfake. This helps in
assessing the reliability of the detection.
o Explanation Techniques: Techniques such as feature importance or
saliency maps to show which parts of the media were most influential in
determining the result. This is important for transparency and helps users
understand why a particular piece of content was flagged as fake.
2. Visualization:
Purpose: Visual aids enhance the comprehensibility of the detection results,
making it easier for users to grasp complex information at a glance.
Techniques used:
o Heatmaps and Overlays: Visual representations that highlight areas in
the image or video frames where anomalies or manipulations were
detected.
o Graphs and Charts: Displaying statistical data or trends in the detection
results over time or across different datasets.
3. Reporting:
Purpose: Provides detailed reports that document the detection process and its
outcomes, useful for audits, further analysis, or regulatory compliance.
Techniques used:
o Automated Report Generation: Summarizes the detection results,
methodologies used, and any other relevant data in a structured format.
This can include textual descriptions, tables, and embedded visualizations.
o Customizable Reports: Allow users to select what information is
included in a report, catering to different needs and preferences.
5. Data Logging and Archiving:
Purpose: Maintains a secure, auditable record of detection activity and
results, supporting later review, audits, and accountability.
Techniques used:
o Secure Data Storage: Ensures that all data related to detections are
securely stored, maintaining confidentiality and integrity.
o Access Logs: Tracks who accessed the detection results and when, which
is crucial for maintaining security and accountability.
6. User Interface:
Purpose: Gives users an accessible way to view, explore, and manage
detection results and system settings.
Techniques used:
o Dashboard: A central interface where users can see summaries of
detection activities, access detailed reports, and manage alerts and settings.
o Interactive Tools: Features that allow users to manipulate the data
visualization or delve deeper into specific aspects of the detection results.
The Post-Processing and Reporting Module is vital for ensuring that the outcomes of a deepfake
detection system are usable and beneficial in practical scenarios. By effectively communicating
complex data and providing essential tools for interaction, this module helps bridge the gap
between sophisticated detection technologies and everyday users who rely on these systems to
safeguard the authenticity of digital media.
Implementation
A deepfake detection model works by systematically arranging steps to examine digital content
for indications of manipulation. The first step in the process is data ingestion, which entails
gathering digital content from multiple sources, including photographs, videos, and audio.
This content is then preprocessed to ensure it is consistent across a variety of media formats
and to improve its readiness for in-depth examination. The next crucial stage is feature
extraction, which separates distinctive characteristics that may indicate manipulation from the
standard information. Examples of such characteristics include discrepancies in audio patterns
or unnatural facial expressions. The core detection and analysis step utilizes these
obtained characteristics as a starting point for assessing the content using sophisticated machine
learning approaches, most often Convolutional Neural Networks (CNNs), to differentiate real
from modified media. Following this analysis, a decision-making and reporting process is
put into effect: the outcomes of the analysis are assembled into comprehensive documents
that report the content's authenticity, confidence scores, and any possible signs of
manipulation. This procedure is supported by continuous model training and updates, which
guarantees the system's ability to evolve in response to novel deepfake methods and improve
the effectiveness of its detection. Furthermore, a feedback loop encourages ongoing
improvement of the detection model by allowing adjustments based on user feedback and fresh
perspectives, creating a culture of continuous development. Taken together, these procedures
provide a sound framework within which deepfake detection models operate, fusing the latest
advancements with an attentive approach to safeguarding multimedia integrity.
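The stages described above (ingestion, preprocessing, feature extraction, detection, reporting) can be sketched as a single pipeline. This is an illustrative skeleton only; every stage below is a toy stand-in, and all names are our own.

```python
def detect_deepfake(media, preprocess, extract_features, classify, report):
    """End-to-end pipeline sketch: ingestion -> preprocessing -> feature
    extraction -> per-frame detection -> video-level decision -> reporting.
    Each stage is passed in as a function, so concrete models (e.g. a CNN
    classifier) can be swapped in without changing the pipeline."""
    frames = preprocess(media)
    features = [extract_features(f) for f in frames]
    scores = [classify(f) for f in features]
    verdict = sum(scores) / len(scores)  # average frame scores to video level
    return report(verdict)

# Toy stand-ins for each stage (all hypothetical):
result = detect_deepfake(
    media=[10, 200, 30],                              # "frames"
    preprocess=lambda m: [x / 255 for x in m],        # scale to [0, 1]
    extract_features=lambda f: [f],                   # trivial feature vector
    classify=lambda feat: 1.0 if feat[0] > 0.5 else 0.0,
    report=lambda v: "fake" if v >= 0.5 else "real",
)
print(result)  # real
```

Structuring the pipeline this way also makes the feedback loop described above easy to support, since individual stages can be retrained and replaced independently.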
Chapter 4
RESULTS AND DISCUSSION
4.1 Results
We apply transfer learning to pre-trained models, including ResNet50, combined with
Conv+LSTM layers and test-time augmentation. The DFDC dataset allows us to train and
evaluate our strategies. Our comparison demonstrates the extent to which our method
outperforms other approaches. For these networks, a video-level prediction is computed by
simply averaging the per-frame predictions. The configuration that produces the best adjusted
precision is found using the validation set as a guide.
After the fifth epoch, the validation loss begins to increase, which is an indication of
overfitting. Test-time augmentation (TTA) was implemented to further enhance testing: by
applying data augmentation to a test image, multiple augmented copies are obtained and their
predictions averaged. For our TTA evaluation, we used a set of transformations distinct from
those used with the ResNet model. ResNet50 + LSTM is the model that achieves the highest
accuracy, 94.63%.
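Test-time augmentation as used here amounts to averaging the model's predictions over several transformed copies of the input. The sketch below uses a toy model and two toy augmentations (identity and horizontal flip), all hypothetical; it is not the report's actual TTA configuration.

```python
import numpy as np

def tta_predict(model, frame, augmentations):
    """Test-time augmentation: run the model on several augmented copies
    of a frame and average the predicted fake probabilities."""
    preds = [model(aug(frame)) for aug in augmentations]
    return float(np.mean(preds))

# Toy model and augmentations (identity and horizontal flip):
frame = np.arange(12, dtype=np.float32).reshape(2, 2, 3)
model = lambda f: f.mean() / 11.0           # toy score in [0, 1]
augs = [lambda f: f, lambda f: f[:, ::-1]]  # identity, horizontal flip
score = tta_predict(model, frame, augs)
```

Averaging over augmentations smooths out prediction noise from any single view of the input, which is why TTA typically gives a small but consistent accuracy gain at the cost of extra inference time.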
Chapter 5
CONCLUSION AND FUTURE ENHANCEMENT
5.1 Conclusion
In summary, our deepfake detection project has made great strides in tackling the challenges
posed by manipulated media. Using advanced machine learning techniques like neural networks
and deep learning, we've created a strong system that can tell real videos from fake ones with an
accuracy of 94.63%. This level of accuracy is vital for stopping misinformation and keeping
digital content trustworthy. Looking ahead, we can improve real-time detection, explore new
detection methods, work with experts for better datasets, and use explainable AI to clarify our
detection process. These steps will help us stay ahead of threats and maintain the authenticity of
digital media.
In conclusion, our project marks a big step in fighting deepfakes and making the digital world
safer. Continued progress in deepfake detection is key to keeping online content reliable and
trustworthy.
5.2 Future Enhancement
2. Multi-Modal Detection:
Combine audio, video, and textual analysis for a more comprehensive approach to
deepfake detection.
3. Real-Time Optimization:
Further optimize real-time detection to swiftly identify and mitigate emerging deepfake
threats.
4. Continuous Learning:
Implement mechanisms for the system to adapt and improve over time with new data and
insights.
6. Explainable AI Techniques:
Incorporate methods to provide insights into the model's decision-making process for
increased transparency.
8. Cross-Platform Compatibility:
Ensure compatibility across platforms and media formats for consistent deepfake
detection capabilities.
These future directions aim to further enhance the deepfake detection system's capabilities,
improve prediction accuracy, reduce overfitting, and explore innovative technologies like
blockchain for enhancing digital provenance and authenticity verification. Continued research
and experimentation in these areas will contribute to the development of more robust and reliable
deepfake detection solutions.
REFERENCES
[1] Andreas Rossler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies,
Matthias, “FaceForensics++: Learning to Detect Manipulated Facial Images”, Cornell
University, 2019
[2] Yuezun Li, Cong Zhang, Pu Sun, Lipeng Ke, Yan Ju, Honggang Qi, Siwei Lyu, “DeepFake-
o-meter: An open platform for DeepFake Detection”, IEEE International Conference on
Autonomous Systems, 2021
[3] Julian Choy Jin Lik, Julia Juremi, Kamalakannan Machap, “De-faketection: DeepFake
detection”, AIP Conference, 2024
[4] Asad Malik, Minoru Kuribayashi, Sani M. Abdullahi, Ahmad Neyaz Khan, “DeepFake
Detection for Human Face Images and Videos”, IEEE Magazines, 2022
[5] Arash Heidari, Nima Jafari Navimipour, Hasan Dag, Mehmet Unal, “DeepFake Detection
using Deep Learning Methods: A Systematic and Comprehensive Review”, 2023
[6] Zaynab Almutairi, Hebah Elgibreen, “A Review of Modern Audio DeepFake Detection
Methods”, Academic Journal, 2022
[7] Gourav Gupta, Kiran Raja, Manish Gupta, Tony Jan, Mukesh Prasad, Scott Thompson
Whiteside, “A Comprehensive Review of DeepFake Detection using Advanced Machine
Learning and Fusion Methods”, Artificial Intelligence and Optimization Research Centre, 2024
[8] Leandro A. Passos, Danilo Jodas, Kelton A. P. Costa, Luis A. Souza, Douglas Rodrigues,
Javier Del Ser, David Camacho, Joao Paulo Papa, “A Review of Deep Learning-based
Approaches for DeepFake Content Detection”, Sao Paulo State University, 2024
[9] Laura Stroebel, Mark, Tricia Hartley, Tsui Shan Ip, Mohiuddin Ahmed, “A Systematic
Literature Review on the Effectiveness of Deepfake Detection Techniques”, 2023
[10] Mika Westerlund, “The Emergence of DeepFake Technology”, Technology Innovation
Management Review, 2019
[11] Neeraj Guhagarkar, Sanjana Desai Swanand Vaishyampayan, Ashwini Save, “DeepFake
Detection Techniques”, 9th National Conference on Role of Engineers in Nation Building, 2021
[12] Jia Wen Seow, Mei Kuan Lim, Raphael C.W. Phan, Joseph K. Liu, “A comprehensive
overview of DeepFake”, 2022
[13] Andrew Lewis, Patrick Vu, Raymond M. Duch, Areeq Chowdhury, “DeepFake detection
with and without content warnings”, 2023
[14] Sayed Shifa Mohd Imran, Dr. Pallavi Davendra Tawde, “DeepFake Detection: Literature
Review”, International Research Journal of Engineering and Technology, 2024
[15] MD Shohel Rana, Mohammad Nur Nobi, Beddhu Murali, Andrew H. Sung, “DeepFake
Detection: A Systematic Literature Review”, 2022