Deep Fake Detection Research Assignment
Course Title: Research Methods
Assignment 02
Summaries of Research Papers:
This research explores the evolving field of Deep Fakes, highlighting the threats they pose to interpersonal trust and cybersecurity, as well as their role in spreading digital fraud. The authors examine the two faces of Deep Fake technology: its beneficial applications in areas like entertainment and education, and its malicious uses in fraud, identity theft, and disinformation. They discuss the arms race between Deep Fake generation and detection, offering insights into the deep learning methods, specifically GANs, CNNs, and transformers, employed on both sides. The paper also highlights the potential of blockchain technology for verifying digital content and protecting against misuse.
This paper addresses the essential problem of recognizing Deep Fakes: highly realistic but manipulated media produced with powerful deep learning (DL) algorithms. By fueling disinformation and abuse in the legal and political spheres, Deep Fakes damage public confidence, privacy, and security. The authors aim to tackle this issue by improving the detection of these fakes. The study divides detection methods into image, video, audio, and hybrid approaches, and evaluates their performance. It finds that the most widely used detection algorithms are Convolutional Neural Networks (CNNs), which usually achieve high accuracy. However, it points out that most techniques focus on improving a single metric, such as accuracy, without taking stability across new datasets into account.
Using a systematic literature review (SLR), the study analyzes current methodologies, datasets, and simulation environments, while discussing issues such as high computational cost, a lack of defense against adversarial manipulation, and limits on how well detection techniques transfer to different kinds of Deep Fakes. To manage heterogeneous Deep Fake content, suggested future directions include expanding hybrid techniques, building varied and realistic datasets, and developing more generalizable models. This thorough analysis offers a road map for improving DL-based Deep Fake detection systems so they can deal effectively with evolving threats.
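The review's point about stability across new datasets can be made concrete with a small sketch. The detector outputs and numbers below are hypothetical, not taken from any paper; the sketch only shows how one would quantify the drop between in-domain and cross-dataset accuracy:

```python
def accuracy(preds, labels):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def generalization_gap(in_domain_acc, cross_domain_acc):
    """The accuracy drop when moving from the training distribution
    to an unseen dataset -- the stability issue the review flags."""
    return in_domain_acc - cross_domain_acc

# Hypothetical detector outputs (1 = fake, 0 = real) on two datasets:
acc_seen = accuracy([1, 1, 0, 0, 1, 0], [1, 1, 0, 0, 1, 0])    # perfect in-domain
acc_unseen = accuracy([1, 0, 0, 1, 1, 0], [1, 1, 0, 0, 1, 1])  # degrades off-domain
print(acc_seen, acc_unseen, generalization_gap(acc_seen, acc_unseen))
```

A model tuned only for in-domain accuracy can score well on the first number while the gap stays large, which is exactly the single-metric pitfall the review criticizes.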
This study addresses demographic bias in Deep Fake detection systems, since existing models often perform differently across racial or gender groups. The authors propose two new approaches, DAG-FDD (demographic-agnostic) and DAW-FDD (demographic-aware), to address this. DAG-FDD works without demographic labels and tries to maintain fairness across all hidden groups, while DAW-FDD uses demographic data to balance errors across specific groups. Both approaches handle data problems, such as uneven demographic representation and the imbalance between real and fake examples, using a technique known as Conditional Value-at-Risk (CVaR). Testing these techniques on four large datasets, the authors found that they maintain high detection accuracy while reducing unfairness (for example, disparities in false detection rates). One drawback is that the techniques rely on specific loss functions, which makes them harder to apply to some models, including graph-based ones. Future work includes adapting the techniques to models with more complex loss functions and verifying fairness across additional datasets. This work is a step toward unbiased and equitable Deep Fake detection.
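The intuition behind using CVaR here can be shown with a tiny sketch. This is not the paper's DAG-FDD/DAW-FDD implementation, only a minimal illustration of the CVaR statistic itself: averaging the worst tail of per-sample losses, so that minimizing it forces attention onto the hardest (often under-represented) examples even without group labels:

```python
def cvar(losses, alpha):
    """Conditional Value-at-Risk: the mean of the worst (1 - alpha)
    fraction of per-sample losses. Optimizing this instead of the
    plain mean pushes a model to shrink its largest errors, which
    can protect small groups without needing demographic labels."""
    k = max(1, int(len(losses) * (1 - alpha)))  # size of the worst tail
    worst = sorted(losses, reverse=True)[:k]
    return sum(worst) / len(worst)

# Toy per-sample losses: the last two samples (a small group) fare badly.
losses = [0.1, 0.2, 0.1, 0.9, 1.0]
print(cvar(losses, alpha=0.6))  # mean of the 2 worst losses
```

The plain mean of these losses (0.46) hides the poorly served samples; the CVaR value (0.95) makes them the optimization target.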
This study covers the growing problem of Deep Fake videos, which are produced with artificial intelligence (AI) algorithms to create false yet convincing media content. Because these videos can spread false information and harm society, they pose risks to privacy, politics, and security. The authors list several major obstacles to identifying Deep Fakes, including imbalanced datasets, computational errors, and poor generalization of detection models.
They evaluate the efficacy of deep learning techniques, specifically convolutional and recurrent neural networks, in identifying Deep Fakes. Although these techniques show promise, they are limited in their ability to handle new Deep Fake technology and demand significant processing power. The study also highlights the need for stronger real-time detection systems and the value of high-quality datasets for improving detection. Future goals include better dataset creation, developing detection techniques that generalize to a variety of situations, and resolving computational difficulties. The study addresses current scalability and dataset-quality constraints while offering suggestions for enhancing both the reliability and the efficiency of Deep Fake detection methods.
This study observes that GAN-based techniques have evolved into modern diffusion models that produce highly realistic results. Using datasets like FFHQ and FaceForensics++ as benchmarks, it examines detection methods ranging from traditional techniques to modern deep learning and hybrid approaches. Despite some significant advancements, problems remain with handling real-time scenarios, adversarial attacks, and generalization across many media types. The study highlights the need to resolve dataset biases, adopt heterogeneous techniques, and enhance real-time detection systems. Despite this progress, striking a balance between ethical safeguards and high-quality generation remains an important challenge that requires more comprehensive study.
6. Paper Title: Real-Time Deep Fake Video Detection Using Eye Movement Analysis with a Hybrid Deep Learning Approach
This study tackles real-time Deep Fake video detection, motivated by the growing misuse of this technology for disinformation. The authors propose a hybrid deep learning model that uses eye movement analysis to detect the visual characteristics of Deep Fakes, combining MesoNet4 for detecting small facial manipulations with ResNet101 for deep feature extraction. The model demonstrated high accuracy (e.g., 98.73% on FaceForensics++) when tested on datasets such as FaceForensics++, CelebV1, and CelebV2, and its robustness was confirmed by metrics such as precision and F1-score. Limitations, however, include sensitivity to environmental factors like lighting and difficulty adapting to live-streaming. The study suggests combining systems to improve detection and identifies drawbacks in existing techniques, such as limited live-stream responsiveness. Future research will focus on enhancing real-time capabilities and covering a wider range of datasets to ensure reliable performance across complex media types.
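One common eye-based cue in this line of work is blink behavior: early Deep Fakes often blinked too rarely or unnaturally. The sketch below is not the paper's hybrid model; it is a minimal illustration of the idea, assuming a per-frame eye-aspect-ratio (EAR) series has already been extracted by some face-landmark stage:

```python
def count_blinks(ear_series, threshold=0.2):
    """Count blinks as closed -> open transitions of the eye aspect
    ratio (EAR): an eye counts as closed while EAR < threshold."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            closed = True                  # eye just closed
        elif ear >= threshold and closed:
            closed = False                 # eye reopened: one full blink
            blinks += 1
    return blinks

def blink_rate_suspicious(ear_series, fps, lo=0.1, hi=0.8):
    """Flag clips whose blinks-per-second fall outside a plausible
    human range (the lo/hi bounds here are illustrative, not tuned)."""
    rate = count_blinks(ear_series) / (len(ear_series) / fps)
    return not (lo <= rate <= hi)

ears = [0.3, 0.3, 0.1, 0.1, 0.3, 0.3, 0.1, 0.3]  # toy 8-frame EAR trace
print(count_blinks(ears))  # 2 blinks
```

A real system would feed such temporal cues alongside CNN features, as the paper's MesoNet4 + ResNet101 combination does with learned representations.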
This study presents a system that combines error-level analysis (ELA) and convolutional neural networks (CNNs) to address the growing problem of deep fake material, which raises fears of disinformation and public harm. The method first processes images with ELA to expose pixel-level changes, then uses CNN architectures (GoogLeNet, ResNet18, SqueezeNet) for feature extraction and tuned support vector machines (SVM) and k-nearest neighbors (KNN) classifiers for classification. Measured by precision, recall, and F1-score, the combination of ResNet18 and KNN achieved the highest accuracy, 89.5%, on a publicly available dataset. Despite its potential, the system has drawbacks, including reliance on image-based data and difficulty with compressed or low-quality inputs. The work highlights limitations in current approaches, such as overfitting and high processing costs, and argues that scalable, effective solutions are needed; it recommends further research to improve generalization through testing on real-world data and video-based datasets.
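The ELA step can be illustrated with a deliberately simplified sketch. Real ELA re-saves the image as JPEG at a known quality and subtracts it from the original; here the lossy re-save is stood in for by coarse quantization so the example stays dependency-free. The pixel values are invented for illustration:

```python
def quantize(pixels, step=16):
    """Stand-in for JPEG recompression: lossy rounding of pixel values.
    (Real ELA would re-encode with an actual JPEG codec instead.)"""
    return [round(p / step) * step for p in pixels]

def ela_map(pixels, step=16):
    """Error-level map: |original - recompressed| per pixel. Regions
    pasted in from another source (already compressed differently)
    tend to show error levels that stand out from the rest."""
    return [abs(p - q) for p, q in zip(pixels, quantize(pixels, step))]

original = [100, 104, 99, 135, 133]  # toy 1-D "image"
print(ela_map(original))
```

In the paper's pipeline, maps like this (computed on real 2-D images) are what the CNNs consume, so the classifier sees compression inconsistencies rather than raw pixels.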
10. Paper Title: Deep Fake Generation and Detection: Case Study and Challenges
This study examines both the creation and detection of Deep Fake technologies. Deep Fakes are produced with advanced machine learning (ML) and deep learning (DL) techniques such as GANs, and are dangerous because they generate realistic-looking but fake audio, video, and images that can be exploited for identity theft, misinformation, and societal harm. The authors analyze current Deep Fake creation methods, such as GANs and autoencoders, and emphasize how effectively they produce highly realistic content. On the detection side, the study discusses techniques such as physical-attribute analysis and CNN-based models that find abnormalities in features like facial movement or eye blinking.
The paper also covers a multimodal detection method that combines several data sources, including audio and video. Challenges remain, such as poor generalization across datasets and difficulty identifying newer, more complex Deep Fakes, even though detection accuracy is high under controlled conditions. Future research, the authors argue, should focus on diverse datasets, improving real-time detection, and developing models that keep pace with evolving Deep Fake methods. This work offers important insights for improving both Deep Fake detection and generation techniques.
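A common way to combine several data sources, as the multimodal method above does, is late fusion: each modality produces its own fake-probability, and the scores are merged into one decision. The sketch below is an illustrative weighted average, not the paper's method, and the modality names, scores, and weights are all invented:

```python
def fuse_scores(scores, weights):
    """Late fusion: combine per-modality fake-probabilities into a
    single decision score via a weighted average."""
    assert len(scores) == len(weights)
    total = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total

# Hypothetical per-modality detector outputs (probability of "fake"):
modality_scores = {"video": 0.82, "audio": 0.40, "blink": 0.75}
weights = {"video": 0.5, "audio": 0.2, "blink": 0.3}

score = fuse_scores(list(modality_scores.values()), list(weights.values()))
is_fake = score > 0.5  # simple threshold on the fused score
print(score, is_fake)
```

The appeal of this design is robustness: a Deep Fake that fools the video stream may still be caught by audio or blink anomalies, since no single modality decides alone.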