
BEYOND THE ILLUSION: IDENTIFYING AND PREVENTING DEEPFAKE MANIPULATION
Introduction to Deepfakes

Deepfake technology leverages advanced machine learning techniques to generate or alter images and videos in a highly realistic manner. Although impressive, this technology can be misused to produce deceptive content, leading to misinformation and potential security threats.
With the rapid spread of manipulated media online, it is imperative to
develop robust systems capable of distinguishing genuine content
from deepfakes.
Problem Statement:
Misinformation & Trust:
Deepfakes can undermine trust in digital media by making it
difficult to distinguish real events from manipulated ones. This
is particularly concerning in the realms of politics, celebrity
news, and corporate communications.

Security Risks:
Manipulated media can be used to spread propaganda,
manipulate public opinion, and even impersonate individuals
for fraudulent activities.

Need for Detection:
A reliable detection model is crucial for social media platforms, news organizations, and law enforcement to verify content authenticity.
Proposed Solution:
AI-Based Model:
Our solution is built around a deep learning model that
leverages ResNet50 for robust feature extraction from facial
regions in media.

Pipeline:
The pipeline involves detecting faces in the provided media
(images or videos), extracting high-level features from these
regions, and then classifying them to determine whether the
content is manipulated.

Benefits:
This approach balances accuracy with computational
efficiency, making it feasible to deploy in real-world
scenarios.
Model Architecture:
Face Detection:
We start by detecting faces using the open-source
face_recognition library, which quickly locates facial regions
in an image or video frame.
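
A minimal sketch of this step using the face_recognition API (the file name is illustrative):

    import face_recognition

    # Load the image and locate all faces; each location is (top, right, bottom, left).
    image = face_recognition.load_image_file("sample_frame.jpg")  # illustrative file name
    face_locations = face_recognition.face_locations(image)

    # Crop each detected face region out of the frame.
    face_crops = [image[top:bottom, left:right]
                  for (top, right, bottom, left) in face_locations]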

Feature Extraction:
Detected faces are resized and passed through a pre-trained
ResNet50 model, which extracts rich feature vectors using
deep convolutional layers.
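
A sketch of this step, assuming the Keras pre-trained ResNet50 with its classification head removed; the 224x224 input size and average pooling are assumptions, not necessarily the exact configuration used here:

    import numpy as np
    import cv2
    from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

    # Pre-trained ResNet50 without its top classifier; global average pooling
    # collapses each face into a single 2048-dimensional feature vector.
    feature_extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")

    def extract_features(face_crop):
        # Resize the detected face to the network's expected input size.
        face = cv2.resize(face_crop, (224, 224)).astype("float32")
        face = preprocess_input(np.expand_dims(face, axis=0))
        return feature_extractor.predict(face)[0]  # shape: (2048,)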

Classification:
The resulting features are then averaged (when multiple faces
are present) and fed into a custom-trained TensorFlow model
that classifies the input as either ‘Manipulated (Deepfake)’ or
‘Original (Real)’ based on the probability output.
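
A sketch of this step, assuming the custom classifier is a saved Keras model that maps a 2048-dimensional feature vector to a single manipulation probability; the model path and the 0.5 decision threshold are assumptions:

    import numpy as np
    import tensorflow as tf

    classifier = tf.keras.models.load_model("deepfake_classifier.h5")  # hypothetical path

    def classify(face_feature_vectors):
        # Average the per-face feature vectors when several faces are present.
        mean_features = np.mean(face_feature_vectors, axis=0, keepdims=True)
        probability = float(classifier.predict(mean_features)[0][0])
        label = "Manipulated (Deepfake)" if probability >= 0.5 else "Original (Real)"
        return label, probability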
Dataset
Data Source:
We utilize a dataset sourced from Kaggle (FaceForensics++ dataset),
which contains genuine and manipulated media samples.

Diversity:
The dataset covers a wide range of scenarios, lighting conditions, and facial
expressions to ensure the model is robust and can generalize well to unseen
data.

Preprocessing:
Data preprocessing includes normalization, resizing, and splitting into
training and testing sets to help in effective model training.
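
A sketch of this preprocessing, assuming face crops and labels have already been collected; the 224x224 size, [0, 1] scaling, and 80/20 split are illustrative choices:

    import numpy as np
    import cv2
    from sklearn.model_selection import train_test_split

    def preprocess(face_crops, labels):
        # Resize every face to a fixed size and normalize pixel values to [0, 1].
        X = np.array([cv2.resize(face, (224, 224)) for face in face_crops],
                     dtype="float32") / 255.0
        y = np.array(labels)
        # Hold out 20% of the samples for testing, keeping class balance.
        return train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)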
Implementation
Media Upload:
Users can upload both images and videos. The system supports multiple file types
and dynamically previews the upload.

Face Extraction:
Once the file is uploaded, faces are automatically detected and extracted using the
face_recognition library. For videos, multiple frames are processed to capture varied
expressions.
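
A sketch of the video path, using OpenCV to sample frames and face_recognition to locate faces; sampling roughly one frame per second is an assumption:

    import cv2
    import face_recognition

    def extract_faces_from_video(video_path, frames_per_second=1):
        capture = cv2.VideoCapture(video_path)
        fps = capture.get(cv2.CAP_PROP_FPS) or 30
        step = max(int(fps // frames_per_second), 1)
        faces, index = [], 0
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            if index % step == 0:
                # OpenCV returns BGR frames; face_recognition expects RGB.
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                for (top, right, bottom, left) in face_recognition.face_locations(rgb):
                    faces.append(rgb[top:bottom, left:right])
            index += 1
        capture.release()
        return faces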

Feature Processing:
Extracted faces are resized and processed through ResNet50 to obtain feature vectors.
These features are averaged and passed to the TensorFlow model for prediction.

Result Display:
The system displays the prediction (Manipulated or Original) along with a score. If
deepfake manipulation is detected, extracted faces are also presented. Additionally, the
uploaded file is previewed for validation.
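
Tying the pieces together, a sketch of how a prediction could be produced and reported for an uploaded video, reusing the helper functions sketched on the previous slides (the file name and output format are illustrative):

    # End-to-end: uploaded video -> face crops -> feature vectors -> prediction.
    faces = extract_faces_from_video("upload.mp4")        # illustrative upload path
    features = [extract_features(face) for face in faces]
    label, score = classify(features)
    print(f"Prediction: {label} (score: {score:.2f})")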
Future Scope
Future Enhancements:
Future work might include incorporating additional modalities to improve detection accuracy, expanding the dataset with more varied examples, refining the model for real-time detection applications, and integrating the system with social media platforms for automatic content verification.

Potential Impact:
These enhancements aim to further secure digital media and combat the spread of
misinformation.
Conclusion:
In summary, our deepfake detection model effectively distinguishes between genuine and
manipulated content through a sophisticated pipeline involving face detection, feature
extraction, and classification.

Future Outlook:
Continued vigilance and innovation are essential as deepfake technology evolves, necessitating adaptable strategies that protect authentic discourse.
THANK YOU
