Deep Fake Detection - Final
Security Risks:
Manipulated media can be used to spread propaganda,
manipulate public opinion, and even impersonate individuals
for fraudulent activities.
Pipeline:
The pipeline involves detecting faces in the provided media
(images or videos), extracting high-level features from these
regions, and then classifying them to determine whether the
content is manipulated.
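The three pipeline stages can be sketched as plain functions. This is a minimal illustration of the control flow only: each stage is a stub standing in for the real face_recognition detector, ResNet50 extractor, and TensorFlow classifier described later.

```python
# Sketch of the detect -> extract -> classify pipeline.
# Each stage is a stub; the real system uses face_recognition,
# ResNet50, and a trained TensorFlow classifier.

def detect_faces(frame):
    """Stand-in for face detection: return a list of face crops."""
    return [frame]  # pretend the whole frame is one face

def extract_features(face):
    """Stand-in for ResNet50: return a fixed-length feature vector."""
    return [float(sum(face)) / len(face)]

def classify(feature_vector):
    """Stand-in for the classifier: probability of 'manipulated'."""
    return 0.5  # placeholder score, not a real model output

def run_pipeline(frame):
    faces = detect_faces(frame)
    features = [extract_features(f) for f in faces]
    # average per-face features (used when several faces are present)
    avg = [sum(col) / len(col) for col in zip(*features)]
    p = classify(avg)
    return "Manipulated (Deepfake)" if p >= 0.5 else "Original (Real)"
```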
Benefits:
This approach balances accuracy with computational
efficiency, making it feasible to deploy in real-world
scenarios.
Model Architecture:
Face Detection:
We start by detecting faces using the open-source
face_recognition library, which quickly locates facial regions
in an image or video frame.
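In practice the face boxes come from `face_recognition.face_locations(image)`, which returns `(top, right, bottom, left)` pixel tuples. A sketch of cropping those regions out of an image array (the array here is synthetic):

```python
import numpy as np

def crop_faces(image, face_locations):
    """Crop face regions from an H x W x 3 image array.

    face_locations follows the face_recognition convention:
    a list of (top, right, bottom, left) pixel boxes.
    """
    return [image[top:bottom, left:right]
            for (top, right, bottom, left) in face_locations]

# Toy 10x10 image with one detected "face" box.
img = np.zeros((10, 10, 3), dtype=np.uint8)
faces = crop_faces(img, [(2, 8, 6, 3)])  # top=2, right=8, bottom=6, left=3
print(faces[0].shape)  # (4, 5, 3)
```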
Feature Extraction:
Detected faces are resized and passed through a pre-trained
ResNet50 model, which extracts rich feature vectors using
deep convolutional layers.
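ResNet50 in tf.keras expects 224x224x3 inputs, so each face crop must be resized and given a batch dimension before extraction. The resize below is a rough nearest-neighbour sketch; real code would use `cv2.resize` or `tf.image.resize` plus Keras' `preprocess_input`.

```python
import numpy as np

TARGET = 224  # ResNet50's expected input size

def resize_nearest(face, size=TARGET):
    """Nearest-neighbour resize of an H x W x C face crop.
    (In practice cv2.resize / tf.image.resize would be used.)"""
    h, w = face.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return face[rows][:, cols]

face = np.random.rand(50, 40, 3)
batch = resize_nearest(face)[np.newaxis, ...]  # add batch dimension
print(batch.shape)  # (1, 224, 224, 3)
```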
Classification:
The resulting features are then averaged (when multiple faces
are present) and fed into a custom-trained TensorFlow model
that classifies the input as either ‘Manipulated (Deepfake)’ or
‘Original (Real)’ based on the probability output.
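The averaging-and-thresholding step can be sketched as follows. The probability here is a placeholder for the TensorFlow model's output (`model.predict`); only the aggregation and label mapping match the description above.

```python
import numpy as np

def aggregate_and_label(face_features, threshold=0.5):
    """Average per-face feature vectors and map a classifier
    probability to the two labels used by the system."""
    avg = np.mean(np.stack(face_features), axis=0)
    # Stand-in for model.predict(avg[None, :]) -> P(manipulated).
    prob = float(avg.mean())  # placeholder score, not the real model
    label = "Manipulated (Deepfake)" if prob >= threshold else "Original (Real)"
    return label, prob

# Two detected faces, each with a (tiny) feature vector.
feats = [np.array([0.2, 0.4]), np.array([0.6, 0.8])]
label, prob = aggregate_and_label(feats)
print(label, prob)  # Manipulated (Deepfake) 0.5
```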
Dataset
Data Source:
We utilize a dataset sourced from Kaggle (FaceForensics++ dataset),
which contains genuine and manipulated media samples.
Diversity:
The dataset covers a wide range of scenarios, lighting conditions, and facial
expressions to ensure the model is robust and can generalize well to unseen
data.
Preprocessing:
Data preprocessing includes normalization, resizing, and splitting into
training and testing sets to help in effective model training.
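The preprocessing steps above can be sketched in numpy. The `test_frac` and `seed` parameters are illustrative choices, not values taken from the project:

```python
import numpy as np

def preprocess_and_split(images, labels, test_frac=0.2, seed=0):
    """Normalize pixel values to [0, 1] and split into train/test sets."""
    x = np.asarray(images, dtype=np.float32) / 255.0  # normalization
    y = np.asarray(labels)
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))                     # shuffled split
    n_test = int(len(x) * test_frac)
    test, train = idx[:n_test], idx[n_test:]
    return (x[train], y[train]), (x[test], y[test])

imgs = np.zeros((10, 224, 224, 3), dtype=np.uint8)
labels = np.array([0, 1] * 5)
(xtr, ytr), (xte, yte) = preprocess_and_split(imgs, labels)
print(xtr.shape[0], xte.shape[0])  # 8 2
```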
Implementation
Media Upload:
Users can upload both images and videos. The system supports multiple file types
and dynamically previews the upload.
Face Extraction:
Once the file is uploaded, faces are automatically detected and extracted using the
face_recognition library. For videos, multiple frames are processed to capture varied
expressions.
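One simple way to process multiple frames from a video is to sample evenly spaced frame indices; the helper below is an assumed sketch of that strategy, not the project's exact sampling code:

```python
def sample_frame_indices(total_frames, n_samples):
    """Pick n_samples evenly spaced frame indices from a video,
    so extracted faces cover varied expressions across the clip."""
    if total_frames <= n_samples:
        return list(range(total_frames))
    step = total_frames / n_samples
    return [int(i * step) for i in range(n_samples)]

print(sample_frame_indices(300, 5))  # [0, 60, 120, 180, 240]
```

Each sampled frame would then be passed through the same face extraction step used for still images.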
Feature Processing:
Extracted faces are resized and processed through ResNet50 to obtain feature vectors.
These features are averaged and passed to the TensorFlow model for prediction.
Result Display:
The system displays the prediction (Manipulated or Original) along with a score. If
deepfake manipulation is detected, extracted faces are also presented. Additionally, the
uploaded file is previewed for validation.
Future Scope
Future Enhancements:
Future work might include:
- Incorporating additional modalities to improve detection accuracy.
- Expanding the dataset to include more varied examples.
- Refining the model for real-time detection applications.
- Integrating the system with social media platforms for automatic content verification.
Potential Impact:
These enhancements aim to further secure digital media and combat the spread of
misinformation.
Conclusion:
In summary, our deepfake detection model effectively distinguishes between genuine and
manipulated content through a sophisticated pipeline involving face detection, feature
extraction, and classification.
Future Outlook:
Continued vigilance and innovation are essential as deepfake technology evolves,
necessitating adaptable strategies that protect authentic discourse.
THANK YOU