Deepfake Image Detection
Background
➢ Sensor Pattern Noise (SPN) is an intrinsic artifact resulting from imperfections in
camera sensors. It is unique to each camera, effectively functioning as a digital
fingerprint. This characteristic makes SPN invaluable for tracing the origin of digital
images, particularly in fields such as forensics and media-integrity verification.
Objectives
➢ The primary goal of this work is to develop a machine learning model that utilizes
SPN for camera identification. By harnessing SPN's unique properties, the study
aims to showcase its practical applications in digital forensics for source tracing
and authenticity verification to detect potential tampering in multimedia content.
Methodology - Preprocessing & Dataset Splits
❑ Preprocessing
➢ Resizing: All images were resized to 128x128 pixels to maintain uniformity and
optimize computational efficiency.
➢ SPN Extraction: Sensor Pattern Noise (SPN) was extracted using high-pass
filtering, isolating the unique noise patterns from image content.
➢ Format Conversion: Images were standardized into RGB format to align with
the input requirements of the deep learning model.
❑ Dataset Splits
The dataset was divided into the following proportions for model training and
evaluation:
➢70% Training: Used for model learning.
➢15% Validation: For tuning and preventing overfitting.
➢15% Testing: For assessing model performance.
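The 70/15/15 split above can be sketched by applying scikit-learn's `train_test_split` twice; the file names and labels below are placeholders, not the actual dataset.

```python
# Hypothetical 70/15/15 stratified split; paths and labels are illustrative.
from sklearn.model_selection import train_test_split

paths = [f"img_{i}.png" for i in range(100)]   # placeholder file names
labels = [i % 5 for i in range(100)]           # 5 camera (phone) classes

# First carve off 70% of the data for training.
train_x, rest_x, train_y, rest_y = train_test_split(
    paths, labels, train_size=0.70, stratify=labels, random_state=42)

# Split the remaining 30% evenly: 15% validation, 15% test overall.
val_x, test_x, val_y, test_y = train_test_split(
    rest_x, rest_y, train_size=0.50, stratify=rest_y, random_state=42)

print(len(train_x), len(val_x), len(test_x))  # 70 15 15
```

Stratifying both splits keeps the per-camera class balance consistent across the three subsets.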
Methodology - SPN Extraction
The Sensor Pattern Noise (SPN) extraction process involves the following steps:
❖ Convert to Grayscale
Images are converted to grayscale to eliminate color information and focus on
the luminance channel, which contains the SPN.
❖ Apply Gaussian Blur
A Gaussian blur is applied to smooth the image and suppress high-frequency
content unrelated to SPN.
❖ Perform High-Pass Filtering
High-pass filtering is used to isolate the SPN by removing low-frequency
components, such as general image content and lighting variations.
❖ Expand to RGB Format
The extracted SPN is expanded back into an RGB format, ensuring
compatibility with the input requirements of the DenseNet121 deep learning
model.
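The four extraction steps above can be sketched as follows; the NumPy/SciPy implementation and the Gaussian sigma are assumptions, as the original pipeline may have used different libraries and filter parameters.

```python
# A minimal sketch of the SPN extraction pipeline; sigma=2.0 is an assumption.
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_spn(rgb: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Return a 3-channel SPN estimate from an HxWx3 uint8 image."""
    # 1) Convert to grayscale: keep only the luminance channel, where SPN lives.
    gray = rgb.astype(np.float32) @ np.array([0.299, 0.587, 0.114], np.float32)
    # 2) Gaussian blur: a low-pass estimate of the smooth scene content.
    smooth = gaussian_filter(gray, sigma=sigma)
    # 3) High-pass filtering: image minus its low-pass version ~ sensor noise.
    residual = gray - smooth
    # 4) Expand back to 3 channels for DenseNet121's RGB input.
    return np.repeat(residual[..., None], 3, axis=-1)

spn = extract_spn(np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8))
print(spn.shape)  # (128, 128, 3)
```

Subtracting the blurred image from the original is the simplest form of the high-pass residual described above.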
Model Architecture
The classifier is built on the DenseNet121 deep learning architecture, taking the
128x128 RGB SPN maps as input.
Results
Classification report on the test set (75 images):
              Precision  Recall  F1 Score  Support
accuracy                           0.87       75
macro avg       0.86      0.87     0.86       75
weighted avg    0.88      0.87     0.87       75
Per-class performance:
Phone      Precision  Recall  F1 Score
Phone 1      0.94      0.80     0.86
Phone 2      0.82      0.69     0.75
Phone 3      0.71      1.00     0.83
Phone 4      0.92      0.86     0.89
Phone 5      0.90      1.00     0.95
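For reference, a DenseNet121 classifier for five camera classes could be set up as in the minimal Keras sketch below; the pooling/dense head and hyperparameters are assumptions, not the authors' exact configuration.

```python
# A hedged sketch of a DenseNet121-based camera classifier (Keras).
import tensorflow as tf

def build_model(num_classes: int = 5) -> tf.keras.Model:
    # DenseNet121 backbone on 128x128 RGB SPN maps; weights=None avoids
    # downloading ImageNet weights for this sketch.
    base = tf.keras.applications.DenseNet121(
        include_top=False, weights=None, input_shape=(128, 128, 3))
    # Assumed head: global average pooling + softmax over camera classes.
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
print(model.output_shape)  # (None, 5)
```

Training would then call `model.fit` on the SPN maps from the 70% training split, with the 15% validation split monitoring overfitting.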
Future Work
➢ Expand Dataset
Future efforts will focus on expanding the dataset by incorporating more
camera models to address dataset imbalance and enhance the model's
generalization across diverse devices.
➢ Explore Advanced Architectures
Exploring advanced architectures, such as EfficientNet or Vision
Transformers, could improve accuracy and efficiency by leveraging state-
of-the-art techniques in image classification.
➢ Real-time Deployment
The goal is to develop a real-time deployment of the model, enabling its
use in practical scenarios such as digital forensics and social media
platforms, where rapid camera source identification and authenticity
checks are critical.
Conclusion
➢ SPN and DenseNet121
The combination of Sensor Pattern Noise (SPN) and the DenseNet121
architecture has proven to be an effective approach for camera identification,
achieving a high level of accuracy in distinguishing devices based on unique
sensor noise patterns.
➢ Practical Applications
This method holds significant promise for digital forensics and ensuring
media authenticity, providing a reliable way to trace the origins of images and
detect potential manipulation in various contexts.
➢ Future Improvements
While the current model demonstrates strong performance, further
improvements can focus on enhancing generalization to diverse camera
types and increasing efficiency for real-time deployment in practical, large-
scale applications.
References
➢ Lukáš, J., Fridrich, J., & Goljan, M. (2006). "Digital camera identification
from sensor pattern noise." IEEE Transactions on Information Forensics
and Security, 1(2), 205-214.
➢ Zhang, Y., & Yu, H. (2019). "Camera model identification using
convolutional neural networks." Journal of Visual Communication and
Image Representation, 62, 120-126.
➢ Huang, G., Liu, Z., Van Der Maaten, L., & Weinberger, K. Q. (2017). "Densely
connected convolutional networks." Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition (CVPR).
➢ Kaggle. (n.d.). "Camera model identification dataset." Retrieved from
https://www.kaggle.com/.
➢ Goodfellow, I., Bengio, Y., & Courville, A. (2016). "Deep Learning." MIT Press.