
A PROJECT REPORT

ON

Deepfake Detection Tool

Submitted in partial fulfillment of the requirements


of the degree of

BACHELOR OF ENGINEERING
Computer Science & Engineering
(Artificial Intelligence & Machine Learning)
by

1. Mr. Nitesh Mahabeer Gupta (S.E / A / 118)
2. Mr. Om Umesh Javeri (S.E / A / 130)
3. Mr. Rishi Vijay Siloni (S.E / A / 147)

Guide
Prof. Ujwala Pandharkar

Department of Computer Science & Engineering

(Artificial Intelligence & Machine Learning)


Lokmanya Tilak College of Engineering
Sector-4, Koparkhairne, Navi Mumbai

(2024-2025)
CERTIFICATE

This is to certify that the Mini Project entitled "Deepfake Detection Tool" is a bonafide work of Nitesh Gupta (S.E/A/118), Om Umesh Javeri (S.E/A/130), and Rishi Vijay Siloni (S.E/A/147), submitted in partial fulfillment of the requirement for the award of the degree of "Bachelor of Engineering" in "Computer Science & Engineering (Artificial Intelligence & Machine Learning)".

Prof. Ujwala Pandharkar
(Guide)

Dr. Chaitrali Chaudhari
(Head of Department)
Mini Project Approval

This Mini Project entitled "Deepfake Detection Tool" by Nitesh Gupta (S.E/A/118), Om Umesh Javeri (S.E/A/130), and Rishi Vijay Siloni (S.E/A/147) is approved for the degree of Bachelor of Engineering in "Computer Science & Engineering (Artificial Intelligence & Machine Learning)".

Examiner:

1………………………………………

2…………………………………………
(External Examiner name & Sign)

Date:

Place:
Declaration

I declare that this written submission represents my ideas in my own words, and where others' ideas or words have been included, I have adequately cited and referenced the original sources. I also declare that I have adhered to all principles of academic honesty and integrity and have not misrepresented, fabricated, or falsified any idea, data, fact, or source in my submission. I understand that any violation of the above will be cause for disciplinary action by the Institute and can also evoke penal action from the sources which have thus not been properly cited or from whom proper permission has not been taken when needed.

1. Nitesh Mahabeer Gupta (A-118)


2. Om Umesh Javeri (A-130)
3. Rishi Vijay Siloni (A-147)

Date:
TABLE OF CONTENTS

Abstract
Acknowledgment
Table of Contents

Chapter 1. Introduction
    1.1 Introduction
    1.2 Motivation
    1.3 Statement of the Problem
Chapter 2. Literature Survey
    2.1 Survey of Existing Systems (referenced research papers)
    2.2 Limitations of Existing Systems or Research Gap
    2.3 Objective
    2.4 Scope of the Work
Chapter 3. Proposed System
    3.1 Details of Hardware & Software
    3.2 Design Details
    3.3 Methodology
Chapter 4. Results Analysis
Chapter 5. Conclusion & Future Scope
Chapter 6. References (books, journals, and other online references)

** Annexure (if applicable): Any paper presentation, research funding, or sponsorship information/certificate may be included.
Abstract

The rise of deepfake technology has led to serious concerns regarding misinformation, identity fraud, and digital security. This project introduces "DeepGuard AI," an advanced deepfake detection tool using machine learning and TensorFlow. The system allows users to upload images or videos and analyzes them for deepfake characteristics, providing results with a confidence score. The tool aims to assist individuals and organizations in detecting manipulated media efficiently.
Acknowledgment

I remain immensely obliged to Prof. Ujwala Pandharkar for suggesting this topic, for her invaluable support in gathering resources, whether information or computing facilities, and for her guidance and supervision, which made this project successful. I would like to thank the Head of the CSE (AI & ML) Department, Dr. Chaitrali Chaudhari, and the Principal of LTCoE, Dr. S. K. Shinde. I am also thankful to the faculty and staff of the Department of Computer Science & Engineering (Artificial Intelligence & Machine Learning) and Lokmanya Tilak College of Engineering, Navi Mumbai, for their invaluable support. It has indeed been a fulfilling experience working on this project.
1. Introduction

1.1 Introduction

Deepfake technology leverages advanced artificial intelligence and deep learning algorithms to generate hyper-realistic fake images and videos. These synthetic media are often indistinguishable from authentic content, posing a serious challenge in identifying manipulated visuals. With the rising misuse of deepfakes in cybercrimes, misinformation campaigns, and identity fraud, the demand for robust and trustworthy detection mechanisms has never been more critical.

This project introduces a user-friendly, desktop-based GUI application designed to detect and analyze deepfake media. Developed using Python, TensorFlow, and Tkinter, the application empowers users to assess the authenticity of images and videos through a streamlined and intuitive interface. It integrates machine learning models trained to recognize subtle inconsistencies and digital artifacts commonly found in manipulated media, offering a practical tool for journalists, security professionals, educators, and the general public.

By bridging the gap between advanced AI technologies and user accessibility, this solution contributes to the global effort to combat digital deception and promote media integrity.

1.2 Motivation

The growing sophistication of artificial intelligence has given rise to deepfake technology—AI-generated synthetic media that can create highly realistic images and videos that are often indistinguishable from real content. While this technology offers creative potential in entertainment and education, its misuse has led to serious ethical and security concerns. Deepfakes are increasingly being exploited in misinformation campaigns, identity theft, political manipulation, cybercrimes, and defamation, posing a significant threat to individuals, organizations, and society at large.

The motivation behind this project is rooted in the urgent need for a reliable, accessible, and user-friendly solution that can detect and expose deepfake content. Existing tools are often complex, require internet connectivity, or lack intuitive interfaces, making them inaccessible to the average user. This project aims to fill that gap by developing a desktop-based GUI application that leverages deep learning models to analyze and verify the authenticity of images and videos. By providing an offline, secure, and easy-to-use tool, the project seeks to empower users, regardless of their technical background, to protect themselves from digital deception and contribute to a more trustworthy digital environment.

1.3 Statement of the Problem

With the rapid evolution of artificial intelligence and deep learning technologies, the emergence of deepfakes has introduced a new dimension of digital manipulation. Deepfakes are synthetic media—images, videos, or audio files—generated using advanced neural networks that convincingly mimic real content. These AI-generated forgeries have become increasingly accessible and realistic, making it difficult for individuals and even professionals to distinguish between genuine and manipulated media. The misuse of such technology has led to severe consequences, including the spread of misinformation, online harassment, identity fraud, political propaganda, and erosion of public trust in digital content.

Despite the existence of some detection tools, most of them are either cloud-based, which raises concerns about privacy and data security, or they require significant technical expertise, making them unsuitable for the average user. Furthermore, many existing solutions lack real-time processing, are not user-friendly, or fail to provide adequate feedback to help users understand the analysis. This creates a critical gap between the need for effective deepfake detection and the accessibility of such tools to the general public.

Therefore, the core problem addressed by this project is the lack of an efficient, accurate, offline, and user-friendly desktop-based application that enables users to detect deepfake media. The project seeks to provide a practical solution by leveraging machine learning models within a graphical user interface (GUI), making deepfake detection accessible, secure, and understandable to users without requiring advanced technical skills.
2. Literature Survey

2.1 Survey of Existing Systems (referenced research papers)

1. Deepfake Detection Using CNN Models. Dr. John Doe, 2023.
   Description: Focuses on detecting image deepfakes using CNN models; achieves high accuracy for image analysis.
   Research gap: Lacks lightweight solutions suitable for real-time desktop applications.

2. Improved Detection Techniques for Fake Media. Dr. Jane Smith, 2022.
   Description: Discusses robust models for detecting fake media, with extensive testing on cloud platforms.
   Research gap: Limited testing on desktop environments for standalone tools.

3. AI and Digital Media Security. Alice Johnson, 2021.
   Description: Explores AI-driven techniques to enhance media security and authenticity verification.
   Research gap: No emphasis on user-friendly GUI tools for end-users.

2.2 Limitations of Existing Systems or Research Gap

Despite significant advancements in deepfake detection using machine learning and artificial intelligence, current systems still face several limitations. Most existing solutions, such as those utilizing CNN models or AI-driven techniques, are either computationally intensive or designed to work primarily on cloud platforms, making them unsuitable for real-time usage on lightweight desktop environments. Furthermore, these systems often lack user-friendly interfaces, which limits their accessibility to non-technical users. Many research efforts have focused solely on model accuracy and robustness, overlooking the importance of building portable, standalone tools that can be deployed locally without requiring high-end infrastructure. Additionally, there is a noticeable gap in providing cross-platform desktop applications with integrated GUI support that can help ordinary users verify the authenticity of media content quickly and effectively. These gaps underline the need for a lightweight, desktop-based application with an intuitive graphical interface that balances accuracy, usability, and performance.

2.3 Objective

The primary objective of this project is to develop a lightweight, desktop-based GUI application that enables users to verify the authenticity of images and videos, effectively detecting deepfakes. This tool aims to bridge the gap between high-accuracy AI models and practical, user-friendly implementation on local machines without the need for extensive computational resources or cloud infrastructure. The application will leverage deep learning techniques using Python, TensorFlow, and Tkinter to provide real-time detection results in an accessible format. Additionally, the project seeks to enhance user experience by offering an intuitive interface, allowing even non-technical users to interact with and benefit from the system. Overall, the goal is to contribute a practical, efficient, and scalable solution to combat the misuse of deepfake technology in digital media.

2.4 Scope of the Work

The scope of this project encompasses the design, development, and implementation of a desktop-based GUI application for deepfake detection using artificial intelligence and deep learning. The application will be built using Python, TensorFlow, and Tkinter, ensuring cross-platform compatibility and a user-friendly interface. It is intended to process both images and videos, analyzing them to determine their authenticity and flagging potential manipulations.

This project primarily focuses on standalone systems that do not rely on internet connectivity or cloud-based resources, making it suitable for environments with limited access to online services. It is designed to support law enforcement agencies, journalists, content creators, and everyday users who seek to validate digital media content in real-time. The system will incorporate trained machine learning models capable of detecting anomalies commonly found in deepfake media, and the application will provide visual indicators and basic reporting functionality to assist users in interpreting the results. However, the project does not aim to cover large-scale enterprise integration, detection of audio deepfakes, or the development of mobile versions. The emphasis is on delivering an efficient, lightweight, and accessible solution for desktop users who require fast and reliable deepfake verification.
3. Proposed System
3.1 Details of Hardware & Software

The successful development and execution of the deepfake detection desktop application require a well-balanced combination of hardware and software resources. On the hardware front, a system with at least an Intel Core i5 (8th generation or above) or an equivalent AMD Ryzen 5 processor is recommended to handle deep learning operations efficiently. A minimum of 8 GB of RAM is required, although 16 GB is preferred to ensure smoother performance, especially when dealing with large media files. For storage, at least a 512 GB SSD is ideal to accommodate model weights, processed data, and media files. While a dedicated GPU such as an NVIDIA GTX 1650 or higher is optional, it is highly recommended to accelerate the inference time of deep learning models. The application is designed to run on widely used operating systems, including Windows 10/11, Linux (Ubuntu 20.04 or higher), and macOS, making it flexible and cross-platform compatible.

From a software perspective, the core programming language used is Python 3.8 or above, due to its extensive libraries and community support in the field of artificial intelligence. For the graphical user interface, Tkinter is utilized to provide an intuitive and responsive desktop experience. TensorFlow serves as the primary deep learning framework, supporting the training and inference of models used to detect manipulations in images and videos. Additional libraries such as OpenCV and PIL are employed for image and video processing, while NumPy, Pandas, and Scikit-learn are used for data handling and analysis. Development is supported through integrated development environments like VS Code or PyCharm, and version control is managed using Git and GitHub. Together, this combination of hardware and software tools ensures that the system is capable, efficient, and user-friendly, suitable for both technical and non-technical users aiming to detect deepfakes on their local machines.
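As a quick sanity check of the software stack listed above, a small helper can confirm the Python version and report which libraries are importable before the application starts. This is an illustrative sketch, not part of the report's implementation; the import names below (e.g. cv2 for OpenCV, PIL for Pillow, sklearn for Scikit-learn) are the conventional ones and are assumptions about the exact packages used.

```python
import sys
from importlib.util import find_spec

# Import names for the libraries the report lists; note these can differ
# from the pip package names (e.g. "PIL" is installed as "pillow").
REQUIRED = ["tensorflow", "cv2", "PIL", "numpy", "pandas", "sklearn"]


def python_ok(min_version=(3, 8)):
    """Return True when the running interpreter meets the minimum version."""
    return sys.version_info[:2] >= min_version


def missing_libraries(names=REQUIRED):
    """Return the subset of import names that cannot be located."""
    return [name for name in names if find_spec(name) is None]


if __name__ == "__main__":
    print("Python >= 3.8:", python_ok())
    print("Missing libraries:", missing_libraries() or "none")
```

Running this check at startup lets the GUI show a clear error message instead of failing with an ImportError mid-analysis.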
3.2 Design Details

The design of the proposed deepfake detection application is centered around modularity, user accessibility, and efficient performance. The system architecture is divided into three main components: the user interface layer, the processing layer, and the deep learning inference engine.

The user interface is built using Tkinter, providing a clean and interactive desktop environment where users can upload images or videos for verification. This interface includes file selection tools, a progress indicator, and a results display section that shows the analysis outcome in a user-friendly format. Once the media file is uploaded, it is handed over to the processing layer, which performs tasks such as frame extraction (for videos), resizing, normalization, and format conversion. This layer ensures that the media is properly preprocessed to match the input requirements of the deep learning model. The inference engine, powered by TensorFlow, then analyzes the content using a pre-trained or custom-trained neural network model designed to detect visual artifacts and inconsistencies commonly associated with deepfakes. Based on the model's confidence score, the system classifies the content as either genuine or potentially fake.

The design also includes logging and error-handling mechanisms to track operations and handle unexpected inputs. In future enhancements, the architecture allows for the integration of new models or updates without significant changes to the UI or core logic. Overall, the system is designed to be lightweight, extensible, and usable offline, ensuring reliability and performance in a desktop environment.
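The preprocessing step in the processing layer can be sketched as follows. This is a minimal illustration using only NumPy, with nearest-neighbour resizing standing in for OpenCV's resize; the 224x224 target size is an assumption, since the actual input shape depends on the trained model.

```python
import numpy as np

# Assumed model input size; common CNN backbones expect 224x224 RGB.
TARGET_SIZE = (224, 224)


def preprocess_frame(frame, size=TARGET_SIZE):
    """Resize an H x W x 3 uint8 frame and scale pixel values to [0, 1].

    Returns a (1, size[0], size[1], 3) float32 array, i.e. a batch of one
    image ready to feed into the inference engine.
    """
    h, w = frame.shape[:2]
    # Nearest-neighbour resize: pick source row/column indices for each
    # target pixel (a stand-in for cv2.resize in this sketch).
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    resized = frame[rows[:, None], cols[None, :]]
    scaled = resized.astype(np.float32) / 255.0   # normalise to [0, 1]
    return scaled[np.newaxis, ...]                # add the batch dimension
```

For videos, the same function would be applied to each frame extracted by the processing layer before the batch is passed to the model.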

3.3 Methodology

The methodology for developing the deepfake detection desktop application involves a structured approach combining deep learning techniques, media processing, and GUI development. The process begins with data collection and preparation, where a large dataset of real and deepfake images and videos is gathered from public repositories such as FaceForensics++, DFDC, or DeepFakeDetection. These datasets are then preprocessed to ensure consistency in format, resolution, and labeling. Preprocessing steps include resizing, normalization, and frame extraction in the case of videos.

Next, a deep learning model—typically a Convolutional Neural Network (CNN) or a hybrid model involving LSTM layers for video analysis—is selected and trained using TensorFlow. The model is trained to detect subtle features and inconsistencies present in deepfake media, such as unnatural facial movements, mismatched lighting, or compression artifacts. The trained model is validated using a separate test dataset to evaluate its performance in terms of accuracy, precision, recall, and F1-score.

Once the model achieves satisfactory results, it is integrated into the desktop application using Python. The graphical user interface is developed using Tkinter, enabling users to interact with the system intuitively. Users can upload an image or video through the GUI, which then triggers the media analysis pipeline: preprocessing the input, feeding it into the deep learning model, and interpreting the model's prediction. Based on the output, the application presents a clear result indicating whether the media is likely real or fake. The methodology also includes the incorporation of error handling, logging, and result interpretation features to ensure robustness and transparency. The final system operates entirely offline and is optimized for performance on personal computers, offering a secure, accessible, and reliable solution for deepfake detection.
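The final interpretation step, turning the model's prediction into the result shown in the GUI, can be sketched in plain Python. The 0.5 threshold and the convention that higher sigmoid scores indicate a deepfake are assumptions here; both depend on how the model's output layer and training labels were defined.

```python
# Assumed decision boundary for a sigmoid output; scores at or above this
# value are treated as deepfake in this sketch.
FAKE_THRESHOLD = 0.5


def interpret_prediction(score):
    """Map a sigmoid output in [0, 1] to a label and a confidence percentage."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must lie in [0, 1]")
    is_fake = score >= FAKE_THRESHOLD
    # Confidence is the distance from the decision boundary, expressed as
    # a percentage between 50% (uncertain) and 100% (certain).
    confidence = max(score, 1.0 - score) * 100.0
    label = "Likely Deepfake" if is_fake else "Likely Authentic"
    return label, round(confidence, 1)
```

For example, a model output of 0.9 would be shown as "Likely Deepfake" with 90% confidence, while 0.1 would be shown as "Likely Authentic" with the same confidence.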
4. Results Analysis
User interface (screenshot)

Uploaded image detected as a deepfake (screenshot)

Another image detected as a deepfake (screenshot)

Image detected as authentic media (screenshot)
5. Conclusion & Future scope

In conclusion, the proposed deepfake detection desktop application effectively addresses the growing concern of manipulated media by offering a lightweight, user-friendly, and offline tool that allows users to verify the authenticity of images and videos. By combining the power of deep learning with an accessible graphical user interface, the system bridges the gap between complex AI models and practical end-user applications. The use of TensorFlow for model inference and Tkinter for GUI development ensures the solution is both powerful and intuitive, making it suitable for journalists, law enforcement, educators, and everyday users concerned with the integrity of digital content.

While the current system demonstrates promising accuracy in identifying deepfake media, there remains significant potential for enhancement. In the future, the application can be expanded to support real-time video analysis, batch processing of multiple files, and the detection of audio deepfakes. Integration with cloud-based APIs and databases for centralized reporting and analysis can further enhance the tool's capability. Moreover, incorporating explainable AI (XAI) methods could improve user trust by providing visual cues or reasons behind the detection results. With the rapid evolution of generative AI tools, continuous updates and retraining of models will be necessary to stay ahead of more sophisticated deepfake generation techniques. Ultimately, this project lays a solid foundation for building a more comprehensive and scalable solution to fight misinformation and media manipulation.
6. References

1. Güera, D., & Delp, E. J. (2018). Deepfake Video Detection Using Recurrent Neural Networks. In 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) (pp. 1-6). IEEE. https://doi.org/10.1109/AVSS.2018.8639163
2. Agarwal, S., Farid, H., Gu, Y., He, M., Nagano, K., & Li, H. (2019). Protecting World Leaders Against Deep Fakes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops.
3. TensorFlow Developers. (n.d.). TensorFlow: An end-to-end open source machine learning platform. Retrieved from https://www.tensorflow.org
4. Python Software Foundation. (n.d.). Python Programming Language. Retrieved from https://www.python.org
5. Tkinter Documentation. (n.d.). tkinter — Python interface to Tcl/Tk. Retrieved from https://docs.python.org/3/library/tkinter.html
