

DEEPFAKE DETECTION

A Minor Project Report Submitted to

Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal


Towards partial fulfillment of the requirements for the award of
Bachelor of Technology
AIML

Submitted by
1. Unnayan Mishra (0818CL211063)
2. Krish Munat (0818CL211030)
3. Anurag Sharma (0818CL211011)
Under the supervision of
Pritesh Saklecha
Assistant Professor

Department of Artificial Intelligence & Machine Learning


Indore Institute of Science and Technology, Indore
Session 2023-2024

Indore Institute of Science and Technology, Indore (M.P.)

Department of Computer Science & Engineering


DECLARATION

We, Unnayan Mishra (0818CL211063), Krish Munat (0818CL211030) & Anurag
Sharma (0818CL211011), students of Bachelor of Technology in AIML discipline, Indore
Institute of Science and Technology, Indore (M.P.), hereby declare that the work presented in
this project entitled “DeepFake Detection” is the outcome of our own work, is bonafide and
correct to the best of our knowledge, and has been carried out taking care of Engineering
Ethics. The work presented here does not infringe any patented work and has not been
submitted to any other university or anywhere else for the award of any degree or any
professional diploma.

1. Krish Munat (0818CL211030)

2. Unnayan Mishra (0818CL211063)

3. Anurag Sharma (0818CL211011)

Date :

Indore Institute of Science and Technology, Indore (M.P.)

Department of Computer Science & Engineering


RECOMMENDATION

This is to certify that the work embodied in this project work entitled “DeepFake Detection”,
being submitted by Unnayan Mishra (0818CL211063) & Krish Munat (0818CL211030) in
partial fulfillment of the requirement for the award of Bachelor of Technology in AIML
discipline, to Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal (M.P.) during the
academic year 2023-24, is a record of a bonafide piece of work carried out by them under my
supervision and guidance.

Guide Name HOD AIML


(Pritesh Saklecha)
Project Guide

Forwarded by:

Dr. Keshav Patidar


Principal, IIST, Indore
Indore Institute of Science and Technology, Indore (M.P.)

Department of Computer Science & Engineering

CERTIFICATE

The Project entitled “DeepFake Detection” being submitted by Unnayan Mishra
(0818CL211063) & Krish Munat (0818CL211030) has been examined by us and is hereby
approved for the award of degree Bachelor of Technology in AIML discipline, for which it has
been submitted. It is understood that by this approval the undersigned do not necessarily endorse
or approve any statement made, opinion expressed or conclusion drawn therein, but approve the
dissertation only for the purpose for which it has been submitted.

(Internal Examiner) (External Examiner)


Date: Date:
ACKNOWLEDGEMENT

After the completion of this project, words are not enough to express my feelings about all
those who helped me reach my goal; above all is my indebtedness to the Almighty for
providing me this moment in life.
In this project we received constant support from Dean Academics Dr. Richa Gupta and
Head of Department Dr. Shweta Agrawal. I am also heartily indebted to the constant support
and guidance of Mr. Pritesh Saklecha. Without his guidance and scholarly suggestions, the
urge to bring out the best would not have been possible. I hope to propagate his scientific,
industrial, and professional fervour to the best of my abilities. His clear view and knowledge
helped during every phase of project development. His perpetual motivation, patience, and
excellent expertise in discussion during the progress of the project work have benefited us to
an extent that is beyond expression. The depth and breadth of his knowledge of the Computer
Engineering field made me realize that theoretical knowledge always helps to develop efficient
operational software, which is a blend of all core subjects of the field. He was a major support
to me throughout my project, being available with his ideas, inspiration, and encouragement.
It is through his masterful guidance that I have been able to complete my project.

I am also thankful to all the teaching and non-teaching staff of the Department of AIML, and
to the friends and people who helped me directly or indirectly in the successful completion of
this project.

The successful completion of a project is generally not an individual effort. It is an outcome of
the cumulative effort of many people, each having their own importance to the objective. This
section is a vote of thanks and gratitude towards all those persons who have directly or indirectly
contributed in their own special way towards the completion of this project.

Date:
1. Krish Munat 0818CL211030
2. Unnayan Mishra 0818CL211063
3. Anurag Sharma 0818CL211011
ABSTRACT

Title: Unveiling the Facets of Deep Fake Technology: Implications, Challenges, and
Countermeasures
The advent of deep fake technology has ushered in a new era of both fascination and concern.
This project endeavors to explore the multifaceted landscape of deep fake technology,
investigating its creation, application, and the ethical dilemmas it presents.
This research delves into the technical underpinnings of deep fake algorithms, examining their
capacity to generate highly realistic audiovisual content by leveraging machine learning and
neural networks. The project scrutinizes the diverse applications of deep fakes, encompassing
entertainment, politics, and cybercrime, while elucidating the potential ramifications on privacy,
misinformation, and trust in the digital age.
Moreover, this study critically evaluates the ethical implications and societal risks associated
with the proliferation of deep fake technology. It probes into the challenges posed by the
manipulation of media content and the erosion of truth, highlighting the urgent need for ethical
guidelines and legal frameworks to mitigate the negative consequences.
Furthermore, this project delineates the current landscape of countermeasures and detection
techniques against deep fake technology. It assesses the efficacy of existing methods and
proposes innovative approaches, including AI-based detection algorithms and media
authentication protocols, aimed at mitigating the spread and impact of deceptive content.
Ultimately, this research contributes a comprehensive overview of the burgeoning domain of
deep fake technology, encompassing its technical intricacies, societal implications, and potential
safeguards. By shedding light on its multifaceted dimensions, this project aims to foster a better
understanding of deep fakes and stimulate discourse on ethical and regulatory frameworks to
navigate this evolving technological landscape.

TABLE OF CONTENTS

TITLE PAGE NO.

Recommendation I

Certificate II
Project Approval Sheet III

Declaration IV

Acknowledgement V

Abstract VI

Table of Contents VII

List of Figures IX

List of Tables X

List of Graphs XI

List of Acronyms XII

1. INTRODUCTION 1

1.1 Introduction 1

1.2 Motivation 16

1.3 Summary 19

2. LITERATURE SURVEY 22

2.1 Background and Related Work

2.2 Survey Extraction

2.3 Problem Statement

2.4 Objective of Research Work

2.5 Solution Approach

2.6 Chapter Summary

3. PROPOSED METHODOLOGY
3.1 Proposed Method

3.2 Proposed Methodology Flow Chart

3.3 Summary 28

4. SYSTEM REQUIREMENT SPECIFICATIONS 28

4.1 Identification of Specifications 29

4.2 Hardware Requirements 29

4.3 Software Requirements 32

4.4 Tools/API/Any other specification (If any) 32

5. SYSTEM ANALYSIS, DESIGN AND IMPLEMENTATION 36

5.1 System Analysis 43

5.1.1 Identification of System Requirements 44

5.1.2 Functional Requirements 46

5.1.3 Non-Functional Requirements 46

5.1.4 Feasibility Analysis and Summary 47

5.2 System Design 48

5.2.1 Architecture/Component Diagram 48

5.2.2 Data Flow Diagrams (DFD) 49

5.2.2.1 Level 0 DFD 50

5.2.2.2 Level 1 DFD 51-52

5.2.3 UML Diagrams 53-54

5.2.3.1 Use-Case Diagram 55

5.2.3.2 Sequence Diagram 56


5.2.3.3 Activity diagram 57

5.2.3.4 Class Diagram 57

5.2.3.5 Deployment Diagram 58

6. RESULTS 59

6.1 Experimental Setup 60

6.2 Data Set (If any) 61

6.3 Results (Charts/Graphs/Tables) 62

6.4 Chapter Summary 62

7. CONCLUSION AND FUTURE WORK 63

7.1 Conclusion 64

7.2 Limitation 65

7.3 Future Work 66

REFERENCES 66

APPENDIX 67


PUBLICATIONS (If any)


LIST OF FIGURES

FIGURE NO. PAGE NO.

1.1 Inheritance relationship 12

3.1 Methodology Flow Chart 30

4.1 Screen Snapshot for Choice of the Measurement 32

4.2 Screen Snapshot for Cohesion Measurement 32

4.3 Screen Snapshot for Uploading the Source Code 33

4.4 Screen Snapshot for Upload Input C# Inheritance Source Code 33

4.5 Screen Snapshot for Upload Input C# Interface Source Code 34

4.6 Screen Snapshot for Applying Scanning on Both Source Code 34

4.7 Screen Snapshot for Calculation 35

4.8 Screen Snapshot of Calculate Total No of TCC LCC Value 36

4.9 Screen Snapshot for Choice of the Measurement 36

4.10 Screen Snapshot for Input C# Inheritance Source Code 37

4.11 Screen Snapshot for First Input Source Code Load 37

4.12 Screen Snapshot for Display of Calculate Total No of Classes 38

4.13 Screen Snapshot for Display List of All Class Names 38

4.14 Screen Snapshot for Display Total No of Inherited Classes 39

4.15 Screen Snapshot for Apply All Metrics And Weighted Output 39
4.16 Screen Snapshot for Display of Weighted Metrics Values 40

4.17 Screen Snapshot for Load Second Input Source Code 40

4.18 Screen Snapshot for Display Total Classes, List of Class, Interface Name 41

4.19 Screen Snapshot for Apply All Metrics And Calculate Metrics Values 41

4.20 Screen Snapshot for Display Calculate Total Sum of Metrics Values 42

4.21 Screen Snapshot for Display Comparison of Total Metrics Weighted Values 43

LIST OF TABLES
TABLE NO. PAGE NO.

Table 5.1 Calculated TCC and LCC Values for Both the Inputted Source Codes for Cohesion Measurement 48

Table 5.2 Calculated Total Metrics Weighted Values on Both Inputted Source Codes for Coupling Measurement 49
LIST OF GRAPHS

GRAPH NO. PAGE NO.

Graph 5.1 The comparison between inheritance and interface based on calculated data of TCC and LCC 49

Graph 5.2 The comparison between inheritance and interface based on calculated data of coupling metrics 50
LIST OF ACRONYMS

OOP Object Oriented Programming

TCC Tight Class Cohesion

LCC Loose Class Cohesion

CK Chidamber and Kemerer

CBO Coupling Between Objects

NOC Number of Children

DIT Depth of Inheritance Tree

NOASSocC Number of Associations per Class

NDepIN Number of Dependencies In Metric

NDepOUT Number of Dependencies Out Metric

NP(C) Number of Total Pairs of Methods in a Class


NDC Number of Direct Methods in a Class

NIC Number of Indirect Methods in a Class


Dear Students,

Please find the attachment for the guidelines and format of the Minor Project Report.

No. of copies required: N (for example, if there is a group of 4 members, then 4 copies will have
to be submitted).

Color of report (only spiral binding is required).

Note:
Margins must be the same as in the given format

Authority names may be changed

Places with underlines should be replaced with the main content, such as name, roll number, title, branch, etc.

Font size should be proper (12)

Headings (12, bold)

Indentation should be proper

Numeric page numbers should start from Chapter 1; Roman numerals are used before that

The number of chapters & their names can vary as per the instructions of the guide

Referencing must be of the form:

[1] Author Name 1, Author Name 2 & Author Name 3, “Paper, Article, Book or Content full
title here,” in Conference Name (or Journal Name, Proceedings Name or Book Name), ISSN
No.: ____-____, Vol. _, Issue _, pp. __-__, Month Year.

Other changes may be made as suggested by your guide.

Note: The format was made only by analyzing various reports & is not one authorized by the
institution. Changes can be made as suggested after consulting the guide, department heads &
institution heads.

This file is generated only to help students & is not related to, or instructed by, any dignitary
of the institution.

Suggestions are always welcome for further improvements.

The given format includes a few names which belong to the CS/IT departments; these can be
changed as per the instructions of the in-charge, institution heads & guide.
Introduction:
In recent years, the advent of deep learning and artificial intelligence has ushered
in a new era of technological advancement, offering unprecedented capabilities
and opportunities for both innovation and deception. Among the many
applications and challenges that AI has brought to the forefront, deepfake
technology has emerged as a particularly significant and pressing issue. Deepfakes
are digitally altered media, often in the form of images or videos, that
convincingly manipulate the likeness and actions of individuals, making them
appear to say or do things they never did. This technology has raised concerns
about its potential for misinformation, privacy invasion, and even malicious
exploitation.

The rise of deepfake technology has far-reaching implications for individuals,
organizations, and society as a whole. As it becomes increasingly accessible and
sophisticated, the ability to create convincing deepfake content is no longer
limited to a select few with specialized knowledge; rather, it is now within the
reach of anyone with access to the right tools and data. This accessibility has
fueled concerns about the potential abuse of deepfakes for various malicious
purposes, such as spreading false information, impersonating public figures,
defaming individuals, and more.
To address these concerns and mitigate the risks associated with deepfakes, the
development of effective deepfake detection methods has become an urgent and
critical research area. Detecting deepfakes is a multifaceted challenge that
requires a combination of traditional computer vision techniques and cutting-edge
artificial intelligence. By identifying telltale signs of manipulation in digital
media, it is possible to distinguish authentic content from deepfakes.
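
To make the idea of combining computer vision and AI concrete, the following minimal sketch (in Python) illustrates one common detection pipeline: sample frames from a video, crop faces with OpenCV's bundled Haar cascade, and score each crop with a small convolutional neural network that outputs a probability of manipulation. This is only an illustrative assumption, not this project's final method: the network below is untrained as written, and the file name "sample_video.mp4", the crop size, and the frame-sampling interval are placeholders. A usable detector would first be trained on a labelled set of real and fake face crops.

# Minimal illustrative sketch of frame-level deepfake detection.
# Assumptions: OpenCV's Haar cascade for face detection, a small untrained
# Keras CNN as the classifier, and a hypothetical input file "sample_video.mp4".

import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE = 128  # assumed face-crop resolution

# Face detector shipped with OpenCV
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def extract_face_crops(video_path, every_nth_frame=10):
    """Sample frames from a video and return normalized face crops."""
    crops = []
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_nth_frame == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            for (x, y, w, h) in faces:
                face = cv2.resize(frame[y:y + h, x:x + w], (IMG_SIZE, IMG_SIZE))
                crops.append(face.astype("float32") / 255.0)
        idx += 1
    cap.release()
    return np.array(crops)

def build_classifier():
    """Small CNN that outputs P(fake) for a single face crop (untrained here)."""
    model = tf.keras.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_classifier()                       # would be trained before real use
    faces = extract_face_crops("sample_video.mp4")   # hypothetical input file
    if len(faces):
        scores = model.predict(faces)                # per-crop probability of manipulation
        print("Mean fake probability:", float(scores.mean()))

In practice, such per-frame scores would typically be aggregated across an entire video (for example, by averaging) before deciding whether the clip is authentic or manipulated, and the classifier would be trained on data appropriate to the deployment setting.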

This document serves as an introduction to the field of deepfake detection,
exploring the various techniques, strategies, and technologies that are currently
being used to combat the spread of manipulated media. We will delve into the
key characteristics of deepfakes, the motivations behind their creation, and the
potential consequences of their proliferation. Additionally, we will examine the
importance of addressing the deepfake problem, not only from a technological
perspective but also from a societal and ethical standpoint.

As we progress through this document, we will gain insights into the current state
of deepfake detection, its limitations, and the ongoing efforts to develop more
robust and reliable detection methods. By the end of this document, readers will
have a foundational understanding of the challenges and opportunities in the
field of deepfake detection and be better equipped to navigate the evolving
landscape of AI-generated deception.
Motivation:
Motivation for Deepfake Detection Project
The motivation behind undertaking a deepfake detection project is rooted in the
profound impact that deepfake technology can have on our society and the urgent
need to combat its potential misuse. Deepfakes pose a significant threat to truth,
privacy, and trust in a world increasingly reliant on digital media. Here are several
compelling reasons why this project is crucial:
1. Preserving Truth and Authenticity: In an era where information is
disseminated through digital media, the authenticity of audiovisual content is
paramount. Deepfakes can distort reality, undermine trust, and erode the
foundations of truth. Detecting and preventing the spread of manipulated
media is essential to safeguard the integrity of information and preserve trust
in digital content.
2. Misinformation and Disinformation: Deepfakes can be weaponized as tools
for spreading misinformation and disinformation. False narratives, fabricated
speeches, and deceptive actions attributed to public figures can sow
confusion, manipulate public opinion, and have serious real-world
consequences. Detecting deepfakes can help counteract these harmful
effects.
3. Privacy and Consent: Deepfake technology poses a grave threat to personal
privacy. Individuals can have their likeness and voice manipulated without
their consent, making them unwitting actors in fabricated scenarios. The
violation of privacy and potential harm to an individual's reputation make it
imperative to develop effective deepfake detection methods.
4. Security and Fraud Prevention: In an age where identity theft and online
scams are prevalent, deepfake detection can play a crucial role in preventing
fraudulent activities. By distinguishing between authentic and manipulated
content, it can thwart attempts to deceive and defraud individuals and
organizations.
5. Media Integrity: Deepfake technology undermines the integrity of visual and
auditory media. It jeopardizes the credibility of evidence in legal
proceedings, news reporting, and other contexts where media serves as a
record of events. Ensuring media authenticity is essential for upholding the
principles of justice and accountability.
6. Advancing AI and Computer Vision: Deepfake detection represents a
challenge at the forefront of artificial intelligence and computer vision
research. Developing robust detection methods necessitates pushing the
boundaries of AI technology and innovation. The project provides an
opportunity to advance the field and pioneer new techniques in the realm of
AI-driven security.
7. Social and Ethical Responsibility: As creators and consumers of technology,
we have a social and ethical responsibility to confront the ethical dilemmas
posed by deepfake technology. This project aligns with the broader mission
of using AI for the betterment of society and upholding ethical standards in
the development and use of advanced technology.

Summary:

The introduction to deepfake detection underscores the critical importance of
addressing the challenges posed by deepfake technology in today's digital age.
Deepfakes are convincingly manipulated media that can distort reality, spread
misinformation, violate privacy, and compromise the integrity of digital content.
This document aims to provide an overview of deepfake detection, its significance,
and the motivations behind tackling this issue.

The motivations for a deepfake detection project are multifaceted and compelling.
They include preserving truth and authenticity in a world reliant on digital media,
countering the spread of misinformation, safeguarding individual privacy and
consent, preventing fraud, maintaining media integrity, advancing AI and
computer vision research, and upholding social and ethical responsibilities.

In addressing these motivations, the project seeks to develop robust and innovative
deepfake detection methods, leveraging the latest advancements in artificial
intelligence and computer vision. By doing so, it contributes to the protection of
truth, privacy, and trust, and aligns with the broader mission of using technology
for the betterment of society and ethical technology development.
