
MOVIE RECOMMENDATION SYSTEM

A MINOR PROJECT REPORT

Submitted by

SONY (RA2211026020156)
SIVA SRAVANI (RA2211026020158)
MUKTHANANDA (RA2211026020138)
Under the guidance of

ABHIRAMI
(Designation, Department of Computer Science and Engineering)

in partial fulfilment for the award of the

degree of

BACHELOR OF TECHNOLOGY
in

COMPUTER SCIENCE AND ENGINEERING

of

FACULTY OF ENGINEERING AND TECHNOLOGY

SRM INSTITUTE OF SCIENCE AND TECHNOLOGY


RAMAPURAM, CHENNAI -600089

NOV 2024
SRM INSTITUTE OF SCIENCE AND TECHNOLOGY
(Deemed to be University U/S 3 of UGC Act, 1956)

BONAFIDE CERTIFICATE

Certified that this project report titled “MOVIE RECOMMENDATION SYSTEM”


is the bonafide work of RAVURI SONY [REG NO: RA2211026020156], SIVA
SRAVANI [REG NO: RA2211026020158], and MUKTHANANDA [REG NO:
RA2211026020138], who carried out the project work under my supervision.
Certified further that, to the best of my knowledge, the work reported herein does not
form part of any other project report or dissertation on the basis of which a degree or award
was conferred on an earlier occasion to this or any other candidate.

SIGNATURE                                          SIGNATURE

Name of the Supervisor                             Dr. K. RAJA, M.E., Ph.D.,
Designation                                        Professor and Head
Computer Science and Engineering,                  Computer Science and Engineering,
SRM Institute of Science and Technology,           SRM Institute of Science and Technology,
Ramapuram, Chennai.                                Ramapuram, Chennai.

Submitted for the project viva-voce held on___________ at


SRM Institute of Science and Technology, Ramapuram,
Chennai -600089.

INTERNAL EXAMINER EXTERNAL EXAMINER


SRM INSTITUTE OF SCIENCE AND TECHNOLOGY
RAMAPURAM, CHENNAI - 89

DECLARATION

We hereby declare that the entire work contained in this project report titled “MOVIE
RECOMMENDATION SYSTEM” has been carried out by RAVURI SONY [REG NO:
RA2211026020156], SIVA SRAVANI [REG NO: RA2211026020158], and MUKTHANANDA [REG NO:
RA2211026020138] at SRM Institute of Science and Technology, Ramapuram Campus,
Chennai - 600089, under the guidance of ABHIRAMI, Department of Computer Science
and Engineering.

Place: Chennai                                     RAVURI SONY
Date:                                              SIVA SRAVANI
                                                   MUKTHANANDA
ABSTRACT

A movie recommendation system is a machine learning-based application designed to suggest
films to users based on their preferences and behavior. By analyzing user interactions, ratings, and
viewing history, such a system predicts which movies a user is likely to enjoy. It typically employs
algorithms such as collaborative filtering, content-based filtering, or hybrid methods that integrate
both personal preferences and the behavior of similar users, and it may also draw on demographic
information and contextual factors such as time, location, and mood to refine its suggestions. This
technology plays a crucial role in enhancing the user experience on streaming platforms: it delivers
tailored content, improves engagement, and helps platforms retain users by reducing the effort
needed to discover new movies. Key challenges include addressing the cold-start problem, ensuring
diversity in recommendations, and maintaining scalability as the user base grows. Such systems are
widely used in video streaming services, e-commerce, and social media to improve content
discoverability and user satisfaction.

TABLE OF CONTENTS

Page.No

ABSTRACT iv

1 INTRODUCTION 1
1.1 Introduction 2
1.2 Problem Statement 2
1.3 Objective of the Project 2
1.4 Project Domain 2
1.5 Scope of the Project 2
1.6 Methodology 3

2 LITERATURE REVIEW 4

3 PROJECT DESCRIPTION 7
3.1 Existing System 7
3.2 Proposed System 7
3.3 Feasibility Study 8
3.3.1 Technical Feasibility 8
3.3.2 Economic Feasibility 9
3.3.3 Operational Feasibility 9
3.3.4 Legal and Ethical Feasibility 9
3.4 System Specification 10
3.4.1 Hardware Specification 10
3.4.2 Software Specification 10

4 PROPOSED WORK 11
4.1 General Architecture 11
4.2 Block Diagram 12
4.3 UML Diagram 13
4.4 Use Case Diagram 14
4.5 Module Description 16
4.5.1 MODULE 1: DATA COLLECTION AND TRAINING DATA 16
4.5.2 Step 1: Data Collecting 16
4.5.3 Step 2: Processing of Data 16
4.5.4 Step 3: Split the Data 18
4.5.5 Datasets Sample 18
4.5.6 Step 4: Building the Model 19
4.5.7 Step 5: Testing the Model 19
4.5.8 Step 6: Implementing the Model 20

5 IMPLEMENTATION AND TESTING 21


5.1 Input and Output 21
5.1.1 View of a person without mask 21
5.1.2 View of a person with mask 22
5.2 Testing 22
5.2.1 Types of Testing 22
5.2.2 Unit testing 22
5.2.3 Integration testing 23
5.2.4 Functional testing 24
5.2.5 Test Result 25
5.3 Testing Strategy 26

6 RESULTS AND DISCUSSIONS 27


6.1 Efficiency of the Proposed System 27
6.2 Comparison of Existing and Proposed System 27

7 CONCLUSION AND FUTURE ENHANCEMENTS 32


7.1 Conclusion 32
7.2 Future Enhancements 32

8 SOURCE CODE & POSTER PRESENTATION 35


8.1 Sample Code 35
8.2 Poster Presentation 39

References 39

Appendix (If Required)

A. Sample screenshots

B. Proof of Publication/Patent filed/ Conference Certificate


LIST OF FIGURES

4.1 Architecture Diagram 11


4.2 Block Diagram 12
4.3 UML Diagram 13
4.4 Use Case Diagram 14
CHAPTER 1
INTRODUCTION
1.1 Introduction
A Movie Recommendation System is a specialized software application that assists users in
discovering films tailored to their personal tastes and preferences. By analyzing a variety of factors,
such as viewing history, user ratings, and movie characteristics (e.g., genre, cast, director), these
systems use machine learning algorithms to generate personalized movie suggestions. Two primary
techniques employed in recommendation systems are content-based filtering, which recommends
films similar to those the user has liked in the past, and collaborative filtering, which makes
suggestions based on patterns found in the preferences of other users with similar tastes. Many
modern systems adopt a hybrid approach, combining both methods to deliver more accurate and
diverse recommendations. Widely used by platforms like Netflix and Amazon Prime, movie
recommendation systems enhance user experience by simplifying content discovery and keeping
viewers engaged with relevant content.
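The content-based idea described above, recommending films similar to those the user has liked, can be sketched with a simple genre-overlap (Jaccard) score. The catalogue, titles, and genres below are invented sample data for illustration, not part of the project's dataset:

```python
# Minimal content-based filtering sketch: rank unseen movies by how much
# their genre sets overlap with the genres of films the user already liked.

def jaccard(a, b):
    """Jaccard similarity between two sets (0.0 when both are empty)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Illustrative toy catalogue: title -> set of genres.
CATALOGUE = {
    "Inception": {"sci-fi", "thriller"},
    "Interstellar": {"sci-fi", "drama"},
    "The Notebook": {"romance", "drama"},
    "Alien": {"sci-fi", "horror"},
}

def recommend(liked_titles, top_n=2):
    """Rank movies the user has not seen by genre similarity to liked titles."""
    liked_genres = set()
    for title in liked_titles:
        liked_genres |= CATALOGUE[title]
    scored = [
        (jaccard(CATALOGUE[title], liked_genres), title)
        for title in CATALOGUE
        if title not in liked_titles
    ]
    scored.sort(reverse=True)
    return [title for _, title in scored[:top_n]]

print(recommend(["Inception"]))  # sci-fi titles rank first
```

A real system would replace the genre sets with richer feature vectors (cast, director, keywords) and a vector-space similarity, but the ranking logic is the same.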

1.2 Problem statement


The primary problem that a Movie Recommendation System aims to solve is the challenge of
content overload in the digital entertainment space. With the vast number of movies available across
streaming platforms, users often struggle to find content that aligns with their preferences. This leads
to decision fatigue, where users spend excessive time searching for movies they may enjoy,
potentially resulting in user disengagement or dissatisfaction.

1.3 Objective of the Project


The objective of a Movie Recommendation System is to design and implement a system that
provides personalized movie suggestions to users, enhancing their viewing experience and reducing
the time spent searching for relevant content.

1.4 Project Domain


The domain of the project is machine learning. Machine learning techniques have made
remarkable progress in fields such as computer vision and image processing. Machine learning uses
various algorithms chosen according to the requirements of the project.

1.5 Scope of the Project
The scope of a Movie Recommendation System encompasses the development and
deployment of a solution that can efficiently predict and suggest movies to users based on their
preferences and behavior patterns. This system will utilize various techniques such as content-based
filtering, collaborative filtering, and hybrid methods to deliver accurate recommendations. The
system is expected to handle large datasets, including user profiles, viewing histories, and movie
metadata (e.g., genres, actors, directors), to generate real-time suggestions. Additionally, it will
incorporate mechanisms to adapt to evolving user preferences by continuously learning from user
interactions. The recommendation system can be implemented across various platforms, including
streaming services, online movie rental sites, and personalized content platforms. Furthermore, the
scope extends to optimizing the system for scalability, allowing it to cater to a growing user base
while maintaining performance. Integration with user feedback loops, such as ratings and reviews,
will also be a key component in refining recommendations and improving the system’s accuracy
over time.

1.6 Methodology

The methodology for developing a Movie Recommendation System involves several


systematic steps aimed at delivering personalized movie suggestions to users. Initially, data
collection is conducted to gather user-specific information, such as viewing history and ratings,
alongside movie metadata like genres, cast, and release year. This data undergoes preprocessing,
including cleaning and normalization, to ensure consistency. Exploratory Data Analysis (EDA) is
performed to identify patterns in user preferences and movie characteristics. Next, various
recommendation techniques are employed, including content-based filtering, which focuses on the
similarities between movies, and collaborative filtering, which leverages user interactions and
preferences to find patterns among users. A hybrid approach that combines these methods is often
utilized to enhance recommendation accuracy. The models are then trained and evaluated using
metrics like Precision and Recall to ensure effectiveness. Following model validation, the system is
deployed to provide real-time recommendations, continuously adapting based on user interactions
and feedback. Finally, an intuitive user interface is integrated, enabling users to easily interact with
the system and refine their movie suggestions, ultimately enhancing user engagement and
satisfaction.
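The Precision and Recall metrics mentioned in the methodology can be computed directly from a recommended list and the set of movies the user actually enjoyed. The titles below are made-up sample data used only to show the calculation:

```python
# Evaluation sketch: Precision = hits / |recommended|, Recall = hits / |relevant|.

def precision_recall(recommended, relevant):
    """Return (precision, recall) for one user's recommendation list."""
    hits = len(set(recommended) & set(relevant))
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

recommended = ["Inception", "Alien", "The Notebook", "Up"]  # system output
relevant = ["Inception", "Up", "Interstellar"]              # user's true likes
p, r = precision_recall(recommended, relevant)
print(f"precision={p:.2f} recall={r:.2f}")  # 2 hits of 4 recs; 2 of 3 relevant
```

In practice these are averaged over all users, and a cutoff (Precision@k) is applied to the top of each ranked list.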

CHAPTER 2

LITERATURE REVIEW

CHAPTER 3

PROJECT DESCRIPTION

3.1 EXISTING SYSTEM

The existing movie recommendation system is typically built using collaborative filtering,
content-based filtering, or a hybrid approach that combines both. Collaborative filtering recommends
movies by analyzing patterns of user behavior, such as ratings, watch history, and preferences. It
identifies similarities between users or movies to suggest content based on what similar users have
enjoyed. Content-based filtering, on the other hand, focuses on the features of the movies
themselves, such as genre, director, actors, or themes, and recommends movies with similar
attributes to what a user has previously liked. Many modern systems incorporate machine learning
models to refine recommendations further, utilizing user data and feedback to continuously improve
accuracy and personalization. Popular platforms like Netflix and Amazon Prime also use hybrid
systems, blending both collaborative and content-based filtering to provide more comprehensive and
tailored movie suggestions.
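The collaborative filtering behaviour described above, finding users with similar rating patterns and borrowing their favourites, can be sketched over a toy ratings dictionary. The users, titles, and ratings are invented for illustration:

```python
# User-based collaborative filtering sketch: cosine similarity over the
# movies two users have both rated, then recommend the nearest user's
# highly rated, unseen titles.

from math import sqrt

RATINGS = {
    "alice": {"Inception": 5, "Alien": 4, "The Notebook": 1},
    "bob":   {"Inception": 5, "Alien": 5, "Interstellar": 4, "Up": 3},
    "carol": {"Up": 5, "Interstellar": 2},
}

def cosine(u, v):
    """Cosine similarity on co-rated movies; 0.0 when there is no overlap."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[m] * v[m] for m in common)
    norm_u = sqrt(sum(u[m] ** 2 for m in common))
    norm_v = sqrt(sum(v[m] ** 2 for m in common))
    return dot / (norm_u * norm_v)

def recommend_for(user, top_n=1):
    """Suggest unseen movies rated highly by the most similar other user."""
    others = [(cosine(RATINGS[user], RATINGS[o]), o)
              for o in RATINGS if o != user]
    _, nearest = max(others)
    unseen = {m: r for m, r in RATINGS[nearest].items()
              if m not in RATINGS[user]}
    return sorted(unseen, key=unseen.get, reverse=True)[:top_n]

print(recommend_for("alice"))
```

Production systems replace this pairwise scan with precomputed neighbourhoods or latent-factor models, but the neighbour-then-borrow logic is the core of the approach.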

3.2 PROPOSED SYSTEM

The proposed movie recommendation system aims to enhance personalization and accuracy
by integrating advanced machine learning techniques with deep learning algorithms. Unlike
traditional systems that rely solely on collaborative or content-based filtering, this system will
leverage neural networks to capture complex patterns in user behavior and movie features. By
incorporating user demographic data, emotional sentiment analysis from movie reviews, and real-
time feedback, the system will dynamically adjust recommendations to better align with user
preferences. Additionally, the proposed system will employ reinforcement learning to continuously
improve recommendations as more data becomes available. The goal is to create a more interactive
and adaptive recommendation process, delivering highly relevant and diverse movie suggestions to
users.

3.3 Feasibility Study


A feasibility study for a movie recommendation system evaluates its viability across different

dimensions, such as technical, economic, and operational factors.

1. Technical feasibility
2. Economic feasibility
3. Operational feasibility
4. Legal and ethical feasibility
1. Technical Feasibility
The technical feasibility of a movie recommendation system depends on the availability of
advanced technologies and infrastructure. The system would require access to large datasets
containing user preferences, movie metadata, and historical viewing patterns. Implementing machine
learning algorithms such as collaborative filtering, content-based filtering, and deep learning
techniques will also require robust computational resources, including cloud computing or
high-performance servers. Modern technologies like Apache Spark, TensorFlow, or PyTorch can
support scalable and efficient data processing.

2. Economic Feasibility
From an economic standpoint, the development and deployment of a movie recommendation
system entail costs related to infrastructure, personnel, and maintenance. Initial costs include the
procurement of servers, cloud storage, and machine learning tools, while ongoing costs involve
software maintenance, data storage, and algorithm updates. However, the benefits, such as increased
user engagement, customer retention, and potentially higher subscription revenue, make the
investment economically viable. Subscription models, advertising, or partnerships with studios can
further offset the costs, leading to long-term profitability.

3. Operational Feasibility
The operational feasibility hinges on the organization’s ability to integrate the system into its
current platform and maintain it efficiently. The system will require continuous monitoring and
optimization to ensure it delivers accurate recommendations based on real-time user data. Training
staff in machine learning, data science, and system maintenance is essential for smooth operations.
Additionally, the system should be user-friendly and seamlessly integrated into the platform’s UI/UX
to ensure ease of use.

4. Legal and Ethical Feasibility
A recommendation system must comply with data privacy laws such as GDPR or CCPA,
particularly regarding the collection and processing of user data. Ethical considerations, including
transparency in how recommendations are generated and avoiding algorithmic biases, are critical to
maintain user trust and comply with regulations. Implementing proper data encryption and security
measures will be essential to mitigate risks.

3.4 System Specification

3.4.1 Hardware Specification

1. Server Infrastructure:
   • Minimum: 4-core CPU, 16 GB RAM, 500 GB SSD
   • Recommended: 8-core CPU, 32 GB RAM, 1 TB SSD or higher (scalable depending on the number of users)

2. GPU (for deep learning models):
   • Minimum: NVIDIA GTX 1080 or equivalent
   • Recommended: NVIDIA Tesla V100 or A100 for large-scale workloads

3. Storage:
   • Minimum: 1 TB for storing movie metadata, user interaction data, and model artifacts
   • Recommended: 10 TB (especially for platforms with a large user base)

4. Network: High-speed internet connection for real-time processing and cloud services.

3.4.2 Software Requirements

1. Operating System:
   • Server: Linux (Ubuntu, CentOS) or Windows Server
   • Local development: Windows, macOS, or Linux

2. Programming Languages:
   • Python (for building machine learning models, API integration, etc.)
   • JavaScript or TypeScript (for front-end integration)
   • SQL/NoSQL (for database queries)

3. Database Management System (DBMS):
   • MySQL, PostgreSQL (for structured data like user information and ratings)
   • MongoDB, Cassandra (for unstructured or semi-structured user reviews and movie descriptions)

4. Machine Learning and Deep Learning Frameworks:
   • TensorFlow, PyTorch, Scikit-learn (for developing the recommendation algorithms)
   • Apache Spark (for distributed data processing at scale)

CHAPTER 4

PROPOSED WORK

The proposed work for the movie recommendation system aims to develop a personalized
platform that suggests films to users based on their preferences, viewing history, and trends within
the broader user base. The system will leverage collaborative filtering, content-based filtering, and
hybrid approaches to deliver recommendations tailored to individual tastes. Collaborative filtering
will identify patterns in user behavior, while content-based filtering will analyze movie attributes
such as genre, cast, and director to match user interests. Additionally, the hybrid model will combine
the strengths of both techniques, overcoming limitations like cold start issues and sparse data.
Advanced machine learning algorithms, such as matrix factorization or deep learning techniques,
will be used to optimize predictions and ensure scalability. The system will also incorporate user
feedback to refine suggestions continuously, providing a dynamic and engaging experience for users
across various demographics. Furthermore, attention will be given to the system's performance,
scalability, and security, ensuring it can handle a large dataset and protect user privacy.

4.1 General Architecture

Fig 4.1
The architecture diagram for the movie recommendation system outlines the key components

and data flow necessary for personalized recommendations. At the core, the system consists of a
User Interface Layer, where users interact with the platform via web or mobile applications,
providing inputs such as preferences and ratings. This layer connects to the Application Layer, which
processes these inputs and communicates with the Recommendation Engine. The recommendation
engine is powered by two main modules: Collaborative Filtering and Content-Based Filtering. The
Data Processing Layer handles large datasets, including user behavior, movie metadata, and
interaction logs. It processes and stores this data in a Data Storage Layer, which includes both
structured (relational databases) and unstructured data (NoSQL, cloud storage) for scalability.

4.2 BLOCK DIAGRAM

Fig 4.2

The block diagram for the movie recommendation system visually represents the flow of data
and the interaction between different modules to deliver personalized movie recommendations. It
begins with the User Interaction Block, where users provide inputs such as movie ratings,
preferences, and feedback through a user-friendly interface. This data is passed to the Preprocessing
Block, which cleanses, normalizes, and transforms the input for further analysis. The core of the
system lies in the Recommendation Engine Block, consisting of two key components: Collaborative
Filtering and Content-Based Filtering modules. These modules analyze both user behavior and
movie features to generate relevant suggestions.

4.3 UML Diagram

Fig 4.3

The UML diagram for the movie recommendation system captures the structural and
behavioral aspects of the system, detailing how different components interact to deliver personalized
recommendations. The Class Diagram represents the key entities such as User, Movie,
Recommendation Engine, and Rating. Each class has its attributes and methods; for example, the
User class contains attributes like user ID, preferences, and viewing history, while the Movie class
includes movie ID, genre, director, and cast details. The Recommendation Engine class is
responsible for generating recommendations, and the Rating class links users and movies through
ratings or reviews.
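The classes named in the UML description can be sketched as plain data classes. The attribute names follow the text above; the toy recommendation rule and the sample values are illustrative assumptions, not the project's actual implementation:

```python
# Sketch of the UML entities: User, Movie, Rating, and RecommendationEngine.

from dataclasses import dataclass, field

@dataclass
class Movie:
    movie_id: int
    genre: str
    director: str
    cast: list

@dataclass
class User:
    user_id: int
    preferences: list = field(default_factory=list)
    viewing_history: list = field(default_factory=list)

@dataclass
class Rating:
    """Links a user and a movie through a star rating, as the text describes."""
    user_id: int
    movie_id: int
    stars: int

class RecommendationEngine:
    def __init__(self, ratings):
        self.ratings = ratings

    def recommend(self, user):
        """Toy rule: return movie IDs this user rated 4 stars or more."""
        return [r.movie_id for r in self.ratings
                if r.user_id == user.user_id and r.stars >= 4]

engine = RecommendationEngine([Rating(1, 10, 5), Rating(1, 11, 2), Rating(2, 12, 4)])
print(engine.recommend(User(1)))  # [10]
```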

4.4 USE CASE DIAGRAM

Fig 4.4

The use case diagram for the movie recommendation system highlights the interaction
between users and the system, demonstrating the key functionalities that the system offers. The
primary actors in the diagram are Users and Admin. The User actor interacts with the system to
perform several use cases such as Search Movies, View Recommendations, Rate Movies, and
Update Profile. When a user searches for a movie or provides ratings, the system collects this
information to refine and deliver personalized recommendations through the Receive
Recommendations use case.


4.5 Module Description

Our entire project is divided into two modules.

4.5.1 MODULE1: DATA COLLECTION AND TRAINING DATA

Data Collection and training using Machine Learning Algorithms

4.5.2 Step 1: Data collecting

The development of the face mask recognition model begins with collecting the data. The
dataset contains training data of people who wear masks and people who do not.

Figure 4.6: Test Image

4.5.3 Step 2: Processing of data

The pre-processing section is a phase carried out before the training and testing of the data.
There are four steps in pre-processing: resizing the images, converting each image to an array,
pre-processing the input using MobileNetV2, and performing one-hot encoding on the labels.

Resizing the images is a vital pre-processing step in computer vision because it affects the
effectiveness of the training model. The next step is to process all the images in the dataset into
arrays; each image is converted to an array as it is read in a loop. After that, the arrays are
pre-processed using MobileNetV2's input function. The last step in this section is performing
one-hot encoding on the labels, because many machine learning algorithms, including this one,
cannot operate on categorical labels directly: they need all input and output variables to be
numeric. The categorical labels are therefore transformed into numerical labels that the
algorithm can understand.

Figure 4.7: Pre-processing of Data
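The one-hot encoding step above can be sketched in plain Python: map each class label to an index, then to a binary vector the model can consume. The label strings match the dataset categories; the helper itself is an illustrative stand-in for the library call used in the code:

```python
# One-hot encoding sketch: categorical labels -> numeric vectors.

def one_hot(labels):
    """Return (classes, vectors): the sorted class list and one-hot rows."""
    classes = sorted(set(labels))
    index = {c: i for i, c in enumerate(classes)}
    vectors = []
    for label in labels:
        row = [0] * len(classes)
        row[index[label]] = 1  # a single 1 marks this sample's class
        vectors.append(row)
    return classes, vectors

classes, vectors = one_hot(["with_mask", "without_mask", "with_mask"])
print(classes)   # ['with_mask', 'without_mask']
print(vectors)   # [[1, 0], [0, 1], [1, 0]]
```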

4.5.4 Step 3: Split the data

After the pre-processing phase, the data is split into two batches: training data (seventy-five
percent) and testing data (the remaining twenty-five percent). Each batch contains both
with-mask and without-mask images.
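The split described above can be sketched as a shuffled slice; a fixed seed keeps the split reproducible. The sample names are placeholders standing in for the image arrays and labels:

```python
# Train/test split sketch: shuffle, then slice off the test fraction.

import random

def split_data(samples, test_ratio=0.25, seed=42):
    """Shuffle a copy of the data, then split off the last test fraction."""
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    n_test = int(len(shuffled) * test_ratio)
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)

samples = [f"img_{i:02d}" for i in range(20)]
train, test = split_data(samples)
print(len(train), len(test))  # 15 5
```

The project's actual code uses scikit-learn's `train_test_split` with stratification, which additionally preserves the with-mask/without-mask class balance in both batches.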

4.5.5 DATASETS SAMPLE

Figure 4.8: Without mask

Figure 4.9: With Mask

4.5.6 Step 4: Building the model

The next phase is building the model. There are six steps in building the model:
constructing the training image generator for augmentation, building the base model with
MobileNetV2, adding model parameters, compiling the model, training the model, and finally
saving the model for future prediction.

4.5.7 Step 5: Testing the model

To make sure the model can predict well, it is evaluated on held-out data. The first step is
making predictions on the testing set.

4.5.8 Step 6: Implementing the model

The model is implemented on video. The video is read frame by frame, and the face
detection algorithm runs on each frame. If a face is detected, processing proceeds to the next
stage. The detected frames containing faces go through the same pre-processing as before:
resizing the image, converting it to an array, and pre-processing the input using MobileNetV2.
The next step is feeding the processed input to the previously saved model to predict the class
of the image. Finally, the video frame is labelled to show whether the person is wearing a mask
or not, along with the prediction percentage.

Chapter 5

IMPLEMENTATION AND TESTING

5.1 Input and Output

5.1.1 View of a person without mask

Figure 5.1: Person without mask

5.1.2 View of a person with mask

Figure 5.2: Person with mask

5.2 Testing

Testing is the process of evaluating a system or its component(s) with the intent
to find whether it satisfies the specified requirements or not.

5.2.1 Types of Testing

5.2.2 Unit testing

Unit testing is a software testing method in which individual units of source code are
tested to check the efficiency and correctness of the program.

Input

for category in CATEGORIES:
    path = os.path.join(DIRECTORY, category)
    for img in os.listdir(path):
        img_path = os.path.join(path, img)
        image = load_img(img_path, target_size=(224, 224))
        image = img_to_array(image)
        image = preprocess_input(image)

        data.append(image)
        labels.append(category)

Test result

• Data sets images are accessed.

• Images of size 224*224 pixels are considered

• The considered images are loaded into an array for preprocessing.

5.2.3 Integration testing

Input

frame = vs.read()
frame = imutils.resize(frame, width=400)
(locs, preds) = detect_and_predict_mask(frame, faceNet, maskNet)
for (box, pred) in zip(locs, preds):
    (startX, startY, endX, endY) = box
    (mask, withoutMask) = pred
    label = "Mask" if mask > withoutMask else "No Mask"
    color = (0, 255, 0) if label == "Mask" else (0, 0, 255)

Test result

• A frame of size 400 pixels is created to take the input and display the output.

• A label is defined as a square box. It attains green colour when a mask is present
on the face and red colour when there is no mask.

5.2.4 Functional testing

Input

print("[INFO] training head...")
H = model.fit(
    aug.flow(trainX, trainY, batch_size=BS),
    steps_per_epoch=len(trainX) // BS,
    validation_data=(testX, testY),
    validation_steps=len(testX) // BS,
    epochs=EPOCHS)
print("[INFO] evaluating network...")
predIdxs = model.predict(testX, batch_size=BS)
print("[INFO] saving mask detector model...")
model.save("mask_detector.model", save_format="h5")

Test Result

• All the images from data sets are loaded into the model and carried out for
training.

• Training is done by considering each image and saving the characteristics of the
image.

• After completing the training, the MobileNetV2 model saves the detector model, which
can then be applied to the given input.

5.2.5 Test Result

Figure 5.3: Test Image

5.3 Testing Strategy

• Unit testing: Unit testing verifies individual pieces of code to check their
viability.

• Integration testing: Integration testing is carried out to verify the efficiency of
the model against the functional requirements.

• Functional testing: Functional testing is done to verify the output for the
provided input against the functional requirements.

Chapter 6

RESULTS AND DISCUSSIONS


6.1 Efficiency of the Proposed System

The goal of this experiment is to create awareness among citizens, to reduce human
activity, and to help stop the spread of the virus. It provides insight into the performance of
detection techniques on masked-face images and evaluates recognition models that, in
addition to detecting the presence of face masks, also determine whether the masks are worn
properly, all within a single shot using SSD (single-shot detection) and MobileNetV2
technology, where we observe high performance and accurate results.

6.2 Comparison of Existing and Proposed System

The existing system has no mask detection capability, there is no specific software to
deploy in surveillance cameras, detection with less efficient cameras is highly impractical, the
performance rate and accuracy are very low, capturing works in only a single direction, and
no object detection technique is used. In our proposed system, the technology used, SSD
(single-shot detection) with an object detection model, is highly accurate and its processing
speed is very high; with a high processing rate, we can say our system is efficient. It delivers
a high performance rate and high accuracy of results, easily detects objects, and is deployable
even on less efficient cameras such as webcams.


Output

Figure 6.1: Model detected person is without mask

Figure 6.2: Model detected person is without mask

Chapter 7

CONCLUSION AND FUTURE ENHANCEMENTS

7.1 Conclusion

We conclude that machine learning techniques can validate mask usage and detect masks,
and with this we can filter people in community areas to avoid the spread of coronavirus. In
the proposed mask detection work, the training and testing datasets are implemented
successfully by categorizing images as masked and unmasked; MobileNetV2 image classifiers
are used to classify the images as masked or unmasked faces, which is the most important
factor in the implementation. The accuracy of the proposed model was good, and the model
can be deployed at any time in organizations, schools, offices, malls, densely crowded areas,
etc., to stop the spread of the virus.

7.2 Future Enhancements

• This project is not limited to the current pandemic; it can also be used as a
protective technology against microorganisms and pollution by encouraging people to
wear masks.

• With the data gathered through this work, we can create a survey by analyzing
what percentage of people are wearing masks and what percentage are not.

• If a person is not wearing a mask, we can create software that sends a warning
message to wear one, which can also be used to create awareness among people.

Chapter 8

SOURCE CODE & POSTER PRESENTATION

8.1 Sample Code

from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import AveragePooling2D
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import os

# initialize the initial learning rate, number of epochs to train for,
# and batch size
INIT_LR = 1e-4
EPOCHS = 20
BS = 32

DIRECTORY = r"C:\Mask Detection\CODE\Face-Mask-Detection-master\dataset"
CATEGORIES = ["with_mask", "without_mask"]

# grab the list of images in our dataset directory, then initialize
# the list of data (i.e., images) and class labels
print("[INFO] loading images...")

data = []
labels = []

for category in CATEGORIES:
    path = os.path.join(DIRECTORY, category)
    for img in os.listdir(path):
        img_path = os.path.join(path, img)
        image = load_img(img_path, target_size=(224, 224))
        image = img_to_array(image)
        image = preprocess_input(image)

        data.append(image)
        labels.append(category)

# perform one-hot encoding on the labels
lb = LabelBinarizer()
labels = lb.fit_transform(labels)
labels = to_categorical(labels)

data = np.array(data, dtype="float32")
labels = np.array(labels)

(trainX, testX, trainY, testY) = train_test_split(data, labels,
    test_size=0.20, stratify=labels, random_state=42)

# construct the training image generator for data augmentation
aug = ImageDataGenerator(
    rotation_range=20,
    zoom_range=0.15,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.15,
    horizontal_flip=True,
    fill_mode="nearest")

# load the MobileNetV2 network, ensuring the head FC layer sets are
# left off
baseModel = MobileNetV2(weights="imagenet", include_top=False,
    input_tensor=Input(shape=(224, 224, 3)))

# construct the head of the model that will be placed on top of
# the base model
headModel = baseModel.output
headModel = AveragePooling2D(pool_size=(7, 7))(headModel)
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(128, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(2, activation="softmax")(headModel)

# place the head FC model on top of the base model (this will become
# the actual model we will train)
model = Model(inputs=baseModel.input, outputs=headModel)

# loop over all layers in the base model and freeze them so they will
# not be updated during the first training process
for layer in baseModel.layers:
    layer.trainable = False

# compile our model
print("[INFO] compiling model...")
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
model.compile(loss="binary_crossentropy", optimizer=opt,
    metrics=["accuracy"])

# train the head of the network
print("[INFO] training head...")
H = model.fit(
    aug.flow(trainX, trainY, batch_size=BS),
    steps_per_epoch=len(trainX) // BS,
    validation_data=(testX, testY),
    validation_steps=len(testX) // BS,
    epochs=EPOCHS)

# make predictions on the testing set
print("[INFO] evaluating network...")
predIdxs = model.predict(testX, batch_size=BS)

# for each image in the testing set we need to find the index of the
# label with corresponding largest predicted probability
predIdxs = np.argmax(predIdxs, axis=1)

# show a nicely formatted classification report
print(classification_report(testY.argmax(axis=1), predIdxs,
    target_names=lb.classes_))

# serialize the model to disk
print("[INFO] saving mask detector model...")
model.save("mask_detector.model", save_format="h5")

# plot the training loss and accuracy
N = EPOCHS
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_accuracy"], label="val_acc")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
plt.savefig("plot.png")
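The evaluation step in the script above converts each row of the model's softmax output to a class index with argmax and then maps it back to a readable label. The following minimal sketch illustrates that conversion in isolation; the probability values are made up for demonstration, and the class name order assumes the alphabetical ordering LabelBinarizer produces for the two dataset categories.

```python
import numpy as np

# Each row is a softmax output for one test image (illustrative values).
class_names = ["with_mask", "without_mask"]
pred_probs = np.array([[0.9, 0.1],   # most probable class: "with_mask"
                       [0.2, 0.8]])  # most probable class: "without_mask"

# argmax over axis=1 picks the index of the largest probability per row,
# exactly as in the evaluation step of the training script.
pred_idxs = np.argmax(pred_probs, axis=1)
pred_labels = [class_names[i] for i in pred_idxs]
print(pred_labels)  # ['with_mask', 'without_mask']
```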

References


[1] Vandana S. Bhat, Arpita Durga Shambavi, Komal Mainalli, K. M. Manushree,
and Shraddha V. Lakamapur, "Review on literature survey of human recognition
with face mask," Issue 1, Mar. 20, 2021.

[2] Zhongyuan Wang, Guangcheng Wang, Baojin Huang, Zhangyang Xiong, Qi Hong,
Hao Wu, Peng Yi, Kui Jiang, Nanxi Wang, Yingjiao Pei, Heling Chen, Yu Miao,
Zhibing Huang, and Jinbi Liang, "Masked face recognition dataset and
application," version 2, Mar. 23, 2020.

[3] M. H. Yang, D. J. Kriegman, and N. Ahuja, "Detecting faces in images: A
survey," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24,
Jan. 2002.

[4] Preethi Nagrath, Rachana Jain, Agam Madan, and Rohan Arora, "SSDMNV2: A
real time DNN-based face mask detection system using single shot multibox
detector and MobileNetV2," Dec. 31, 2020.

[5] X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image
recognition," June 2016.

[6] H. Qu and X. Fu, "Research on semantic segmentation of high-resolution
remote sensing image based on full convolutional neural network," 2018 12th
International Symposium on Antennas, Propagation and EM Theory (ISAPE),
pp. 1-4, Dec. 2018.

[7] S. Kumar, A. Negi, J. N. Singh, and H. Verma, "A deep learning for brain
tumor MRI images semantic segmentation using FCN," 2018 4th International
Conference on Computing Communication and Automation (ICCCA), pp. 1-4,
Dec. 2018.

[8] K. Li, G. Ding, and H. Wang, "L-FCN: A lightweight fully convolutional
network for biomedical semantic segmentation," 2018 IEEE International
Conference on Bioinformatics and Biomedicine (BIBM), Dec. 2018.

[9] T. Poggio and L. Stringa, "A project for an intelligent system: Vision and
learning," Int. J. Quantum Chem., vol. 42, 2017.

[10] Mingjie Jiang, Xinqi Fan, and Hong Yan, "RetinaFaceMask: A face mask
detector," Jun. 8, 2020.
