A Project Report

on
AUTOMATIC ATTENDANCE TAKING SYSTEM
USING PYTHON AND ML
Submitted in partial fulfillment of the
requirement for the award of the degree of

Bachelor of Technology in Computer Science and Engineering

Under The Supervision of


Name of Supervisor: Mr. Abdul Mazid

Submitted By
ANKIT GANGWAR 21SCSE1011704

SCHOOL OF COMPUTING SCIENCE AND ENGINEERING


DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING /
DEPARTMENT OF COMPUTER APPLICATION
GALGOTIAS UNIVERSITY, GREATER NOIDA
INDIA
May, 2023
SCHOOL OF COMPUTING SCIENCE AND ENGINEERING
GALGOTIAS UNIVERSITY, GREATER NOIDA
Abstract
In colleges, universities, organizations, schools, and offices, taking attendance is
one of the most important tasks that must be done on a daily basis. Most of the time
it is done manually, for example by calling names or roll numbers. The main goal of
this project is to create a face recognition based attendance system that turns this
manual process into an automated one. The project meets the requirement of bringing
modernization to the way attendance is handled, as well as the criteria for time
management. The device is installed in the classroom, where each student's
information, such as name, roll number, class, section, and photographs, is used to
train the system. The images are extracted using OpenCV. Before the start of the
corresponding class, the student can approach the machine, which begins taking
pictures and comparing them to the trained dataset. A Logitech C270 web camera and
an NVIDIA Jetson Nano Developer Kit were used in this project as the camera and
processing board. The image is processed as follows: first, faces are detected using
a Haar cascade classifier; then faces are recognized using the LBPH (Local Binary
Pattern Histogram) algorithm; the histogram data is checked against the established
dataset; and the device automatically marks attendance. An Excel sheet is generated
and updated every hour with the attendance information for the respective class
instructor.
Table of Contents

Abstract
Chapter 1  Introduction
  1.1 Project Objective
  1.2 Background
  1.3 Problem Statement
  1.4 Aims and Objectives
  1.5 Flow Chart
  1.6 Scope of the Project
Chapter 2  Literature Survey/Project Design
  2.1 Student Attendance System
  2.2 Digital Image Processing
Chapter 3  Model Implementation and Analysis
  3.1 Introduction
  3.2 Model Implementation
  3.3 Design Requirements
    3.3.1 Software Implementation
Chapter 4  Code Implementation
  4.1 Code Implementation
    4.1.1 main.py
    4.1.2 Photos
Chapter 5  Performance Analysis/Conclusion and Future Scope
  5.1 Introduction
  5.2 Analysis
Conclusion
References
CHAPTER-1

Introduction
1.1 Project Objective:

Attendance is of prime importance to both the teacher and the student of an
educational organization, so it is very important to keep a record of it. The
problem arises when we think about the traditional process of taking attendance
in the classroom. Calling the name or roll number of each student is not only
time-consuming but also demands effort, so an automatic attendance system can
solve these problems.

There are some automatic attendance-marking systems currently used by many
institutions, such as biometric (fingerprint) and RFID systems. Although these are
automatic and a step ahead of the traditional method, they fail to meet the time
constraint: students have to wait in a queue to register their attendance, which
takes time.

This project introduces an automatic attendance-marking system that does not
interfere with the normal teaching procedure. The system can also be used during
exam sessions or in other teaching activities where attendance is essential. It
eliminates classical student identification such as calling the student's name or
checking identification cards, which can not only interfere with the ongoing
teaching process but also be stressful for students during examination sessions.
In addition, students have to be registered in the database to be recognized;
enrolment can be done on the spot through the user-friendly interface.

1.2 Background:

Face recognition is crucial in daily life for identifying family, friends, or
anyone we are familiar with. We might not realize that several steps are actually
taken to identify human faces. Human intelligence allows us to receive information
and interpret it during the recognition process. We receive information through the
image projected onto our eyes, specifically onto the retina, in the form of light.
Light is a form of electromagnetic wave radiated from a source onto an object and
projected to human vision. Robinson-Riegler, G., & Robinson-Riegler, B. (2008)
mentioned that after the visual processing done by the human visual system, we
classify the shape, size, contour and texture of the object in order to analyze the
information. The analyzed information is then compared to other representations of
objects or faces stored in our memory for recognition. In fact, it is a hard
challenge to build an automated system with the same recognition capability as a
human. Moreover, recognizing many different faces requires large memory; in
universities, for example, there are many students of different races and genders,
and it is impossible to remember every individual face without making mistakes. To
overcome these human limitations, computers with almost limitless memory and high
processing speed and power are used in face recognition systems.

The human face is a unique representation of individual identity. Thus, face
recognition is defined as a biometric method in which an individual is identified
by comparing a real-time captured image with the stored images of that person in a
database (Margaret Rouse, 2012).

Nowadays, face recognition systems are prevalent due to their simplicity and
impressive performance. For instance, airport protection systems and the FBI use
face recognition for criminal investigations, tracking suspects, missing children,
and drug activities (Robert Silk, 2017). Apart from that, Facebook, a popular
social networking website, implements face recognition to allow users to tag their
friends in photos for entertainment purposes (Sidney Fussell, 2018). Furthermore,
Intel allows users to use face recognition to access their online accounts
(Reichert, C., 2017), and Apple allows users to unlock their mobile phone, the
iPhone X, using face recognition (deAgonia, M., 2017).

Work on face recognition began in the 1960s. Woody Bledsoe, Helen Chan Wolf and
Charles Bisson introduced a system that required an administrator to locate the
eyes, ears, nose and mouth in images. The distances and ratios between the located
features and common reference points were then calculated and compared. The studies
were further enhanced by Goldstein, Harmon, and Lesk in 1970 by using additional
features such as hair colour and lip thickness to automate the recognition. In 1988,
Kirby and Sirovich first applied principal component analysis (PCA) to the face
recognition problem. Many studies on face recognition have been conducted
continuously since then (Ashley DuVal, 2012).

1.3 Problem Statement:

The traditional student attendance marking technique often faces a lot of trouble.
The face recognition student attendance system emphasizes simplicity by eliminating
classical attendance marking techniques such as calling student names or checking
identification cards. These not only disturb the teaching process but also distract
students during exam sessions. Apart from calling names, an attendance sheet is
often passed around the classroom during lecture sessions. Classes with a large
number of students might find it difficult to have the attendance sheet passed
around the whole class. Thus, a face recognition attendance system is proposed to
replace the manual signing of attendance, which is burdensome and distracts
students. Furthermore, a face recognition based automated student attendance system
is able to overcome fraudulent sign-ins, and lecturers do not have to count the
number of students several times to confirm their presence.

The paper by Zhao, W. et al. (2003) lists the difficulties of facial identification,
one of which is distinguishing between known and unknown images. In addition,
Pooja G.R. et al. (2010) found that the training process for a face recognition
student attendance system is slow and time-consuming. Priyanka Wagh et al. (2015)
mentioned that varying lighting and head poses are often the problems that degrade
the performance of face recognition based student attendance systems.

Hence, there is a need to develop a real-time student attendance system, which means
the identification process must be completed within defined time constraints to
prevent omission. The features extracted from facial images, which represent the
identity of the students, have to be consistent under changes in background,
illumination, pose and expression. High accuracy and fast computation time are the
evaluation points of the performance.

1.4 Aims and Objectives:

The objective of this project is to develop a face recognition attendance system.
The expected achievements in order to fulfill this objective are:

● To detect the face segment from the video frame.


● To extract the useful features from the face detected.
● To classify the features in order to recognize the face detected.
● To record the attendance of the identified student.

1.5 Flow chart


1.6 Scope of the project:

We are setting out to design a system comprising two modules. The first module
(the face detector) is a mobile component: essentially a camera application that
captures student faces and stores them in a file, using computer vision face
detection algorithms and face extraction techniques. The second module is a desktop
application that performs face recognition on the captured images (faces) in the
file, marks the students' register and then stores the results in a database for
future analysis. A minimal sketch of the first module follows.
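
The report does not include code for this capture module, so the following is only a minimal sketch of what such an enrolment tool could look like, assuming OpenCV's bundled frontal-face Haar cascade and a photos folder like the one used in Chapter 4; the student name is a placeholder.

import os
import cv2

# Minimal enrolment sketch (assumption, not the report's code): capture one
# face from the webcam and store the cropped image for later recognition.
OUTPUT_DIR = "photos"            # folder name mirrors the one used in Chapter 4
STUDENT_NAME = "New Student"     # placeholder; replace with the student being enrolled

os.makedirs(OUTPUT_DIR, exist_ok=True)

# Pre-trained frontal-face Haar cascade shipped with recent OpenCV builds.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

capture = cv2.VideoCapture(0)
saved = False
while not saved:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.imwrite(os.path.join(OUTPUT_DIR, STUDENT_NAME + ".jpg"),
                    frame[y:y + h, x:x + w])
        saved = True
        break

capture.release()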
CHAPTER-2

LITERATURE REVIEW

2.1 Student Attendance System:

Arun Katara et al. (2017) mentioned the disadvantages of RFID (Radio Frequency
Identification) card systems, fingerprint systems and iris recognition systems.
The RFID card system is implemented because of its simplicity; however, users tend
to help their friends check in as long as they have their friend's ID card. The
fingerprint system is effective but not efficient, because verification takes time
and users have to line up and verify one by one. For face recognition, on the other
hand, the human face is always exposed and contains less information compared to
the iris; an iris recognition system, which captures more detail, might invade the
privacy of the user. Voice recognition is available, but it is less accurate
compared to the other methods. Hence, a face recognition system is suggested for
the student attendance system.

2.2 Digital Image Processing:

Digital image processing is the processing of digital images by a digital computer.
Digital image processing techniques are motivated by three major applications:

● Improvement of pictorial information for human perception


● Image processing for autonomous machine application

● Efficient storage and transmission.


2.3 Image Representation in a Digital Computer:

An image is a two-dimensional light intensity function

f(x, y) = r(x, y) × i(x, y)    (2.0)

where r(x, y) is the reflectivity of the surface at the corresponding image point
and i(x, y) represents the intensity of the incident light. A digital image f(x, y)
is discretized both in spatial coordinates, by a grid, and in brightness, by
quantization. Effectively, the image can be represented as a matrix whose row and
column indices specify a point in the image and whose element value gives the gray
level at that point. These elements are referred to as pixels or pels.

In typical image processing applications the image size is 256 × 256, 640 × 480 or
1024 × 1024 pixels. Quantization of these matrix pixels is done at 8 bits for
black-and-white images and 24 bits for coloured images (because of the three colour
planes Red, Green and Blue, each at 8 bits).
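
As a small illustration of this matrix representation (not taken from the report; the file path is a placeholder), OpenCV loads an image directly as a NumPy array of 8-bit values:

import cv2

# Placeholder path; any photo from the project's "photos" folder would do.
path = "photos/sample.jpg"

colour = cv2.imread(path)                      # BGR image: 3 colour planes x 8 bits = 24 bits/pixel
gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)  # single plane, 8 bits/pixel

if colour is not None:
    print(colour.shape, colour.dtype)   # e.g. (480, 640, 3) uint8 -> rows, columns, planes
    print(gray.shape, gray.dtype)       # e.g. (480, 640) uint8
    print(int(gray[0, 0]))              # gray level (0-255) of the pixel at row 0, column 0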
2.4 Steps in Digital Image Processing:

Digital image processing involves the following basic tasks (a short OpenCV sketch
of some of these steps follows the list):

● Image Acquisition – an imaging sensor and the capability to digitize the signal
produced by the sensor.
● Preprocessing – enhances image quality: filtering, contrast enhancement, etc.
● Segmentation – partitions an input image into constituent parts or objects.
● Description/Feature Selection – extracts a description of the image objects
suitable for further computer processing.
● Recognition and Interpretation – assigns a label to an object based on the
information provided by its descriptors; interpretation assigns meaning to a set
of labelled objects.
● Knowledge Base – supports efficient processing as well as inter-module
cooperation.
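
To make these stages concrete, here is a minimal sketch of an acquisition, preprocessing and segmentation pipeline in OpenCV; the input path is a placeholder and the specific operations are illustrative choices, not steps taken from the report:

import cv2

# 1. Image acquisition: read a stored image (placeholder path).
image = cv2.imread("photos/sample.jpg")

# 2. Preprocessing: grayscale conversion, contrast enhancement, noise reduction.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
equalized = cv2.equalizeHist(gray)
blurred = cv2.GaussianBlur(equalized, (5, 5), 0)

# 3. Segmentation: separate foreground from background with Otsu thresholding.
_, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 4. Description: outline the segmented regions as contours.
#    ([-2] keeps the contour list under both the OpenCV 3 and 4 return conventions.)
contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
print(len(contours), "segmented regions found")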

2.5 Definition of Terms and History:

Face Detection
Face detection is the process of identifying and locating all the faces present in
a single image or video, regardless of their position, scale, orientation, age and
expression. Furthermore, the detection should be independent of extraneous
illumination conditions and of the image and video content.

2.5.1 Face Recognition

Face recognition is a visual pattern recognition problem, where the face,
represented as a three-dimensional object subject to varying illumination, pose and
other factors, needs to be identified based on acquired images.

Face recognition is therefore simply the task of identifying an already detected
face as a known or unknown face, and in more advanced cases telling exactly whose
face it is.
Difference between Face Detection and Face Recognition

Face detection answers the question "Where is the face?": it identifies an object
as a face and locates it in the input image. Face recognition, on the other hand,
answers the question "Who is this?" or "Whose face is it?": it decides whether the
detected face belongs to someone known. It can therefore be seen that face
detection's output (the detected face) is the input to the face recognizer, and the
face recognizer's output is the final decision, i.e. face known or face unknown.

Face Detection

A face detector has to tell whether an image of arbitrary size contains a human
face and, if so, where it is. Face detection can be performed based on several
cues: skin colour (for faces in colour images and videos), motion (for faces in
videos), facial/head shape, facial appearance, or a combination of these
parameters. Most face detection algorithms are appearance-based and do not use
other cues. An input image is scanned at all possible locations and scales by a
sub-window, and face detection is posed as classifying the pattern in the
sub-window as either a face or a non-face. The face/non-face classifier is learned
from face and non-face training examples using statistical learning methods [9].
Most modern algorithms are based on the Viola-Jones object detection framework,
which uses Haar cascades.
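
As a hedged illustration of this framework (not code from the report), OpenCV's bundled pre-trained frontal-face Haar cascade can scan an image and return a bounding box for each detected face; the input path is a placeholder:

import cv2

# Pre-trained frontal-face Haar cascade shipped with recent OpenCV builds.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("photos/sample.jpg")            # placeholder input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)     # detection runs on grayscale

# The detector scans sub-windows at several scales and classifies each as face/non-face.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Draw the (x, y, width, height) bounding box of each detected face.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected_faces.jpg", image)
print(len(faces), "face(s) detected")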
CHAPTER-3

MODEL IMPLEMENTATION AND ANALYSIS

3.1 INTRODUCTION:

Face detection involves separating image windows into two classes: one containing
faces (targets) and one containing the background (clutter). It is difficult
because, although commonalities exist between faces, they can vary considerably in
terms of age, skin colour and facial expression. The problem is further complicated
by differing lighting conditions, image qualities and geometries, as well as the
possibility of partial occlusion and disguise. An ideal face detector would
therefore be able to detect the presence of any face under any set of lighting
conditions and upon any background. The face detection task can be broken down into
two steps. The first step is a classification task that takes some arbitrary image
as input and outputs a binary value of yes or no, indicating whether any faces are
present in the image. The second step is the face localization task, which takes an
image as input and outputs the location of any face or faces within that image as a
bounding box (x, y, width, height). After taking the picture, the system compares
it against the pictures in its database and returns the closest match. We use an
NVIDIA Jetson Nano Developer Kit, a Logitech C270 HD webcam and the OpenCV
platform, and the coding is done in the Python language.

3.2 Model Implementation:

The main component used in the implementation is the open source computer vision
library (OpenCV). One of OpenCV's goals is to provide a simple-to-use computer
vision infrastructure that helps people build fairly sophisticated vision
applications quickly. The OpenCV library contains over 500 functions that span many
areas of vision, and it is the primary technology behind this face recognition
system. The user stands in front of the camera, keeping a minimum distance of
50 cm, and his or her image is taken as input. The frontal face is extracted from
the image, converted to grayscale and stored. The Principal Component Analysis
(PCA) algorithm is performed on the images, and the eigenvalues are stored in an
XML file. When a user requests recognition, the frontal face is extracted from the
video frame captured through the camera, the eigenvalue is recalculated for the
test face, and it is matched against the stored data for the closest neighbour.
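
The report does not show this PCA stage as code, so the following is only a minimal sketch of how it could be done with OpenCV's EigenFaceRecognizer (available in the opencv-contrib-python package); the image paths, labels and model file name are placeholders:

import cv2
import numpy as np

# Placeholder training data: equal-sized grayscale face images, one label per person.
paths = ["photos/person_0.jpg", "photos/person_1.jpg"]
labels = np.array([0, 1])

faces = []
for p in paths:
    img = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
    faces.append(cv2.resize(img, (200, 200)))      # PCA needs a fixed image size

# Train the PCA (eigenfaces) recognizer and store the model in an XML file.
recognizer = cv2.face.EigenFaceRecognizer_create()
recognizer.train(faces, labels)
recognizer.write("eigenfaces_model.xml")

# At recognition time: reload the model and find the closest stored neighbour.
recognizer.read("eigenfaces_model.xml")
test = cv2.resize(cv2.imread("photos/person_0.jpg", cv2.IMREAD_GRAYSCALE), (200, 200))
label, distance = recognizer.predict(test)         # smaller distance = closer match
print(label, distance)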

3.3 Design Requirements:

We used several tools to build the system; without them it would not have been
possible. Here we discuss the most important ones. A short sketch combining two of
the OpenCV features listed below (Haar cascade detection and LBPH recognition)
follows this list.

3.3.1 Software Implementation:


1. OpenCV: We used the OpenCV 3 dependency for Python 3. OpenCV is a library in
which a large number of image processing functions are available, which makes it
very useful for image processing; one can often get the expected outcome with very
little code. The library is cross-platform and free for use under the open-source
BSD license. Examples of some supported functions are given below:
● Derivation: gradient/Laplacian computation, contour delimitation
● Hough transforms: detection of lines, segments, circles, and geometrical shapes
● Histograms: computation, equalization, and object localization with the
back-projection algorithm
● Segmentation: thresholding, distance transform, foreground/background detection,
watershed segmentation
● Filtering: linear and nonlinear filters, morphological operations
● Cascade detectors: detection of faces, eyes, car plates
● Interest points: detection and matching
● Video processing: optical flow, background subtraction, CamShift (object
tracking)
● Photography: panorama creation, high dynamic range (HDR) imaging, image
inpainting

2. Python IDE: There are many IDEs for Python, such as PyCharm, Thonny, Ninja and
Spyder. Ninja and Spyder are both excellent and free, but we used Spyder as it is
more feature-rich than Ninja. Spyder is a little heavier than Ninja but still much
lighter than PyCharm.
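
The abstract describes this project's recognition stage as Haar cascade detection followed by LBPH (Local Binary Pattern Histogram) matching, while the code in Chapter 4 uses the face_recognition library instead. The following is therefore only a hedged sketch of how the cascade detector and an LBPH recognizer from the list above could be combined (the LBPH recognizer requires the opencv-contrib-python package; paths and labels are placeholders):

import cv2
import numpy as np

# Detector: OpenCV's bundled frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_crop(path):
    # Return the first detected face in the image as a fixed-size grayscale crop.
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    x, y, w, h = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)[0]
    return cv2.resize(gray[y:y + h, x:x + w], (200, 200))

# Recognizer: LBPH, trained on one face crop per student (placeholder data).
train_paths = ["photos/person_0.jpg", "photos/person_1.jpg"]
labels = np.array([0, 1])
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train([face_crop(p) for p in train_paths], labels)

# Predict the identity of a new face; a lower confidence value means a closer match.
label, confidence = recognizer.predict(face_crop("photos/person_0.jpg"))
print(label, confidence)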
CHAPTER-4

CODE IMPLEMENTATION
4.1 Code Implementation:

All our code is written in the Python language. First, here is our project
directory structure and files:
1. main.py
2. Photos

4.1.1 main.py

All the work is done here: detect the faces, recognize them and take attendance.

import face_recognition
import cv2
import numpy as np
import csv
from datetime import datetime

video_capture = cv2.VideoCapture(0)

# Load known faces

# Each known person maps to one reference photo in the "photos" folder.
known_people = {
    "Angelina Jolie": "photos/Angelina Jolie.jpg",
    "Brad Pitt": "photos/Brad Pitt.jpg",
    "Denzel Washington": "photos/Denzel Washington.jpg",
    "Hugh Jackman": "photos/Hugh Jackman.jpg",
    "Jennifer Lawrence": "photos/Jennifer Lawrence.jpg",
    "Johnny Depp": "photos/Johnny Depp.jpg",
    "Kate Winslet": "photos/Kate Winslet.jpg",
    "Leonardo DiCaprio": "photos/Leonardo DiCaprio.jpg",
    "Megan Fox": "photos/Megan Fox.jpg",
    "Natalie Portman": "photos/Natalie Portman.jpg",
    "Nicole Kidman": "photos/Nicole Kidman.jpg",
    "Robert Downey Jr": "photos/Robert Downey Jr.jpg",
    "Sandra Bullock": "photos/Sandra Bullock.jpg",
    "Scarlett Johansson": "photos/Scarlett Johansson.jpg",
    "Tom Cruise": "photos/Tom Cruise.jpg",
    "Will Smith": "photos/Will Smith.jpg",
    "Shikhar": "photos/Shikhar.jpg",
    "Ankit Gangwar": "photos/Ankit Gangwar.jpg",
    "Aamir_Khan": "photos/Aamir_Khan.jpg",
    "Abhishek_Bachchan": "photos/Abhishek_Bachchan.jpg",
    "Aishwarya_Rai": "photos/Aishwarya_Rai.jpg",
    "Ajay_Devgn": "photos/Ajay_Devgn.jpg",
    "Akshay_Kumar": "photos/Akshay_Kumar.jpg",
    "Amitabh_Bachchan": "photos/Amitabh_Bachchan.jpg",
}

# Compute one face encoding per known person from their reference photo.
known_face_names = list(known_people)
known_face_encodings = []
for name in known_face_names:
    image = face_recognition.load_image_file(known_people[name])
    known_face_encodings.append(face_recognition.face_encodings(image)[0])

# List of expected students; names are removed as they are marked present.
students = known_face_names.copy()

face_locations = []
face_encodings = []

# Get the current date to name the attendance CSV file.
now = datetime.now()
current_date = now.strftime("%Y-%m-%d")

# One CSV file per day, with one "name, time" row per recognised student.
f = open(f"{current_date}.csv", "w+", newline="")
lnwriter = csv.writer(f)

while True:
    _, frame = video_capture.read()
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
    rgb_small_frame = cv2.cvtColor(small_frame, cv2.COLOR_BGR2RGB)

    # Recognise faces in the current frame
    face_locations = face_recognition.face_locations(rgb_small_frame)
    face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)

    for face_encoding in face_encodings:
        matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
        face_distance = face_recognition.face_distance(known_face_encodings, face_encoding)
        best_match_index = np.argmin(face_distance)

        if matches[best_match_index]:
            name = known_face_names[best_match_index]

            # Overlay the recognised name on the video frame
            font = cv2.FONT_HERSHEY_SIMPLEX
            bottomLeftCornerOfText = (10, 100)
            fontScale = 1.5
            fontColor = (255, 0, 0)
            thickness = 3
            lineType = 2
            cv2.putText(frame, name + " Present", bottomLeftCornerOfText, font,
                        fontScale, fontColor, thickness, lineType)

            # Mark attendance only once per student, with the detection time
            if name in students:
                students.remove(name)
                current_time = datetime.now().strftime("%H:%M:%S")
                lnwriter.writerow([name, current_time])

    cv2.imshow("Camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

video_capture.release()
cv2.destroyAllWindows()
f.close()
4.1.2 Photos:
CHAPTER-5
PERFORMANCE ANALYSIS

5.1 Introduction:

We conducted a series of experiments to illustrate the system's performance under
different situations. By carrying out those tests, we obtained the graph of
distance versus confidence level. From the graph we may deduce that when the face
is closer to the camera, the confidence level is higher, and vice versa. Therefore,
by keeping a threshold on the confidence level, we can mark attendance for a person
according to that threshold. A hedged sketch of such a threshold check is shown
below.
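
In the Chapter 4 code, the face_recognition library reports a distance rather than a confidence (lower means a closer match), so an equivalent gate could look like the sketch below; the threshold value is an assumed placeholder, not a measured result from these experiments:

import numpy as np

# Assumed placeholder threshold; face_recognition's default tolerance is 0.6.
DISTANCE_THRESHOLD = 0.6

def should_mark_present(face_distances):
    # Return the index of the closest known face and whether it passes the gate.
    best_match_index = int(np.argmin(face_distances))
    return best_match_index, face_distances[best_match_index] < DISTANCE_THRESHOLD

# Example: distances of one detected face to three known encodings.
index, ok = should_mark_present(np.array([0.72, 0.41, 0.89]))
print(index, ok)   # -> 1 True: the second known face, close enough to mark present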
5.2 Analysis:

Here we keep one parameter constant, the intensity of light, and perform
experiments at different distances and different angles. We observed the confidence
level at different positions while gradually increasing the distance, and plotted
the graph using the x values as the confidence level (accuracy rate) and the y
values as the distance (in cm).

CONCLUSION

Face recognition systems are part of facial image processing applications, and
their significance as a research area has increased recently. Implementations of
such systems include crime prevention, video surveillance, person verification, and
similar security activities, and the face recognition system implementation can
also be part of universities. The face recognition based attendance system has been
envisioned for the purpose of reducing the errors that occur in the traditional
(manual) attendance taking process. The aim is to automate attendance and build a
system that is useful to an organization such as an institute: an efficient and
accurate method of attendance in the office environment that can replace the old
manual methods. This method is secure, reliable and available for use. The proposed
algorithm is capable of detecting multiple faces, and the performance of the system
shows acceptably good results.
REFERENCES

[1]. "A brief history of facial recognition", NEC New Zealand, 26 May 2020.
[Online]. Available: https://www.nec.co.nz/market-leadership/publications-media/a-brief-history-of-facial-recognition/
[2]. Corinne Bernstein, "Face detection", TechTarget Network, Feb. 2020. [Online].
Available: https://searchenterpriseai.techtarget.com/definition/face-detection
[3]. Paul Viola and Michael Jones, "Rapid Object Detection using a Boosted Cascade
of Simple Features", Accepted Conference on Computer Vision and Pattern
Recognition, 2001.
[4]. Girija Shankar Behera, "Face Detection with Haar Cascade", Towards Data
Science, India, Dec. 24, 2020. [Online]. Available:
https://towardsdatascience.com/face-detection-with-haar-cascade-727f68dafd08
[5]. Kelvin Salton do Prado, "Face Recognition: Understanding LBPH Algorithm",
Towards Data Science, Nov. 11, 2017. [Online]. Available:
https://towardsdatascience.com/face-recognition-how-lbph-works-90ec258c3d6b
[6]. Ian Sample, "What is facial recognition and how sinister is it?", The
Guardian, July 2019. [Online]. Available:
https://www.theguardian.com/technology/2019/jul/29/what-is-facial-recognition-and-how-sinister-is-it
[7]. Kushsairy Kadir, Mohd Khairi Kamaruddin, Haidawati Nasir, Sairul I. Safie,
Zulkifli Abdul Kadir Bakti, "A comparative study between LBP and Haar-like features
for Face Detection using OpenCV", 4th International Conference on Engineering
Technology and Technopreneurship (ICE2T), DOI: 10.1109/ICE2T.2014.7006273,
12 January 2015.
[8]. Senthamizh Selvi. R, D. Sivakumar, Sandhya. J.S, Siva Sowmiya. S, Ramya. S,
Kanaga Suba Raja. S, "Face Recognition Using Haar-Cascade Classifier for Criminal
Identification", International Journal of Recent Technology and Engineering
(IJRTE), vol. 7, issue 6S5, ISSN: 2277-3878, April 2019.