
FACE RECOGNITION BASED ATTENDANCE SYSTEM

Submitted in partial fulfilment of the requirements for the award of Bachelor of
Engineering degree in Electronics and Communication Engineering

By

ADITYA PRAKASH (39130010)

BADDIGAM LOKESWAR REDDY (39130050)

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING


SCHOOL OF ELECTRICAL AND ELECTRONICS

SATHYABAMA
INSTITUTE OF SCIENCE AND TECHNOLOGY

(DEEMED TO BE UNIVERSITY)
Accredited with Grade “A” by NAAC
JEPPIAAR NAGAR, RAJIV GANDHI SALAI, CHENNAI - 600 119

APRIL- 2023
SATHYABAMA
INSTITUTE OF SCIENCE AND TECHNOLOGY
(DEEMED TO BE UNIVERSITY)
Accredited with "A" grade by NAAC
Jeppiaar Nagar, Rajiv Gandhi Salai, Chennai - 600 119
www.sathyabama.ac.in

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING


BONAFIDE CERTIFICATE

This is to certify that this Project Report is the bonafide work of ADITYA PRAKASH (39130010) and
BADDIGAM LOKESWAR REDDY (39130050), who carried out the project entitled "FACE
RECOGNITION BASED ATTENDANCE SYSTEM" under our supervision from November 2022 to
April 2023.

Internal Guide
Dr. S. POORNAPUSHPAKALA, M.E., Ph.D.,

Head of the Department

Dr. T. RAVI, M.E., Ph.D.,

Submitted for Viva voce Examination held on 27.04.2023

Internal Examiner External Examiner

DECLARATION

We, ADITYA PRAKASH (39130010) and BADDIGAM LOKESWAR REDDY (39130050), hereby
declare that the Project Report entitled "FACE RECOGNITION BASED ATTENDANCE
SYSTEM", done by us under the guidance of Dr. S. POORNAPUSHPAKALA, M.E., Ph.D.,
Associate Professor, Dept. of Electronics and Communication Engineering, Sathyabama
Institute of Science and Technology, Chennai, is submitted in partial fulfillment of the
requirements for the award of the Bachelor of Engineering degree in Electronics and
Communication Engineering.

DATE: 27.04.23 SIGNATURE OF CANDIDATES

PLACE: Chennai 1.

2.

ACKNOWLEDGEMENT

We are pleased to acknowledge our sincere thanks to the Board of Management of
SATHYABAMA for their kind encouragement in doing this project and for
completing it successfully. We are grateful to them.

We convey our thanks to Dr. N.M. Nandhitha, M.E., Ph.D., Dean, School of Electrical
and Electronics, and Dr. T. Ravi, M.E., Ph.D., Head of the Department, Dept. of
Electronics and Communication Engineering, for providing us the necessary support and
details at the right time during the progressive reviews.

We would like to express our sincere and deep sense of gratitude to our Project
Guide, Dr. S. Poornapushpakala, Ph.D., whose valuable guidance, suggestions and
constant encouragement paved the way for the successful completion of our project
work.

We wish to express our thanks to all Teaching and Non-teaching staff members of
the Department of Electronics and Communication Engineering who were
helpful in many ways for the completion of the project.

ABSTRACT

The face recognition-based attendance system is an innovative solution that uses
facial recognition technology to automate the attendance marking process. The
system captures live video feed from a camera and identifies individuals by matching
their facial features with pre-stored facial data. The system utilizes the OpenCV and
face_recognition libraries in Python to detect and recognize faces in real time.
The system first requires the collection of facial data of individuals and their respective
names. This data is stored in a database for reference. When the system is
operational, the live video feed is processed, and individuals are identified by
comparing their facial features with the pre-stored data. If an individual's facial features
match with the data in the database, their attendance is automatically marked, and the
timestamp is recorded.
The system is beneficial in various settings, including educational institutions,
workplaces, and other organizations that require attendance management. The
system provides an efficient and convenient solution to the manual attendance
marking process, which is time-consuming and prone to errors.
The accuracy of a face recognition-based attendance system can vary depending on
several factors, including the quality of the camera, lighting conditions, and the
accuracy of the facial recognition algorithm. Modern facial recognition algorithms are
generally accurate, with reported accuracies above 99% on benchmark datasets.
However, it is important to note that the accuracy of the system may also depend on
the quality and quantity of the training data used to train the facial recognition
algorithm. If the training data is biased or insufficient, the accuracy of the system may
be compromised.
To improve the accuracy of a face recognition-based attendance system, it is
recommended to use high-quality cameras with good lighting conditions. The system
should also be trained on a diverse set of facial data to ensure that it can accurately
recognize individuals from various backgrounds and ethnicities.

TABLE OF CONTENTS

CHAPTER NO. TITLE PAGE NO.

ABSTRACT v

LIST OF FIGURES viii

1 INTRODUCTION 1

1.1 Introduction 1

1.2 Objective 2

1.3 Significance of this project 2

1.4 Background of study 3

1.5 Problem statement 4

2 LITERATURE SURVEY 6

3 AIM AND SCOPE 13

3.1 Aim 13

3.2 Objective 13

3.3 Scope 13

3.4 Limitations 14

3.5 Motivation 14

4 EXPERIMENTAL, MATERIALS, METHODS AND ALGORITHMS 15

4.1 Research methodology 15

4.2 Block diagram 16

4.3 Tools/Materials 18

4.4 Flow chart 23

4.5 Software Requirements 23

4.6 Algorithm 25

4.7 Applications 27

4.8 Working principle 28

5 RESULT, DISCUSSION AND PERFORMANCE ANALYSIS 38

5.1 Result and performance analysis 38

5.2 Disadvantages 46

6 SUMMARY AND CONCLUSION 48

6.1 Summary 48

6.2 Conclusion 48

6.3 Advantages 49

6.4 Future scope 50

REFERENCES 52

LIST OF FIGURES

S.NO FIGURES PAGE NO

4.2.1 Circuit Diagram 16

4.2.2 UML Diagram 16

4.2.3 Data Flow Diagram 17

4.2.4 Architecture Diagram 17

4.4.1 ESP32-CAM Module 18

4.4.2 Step-down Transformer 19

4.4.3 Rectifier 19

4.4.4 Capacitor 20

4.4.5 Voltage Regulator 20

4.4.6 Resistor 21

4.4.7 LED Bulb 22

4.4.8 LED Display 22

4.5.1 Working Flow Chart 23

4.7.1 Hardware Model 37

5.1.1 Face Recognition 44

5.1.2 Face Recognition 45

5.1.3 Attendance Data Storage 45

CHAPTER 1

INTRODUCTION

1.1 Introduction:
A face recognition-based attendance system is a modern and innovative approach to
managing attendance records in various industries. This technology relies on
computer vision and machine learning algorithms to accurately and securely capture
attendance data by analyzing facial features of individuals. The system compares the
captured image with pre-stored images in the database and marks attendance
automatically, eliminating the need for manual data entry and minimizing the risk of
errors.
The use of face recognition technology in attendance management has several
advantages over traditional attendance systems. It provides accurate, efficient, and
real-time tracking of attendance records, making it easy for teachers or administrators
to monitor attendance data. Face recognition-based attendance systems also
eliminate the possibility of fraudulent practices such as buddy punching, as the system
can only recognize and mark attendance for registered individuals.
Face recognition-based attendance systems have been adopted in various industries,
including education, healthcare, government, and corporate settings. In schools and
universities, face recognition systems have been found to improve attendance rates
and reduce truancy. In healthcare, the technology is being used to manage staff
attendance and ensure that only authorized personnel have access to restricted areas.
Similarly, in government and corporate settings, face recognition technology is being
used to enhance security and manage employee attendance.
However, the use of face recognition technology has also raised concerns around
privacy and security, particularly with regards to storing and sharing biometric data.
Therefore, it is essential to implement proper safeguards and regulations to protect
individuals' privacy and prevent misuse of biometric data.
In summary, face recognition-based attendance systems provide an efficient,
accurate, and secure way of managing attendance records in various industries, and
their adoption is expected to increase as organizations recognize the benefits of this
technology.

1.2 Objective:
The objective of a face recognition-based attendance system is to automate the
process of taking attendance by using a computer vision technology that identifies and
verifies an individual's identity based on their facial features. The system typically
involves capturing an image of the person's face and comparing it to a database of
pre-stored images to determine if a match is found. If a match is found, the person is
marked as present for that session, and the attendance record is updated in real-time.
The main benefits of a face recognition-based attendance system include increased
accuracy, efficiency, and security. Since the system relies on biometric data that is
unique to each individual, it eliminates the possibility of manual errors or fraudulent
attendance practices such as buddy punching (when one person clocks in for another).
Additionally, the system can reduce administrative workload and save time by
automating the attendance process, which is especially useful in large organizations
with many employees or students. Overall, the objective of a face recognition-based
attendance system is to provide an accurate, efficient, and secure way to manage
attendance records.
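The capture-compare-mark flow described above can be sketched in a few lines of Python. This is a minimal illustration rather than the project's actual code: the 128-dimensional face encodings and the 0.6 distance tolerance follow the defaults of the face_recognition library, and the function names are our own.

```python
from datetime import datetime

import numpy as np

def identify(known_encodings, known_names, probe, tolerance=0.6):
    """Return the best-matching registered name, or None if no stored
    encoding is within the distance tolerance of the probe encoding."""
    if len(known_encodings) == 0:
        return None
    distances = np.linalg.norm(np.asarray(known_encodings) - probe, axis=1)
    best = int(np.argmin(distances))
    return known_names[best] if distances[best] <= tolerance else None

def mark_attendance(name, log):
    """Record the first sighting of each recognized person with a timestamp."""
    if name is not None and name not in log:
        log[name] = datetime.now().isoformat(timespec="seconds")
    return log
```

In the full system, `probe` would be an encoding computed from each camera frame (e.g. with `face_recognition.face_encodings`), and `log` would be persisted to the attendance database.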

1.3 Significance of the Project:


The significance of a face recognition-based attendance system lies in its ability to
streamline the attendance management process while providing a higher level of
security and accuracy than traditional methods.
Here are some of the key benefits and significance of a face recognition-based
attendance system:
Accurate attendance tracking: Face recognition technology is highly accurate and
eliminates the possibility of human errors such as incorrectly marking attendance or
making mistakes while maintaining manual records.
Time-saving: With an automated attendance system, there is no need for manual data
entry or time-consuming paperwork. This frees up time for teachers or administrative
staff to focus on more important tasks.
Real-time updates: The attendance system updates in real-time, which means
teachers or managers can track attendance data on an ongoing basis.
Increased security: The system ensures that only the authorized personnel can access
the attendance data, which improves the security of the system and reduces the
possibility of fraudulent practices such as buddy punching.
Easy to use: The system is user-friendly and requires minimal training to operate,
making it an ideal solution for organizations with a large workforce or multiple
locations.
Cost-effective: While initial investment may be required for implementing the system,
in the long run, it is a cost-effective solution since it eliminates the need for manual
processes and reduces errors and associated costs.
In summary, a face recognition-based attendance system is significant for its ability to
provide accurate, efficient, and secure attendance management, save time, and
reduce costs.

1.4 Background of Study:


A background study of face recognition-based attendance systems reveals that this
technology has become increasingly popular in recent years due to its ability to provide
an accurate, efficient, and secure way of managing attendance records.
One of the key advantages of face recognition technology is that it relies on biometric
data that is unique to each individual, making it highly accurate and secure. It
eliminates the possibility of manual errors or fraudulent attendance practices such as
buddy punching, which is a common problem in traditional attendance systems.
The development of face recognition-based attendance systems has been made
possible by advancements in computer vision, machine learning, and artificial
intelligence. These technologies enable computers to recognize and analyze facial
features, identify individuals, and match them against pre-stored images or databases
in real-time.
Face recognition-based attendance systems have been adopted in a wide range of
industries, including education, healthcare, government, and corporate settings. In the
education sector, face recognition systems are being used in schools and universities
to track student attendance, which has been found to improve attendance rates and
reduce truancy.
In the healthcare sector, face recognition-based attendance systems are being used
to manage staff attendance and ensure that only authorized personnel have access
to restricted areas. Similarly, in government and corporate settings, face recognition
technology is being used to enhance security and manage employee attendance.
However, the use of face recognition technology has also raised concerns around
privacy and security, particularly when it comes to storing and sharing biometric data.
There have been instances of data breaches and misuse of biometric data, which
highlight the need for proper regulations and safeguards to protect individuals' privacy.
In conclusion, the background study of face recognition-based attendance systems
highlights the technology's benefits in improving attendance management while also
highlighting the importance of privacy and security considerations in their
implementation.

1.5 Problem Statement:


The problem statement of face recognition-based attendance systems can be framed
around the challenges faced by traditional attendance systems, which are manual,
time-consuming, error-prone, and vulnerable to fraudulent practices such as buddy
punching. Some of the key problems that a face recognition-based attendance system
can address include:
Inaccuracy and errors in attendance tracking: Manual attendance systems rely on
human input, which can be prone to errors, leading to incorrect attendance records.
Time-consuming attendance tracking: Traditional attendance systems require
teachers or administrative staff to manually enter attendance data, which can be
time-consuming, leading to delays in attendance reporting.
Fraudulent attendance practices: Buddy punching, where one person marks the
attendance for another, is a common problem in traditional attendance systems,
leading to inaccurate attendance records.
Security and privacy concerns: Traditional attendance systems are vulnerable to data
breaches and unauthorized access, leading to potential misuse of attendance data.
Difficulty in tracking attendance data in real time: Traditional attendance systems may
not provide real-time updates, making it challenging to track attendance data as it is
generated.
Inefficiency in attendance management: Traditional attendance systems can be
inefficient, leading to the wastage of time and resources.
The problem statement highlights the need for an accurate, efficient, and secure
attendance management system that can address the challenges faced by traditional
attendance systems. A face recognition-based attendance system can provide a
solution by automating attendance tracking, eliminating fraudulent practices, providing
real-time updates, and ensuring the security and privacy of attendance data.

CHAPTER 2

LITERATURE SURVEY

Face Detection and Recognition Student Attendance System:

Author: Jireh Jam (Feb 2019)

This paper shows how face detection and recognition algorithms can be implemented
in image processing to build a system that detects and recognises the frontal
faces of students in a classroom. "A face is the front part of a person's head from the
forehead to the chin, or the corresponding part of an animal" (Oxford Dictionary). In
human interactions, the face is the most important factor, as it contains important
information about a person or individual.

Attendance system using Multi-face Recognition:

Author: P. Visalakshi, Sushant Ashish (May 2019)

Face recognition is one of the most widely used and most rapidly developing security
features. In this project, attendance in a class is monitored by a classroom camera
that records continuously. The students' database is fed into the attendance system,
and as soon as the camera recognizes a face, attendance is marked for that student.
Since a classroom camera is used, it can be difficult to detect faces shot at different
resolutions; this is handled using the OpenCV module. Faces are recognized using
the local histograms method. The camera is mounted in the classroom where the
students are seated and constantly monitors them in the video footage.
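The "local histograms" approach mentioned above can be illustrated with a simplified sketch: divide the grayscale face image into a grid of cells, build a histogram per cell, concatenate them into a feature vector, and compare feature vectors with a chi-square distance. Note that the real local-histogram recognizer in OpenCV (LBPH) histograms local binary pattern codes rather than raw pixel intensities; this sketch and its names are illustrative only.

```python
import numpy as np

def local_histograms(img, grid=4, bins=16):
    """Concatenate per-cell intensity histograms of a grayscale image.
    A simplified stand-in for LBPH feature extraction."""
    h, w = img.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = img[i * h // grid:(i + 1) * h // grid,
                       j * w // grid:(j + 1) * w // grid]
            hist, _ = np.histogram(cell, bins=bins, range=(0, 256), density=True)
            feats.append(hist)
    return np.concatenate(feats)

def chi_square(a, b, eps=1e-10):
    """Chi-square distance commonly used to compare histogram features."""
    return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))
```

At recognition time, the probe feature vector is compared against the stored vectors and the identity with the smallest distance (below a threshold) is reported.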

Webcam Based Attendance System:

Author: Shraddha Shinde, Ms. Patil Priyanka (March 2020)

In this paper, the authors propose a system that takes student attendance for
classroom lectures automatically using face recognition. They propose a method for
estimating attendance precisely using all the results of face recognition obtained by
continuous observation, which improves the performance of the attendance
estimation.

Automatic Attendance System Using Webcam:

Author: Simran Raju Inamdar, Aishwarya Vijay Kumar Patil, Ankita Digambar Patil,
Dr. S. M. Mukane (November 2020)

Attendance marking in a classroom during a lecture is not only burdensome but also
a time-consuming task. Due to a usually large number of students present in the
lecture hall, there is always a possibility of proxy attendance. It is extremely difficult for
lecturers to manually identify the students who skip their lectures on a regular basis.
Attendance management of students through conventional methods has been a
challenge in recent years.

Automated Attendance System based on Facial Recognition:

Author: Rakshitha, S R Dhanush, Shreeraksha Shetty, Sushmitha (Feb 2021)

In this project, the authors implemented an automated attendance system using
MATLAB. The proposed "Automated Attendance System Based on Facial
Recognition" has broad applications: face identification saves time and eliminates the
chance of proxy attendance because of face authorization. Hence, this system can be
implemented in any field where attendance plays an important role.

Smart Attendance Management System Based On Face Recognition Algorithm:

Author: M. Kasiselvanathan, Dr. A. Kalaiselvi, Dr. S. P. Vimal, V. Sangeetha (Oct 2021)

Facial recognition is a biometric technology that has been used in many areas such as
security systems, human-machine interaction, and image processing. The main
objective of this paper is to calculate student attendance in an easier way. The authors
propose an automated attendance management system that uses face recognition,
relieving faculty of the burden of taking attendance. The system calculates
attendance automatically by recognizing facial dimensions.

Accurate and Robust Facial Capture Using a Single RGBD Camera:

Author: Yen-Lin Chen, Hsiang-Tao Wu, Fuhao Shi (2021)

This paper presents an automatic and robust approach that accurately captures
high-quality 3D facial performances using a single RGBD camera. The key of the
approach is to combine automatic facial feature detection with image-based 3D
non-rigid registration techniques for 3D facial reconstruction. In particular, the authors
develop a robust and accurate image-based non-rigid registration algorithm that
incrementally deforms a 3D template mesh model to best match observed depth
image data and important facial features detected from single RGBD images. The
whole process is fully automatic and robust because it is based on a single-frame
facial registration framework. The ability to accurately capture 3D facial performances
has many applications, including animation, gaming, human-computer interaction,
security, and telepresence. This problem has been partially solved by commercially
available marker-based motion capture equipment, but that solution is far too
expensive for common use. It is also cumbersome, requiring the user to wear more
than 60 carefully positioned retro-reflective markers on the face. This paper presents
an alternative: reconstructing the user's 3D facial performances using a single RGBD
camera. The main contribution is a novel 3D facial modeling process that accurately
reconstructs 3D facial expression models from single RGBD images. The authors
focus on single-frame facial reconstruction because it ensures the process is fully
automatic and does not suffer from drifting errors. At the core of the system lies a 3D
facial deformation registration process that incrementally deforms a template face
model to best match observed depth data. 3D facial deformation is modeled in a
reduced subspace through embedded deformation [16], and a model-based optical
flow formulation is extended to depth image data. This allows the 3D non-rigid
registration process to be formulated in the Lucas-Kanade registration framework [1]
and solved with linear system solvers to incrementally deform the template face
model to match observed depth images.

Face Detection with a 3D Model:

Author: Adrian Barbu, Nathan Lay, Gary Gramajo (2020)

This paper presents a part-based face detection approach where the spatial
relationship between the face parts is represented by a hidden 3D model with six
parameters. The computational complexity of the search in the six dimensional pose
space is addressed by proposing meaningful 3D pose candidates by image-based
regression from detected face key point locations. The 3D pose candidates are
evaluated using a parameter sensitive classifier based on difference features relative
to the 3D pose. A compatible subset of candidates is then obtained by non-maximal
suppression. Experiments on two standard face detection datasets show that the
proposed 3D-model-based approach obtains results comparable to or better than the
state of the art. Face recognition has been a hot research area for its wide range of
applications. In human identification scenarios, facial metrics are more naturally
accessible than many other biometrics, such as iris, fingerprint, and palm print. Face
recognition is also highly valuable in human-computer interaction, access control,
video surveillance, and many other applications.

Although 2D face recognition research has made significant progress in recent years,
its accuracy is still highly dependent on lighting conditions and human poses. When
the light is dim or the face poses are not properly aligned in the camera view, the
recognition accuracy will suffer.

Real-time 3-D face tracking and modeling from a webcam:

Author: Jongmoo Choi, Yann Dumortier, Muhammad Bilal Ahmad, Sang-Il Choi (2020)

We first infer a 3-D face model from a single frontal image using automatically
extracted 2-D landmarks and deforming a generic 3-D model. Then, for any input
image, we extract feature points and track them in 2-D. Given these correspondences,
sometimes noisy and incorrect, we robustly estimate the 3-D head pose using PnP
and a RANSAC process. As the head moves, we dynamically add new feature points
to handle a large range of poses.

The objective of this article is to implement an eye-blink-based face liveness
detection algorithm to thwart photo attacks. The algorithm works in real time through
a webcam and displays the person's name only if they blinked. In layman's terms, the
program runs as follows:

1. Detect faces in each frame generated by the webcam.


2. For each detected face, detect the eyes.
3. For each detected eye, determine whether it is open or closed.
4. If at some point the eyes were detected open, then closed, then open again,
we conclude the person has blinked and the program displays their name (in
the case of a facial recognition door opener, we would authorize the person to
enter).
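The open → closed → open logic in the steps above reduces to a small per-face state machine. A minimal sketch follows; the per-frame `eyes_open` flag would come from an eye detector (e.g. an OpenCV Haar cascade), and the class and method names here are illustrative.

```python
class BlinkDetector:
    """Track eye state across frames; a blink is open -> closed -> open."""

    def __init__(self):
        self.was_closed = False   # have we ever seen the eyes closed?
        self.blinked = False      # has a full blink been observed?

    def update(self, eyes_open):
        """Feed one frame's eye state; return True once a blink occurred."""
        if not eyes_open:
            self.was_closed = True
        elif self.was_closed:     # eyes reopened after being closed
            self.blinked = True
        return self.blinked
```

Only after `update` returns True would the program display the recognized name (or, for a door opener, authorize entry), since a static photo never produces the closed-then-open transition.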

FaceCept3D: Real-Time 3D Face Tracking and Analysis:

Author: Sergey Tulyakov, Radu L. Vieriu, Nicu Sebe, Enver Sangineto (2019)

We present an open source, cross-platform technology for 3D face tracking and
analysis. It contains a full stack of components for complete face understanding:
detection, head pose tracking, and facial expression and action unit recognition. Given
a depth sensor, one can combine FaceCept3D modules to fulfill a specific application
scenario. Key advantages of the technology include real-time processing speed and
the ability to handle extreme head pose variations. There is one important constraint
shared by all these scenarios when solving the above-mentioned tasks:
non-invasiveness, i.e. the solution must not hinder the naturalness of the subject's
behavior. Consequently, the vision sensors are typically placed out of the direct sight
of the subject. FaceCept3D is motivated by challenges arising from these types of
scenarios and is able to successfully address them in a unified, open source and
cross-platform solution. Additionally, the system can be deployed in a much broader
spectrum of applications (e.g. those cases in which the face is not fully visible to the
sensor) while maintaining state-of-the-art performance.

Development of Real-Time Face Recognition System Using OpenCV:

Author: N. Bayramoglu, G. Zhao, and M. Pietikainen (2019)

A real-time, GUI-based automatic face detection and recognition system is developed
in this project. It can be used as an access control system by registering the staff or
students of an organization with their faces; later it recognizes people by capturing
their images when they enter or leave the premises. The system is implemented on a
desktop with a graphical user interface. Initially, it detects the faces in images that are
grabbed from a web camera. All the tools used to develop this system, such as
Ubuntu, OpenFace, and Python, are open source. OpenFace is a face recognition
tool developed at Carnegie Mellon University using OpenCV. Broadly, it consists of
three phases: detection, feature extraction, and recognition. The dimensionality of the
face image is reduced by the Histogram of Oriented Gradients (HOG), and this
algorithm is developed to detect frontal views of faces. After detecting the face region
of the image, 128 facial features are extracted for the given image using a deep
neural network, and recognition is done by a Support Vector Machine (SVM)
classifier. HOG is one of the most popular representation methods for a face image:
it not only reduces the dimensionality of the image but also extracts the facial features
of the given images while retaining some of the variations in the image data. Thus the
dimensionality of the face image is reduced by HOG, features are extracted with the
deep learning algorithm, and recognition is done by the SVM approach.
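The final recognition stage the paper describes, an SVM over the 128-dimensional face embeddings, can be illustrated with a tiny linear SVM trained by Pegasos-style sub-gradient descent. This is our stand-in for the production classifier (in practice an off-the-shelf SVM such as scikit-learn's would be used); the input rows stand in for the deep-network embeddings, and all names here are illustrative.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style linear SVM with labels in {-1, +1}, no intercept.
    Minimizes the regularized hinge loss by stochastic sub-gradient steps."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    w = np.zeros(X.shape[1])
    for t in range(1, epochs * n + 1):
        i = int(rng.integers(n))                        # pick one random sample
        eta = 1.0 / (lam * t)                           # decaying step size
        if y[i] * (X[i] @ w) < 1:                       # hinge loss is active
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                                           # only regularization shrinks w
            w = (1 - eta * lam) * w
    return w

def predict(w, X):
    """Class labels in {-1, +1} from the sign of the decision value."""
    return np.sign(X @ w)
```

For the multi-person attendance case, one such binary classifier per registered identity (one-vs-rest) is the usual construction.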

Realtime Data Acquisition Based on OpenCV for Close-Range Photogrammetric
Applications:

Author: L. Jurjević, M. Gašparović (2020)

Development of technology in the area of cameras, computers, and algorithms for the
3D reconstruction of objects from images has increased the popularity of
photogrammetry. Algorithms for 3D model reconstruction are so advanced that
almost anyone can make a 3D model of a photographed object. The main goal of this
paper is to examine the possibility of obtaining 3D data for close-range
photogrammetry applications based on open source technologies. All steps of
obtaining a 3D point cloud are covered in this paper. Special attention is given to
camera calibration, for which a two-step calibration process is used. Both the
presented algorithm and the accuracy of the point cloud are tested by calculating the
spatial difference between the reference and produced point clouds. During algorithm
testing, the robustness and swiftness of obtaining 3D data were noted, and usage of
this and similar algorithms certainly has a lot of potential in real-time applications.

CHAPTER 3

AIM AND SCOPE

3.1 Aim:

The aim of a face recognition-based attendance system is to automate the attendance
tracking process in various settings, such as schools, universities, offices, and other
institutions. The system uses facial recognition technology to identify individuals and
records their attendance automatically without the need for manual intervention.

3.2 Objective:

The objective of a face recognition-based attendance system is to provide an efficient,
accurate, and secure way to track attendance in various settings.

Overall, the objective is to simplify attendance tracking, improve accuracy, and
enhance security while saving time and effort for all stakeholders involved.

3.3 Scope:

The scope of a face recognition-based attendance system is vast, as it can be used in
various settings where attendance tracking is required. Some of the potential
applications of such a system are:

Educational Institutions: Face recognition-based attendance systems can be used in
schools, colleges, and universities to automate attendance tracking, reduce
administrative burden, and improve accuracy.

The scope of such a system is not limited to these areas; it can be used in any setting
where attendance tracking is required. The potential benefits include accuracy,
efficiency, convenience, and enhanced security.

3.4 Limitations:

While face recognition-based attendance systems offer many benefits, they also have
some limitations, including:

Dependence on lighting conditions: The accuracy of face recognition systems is
influenced by lighting conditions, and poor lighting can lead to inaccurate results.

Sensitivity to pose and facial expressions: Face recognition systems may not be
accurate if an individual's face is not fully visible, or if the person has a different facial
expression than what was captured in the system.

3.5 Motivation:

The motivation behind face recognition-based attendance systems is to automate the


attendance tracking process, improve accuracy, and enhance security in various
settings.

The primary motivations of such systems include:

Efficiency: Automating attendance tracking can save time and effort for both
students/employees and teachers/administrators, allowing them to focus on more
productive tasks.

Accuracy: Face recognition technology can accurately identify individuals, reducing
the risk of errors that can occur with manual attendance tracking methods.

Convenience: Face recognition-based attendance systems do not require physical
identification or manual input, making attendance tracking a seamless and hassle-free
experience.

CHAPTER-4

EXPERIMENTAL MATERIALS, METHODS AND ALGORITHMS

4.1 Research Methodology:

The research methodology of a face recognition-based attendance system typically
involves the following steps:

Research Design: The first step is to determine the research design and methodology
to be used, including the type of data to be collected and analyzed, sample size, and
data collection methods.

Data Collection: The next step is to collect data on the current attendance tracking
methods used and the potential benefits and limitations of face recognition-based
attendance systems. Data can be collected through surveys, interviews, observations,
or secondary sources.

Data Analysis: The collected data is then analyzed to identify patterns, trends, and
potential solutions to address the challenges associated with attendance tracking.

Prototype Development: Based on the data analysis, a prototype of the face
recognition-based attendance system can be developed, which involves integrating
facial recognition technology into the attendance tracking process.

System Testing: The prototype system is tested to evaluate its accuracy, efficiency,
and security in real-world settings. The testing involves comparing the results of the
face recognition-based attendance system with the existing attendance tracking
methods.

Evaluation and Conclusion: The final step is to evaluate the results of the testing and
draw conclusions about the feasibility, effectiveness, and ethical considerations of the
face recognition-based attendance system. The findings can be used to improve the
system design and inform decision-making on its implementation.

Overall, the research methodology of a face recognition-based attendance system
involves a systematic approach to identify the potential benefits and limitations of the
system, develop a prototype, and test its effectiveness in real-world settings.

4.2 Block Diagram:

Circuit Diagram:

Fig:4.2.1: Circuit Diagram

UML Diagram:

Fig:4.2.2: UML Diagram

Data Flow Diagram:

Fig:4.2.3: Data flow diagram

Architecture Diagram:

Fig:4.2.4: Architecture diagram

4.3 Tools/Materials:

1. ESP32-CAM Module:

The ESP32-CAM is a very small camera module built around the ESP32-S chip that
costs approximately $10. Besides the OV2640 camera and several GPIOs for
connecting peripherals, it features a microSD card slot that can be used to store
images taken with the camera or to store files to serve to clients.

Fig:4.4.1: Esp32 Cam Module

2. Step-down transformer:

A step-down transformer is a type of transformer that converts the high voltage (HV)
and low current on the primary side of the transformer to a low voltage (LV) and
high current on the secondary side.

Fig:4.4.2: Step down transformer

3. Rectifier:

The circuit that converts an AC signal into DC commonly consists of a particular
arrangement of interlocked diodes and is known as a rectifier. In power supply
circuits, two types of rectifier circuits are commonly used: half-wave and full-wave.
Half-wave rectifiers only permit one half of the cycle through, whereas full-wave
rectifiers permit both the top and bottom halves of the cycle through, converting the
bottom half to the same polarity as the top.

Fig:4.4.3: Rectifier

4. Capacitor:

A capacitor is a two-terminal electrical component. Along with resistors and
inductors, capacitors are among the most fundamental passive components we use;
you would have to look very hard to find a circuit that didn't have one.

Fig:4.4.4: Capacitor

5. Voltage regulator:

A voltage regulator is a system designed to automatically maintain a constant voltage.
A voltage regulator may use a simple feed-forward design or may include negative
feedback. It may use an electromechanical mechanism, or electronic components.
Depending on the design, it may be used to regulate one or more AC or DC voltages.

Fig:4.4.5: Voltage regulator

6. Resistor:

A resistor is a passive two-terminal electrical component that implements electrical
resistance as a circuit element. In electronic circuits, resistors are used to reduce
current flow, adjust signal levels, divide voltages, bias active elements, and
terminate transmission lines, among other uses.

Fig:4.4.6: Resistor

7. LED bulb:

An LED light bulb is a solid-state lighting (SSL) device that fits in standard screw-in
connections but uses LEDs (light-emitting diodes) to produce light up to 90%
more efficiently than incandescent light bulbs. An electrical current passes
through a microchip, which illuminates the tiny light sources we call LEDs, and the
result is visible light.

Fig:4.4.7: Led bulb

8. LED display:

An LED display is a flat panel display that uses an array of light-emitting diodes
as pixels for a video display. Their brightness allows them to be used outdoors,
where they remain visible in sunlight, for store signs and billboards.

Fig:4.4.8: Led display

4.4 Flow Chart:
Face Recognition Based Attendance System

[Flow chart] Training path: image capture → HOG feature extraction → HOG features
stored in the database → SVM classifier (fitcecoc). Testing path: input data → HOG
feature extraction → prediction against the stored features → matched / not matched.

Fig:4.5.1: Working flow chart

4.5 Software Requirements Specification Document:

Python (Programming language):

Python is a widely used high-level programming language for general-purpose
programming, created by Guido van Rossum and first released in 1991. An interpreted
language, Python has a design philosophy which emphasizes code readability
(notably using whitespace indentation to delimit code blocks rather than curly brackets
or keywords), and a syntax which allows programmers to express concepts in fewer
lines of code than would be possible in languages such as C++ or Java. The language
provides constructs intended to enable writing clear programs on both a small and
large scale. Python features a dynamic type system and automatic memory
management and supports multiple programming paradigms, including object-oriented,
imperative, functional, and procedural styles. It has a large and comprehensive
standard library. Python interpreters are available for many operating systems,
allowing Python code to run on a wide variety of systems. CPython, the reference

implementation of Python, is open source software and has a community-based
development model, as do nearly all of its variant implementations. CPython is
managed by the non-profit Python Software Foundation.

OpenCV:

(Open Source Computer Vision) is a library of programming functions mainly aimed at
real-time computer vision. Originally developed by Intel, it was later supported by
Willow Garage and then by Itseez (which was later acquired by Intel). The library is
cross-platform and free for use under the open-source BSD license.

OpenCV supports the deep learning frameworks TensorFlow, Torch/PyTorch and
Caffe.

OpenCV is written in C++, and its primary interface is in C++, but it still retains a less
comprehensive though extensive older C interface. There are bindings in Python, Java
and MATLAB/Octave; the API for these interfaces can be found in the online
documentation. Wrappers in other languages such as C#, Perl, Ch, Haskell, and Ruby
have been developed to encourage adoption by a wider audience.

Since version 3.4, OpenCV.js is a JavaScript binding for a selected subset of OpenCV
functions for the web platform.

All of the new developments and algorithms in OpenCV are now developed in the C++
interface.

OS support:

OpenCV runs on the following desktop operating systems: Windows, Linux, macOS,
FreeBSD, NetBSD, OpenBSD. OpenCV runs on the following mobile operating
systems: Android, iOS, Maemo, BlackBerry 10. The user can get official releases from
SourceForge or take the latest sources from GitHub. OpenCV uses CMake.

XLS opener:

The XLS Viewer allows you to open files that were saved with other programs, such as
files saved with Microsoft Excel. It lets you view your XLS (spreadsheet) files without
having Microsoft Office installed and without uploading them to online converters. It is
a third-party tool that can be installed very quickly.

The program has a spreadsheet interface that is very similar to the Microsoft Excel
design, except with slightly fewer design flairs. The tool allows you to view, edit, load
and save XLS files. It has a very small footprint on your CPU (Central Processing Unit)
and it downloads and installs very quickly. It works on MS Windows
95/NT/98/Me/2000/XP/Vista. The software also works on Microsoft .NET 2.0 or higher,
and you can get it to work on Windows 10 if you use the backwards-compatibility
troubleshooter that comes with your Windows operating system.

The greatest thing about the XLS Viewer is that you may install it as a tool on your
computer, so that you neither need Microsoft Office installed to view your spreadsheets
nor have to upload your sensitive spreadsheet information to a website to convert it.
The font options are readable, and the fact that it looks like Microsoft Excel makes it
easier to use if you are already accustomed to Microsoft products.

4.6 Algorithm:
Algorithm testing:

The algorithm was tested by comparing the 3D point cloud acquired with an open-source
solution (OpenCV) against the point cloud acquired with the proven software Agisoft
PhotoScan. RO in OpenCV was calculated from 152 image points in each pair of
images used for the orientation, i.e. stereo calibration; in total, 10 image pairs were
used. Agisoft's algorithm, on the other hand, detected 2247 tie points visible in both
images, creating a strong geometry that way. The ground sample distance was 0.22 mm.
A precalibrated camera was used for point cloud generation, and no further camera
optimization was performed. The Agisoft point cloud was created at the maximum
offered quality, from alignment to point cloud generation, and with minimal further
processing (depth filtering and interpolation), to match the minimally processed
OpenCV point cloud.

Accuracy of the point cloud: The accuracy of the point cloud was tested by comparing
the produced point cloud to a model acquired with a better camera and at a larger
scale, which is considered the reference in this paper. The reference (etalon) model
was produced from 73 images at a resolution of 4272x2848, with a ground sample
distance of 0.07 mm. To examine the accuracy of the stereo point cloud, the point
cloud created with OpenCV and the point cloud created in Agisoft PhotoScan from the
same images were both compared with the reference model.

Algorithms:

Import the required libraries:

Load the images and create a list of known encodings:

Capture the video stream and detect faces:

Compare the encodings with known encodings and mark attendance:

Display the video stream and attendance log:
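The code screenshots for these steps are not reproduced in the text. As a rough sketch of the comparison and marking steps, assuming 128-dimensional face encodings such as those produced by the third-party face_recognition library (whose default match tolerance is 0.6), the core logic might look like this; camera capture and encoding extraction are omitted:

```python
import csv
from datetime import datetime
import numpy as np

def best_match(known_encodings, names, probe, tolerance=0.6):
    """Compare a probe encoding against the known 128-d encodings.
    Returns the closest name, or None when no distance falls within
    `tolerance` (0.6 is the face_recognition library's default)."""
    dists = np.linalg.norm(np.asarray(known_encodings) - probe, axis=1)
    i = int(np.argmin(dists))
    return names[i] if dists[i] <= tolerance else None

def mark_attendance(name, path="attendance.csv"):
    """Append one name/timestamp row to the attendance log."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([name, datetime.now().isoformat(timespec="seconds")])
```

In the full pipeline, `best_match` runs on every encoding found in the video frame, and `mark_attendance` is called once per newly recognized person.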

4.7 Applications:

OpenCV's application areas include:

• 2D and 3D feature toolkits


• Egomotion estimation
• Facial recognition system
• Gesture recognition
• Human–computer interaction (HCI)

• Mobile robotics
• Motion understanding
• Object identification
• Segmentation and recognition
• Stereopsis stereo vision: depth perception from 2 cameras
• Structure from motion (SFM)
• Motion tracking
• Augmented reality

To support some of the above areas, OpenCV includes a statistical machine learning
library that contains:

• Boosting
• Decision tree learning
• Gradient boosting trees
• Expectation-maximization algorithm
• k-nearest neighbor algorithm
• Naive Bayes classifier
• Artificial neural networks
• Random forest
• Support vector machine (SVM)
• Deep neural networks (DNN)

4.8 Working Principle:

Face detection involves separating image windows into two classes: one containing
faces (targets), the other containing the background (clutter). It is difficult because,
although commonalities exist between faces, they can vary considerably in terms of
age, skin colour and facial expression. The problem is further complicated by differing
lighting conditions, image qualities and geometries, as well as the possibility of partial
occlusion and disguise.
An ideal face detector would therefore be able to detect the presence of any face

under any set of lighting conditions, upon any background. The face detection task
can be broken down into two steps.

The first step is a classification task that takes some arbitrary image as input and
outputs a binary value of yes or no, indicating whether there are any faces present in
the image. The second step is the face localization task that aims to take an image as
input and output the location of any face or faces within that image as some bounding
box with (x, y, width, height).

Face Detection Can Be Classified Into a Few Steps:

1. Pre-Processing:
To reduce the variability in the faces, the images are processed before they are
fed into the network. All positive examples that is the face images are obtained
by cropping images with frontal faces to include only the front view. All the
cropped images are then corrected for lighting through standard algorithms.

2. Classification:
Neural networks are implemented to classify the images as faces or non-faces
by training on these examples. We use both our own implementation of the neural
network and the neural network toolbox for this task. Different network
configurations are experimented with to optimize the results.

3. Localization:
The trained neural network is then used to search for faces in an image and, if
present, localize them in a bounding box. The features of the face on which this
work has been done are: position, scale, orientation and illumination.

Face detection is a computer technology that determines the location and size of
human faces in an arbitrary (digital) image. The facial features are detected, and any
other objects like trees, buildings and bodies are ignored. It can be regarded as a
'specific' case of object-class detection, where the task is finding the locations and
sizes of all objects in an image that belong to a given class. Face detection can also
be regarded as a more 'general' case of face localization, in which the task is to find
the locations and sizes of a known number of faces (usually one). Basically, there are
two types of approaches to detect the facial part in a given image: feature-based and
image-based. The feature-based approach tries to extract features of the image and
match them against knowledge of the face features, while the image-based approach
tries to get the best match between training and testing images.

Feature-Based Approach:

Active Shape Model: Active shape models focus on complex non-rigid features, like
the actual physical and higher-level appearance of features. Active Shape Models
(ASMs) are aimed at automatically locating landmark points that define the shape of
any statistically modelled object in an image, such as facial features like the eyes,
lips, nose, mouth and eyebrows. The training stage of an ASM involves building a
statistical facial model from a training set containing images with manually annotated
landmarks. ASMs are classified into three groups: snakes, PDM, and deformable
templates.

Snakes: The first type uses a generic active contour called a snake, first introduced
by Kass et al. in 1987. Snakes are used to identify head boundaries [8,9,10,11,12].
To achieve this, a snake is first initialized in the proximity of a head boundary; it then
locks onto nearby edges and subsequently assumes the shape of the head. The
evolution of a snake is achieved by minimizing an energy function Esnake (by analogy
with physical systems), denoted as

Esnake = Einternal + Eexternal

where Einternal and Eexternal are the internal and external energy functions. The
internal energy depends on the intrinsic properties of the snake and defines its natural
evolution, typically shrinking or expanding. The external energy counteracts the
internal energy and enables the contour to deviate from this natural evolution and
eventually assume the shape of nearby features, i.e. the head boundary, at a state of
equilibrium. There are two main considerations in forming snakes: the selection of
energy terms and energy minimization. Elastic energy is commonly used as the
internal energy; it varies with the distance between control points on the snake, giving
the contour an elastic-band characteristic that causes it to shrink or expand. The
external energy, on the other hand, relies on image features. Energy minimization is
performed with optimization techniques such as steepest gradient descent, which
requires heavy computation. Huang and Chen, and Lam and Yan, both employ fast
iteration methods based on greedy algorithms. Snakes have some demerits: the
contour often becomes trapped on false image features, and snakes are not suitable
for extracting non-convex features.

Deformable Templates:
Deformable templates were then introduced by Yuille et al. to take into account the a
priori knowledge of facial features and to improve on the performance of snakes.
Locating a facial feature boundary is not an easy task, because the local evidence of
facial edges is difficult to organize into a sensible global entity using generic contours;
the low brightness contrast around some of these features also hampers the edge
detection process. Yuille et al. took the concept of snakes a step further by
incorporating global information about the eye to improve the reliability of the
extraction process. Deformation is based on local valleys, edges, peaks, and
brightness. Besides the face boundary, salient feature (eyes, nose, mouth and
eyebrows) extraction is a great challenge of face recognition. The energy is

E = Ev + Ee + Ep + Ei + Einternal

where Ev, Ee, Ep and Ei are the external energies due to valley, edges, peak and
image brightness, and Einternal is the internal energy.

PDM (Point Distribution Model):

Independently of computerized image analysis, and before ASMs were developed,
researchers developed statistical models of shape. The idea is that once you
represent shapes as vectors, you can apply standard statistical methods to them just
like any other multivariate object. These models learn allowable constellations of
shape points from training examples and use principal components to build what is
called a Point Distribution Model. These have been used in diverse ways, for example
for categorizing Iron Age brooches. Ideal Point Distribution Models can only deform in
ways that are characteristic of the object. Cootes and his colleagues were seeking
models which do exactly that, so if a beard, say, covers the chin, the shape model can
"override the image" to approximate the position of the chin under the beard. It was
therefore natural (but perhaps only in retrospect) to adopt Point Distribution Models.
This synthesis of ideas from image processing and statistical shape modelling led to
the Active Shape Model. The first parametric statistical shape model for image
analysis based on principal components of inter-landmark distances was presented
by Cootes and Taylor. Building on this approach, Cootes, Taylor, and their colleagues
released a series of papers that culminated in what we call the classical Active Shape
Model.

Low-Level Analysis:

Low-level analysis is based on low-level visual features like colour, intensity, edges
and motion.

Skin Colour Based: Colour is a vital feature of human faces. Using skin colour as a
feature for tracking a face has several advantages: colour processing is much faster
than processing other facial features, and under certain lighting conditions colour is
orientation invariant. The latter property makes motion estimation much easier,
because only a translation model is needed. Tracking human faces using colour as a
feature also has several problems; for example, the colour representation of a face
obtained by a camera is influenced by many factors (ambient light, object movement,
etc.).

Three main face detection algorithms are available, based on the RGB, YCbCr and
HSI colour space models. The implementation of these algorithms involves three main
steps:

(1) Classify the skin region in the colour space,
(2) Apply a threshold to mask the skin region, and
(3) Draw a bounding box to extract the face image.

Crowley and Coutaz suggested one of the simplest skin-colour algorithms for
detecting skin pixels. The perceived human colour varies as a function of the relative
direction to the illumination. The pixels of a skin region can be detected using a
normalized colour histogram, and can be normalized for changes in intensity by
dividing by luminance. An [R, G, B] vector is thus converted into an [r, g] vector of
normalized colour, which provides a fast means of skin detection. This algorithm fails
when there are other skin regions in the image, such as legs and arms. Chai and Ngan
suggested a skin-colour classification algorithm in the YCbCr colour space. Research
found that pixels belonging to a skin region have similar Cb and Cr values, so if
thresholds [Cr1, Cr2] and [Cb1, Cb2] are chosen, a pixel is classified as skin tone
when its [Cr, Cb] values fall within them. The skin-colour distribution gives the face
portion in the colour image; the algorithm also has the constraint that the face should
be the only skin region in the image. Kjeldsen and Kender defined a colour predicate
in HSV colour space to separate skin regions from the background. Skin-colour
classification in HSI colour space is the same as in YCbCr, but here the relevant
values are hue (H) and saturation (S). Similarly, thresholds [H1, S1] and [H2, S2] are
chosen, and a pixel is classified as skin tone when its [H, S] values fall within them;
this distribution gives the localized face image. Like the previous two algorithms, this
one has the same constraint.

Motion-Based:
When use of video sequence is available, motion information can be used to locate
moving objects. Moving silhouettes like face and body parts can be extracted by simply
thresholding accumulated frame differences. Besides face regions, facial features can
be located by frame differences.

Gray-Scale-Based:

Gray information within a face can also be treated as an important feature. Facial
features such as eyebrows, pupils, and lips generally appear darker than their
surrounding facial regions. Various recent feature extraction algorithms search for
local gray minima within segmented facial regions. In these algorithms, the input
images are first enhanced by contrast stretching and gray-scale morphological
routines to improve the quality of local dark patches and thereby make detection
easier. The extraction of dark patches is achieved by low-level gray-scale
thresholding. Yang and Huang presented a new approach based on the gray-scale
behaviour of faces in pyramid (mosaic) images. This system utilizes a hierarchical
face location procedure consisting of three levels. The higher two levels are based on
mosaic images at different resolutions, and in the lower level an edge detection
method is applied. This algorithm also gives a fine response in complex backgrounds
where the size of the face is unknown.

Edge-Based:
Face detection based on edges was introduced by Sakai et al. This work was based
on analysing line drawings of faces from photographs, aiming to locate facial features.
Later, Craw et al. proposed a hierarchical framework based on Sakai et al.'s work to
trace a human head outline, after which remarkable work was carried out by many
researchers in this specific area. The method is very simple and fast. The proposed
framework consists of three steps. Initially, the images are enhanced by applying a
median filter for noise removal and histogram equalization for contrast adjustment. In
the second step, edge images are constructed from the enhanced image by applying
the Sobel operator. Then a novel edge-tracking algorithm is applied to extract
sub-windows from the enhanced image based on edges. Finally, a Back-Propagation
Neural Network (BPN) is used to classify each sub-window as either face or non-face.

Feature Analysis:

These algorithms aim to find structural features that exist even when the pose,
viewpoint, or lighting conditions vary, and then use these to locate faces. These
methods are designed mainly for face localization.

Feature Searching:

Viola-Jones Method:

Paul Viola and Michael Jones presented an approach for object detection which
minimizes computation time while achieving high detection accuracy: a fast and
robust method for face detection that was 15 times quicker than any technique at its
time of release, with 95% accuracy at around 17 fps. The technique relies on simple
Haar-like features that are evaluated quickly through a new image representation.
Based on the concept of an "Integral Image", it generates a large set of features and
uses the AdaBoost boosting algorithm to reduce the overcomplete set, while the
introduction of a degenerative tree of boosted classifiers provides robust and fast
inference. The detector is applied in a scanning fashion on gray-scale images; the
scanned window can be scaled, as can the features evaluated.
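The integral image idea can be made concrete with a short sketch. This is an illustrative NumPy implementation, not code from the project; it shows how any rectangular pixel sum (the building block of Haar-like features) reduces to four table lookups:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum of img[:y, :x],
    so every rectangular sum needs only four lookups afterwards."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h window whose top-left corner is (x, y)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
```

A Haar-like feature is then just a signed combination of a few `rect_sum` calls, which is why feature evaluation stays cheap at every window scale.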

Gabor Feature Method:
Sharif et al. proposed an Elastic Bunch Graph Map (EBGM) algorithm that successfully
implements face detection using Gabor filters. The proposed system applies 40
different Gabor filters to an image, yielding 40 images with different angles and
orientations. Next, the maximum intensity points in each filtered image are calculated
and marked as fiducial points. The system reduces these points according to the
distance between them, then calculates the distances between the reduced points
using the distance formula. Finally, the distances are compared with the database; if
a match occurs, the faces in the image have been detected.

Constellation Method:
All methods discussed so far are able to track faces, but some issues remain: locating
faces of various poses in a complex background is truly difficult. To reduce this
difficulty, investigators form groups of facial features in face-like constellations using
more robust modelling approaches such as statistical analysis. Various types of face
constellations have been proposed by Burl et al., who establish the use of statistical
shape theory on features detected from a multiscale Gaussian derivative filter. Huang
et al. also apply a Gaussian filter for pre-processing in a framework based on image
feature analysis.

Image-Based Approach:

Neural Network:
Neural networks have gained much attention in many pattern recognition problems,
such as OCR, object recognition, and autonomous robot driving. Since face detection
can be treated as a two-class pattern recognition problem, various neural network
algorithms have been proposed. The advantage of using neural networks for face
detection is the feasibility of training a system to capture the complex class-conditional
density of face patterns. One demerit, however, is that the network architecture has to
be extensively tuned (number of layers, number of nodes, learning rates, etc.) to get
exceptional performance. One of the earliest hierarchical neural networks was
proposed by Agui et al. [43]. The first stage has two parallel subnetworks whose inputs
are filtered intensity values from the original image; the inputs to the second-stage
network consist of the outputs from the subnetworks and extracted feature values,
and the output at the second stage indicates the presence of a face in the input region.
Propp and Samal developed one of the earliest neural networks for face detection.
Their network consists of four layers with 1,024 input units, 256 units in the first hidden
layer, eight units in the second hidden layer, and two output units. Feraud and Bernier
presented a detection method using auto-associative neural networks, based on the
result that an auto-associative network with five layers is able to perform a nonlinear
principal component analysis. One auto-associative network is used to detect
frontal-view faces, and another to detect faces turned up to 60 degrees to the left and
right of the frontal view. Later, Lin et al. presented a face detection system using a
probabilistic decision-based neural network (PDBNN), whose architecture is similar to
a radial basis function (RBF) network with modified learning rules and a probabilistic
interpretation.

Linear Subspace Method:

Eigenfaces Method:

An early example of employing eigenvectors in face recognition was by Kohonen, in
which a simple neural network was demonstrated to perform face recognition on
aligned and normalized face images. Kirby and Sirovich suggested that images of
faces can be linearly encoded using a modest number of basis images, an idea
arguably first proposed by Pearson in 1901 and then by Hotelling in 1933. Given a
collection of n-by-m pixel training images represented as vectors of size m x n, basis
vectors spanning an optimal subspace are determined such that the mean square
error between the projection of the training images onto this subspace and the original
images is minimized. They call the set of optimal basis vectors eigenpictures, since
these are simply the eigenvectors of the covariance matrix computed from the
vectorized face images in the training set. Experiments with a set of 100 images show
that a face image of 91 x 50 pixels can be effectively encoded using only 50
eigenpictures.
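The eigenpicture construction described above can be sketched as a small PCA routine. This is an illustrative implementation (computed via SVD, which avoids forming the full pixel-by-pixel covariance matrix), not code from the project:

```python
import numpy as np

def eigenfaces(face_matrix, k):
    """PCA on vectorized faces: rows of `face_matrix` are flattened images.
    Returns the mean face and the top-k eigenvectors ("eigenpictures")."""
    mean = face_matrix.mean(axis=0)
    centered = face_matrix - mean
    # SVD of the centered data yields the covariance eigenvectors directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, basis):
    """Encode a face as k coefficients in the eigenface subspace."""
    return basis @ (face - mean)
```

Recognition then compares the k coefficients of a probe face with the stored coefficients of each enrolled face, rather than comparing raw pixels.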

Statistical Approach:

Support Vector Machine (SVM):

SVMs were first introduced for face detection by Osuna et al. SVMs work as a new
paradigm to train polynomial functions, neural networks, or radial basis function (RBF)
classifiers. SVMs work on an induction principle called structural risk minimization,
which aims to minimize an upper bound on the expected generalization error. An SVM
classifier is a linear classifier where the separating hyperplane is chosen to minimize
the expected classification error of unseen test patterns. Osuna et al. developed an
efficient method to train an SVM for large-scale problems and applied it to face
detection. Based on two test sets of 10,000,000 test patterns of 19 x 19 pixels, their
system has slightly lower error rates and runs approximately 30 times faster than the
system by Sung and Poggio. SVMs have also been used to detect faces and
pedestrians in the wavelet domain.

Fig:4.7.1: Hardware model

CHAPTER-5

RESULT, DISCUSSION AND PERFORMANCE ANALYSIS

5.1 Result and Performance analysis:

1. Improvement of pictorial information for human interpretation

2. Processing of scene data for autonomous machine perception

In this second application area, interest focuses on procedures for extracting image
information in a form suitable for computer processing.
Examples include automatic character recognition, industrial machine vision for
product assembly and inspection, military reconnaissance, and automatic processing
of fingerprints.

Image:
An image refers to a 2D light intensity function f(x, y), where (x, y) denotes spatial
coordinates and the value of f at any point (x, y) is proportional to the brightness or
gray level of the image at that point. A digital image is an image f(x, y) that has been
discretized both in spatial coordinates and in brightness. The elements of such a
digital array are called image elements, or pixels.

A Simple Image Model:


To be suitable for computer processing, an image f(x, y) must be digitized both
spatially and in amplitude. Digitization of the spatial coordinates (x, y) is called image
sampling; amplitude digitization is called gray-level quantization.
The storage and processing requirements increase rapidly with the spatial resolution
and the number of gray levels.
Example: a 256-gray-level image of size 256 x 256 occupies 64 KB of memory.
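The memory figure can be checked with a one-line calculation, since each pixel needs log2(number of gray levels) bits:

```python
import math

def image_storage_bytes(width, height, gray_levels):
    """Bytes needed for an uncompressed image: each pixel needs
    log2(gray_levels) bits."""
    bits_per_pixel = math.log2(gray_levels)
    return width * height * bits_per_pixel / 8

# A 256-gray-level image of size 256 x 256: 8 bits/pixel
print(image_storage_bytes(256, 256, 256))   # 65536.0 bytes = 64 KB
```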

Low-level processing means performing basic operations on images, such as reading
an image, resizing, rotation, RGB to gray-level conversion, and histogram equalization;
the output of low-level processing is a raw image. Medium-level processing means
extracting regions of interest from the output of low-level processing; it deals with the
identification of boundaries, i.e. edges, a process called segmentation. High-level
processing adds artificial intelligence on top of the medium-level processed result.

Fundamental steps in image processing are:

1. Image acquisition: to acquire a digital image.

2. Image pre-processing: to improve the image in ways that increase the chances
of success for the other processes.

3. Image segmentation: to partition an input image into its constituent parts or
objects.

4. Image representation: to convert the input data to a form suitable for computer
processing.

5. Image description: to extract features that yield quantitative information of
interest or that are basic for differentiating one class of objects from another.

6. Image recognition: to assign a label to an object based on the information provided
by its description.

Elements of Digital Image Processing Systems:

The basic operations performed in a digital image processing system include


1. Acquisition

2. Storage

3. Processing

4. Communication

5. Display

A Simple Image Formation Model:


Images are denoted by a two-dimensional function f(x, y), which may be characterized
by two components:

1. The amount of source illumination i(x, y) incident on the scene

2. The amount of illumination r(x, y) reflected by the objects in the scene

These combine as f(x, y) = i(x, y) r(x, y), where 0 < i(x, y) < ∞ and 0 < r(x, y) < 1.
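The model can be written directly as array arithmetic (NumPy assumed); the illumination and reflectance values below are illustrative:

```python
import numpy as np

# f(x, y) = i(x, y) * r(x, y): illumination times reflectance
i = np.full((4, 4), 1000.0)   # office-level illumination, lm/m^2
# reflectance must lie in the open interval (0, 1)
r = np.clip(np.linspace(0.0, 1.0, 16).reshape(4, 4), 0.01, 0.99)
f = i * r                     # resulting brightness function
```

Brightness grows with reflectance under fixed illumination, as the model predicts.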

Typical Values of Illumination i(x, y):

• Sun on a clear day: ~90,000 lm/m^2, down to ~10,000 lm/m^2 on a cloudy day

• Full moon on a clear evening: ~0.1 lm/m^2

• Typical illumination level in a commercial office: ~1000 lm/m^2
Face Detection:
The problem of face recognition is, to a large extent, a problem of face detection. This
fact often seems surprising to new researchers in the area. However, before face
recognition is possible, one must be able to reliably find a face and its landmarks. This
is essentially a segmentation problem, and in practical systems most of the effort goes
into solving it; the actual recognition based on features extracted from these facial
landmarks is only a minor final step.
There are two types of face detection problems:
1) Face detection in images and
2) Real-time face detection

Face Detection In Images:

Most face detection systems attempt to extract a fraction of the whole face, thereby
eliminating most of the background and other areas of an individual's head, such as
hair, that are not necessary for the face recognition task. With static images, this is
often done by running a window across the image; the face detection system then
judges whether a face is present inside the window (Brunelli and Poggio, 1993).
Unfortunately, with static images there is a very large search space of possible face
locations.

Most face detection systems use an example-based learning approach to decide
whether or not a face is present in the window at a given instant (Sung and
Poggio, 1994; Sung, 1995). A neural network or some other classifier is trained
using supervised learning with 'face' and 'non-face' examples, enabling it to
classify a window as 'face' or 'non-face'. Unfortunately, while it is relatively easy to
find face examples, how would one find a representative sample of images which
represent non-faces (Rowley et al., 1996)? Face detection systems using example-based
learning therefore need thousands of 'face' and 'non-face' images for effective
training: Rowley, Baluja, and Kanade (Rowley et al., 1996) used 1025 face images and
8000 non-face images (generated from 146,212,178 sub-images) for their training set.
Another technique for determining whether there is a face inside the detection
window is template matching. The difference between a fixed target pattern (face)
and the window is computed and thresholded; if the window contains a pattern
close to the target pattern, the window is judged as containing a face. One
implementation of template matching, called correlation templates, uses a whole bank
of fixed-size templates to detect facial features in an image (Bichsel, 1991; Brunelli
and Poggio, 1993). By using several templates of different (fixed) sizes, faces of
different scales are detected. The other implementation uses a deformable template
(Yuille, 1992): instead of several fixed-size templates, a single non-rigid template is
used whose size is varied in the hope of detecting a face in the image.
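A minimal sketch of the fixed-template idea, using normalized cross-correlation as the similarity measure on a toy random image with the target pattern planted in it (NumPy assumed; real systems use templates of several sizes):

```python
import numpy as np

def ncc(window, template):
    """Normalized cross-correlation between two equally sized patches."""
    w = window - window.mean()
    t = template - template.mean()
    denom = np.sqrt((w ** 2).sum() * (t ** 2).sum())
    return float((w * t).sum() / denom) if denom else 0.0

def match_template(image, template, threshold=0.9):
    """Slide the template over the image; return (row, col) positions
    whose similarity exceeds the threshold ('judged as containing a face')."""
    th, tw = template.shape
    H, W = image.shape
    hits = []
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            if ncc(image[y:y + th, x:x + tw], template) > threshold:
                hits.append((y, x))
    return hits

rng = np.random.default_rng(2)
image = rng.random((20, 20))
template = rng.random((5, 5))
image[7:12, 3:8] = template          # plant the target pattern
hits = match_template(image, template)
```

The planted location is reported because its correlation with the template is (near) perfect, while random windows score far below the threshold.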

A face detection scheme related to template matching is image invariants. Here
the fact that the local ordinal structure of the brightness distribution of a face remains
largely unchanged under different illumination conditions (Sinha, 1994) is used to
construct a spatial template of the face that closely corresponds to facial features.
In other words, the average grey-scale intensities in human faces are used as a basis
for face detection; for example, an individual's eye region is almost always darker than
his forehead or nose. An image therefore matches the template if it satisfies the
'darker than' and 'brighter than' relationships (Sung and Poggio, 1994).
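The 'darker than'/'brighter than' test can be sketched as a comparison of mean band intensities. The three-band layout below is a rough assumption for illustration, not Sinha's actual ratio template:

```python
import numpy as np

def satisfies_face_invariant(patch):
    """Ordinal brightness test: the eye band should be darker than both
    the forehead band above it and the nose/cheek band below it."""
    h = patch.shape[0]
    forehead = patch[: h // 3].mean()
    eyes = patch[h // 3 : 2 * h // 3].mean()
    lower = patch[2 * h // 3 :].mean()
    return eyes < forehead and eyes < lower

# A face-like brightness pattern: bright / dark / bright bands
face_like = np.vstack([
    np.full((3, 9), 200.0),   # bright forehead
    np.full((3, 9), 60.0),    # dark eye-eyebrow band
    np.full((3, 9), 180.0),   # bright nose and cheeks
])
flat = np.full((9, 9), 128.0)  # uniform patch, no face structure
```

Because only orderings are compared, scaling all intensities (a global lighting change) does not alter the result.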

Real-Time Face Detection:
Real-time face detection involves detecting a face in a series of frames from a
video-capture device. While the hardware requirements for such a system are far
more stringent, from a computer vision standpoint real-time face detection is actually
a far simpler process than detecting a face in a static image. This is because, unlike
most of our surrounding environment, people are continually moving: we walk around,
blink, fidget, wave our hands about, and so on. Since in real-time face detection the
system is presented with a series of frames in which to detect a face, spatio-temporal
filtering (finding the difference between subsequent frames) identifies the area of the
frame that has changed, and the individual can be detected (Wang and Adelson, 1994;
Adelson and Bergen, 1986). Furthermore, as seen in the figure, exact face locations
can be easily identified using a few simple rules, such as:
1) the head is the small blob above a larger blob, the body; and 2) head motion must be
reasonably slow and contiguous, since heads do not jump around erratically (Turk and
Pentland, 1991a, 1991b).
Real-time face detection has therefore become a relatively simple problem and is
possible even in unstructured and uncontrolled environments using these very simple
image processing techniques and reasoning rules.
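The spatio-temporal filtering step can be sketched as simple frame differencing (NumPy assumed; the threshold value is illustrative):

```python
import numpy as np

def moving_region(prev_frame, frame, threshold=25):
    """Threshold the absolute difference between consecutive frames and
    return the bounding box of the changed area, or None if nothing moved."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int)) > threshold
    if not diff.any():
        return None
    ys, xs = np.nonzero(diff)
    return ys.min(), xs.min(), ys.max(), xs.max()

prev_frame = np.zeros((24, 32), dtype=np.uint8)
frame = prev_frame.copy()
frame[5:12, 10:18] = 200              # a "head" appears between frames
box = moving_region(prev_frame, frame)
```

The bounding box of the changed pixels is then where the blob-shape and slow-motion rules would be applied.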

Face Detection Process:

Face detection is the process of identifying different parts of human faces, such as
the eyes, nose, and mouth; in this project it is achieved using MATLAB code. Here the
author attempts to detect faces in still images using image invariants. To do this, it is
useful to study the grey-scale intensity distribution of an average human face. The
following 'average human face' was constructed from a sample of 30 frontal-view
human faces, of which 12 were female and 18 male. A suitably scaled colormap has
been used to highlight grey-scale intensity differences.

The grey-scale differences that are invariant across all the sample faces are
strikingly apparent: the eye-eyebrow area always contains dark (low-intensity)
gray levels, while the nose, forehead, and cheeks contain bright (high-intensity)
grey levels. After a great deal of experimentation, the researcher found that the
following areas of the human face were suitable for a face detection system based
on image invariants and a deformable template.

The above facial area performs well as a basis for a face template, probably because
of the clear division of the bright-intensity invariant area by the dark-intensity invariant
regions. Once this pixel area is located by the face detection system, any required
region can be segmented based on the proportions of the average human face.
After studying the above images, the author subjectively decided to use the following
as a basis for the dark-intensity-sensitive and bright-intensity-sensitive templates,
together with a pixel area 33.3% (of the width of the square window) below them.

Note the slight changes made to the bright-intensity invariant-sensitive template,
which were needed because of the pre-processing done by the system to overcome
irregular lighting (chapter six). Now that suitable dark- and bright-intensity invariant
templates have been decided on, a way is needed of using them to form two A-units
for a perceptron, i.e. a computational model is needed to assign neurons to the
distributions displayed.

Face Recognition:
Over the last few decades many techniques have been proposed for face recognition.
Many of the techniques proposed during the early stages of computer vision cannot
be considered successful, but almost all of the recent approaches to the face
recognition problem have been creditable. According to the research by Brunelli and
Poggio (1993) all approaches to human face recognition can be divided into two
strategies:
(1) Geometrical features and
(2) Template matching.

Face Recognition Using Geometrical Features:

This technique involves computing a set of geometrical features, such as nose
width and length, mouth position, and chin shape, from the picture of the face we
want to recognize. This set of features is then matched against the features of known
individuals, and a suitable metric such as Euclidean distance (finding the closest vector)
can be used to find the closest match. Most pioneering work in face recognition was
done using geometric features (Kanade, 1973), although Craw et al. (1987) did
relatively recent work in this area.

Fig:5.1.1: Face Recognition

The advantage of using geometrical features as a basis for face recognition is that
recognition is possible even at very low resolutions and with noisy images (images
with many disorderly pixel intensities): although the face cannot be viewed in detail,
its overall geometrical configuration can still be extracted. The technique's main
disadvantage is that automated extraction of the facial geometrical features is very
hard, and such extraction is also very sensitive to scaling and rotation of the face in
the image plane (Brunelli and Poggio, 1993). This is apparent in Kanade's (1973)
results, where he reported a recognition rate of between 45% and 75% with a database
of only 20 people. However, if these features are extracted manually, as in Goldstein
et al. (1971) and Kaya and Kobayashi (1972), satisfactory results may be obtained.
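A minimal sketch of the Euclidean-distance matching step; the enrolled feature vectors below are hypothetical, not measurements from a real gallery:

```python
import numpy as np

def closest_match(features, gallery):
    """Match a geometrical-feature vector (nose width/length, mouth
    position, chin shape, ...) against known individuals by Euclidean
    distance; returns the best-matching name."""
    names = list(gallery)
    dists = [np.linalg.norm(features - gallery[n]) for n in names]
    return names[int(np.argmin(dists))]

# Hypothetical feature vectors for three enrolled people
gallery = {
    "alice": np.array([3.1, 5.2, 2.0, 4.4]),
    "bob":   np.array([2.4, 4.8, 2.9, 3.9]),
    "carol": np.array([3.6, 5.9, 1.7, 4.1]),
}
probe = np.array([3.0, 5.3, 2.1, 4.5])   # noisy measurement of "alice"
who = closest_match(probe, gallery)
```

The same nearest-neighbor idea carries over unchanged to the template matching strategy described later; only the vectors change.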

Fig:5.1.2: Face Recognition

Fig:5.1.3: Attendance data storage

Face Recognition Using Template Matching:

This is similar to the template matching technique used in face detection, except that
here we are not trying to classify an image as 'face' or 'non-face' but to recognize a
face.

The whole face, eyes, nose, and mouth regions can all be used in a template matching
strategy. The basis of the strategy is to extract whole facial regions (matrices of pixels)
and compare these with stored images of known individuals; once again, Euclidean
distance can be used to find the closest match. The simple technique of comparing
grey-scale intensity values for face recognition was used by Baron (1981). However,
there are far more sophisticated methods of template matching that involve extensive
pre-processing and transformation of the extracted grey-level intensity values. For
example, Turk and Pentland (1991a) used Principal Component Analysis, sometimes
known as the eigenfaces approach, to pre-process the gray levels, and Wiskott et al.
(1997) used elastic graphs encoded with Gabor filters to pre-process the extracted
regions. An investigation of geometrical features versus template matching for face
recognition by Brunelli and Poggio (1993) concluded that although a feature-based
strategy may offer higher recognition speed and smaller memory requirements,
template-based techniques offer superior recognition accuracy.

5.2 Disadvantages:

The following problem scope for this project was arrived at after reviewing the literature
on face detection and face recognition and determining possible real-world situations
where such systems would be of use. The following system requirements were
identified:

1. A system to detect frontal-view faces in static images.

2. A system to recognize a given frontal-view face.

3. Only expressionless, frontal-view faces will be presented for face detection and
recognition.

4. All implemented systems must display a high degree of lighting invariance.

5. All systems must possess near real-time performance.

6. Both fully automated and manual face detection must be supported.

7. Frontal-view face recognition will be realised using only a single known image.

8. Automated face detection and recognition should be combined into a fully
automated system; the face recognition sub-system must display a slight degree of
invariance to scaling and rotation errors in the segmented image extracted by the
face detection sub-system.

9. The frontal-view face recognition system should be extended to a pose-invariant
face recognition system.

Unfortunately, although we may specify constricting conditions for our problem domain,
it may not be possible to strictly adhere to these conditions when implementing a
system in the real world.

Face Recognition Difficulties:

1. Identify similar faces (inter-class similarity)


2. Accommodate intra-class variability due to
2.1 head pose
2.2 illumination conditions
2.3 expressions
2.4 facial accessories
2.5 aging effects
3. Cartoon faces

Face recognition and detection is a pattern recognition approach for personal
identification, alongside other biometric approaches such as fingerprint recognition,
signature, retina, and so forth. To handle the variability in faces, the images are
processed before they are fed into the network: all positive examples, i.e. the face
images, are obtained by cropping images with frontal faces to include only the front
view, and all cropped images are then corrected for lighting using standard
algorithms.
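The lighting correction mentioned above is commonly done with histogram equalization; a minimal NumPy version (one standard algorithm, not necessarily the exact one used here) is:

```python
import numpy as np

def equalize_histogram(image):
    """Histogram equalization for an 8-bit grayscale image: remap gray
    levels so that their cumulative distribution becomes roughly uniform."""
    img = image.astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                 # first occupied gray level
    n = img.size
    lut = np.round((cdf - cdf_min) / (n - cdf_min) * 255).astype(np.uint8)
    return lut[img]                            # apply the lookup table

# A uniformly dark face crop gets stretched over the full 0-255 range
dark = np.clip(np.random.default_rng(3).normal(40, 10, (16, 16)), 0, 255)
eq = equalize_histogram(dark)
```

After equalization the darkest occupied level maps to 0 and the brightest to 255, which reduces the effect of overall scene brightness on the classifier.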

CHAPTER-6

SUMMARY AND CONCLUSION

6.1 Summary:

A face recognition attendance system is a technology that uses artificial intelligence


and machine learning algorithms to identify and verify individuals based on their facial
features. The system captures the image of the face using a camera, and then
compares it with a pre-stored database of faces to determine the identity of the person.

The face recognition attendance system is used in various industries such as schools,
colleges, offices, and hospitals, to take attendance automatically and accurately,
without requiring any manual intervention. The system provides real-time attendance
data, which can be used to monitor attendance trends and improve efficiency.

The face recognition attendance system has several advantages over traditional
attendance systems, such as the elimination of time-consuming manual processes,
reduction of errors, prevention of fraud and impersonation, and the ability to work in
low-light conditions. However, concerns have been raised about privacy and security
issues related to the storage and use of facial recognition data.

6.2 Conclusion:

The purpose of reducing the errors that occur in the traditional attendance-taking
system has been achieved by implementing this automated attendance system. In this
paper, a face recognition system has been presented using deep learning, which
exhibits robustness in recognizing users with an accuracy of 98.3%. The result shows
the capability of the system to cope with changes in the pose and projection of faces.
With deep-learning-based face recognition, the problem of illumination during face
detection is solved, as the original image is turned into a HOG representation that
captures the major features of the image regardless of brightness. In the face
recognition step, local facial landmarks are considered for further processing, after
which the face is encoded into 128 measurements, and recognition is done by
finding the person's name from the encoding. The result is then used to generate an
Excel sheet, a PDF of which is sent to the students and professors at weekly intervals.
This system is convenient for the user and provides better security.
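The matching step described above, comparing a 128-measurement encoding against enrolled encodings by Euclidean distance, can be sketched as follows. The 0.6 tolerance is the value commonly used with dlib-style encodings, and the random vectors below merely stand in for real encodings:

```python
import numpy as np

def identify(encoding, known, tolerance=0.6):
    """Return the enrolled name whose 128-d encoding is closest to the
    probe (if within tolerance), else 'unknown'."""
    best_name, best_dist = "unknown", tolerance
    for name, enc in known.items():
        d = float(np.linalg.norm(encoding - enc))
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name

rng = np.random.default_rng(4)
known = {
    "student_a": rng.normal(size=128),
    "student_b": rng.normal(size=128),
}
probe = known["student_a"] + rng.normal(scale=0.01, size=128)  # same face, new photo
stranger = rng.normal(size=128)                                # unenrolled face
```

In the attendance pipeline, the returned name is what gets written into the weekly Excel sheet.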

Our project's objective is to make it simpler to track event attendance. There are
numerous methods for identifying individuals at events, such as using personalized
cards and collecting signatures at the event's opening, but we discovered that facial
recognition is more practical than any other conventional method. We dealt with
numerous issues throughout the development process as we discovered that facial
recognition is quite challenging. At first, there was a choice to be made: should we use
face encodings or a face recognition algorithm?

We observed that the latter is more suitable for our application. We also decided
to use the pictures directly in our program rather than storing them in a database, but
only for testing; as the number of users increases, both a face recognition algorithm
and a database can easily be implemented.

6.3 Advantages:

The face recognition-based attendance system offers several advantages over
traditional attendance marking systems, including:

1. Accurate and efficient attendance marking: The system uses advanced facial
recognition technology to mark attendance accurately and efficiently,
eliminating the need for manual attendance marking.

2. Real-time attendance tracking: The system provides real-time updates on
attendance, enabling organizations to monitor attendance as it happens and take
prompt action if necessary.

3. Improved data management: The system automatically records attendance
data, reducing the risk of errors and improving data management.

4. Secure and reliable: The system prevents fraudulent attendance marking and
ensures that attendance data is accurate and reliable.

5. Time-saving: The system eliminates the need for manual attendance marking,
saving time for both teachers and students.

6. Cost-effective: The system eliminates the need for costly attendance
management systems and reduces the workload of administrative staff.

7. User-friendly: The system is easy to use and requires minimal training, making
it accessible to everyone.

8. Contactless attendance marking: The system supports contactless attendance
marking, reducing the risk of infection during the COVID-19 pandemic and other
outbreaks of contagious disease.

6.4 Future scope:

Face recognition-based attendance systems have gained significant traction in recent
years, and the future scope of this technology is promising. Here are a few potential
areas where such systems could be used:

Increased adoption in workplaces: Face recognition-based attendance systems could
become the norm in the workplace, as they simplify the attendance process, save time,
and reduce errors. The technology could be integrated with existing systems such as
payroll and HR management.

Educational institutions: Schools and universities can use face recognition-based
attendance systems to take attendance quickly and efficiently. The system could also
be used to track attendance in online classes.

Government and security agencies: Face recognition-based attendance systems can
be used in government institutions, law enforcement agencies, and security
organizations for identifying and tracking individuals.

Healthcare: Hospitals and healthcare institutions could use face recognition-based
attendance systems to manage patient appointments and staff attendance, and to
monitor the entry and exit of visitors.

Retail industry: Retail stores could use face recognition-based attendance systems to
track employee attendance and manage shifts. They could also help in detecting
shoplifting and identifying repeat offenders.

Banking and financial institutions: Face recognition-based attendance systems could
be used by banks and financial institutions to identify customers and employees for
security purposes.

Overall, the future of face recognition-based attendance systems is promising, and the
technology is likely to become more widespread as it offers several benefits in terms
of accuracy, efficiency, and security.

