
Human Tracking and Tagging

(A-Syst)

FINAL YEAR PROJECT REPORT 2022

FACULTY OF COMPUTER SCIENCE AND


ENGINEERING (FCSE)

GHULAM ISHAQ KHAN INSTITUTE OF ENGINEERING


SCIENCES AND TECHNOLOGY (GIKI)

Group Members:
Hamza Mawaz Khan (2018132)
Nabeel Niaz (2018371)
Omar Farooq (2018379)
Umair Ayaz Aslam (2018484)

Advisors:
Dr. Raja Hashim
Dr. Ghulam Abbas


Certificate of Approval

It is certified that the work presented in this report was performed by Hamza
Mawaz Khan, Nabeel Niaz, Omar Farooq, and Umair Ayaz Aslam under
the supervision of Dr. Raja Hashim and Dr. Ghulam Abbas. The work is
adequate and lies within the scope of the B.S. degree in Computer
Science/Computer Engineering at Ghulam Ishaq Khan Institute of
Engineering Sciences and Technology.

Dr. Raja Hashim

(Advisor)

Dr. Ahmar Rashid

(Dean)
ABSTRACT

A-SYST is a web-based platform that assists instructors and university
administration in keeping track of student attendance using artificial
intelligence techniques. The system is deployed by placing a camera at a
fixed location from which it can detect everyone seated in a classroom. The
main objective of the project is to automate the process of monitoring and
logging student activities in the classroom for the instructors to review as
per their needs. The proposed solution eliminates the time-consuming
practice of manual attendance. This not only ensures rigorous compliance
with attendance rules but also discourages students from leaving class
without permission, and it effectively prevents individuals from marking
the attendance of others (proxies). The suggested system also collects
analytics and provides reports for the instructors at the conclusion of each
session, which can, if necessary, be exported as a tangible record.

This document presents the design, development, and a comprehensive
description of our project, which comprises a machine learning model and a
web application for visualizing the results of the implemented system.
ACKNOWLEDGEMENT

We are grateful that the Almighty granted and blessed us with the required
strength to complete this endeavor. We would like to express our sincere
appreciation to everyone who made it possible for us to finish this project.
We owe a debt of gratitude to Dr. Raja Hashim, our senior year project
adviser, for his stimulating suggestions and assistance, which allowed us to
efficiently plan our project. Finally, we would like to extend our gratitude
to the whole faculty of Computer Science and Engineering, who guided the
team to success. We also value the help provided by the panel members
during our project reviews in the form of comments and suggestions.
TABLE OF CONTENTS
CERTIFICATE OF APPROVAL................................................................2

ABSTRACT....................................................................................................3

ACKNOWLEDGEMENT.............................................................................4

TABLE OF CONTENTS..............................................................................5

LIST OF FIGURES.......................................................................................7

LIST OF TABLES.........................................................................................8

CHAPTER I: INTRODUCTION.................................................................9

Motivation.............................................................................................9
Project Perspective................................................................................9
Project Scope......................................................................................11
Product Functioning............................................................................11
User Characteristics............................................................................12
System Overview................................................................................12

CHAPTER II: LITERATURE SURVEY..................................................13

CHAPTER III: DESIGN.............................................................................18

Product Functions...............................................................................18
Constraints..........................................................................................18
Assumptions and Dependencies.........................................................18
User Interfaces....................................................................................19
Hardware Interfaces............................................................................19
Software Interfaces.............................................................................19
Communication Interfaces..................................................................19
Functional Requirements....................................................................19
Functional Requirements with Traceability information....................21
Non-Functional Requirements............................................................27
1. Performance Requirements: Response Time......................27
2. Performance Requirements: Security..................................27
3. Performance Requirements: Scalability..............................27
4. Performance Requirements: Platform.................................28
Software Quality Attributes................................................................28
1. Reliability................................................................................28
2. Availability.............................................................................28
3. Maintainability.......................................................................28
4. Portability...............................................................................29
5. Usability..................................................................................29
6. Scalability...............................................................................29

CHAPTER IV: PROPOSED SOLUTION................................................30

Architectural Design...........................................................................30
UML Diagrams...................................................................................31
1. Use Case Diagram..................................................................31
2. State Diagram.........................................................................33
3. Component Diagram.............................................................35
4. Activity Diagram....................................................................35
5. Deployment Diagram.............................................................37
Methodology.......................................................................................38
Encoding Process (using Haar Cascade Classifier)............................42

CHAPTER V: RESULTS AND DISCUSSION........................................48

CHAPTER VI: CONCLUSION AND FUTURE WORK........................51


REFERENCES.............................................................................................54
List of Figures

Figure 1 - System Overview...............................................................................................................


Figure 2 -List of Functional Requirements.........................................................................................
Figure 3 - Model View Template Architectural Model......................................................................
Figure 4 - Instructor-Server Use case.................................................................................................
Figure 5 - Admin-Server Use case......................................................................................................
Figure 6 - State Diagram.....................................................................................................................
Figure 7 - Component Diagram..........................................................................................................
Figure 8 - Activity Diagram...............................................................................................................
Figure 9 - Deployment Diagram.........................................................................................................
Figure 10 - Model Snippet 1...............................................................................................................
Figure 11 – Model Snippet 2..............................................................................................................
Figure 12 - Model Snippet 3...............................................................................................................
Figure 13 - Model Snippet 4...............................................................................................................
Figure 14 - Model Snippet 5...............................................................................................................
Figure 15 - Model Snippet 5...............................................................................................................
Figure 16 - CSV file...........................................................................................................................
Figure 17 - Haar features....................................................................................................................
Figure 18 - Face detection example....................................................................................................
Figure 19 - Relevant and Irrelevant feature example.........................................................
Figure 20 - Basis approach.................................................................................................................
Figure 21 - Landing page....................................................................................................................
Figure 22 - A-Syst Homepage............................................................................................................
Figure 23 - Instructor Portal...............................................................................................................
Figure 24 - Admin Portal....................................................................................................................
List of Tables
CHAPTER I: INTRODUCTION

Motivation

The old technique for registering attendance - storing paper sheets for
records, spending significant time manually confirming attendance, and
generating reports with the potential for human mistakes - has become
inefficient. Updates and maintenance of conventional attendance systems are
likewise exceedingly time-consuming. It will take at least a few hours to
update the entire school or college, regardless of whether the data is entered
manually or digitally. This time could be employed more effectively
on other, more essential duties. Similarly, at the conclusion of the academic
year, it would require an enormous amount of time to consolidate all of the
records and generate individual student reports. Having an automatic
attendance tracker eliminates this burden and compiles all records without
requiring human interaction. Thus, the time saved can be allocated to more
critical managerial tasks.

Project Perspective

The main purpose of this project is to provide end-users (instructors and
administrators) with a platform that caters to their needs by automating the
whole attendance-taking procedure, thereby saving the time and physical
effort of monitoring and tracking each student's attendance while ensuring
integrity. It is an end-to-end solution for the instructors and administration
to monitor and review the attendance analytics of students, to schedule
classes, and to rely on a higher level of integrity in the records.
Absenteeism has long been the most significant issue in academic
institutions. Students frequently skip classes for a variety of reasons. When
a friend is given the job of standing in for an absent student during the roll
call, it becomes difficult to keep track of their absences. In addition, many
teachers frequently skip the roll call due to a lack of teaching time or
laziness. Not only can this impair the academic achievement of the student,
but it can also affect the professor's mood.
Project Scope

A-SYST is a smart web-based student attendance application to assist


instructors in the scheduling of classes using Artificial Intelligence to
monitor students in real-time using cameras mounted at the point of entry
and exit. The purpose of the application is to allow the instructors to
schedule and view student attendance reports on their respective portals
efficiently. The system has two end-users, i.e., Instructors and Admins, and
provides them a platform where they can monitor and view student
attendance analytics. They can generate the relevant database records and
use them as per their requirements. The aim of the system is not only to
cater to the end-users but to make attendance taking fairer, smarter, and
more efficient while ensuring integrity and making sure the system cannot
be cheated as it is in manual and biometric attendance taking.

Product Functioning

There are two portals, to begin with, depending on the user. The admin portal
and the instructor portal. When the user first visits the website, they can
choose between the admin portal and the instructor portal. When the user
chooses the former, the admin page, there would be a login interface for the
admin where the admin can enter their respective login credentials, the
username, and the password. Once the admin has logged in, he or she will
have the option to update the database that holds the set of pictures that are
used to identify the students. The label of these images would be the title of
the student that they would be tagged with upon identification. These images
would be uploaded to the server, which would then be picked up by the
model, and the model will be trained on this dataset. For the instructor portal,
initially, there would be a login page where each instructor could log in with
their respective credentials. The instructor would then be taken to another
page after successfully logging in, where they could view the live attendance
analytics of the students. The instructor could also choose times to
schedule multiple future classes; at each chosen time, the attendance
system would activate in the classrooms and log student attendance.
Individual attendance files would be generated for the respective classes.
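The per-class attendance files mentioned above could be produced with Python's standard csv module; the following is a minimal sketch, where the function name, column names, and record format are illustrative assumptions rather than A-Syst's actual code:

```python
import csv
import io

def write_attendance_csv(records, stream):
    """Write one class session's attendance to a CSV stream.

    `records` is a list of (student_name, status) tuples; the column
    names below are assumptions for illustration.
    """
    writer = csv.writer(stream)
    writer.writerow(["student", "status"])
    for name, status in records:
        writer.writerow([name, status])

# Example: generate the record for one session in memory; in a real
# deployment the stream would be a file named after the class.
buf = io.StringIO()
write_attendance_csv([("Alice", "present"), ("Bob", "absent")], buf)
print(buf.getvalue())
```

In practice one file per scheduled class would be written to disk or to the database, but the CSV layout itself stays this simple.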

User Characteristics

This system consists of two main end-users, i.e., the Instructor and the
Administrator. The product's perspective from both points of view is as
follows:

Instructors access the system as a platform where they can monitor student
attendance analytics. They can filter by students as well as view combined
statistics.

Administrators access the system as a platform where they can update


student image databases as well as have access to the entire student
attendance database.

System Overview

Figure 1 - System Overview


CHAPTER II: LITERATURE SURVEY

In real-world applications such as video surveillance, human-machine


interaction, and security systems, face recognition is highly desired. Image
identification systems based on deep learning have shown higher precision
and processing speed in comparison to standard machine learning techniques.
Face recognition is the process of a visual system identifying an individual's
face. It has become an essential human-computer interface tool due to its
use in security systems, access control, video surveillance, commercial areas,
and even social networks like Facebook.
Face recognition has gained fresh attention with the fast development of
artificial intelligence owing to its non-intrusive nature and because it is the
major method for human identification when compared to other biometric
techniques. It is simple to test face recognition in an uncontrolled
environment without the subject's awareness. Methods based on shallow
learning utilise only the most fundamental parts of an image and extract
sample characteristics from manually engineered features. Deep learning
methods can extract more sophisticated face traits.
Deep learning is making tremendous strides in overcoming obstacles that
have impeded the artificial intelligence field's best efforts for decades. It has
shown excellent results in uncovering complex structures in high-dimensional data. It
has smashed records in image recognition, natural language processing,
semantic segmentation, and countless other real-world applications.
Therefore, attendance automation with real-time face recognition is an
ideal option for handling the daily chores of the student attendance system.
Such a system recognizes a student's face in order to record attendance
using facial landmarks, a form of facial biometrics, through the use of
high-definition video and other information technologies. In
our project, the system will be able to discover and identify human faces in
security camera photos and videos with speed and precision. A lot of
algorithms and techniques have been devised to enhance the performance of
face recognition; however, the concept of Deep Learning will be employed
here. It permits the translation of video frames into photographs so that a
student's face may be readily recognized for attendance purposes, enabling
the database to be immediately updated.
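The translation of video frames into photographs described above amounts to sampling the stream at a fixed interval; the arithmetic can be sketched in a few lines of pure Python (the function name and sampling rates are illustrative assumptions, not the project's actual code):

```python
def frames_to_sample(fps, duration_s, per_second=1):
    """Return the frame indices to extract as still images.

    For a stream recorded at `fps` frames per second, picking
    `per_second` evenly spaced frames each second keeps the
    recognition workload small while still covering the session.
    """
    step = max(1, fps // per_second)
    total = fps * duration_s
    return list(range(0, total, step))

# A 30 fps stream sampled once per second for 3 seconds:
print(frames_to_sample(30, 3))  # frames 0, 30, 60
```

The selected frames would then be passed to the face detector as ordinary images.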
There are several ways to accomplish this task, including the application of a
real-time computer vision algorithm within an automatic attendance
management system. The system used a non-intrusive camera in the
classroom to capture photos, and it matched the extracted face from the
image with previously saved faces. This system also employed machine
learning algorithms that are typically employed in computer vision. In this
study, however, we utilize the Haar Cascade classifier to extract facial
features. This is primarily an object detection method used to recognize faces
in images and real-time videos. The model developed as a result of this
training is available at OpenCV. The primary goal of the image's
characteristics is to make it simple to identify the image's boundaries and
lines, as well as regions where there is a dramatic shift in pixel intensity.
The Haar feature traverses from the top left to the bottom right of the
image in search of the specific feature that signifies an edge. The benefit
of edge-based feature techniques is their ability to integrate structural
information by grouping the pixels of face edge maps into line segments,
comparing these pixel computations, and carrying out the subsequent
steps.
The face detector identifies the appropriate boundary using a weak classifier
to extract haar-based features that classify images as positive or negative. If
these weak classifiers are combined in a cascade, they form a robust detector.
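The rectangle sums behind these Haar features are typically accelerated with an integral image, in which any rectangle sum costs only four array lookups. The following is a minimal pure-Python sketch of that idea; real detectors such as OpenCV's cascade use heavily optimized implementations:

```python
def integral_image(img):
    """Integral image: ii[y][x] = sum of img[0..y-1][0..x-1]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle at (x, y) with size w x h."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def edge_feature(ii, x, y, w, h):
    """Two-rectangle Haar feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

# A tiny 4x2 image with a sharp vertical edge: bright left, dark right.
img = [[9, 9, 1, 1],
       [9, 9, 1, 1]]
ii = integral_image(img)
print(edge_feature(ii, 0, 0, 4, 2))  # prints 32: strong edge response
```

A weak classifier simply thresholds one such feature value; the cascade chains many of them, rejecting non-face windows early.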

Tripathi et al. [1] presented a real-time system that can track
the presence of pupils in a classroom. The required supporting photos for
this model were provided through a webcam at a steady pace until the
system was shut off. The author reviewed several face detection and
identification algorithms. Students are distinguished using the AdaBoost
and Haar cascade classifiers. Although the author utilized OpenCV
libraries for face detection and recognition, he also employed P.C.A. and
LDA for a more in-depth understanding. The text also highlighted the
distinction between LDA and P.C.A.
In conclusion, the author expressed confidence in the system's correctness
and stated that the recognition rate is totally dependent on the size of the
employed image and the database.

The Viola Jones Face Detection Algorithm was described by Shireesha


Chintalapati et al. [2]. The authors have combined numerous Haar classifiers
to produce improved output rates up to 30-degree angles, as mentioned in the
study, which states that this technique yields superior results in a variety of
illumination circumstances. In the preprocessing phase, the detected face
images are converted to grayscale, their histograms are equalized, and
they are scaled to 100x100 pixels. The system utilized the LBPH method
for characteristic
extraction and the SVM classifier for classification purposes. This work
utilized an 80-person database (NITW database), including around 20 photos
of each participant. This document specifies some performance evaluation
conditions for the combination of LBPH and distance classifier, including a
false positive rate of 25%, an object distance of 4 feet for correct recognition,
a training time of 563 milliseconds, a recognition rate of 95% for static
images, a recognition rate of 78% for real-time video, and a rate of 2.3% for
occluded faces. The graphical user interface (G.U.I.) is constructed as a
WinForms application in Microsoft Visual C# with the EmguCV wrapper.

Akshara Jadhav et al. [3] employed the Viola-Jones face detection method
and the P.C.A. face recognition technique with machine learning and
SVM extraction capability. The authors also added preprocessing, which
involves histogram equalization and scaling of the retrieved face picture
to 100x100. It has been demonstrated that neural networks can be used for
facial identification, and we may envision a semi-supervised learning
strategy that employs facial recognition support vector machines to get
good results. After the face is identified, further processing generates
weekly or monthly attendance reports that may be emailed to parents and
guardians.

Nirmala Kar et al. [4] utilized the Haar cascade frontal-face XML file for
face detection and Eigenface for face verification. It was developed with
OpenCV libraries. The evaluation was carried out with respect to face
orientation. When the face orientation was about 0 degrees, the detection
and identification rates were 98.7 percent and 95 percent, respectively,
and the rates dropped gradually as the face orientation increased from 0
to 90 degrees.

Smit Hapani et al. [5] improved a system that validates a facial
recognition model. Haar cascade classifiers are employed for detection,
followed by Fisherface recognition. The technology achieves efficiency of
up to 50 percent with 15 pupils while modeling several faces with
variables like hats and glasses. The suggested approach utilizes classroom
video sources, with the generated frames used to detect faces.
Consequently, by adhering to these processes, the overall model's rate and
precision are enhanced.

Nazare Kanchan Jayant et al. [6] carried out the implementation of an


automated attendance system. This method is based on the facial detection
and face recognition algorithm developed by Viola-Jones. First, a database of
20 students is constructed utilizing various head postures for recognition
results. The face detection method was then implemented, and its efficacy
was evaluated based on the number of faces identified. The same method is
used to calculate the effectiveness of the facial recognition algorithm.
Refik Samet et al. [7] have devised an automated attendance system that uses
just mobile phones. This is accomplished by combining the Viola-Jones
algorithm and Ada-boost training for face detection, which, according to the
authors, should perform better in the actual world. For purposes of
recognition, the Euclidean distance was calculated for the three techniques of
recognition, namely Eigen face, Fisher face, and LBP. The precision of each
of the aforementioned recognition techniques was compared. The
smartphone application was designed for the system that generates
automated attendance records.
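The Euclidean-distance recognition step used in [7] reduces, for each of Eigenface, Fisherface, and LBP, to nearest-neighbour matching on feature vectors. A minimal sketch of that matching, with synthetic vectors and labels (the gallery entries are made up for illustration):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(probe, gallery):
    """Return the label whose enrolled vector is closest to `probe`.

    `gallery` maps a student label to that student's feature vector
    (e.g. an Eigenface projection); these entries are synthetic.
    """
    return min(gallery, key=lambda label: euclidean(probe, gallery[label]))

gallery = {"student_a": [0.1, 0.9, 0.2],
           "student_b": [0.8, 0.1, 0.7]}
print(recognize([0.2, 0.8, 0.3], gallery))  # prints student_a
```

A production system would also apply a distance threshold so that unknown faces are rejected rather than matched to the nearest enrolled student.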

The surveyed papers examined many strategies for enhancing the detection
and recognition rate. The results demonstrate that Haar Cascade is
consistent across all examined articles and has a high detection rate.
Despite the authors' laborious attempts to adapt the above-mentioned
techniques to a model that handles many faces, detection and
identification still fall short, necessitating the use of deep learning with
convolutional neural networks to fulfill the application's requirements.
CHAPTER III: DESIGN

Product Functions

This product performs three functions:

• Monitor class strength

• Identify and log each present student's attendance

• Generate reports for record-keeping

Constraints

There should be sufficient lighting within the classrooms for the system to
work adeptly; moreover, students should face the camera many times during
a normal classroom session. The system should also be provided with
sufficient training data for each student to work as intended.

Assumptions and Dependencies

A constant video stream is the key assumption of this product, since the
system needs this stream to calculate class strength and identify students
in order to generate analytics. Moreover, we also assume that the students
within the classroom have consistent appearances. The existence of a
video camera is the most important dependency, as one needs to be
present in each classroom to provide continuous video streams for the
system to operate as intended. We also expect the students not to leave the
classroom premises before a certain time.
User Interfaces

The user interface will be categorized into two sections: one for the
administration, who will select the students for each class, and the other for
the instructors, who will view class analytics. The landing page of the web
application will allow the incoming users to log in or request a sign-up.
There will be a separate page that will list out all the product features and
provide a guide as to how to use the product.

Hardware Interfaces

The product will require a continuous inflow of video data to operate. This
video data will be provided via a dedicated camera placed strategically
within the environment.

Software Interfaces

The product will use machine learning models to process data. The output of
the model will be saved to the database, and users can interact with the
system via the web dashboard.

Communication Interfaces

All communication with the camera and the C.U. will be done locally. An
internet connection will be needed to access the web application.

Functional Requirements

The primary functional requirements necessary for the system to operate are
listed in Figure 2; however, each F.R. is defined in detail in the section that
follows.
Figure 2 - List of Functional Requirements

FR1: Users should be able to request a demo of A-SYST through the web
application
FR2: Users should be able to log in with specific credentials
FR3: Show the total time each student is present in the classroom
FR4: Video must be supplied continuously for processing and generating
results in real-time
FR5: Count the total number of students
FR6: Show attendance analytics on the instructor's portal
FR7: Video storage
FR8: Logout
Functional Requirements with their respective traceability information

Table 1 - Functional Requirement 1


Table 2 - Functional Requirement 2

Table 3 - Functional Requirement 3


Table 4 - Functional Requirement 4

Table 5 - Functional Requirement 5


Table 6 - Functional Requirement 6

Table 7 - Functional Requirement 7


Table 8 - Functional Requirement 8
Non-Functional Requirements

1. Performance Requirements: Response Time

We shall use a database that handles workloads with real-time processing,
making data transactions quick and response times low. It will also
provide secure, static, production-grade hosting. Furthermore, response
time is further improved by performing detection and recognition in real
time.

2. Performance Requirements: Security


The security and integrity of the entire product are guaranteed by classifying
the access into two main categories.

Privileged Access: This access is granted only to the admin and the
instructor panel that is required to log in with authenticated credentials.

Unprivileged Access: This access is granted to all the users who are using
the product demo.

3. Performance Requirements: Scalability


The web nodes that run the Django code are independent of the
persistence layer (database, cache, session storage, etc.), so we can grow
them separately.

Django web nodes scale horizontally because they store no state. Due to
Django's large open-source community, most of the tools needed to scale
an application already exist. For example, Amazon Web Services
(A.W.S.) S3 is used for storage, and database indexes make it much more
efficient to look up records by date or other indexed criteria.
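The effect of a database index can be illustrated outside Django: conceptually, an index is a precomputed mapping from an indexed column to its rows, so a query by date becomes a single lookup instead of a full scan. The record fields below are assumptions for illustration, not the project's schema:

```python
from collections import defaultdict

# Unindexed: every date query scans all attendance records, O(n).
records = [
    {"date": "2022-05-01", "student": "Alice", "status": "present"},
    {"date": "2022-05-01", "student": "Bob", "status": "absent"},
    {"date": "2022-05-02", "student": "Alice", "status": "present"},
]

def scan_by_date(records, date):
    return [r for r in records if r["date"] == date]  # linear scan

# Indexed: build the date -> rows mapping once; each query is then a
# single dictionary lookup, which is roughly what a B-tree index does
# for the database.
index = defaultdict(list)
for r in records:
    index[r["date"]].append(r)

print(len(index["2022-05-01"]))  # prints 2
```

In Django this corresponds to declaring the field with `db_index=True` so PostgreSQL maintains the index automatically.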

4. Performance Requirements: Platform

The platform will use HTML, CSS, Bootstrap, and React for the front-end of
the application. PostgreSQL will be used for the database services. Django
will be used for back-end services.

Software Quality Attributes

1. Reliability
The resulting program will be trustworthy and free of bugs. Users can
retrieve the relevant attendance details. When any of the app's functions
are used, services will be provided promptly and accurately. The user will
be able to submit feedback in the event of a problem and will receive a
timely response from the administrative personnel.

2. Availability
The system can be accessed by anyone with a stable internet connection at
any time, as our servers run around the clock.

3. Maintainability
Our system code will be written in a readable and testable manner,
allowing debugging and other maintenance activities to be conducted
efficiently.
4. Portability
The system is highly portable, as it can be launched on any device with a
web browser and an internet connection.

5. Usability
The system will work failure-free under the specified conditions, e.g., a
stable internet connection.

6. Scalability
Allows multiple instructors to upload attendance and view past footage
through the central database at the same time.
CHAPTER IV: PROPOSED SOLUTION

Architectural Design

Figure 3 - Model View Template Architectural Model

MVT (Model View Template) is a software design pattern. It consists of
three essential components: Model, View, and Template. The Model
facilitates database management; it is a data access layer responsible for
handling data.

The Template is a presentation layer that manages every aspect of the User
Interface. The View is used to execute business logic, interact with a data-
carrying model, and render a template.

Model — Equivalent to the model in MVC. It includes the code responsible
for data and database management.

View — In the MVT design pattern, the View determines which data to
display.

Template — Templates are used to specify the output's structure. A template
can be populated with data using placeholders; it specifies how the data is
displayed. Using a generic list view, for instance, we can display a set of
database records.

While Django loosely adheres to the MVC paradigm, it maintains its own
conventions; the framework itself manages the controller part.

There is no distinct controller; the entire program is based on Model,
View, and Template. Therefore, it is known as an MVT application.

The benefits of this architecture are that it is loosely coupled, easy to
modify, and suitable for small to large-scale applications.
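As a framework-free illustration of this separation, the three layers can be sketched in Python. The class and function names below are ours for illustration only; they are not part of Django's API.

```python
# Hypothetical sketch of the MVT separation; not actual Django code.
from dataclasses import dataclass
from string import Template

@dataclass
class Student:                  # Model: the data access layer
    name: str
    present: bool

# Template: the presentation layer, populated via placeholders
ROW = Template("<li>$name: $status</li>")

def attendance_view(students):  # View: business logic that renders model data
    rows = [ROW.substitute(name=s.name,
                           status="present" if s.present else "absent")
            for s in students]
    return "<ul>" + "".join(rows) + "</ul>"

html = attendance_view([Student("Hamza", True), Student("Sara", False)])
```

In Django, the framework itself wires these layers together, which is why no separate controller appears in application code.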

UML Diagrams

In this section, different views of the system that will be utilized to construct
the application are modeled in Unified Modeling Language (UML) and
explained. Use case diagrams, class diagrams, component diagrams, activity
diagrams, and deployment diagrams are included to provide an overview of
how the internal system will function and how users will interact with the
system.

1. Use Case Diagram


Any user can log in to the system through the “login” interface, which
implements the underlying authentication. This interface depends on the
“error” and “verify” methods to check inputs and either lock the user out
of the application if the credentials are incorrect or let them access the
application otherwise.

The first use case diagram depicts the interaction between the instructor and
the system. The instructor has the ability to schedule sessions via the
“capture attendance” interface. These sessions make it possible for the
system to know exactly when to start proctoring for attendance. When the
model is signaled to run via the “trigger model” interface at set times, the
system activates to fill the attendance sheet; this is done via the
“generate report” method in the use case diagram. As an added feature, the instructor
also has the ability to view the reports generated by the model through the
“view data analytics” interface.

The second use case diagram depicts the interaction between the admin and
the system. The admin can upload the dataset onto the system through the
“upload data” interface. The server is responsible for using this data to train
the model. It does this via the “prepare dataset” interface and subsequently
through accessing the “train model” interface.

Figure 4 - Instructor-Server Use case


Figure 5 - Admin-Server Use case

2. State Diagram
A state diagram is a type of diagram used in computer science and related
fields to show the behavior of systems. State diagrams require that the
system being portrayed has a finite number of states, which is sometimes
the case and other times a reasonable simplification. There are several
different kinds of state diagrams, each with its own semantics.
Figure 6 - State Diagram
3. Component Diagram

The instructor and admin components are interfaced with the authenticate
component, which itself makes use of the database component. The
instructor interface relies upon the reports generated by the model.
Preprocessed data from the incoming feed is accessed by the model for
making predictions. The admin interface relies upon the upload data
interface; it is the user’s job to provide proper labels for the data. This data is
then accessed by the model to make predictions for when the system is
applied.

Figure 7 - Component Diagram

4. Activity Diagram

The system works by turning the camera on, which generates an incoming
video feed for the system. After preprocessing this feed, ML techniques are
used to predict students in the frames. The timestamps, label, and ID of each
student are recorded in a separate file by the model; this file will be called
“analytics”. The instructor may log on to the system at any time to view
the generated attendance. The system can verify the admin and grant the
ability to upload the dataset. The admin can retrain the model on the new
dataset through the press of a button on the user interface; this is
particularly useful for training the system to recognize new faces.

Figure 8 - Activity Diagram


5. Deployment Diagram

Upon deployment, the client is able to send HTTP web requests to the server.
Depending on the request type, the server may choose to access either the
student database or the user database. This will depend on the required
response for the requests. The model is also hosted on the server and will
run in the background upon being triggered by relevant requests.

Figure 9 - Deployment Diagram


Methodology

Figure 10 - Model Snippet 1

Initially, we open the images folder, read all the images, store images in the
"images" dictionary and then store the name of the image at the same index
in the "className" dictionary. While storing the name of the image, we
remove the extension and just store the name of the image.
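This loading step can be sketched as follows. We assume one photo per student named "&lt;StudentName&gt;.jpg"; the actual snippet reads the images with OpenCV, while this sketch only collects paths to stay self-contained, and the function name is ours.

```python
# Hedged sketch of the dataset-loading step: collect image paths and
# extension-free labels, kept index-aligned.
import os

def load_dataset(path):
    images, classNames = [], []
    for fname in sorted(os.listdir(path)):
        name, ext = os.path.splitext(fname)
        if ext.lower() in (".jpg", ".jpeg", ".png"):
            images.append(os.path.join(path, fname))  # image path (read later)
            classNames.append(name)                   # label without extension
    return images, classNames
```

Keeping the two lists index-aligned is what lets a later match on an encoding be translated back into a student name.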

Figure 11 – Model Snippet 2

We convert all the images from RGB to grayscale. Next, we detect the faces
in each image and then find the encodings of the face and store them.

Figure 12 - Model Snippet 3

We open the camera and perform face detection in real-time. Once the face is
detected, we find the encoding of the face. Using the encoding of the face,
we compare it with all stored face encodings. If an image matches with a
distance lower than 0.39, we label the detected face with the same label as
the stored image and pass it to the checktimeandmarkAttendance function. If
no image in the dataset matches with a distance lower than 0.39, we label
the face as "Unrecognized" and ignore it.

Figure 13 - Model Snippet 4

The checktimeandmarkAttendance function initially opens the file containing
the scheduled start timings and dates of classes and preprocesses them into
a form the computer can work with.
Figure 14 - Model Snippet 5
We use the preprocessed times to see whether any class is currently in
session. If there is a class in session and attendance needs to be taken, we
print the class name and time in the CSV file and pass the name to the
markAttendance function.
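The schedule check can be sketched as follows. The timetable format "Class,Day,Start,End" and all names here are assumptions for illustration; the project's actual file format may differ.

```python
# Hedged sketch of the in-session check: compare the current day and time
# against preprocessed timetable entries.
from datetime import datetime

def class_in_session(schedule_lines, now=None):
    now = now or datetime.now()
    day, t = now.strftime("%a"), now.strftime("%H:%M")
    for line in schedule_lines:
        name, d, start, end = line.strip().split(",")
        if d == day and start <= t < end:   # string compare works for HH:MM
            return name
    return None
```

Zero-padded HH:MM strings compare correctly as plain strings, which keeps the check simple.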
Figure 15 - Model Snippet 6

The markAttendance function will print the Sr no, Name, and entry time of
the individual into the CSV file.
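A hedged sketch of this step, with the column layout and function name assumed from the description above:

```python
# Sketch of markAttendance: append serial number, name, and entry time to
# the CSV, skipping students already marked in the file.
import csv, os
from datetime import datetime

def mark_attendance(name, path="Attendance.csv", now=None):
    now = now or datetime.now()
    rows = []
    if os.path.exists(path):
        with open(path, newline="") as f:
            rows = list(csv.reader(f))
    if any(r[1] == name for r in rows):   # already marked; do nothing
        return
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([len(rows) + 1, name, now.strftime("%H:%M:%S")])
```

The duplicate check prevents a student who stays in view of the camera from being logged repeatedly during the same session.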

Figure 16 - CSV file

This is what the attendance CSV file looks like.

Encoding Process (using Haar Cascade Classifier)

A facial identification system is a technology that can recognize a
person's face from a digital picture or a video frame captured from a video
source. The Haar cascade classifier is based on the Viola-Jones detection
technique, which is trained by feeding an input set of faces and non-faces
to a face-identifying classifier.

Haar-Features are identical to CNN kernels, with the exception that Haar-
Feature values are manually set, whereas CNN kernel values are determined
by training.

Figure 17 - Haar features

Several Haar-Features are shown above. The first two are known as "edge
features" because they detect edges. The third is a "line feature," and the
fourth is a "four rectangle feature"; the latter is most likely used to
detect a slanted line when Haar features are applied to an image.

Each feature yields a single value determined by subtracting the sum of pixels
beneath a white rectangle from the sum of pixels beneath a black rectangle.
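For illustration, the value of a two-rectangle (edge) feature can be computed as below. This is a toy example on a tiny pixel window; the real Viola-Jones implementation uses integral images to compute these sums in constant time, and the function name is ours.

```python
# Toy computation of one two-rectangle Haar feature: sum of pixels under
# the "white" half minus the sum under the "black" half.
import numpy as np

def edge_feature(window):
    h, w = window.shape
    white = window[:, : w // 2].sum()   # left half treated as white
    black = window[:, w // 2 :].sum()   # right half treated as black
    return int(white - black)

patch = np.array([[9, 9, 1, 1],
                  [9, 9, 1, 1]])        # a strong vertical edge
```

A large absolute value signals an intensity transition (an edge) at that position; a flat region yields a value near zero.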
Figure 18 - Face detection example

Every haar characteristic corresponds to a facial characteristic.

Viola-Jones utilizes a base window size of 24×24 and calculates the
aforementioned features by sliding the window across the entire image, 1 px
at a time.

If we take into account all conceivable Haar feature attributes, such as
position, scale, and type, we arrive at about 160 thousand features.
Therefore, we must evaluate a vast number of features for each 24×24 px
window.

AdaBoost:
As indicated previously, a detector with a base resolution of 24×24 may
require the calculation of over 160,000 feature values. However, only a
tiny subset of these features is useful for face detection. AdaBoost is
used to discard irrelevant features and select only the most useful ones.

Figure 19 - Relevant and Irrelevant feature example

For instance, a feature that detects a vertical edge is useful for
recognizing a nose but unnecessary for recognizing a lip.
AdaBoost is utilized to choose the best features from over 160,000
alternatives. These selected features are also called weak classifiers.
Once they have been found, a weighted combination of them is used to
evaluate whether or not a particular window contains a face. Each selected
feature (weak classifier) is deemed admissible if it can at least
outperform random guessing (i.e., it classifies more than half the cases
correctly). Each weak classifier is suited to detecting a particular facial
characteristic, and its output is binary, indicating whether that feature
has been recognized or not. AdaBoost builds a powerful (strong) classifier
by linearly combining these weak classifiers.
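The final strong classifier can be sketched as a weighted vote of weak classifiers, thresholded at half the total weight, which is the usual Viola-Jones form. The weak classifiers and weights below are toy stand-ins, not trained values.

```python
# Toy sketch of AdaBoost's final decision: a weighted linear combination of
# binary weak classifiers, compared against half the sum of the weights.
def strong_classify(x, weak_classifiers, alphas):
    score = sum(a * h(x) for h, a in zip(weak_classifiers, alphas))
    return 1 if score >= 0.5 * sum(alphas) else 0

# Illustrative weak classifiers on an integer input, with made-up weights.
weaks = [lambda x: 1 if x > 2 else 0,
         lambda x: 1 if x > 5 else 0,
         lambda x: 1 if x % 2 == 0 else 0]
alphas = [1.5, 0.8, 0.3]
```

Each weak classifier only has to beat random guessing; the weighting lets the more reliable ones dominate the combined vote.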

Cascading:

We must compute the 2,500 relevant features selected by AdaBoost (from the
original 160,000) for each 24×24 window. That means sliding a 24×24 window
across the entire image, computing 2,500 features for each window, and then
taking a linear combination of all outputs to determine whether the window
exceeds a certain threshold. Even if an image contains one or more faces, a
disproportionately large number of the examined sub-windows will be
negative (non-faces). Because evaluating a single strong classifier built
from the linear combination of the best features on every window is
computationally expensive, the algorithm should focus on discarding
non-faces rapidly and spend more time on probable face regions.
Instead of calculating all 2,500 features per window, cascades are
utilized. The 2,500 features are divided into x distinct cascade stages,
which are evaluated one after another. If a stage detects a possible face
in the window, the window is passed on to the next stage; if not, we can
immediately proceed to the next window. This reduces the time complexity.
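The early-rejection logic can be sketched as follows. The stages here are toy predicates standing in for trained stage classifiers.

```python
# Sketch of the cascade: a window survives only if every stage accepts it;
# any rejection discards the window immediately.
def cascade_detect(window, stages):
    for stage in stages:
        if not stage(window):   # rejected early: certainly not a face
            return False
    return True                 # passed every stage: probable face

# Toy stages: cheap checks first, progressively stricter ones later.
stages = [lambda w: sum(w) > 10,
          lambda w: max(w) < 100,
          lambda w: len(w) == 4]
```

Because most windows fail the cheap first stage, the expensive later stages run only on a small fraction of the image.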

Figure 20 - Basis approach

The purpose of each stage is to determine whether a given subwindow is
definitely not a face or may be one. If a subwindow fails at any stage, it
is immediately discarded as not being a face.
The learned Viola-Jones weights are stored on disk. To detect faces, we
simply apply the stored features to our image, and if a face is present, we
receive its location.
CHAPTER V: RESULTS AND DISCUSSION

The model that was selected performs as expected. It performs with high
precision in a variety of situations and environments. Several tests have
demonstrated that this system is robust, dependable, and scalable. The web
interface can always be customized to meet the needs of different
applications, such as those in locations where identification by other means
is not possible or when full-body personal protective equipment (P.P.E.) is
required. The project met all of the functional and non-functional
requirements; with a few minor adjustments to the user interface, the project
may be transformed into a full-fledged off-the-shelf solution for general
usage. Because of the model's ability to detect unknown individuals, this
research has the potential to be a valuable security tool that may be used to
sound alarms if an unauthorized individual attempts to enter a guarded area.
Figure 21 - A-Syst Homepage

Figure 22 - Landing page


Figure 23 - Admin Portal

Figure 24 - Instructor Portal


CHAPTER VI: CONCLUSION AND FUTURE
WORK

The atmosphere of instruction and learning demands an automated student
attendance system. The majority of current options are time-consuming and
require semi-manual effort from the teacher or students during lecture
time, such as calling out students' ID numbers or passing around attendance
forms. The purpose of the proposed system is to solve these problems by
introducing face recognition into the process of attendance management,
saving time and effort during tests and lectures. Other researchers have
built face recognition systems as well; nevertheless, there are limitations
in terms of usefulness, accuracy, lighting conditions, etc. that the
proposed system aims to overcome.
The face identification attendance system uses facial recognition
technology to identify, verify, and record attendance automatically.
Attendance systems that use fingerprint scanning are practically the
standard, but the pandemic has raised questions about systems that require
physical contact. The facial recognition attendance system is a contactless
technology that removes direct human-machine interaction.
Understanding how the technology works makes it much easier to appreciate
how face recognition attendance systems may make buildings and premises
safer and more productive.

The program collects and compares patterns on a person's face and analyzes
the specifics to identify and verify the individual. Despite the complexity
of the underlying process, the entire technique can be boiled down to three
phases.

Firstly, locating human faces in real time is the crucial face detection
step.
Once captured, the analogue face information is converted into a collection
of data or vectors based on a person's facial characteristics; this is the
transformation-of-data phase.
Finally, the system verifies the information by comparing it to the
information in the database; this phase is the face matching part.

Our attendance system can conserve organizational resources through the
automatic tracking of student time. A solution like A-Syst may be developed
and scaled for use on mobile devices, making it more economical for small
and medium-sized organizations. This type of attendance system can:

Boost staff output
Enhance the performance of the present student attendance system by
minimizing the time necessary for marking attendance and maximizing the
time available for the actual teaching process
Enhance the overall effectiveness of the system
Reduce administrative costs
Improve security, since attendance is taken without the students' knowledge

Moreover, by reducing physical contact in public spaces and workplaces,
pandemics like COVID-19 can be handled more effectively. There has been a
huge surge in demand for contactless technology since the pandemic, and the
industry has recognized the advantages of facial recognition and of
attendance systems such as A-Syst. Workplaces and classrooms can
significantly minimize the frequency of interaction between individuals,
hence reducing the danger of virus transmission.

The attendance system does not depend on a small number of facial
characteristics but rather is highly robust and recognizes a face based on
multiple data points. Therefore, these systems can be improved to screen
for face masks and to identify individuals without removing the mask, or
despite altered facial characteristics such as beards, glasses, etc. The
fact that students do not have to remove their masks is a significant
advantage over other biometric systems. Algorithms could be implemented
that track changes in facial characteristics such as spectacles, beards,
etc.

With the deployment of the system, the entire environment would be
automated with a facial recognition attendance system. In addition to
taking attendance, it will automatically record each student's entry and
exit times. Besides enhancing classroom integrity, the technology properly
identifies who left the designated area and when. Unlike manual attendance
systems, AI-based attendance systems are highly automated, as the system
keeps and updates daily records in real time. Our facial recognition
attendance system could be further scaled to handle larger numbers of
students.
REFERENCES

[1] K. Susheel Kumar, Shitala Prasad, Vijay Bhaskar Semwal, R. C. Tripathi,
"Real-time face recognition using AdaBoost improved fast PCA algorithm,"
International Journal of Artificial Intelligence & Applications (IJAIA),
Vol. 2, No. 3, July 2011.

[2] Shireesha Chintalapati, M. V. Raghunadh, "Automated Attendance
Management System Based On Face Recognition Algorithms," 2013 IEEE
International Conference on Computational Intelligence and Computing
Research.

[3] Akshara Jadhav, Akshay Jadhav, Tushar Ladhe, Krishna Yeolekar,
"Automated attendance system using face recognition," International
Research Journal of Engineering and Technology (IRJET).

[4] Nirmalya Kar, Mrinal Kanti Debbarma, Ashim Saha, Dwijen Rudra Pal,
"Study of Implementing Automated Attendance System Using Face Recognition
Technique."

[5] Smit Hapani, Nikhil Parakhiya, Prof. Nandana Prabhu, Mayur Paghda,
"Automated Attendance System using Image Processing," 2018 Fourth
International Conference on Computing Communication Control and Automation
(ICCUBEA).

[6] Nazare Kanchan Jayant, Surekha Borra, "Attendance Management System
Using Hybrid Face Recognition Techniques," 2016 Conference on Advances in
Signal Processing (CASP), Cummins College of Engineering for Women, Pune,
June 9-11, 2016.

[7] Refik Samet, Muhammed Tanriverdi, "Face Recognition-Based Mobile
Automatic Classroom Attendance Management System," 2017 International
Conference on Cyberworlds.
