This document discusses using face recognition for attendance systems. Traditional attendance marking methods are inefficient and can result in duplicate or fraudulent data. Face recognition using artificial intelligence can automatically mark attendance in real-time by detecting students' faces and storing attendance data in the cloud. This solves problems with traditional methods and makes the attendance process more efficient.

TABLE OF CONTENTS

DECLARATION

TITLE PAGE

ACKNOWLEDGEMENTS

ABSTRAK

ABSTRACT

TABLE OF CONTENTS

LIST OF TABLES

LIST OF FIGURES

LIST OF SYMBOLS

LIST OF ABBREVIATIONS

CHAPTER 1 INTRODUCTION

1.1 INTRODUCTION
1.2 PROBLEM STATEMENT
1.3 OBJECTIVES
1.4 SCOPE
1.5 SIGNIFICANCE
1.6 PROJECT ORGANIZATION

CHAPTER 2 LITERATURE REVIEW

2.1 INTRODUCTION
2.2 FACE RECOGNITION IN ATTENDANCE SYSTEM
2.3 FACE RECOGNITION IN ATTENDANCE SYSTEM ALGORITHM
2.3.1 YOLO ALGORITHM
2.3.2 PCA ALGORITHM
2.3.3 CNN ALGORITHM (AlexNet architecture)
2.4 DEEP LEARNING
2.5 MATLAB USED FOR ATTENDANCE SYSTEM USING FACE RECOGNITION
2.6 ATTENDANCE SYSTEM USING FACE RECOGNITION
2.7 CONCLUSION

CHAPTER 3 METHODOLOGY

3.1 INTRODUCTION
3.2 METHODOLOGY
3.3 PROJECT REQUIREMENT
3.3.1 INPUT
3.3.2 OUTPUT
3.4 PROJECT DESCRIPTION
3.5 CONSTRAINTS
3.6 LIMITATIONS
3.7 CASE STUDY

CHAPTER 4 RESULTS AND DISCUSSION

4.1 Introduction
4.2 Implementation Process
4.2.1 Development of Student Class Attendance System
4.2.2 Design Interface of Student Class Attendance System
4.2.3 Interface of Student Class Attendance System
4.3 Testing and Result Discussion

CHAPTER 5 CONCLUSION

5.1 Introduction
5.2 RESEARCH CONSTRAINT
5.3 FUTURE WORK

REFERENCES

APPENDIX A SAMPLE APPENDIX 1

APPENDIX B SAMPLE APPENDIX 2

LIST OF TABLES

Table 1 Problem Statement
Table 2 YOLO Algorithm Advantages and Disadvantages
Table 3 PCA Algorithm Advantages and Disadvantages
Table 4 Input for System
Table 5 Constraint Table
Table 6 System Limitation
Table 7 Capture Image from Video
Table 8 Crop and Save
Table 9 Main GUI Coding
Table 10 Record Image Coding
Table 11 Modify Coding
Table 12 Training Coding

LIST OF FIGURES

Figure 1 Example of Attendance System using Face Recognition Interface
Figure 2 Deep Learning Diagram
Figure 3 Flow of Project Description
Figure 4 Main GUI Design
Figure 5 New Registration GUI Design
Figure 6 Modify Data GUI Design
Figure 7 Training GUI Design
Figure 8 Classify GUI Design
Figure 9 Main Interface
Figure 10 New Registration Interface
Figure 11 Training CNN Interface
Figure 12 Initialising Input Data
Figure 13 Classify and Attendance Interface (Scanning)
Figure 14 Classify and Attendance Interface
Figure 15 Modify Data Interface
Figure 16 Classify and Attendance after Modify (Scanning)
Figure 17 Classify and Attendance Interface after Modify

LIST OF SYMBOLS

CNN  Convolutional Neural Network
PCA  Principal Component Analysis
YOLO You Only Look Once

LIST OF ABBREVIATIONS

YOLO You Only Look Once
PCA  Principal Component Analysis
CNN  Convolutional Neural Network

CHAPTER 1

INTRODUCTION

1.1 INTRODUCTION

Artificial Intelligence (AI) was introduced around 1950 and has been receiving
attention ever since. The term is used because it describes machines artificially
endowed with human-like intelligence to perform tasks as we do; this intelligence is
built using complex algorithms and mathematics. Artificial Intelligence is used in
smartphones, cars, social media feeds, video games, banking surveillance, and many
other aspects of our daily life. Artificial Intelligence enables technical systems
to perceive their environment, deal with what they perceive, solve problems, and act
to achieve a specific goal. The computer receives data, either already prepared or
gathered through its own sensors such as a camera, processes it, and responds.
Artificial Intelligence systems can adapt their behaviour to a certain degree by
analysing the effects of previous actions and working autonomously (Dustin Harris,
2022).

A student's attendance in class is tracked and monitored using an attendance
system. There are several types of attendance systems, including biometric-based,
face recognition-based, radio-frequency card-based, and traditional paper-based
systems. A face recognition-based attendance system is the most time-efficient and
secure of them all. Numerous studies examine only students' recognition rates
(SCAD College of Engineering and Technology & Institute of Electrical and Electronics
Engineers, n.d.). Every organization has its own way of taking student attendance.
Some organizations are document-oriented, while others have implemented digital
methods such as biometric fingerprinting and card-swiping techniques. However, these
methods prove limiting, as they subject students to waiting in a time-consuming
queue, and if a student fails to bring his or her ID card, attendance cannot be
recorded. Evolving technologies have made many improvements in the changing world
(C.O et al., 2013).

Artificial Intelligence has been widely used in attendance systems at many
institutions. In other countries, education institutions have been using Artificial
Intelligence with face recognition to help mark attendance. Face recognition in
attendance systems uses a person's facial features to verify or identify their
presence in class, and their attendance is marked automatically in real time. When
students arrive at class and need to take attendance before entering, their face is
detected, the data is stored in a collection in the cloud, and the data can be viewed
by a higher official or admin (Aparna Trivedi et al., 2022).

1.2 PROBLEM STATEMENT

Even with enhancements in attendance marking in education, the traditional way of
marking attendance is not efficient. The process of marking attendance in Universiti
Malaysia Pahang uses conventional methods such as Excel sheets, student signatures on
a printed name list, Google Forms, and calling out names in class. This can lead to
duplicate data and a risk of fraud, where a student asks a friend to sign for
their attendance.

The existing attendance system requires students to manually sign the sheet every
time they attend a class. This means more time consumed by students finding their
name on the sheet, students may mistakenly sign another student's name, and the sheet
may sometimes get lost (Asir Antony Gnana Singh et al., 2017). To avoid these
problems, we introduce face recognition in MATLAB. Using face recognition can reduce
these problems and make marking attendance more efficient.

No. 1
Problem: Traditional ways of marking attendance when students attend class.
Description: Marking attendance using traditional ways such as pen-and-paper sheets, Google Forms, and calling out names in class consumes a lot of time.
Effect: Waste of time, lack of work efficiency, and can lead to duplicate data and fraud.

No. 2
Problem: Students mistakenly sign the attendance.
Description: It can lead to faulty data and can affect students' studies.
Effect: Data overload in the system, and the attendance list is not synchronized with the student list.

No. 3
Problem: Fraud when marking attendance.
Description: A student can mark attendance for a friend who did not actually attend class.
Effect: Cheating in attendance: a student not coming to class is still marked present.

Table 1 Problem of Traditional Attendance Marking

1.3 OBJECTIVES

Based on the problem statements, the objectives of the project are:

1) To study the existing Artificial Intelligence (AI) application in the
student attendance system in the Faculty of Computing of Universiti
Malaysia Pahang.

2) To check the accuracy of face recognition using Artificial Intelligence
(AI) for the student attendance system in the Faculty of Computing of
Universiti Malaysia Pahang.

3) To test the functionality of the developed Artificial Intelligence (AI)


application in the student attendance system in higher educational
institutions.

1.4 SCOPE

The scope of the project is:

User Scope:

i. Students and lecturers who want to attend the class.

ii. Students studying at the UMP Pekan and Gambang campuses.

System Scope:

i. Covers the Artificial Intelligence of the attendance marking system.

ii. Covers student registration and attendance purposes.

Development Scope:

i. Contains the face recognition reader to read and scan student faces for
registration and attendance purposes.
ii. Uses MATLAB.

1.5 SIGNIFICANCE

i. Student

Students can have a better process of marking attendance.

ii. Lecturer

It is easy for the lecturer to detect students not coming to class and to
identify the students registered for his or her subject.

iii. Admin

It is easy for the admin to train the data for new enrolments in the class.

1.6 PROJECT ORGANIZATION

This report consists of five chapters. Chapter 1 explains the overview of
the project, including the Introduction, Problem Statement, Objectives of the
project, Scope, and Thesis Organization.

Chapter 2 briefly explains the literature review on existing Artificial
Intelligence systems for marking attendance in education institutions.

Chapter 3 explains the methodology used in this project. This project
implements a CNN methodology. The stages used in this project are Analysis,
Design, Development, Implementation, and Evaluation.

Chapter 4 explains the results and discussion based on the development and
testing of this project. In this chapter, all the results and outputs of the
project are briefly discussed. These include the software development, application
testing, data collection, and project results.

Chapter 5 concludes and summarizes the results of this project. The limitations
and future work are discussed thoroughly in this chapter.

CHAPTER 2

LITERATURE REVIEW

2.1 INTRODUCTION

Chapter 2 covers a review of available applications of Artificial Intelligence in
the field of education attendance systems. Three existing Artificial Intelligence
applications in attendance systems, mainly for students, are explained in detail,
focusing on their Graphical User Interface (GUI), interface design (UX), languages
provided, connection type, target audience, topics covered, application size, type of
Artificial Intelligence used, main functions, and the advantages and disadvantages of
Artificial Intelligence attendance systems using face recognition. This comparison of
existing applications highlights their strengths and effectiveness, so that this
project can produce a better version of the application.

2.2 FACE RECOGNITION IN ATTENDANCE SYSTEM

Face recognition is among the most productive image processing applications and
has a pivotal role in the technical field. Recognition of the human face is an
active issue for authentication purposes, specifically in the context of student
attendance (Kongunadu College of Engineering & Technology & Institute of Electrical
and Electronics Engineers, n.d.-a). An attendance system using face recognition is
a procedure for recognizing students using facial biometrics based on
high-definition monitoring and other computer technologies. The development of this
system aims to digitize the traditional system of taking attendance by calling
names and maintaining pen-and-paper records. Present strategies for taking
attendance are tedious and time-consuming, and attendance records can be easily
manipulated by manual recording (Kongunadu College of Engineering & Technology &
Institute of Electrical and Electronics Engineers, n.d.-b).

Figure 1 Example of Attendance System using Face Recognition Interface

2.3 FACE RECOGNITION IN ATTENDANCE SYSTEM ALGORITHM

2.3.1 YOLO ALGORITHM

YOLO comes from the term "You Only Look Once". This algorithm uses a neural
network to provide real-time object detection. It is popular due to its speed and
accuracy in reading images and objects. YOLO detects and recognizes the class
probabilities of objects in an image, and it employs a Convolutional Neural Network
(CNN) to detect objects in real time. YOLO is a regression-based algorithm: instead
of selecting the interesting parts of an image, it predicts classes and bounding
boxes for the whole image in one run of the algorithm. The YOLO algorithm gives much
better performance on all the parameters discussed, along with a high fps for
real-time usage. To understand the YOLO algorithm, we need to understand what is
being predicted. YOLO doesn't search for interesting regions in the input image that
could contain an object; instead, it splits the image into cells (Jedrzej
Swiezewski, 2020).

Advantages:
• Processes frames at 45 fps (larger network) to 150 fps (smaller network), which is better than real time.
• The network generalizes images better.

Disadvantages:
• Comparatively low recall and more localization error compared to Faster R-CNN.
• Struggles to detect close objects, because each grid cell can propose only 2 bounding boxes.
• Struggles to detect small objects.

Table 2 YOLO Advantages and Disadvantages
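To make the cell-splitting idea concrete, the following MATLAB sketch (illustrative values only, not the actual YOLO implementation) maps a bounding-box centre to the grid cell responsible for predicting that object:

```matlab
% Illustrative sketch of YOLO's grid idea: the cell containing an object's
% bounding-box centre is the one that predicts it. Values are hypothetical.
S = 7;                          % grid size (YOLOv1 uses a 7x7 grid)
imgW = 448; imgH = 448;         % assumed network input size
bbox = [200 150 80 120];        % [x y w h], a hypothetical detection
cx = bbox(1) + bbox(3)/2;       % bounding-box centre
cy = bbox(2) + bbox(4)/2;
col = min(S, floor(cx / (imgW/S)) + 1);   % responsible column (1-based)
row = min(S, floor(cy / (imgH/S)) + 1);   % responsible row (1-based)
fprintf('Cell (row %d, col %d) predicts this object\n', row, col);
```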

2.3.2 PCA ALGORITHM

Principal Component Analysis, known as the PCA algorithm, is the oldest and
best-known technique of multivariate data analysis. PCA is the general name for a
technique that uses sophisticated underlying mathematical principles to transform
several possibly correlated variables into a smaller number of variables called
principal components. The purpose of the PCA algorithm is to reduce the
dimensionality of a data set. This is achieved by transforming the data to a new set
of variables, the principal components (PCs), which are uncorrelated and ordered so
that the first few retain most of the variation present in all of the original
variables. It is one of the most efficient methods for pattern recognition and image
analysis, and it has proven to be one of the best algorithms for facial images
(Zakaria Jaadi, 2022).

Advantages:
• Correlated features are removed.
• Enhances the performance of the algorithm.
• Enhanced visualization.

Disadvantages:
• The major components are difficult to comprehend.
• Data normalization is required.
• Loss of information.

Table 3 PCA Advantages and Disadvantages
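As a minimal sketch of this dimensionality reduction, the following MATLAB example computes principal components from toy data via `svd` in base MATLAB (no toolbox assumed; the data values are made up for illustration):

```matlab
% Minimal PCA sketch on toy data. Rows are observations (e.g. vectorised
% face images), columns are variables.
X = [2.5 2.4; 0.5 0.7; 2.2 2.9; 1.9 2.2; 3.1 3.0; 2.3 2.7];
Xc = X - mean(X, 1);             % centre each variable
[~, S, V] = svd(Xc, 'econ');     % columns of V are the principal components
scores = Xc * V(:, 1);           % project onto the first PC (reduced data)
explained = diag(S).^2 / sum(diag(S).^2);   % fraction of variance per PC
disp(explained');                % first entry should dominate
```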

2.3.3 CNN ALGORITHM (AlexNet architecture)

A Convolutional Neural Network, known as CNN or ConvNet, is a deep network that
imitates how the visual cortex of the brain processes and recognizes images, not
just a deep neural network with many hidden layers. CNNs are basically used for
image recognition as classification; therefore, the output layer of a ConvNet
generally employs a multiclass classification neural network. A CNN yields better
image recognition when its feature extraction network is deeper, at the cost of a
more difficult training process, which for a while made ConvNets impractical and
forgotten. The feature extraction network consists of piles of convolutional layer
and pooling layer pairs (O'Shea & Nash, 2015).

Table 4 CNN Advantages and Disadvantages
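As a rough illustration of the convolution/pooling pairs described above, the following MATLAB sketch (assuming the Deep Learning Toolbox is installed) defines a small AlexNet-style layer stack; the sizes follow AlexNet's early layers but the stack is truncated for brevity, and `numClasses` is a hypothetical value:

```matlab
% Sketch of a small AlexNet-style layer stack (Deep Learning Toolbox).
% Convolution + ReLU + pooling pairs extract features; the fully connected
% layer with softmax performs the multiclass classification.
numClasses = 10;                           % e.g. 10 registered students
layers = [
    imageInputLayer([227 227 3])           % AlexNet's input size
    convolution2dLayer(11, 96, 'Stride', 4)
    reluLayer
    maxPooling2dLayer(3, 'Stride', 2)
    convolution2dLayer(5, 256, 'Padding', 2)
    reluLayer
    maxPooling2dLayer(3, 'Stride', 2)
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];
```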

2.4 DEEP LEARNING

An attendance system using face recognition uses deep learning, which is composed
of neural networks. Deep learning here refers to a Convolutional Neural Network
(CNN) comprising more than three layers, inclusive of the inputs and outputs; such
a network can be considered a deep learning algorithm and can be represented by the
diagram below.

Figure 2 Deep Learning Diagram

Deep learning and machine learning differ in how each algorithm learns. Deep
learning automates much of the feature extraction piece of the process, eliminating
some of the manual human intervention required and enabling the use of larger data
sets. You can think of deep learning as "scalable machine learning", as Lex Fridman
noted in an MIT lecture. Classical, or "non-deep", machine learning is more
dependent on human intervention to learn: human experts determine the hierarchy of
features to understand the differences between data inputs, usually requiring more
structured data to learn. "Deep" machine learning can leverage labelled datasets,
also known as supervised learning, to inform its algorithm, but it doesn't
necessarily require a labelled dataset. It can ingest unstructured data in its raw
form, such as text and images, and it can automatically determine the hierarchy of
features that distinguish different categories of data from one another. Unlike
classical machine learning, it doesn't require human intervention to process data,
allowing machine learning to scale in more interesting ways.

2.5 MATLAB USED FOR ATTENDANCE SYSTEM USING FACE
RECOGNITION

In this research, we design an attendance system with the help of facial
recognition, owing to the difficulty of manual and other traditional means of
taking attendance. This system uses the AlexNet architecture, with a CNN used for
classification.

The attendance system works in MATLAB as follows: first, the camera captures the
image of a student before entering the class. The face is then detected and cropped
to initialize it, and the cropped image is processed using face recognition and a
deep learning algorithm. A CNN is used in this project for classification. Students
whose faces are recognized by the system are marked as present, and the results are
transferred to an Excel sheet automatically.

The MATLAB toolboxes used are the Image Processing Toolbox, Image Acquisition
Toolbox, Computer Vision Toolbox, Spreadsheet Link, and Deep Learning Toolbox,
which integrate easily.
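The capture-detect-classify-record flow just described can be sketched in MATLAB as follows. This is a hedged sketch, not the project's actual code: `net` is assumed to be a previously trained network, the webcam support package and the Image Acquisition, Computer Vision, and Deep Learning Toolboxes are assumed installed, and the file name is hypothetical:

```matlab
% Sketch of the attendance flow: capture a frame, detect a face, classify
% it with a trained network 'net', and append the result to an Excel sheet.
cam = webcam;                                    % laptop camera
detector = vision.CascadeObjectDetector('FrontalFaceCART');
frame = snapshot(cam);
bbox = step(detector, frame);                    % detect faces (M-by-4 boxes)
if ~isempty(bbox)
    face = imcrop(frame, bbox(1, :));            % crop the first face
    face = imresize(face, [227 227]);            % match the network input
    label = classify(net, face);                 % recognise the student
    T = table(string(label), datetime('now'), ...
              'VariableNames', {'Student', 'Time'});
    writetable(T, 'attendance.xlsx', 'WriteMode', 'append');
end
clear cam                                        % release the camera
```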

2.6 ATTENDANCE SYSTEM USING FACE RECOGNITION

As introduced in Section 2.2, face recognition is among the most productive
image processing applications and plays a pivotal role in authentication,
specifically in the context of student attendance. An attendance system using face
recognition recognizes students using facial biometrics based on high-definition
monitoring and other computer technologies. It aims to digitize the traditional
system of calling names and maintaining pen-and-paper records, which is tedious,
time-consuming, and easily manipulated by manual recording.

2.7 CONCLUSION

The proposed attendance system using face recognition is a better model for
student attendance in the classroom and in other places. Many systems are available
today, such as biometrics and other methods, but facial recognition is the best
option for accuracy. There is no special hardware requirement for implementing the
system: a camera, a laptop, and MATLAB are sufficient for developing an attendance
system using face recognition. CNN was chosen as the deep learning model because it
helps the system recognize faces in more detail by learning the patterns in images.
This research is the result of designing face recognition for UMP student class
attendance.

CHAPTER 3

METHODOLOGY

3.1 INTRODUCTION

3.2 METHODOLOGY

In this system, a video capture from the computer camera is the input, covering
face detection, face recognition, and the use of a CNN. The proposed system improves
attendance management using the unique characteristics of a person's face. The face
recognition technique is used for verification and identification. Algorithms for
biometric facial recognition follow several image processing steps. The first step
of this system is to collect physical or behavioural samples under predefined
conditions during a stated period of time. In the extraction step, data is extracted
from the collected samples to create templates using facial recognition. After
extraction, the collected data is compared with the existing templates. In the last
stage of face recognition, the facial features of the gathered samples are matched
against faces that have already been trained; this takes just a second. In this
system we use the CNN method. A Convolutional Neural Network (CNN) is a type of
artificial neural network used in image recognition and image processing.

A. Capture Image from Device Camera (Lenovo IdeaPad 330)

After clicking the register button and filling in all the information, the
image of the student is captured using the device camera of the Lenovo
IdeaPad 330.

B. Crop and Save Image (using the Cascade Object Detector)

While the student's image is captured, a yellow box appears around the
student's face, drawn by the Cascade Object Detector. The minimum detection
size of the Cascade Object Detector is 150×150 pixels, and the purpose of
cropping the image is to remove the background around the student; face
recognition and training use the cropped image.

C. Training Images

To train the images, the system requires 100 images of each student with
different facial expressions. In this system, 10 students are used as a test
sample.

D. Training the CNN

To train the CNN, we train the images by epoch and batch size: 10 epochs
with a batch size of 128 images.

E. Face Detection

Once training of the CNN has settled, before taking attendance, students
need to have their face detected using the camera of the Lenovo IdeaPad 330;
the face is detected with the yellow Cascade Object Detector box around it.

F. Face Recognition

This is the last step of the face recognition process. We use one of the
best learning techniques, deep metric learning, which is highly accurate and
capable of outputting real values.

G. Take Attendance

Once the face is matched with a stored image, the system reads the image
together with its details. When the data is returned, the system generates an
attendance table that includes the name, ID number, date, day, and time, with
the corresponding subject ID.
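The training step described above (step D: 10 epochs, batch size 128) can be sketched in MATLAB as follows. This is a hedged sketch assuming the Deep Learning Toolbox; the folder name `faces` and the variable `layers` are hypothetical, with `faces` holding one subfolder of cropped images per student:

```matlab
% Sketch of training the CNN as in step D: 10 epochs, mini-batch of 128.
% 'faces' is a hypothetical folder with one subfolder per student; the
% subfolder names become the class labels.
imds = imageDatastore('faces', 'IncludeSubfolders', true, ...
                      'LabelSource', 'foldernames');
opts = trainingOptions('sgdm', ...
    'MaxEpochs', 10, ...             % 10 epochs, as stated above
    'MiniBatchSize', 128, ...        % batch size of 128 images
    'Shuffle', 'every-epoch', ...
    'Verbose', false);
% net = trainNetwork(imds, layers, opts);   % 'layers' defined elsewhere
```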

3.3 PROJECT REQUIREMENT

3.3.1 INPUT

Input: Images
• Images captured from the webcam are trained to detect the face before recognizing it.
• Images are used as input when students register their course and subject.

Input: Student Name
• The student name is needed as a detail when the system starts to process the training data and stores it in the database.
• The same detail is used for recognition when taking attendance.
• When the system matches the camera image with a face in the database, it displays the details, including the name.

Input: Student ID
• The student ID is needed as a detail when the system starts to process the training data and stores it in the database.
• The same detail is used for recognition when taking attendance.
• When the system matches the camera image with a face in the database, it displays the details, including the ID.

Input: Student Course
• The student's courses are needed as a detail when the system starts to process the training data and stores it in the database.
• The same detail is used for recognition when taking attendance.
• When the system matches the camera image with a face in the database, it displays the details, including the course.

Input: Student Class
• The student's classes are needed as a detail when the system starts to process the training data and stores it in the database.
• The same detail is used for recognition when taking attendance.

Input: Attendance
• Students can scan their face on their own at the laptop in the class; the system detects the face, and if it matches an image in the database, it recognizes the student, displays the details, and marks the attendance as present.
• If the system cannot match the detected face with the images in the database, it treats it as unknown and the attendance remains absent.

Table 5 Input for System

3.3.2 OUTPUT

Output: Images
• Images trained from the webcam to detect the face before recognizing it.
• Images used as input when students register their course and subject.
• The image is printed if the system finds a match.

Output: Student Name
• The student name is needed as a detail when the system starts to process the training data and stores it in the database.
• The same detail is used for recognition when taking attendance.
• When the system matches the camera image with a face in the database, it displays the details, including the name.

Output: Student ID
• The student ID is needed as a detail when the system starts to process the training data and stores it in the database.
• The same detail is used for recognition when taking attendance.
• When the system matches the camera image with a face in the database, it displays the details, including the ID.

Output: Attendance
• Students can scan their face on their own at the laptop in the class; the system detects the face, and if it matches an image in the database, it recognizes the student, displays the details, and marks the attendance as present.
• If the system cannot match the detected face with the images in the database, it treats it as unknown and the attendance remains absent.

Table 6 Output for Attendance System
3.4 PROJECT DESCRIPTION

Figure 3 Flow of Project Description

Images are trained from the webcam and stored in a folder in the system.
Detecting a person's face is the initial step in the attendance management system:
students enrol their photos, which are then checked against the folder produced by
the admin. After a person's face is detected, recognition starts; if the image
matches an image stored in the folder, the face is recognised and the person's
details are printed in the interface. An enrolled image is compared to every
database image, and if the image is found in the system database, the student is
tagged as present or absent accordingly. The detection box drawn around the face
supplies the face region used for recognition. We use the CNN approach in this
system; a Convolutional Neural Network (CNN) is a type of artificial neural network
specifically developed for pixel data and used in image recognition and image
processing. Attendance marking is the final stage of the system's operation: this
stage records a student's attendance; if the processes are completed and an image
is correctly recognised, the student is registered as present on the system's
server; otherwise, the student is marked as absent.
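The present/absent marking at the end of this flow can be sketched in MATLAB. This is a minimal illustration with hypothetical names and a hard-coded classifier output, not the system's actual code:

```matlab
% Minimal sketch of the marking step: every registered student defaults to
% absent, and a recognised label flips that student to present.
students = ["Ali"; "Siti"; "Wei"];             % hypothetical registered students
status = repmat("absent", size(students));     % default status per session
recognised = "Siti";                           % hypothetical classifier output
status(students == recognised) = "present";    % mark the matched student
A = table(students, status, 'VariableNames', {'Student', 'Status'});
disp(A)                                        % attendance table for the session
```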

3.5 CONSTRAINTS

The Student Class Attendance System for UMP using Face Recognition has a few
constraints that limit users' actions; these are requirements that restrict the way
the system should be developed. They are important because they help the development
team speed up development of the system.

Time: Training the data takes at most 10 seconds.

Security: The system must only allow access by authorized users.

Usability:
• Training the data finishes in at most 10 seconds.
• Face detection does not exceed 5 seconds.
• Printing the image and details takes at most 5 seconds.

Scalability: The system must work with the internet.

Maintainability: The system can be maintained outside office hours.

Table 7 Constraint Table
3.6 LIMITATIONS

When developing a system, there are some limitations that restrict the system
from functioning to its full potential; they are explained in the table below.

Graphics Card: Limited to users with an NVIDIA graphics card or better, so that
training the data, development, and so on run smoothly.

Internet Speed: Since the system needs an internet connection, a slow connection
can slow down the system.

Table 8 System Limitation

3.7 CASE STUDY

Maintaining student attendance through traditional methods is extremely difficult
for any organisation, and the approach's reliability is quite low. Traditional
attendance, in which teachers manually take the attendance of every student in
class, is being replaced with an attendance management system using face
recognition. This approach is used to track attendance in a variety of organisations
and educational institutions. Face recognition is a biometric way of identifying a
person by comparing two live images. Face recognition technology is gradually
growing into a universal biometric solution since, compared to other biometric
solutions, it needs virtually no effort from the user. Machine learning is one of
the best domains among all the domains since it uses a single dataset as input and
then applies several machine learning algorithms to get a desirable output.
Previously, each educational institute, school, or college would carefully analyse
the student attendance element. A face recognition-based automated attendance
management system will be the most beneficial since it easily solves all time,
safety, and proxy issues. To supplement these systems, Convolutional Neural Networks
(CNN) and MATLAB are primarily employed; CNNs are frequently used to analyse photos.
Face recognition attendance tracking software helps to save time, reduce bogus
attendance, and improve security.

Traditional methods of marking attendance have become ineffective as attendance
marking systems in education have improved. In Universiti Malaysia Pahang,
attendance was recorded using traditional methods such as Excel sheets, student
signatures on a printed name list, Google Forms, and calling students by name in
class. This may result in duplicate data, as well as a danger of fraud if a student
asks a friend to sign for their attendance.

Students must hand-sign the attendance sheet every time they attend a lesson under
the current system. This includes the time it takes students to locate their name on
the page, the possibility that some students will accidentally sign another
student's name, and the possibility that the sheet will be misplaced. To avoid these
problems, we use face recognition in MATLAB. Using face recognition can reduce these
problems and make marking attendance more efficient.
CHAPTER 4

RESULTS AND DISCUSSION

4.1 Introduction

This chapter discusses the implementation and the results or findings of the research.
It presents the results of the experiments and testing that have been carried out,
together with a discussion showing that the objectives of the research have been
fulfilled.

4.2 Implementation Process

Implementation is the process of recording all the steps taken in developing the
Faculty of Computing Student Class Attendance System. This section discusses the
implementation process in detail, from the beginning of development through the
coding involved, the interfaces, and the results gathered.

4.2.1 Development of Student Class Attendance System

4.2.1.1 Captured Image From Camera Device

First, a new student must be registered. Clicking the <<New Registration>> button on
the main interface of the FK Attendance System opens the interface shown in Figure 4
below. The user keys in all the required information and clicks the <<Record>> button,
which starts recording images. Before recording begins, the box below the axes in
Figure 4 prints the message "it will be starting in....", and once the camera is ready the
student's face appears in the axes (Figure 5) while the box below shows the total
number of images to be stored, which is n = 100. After recording finishes, the message
"finish" is printed and the user may proceed to the next step by clicking the <<Main>>
button. If a student tries to register a new user with the same input, the message "Data
Exist" appears, meaning the ID or other information entered already has a record.

% function capturefacesfromvideo1(id,nama)

flag = 0;
% Create the face detector object.
faceDetector = vision.CascadeObjectDetector('FrontalFaceCART','MinSize',[150,150]);

% The loop below runs n times; change the threshold n based on the
% number of training images you need.
n = 100;
% id = input('Enter ID :','s');
str = upper(id);
% nama = upper(input('Enter Nama :','s'));

checkusr = dir('info');
checkusr = checkusr(3:end);
for xxx = 1:1:length(checkusr)
if strcmp(strcat(str,'.mat'),checkusr(xxx).name) == 1
flag = 1;
end
end

if flag == 0

info1 = {str};
info2 = {nama};
info3 = {Email};
info4 = {LectName};
info5 = Gender;
info6 = Subject;
info = [info1,info2, info3, info4, info5, info6 ];
save(strcat('info\',str,'.mat'),'info')
mkdir('photos',str);

% Create the point tracker object.


pointTracker = vision.PointTracker('MaxBidirectionalError', 2);

% Create the webcam object.


cam = webcam();

% Capture one frame to get its size.


videoFrame = snapshot(cam);
frameSize = size(videoFrame);

% Create the video player object.


runLoop = true;
numPts = 0;
frameCount = 0;
i=1;

% while runLoop && frameCount < n


axes(handles.axes1)
while i <= n

% Get the next frame.


videoFrame = snapshot(cam);
videoFrameGray = rgb2gray(videoFrame);
frameCount = frameCount + 1;

if numPts < 10
% Detection mode.
bbox = faceDetector.step(videoFrameGray);

if ~isempty(bbox)
% Find corner points inside the detected region.
points = detectMinEigenFeatures(videoFrameGray, ...
    'ROI', bbox(1, :));

% Re-initialize the point tracker.


xyPoints = points.Location;
numPts = size(xyPoints,1);
release(pointTracker);
initialize(pointTracker, xyPoints, videoFrameGray);

% Save a copy of the points.


oldPoints = xyPoints;

% Convert the rectangle represented as [x, y, w, h] into an
% M-by-2 matrix of [x,y] coordinates of the four corners. This
% is needed to be able to transform the bounding box to display
% the orientation of the face.
bboxPoints = bbox2points(bbox(1, :));

% Convert the box corners into the [x1 y1 x2 y2 x3 y3 x4 y4]
% format required by insertShape.
bboxPolygon = reshape(bboxPoints', 1, []);

% Display a bounding box around the detected face.
videoFrame = insertShape(videoFrame, 'Polygon', ...
    bboxPolygon, 'LineWidth', 3);

% Display detected corners.
videoFrame = insertMarker(videoFrame, xyPoints, '+', ...
    'Color', 'white');
end

else
% Tracking mode.
[xyPoints, isFound] = step(pointTracker, videoFrameGray);
visiblePoints = xyPoints(isFound, :);
oldInliers = oldPoints(isFound, :);

numPts = size(visiblePoints, 1);

if numPts >= 10
% Estimate the geometric transformation between the old
% points and the new points.
[xform, oldInliers, visiblePoints] = estimateGeometricTransform( ...
    oldInliers, visiblePoints, 'similarity', 'MaxDistance', 4);
% Apply the transformation to the bounding box.
bboxPoints = transformPointsForward(xform, bboxPoints);

% Convert the box corners into the [x1 y1 x2 y2 x3 y3 x4 y4]
% format required by insertShape.
bboxPolygon = reshape(bboxPoints', 1, []);
imwrite(videoFrame, ['photos\',str,'\',int2str(i),'.jpg']);
% Display a bounding box around the face being tracked.
videoFrame = insertShape(videoFrame, 'Polygon', ...
    bboxPolygon, 'LineWidth', 3);

% Display tracked points.
videoFrame = insertMarker(videoFrame, visiblePoints, '+', ...
    'Color', 'white');

imshow(videoFrame)
% Reset the points.
oldPoints = visiblePoints;
setPoints(pointTracker, oldPoints);
i = i+1;
dispproc = i;
set(handles.text2,'String',n-dispproc+1);

else
    set(handles.text2,'String','Please adjust your position!');
end

end

end

% Clean up.
clear cam;
release(pointTracker);
release(faceDetector);

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
dispproc = 'Cropping Face part....';
set(handles.text2,'String',dispproc);
mkdir('croppedfaces',str);
ds1 = imageDatastore(['photos\',str], ...
    'IncludeSubfolders',true,'LabelSource','foldernames');
cropandsave(ds1,str);
set(handles.text2,'String','Finish');
else
dispproc = 'Data exist';
set(handles.text2,'String',dispproc);
end

Table 9 Capture Image from video
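The duplicate-ID check in the listing above works by looking for an existing record file named after the student's ID. As an illustration only (not part of the project code: the system saves `.mat` files in MATLAB, while this sketch uses a hypothetical plain-text record format), the same logic can be expressed in Python:

```python
import os

def register_student(info_dir, student_id, name):
    """Create a record for student_id unless one already exists.

    Returns True on success, False when the ID is already registered
    (the equivalent of the "Data Exist" message in the GUI).
    """
    os.makedirs(info_dir, exist_ok=True)
    # Same role as checking for '<ID>.mat' in the 'info' folder.
    record = os.path.join(info_dir, student_id.upper() + ".txt")
    if os.path.exists(record):
        return False
    with open(record, "w") as f:
        f.write(f"{student_id.upper()},{name}\n")
    return True
```

As in the MATLAB version, the ID is upper-cased first, so the same ID typed in a different case is still rejected as a duplicate.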


Figure 4 New Registration and Record Image Interface

Figure 5 Record Image

4.2.1.2 Crop and Save

For the crop-and-save part, when image recording starts, each image is stored in two
different folders. The full frame is stored in the folder shown in Figure 6, while the
cropped face image is stored in the Croppedfaces folder, as shown in Figure 7.
function cropandsave(im,str)
j = 1;
T = countEachLabel(im);
n = T(1,2).Variables;
for i = 1:n
i1 = readimage(im,i);
[img,face] = cropface(i1);
if face==1
imwrite(img,['croppedfaces\', str,'\',int2str(j),'.jpg']);
j = j+1;
end
end

Table 10 Crop and Save

Figure 6 Photo that store in file after record.

Figure 7 Cropped Face Folder

4.2.1.3 Training Image

Before the system can detect and recognise faces, the images must first be trained.
Training is organised by epoch and by batch: the number of epochs chosen is 10, where
an epoch is one complete pass through the training dataset, and the mini-batch size
chosen is 128, which is the number of sample images processed at a time. As shown in
Figure 8, before starting training, make sure the epoch and mini-batch size match these
declared values. Clicking the <<Train>> button displays the message "Training
started…Please wait" in the box below. The display then indicates that training will run
on a single CPU, and the initialisation process runs until the chosen number of epochs
finishes. During training, the Epoch number, Iteration, Time Elapsed, Mini-batch
Accuracy, Validation Accuracy, Mini-batch Loss, Validation Loss, and Base Learning
Rate are displayed, as in Figure 9 below. When training finishes, the message "Training
Complete" is displayed, and the <<Menu>> button can be clicked to start classifying
images, as shown in Figure 10.
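As a quick sanity check on the numbers above: the number of iterations per epoch equals floor(training images / mini-batch size), which is also the quantity the training code uses as its validation frequency. A small illustrative sketch (Python purely for illustration; the image counts here are hypothetical, not the project's actual dataset size):

```python
import math

def training_schedule(num_train, batch_size, epochs):
    """Return (iterations per epoch, total iterations) for mini-batch training."""
    # floor(num_train / batch_size) is also used as the validation frequency.
    iters_per_epoch = math.floor(num_train / batch_size)
    return iters_per_epoch, iters_per_epoch * epochs

# e.g. 8 students x 80 training images each, batch size 128, 10 epochs:
ipe, total = training_schedule(num_train=640, batch_size=128, epochs=10)
print(ipe, total)  # prints: 5 50
```

Note that with only one registered student (80 training images after the 80/20 split) a batch of 128 cannot be filled, so several students must be registered before training with these settings makes sense.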

% unzip('MerchData.zip');
set(handles.text4,'String','Training started...Please wait');
drawnow
imds = imageDatastore('croppedfaces', ...
    'IncludeSubfolders',true, ...
    'LabelSource','foldernames');
[imdsTrain,imdsValidation] = splitEachLabel(imds,0.8);

net = alexnet;
inputSize = net.Layers(1).InputSize;

if isa(net,'SeriesNetwork')
lgraph = layerGraph(net.Layers);
else
lgraph = layerGraph(net);
end

[learnableLayer,classLayer] = findLayersToReplace2(lgraph);

numClasses = numel(categories(imdsTrain.Labels))

if isa(learnableLayer,'nnet.cnn.layer.FullyConnectedLayer')
newLearnableLayer = fullyConnectedLayer(numClasses, ...
'Name','new_fc', ...
'WeightLearnRateFactor',10, ...
'BiasLearnRateFactor',10);

elseif isa(learnableLayer,'nnet.cnn.layer.Convolution2DLayer')
newLearnableLayer = convolution2dLayer(1,numClasses, ...
'Name','new_conv', ...
'WeightLearnRateFactor',10, ...
'BiasLearnRateFactor',10);
end

lgraph = replaceLayer(lgraph,learnableLayer.Name,newLearnableLayer);

newClassLayer = classificationLayer('Name','new_classoutput');
lgraph = replaceLayer(lgraph,classLayer.Name,newClassLayer);
layers = lgraph.Layers;
connections = lgraph.Connections;

layers(1:10) = freezeWeights2(layers(1:10));
lgraph = createLgraphUsingConnections2(layers,connections);

pixelRange = [-30 30];
scaleRange = [0.9 1.1];
imageAugmenter = imageDataAugmenter( ...
'RandXReflection',true, ...
'RandXTranslation',pixelRange, ...
'RandYTranslation',pixelRange, ...
'RandXScale',scaleRange, ...
'RandYScale',scaleRange);
augimdsTrain = augmentedImageDatastore(inputSize(1:2),imdsTrain, ...
    'DataAugmentation',imageAugmenter);

augimdsValidation = augmentedImageDatastore(inputSize(1:2),imdsValidation);

miniBatchSize = mbs;
valFrequency = floor(numel(augimdsTrain.Files)/miniBatchSize);
options = trainingOptions('sgdm', ...
'MiniBatchSize',miniBatchSize, ...
'MaxEpochs',me, ...
'InitialLearnRate',3e-4, ...
'Shuffle','every-epoch', ...
'ValidationData',augimdsValidation, ...
'ValidationFrequency',valFrequency, ...
'Verbose',true);

newnet = trainNetwork(augimdsTrain,lgraph,options);

save('brain.mat','newnet')

set(handles.text4,'String','Training complete');

Table 11 Training Coding


Figure 8 Training Image by Epoch and Batch after record

Figure 9 Training image on single CPU and initialise input data


Figure 10 Training Complete

4.2.1.4 Face Detection & Face Recognition

% --- Executes on button press in pushbutton1.


function pushbutton1_Callback(hObject, eventdata, handles)
% hObject handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
set(handles.text3,'String','Starting. Please wait');
set(handles.text2,'String','0');
gui_brain

% --- Executes on button press in pushbutton2.


function pushbutton2_Callback(hObject, eventdata, handles)
% hObject handle to pushbutton2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
set(handles.text2,'String',1);

% --- Executes on button press in pushbutton3.


function pushbutton3_Callback(hObject, eventdata, handles)
% hObject handle to pushbutton3 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
close force all
gui_main
Table 12 Record Image coding

Figure 11 Face Detection


Figure 12 Face Recognition

4.2.1.5 Take Attendance

if strcmp('>>> ID',aaa.info{1,1})==1
    msgbox('present')
end

Table 13 Attendance Marking coding

Figure 13 Attendance Marking
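The snippet above only pops up a message box for the recognised ID. In a complete system, the recognised ID would also be written to a persistent attendance log. The following is a hedged sketch of that step (the CSV layout, field names, and ID format are assumptions for illustration, not taken from the project):

```python
import csv
from datetime import datetime

def mark_attendance(log_path, student_id, registered_ids):
    """Append a timestamped 'present' row if the recognised ID is registered."""
    if student_id not in registered_ids:
        return False  # unknown face: do not mark attendance
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow(
            [student_id, datetime.now().isoformat(timespec="seconds"), "present"])
    return True
```

Appending rather than overwriting means the log accumulates one row per recognition event, which can later be filtered by date or subject when generating attendance reports.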


4.2.1.6 Gui Modify

function varargout = gui_modifiedimage(varargin)


% GUI_MODIFIEDIMAGE MATLAB code for gui_modifiedimage.fig
%      GUI_MODIFIEDIMAGE, by itself, creates a new GUI_MODIFIEDIMAGE or
%      raises the existing singleton*.
%
%      H = GUI_MODIFIEDIMAGE returns the handle to a new GUI_MODIFIEDIMAGE
%      or the handle to the existing singleton*.
%
%      GUI_MODIFIEDIMAGE('CALLBACK',hObject,eventData,handles,...) calls
%      the local function named CALLBACK in GUI_MODIFIEDIMAGE.M with the
%      given input arguments.
%
%      GUI_MODIFIEDIMAGE('Property','Value',...) creates a new
%      GUI_MODIFIEDIMAGE or raises the existing singleton*. Starting from
%      the left, property value pairs are applied to the GUI before
%      gui_modifiedimage_OpeningFcn gets called. An unrecognized property
%      name or invalid value makes property application stop. All inputs
%      are passed to gui_modifiedimage_OpeningFcn via varargin.
%
%      *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help gui_modifiedimage

% Last Modified by GUIDE v2.5 16-May-2021 00:42:59

% Begin initialization code - DO NOT EDIT


gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @gui_modifiedimage_OpeningFcn, ...
                   'gui_OutputFcn',  @gui_modifiedimage_OutputFcn, ...
                   'gui_LayoutFcn',  [] , ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT
% --- Executes just before gui_modifiedimage is made visible.
function gui_modifiedimage_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to gui_modifiedimage (see VARARGIN)

% Choose default command line output for gui_modifiedimage


handles.output = hObject;

% Update handles structure


guidata(hObject, handles);
ah = axes('unit','normalized','position',[0 0 1 1]);
bg = imread('background\pngwing.png');
imagesc(bg)
set(ah,'handlevisibility','off','visible','off')

% UIWAIT makes gui_modifiedimage wait for user response (see UIRESUME)
% uiwait(handles.figure1);

% --- Outputs from this function are returned to the command line.
function varargout = gui_modifiedimage_OutputFcn(hObject, eventdata, handles)
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure


varargout{1} = handles.output;

function edit1_Callback(hObject, eventdata, handles)


% hObject    handle to edit1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit1 as text
%        str2double(get(hObject,'String')) returns contents of edit1 as a double

% --- Executes during object creation, after setting all properties.
function edit1_CreateFcn(hObject, eventdata, handles)
% hObject    handle to edit1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), ...
        get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

% --- Executes on button press in pushbutton1.


function pushbutton1_Callback(hObject, eventdata, handles)
% hObject handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
id = get(handles.edit1,'String');
aaa = load(strcat('info\',id,'.mat'));
% set(handles.text2,'String',aaa.info);
aaa.info
set(handles.uitable1,'Data',aaa.info);

% --- Executes on button press in pushbutton2.


function pushbutton2_Callback(hObject, eventdata, handles)
% hObject handle to pushbutton2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
id = get(handles.edit1,'String');
delete(strcat('info\',id,'.mat'))
rmdir(strcat('photos\',id),'s')
rmdir(strcat('croppedfaces\',id),'s')

close force all


gui_recordimage

% --- Executes on button press in pushbutton3.


function pushbutton3_Callback(hObject, eventdata, handles)
% hObject handle to pushbutton3 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
close force all
gui_main

% --- Executes when entered data in editable cell(s) in uitable1.


function uitable1_CellEditCallback(hObject, eventdata, handles)
% hObject handle to uitable1 (see GCBO)
% eventdata  structure with the following fields (see MATLAB.UI.CONTROL.TABLE)
%	Indices: row and column indices of the cell(s) edited
%	PreviousData: previous data for the cell(s) edited
%	EditData: string(s) entered by the user
%	NewData: EditData or its converted form set on the Data property.
%	         Empty if Data was not changed
%	Error: error string when failed to convert EditData to appropriate
%	       value for Data
% handles    structure with handles and user data (see GUIDATA)

% --- Executes on button press in pushbutton4.


function pushbutton4_Callback(hObject, eventdata, handles)
% hObject handle to pushbutton4 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
gui_allinfo

Table 14 Modify Coding

Figure 14 Check ID before Modify


Figure 15 Enter new Info after click modify button

Figure 16 Scanning
Figure 17 Classify after Modify4

4.3 Testing and Result Discussion

In the testing carried out through the interfaces shown in the screenshots above, face
recognition was tested under fixed lighting, a fixed distance between the face and the
camera, and a fixed background. A plain white background was chosen so that the
image is not distracted by any objects in the background and the process focuses on the
face alone. The distance between the student's face and the camera device was set at
70 cm, and the lighting brightness was set at 75%. Under these conditions the face
recognition process was tested and its accuracy recorded.
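Under those fixed conditions, recognition accuracy can be reported simply as the fraction of test frames whose predicted ID matches the true ID. A generic sketch of that computation (the IDs below are illustrative, not the project's test data):

```python
def recognition_accuracy(predicted_ids, true_ids):
    """Fraction of test frames whose predicted ID matches the ground truth."""
    assert len(predicted_ids) == len(true_ids) and true_ids
    correct = sum(p == t for p, t in zip(predicted_ids, true_ids))
    return correct / len(true_ids)

# 3 of 4 hypothetical frames recognised correctly:
print(recognition_accuracy(["A1", "A2", "A2", "A3"],
                           ["A1", "A2", "A3", "A3"]))  # prints 0.75
```

This is the same per-frame accuracy notion that the training log reports as validation accuracy, applied here to the live-camera test.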
CHAPTER 5

CONCLUSION

5.1 Introduction

This chapter concludes the project/research that has been carried out, and consists
of:
i. The conclusion of the project/research

ii. A review of the data gathered and of how far it fits the project and its
objectives

iii. Conclusions on the methodology and project implementation

iv. Suggestions for future enhancement of the project or research

5.2 RESEARCH CONSTRAINT

This section clarifies in detail the constraints encountered throughout the
completion of the project/research, for example:
i. Limited time

ii. Development constraints

5.3 FUTURE WORK


This section discusses suggestion and enhancement of research/project,
including knowledge and contribution to the university, faculty, society or writer
throughout the research/project completion.
REFERENCES

APPENDIX A
SAMPLE APPENDIX 1



APPENDIX B
SAMPLE APPENDIX 2

