TABLE OF CONTENTS
DECLARATION
TITLE PAGE
ACKNOWLEDGEMENTS
ABSTRAK
ABSTRACT
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
LIST OF SYMBOLS
LIST OF ABBREVIATIONS
CHAPTER 1 INTRODUCTION
1.1 INTRODUCTION
1.2 PROBLEM STATEMENT
1.3 OBJECTIVES
1.4 SCOPE
1.5 SIGNIFICANCE
CHAPTER 2 LITERATURE REVIEW
2.1 INTRODUCTION
2.3.1 YOLO ALGORITHM
2.7 CONCLUSION
CHAPTER 3 METHODOLOGY
3.1 INTRODUCTION
3.2 METHODOLOGY
3.3.1 INPUT
3.3.2 OUTPUT
3.5 CONSTRAINTS
3.6 LIMITATIONS
CHAPTER 4 RESULTS AND DISCUSSION
4.1 Introduction
4.3 Testing and Result Discussion
CHAPTER 5 CONCLUSION
5.1 Introduction
REFERENCES
LIST OF TABLES
LIST OF FIGURES
LIST OF SYMBOLS
LIST OF ABBREVIATIONS
CHAPTER 1
INTRODUCTION
1.1 INTRODUCTION
Since 1950, Artificial Intelligence (AI) has been introduced and has been receiving attention. The term Artificial Intelligence is used because such machines are artificially endowed with human-like intelligence to perform tasks as we do; this intelligence is built using complex algorithms and mathematics. Artificial Intelligence is used in smartphones, cars, social media feeds, video games, banking, surveillance and many other aspects of our daily life. It enables technical systems to perceive their environment, deal with what they perceive, solve problems and act to achieve a specific goal. Data, either prepared in advance or gathered by the computer through its own sensors such as a camera, is processed so the system can respond. Artificial Intelligence systems can adapt their behaviour to a certain degree by analysing the effects of previous actions and working autonomously (Dustin Harris, 2022).
subjects students to waiting in a time-consuming queue, and if a student fails to bring his ID card, he will not be able to record his attendance. Evolving technologies have made many improvements in the changing world (C.O et al., 2013).
Artificial Intelligence has been used widely in attendance systems at most institutions. In other countries, education institutions have been using Artificial Intelligence with face recognition to help mark attendance. Face recognition in attendance systems uses the facial features of a person to verify or identify their presence in the class, and their attendance is marked automatically in real time. When students arrive and need to take attendance before entering the class, their faces are detected, the data is stored in a data collection in the cloud, and the data can be viewed by a higher official or admin (Aparna Trivedi et al., 2022).
The existing attendance system requires students to manually sign a sheet every time they attend a class. This adds to the time students spend finding their names on the sheet, some students may mistakenly sign another student's name, and the sheet may sometimes get lost (Asir Antony Gnana Singh et al., 2017). To avoid these problems, we introduce face recognition in MATLAB. Using face recognition can help reduce these problems and makes marking attendance more efficient.
No: 2
Problem Description: Students may mistakenly sign another student's attendance.
Effect: It can lead to faulty data in the end and can affect the students' records; the data is overloaded in the system and the data list is not synchronized with the student list.
1.3 OBJECTIVES
2) To check the accuracy of a Face Recognition using Artificial Intelligence (AI) application for the student attendance system of the Faculty of Computing, Universiti Malaysia Pahang.
1.4 SCOPE
User Scope:
Development Scope:
i. Contains a face recognition reader to read and scan the student's face for registration and attendance purposes.
ii. Developed using MATLAB.
1.5 SIGNIFICANCE
i. Student
ii. Lecturer
It is easy for the lecturer to detect students who did not come to class and to identify the students registered for his/her subject.
iii. Admin
It is easy for the admin to train the data for new enrolments in the class.
This report contains five chapters. Chapter 1 explains the overview of the project, including the Introduction, Problem Statement, Objectives of the project, Scope and Thesis Organization.
Chapter 2 briefly reviews the literature on existing Artificial Intelligence attendance-marking systems for education institutions.
Chapter 3 explains the methodology used in this project, which implements a CNN. The stages used in this project are Analysis, Design, Development, Implementation and Evaluation.
Chapter 4 explains the results and discussion based on the development and testing of this project. In this chapter, all the results and outputs of the project are briefly discussed, including the software development, application testing, data collection and project results.
Chapter 5 concludes and summarizes the results of this project. The limitations and further work are discussed thoroughly in this chapter.
CHAPTER 2
LITERATURE REVIEW
2.1 INTRODUCTION
Engineering & Technology & Institute of Electrical and Electronics Engineers,
n.d.-b).
YOLO comes from the term "You Only Look Once". This algorithm uses a neural network to provide real-time object detection and is popular due to its speed and accuracy in reading images and objects. YOLO also detects and recognizes the class probabilities of the detected image, and it employs a Convolutional Neural Network (CNN) to detect objects in real time. YOLO is a regression-based algorithm: instead of selecting the interesting parts of an image, it predicts classes and bounding boxes for the whole image in one run of the algorithm. The YOLO algorithm gives much better performance on all the parameters discussed, along with a high fps for real-time usage. To understand the YOLO algorithm, we need to understand what is being predicted. YOLO does not search for interesting regions of the input image that could contain an object; instead, it splits the image into cells (Jedrzej Swiezewski, 2020).
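The grid idea above, where each object is "owned" by the cell containing its bounding-box centre, can be sketched in a few lines. The grid size and pixel values here are illustrative assumptions (shown in Python for illustration), not taken from the thesis.

```python
def grid_cell(center_x, center_y, img_w, img_h, S=7):
    """Return the (row, col) of the SxS grid cell that owns an object
    whose bounding-box centre is at (center_x, center_y) in pixels."""
    col = int(center_x / img_w * S)
    row = int(center_y / img_h * S)
    # Clamp so centres on the right/bottom edge stay inside the grid.
    return min(row, S - 1), min(col, S - 1)
```

For example, in a 448x448 image split into a 7x7 grid, each cell is 64 pixels wide, so an object centred at (224, 64) lands in row 1, column 3.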
Advantages:
- Processes frames at the rate of 45 fps (larger network) to 150 fps (smaller network), which is better than real time.
- The network generalizes the image better.

Disadvantages:
- Comparatively low recall and more localization error compared to Faster R-CNN.
- Struggles to detect close objects because each grid cell can propose only 2 bounding boxes.
Principal Component Analysis, known as the PCA algorithm, is the oldest and best-known technique of multivariate data analysis. PCA is the general name for a technique that uses sophisticated underlying mathematical principles to transform several possibly correlated variables into a smaller number of variables called principal components. The purpose of the PCA algorithm is to reduce the dimensionality of a data set. This is achieved by transforming to a new set of variables, the principal components (PCs), which are uncorrelated and ordered so that the first few retain most of the variation present in all of the original variables. It is one of the most efficient methods for pattern recognition and image analysis, and it has proven to be one of the best algorithms for facial images (Zakaria Jaadi, 2022).
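The idea of ordered components, each retaining less variance than the last, can be illustrated on two-dimensional data, where the eigenvalues of the 2x2 covariance matrix have a closed form. This is an illustrative sketch (in plain Python), not part of the thesis implementation.

```python
def pca_2d(points):
    """Return the eigenvalues of the 2x2 sample covariance matrix,
    largest first. These are the variances along the two PCs."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    xs = [p[0] - mx for p in points]
    ys = [p[1] - my for p in points]
    sxx = sum(x * x for x in xs) / (n - 1)
    syy = sum(y * y for y in ys) / (n - 1)
    sxy = sum(x * y for x, y in zip(xs, ys)) / (n - 1)
    # Eigenvalues of [[sxx, sxy], [sxy, syy]] via trace/determinant.
    tr = sxx + syy
    det = sxx * syy - sxy * sxy
    disc = (tr * tr / 4 - det) ** 0.5
    return tr / 2 + disc, tr / 2 - disc
```

For points that lie exactly on a line, the second eigenvalue is zero: the first principal component alone retains all of the variation, which is the dimensionality reduction the text describes.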
Advantages Disadvantages
The attendance system using face recognition uses deep learning, which is comprised of neural networks. Deep learning here refers to a Convolutional Neural Network (CNN) comprising more than three layers, inclusive of the inputs and the outputs; such a network can be considered a deep learning algorithm and can be represented by the diagram below.
2.5 MATLAB USED FOR ATTENDANCE SYSTEM USING FACE
RECOGNITION
The attendance system works in MATLAB as follows: first, the camera captures the image of a student before entering the class. Then, the face is detected and cropped to initialize the face. The cropped image is processed using face recognition and a deep learning algorithm; a CNN is used in this project for classification. Students whose faces are recognized by the system are marked as present, and the results are transferred to an Excel sheet automatically.

The MATLAB toolboxes used are the Image Processing Toolbox, Image Acquisition Toolbox, Computer Vision Toolbox, Spreadsheet Link EX and Deep Learning Toolbox, which allow easy integration.
2.7 CONCLUSION
CHAPTER 3
METHODOLOGY
3.1 INTRODUCTION
3.2 METHODOLOGY
In this system, a video capture from the computer camera is the input, covering the details of face detection, face recognition and the use of a CNN. The proposed system improves the attendance management system by using the unique characteristics of each face. The face recognition technique is used for verification and identification. The algorithms used for biometric facial recognition follow several image processing steps. The first step of this system is to collect physical or behavioural samples in predefined conditions and during a stated period of time. Extraction: in this step, all data is extracted from the collected sample to create a template using facial recognition. After the extraction step, the collected data is compared with the existing templates. In the last stage of face recognition, the facial features of the gathered samples are matched against those of faces that have already been trained; this takes just a second. In this system we use the CNN method. A Convolutional Neural Network (CNN) is a type of artificial neural network used in image recognition and image processing.
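The enrol, extract and compare steps described above reduce to a nearest-template search: the probe's feature vector is compared against every stored template and matched only if it is close enough. The threshold value, data layout and IDs below are illustrative assumptions (shown in Python), not the thesis implementation.

```python
def identify(probe, templates, threshold=0.6):
    """Return the enrolled ID whose stored template embedding is nearest
    to the probe embedding, or None if none is within the threshold."""
    def dist(a, b):
        # Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    best_id, best_d = None, threshold
    for student_id, template in templates.items():
        d = dist(probe, template)
        if d < best_d:
            best_id, best_d = student_id, d
    return best_id
```

Returning None for faces outside the threshold is what lets an attendance system distinguish an unknown visitor from an enrolled student.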
After clicking on the register button and filling in all the information, the image of the student is captured using the device camera of a Lenovo IdeaPad 330.

While the student image is captured, a yellow box appears around the student's face, produced by the Cascade Object Detector. The minimum detection size of the Cascade Object Detector is 150x150 pixels, and the purpose of cropping the image is to remove all the background around the student; face recognition and training then use the cropped image.
C. Training Image
To train the images, the system requires 100 images for each student in different facial expressions. In this system, 10 students are used as a test sample.
D. Training CNN
To train the CNN, we train over the images by epoch and batch size. We train for 10 epochs with a batch size of 128 images.
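With these numbers, the iteration count follows directly from the dataset size. A quick sketch of the arithmetic (Python, illustrative), assuming the full set of 10 students x 100 images each:

```python
import math

def training_schedule(num_images, batch_size, epochs):
    """Iterations per epoch (last batch may be partial) and in total."""
    iters_per_epoch = math.ceil(num_images / batch_size)
    return iters_per_epoch, iters_per_epoch * epochs

# 1000 images with batch size 128 give 8 iterations per epoch,
# so 10 epochs run 80 iterations in total.
```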
E. Face Detection
F. Face Recognition
This is the last step of the face recognition process. We use one of the best learning techniques, deep metric learning, which is highly accurate and capable of outputting real-valued feature vectors.
G. Take Attendance
Once the face is identified against the stored images, the system reads the image together with its details. When the data is returned, the system generates an attendance table that includes the name, ID number, date, day and time with the corresponding subject ID.
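Building one row of that table can be sketched as follows. The field names, ID formats and example values are illustrative assumptions (Python), not the actual spreadsheet layout used by the system.

```python
from datetime import datetime

def attendance_row(name, student_id, subject_id, now=None):
    """Assemble one attendance record with the fields the text lists:
    name, ID number, date, day, time and subject ID."""
    now = now or datetime.now()
    return {
        "name": name,
        "id": student_id,
        "date": now.strftime("%Y-%m-%d"),
        "day": now.strftime("%A"),
        "time": now.strftime("%H:%M:%S"),
        "subject": subject_id,
    }
```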
3.3.1 INPUT
Input Description
3.3.2 OUTPUT
Output Description
Images are captured for training from the webcam and stored in a folder in the system. Detecting a person's face is the initial step: in the attendance management system, students enrol their photos, which are then checked against the folder produced by the admin. After a person's face is detected, recognition starts; if the image matches an image stored in the folder, the face is recognised and the person's details are printed in the interface. When an enrolled image is compared with every database image, the student is tagged as present or absent depending on whether that image is found in the system database. During recognition, a bounding box is drawn around the detected face region. We use the CNN approach in this system; a Convolutional Neural Network (CNN) is a type of artificial neural network specifically developed for pixel data and used in image recognition and image processing. Attendance marking is the final stage of the system's operation: this stage records a student's attendance; if the processes are completed and an image is correctly recognised, the student is registered as present in the system's server; otherwise, the student is marked as absent.
3.5 CONSTRAINTS
The Student Class Attendance System for UMP using Face Recognition has a few constraints that limit the user's actions; these are requirements that restrict the way the system should be developed. They are important as they can help the development team speed up the development of the system.
Constraints Description
When developing a system, there are some limitations that restrict the system from functioning to its full potential, as explained in Table 3.2.4.1 below.
Limitations Description
Under the current system, students must hand-sign the attendance sheet every time they attend a lesson. This adds the time it takes students to locate their names on the page, the possibility that some students will accidentally sign another student's name, and the possibility that the sheet will be misplaced. Face recognition in MATLAB is used to avoid these problems; it helps reduce them and makes marking attendance more efficient.
CHAPTER 4
RESULTS AND DISCUSSION
4.1 Introduction
This chapter discusses the implementation and the results or findings of the research. It contains the findings based on the experiments and testing that have been done, and it also includes the discussion showing that the objectives of the research are fulfilled.
First, we need to register a new student. Clicking on the <<New Registration>> button at the Main Interface of the FK Attendance System goes directly to the interface shown in Figure 4 below. The user needs to key in all the required information and click on the <<Record>> button, which starts recording the image. Before recording begins, as shown in Figure 4, the box below the axes prints the message "it will be starting in....", and once the camera is ready (Figure 5), the student's face appears in the axes and the box below prints the total number of images to be stored, which is n = 100. After recording finishes, a "finish" message is printed and the user may proceed to the next process by clicking on the <<Main>> button. If a student tries to register a new user with the same input, the message "Data Exist" appears, meaning the ID or other entered information already has data.
% function capturefacesfromvideo1(id,nama)
flag = 0;

% Create the face detector object.
faceDetector = vision.CascadeObjectDetector('FrontalFaceCART','MinSize',[150,150]);

% The loop runs n times; change n based on the number of training
% images you need.
n = 100;

% id = input('Enter ID :','s');
str = upper(id);
% nama = upper(input('Enter Nama :','s'));

% Check whether this ID has already been registered.
checkusr = dir('info');
checkusr = checkusr(3:end);
for xxx = 1:1:length(checkusr)
    if strcmp(strcat(str,'.mat'),checkusr(xxx).name) == 1
        flag = 1;
    end
end

if flag == 0
    % Save the student's details and create a photo folder.
    info1 = {str};
    info2 = {nama};
    info3 = {Email};
    info4 = {LectName};
    info5 = Gender;
    info6 = Subject;
    info = [info1,info2,info3,info4,info5,info6];
    save(strcat('info\',str,'.mat'),'info')
    mkdir('photos',str);

    if numPts < 10
        % Detection mode.
        bbox = faceDetector.step(videoFrameGray);
        if ~isempty(bbox)
            % Find corner points inside the detected region.
            points = detectMinEigenFeatures(videoFrameGray,'ROI',bbox(1,:));
        end
    else
        % Tracking mode.
        [xyPoints,isFound] = step(pointTracker,videoFrameGray);
        visiblePoints = xyPoints(isFound,:);
        oldInliers = oldPoints(isFound,:);
        numPts = size(visiblePoints,1);
        if numPts >= 10
            % Estimate the geometric transformation between the old
            % points and the new points.
            [xform,oldInliers,visiblePoints] = estimateGeometricTransform(...
                oldInliers,visiblePoints,'similarity','MaxDistance',4);
            % Apply the transformation to the bounding box.
            bboxPoints = transformPointsForward(xform,bboxPoints);
            imshow(videoFrame)
            % Reset the points.
            oldPoints = visiblePoints;
            setPoints(pointTracker,oldPoints);
            i = i+1;
            dispproc = i;
            set(handles.text2,'String',n-dispproc+1);
        else
            set(handles.text2,'String','Please adjust your position!');
        end
    end

    % Clean up.
    clear cam;
    release(pointTracker);
    release(faceDetector);

    % Crop the face region from each saved photo.
    dispproc = 'Cropping Face part....';
    set(handles.text2,'String',dispproc);
    mkdir('croppedfaces',str);
    ds1 = imageDatastore(['photos\',str],'IncludeSubfolders',true,'LabelSource','foldernames');
    cropandsave(ds1,str);
    set(handles.text2,'String','Finish');
else
    dispproc = 'Data exist';
    set(handles.text2,'String',dispproc);
end
For the crop-and-save part, when recording starts, each image is stored in two different folders. As shown in Figure 6, the full image is stored in the photos folder, while the cropped image is stored in the croppedfaces folder, as shown in Figure 7.
function cropandsave(im,str)
j = 1;
T = countEachLabel(im);
n = T(1,2).Variables;
for i = 1:n
    i1 = readimage(im,i);
    [img,face] = cropface(i1);
    if face == 1
        % Save only frames in which a face was found.
        imwrite(img,['croppedfaces\',str,'\',int2str(j),'.jpg']);
        j = j+1;
    end
end
Before detecting and recognizing images, the network needs to be trained first. The images are trained by epoch and by batch. The chosen number of epochs is 10, which is the number of complete passes through the training dataset, while the chosen batch size is 128, which is the number of sample images processed per iteration. As shown in Figure 8, to start training, make sure the epoch and mini-batch size match the declared numbers. Clicking on the <<Train>> button shows the message "Training started…Please wait" in the box below. After that, it displays that training will be done using a single CPU, and the process runs until the set number of epochs finishes. During training, it displays the epoch number, iteration, time elapsed, mini-batch accuracy, validation accuracy, mini-batch loss, validation loss and base learning rate, as in Figure 9 below. After it finishes, the message "Training Complete" is displayed, and clicking on the <<Menu>> button starts classifying the image, as shown in Figure 10.
% unzip('MerchData.zip');
set(handles.text4,'String','Training started...Please wait');
drawnow

% Load the cropped face images and split them into training and
% validation sets.
imds = imageDatastore('croppedfaces', ...
    'IncludeSubfolders',true, ...
    'LabelSource','foldernames');
[imdsTrain,imdsValidation] = splitEachLabel(imds,0.8);

% Use a pretrained AlexNet and replace its final layers so the output
% matches the number of enrolled students (transfer learning).
net = alexnet;
inputSize = net.Layers(1).InputSize;
if isa(net,'SeriesNetwork')
    lgraph = layerGraph(net.Layers);
else
    lgraph = layerGraph(net);
end
[learnableLayer,classLayer] = findLayersToReplace2(lgraph);
numClasses = numel(categories(imdsTrain.Labels));
if isa(learnableLayer,'nnet.cnn.layer.FullyConnectedLayer')
    newLearnableLayer = fullyConnectedLayer(numClasses, ...
        'Name','new_fc', ...
        'WeightLearnRateFactor',10, ...
        'BiasLearnRateFactor',10);
elseif isa(learnableLayer,'nnet.cnn.layer.Convolution2DLayer')
    newLearnableLayer = convolution2dLayer(1,numClasses, ...
        'Name','new_conv', ...
        'WeightLearnRateFactor',10, ...
        'BiasLearnRateFactor',10);
end
lgraph = replaceLayer(lgraph,learnableLayer.Name,newLearnableLayer);
newClassLayer = classificationLayer('Name','new_classoutput');
lgraph = replaceLayer(lgraph,classLayer.Name,newClassLayer);

% Freeze the early layers so only the new layers are trained.
layers = lgraph.Layers;
connections = lgraph.Connections;
layers(1:10) = freezeWeights2(layers(1:10));
lgraph = createLgraphUsingConnections2(layers,connections);

% Resize the images to the network input size.
augimdsTrain = augmentedImageDatastore(inputSize(1:2),imdsTrain);
augimdsValidation = augmentedImageDatastore(inputSize(1:2),imdsValidation);

miniBatchSize = mbs;   % mbs and me are taken from the GUI inputs
valFrequency = floor(numel(augimdsTrain.Files)/miniBatchSize);
options = trainingOptions('sgdm', ...
    'MiniBatchSize',miniBatchSize, ...
    'MaxEpochs',me, ...
    'InitialLearnRate',3e-4, ...
    'Shuffle','every-epoch', ...
    'ValidationData',augimdsValidation, ...
    'ValidationFrequency',valFrequency, ...
    'Verbose',true);
newnet = trainNetwork(augimdsTrain,lgraph,options);
save('brain.mat','newnet')
set(handles.text4,'String','Training complete');
if strcmp('>>> ID',aaa.info{1,1}) == 1
    msgbox('present')
end
if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

% --- Executes just before gui_modifiedimage is made visible.
function gui_modifiedimage_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to gui_modifiedimage (see VARARGIN)
Figure 16 Scanning
Figure 17 Classify after Modify4
In the testing done using the interface shown in the screenshots above, face recognition was tested with the same lighting, the same distance of the face from the camera, and the same background. For the background, we decided to use a plain white background to ensure the image would not be distracted by any objects behind the student, so the process focuses on the face only. The distance of the student's face from the camera device was tested at 70 cm, with 75% lighting brightness used to test the face recognition process, and the accuracy obtained is
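Recognition accuracy under these fixed conditions is simply the fraction of test images classified correctly. A minimal sketch (Python, with illustrative labels):

```python
def accuracy(predictions, labels):
    """Fraction of test images whose predicted ID matches the true ID."""
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)
```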
CHAPTER 5
CONCLUSION
5.1 Introduction
This chapter concludes the project/research that has been done, which consists of:
i. The conclusion of the project/research
ii. All the data retrieved, and an assessment of how far it fits the project and its objectives
APPENDIX A
SAMPLE APPENDIX 1