Face Recognition-Based Attendance System
Literature Survey
• Priyanka Wagh, Jagruti Chaudhari, Roshani Thakare, and Shweta Patil, "Attendance System based on Face Recognition using Eigen face and PCA Algorithms", 2015.
An overview of the OpenCV face recognition pipeline: the key step is a CNN feature extractor that generates 128-d facial embeddings.
Process behind dlib’s facial landmark detector
• The pre-trained facial landmark detector inside the dlib library is used to estimate the location
of 68 (x, y)-coordinates that map to facial structures on the face.
• The indexes of the 68 coordinates can be visualized on the image below:
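A minimal sketch of running dlib's 68-point landmark detector; the model file name is the one dlib distributes, and the input file name is a placeholder:

```python
import cv2
import dlib

# Pre-trained face detector (HOG + Linear SVM) and 68-point landmark predictor.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = cv2.imread("student.jpg")                 # placeholder input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for rect in detector(gray, 1):                    # upsample once while detecting
    shape = predictor(gray, rect)                 # estimate the 68 landmarks
    for i in range(68):
        x, y = shape.part(i).x, shape.part(i).y
        cv2.circle(image, (x, y), 2, (0, 255, 0), -1)   # draw each landmark
```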
Process
• Detect faces
• Compute 128-d face embeddings to quantify a face
• Train a Support Vector Machine (SVM) on top of the embeddings
• Recognize faces in images and video streams
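A minimal sketch of this pipeline, assuming a pre-trained OpenFace Torch embedder file ("openface_nn4.small2.v1.t7") and pre-cropped faces with matching name labels:

```python
import cv2
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC

# Assumed embedder model file; produces a 128-d embedding per face.
embedder = cv2.dnn.readNetFromTorch("openface_nn4.small2.v1.t7")

def embed(face_bgr):
    """Compute a 128-d embedding for one cropped BGR face image."""
    blob = cv2.dnn.blobFromImage(face_bgr, 1.0 / 255, (96, 96),
                                 (0, 0, 0), swapRB=True, crop=False)
    embedder.setInput(blob)
    return embedder.forward().flatten()

def train_recognizer(faces, names):
    """Fit a linear SVM on the 128-d embeddings of the given faces."""
    embeddings = np.array([embed(f) for f in faces])
    le = LabelEncoder()
    labels = le.fit_transform(names)          # encode person names as integers
    recognizer = SVC(C=1.0, kernel="linear", probability=True)
    recognizer.fit(embeddings, labels)
    return recognizer, le
```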
Training Datasets
• To train a face recognition model with deep learning, each input batch of data
includes three images:
1. The anchor
2. The positive image
3. The negative image
While optional, face alignment has been demonstrated to increase face recognition accuracy in some pipelines.
After we’ve (optionally) applied face alignment and cropping, we pass the input face through our deep neural
network:
• The neural network computes the 128-d embeddings for each face and
then tweaks the weights of the network (via the triplet loss function) such
that:
• The 128-d embeddings of the anchor and positive image lie closer together
• While at the same time, pushing the embeddings for the negative image farther away
• In this manner, the network is able to learn to quantify faces and return
highly robust and discriminating embeddings suitable for face recognition.
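A minimal sketch of the triplet loss described above; the margin value is an illustrative assumption:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss that pulls the anchor/positive embeddings together and
    pushes the negative at least `margin` farther away (128-d vectors)."""
    pos_dist = np.sum((anchor - positive) ** 2)   # squared distance to positive
    neg_dist = np.sum((anchor - negative) ** 2)   # squared distance to negative
    return max(pos_dist - neg_dist + margin, 0.0)
```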
├── images  (three test images used to verify the operation of the model)
│   ├── adrian.jpg
│   ├── patrick_bateman.jpg
│   └── trisha_adrian.jpg
├── face_detection_model  (a pre-trained Caffe deep learning model provided by OpenCV that detects and localizes faces in an image)
│   ├── deploy.prototxt
│   └── res10_300x300_ssd_iter_140000.caffemodel
└── output  (output pickle files; if you are working with your own dataset, you can store your output files here as well)
    ├── embeddings.pickle  (serialized facial embeddings, computed for every face in the dataset)
    ├── le.pickle  (the label encoder, containing the name labels for the people the model can recognize)
    └── recognizer.pickle  (the Linear Support Vector Machine (SVM) model; a machine learning model rather than a deep learning model, responsible for actually recognizing faces)
Summary
Detecting facial landmarks in an image is a two-step process:
1. First, localize the face(s) in the image. This can be accomplished using a number of different techniques, but normally involves either Haar cascades or a HOG + Linear SVM detector (any approach that produces a bounding box around the face will suffice).
2. Apply the shape predictor, specifically a facial landmark detector, to obtain the (x, y)-coordinates of the face regions in the face ROI.
Given these facial landmarks we can apply a number of computer vision techniques,
including:
• Face part extraction (i.e., nose, eyes, mouth, jawline, etc.)
• Facial alignment
• Head pose estimation
• Face swapping
• Blink detection
• …and much more!
Proposed Methodology
● Architecture
Working
● To put this system to work, we need some hardware devices for our project.
● First, we need a high-definition camera fixed in the classroom at a suitable location from which the whole class is covered by the camera.
● When the camera takes a picture of all students, that picture is enhanced for further processing.
● In the enhancement step, the picture is first converted to a grayscale image and then equalized using histogram equalization (a minimal sketch follows this list).
● After enhancement, the picture is passed to the face detection algorithm, which detects the students' faces.
● After detection, each student's face is cropped from the image, and all cropped faces are compared with the database of faces.
● The database already holds every student's information along with their image.
● By comparing the faces one by one, the students' attendance is marked on the server.
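A minimal sketch of the enhancement step described above, using OpenCV; the file name is a placeholder:

```python
import cv2

frame = cv2.imread("classroom.jpg")              # captured classroom photo
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # convert to grayscale
enhanced = cv2.equalizeHist(gray)                # histogram equalization
```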
METHODOLOGY
● To implement the automated face recognition system, we need to follow a particular methodology. The steps required for successful attendance marking are as follows:
1) Enrollment
2) Image Acquisition
3) Conversion of the image to grayscale
4) Histogram Normalization
5) Noise Removal
6) Skin Classification
7) Face Detection
8) Face Recognition
9) Attendance Marking
(1) Enrollment:
● The student or person will be enrolled in the database using their general information and unique biometric features.
● This information will be saved in the form of templates. The enrollment includes:
1. Taking an image with the camera
2. Enhancing that image
3. Feature extraction
4. Maintaining the database
● The image of the person is captured by the camera and then enhanced using histogram equalization and noise filtering.
● After this process, the features are extracted from the image.
● The unique features will be stored in the face database and a particular id will
be assigned to that person.
(2) Image Acquisition:
● A high definition camera device will be installed in front of the classroom.
● The camera device captures an image of the whole classroom.
● This captured image is given as an input to the system.
(7) Face Detection:
● After the enhancement of the image, the image is passed to the face detection module, which detects the students' faces in the image.
● The Viola-Jones algorithm, an AdaBoost-based face detector created by P. Viola and M. J. Jones, is used for face detection (a minimal sketch follows).
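A minimal sketch of Viola-Jones detection via OpenCV's bundled Haar cascade, run on the equalized grayscale image from the enhancement step; "classroom.jpg" is the same placeholder file name as before:

```python
import cv2

gray = cv2.cvtColor(cv2.imread("classroom.jpg"), cv2.COLOR_BGR2GRAY)
enhanced = cv2.equalizeHist(gray)

# OpenCV ships the pre-trained frontal-face Haar cascade with the library.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(enhanced, scaleFactor=1.1,
                                 minNeighbors=5, minSize=(30, 30))
crops = [enhanced[y:y + h, x:x + w] for (x, y, w, h) in faces]  # one crop per student
```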
(8) Face Recognition:
● Face recognition is the next step after face detection.
● The face recognition can be achieved by cropping the faces from the image and comparing
them with the enrolled images in the face database.
● For face recognition, a region of interest is selected, and the faces are verified one by one using the eigenface method.
(9) Attendance Marking:
● After the faces are verified and successfully recognized, the attendance is marked on the server.
Activity diagram of Face Recognition
ALGORITHM
● (1) Start with the minimum chosen window size and the corresponding sliding step.
● (2) For the chosen window size, slide the window vertically and horizontally with the same step. At each step, a set of N face recognition filters is applied. If one filter gives a positive answer, a face is detected in the current window.
● (3) If the window size is the maximum size, stop the procedure. Otherwise, increase the window size and the corresponding sliding step to the next chosen size and go to step (2).
Features
● Each face recognition filter (from the set of N filters) contains a set of
cascade-connected classifiers.
● Each classifier looks at a rectangular subset of the detection window and
determines if it looks like a face. If it does, the next classifier is applied.
● If all classifiers give a positive answer, the filter gives a positive answer and
the face is recognized. Otherwise the next filter in the set of N filters is run.
● Each classifier is composed of Haar feature extractors (weak classifiers).
● Each Haar feature is the weighted sum of 2-D integrals of small rectangular areas attached to each other. The weights may take values ±1. The figure below shows examples of Haar features relative to the enclosing detection window; gray areas have a positive weight and white areas have a negative weight.
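A minimal sketch of evaluating a two-rectangle Haar feature with an integral image (summed-area table); the rectangle layout and window contents are illustrative:

```python
import numpy as np

def integral(img):
    """Summed-area table with a zero row/column prepended."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w x h rectangle whose top-left corner is (x, y)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

window = np.random.randint(0, 256, (24, 24))      # a 24x24 detection window
ii = integral(window.astype(np.int64))
# Two horizontally adjacent rectangles with weights +1 and -1:
feature = rect_sum(ii, 0, 0, 12, 24) - rect_sum(ii, 12, 0, 12, 24)
```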
The classifier decision is defined as:

$$h_m = \begin{cases} 1, & \text{if } \sum_{i} \left( \alpha_{m,i}\,[F_{m,i} > t_{m,i}] + \beta_{m,i}\,[F_{m,i} \le t_{m,i}] \right) > \theta_m \\ 0, & \text{otherwise} \end{cases}$$

Where,
F_{m,i} is the weighted sum of the 2-D integrals for the i-th feature extractor,
m is the classifier number,
t_{m,i} is the decision threshold for the i-th feature extractor,
α_{m,i} and β_{m,i} are constant values associated with the i-th feature extractor,
θ_m is the decision threshold for the m-th classifier.
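A minimal sketch of this decision rule; the parameter lists are illustrative placeholders, not values from a trained cascade:

```python
def classifier_decision(F, t, alpha, beta, theta_m):
    """F[i]: i-th feature value; t[i]: its threshold; alpha[i]/beta[i]:
    contributions when the feature fires / does not fire; theta_m:
    decision threshold of the m-th classifier."""
    total = sum(alpha[i] if F[i] > t[i] else beta[i] for i in range(len(F)))
    return total > theta_m   # True: positive answer, move to the next classifier
```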
Face Recognition
● For face recognition, we utilize the eigenface technique along with Principal Component Analysis (PCA).
● In the eigenface approach, once faces are detected they are cropped from the image. Each student's face is cropped, and various features are extracted from it, such as the distance between the eyes, the nose, and the outline of the face.
● Using these features as eigenfaces, the students are recognized by comparing them against the face database, and their attendance is marked.
Eigenface
● Eigenfaces are a set of features that characterize the variation between face images
● Each training face image can be represented as a linear combination of the eigenfaces, and so can a new input image
● Compare the feature weights of the new input image with those of the known
individuals
FACE RECOGNITION PROCESS
One of the simplest and most effective PCA approaches used in face recognition systems
is the so-called eigenface approach.
This approach transforms faces into a small set of essential characteristics called eigenfaces, which are the principal components of the initial set of learning images (training set).
Recognition is done by projecting a new image into the eigenface subspace, after which the person is classified by comparing their position in eigenface space with the positions of known individuals.
The advantage of this approach over other face recognition systems is in its simplicity,
speed and insensitivity to small or gradual changes on the face.
● The whole recognition process involves two steps:
● A. Initialization process
● B. Recognition process
● The Initialization process involves the following operations:
● i. Acquire the initial set of face images, called the training set.
● ii. Calculate the eigenfaces from the training set, keeping only the M eigenfaces that correspond to the highest eigenvalues. These M images define the face space. As new faces are experienced, the eigenfaces can be updated or recalculated.
● iii. Calculate the distribution in this M-dimensional space for each known person by projecting his or her face images onto the face space.
● Having initialized the system, recognition involves the following steps:
● 1) Calculate a set of weights based on the input image and the M eigenfaces by projecting the input image onto each of the eigenfaces.
● 2) Determine whether the image is a face at all (known or unknown) by checking whether it is sufficiently close to the face space.
● 3) If it is a face, classify the weight pattern as either a known person or as unknown.
● 4) Optionally update the eigenfaces or weights: if the same unknown face is seen several times, calculate its characteristic weight pattern and incorporate it into the known faces.
EIGENFACE ALGORITHM
● Let a face image Γ(x, y) be a two-dimensional M by N array of intensity values. Here, consider a set of images of 200 × 149 pixels.
● An image may also be considered as a vector of dimension M × N, so that a typical image of size 200 × 149 becomes a vector of dimension 29,800, or equivalently a point in a 29,800-dimensional space.
● Step 1: Prepare the training faces
● Obtain face images $I_1, I_2, I_3, \dots, I_M$ (training faces).
● The face images must be centered and of the same size.
● Step 2: Prepare the data set
● Each face image Iᵢ in the database is transformed into a vector and placed into a training set S:
$$S = \{\Gamma_1, \Gamma_2, \Gamma_3, \dots, \Gamma_M\}$$
● Each image is transformed into a vector of size MN × 1 and placed into the set.
● For simplicity, the face images are assumed to be of size N × N, resulting in a point in an N²-dimensional space.
● Step 3: Compute the average face vector
● The average face vector Ψ is calculated using the following formula:
$$\Psi = \frac{1}{M} \sum_{n=1}^{M} \Gamma_n$$
● Step 4: Subtract the mean face
● Each face differs from the average by the vector $\Phi_i = \Gamma_i - \Psi$.
● Step 5: Calculate the covariance matrix
● We obtain the covariance matrix C in the following manner:
$$C = \frac{1}{M} \sum_{n=1}^{M} \Phi_n \Phi_n^{T} = A A^{T}, \qquad A = [\Phi_1\ \Phi_2\ \dots\ \Phi_M]$$
● The covariance matrix C has dimensionality N² × N², so one would have N² eigenfaces and eigenvalues.
● Compute the eigenvectors uᵢ of C. The matrix C is very large, so this direct computation is not practical; instead, one computes the eigenvectors vᵢ of the much smaller M × M matrix AᵀA and obtains uᵢ = Avᵢ.
● Next, we project the training samples into the eigenface space. The feature weights for the training images can be calculated by the following formula:
$$\omega_k = u_k^{T} (\Gamma_i - \Psi), \qquad k = 1, \dots, M$$
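A minimal sketch of these steps in NumPy, including the M × M trick; `faces` is an assumed (M, N·N) array of flattened, same-size training face images:

```python
import numpy as np

def compute_eigenfaces(faces):
    """Return the mean face, the eigenfaces, and the training weights."""
    psi = faces.mean(axis=0)              # Step 3: average face vector
    A = (faces - psi).T                   # Step 4: mean-subtracted faces, (N*N, M)

    # Instead of eigendecomposing C = A A^T (N^2 x N^2), use A^T A (M x M):
    eigvals, V = np.linalg.eigh(A.T @ A)
    U = A @ V                             # map back: u_i = A v_i
    U /= np.linalg.norm(U, axis=0)        # normalize each eigenface

    weights = U.T @ A                     # project training faces: column i
    return psi, U, weights                # holds the weights of face i
```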
Schematic diagram
Flow chart
Thank You