With every passing day, we are becoming more and more dependent on technology to carry
out even the most basic of our actions. Face detection and face recognition help us in many
ways, be it sorting photos in our mobile phone gallery by recognizing pictures with faces in
them, unlocking a phone with a mere glance, or adding biometric information in the form of
face images to the country’s unique ID database (Aadhaar) as an acceptable biometric input
for verification.
This project lays out the basic terminology required to understand the implementation of Face
Detection and Face Recognition using Intel’s Computer Vision library called ‘OpenCV’.
It also shows the practical implementation of face detection and face recognition using
OpenCV with Python on both the Windows and macOS platforms. The aim of the project is to
implement facial recognition on faces that the script can be trained for. The input is taken from
a webcam and the recognized faces are displayed along with their names in real time.
This project can be implemented on a larger scale to develop a biometric attendance system
which can replace the time-consuming process of taking attendance manually.
CHAPTER 1: INTRODUCTION
1.1 INTRODUCTION
A face recognition system is a technology capable of matching a person's face from a digital
image or a video frame against a database of faces. Researchers are currently developing
multiple methods by which face recognition systems work. The most advanced face
recognition method, which is also used to authenticate users through ID verification services,
works by pinpointing and measuring facial features from a given image.
While initially a form of computer application, face recognition systems have seen wider use
in recent times on smartphones and in other forms of technology, such as robotics. Because
computerized face recognition involves the measurement of a human's physiological
characteristics, face recognition systems are classified as biometrics. Although the accuracy
of face recognition systems as a biometric technology is lower than that of iris recognition
and fingerprint recognition, they are widely adopted because of their contactless and
non-invasive method. Facial recognition systems are deployed in advanced human-computer
interaction, video surveillance and automatic indexing of images.
We have created a face recognition technology capable of identifying faces.
“Facial Detection and Facial Recognition using Intel’s open-source Computer Vision
Library (OpenCV) and Python dependency”
There are various scripts illustrated throughout the project that have functionalities like
detecting faces in static images, detecting faces in a live feed using a webcam, capturing face
images and storing them in the dataset, training a classifier for recognition, and finally
recognition of the trained faces.
All the scripts are written in Python 3.6.5 and are provided with documented code. This
project lays out most of the useful tools and information for face detection and face recognition
and can be of importance to people exploring facial recognition with OpenCV.
The project shows the implementation of various algorithms and recognition approaches which
will be discussed later in the project report.
Face recognition can be of importance in terms of security, organization, marketing,
surveillance, robotics, etc.
Face detection can immensely improve surveillance efforts, greatly helping to track down
people with criminal records, that is, criminals and terrorists who might be a vast threat to the
security of the nation and its people collectively. Personal security is also greatly enhanced,
since there is nothing for hackers to steal or change, such as passwords.
1.3 OBJECTIVES
This project was created to study various means of recognizing faces with more accuracy
and reducing the error rate during recognition. The ideal condition for any recognition project
is to reduce the intra-class variance of the features to be detected or recognized and to
increase their inter-class variance.
Different recognizer approaches are used for recognition of faces. They are:
• Eigen Faces
• Fisher Faces
• Local Binary Pattern Histograms (LBPH)
1.4.1 OpenCV
In total, the library has about 2,500 optimized algorithms, which is remarkable. “These
algorithms include a comprehensive set of both classic and state-of-the-art computer vision
and machine learning algorithms. These algorithms are commonly used to detect and
recognize faces, identify objects, classify human actions in videos, track camera movements,
track moving objects, extract 3D models of objects, produce 3D point clouds from stereo
cameras, stitch images together to produce a high-resolution image of an entire scene, find
similar images from an image database, remove red eyes from images taken using flash,
follow eye movements, recognize scenery and establish markers to overlay it with augmented
reality, etc.”. The amazing thing about this library is that it has a user community of around
forty-seven thousand people and an estimated number of downloads exceeding eighteen
million. The library is used extensively in companies, research groups and by governmental
bodies.
Along with well-established companies like “Google, Yahoo, Microsoft, Intel, IBM, Sony,
Honda, Toyota” that use the library, there are several startups like “Applied Minds,
VideoSurf, and Zeitera” that make extensive use of OpenCV. OpenCV’s deployed uses span
a wide array of applications: stitching Street View images together, detecting intrusions in
surveillance video in Israel, monitoring mine equipment in China, helping robots navigate and
pick up objects at Willow Garage, detecting swimming pool drowning accidents in Europe,
running interactive art in Spain and New York, checking runways for debris in Turkey,
inspecting labels on products in factories around the world, and rapid face detection in Japan.
1.4.2 NumPy
“The Python programming language was not originally designed for numerical computing,
but it attracted the attention of the scientific and engineering community early on”. “In 1995
the special interest group (SIG) matrix-sig was founded with the aim of defining an array
computing package; among its members was Python designer and maintainer Guido van
Rossum, who extended Python's syntax (in particular the indexing syntax) to make array
computing easier”.
“An implementation of a matrix package was completed by Jim Fulton, then generalized by
Jim Hugunin and called Numeric (also variously referred to as the "Numerical Python
extensions" or "NumPy"). Hugunin, a graduate student at the Massachusetts Institute of
Technology (MIT), joined the Corporation for National Research Initiatives (CNRI) in 1997
to work on JPython, leaving Paul Dubois of Lawrence Livermore National Laboratory (LLNL)
to take over as maintainer. Other early contributors include David Ascher, Konrad Hinsen and
Travis Oliphant”.
“A new package called Numarray was written as a more flexible replacement for Numeric.
Like Numeric, it too is now deprecated. Numarray had faster operations for large arrays, but
was slower than Numeric on small ones, so for a time both packages were used in parallel for
different use cases. The last version of Numeric (v24.2) was released on 11 November 2005,
while the last version of Numarray (v1.5.2) was released on 24 August 2006”.
There was a desire to get Numeric into the Python standard library, but Guido van Rossum
decided that the code was not maintainable in its state at the time.
“In early 2005, NumPy developer Travis Oliphant wanted to unify the community around a
single array package and ported Numarray's features to Numeric, releasing the result as
NumPy 1.0 in 2006. This new project was part of SciPy. To avoid installing the large SciPy
package just to get an array object, this new package was separated and called NumPy.
Support for Python 3 was added in 2011 with NumPy version 1.5.0”.
In 2011, PyPy started development on an implementation of the NumPy API for PyPy. It is
not yet fully compatible with NumPy.
1.4.3 Pillow
The Python Imaging Library (Pillow), commonly called PIL, is very useful for adding image
processing capability to your Python interpreter.
The main strength of this library is its extensive file format support, which allows it to read
and store files in many different formats, an efficient internal representation, and fairly
powerful image processing abilities. The core image library is designed for fast access to data
stored in a few basic pixel formats. All of this makes it a very capable tool for general image
processing, which is probably also the reason why it is used so heavily.
Let’s look at a couple of things for which we can use this library.
Image Archive: The “Python Imaging Library” is ideal for image archival and batch
processing applications. You can use the library to create thumbnails, convert between file
formats, print images, etc.
The current version identifies and reads a large number of file formats. Write support is
intentionally restricted to the most commonly used interchange and presentation formats.
“Image Display: The current release includes Tk PhotoImage and BitmapImage interfaces, as
well as a Windows DIB interface that can be used with PythonWin and other Windows-based
toolkits. Many other GUI toolkits come with some kind of PIL support”. “For a purpose such
as debugging there are functions like the show() method, which saves an image to disk and
calls an external display utility.
Image Processing: The library contains basic image processing functionality, including point
operations, filtering with a set of built-in convolution kernels, and colour space conversions”.
In short, the “Python Imaging Library” is a free, open-source library which must first be
installed (for example using pip) before it can be used to run various Python functions in the
module. It is important to note that all of these functions work only once the library has been
imported, and cannot be used otherwise; this is why it is recommended to install the library
beforehand in order to use it properly. The main function of this library is to add functionality
for opening, manipulating and storing images in different file formats, which can later be
operated on according to the needs of the programmer using the library.
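As a small illustration of these capabilities, a minimal sketch using Pillow's documented Image API might look like this (the file names are placeholders, not files from this project):

from PIL import Image

img = Image.open("photo.jpg")            # open an existing image
print(img.format, img.size, img.mode)    # e.g. JPEG (1920, 1080) RGB

img.thumbnail((128, 128))                # shrink in place, keeping aspect ratio
img.save("photo_thumb.png")              # output format inferred from extension

gray = img.convert("L")                  # convert to 8-bit grayscale
gray.save("photo_gray.png")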
1.4.4 FACE RECOGNITION
The “Face recognition” library in Python helps in recognizing and manipulating faces from
Python or from the command line, using what is claimed to be the world's simplest face
recognition library, once the module is imported and the required functions are accessed. The
“Face recognition” library was built using dlib’s “state-of-the-art face recognition”, further
built with deep learning. The model has an accuracy of 99.38%. It is used to find faces in
pictures.
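A minimal sketch of this use, assuming the library is installed and using a placeholder file name, might look like this:

import face_recognition

image = face_recognition.load_image_file("group_photo.jpg")
face_locations = face_recognition.face_locations(image)
print("Found {} face(s) in this photograph.".format(len(face_locations)))

# Each location is a (top, right, bottom, left) tuple of pixel coordinates.
for top, right, bottom, left in face_locations:
    print("Face at top={}, right={}, bottom={}, left={}".format(top, right, bottom, left))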
Figure 1.1: Finding all the faces in the picture
It can also find and manipulate facial features in pictures.
Figure 1.3: Recognizing who is in the Picture
1.5 Organization
Biometrics in simple terms refers to evaluating and measuring fingerprints, facial features or
other such human parameters that help to identify an individual. The study of biometrics is
very important considering how critical security as a whole has become. Note that
identification is only possible because each and every person has unique and distinct features
which make it possible to tell individuals apart.
Keep in mind that the biometric feature or features being used must be available in the
database for all individuals in the community before they can be used for authentication. This
is called enrolment. Authentication can take one of the following forms:
• Identification: This refers to matching an individual’s features against all the records in
the database to check whether his/her record is present or not.
• Verification: “To check whether the person is who he/she is claiming to be”. In other
words, to check whether their claim of identity is true. In this case the features of the person
are matched only with the features of the person they claim to be.
Types of Biometrics:
• Behavioural Biometrics
• Physiological Biometrics

Physiological Biometrics:
“As the name suggests, in this type of biometrics the physical traits of a person are measured
for identification and verification. The trait should be chosen such that it is unique among
the general population, and no matter what is resistant to changes due to illness, aging,
injury, etc”.
Physiological Biometric Techniques:
• Facial Recognition: The features of the face, like the distance between the nose and the
mouth, the distance between the ears, the length of the face and skin colour, are used for
verification and identification. However, complications can arise due to aging, disease, or
wearing sunglasses or anything else covering the face that disrupts recognition of those
features, indirectly hampering the outcome by lowering the accuracy of the results.
• Iris and Retina: Besides fingerprints, the patterns found in the iris and retina are also
unique metrics which can be used to identify an individual. Devices to analyse the retina are
expensive, and hence this method is less common. Diseases like cataract may hinder the
pattern recognition of the iris and may cause discrepancies.
• Voice Recognition: A third kind of recognition that can be used is voice recognition, which
is also very helpful. The pitch, voice modulation and tone, among other things, are taken into
consideration and stored in the dataset. Regarding the security of the system, it is not
necessarily a great choice, since two different people can have similar voices; the model then
has to deal with a lot of problems, which is why it is not often used on its own for
recognition. Accuracy can also be hindered by the presence of noise, or by aging or illness.
• DNA: DNA might be the most unique kind of authentication, since it is the most trusted
metric for uniquely identifying an individual. Thus, security is high and it can be used for
both identification and verification.
Behavioural Biometrics:
Here the traits that are measured relate to the behaviour patterns of the individual, which can
be a great source of authentication.
• Signature: “Signature might just be one of the most used metrics for authentication. They
are used to verify checks by matching the signature of the check against the signature present
in the database. Signature tablets and special pens are used to compare the signatures”. The
duration required to write the signature can also be used to increase accuracy. Signatures are
mostly used for verification.
• Keystroke Dynamics: This technique measures the behaviour of a person when typing on
a keyboard. Some of the characteristics taken into account are:
1. Typing speed: how fast the person types.
2. Frequency of errors: how often errors are committed.
3. Duration of key depressions.
CHAPTER 2: LITERATURE SURVEY
2.1 Face Tracking
Face tracking refers to identifying the features which are then used to detect a Face In this case
the example method includes the receiving or we can say that it gets the first image and the
second images of a face of a user who is being taken into consideration, where one or both of
the images which were used to sort of look for a match have been granted a match by the facial
recognition system which also proofs the correct working of the system. “The technique
includes taking out a second sub- image coming from the second image, where the second sub-
image includes a representation of the at least one corresponding facial landmark, detecting a
facial gesture by determining whether a sufficient difference exists between the second sub -
image and first sub-image to indicate the facial gesture, and determining, based on detecting
the facial gesture, whether to deny authentication to the user with respect to accessing
functionalities controlled by the computing” [1]
2.2 Mechanisms of Human Facial Recognition
This paper presents an extension and a new perception of the author's theory of human visual
information processing. Several indispensable techniques are involved: encoding of visual
images into neural patterns, detection of simple facial features, size standardization, and
reduction of the neural patterns in dimensionality [2].
“The logical (computational) role suggested for the primary visual cortex has several
components: size standardization, size reduction, and object extraction”. The result of
processing by the primary visual cortex, it is suggested, is a neural encoding of the visual
pattern at a size suitable for storage. “(In this context, object extraction is the isolation of
regions in the visual field having the same color, texture, or spatial extent.)” It is shown in
detail how the topology of the mapping from retina to cortex, the connections between retina,
lateral geniculate bodies and primary visual cortex, and the local structure of the cortex itself
may combine to encode the visual patterns. Aspects of this theory are illustrated graphically
with human faces as the primary stimulus. However, the theory is not limited to facial
recognition but pertains to Gestalt recognition of any class of familiar objects or scenes [2].
2.3 Eye Spacing Measurement for Facial Recognition
Few procedures to computerized facial consciousness have employed geometric size of
attribute points of a human face. Eye spacing dimension has been recognized as an essential
step in reaching this goal. Measurement of spacing has been made by means of software of the
Hough radically change method to discover the occasion of a round form and of an ellipsoidal
form which approximate the perimeter of the iris and each the perimeter of the sclera and the
form of the place under the eyebrows respectively. Both gradient magnitude and gradient
direction were used to handle the noise contaminating the feature space. “Results of this
application indicate that measurement of the spacing by detection of the iris is the most
accurate of these three methods with measurement by detection of the position of the eyebrows
the least accurate. However, measurement by detection of the eyebrows' position is the least
constrained method. Application of these strategies has led to size of a attribute function of the
human face with adequate accuracy to advantage later inclusion in a full bundle for
computerized facial consciousness”. [3].
2.4 A direct LDA algorithm for high-dimensional data with application
to face recognition
“Linear discriminant analysis (LDA) has been successfully used as a dimensionality
reduction technique in many classification problems, such as speech recognition, face
recognition, and multimedia information retrieval”. The objective is to find a projection A
that maximizes the ratio of between-class scatter S_b against within-class scatter S_w
(Fisher's criterion) [4]:

A = arg max |A^T S_b A| / |A^T S_w A|
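As an illustration of Fisher's criterion only (not of the paper's direct LDA algorithm, which additionally copes with the singular within-class scatter that arises in high dimensions), a minimal NumPy sketch that evaluates the criterion for one candidate projection direction might look like this:

import numpy as np

def fisher_criterion(X, y, w):
    # Ratio of between-class to within-class scatter along direction w.
    d = X.shape[1]
    Sb = np.zeros((d, d))                  # between-class scatter S_b
    Sw = np.zeros((d, d))                  # within-class scatter S_w
    mean_all = X.mean(axis=0)
    for c in np.unique(y):
        Xc = X[y == c]                     # samples of class c
        mc = Xc.mean(axis=0)
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
        Sw += (Xc - mc).T @ (Xc - mc)
    return (w @ Sb @ w) / (w @ Sw @ w)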
CHAPTER 3: SYSTEM DEVELOPMENT
The Viola–Jones algorithm is a widely used mechanism for object detection. The main
property of this algorithm is that training is slow, but detection is fast. This algorithm uses
Haar basis feature filters, so it does not use multiplications.
The efficiency of the Viola–Jones algorithm can be significantly increased by first
generating the integral image.
Detection happens inside a detection window. A minimum and maximum window size is
chosen, and for each size a sliding step size is chosen. Then the detection window is moved
across the image as follows:
1. Set the minimum window size, and the sliding step corresponding to that size.
2. For the chosen window size, slide the window vertically and horizontally with the same
step. At each step, a set of N face recognition filters is applied. If one filter gives a positive
answer, a face is detected in the current window.
3. If the size of the window is the maximum size, stop the procedure. Otherwise increase the
size of the window and the corresponding sliding step to the next chosen size and go to
step 2.
Each face recognition filter (from the set of N filters) contains a set of cascade-connected
classifiers. Each classifier looks at a rectangular subset of the detection window and
determines if it looks like a face. If it does, the next classifier is applied. If all classifiers give
a positive answer, the filter gives a positive answer and the face is recognized. Otherwise the
next filter in the set of N filters is run.
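The multi-scale sliding-window procedure above can be sketched as follows; classify_window is a hypothetical placeholder for the cascade of N filters, and the rule for the sliding step is an assumption made for illustration:

def detect_faces(image, min_size, max_size, scale_factor=1.25):
    detections = []
    size = min_size
    while size <= max_size:
        step = max(1, size // 10)            # sliding step grows with window size
        for y in range(0, image.shape[0] - size + 1, step):
            for x in range(0, image.shape[1] - size + 1, step):
                window = image[y:y + size, x:x + size]
                if classify_window(window):  # step 2: a filter gives a positive answer
                    detections.append((x, y, size))
        size = int(size * scale_factor)      # step 3: next window size
    return detections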
Each classifier is composed of Haar feature extractors (weak classifiers). Each Haar feature is
the weighted sum of 2D integrals of small rectangular areas attached to each other. The
weights may take values ±1. Figure 3.2 shows examples of Haar features relative to the
enclosing detection window. Gray areas have a positive weight and white areas have a
negative weight. Haar feature extractors are scaled with respect to the detection window size.
Figure 3.2: Classifiers
Figure 3.3
Now, all possible sizes and locations of each kernel are used to calculate lots of features.
(Just imagine how much computation it needs: even a 24x24 window results in over 160,000
features.) For each feature calculation, we need to find the sum of the pixels under the white
and black rectangles. To solve this, the authors introduced the integral image. However large
your image, it reduces the calculation of the sum over any rectangle to an operation involving
just four pixels. Nice, isn't it? It makes things super-fast.
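A minimal NumPy sketch of the integral image and the four-pixel rectangle sum might look like this:

import numpy as np

def integral_image(img):
    # ii[y, x] holds the sum of all pixels in img[0:y+1, 0:x+1].
    return img.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, y0, x0, y1, x1):
    # Sum over img[y0:y1+1, x0:x1+1] using only four table lookups.
    total = ii[y1, x1]
    if y0 > 0:
        total -= ii[y0 - 1, x1]
    if x0 > 0:
        total -= ii[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total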
But among all these features we calculated, most of them are irrelevant. For example,
consider the image below. The top row shows two good features. The first feature selected
seems to focus on the property that the region of the eyes is often darker than the region of
the nose and cheeks. The second feature selected relies on the property that the eyes are
darker than the bridge of the nose. But the same windows applied to cheeks or any other
place are irrelevant. So how do we select the best features out of 160,000+ features? It is
achieved by Adaboost.
Figure 3.4
The final classifier is a weighted sum of these weak classifiers. They are called weak because
alone they can't classify the image, but together with others they form a strong classifier. The
paper says even 200 features provide detection with 95% accuracy. Their final setup had
around 6,000 features. (Imagine a reduction from 160,000+ features to 6,000 features: that is
a big gain.) So now you take an image, take each 24x24 window, and apply 6,000 features to
it to check if it is a face or not. Isn't that a little inefficient and time consuming? Yes, it is.
The authors have a good solution for that. In an image, most of the image is non-face region.
So it is a better idea to have a simple method to check whether a window is not a face region.
If it is not, discard it in a single shot and don't process it again. Instead, focus on regions
where there can be a face. This way, we spend more time checking possible face regions. For
this they introduced the concept of a Cascade of Classifiers. Instead of applying all 6,000
features on a window, the features are grouped into different stages of classifiers and applied
one by one. (Normally the first few stages contain very few features.) If a window fails the
first stage, discard it; we don't consider the remaining features on it. If it passes, apply the
second stage of features and continue the process. A window which passes all stages is a face
region. The authors' detector had 6,000+ features in 38 stages, with 1, 10, 25, 25 and 50
features in the first five stages. (The two features in the image above are actually obtained as
the best two features from Adaboost.)
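As a concrete illustration, OpenCV ships pretrained Haar cascades built with exactly this staged structure. A minimal sketch using the stock frontal-face cascade might look like this (cv2.data.haarcascades is available in recent opencv-python builds; the image file names are placeholders):

import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# scaleFactor controls the image pyramid; minNeighbors is how many
# overlapping detections are needed before a face is accepted.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
cv2.imwrite("detected.jpg", img)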
LIVE CAM FEED:
Figure 3.5: faces.py file
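Since the faces.py listing appears only as a screenshot, a minimal sketch of such a live webcam detection loop, reusing the Haar cascade from the previous section, might look like this:

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)                     # 0 = default webcam

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Live Feed", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):     # press q to quit
        break

cap.release()
cv2.destroyAllWindows()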
CHAPTER 4: TRAINING AND TESTING
4.1 TRAINING IN OPENCV
In OpenCV, training refers to providing a recognizer algorithm with training data to learn
from. The trainer uses the LBPH algorithm to convert the image cells to histograms; by
computing the values of all cells and concatenating the histograms, feature vectors can be
obtained. Images can be classified by processing them with an ID attached. Input images are
classified using the same process and compared with the dataset, and a distance is obtained.
By setting a threshold, it can be determined whether it is a known or unknown face.
Eigenface and Fisherface compute the dominant features of the whole training set, while
LBPH analyses faces individually.
To do so, firstly, a dataset is created. You can either create your own dataset or start with one
of the available face databases:
• Yale Face Database
• AT&T Face Database
The .xml or .yml configuration file is made from the several features extracted from your
dataset with the help of the FaceRecognizer class and stored in the form of feature vectors.
cv2.face.createEigenFaceRecognizer()
1. Takes in the number of components for the PCA for creating Eigenfaces. The OpenCV
documentation mentions that 80 can provide satisfactory reconstruction capabilities.
2. Takes in the threshold for recognising faces. If the distance to the likeliest Eigenface is
above this threshold, the function will return -1, which can be used to state that the face is
unrecognisable.
cv2.face.createFisherFaceRecognizer()
1. The first argument is the number of components for the LDA for the creation of Fisher
faces. OpenCV mentions it should be kept 0 if uncertain.
2. Similar to the Eigenface threshold. Returns -1 if the threshold is passed.
cv2.face.createLBPHFaceRecognizer()
1. The radius from the centre pixel used to build the local binary pattern.
2. The number of sample points used to build the pattern. Having a considerable number will
slow down the computer.
3. The number of cells to be created along the X axis.
4. The number of cells to be created along the Y axis.
5. A threshold value similar to Eigenface and Fisherface; -1 is returned if the threshold is
passed.
Recogniser objects are created, and images are imported, resized, converted into numpy
arrays and stored in a vector. The ID of each image is gathered by splitting the file name and
stored in another vector. By using FaceRecognizer.train(NumpyImage, ID), all three of the
objects are trained. It must be noted that resizing the images is required only for Eigenface
and Fisherface, not for LBPH. The configuration model is saved as XML using the function
FaceRecognizer.save(FileName).
The stored images are imported, converted to grayscale and saved with IDs in two lists with
the same indexes. FaceRecognizer objects are created using the FaceRecognizer class.
4.3 .train() FUNCTION
Trains a Face Recognizer with given data and associated labels.
Parameters:
src: the training images, that is, the faces you want to learn. The data has to be given as a
vector<Mat>.
labels: the labels corresponding to the images, given as a vector<int> or an equivalent
integer type.
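A minimal usage sketch with two dummy grayscale images (the factory name cv2.face.LBPHFaceRecognizer_create is the modern spelling; older OpenCV 3.x builds expose it as cv2.face.createLBPHFaceRecognizer):

import cv2
import numpy as np

# Two dummy 100x100 "faces" with labels 1 and 2, purely for illustration.
faces = [np.zeros((100, 100), dtype=np.uint8),
         np.full((100, 100), 255, dtype=np.uint8)]
labels = np.array([1, 2])

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(faces, labels)    # the images and their associated labels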
4.4 CODE
Given below is the code for creating a .yml file, that is, the configuration model that stores
features extracted from the dataset using the FaceRecognizer class. It is stored in a folder
named ‘recognizer’ under the name ‘training DataMall’.
DATASET:
This is the code used to create a dataset. It will turn on the camera and take a number of
pictures over a few seconds. Given below is the code for face_dataset.py.
Figure 4.1: Code snippet for the dataset
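Since the snippet appears only as a screenshot, a minimal sketch of such a capture script is given here; the User.<id>.<count>.jpg naming scheme is an assumption chosen to match the training sketch later in this chapter:

import os
import cv2

os.makedirs("dataset", exist_ok=True)
face_id = input("Enter a numeric user ID: ")
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)
count = 0

while count < 30:                             # take 30 samples per user
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        count += 1
        cv2.imwrite("dataset/User.{}.{}.jpg".format(face_id, count),
                    gray[y:y + h, x:x + w])   # save the cropped face only

cap.release()
cv2.destroyAllWindows()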
OUTPUT:
After running the dataset code, we get a number of pictures in a folder named ‘dataset’.
These photos will now be used for training; the more pictures there are, the greater the
accuracy of the trainer.
TRAINING:
This is the code that is used to train the recognizer and generate the trainer file.
This is the file that is created after we run the training code. It takes all the images from the
dataset that we created previously and uses them to create a file named ‘trainer’, which is
further used for recognition.
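Since the trainer listing also appears only as a screenshot, a hedged sketch of such a training script (the folder layout and file names are assumptions) might look like this:

import os
import cv2
import numpy as np
from PIL import Image

def load_dataset(path="dataset"):
    # File names are assumed to follow User.<id>.<count>.jpg, so the label
    # is recovered by splitting the file name, as described earlier.
    faces, ids = [], []
    for name in os.listdir(path):
        img = Image.open(os.path.join(path, name)).convert("L")  # grayscale
        faces.append(np.array(img, dtype=np.uint8))
        ids.append(int(name.split(".")[1]))
    return faces, np.array(ids)

faces, ids = load_dataset()
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(faces, ids)
os.makedirs("recognizer", exist_ok=True)
recognizer.save("recognizer/trainingData.yml")   # the configuration model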
It doesn't look like a powerful interface at first sight. But: every FaceRecognizer is an
Algorithm, so you can easily get/set all model internals (if allowed by the implementation).
Algorithm is a relatively new OpenCV concept, which has been available since the 2.4
release.
• Setting/retrieving algorithm parameters by name. If you have used the video capturing
functionality from the OpenCV highgui module, you are probably familiar with
cvSetCaptureProperty, cvGetCaptureProperty, VideoCapture::set and VideoCapture::get.
Algorithm provides similar methods where instead of integer ids you specify the parameter
names as text strings. See Algorithm::set and Algorithm::get for details.
• Reading and writing parameters from/to XML or YAML files. Every Algorithm derivative
can store all its parameters and then read them back. There is no need to re-implement it each
time.
Moreover, every FaceRecognizer supports:
• Training of a FaceRecognizer with FaceRecognizer.train on a given set of images (your face
database!).
• Prediction of a given sample image, that means a face. The image is given as a Mat.
• Loading/saving the model state from/to a given XML or YAML file.
• Setting/getting labels info, which is stored as a string. String labels info is useful for
keeping names of the recognized people.
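A minimal sketch of loading a saved model back and predicting a label (read is the OpenCV 3.3+/4.x method name, with load used in some older builds; the file names are placeholders):

import cv2

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("recognizer/trainingData.yml")            # load the saved model

test = cv2.imread("test_face.jpg", cv2.IMREAD_GRAYSCALE)
label, distance = recognizer.predict(test)                # smaller distance = closer match
print("Predicted ID {} (distance {:.2f})".format(label, distance))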
APPLICATIONS
• Security: Face recognition can help in developing security measures, for example the
unlocking of a safe using facial recognition.
• Attendance Systems: Face recognition can be used to train on a set of users in order to
create and implement an automatic attendance system that recognizes the face of each
individual and marks their attendance.
• Access: Face detection can be used to control access to sensitive information like your bank
account, and it can also be used to authorize payments.
• Mobile Unlocking: This feature has taken the mobile phone industry by storm, and almost
every smartphone manufacturing company has its flagship smartphones unlocked using face
recognition. Apple’s FaceID is an excellent example.
• Law Enforcement: This is a rather interesting way of using face detection and face
recognition, as it can be used to assess the features of a suspect to see whether they are being
truthful in their statements or not.
• Healthcare: Face recognition and detection can be used in the healthcare sector to assess
the illness of a patient by reading their facial features.
CHAPTER 5: CONCLUSION AND FUTURE SCOPE
5.1 Future Scope
• Government/Identity Management: Governments all around the world are using face
recognition systems to identify civilians. America has one of the largest face databases in the
world, containing data on about 117 million people.
• Emotion & Sentiment Analysis: Face detection and recognition have brought us closer to
the technology of automated psyche evaluation, as systems nowadays can judge precise
emotions frame by frame in order to evaluate the psyche.
• Authentication Systems: Various devices like mobile phones or even ATMs work using
facial recognition, thus making access or verification quicker and hassle-free.
• Full Automation: This technology helps us become fully automated, as little to no effort
is required for verification using facial recognition.
• High Accuracy: Face detection and recognition systems these days have developed very
high accuracy, can be trained using very small datasets, and their false acceptance rates have
dropped significantly.
5.2 Limitations
• Data Storage: Extensive data storage is required for creating, training and maintaining big
face databases, which is not always feasible.
• Camera Angle: The relative angle of the target’s face to the camera impacts the
recognition rate drastically. Conditions may not always be suitable, creating a major
drawback.
CONCLUSION
Facial detection and recognition systems are gaining a lot of popularity these days. Most
flagship smartphones of major mobile phone manufacturers use face recognition as the means
to provide access to the user.
This project report explains the implementation of face detection and face recognition using
OpenCV with Python and also lays out the basic information needed to develop face detection
and face recognition software. The goal of increasing the accuracy of this project will always
remain, and new configurations and different algorithms will be tested to obtain better results.
In this project, the approach we used was Local Binary Pattern Histograms, which are part of
the FaceRecognizer class of OpenCV.
References
[1] Schneiderman, U.S. Patent No. 8,457,367, United States of America, 2013.
[2] R. J. Baron, “Mechanisms of human facial recognition,” International Journal of Man-
Machine Studies.
[3] M. Nixon, “Eye Spacing Measurement for Facial Recognition,” International Society
for Optics and Photonics, vol. 575, 19 December 1985.
[4] H. Yu and J. Yang, “A direct LDA algorithm for high-dimensional data—with
application to face recognition,” 2001.
[5] M. R. Mesbahi, A. M. Rahmani, and M. Hosseinzadeh, “Reliability and high availability
in cloud computing environments: a guide roadmap,” vol. 8, p. 20, 2018.
[6] I. Alsmadi and H. Najadat, “Evaluating the adjustment of software fault behaviour with
dataset attributes dependent on categorical correlation,” Advances in Engineering
Software, vol. 42, no. 8, pp. 535-546, 2011.
[7] S. Chatterjee and A. Roy, “Prediction of web software faults in a fuzzy environment
using the MODULO-M multivariate overlapping fuzzy clustering algorithm and a newly
proposed updated prediction algorithm,” Appl. Soft Comput., vol. 22, no. 3, pp. 372-396,
2014.
[8] C. Jin and S.-W. Jin, “Software fault-proneness prediction using a hybrid artificial neural
network and quantum particle swarm optimization,” Applied Soft Computing, vol. 35,
no. 10, pp. 717-725, 2015.