Face Filter Using OpenCV
TRAINING CERTIFICATE
Industrial Training in Face Filter System (based on Machine Learning with Python)
ACKNOWLEDGEMENT
The effort that I have put into this report would not have been possible without the support and help of many individuals. I would like to extend my sincere thanks to all of them.
I extend my gratitude to Dr. Sunita Yadav, HoD CSE, for providing excellent infrastructure and an environment that laid a strong foundation for my professional life.
I would like to especially thank Ms. Shiva Tyagi and Ms. Aarti Mishra for being a source of support, advice and guidance in the documentation and standardization of this training report.
I would like to thank the T&P Department for supporting us and providing valuable guidance in selecting the best options for industrial training.
I would also like to thank Mr. Aman Chaudhary, Trainer (Python and ML), Akatva (TCS TRAINING PARTNER), for the positive attitude he showed towards my work and for his valuable help and guidance, which supported me during my project.
ARSALAAN ALI
1602710801
TABLE OF CONTENTS
Acknowledgement
Table of Contents
List of Figures
Chapter 1: Company Profile
Chapter 2: Introduction
Chapter 3: Project Analysis
Chapter 4: Snapshots and Working
Chapter 5: Limitations and Future Enhancements
Chapter 6: Conclusion
Chapter 7: References
LIST OF FIGURES
CHAPTER 1
COMPANY PROFILE
Akatva is a TCS iON Training Partner focused on providing high-end, high-quality training on cutting-edge technologies such as Java, Python, Big Data & Hadoop, MySQL, Microsoft .NET, PHP, Android and many more to the students who come to us.
Our principal, TCS iON, with its vast experience and expertise in digital and other technology areas on a global scale, provides the pedagogy and curricula of all the courses run in our Training Centre. These trainings are delivered under the strict quality supervision of TCS iON by "TCS iON Certified" faculty, with the technical infrastructure needed to provide the best training to the students.
The driving forces behind Akatva are educationists and academicians with a genuine desire to uplift their students in every possible manner. They focus on the employability of the students through quality training, to help them stand on their own feet in life. Akatva TCS iON is the outcome of this desire to provide the best training in the IT and digital sphere, and it is one of the leading training hubs in Noida today.
To help its trained students, Akatva has a full-fledged Placement Division that takes care of candidates' soft-skill training needs before they face interviews with recruiters.
Software development training: for freshers as well as for those who seek advanced-level training. The focus is on making the trainees employable in a variety of cutting-edge technologies.
Instructor-led campus: we are an "instructor-led campus", meaning students receive hands-on guidance for optimum output.
Workshops and placement service: Akatva has a central placement department that focuses on providing 100 percent placement assistance to all students. It conducts tailor-made workshops that enhance knowledge, competency, efficiency and employability.
CHAPTER 2
INTRODUCTION
Face recognition is an easy task for humans. Experiments have shown that even one- to three-day-old babies are able to distinguish between known faces. So how hard could it be for a computer? It turns out we still know little about how humans recognise faces. Are inner features (eyes, nose, mouth) or outer features (head shape, hairline) used for successful face recognition? How do we analyze an image, and how does the brain encode it? David Hubel and Torsten Wiesel showed that our brain has specialized nerve cells responding to specific local features of a scene, such as lines, edges, angles or movement. Since we don't see the world as scattered pieces, our visual cortex must somehow combine these different sources of information into useful patterns. Automatic face recognition is all about extracting such meaningful features from an image, putting them into a useful representation and performing some kind of classification on them.
Face recognition based on the geometric features of a face is probably the most intuitive approach. In one of the first automated face recognition systems, marker points (positions of the eyes, ears, nose, ...) were used to build a feature vector (distances between the points, angles between them, ...). Recognition was performed by calculating the Euclidean distance between the feature vectors of a probe image and a reference image. Such a method is robust against changes in illumination by its nature, but has a huge drawback: accurate registration of the marker points is complicated, even with state-of-the-art algorithms. A 22-dimensional feature vector was used, and experiments on large datasets have shown that geometric features alone may not carry enough information for face recognition.
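As a rough illustration of this approach (the feature values below are made up purely for the example, not taken from any real system), the comparison of two geometric feature vectors can be sketched in Python as:

import numpy as np

# Hypothetical marker-point features for a probe and a reference face,
# e.g. distances between the eyes, nose and mouth and angles between them.
probe = np.array([42.0, 31.5, 18.2, 0.62])
reference = np.array([40.8, 33.0, 17.9, 0.65])

# Euclidean distance between the two feature vectors; the smaller the
# distance, the more likely both images show the same person.
distance = np.linalg.norm(probe - reference)
print(distance)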
OpenCV (Open Source Computer Vision Library) is released under a BSD license and hence it’s
free for both academic and commercial use. It has C++, Python and Java interfaces and supports
Windows, Linux, Mac OS, iOS and Android. OpenCV was designed for computational efficiency
and with a strong focus on real-time applications. Written in optimized C/C++, the library can take
advantage of multi-core processing. Enabled with OpenCL, it can take advantage of the hardware
acceleration of the underlying heterogeneous compute platform.
Adopted all around the world, OpenCV has a user community of more than 47 thousand people and an estimated number of downloads exceeding 14 million. Usage ranges from interactive art to mine inspection, stitching maps on the web and advanced robotics.
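A minimal sketch of using the OpenCV Python bindings, just to confirm the installed version and whether OpenCL acceleration is available on the machine (the image path is a placeholder):

import cv2

print(cv2.__version__)          # installed OpenCV version
print(cv2.ocl.haveOpenCL())     # True if OpenCL acceleration is available

# Read an image and convert it to grayscale, the form most OpenCV
# face-analysis functions expect. "sample.jpg" is a placeholder path.
img = cv2.imread("sample.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)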
SCOPE OF APPLICATION
Retail
Large retailers are using facial recognition to instantly recognize customers and present offers. Augmented with camera footage, it can also be used to catch shoplifters. The entertainment industry, casinos and theme parks have also caught on to its uses. Companies like NTechLab and Kairos use face recognition technology to provide customer analytics.
Banking
Banks are now looking to introduce face recognition in mobile apps and at ATMs for identification. China is already seeing an application where customers withdrawing money from ATMs in Macau must punch in their PIN and also stare into a camera for six seconds so that facial-recognition software can verify their identity and help monitor transactions.
Social Media
Social media platforms have adopted facial recognition capabilities to diversify their
functionalities in order to attract a wider user base amidst stiff competition from different
applications.
Face ID
1. Policing
The Australian Border Force and New Zealand Customs Services have set up an
automated border processing system called SmartGate that uses face recognition, which
compares the face of the traveller with the data in the e-passport microchip.
2. National Security
In 2017, the Time & Attendance company ClockedIn released facial recognition as a form of attendance tracking for businesses and organisations looking for a more automated way of keeping track of hours worked, as well as for security and health-and-safety control.
CHAPTER 3
PROJECT ANALYSIS
A general statement of the face recognition problem (in computer vision) can be formulated as
follows: given still or video images of a scene, identify or verify one or more persons in the scene
using a stored database of faces. Facial recognition generally involves two stages:
Face detection, where a photo is searched to find a face, and the image is then processed to crop and extract the person's face for easier recognition.
Face recognition, where the detected and processed face is compared to a database of known faces to decide who that person is. Since 2002, face detection can be performed fairly easily and reliably with Intel's open-source framework OpenCV. This framework has an inbuilt face detector that works on roughly 90-95% of clear photos of a person looking straight at the camera. However, detecting a person's face when that person is viewed from an angle is usually harder, sometimes requiring 3D head pose estimation. A face can also be much harder to detect if the image lacks proper brightness, if there is strong contrast in the shadows on the face, if the picture is blurry, or if the person is wearing glasses.
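A minimal sketch of this detection step using the Haar cascade bundled with OpenCV (the image path is a placeholder, and scaleFactor/minNeighbors are common starting values rather than tuned settings):

import cv2

# Load the frontal-face Haar cascade shipped with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("person.jpg")                  # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect faces and draw a rectangle around each one found.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", img)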
Face recognition, however, is much less reliable than face detection, with an accuracy of 30-70% in general. Face recognition has been a strong field of research since the 1990s, but it is still a long way from being a reliable method of user authentication. More and more techniques are being developed each year. The Eigenface technique is considered the simplest method of accurate face recognition, but many other (much more complicated) methods or combinations of multiple methods are slightly more accurate.
The EigenFaces face recognizer looks at all the training images of all the persons as a whole and tries to extract the components which are important and useful (the components that capture the maximum variance/change), discarding the rest. This way it not only extracts the important components from the training data but also saves memory by discarding the less important components. The important components it extracts are called principal components, and they can themselves be visualised as images extracted from the list of training faces.
So this is how the EigenFaces face recognizer trains itself (by extracting principal components). Remember, it also keeps a record of which principal component belongs to which person. One thing to note is that the Eigenfaces algorithm also considers illumination as an important component.
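A minimal sketch of training the EigenFaces recognizer with OpenCV (the cv2.face module requires the opencv-contrib-python package; the file names and labels below are placeholders, and all images must be grayscale and the same size):

import cv2
import numpy as np

# Placeholder training images: two of person 1 and one of person 2.
paths = ["person1_a.jpg", "person1_b.jpg", "person2_a.jpg"]
faces = [cv2.resize(cv2.cvtColor(cv2.imread(p), cv2.COLOR_BGR2GRAY), (200, 200))
         for p in paths]
labels = np.array([1, 1, 2])

recognizer = cv2.face.EigenFaceRecognizer_create()
recognizer.train(faces, labels)   # extracts the principal components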
FisherFaces Face Recognizer
This algorithm is an improved version of the EigenFaces face recognizer. The Eigenfaces recognizer looks at all the training faces of all the persons at once and finds principal components from all of them combined. By capturing principal components from all of them combined, you are not focusing on the features that discriminate one person from another, but on the features that represent all the persons in the training data as a whole.
This approach has drawbacks. For example, images with sharp changes (such as lighting changes, which are not a useful feature at all) may dominate the rest of the images, and you may end up with features that come from an external source such as light and are not useful for discrimination at all. In the end, your principal components will represent lighting changes rather than actual facial features.
The Fisherfaces algorithm, instead of extracting useful features that represent all the faces of all the persons, extracts useful features that discriminate one person from the others. This way the features of one person do not dominate over the others, and you are left with features that discriminate one person from another.
The features extracted using the Fisherfaces algorithm can be visualised as images in the same way. One thing to note here is that even with the Fisherfaces algorithm, if multiple persons have images with sharp changes due to external sources such as light, those changes will dominate over the other features and affect recognition accuracy.
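Training the Fisherfaces recognizer with OpenCV looks almost identical; a small sketch reusing the faces and labels from the EigenFaces sketch above (Fisherfaces needs images of at least two different persons, since it separates classes rather than maximising overall variance):

import cv2

recognizer = cv2.face.FisherFaceRecognizer_create()
recognizer.train(faces, labels)   # faces/labels as in the EigenFaces sketch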
LBPH Face Recognizer
First of all, we need to define the parameters (radius, neighbors, grid x and grid y) using the Parameters structure from the lbph package. Then we need to call the Init function, passing the structure with the parameters. If we do not set the parameters, the default parameters explained in the Parameters section are used.
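The description above follows the interface of a standalone LBPH package; in OpenCV's Python bindings the same four parameters are simply passed to the constructor. A small sketch, using OpenCV's default values:

import cv2

# radius, neighbors, grid_x and grid_y are the same parameters described
# above; the values shown here are OpenCV's defaults.
recognizer = cv2.face.LBPHFaceRecognizer_create(
    radius=1, neighbors=8, grid_x=8, grid_y=8)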
Secondly, we need to train the algorithm. To do that we just need to call the Train function, passing a slice of images and a slice of labels as parameters. All images must have the same size. The labels are used as IDs for the images, so if you have more than one image of the same texture/subject, the labels should be the same.
The Train function will first check that all images have the same size. If at least one image does not have the same size, the Train function will return an error and the algorithm will not be trained.
Then the Train function applies the basic LBP operation, changing each pixel based on its neighbors using the radius defined by the user. In the basic LBP operation (8 neighbors, radius 1), each pixel is compared with its neighbors and the comparison results are packed into a binary code.
After applying the LBP operation, we extract the histogram of each image based on the number of grids (X and Y) passed as parameters. After extracting the histogram of each region, we concatenate all the histograms into a new one, which is used to represent the image (a rough sketch of both steps follows the sample histogram below).
Fig 3.4. Sample Histogram
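To make the two steps above concrete, here is a rough NumPy sketch of the basic LBP operation (8 neighbours, radius 1) and of the grid-wise histogram extraction. This is a simplified illustration, not the exact implementation used by any library:

import numpy as np

def lbp_map(gray):
    # Basic LBP: compare each pixel with its 8 neighbours (radius 1) and
    # pack the comparison results into an 8-bit code.
    h, w = gray.shape
    center = gray[1:-1, 1:-1]
    code = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbour >= center).astype(np.uint8) << bit
    return code

def grid_histogram(code, grid_x=8, grid_y=8):
    # Split the LBP map into grid_x * grid_y regions, take a 256-bin
    # histogram of each region and concatenate them into one descriptor.
    h, w = code.shape
    hists = []
    for gy in range(grid_y):
        for gx in range(grid_x):
            region = code[gy * h // grid_y:(gy + 1) * h // grid_y,
                          gx * w // grid_x:(gx + 1) * w // grid_x]
            hist, _ = np.histogram(region, bins=256, range=(0, 256))
            hists.append(hist)
    return np.concatenate(hists)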
The images, labels and histograms are stored in a data structure so that we can compare them all to a new image in the Predict function.
Now the algorithm is trained and we can predict a new image.
To predict a new image we just need to call the Predict function, passing the image as a parameter. The Predict function will extract the histogram from the new image, compare it to the histograms stored in the data structure, and return the label and distance corresponding to the closest histogram if no error has occurred. Note: a histogram distance metric is used by default to compare the histograms. The closer the distance is to zero, the greater the confidence.
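A small sketch of the prediction call with OpenCV's LBPH recognizer, assuming a recognizer trained as in the earlier sketches and a grayscale test face in test_face (a placeholder variable):

# predict() returns the label of the closest match and the distance between
# histograms; the closer the distance is to zero, the stronger the match.
label, distance = recognizer.predict(test_face)
print("predicted label:", label, "distance:", distance)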
CHAPTER 4
SNAPSHOTS AND WORKING
Fig 4.1 Training and label subfolders
The test-data folder contains images that we will use to test our face recognizer after it has been successfully trained.
Since the OpenCV face recognizer accepts labels as integers, we need to define a mapping between the integer labels and the persons' actual names, so below we define a mapping of each person's integer label to their respective name.
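One possible mapping, with made-up names purely for illustration (index 0 is left empty because the integer labels start at 1):

# Hypothetical mapping from integer labels to the subjects' names.
subjects = ["", "Person 1", "Person 2"]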
Prepare training data
The OpenCV face recognizer accepts data in a specific format. It accepts two vectors: one vector with the faces of all the persons and a second vector with an integer label for each face, so that when processing a face the recognizer knows which person that particular face belongs to.
For example, suppose we had 2 persons and 2 images for each person:
PERSON-1 PERSON-2
img1 img1
img2 img2
Then the prepare-data step will produce the following face and label vectors:
FACES LABELS
person1_img1_face 1
person1_img2_face 1
person2_img1_face 2
person2_img2_face 2
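A rough sketch of such a preparation step, assuming a hypothetical folder layout training-data/s1, training-data/s2, ... with one subfolder per person (the folder names and the use of the Haar cascade for cropping are assumptions for this sketch, not a prescribed layout):

import os
import cv2

def prepare_training_data(data_folder):
    # Walk subfolders named s1, s2, ... and build the two vectors the
    # OpenCV face recognizer expects: faces and their integer labels.
    faces, labels = [], []
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    for dir_name in sorted(os.listdir(data_folder)):
        if not dir_name.startswith("s"):
            continue
        label = int(dir_name[1:])                       # "s1" -> 1, "s2" -> 2
        for image_name in os.listdir(os.path.join(data_folder, dir_name)):
            img = cv2.imread(os.path.join(data_folder, dir_name, image_name))
            if img is None:
                continue
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            detected = cascade.detectMultiScale(gray, 1.1, 5)
            if len(detected) == 0:
                continue
            (x, y, w, h) = detected[0]                  # first detected face
            faces.append(gray[y:y + h, x:x + w])
            labels.append(label)
    return faces, labels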
Testing
Now that our data is prepared, we can use any of the three face recognizers. Here, using the LBPH recognizer, we are ready to predict the test images and generate the output.
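Putting the pieces together, a sketch of this testing step with the LBPH recognizer (the paths are placeholders; prepare_training_data and subjects refer to the sketches above):

import cv2
import numpy as np

faces, labels = prepare_training_data("training-data")

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(faces, np.array(labels))

# Load a test image, detect and crop the face, then predict its label.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
test_img = cv2.imread("test-data/test1.jpg")            # placeholder path
gray = cv2.cvtColor(test_img, cv2.COLOR_BGR2GRAY)
(x, y, w, h) = cascade.detectMultiScale(gray, 1.1, 5)[0]
label, distance = recognizer.predict(gray[y:y + h, x:x + w])

# Draw the predicted name above the face, as the face filter does.
cv2.rectangle(test_img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.putText(test_img, subjects[label], (x, y - 10),
            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
cv2.imwrite("output.jpg", test_img)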
Output:
CHAPTER 5
LIMITATIONS AND FUTURE ENHANCEMENTS
CHAPTER 6
CONCLUSION
The subject of visual processing of human faces has received attention from philosophers and
scientists for centuries. Generally, feature extraction, discriminant analysis and classifying
criterion are the three basic elements of a face recognition system. The performance and robustness
of face recognition could be enhanced by improving these elements. Feature extraction in the sense
of some linear or nonlinear transform of the data with subsequent feature selection is commonly
used for reducing the dimensionality of facial images so that the extracted features are as representative as possible. This report mainly focuses on the use of the Eigenfaces, Fisherfaces and Local Binary Patterns Histograms (LBPH) approaches to face recognition.
Though still at an early stage, the face filter system can be used to detect multiple faces, either live using a webcam or from a given set of images. It shows your name on your image, giving a real-time feel for how a face is detected by the face-ID features on mobile phones and other systems.
On a personal and moral front, I have understood that one should always enjoy the work one is doing, and only then can one actually be successful. Another lesson is that no matter how senior an employee you are, there will always be someone who needs your guidance and help; consequently, there will also always be someone from whom you can learn and gain something, so never be shy or egoistic about doing so.
Overall, the industrial training proved helpful in enhancing the trainee's practical skills and was a wonderful stimulus for extending theoretical knowledge to real-world applications.
CHAPTER 7
REFERENCES
Various web resources proved vital for the successful completion of the training project. Some of them are mentioned here:
1. https://www.wikipedia.com
2. https://www.opencv.org
3. https://www.icml.cc/
4. https://www.pythonprogramming.net
5. https://www.superdatascience.com
6. https://www.elitedatascience.com
7. https://www.stackoverflow.com