Face Recognition Report
On
“Face Recognition”
Submitted in partial fulfilment of the requirements for the degree of
BACHELOR OF TECHNOLOGY
in Computer Science and Engineering
By
[20E31A0531]
Under the guidance of
Mr. P. Ramesh
Associate Professor, Dept. of CSE
2023-2024
MAHAVEER INSTITUTE OF SCIENCE AND TECHNOLOGY
(Affiliated to JNTU Hyderabad, Approved by AICTE)
Vyasapuri, Bandlaguda, Post: Keshavgiri, Hyderabad-500005
Department of Computer Science and Engineering
CERTIFICATE
ACKNOWLEDGEMENT
I would like to thank Dr. A. P. RAMESH, HOD, Dept. of CSE, for guiding me with his
favourable suggestions to complete my Technical Seminar. I wish to express my special
thanks to the Technical Seminar coordinator, Mr. P. RAMESH, HOD, Dept. of CSE, Mahaveer
Institute of Science and Technology. I express my profound sense of gratitude to Dr.
P. RAMESH, HOD of Computer Science and Engineering, Mahaveer Institute of Science
and Technology, for his support and guidance throughout the Technical Seminar. I extend my
thanks to Dr. V. USHA SHREE, Principal, Mahaveer Institute of Science and Technology,
Hyderabad, for extending her help throughout the duration of this Technical Seminar. I
sincerely acknowledge all the lecturers of the Dept. of IT for their motivation during our
course. I would also like to thank all of our friends for their timely help and encouragement.
INDEX
CHAPTER – 1  Introduction
CHAPTER – 2  History of Face Recognition
CHAPTER – 3  Techniques of Face Recognition
CHAPTER – 4  Types of Face Recognition
CHAPTER – 5  Architecture of Face Recognition
CHAPTER – 6  Applications of Face Recognition
CHAPTER – 7  Current Problems & Future Work
CHAPTER – 8  Summary
ABSTRACT
A face recognition system is one of the biometric information processes; it is easier to
apply and has a larger working range than other biometrics such as fingerprint scanning,
iris scanning, and signature verification. A face recognition system was designed,
implemented and tested at Atılım University, Mechatronics Engineering Department. The
system uses a combination of techniques in two areas: face detection and recognition. Face
detection is performed on live acquired images without any particular application field in
mind. The processes used in the system are white balance correction, skin-like region
segmentation, facial feature extraction and face image extraction on a face candidate.
A face classification method that uses a Feed-Forward Neural Network is then integrated
into the system. The system was tested with a database generated in the laboratory with
26 people, and showed acceptable performance in recognizing faces within the intended
limits. The system is also capable of detecting and recognizing multiple faces in live
acquired images.
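As an illustration of the skin-like region segmentation step mentioned above, the sketch below applies a commonly cited RGB skin-colour rule with NumPy. The thresholds here are an assumption (a widely used heuristic), not necessarily the exact values of the system described in this report.

```python
import numpy as np

def skin_mask(rgb):
    """Classify pixels as skin-like using a simple RGB rule set.

    A common heuristic (R > 95, G > 40, B > 20, channel spread > 15,
    |R - G| > 15, R > G, R > B), often used as a cheap first stage
    before face-candidate extraction.
    """
    rgb = rgb.astype(np.int32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    spread = rgb.max(axis=-1) - rgb.min(axis=-1)
    return ((r > 95) & (g > 40) & (b > 20) & (spread > 15)
            & (np.abs(r - g) > 15) & (r > g) & (r > b))

# Tiny synthetic image: one skin-toned pixel, one blue pixel.
img = np.array([[[200, 120, 90], [30, 40, 200]]], dtype=np.uint8)
mask = skin_mask(img)
print(mask.tolist())  # [[True, False]]
```

The resulting mask would then be cleaned up (e.g. with morphological operations) before extracting face candidates.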
CHAPTER – 1
INTRODUCTION
CHAPTER – 2
HISTORY OF FACE RECOGNITION
Automated facial recognition was pioneered in the 1960s by Woody Bledsoe, Helen
Chan Wolf, and Charles Bisson, whose work focused on teaching computers to recognize
human faces. Their early facial recognition project was dubbed "man-machine" because a
human first needed to establish the coordinates of facial features in a photograph before
they could be used by a computer for recognition. Using a graphics tablet, a human would
pinpoint the coordinates of facial features, such as the pupil centres, the inside and outside
corners of the eyes, and the widow's peak in the hairline. The coordinates were used to
calculate 20 individual distances, including the width of the mouth and of the eyes. A
human could process about 40 pictures an hour, building a database of these computed
distances. A computer would then automatically compare the distances for each
photograph, calculate the difference between the distances, and return the closest records
as a possible match.
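The distance-based matching described above can be sketched in a few lines. The landmark names and coordinates below are hypothetical; the point is the procedure: compute pairwise distances for each face, then return the stored record whose distances differ least from the probe.

```python
import math

def distance_vector(landmarks):
    """Compute all pairwise distances between labelled landmarks,
    mirroring the hand-measured distance records of the 1960s system."""
    pts = list(landmarks.values())
    return [math.dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:]]

def closest_record(probe, database):
    """Return the name whose stored distances differ least from the probe."""
    def diff(name):
        return sum(abs(a - b) for a, b in zip(probe, database[name]))
    return min(database, key=diff)

# Hypothetical landmark coordinates (x, y) for two enrolled people.
db = {
    "alice": distance_vector({"l_pupil": (30, 40), "r_pupil": (70, 40), "mouth": (50, 90)}),
    "bob":   distance_vector({"l_pupil": (28, 42), "r_pupil": (80, 42), "mouth": (54, 98)}),
}
probe = distance_vector({"l_pupil": (31, 40), "r_pupil": (69, 41), "mouth": (50, 89)})
print(closest_record(probe, db))  # alice
```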
In 1993, the Defense Advanced Research Projects Agency (DARPA) and the Army Research
Laboratory (ARL) established the face recognition technology program FERET to develop
"automatic face recognition capabilities" that could be employed in a productive real-life
environment "to assist security, intelligence, and law enforcement personnel in the
performance of their duties." Face recognition systems that had been trialled in research
labs were evaluated, and the FERET tests found that while the performance of existing
automated facial recognition systems varied, a handful of existing methods could viably be
used to recognize faces in still images taken in a controlled environment. The FERET tests
spawned three US companies that sold automated facial recognition systems. Vision
Corporation and Miros Inc were both founded in 1994 by researchers who used the results
of the FERET tests as a selling point. Viisage Technology was established by an identification
card defence contractor in 1996 to commercially exploit the rights to the facial recognition
algorithm developed by Alex Pentland at MIT.
Following the 1993 FERET face-recognition vendor test, the Department of Motor
Vehicles (DMV) offices in West Virginia and New Mexico became the first DMV offices to use
automated facial recognition systems to prevent people from obtaining multiple driving
licenses under different names. Driver's licenses in the United States were at that point a
commonly accepted form of photo identification. DMV offices across the United States were
undergoing a technological upgrade and were in the process of establishing databases of
digital ID photographs. This enabled DMV offices to deploy commercially available facial
recognition systems to search photographs for new driving licenses against the existing DMV
database. DMV offices became one of the first major markets for automated facial
recognition technology and introduced US citizens to facial recognition as a standard
method of identification. The increase of the US prison population in the 1990s prompted
U.S. states to establish connected and automated identification systems that
incorporated digital biometric databases; in some instances these included facial recognition.
In 1999, Minnesota incorporated the facial recognition system FaceIT by Visionics into
a mug shot booking system that allowed police, judges and court officers to track criminals
across the state.
An eigenvector of a linear transformation such as a shear mapping is a direction that the
transformation leaves unchanged. The Viola–Jones algorithm for face detection uses
Haar-like features to locate faces in an image; for example, a Haar feature that resembles
the bridge of the nose can be applied across a candidate face region.
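A minimal sketch of such a Haar-like feature, computed with an integral image (the trick that makes Viola–Jones fast), is shown below on a synthetic patch; the patch values and feature layout are illustrative assumptions.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: rectangle sums in O(1) per query."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] read off the integral image (exclusive end)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0: total -= ii[r0 - 1, c1 - 1]
    if c0 > 0: total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

def nose_bridge_feature(img, r0, c0, h, w):
    """Three-rectangle Haar-like feature: bright central strip (nose bridge)
    minus the two darker strips (eye regions) flanking it."""
    ii = integral_image(img.astype(np.int64))
    third = w // 3
    left   = rect_sum(ii, r0, c0,             r0 + h, c0 + third)
    centre = rect_sum(ii, r0, c0 + third,     r0 + h, c0 + 2 * third)
    right  = rect_sum(ii, r0, c0 + 2 * third, r0 + h, c0 + 3 * third)
    return centre - left - right

# Synthetic 4x6 patch: dark "eyes" (value 10) flanking a bright "bridge" (200).
patch = np.full((4, 6), 10)
patch[:, 2:4] = 200
print(nose_bridge_feature(patch, 0, 0, 4, 6) > 0)  # True: feature fires
```

In the real Viola–Jones detector, thousands of such features are evaluated and combined by a boosted cascade of weak classifiers.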
Until the 1990s, facial recognition systems were developed primarily using photographic
portraits of human faces. Research on reliably locating a face in an image that contains
other objects gained traction in the early 1990s with principal component analysis (PCA).
The PCA method of face detection is also known as Eigenface and was developed by
Matthew Turk and Alex Pentland. Turk and Pentland combined the conceptual approach of
the Karhunen–Loève theorem and factor analysis to develop a linear model. Eigenfaces are
determined based on global and orthogonal features in human faces. A human face is
calculated as a weighted combination of a number of Eigenfaces. Because only a few
Eigenfaces are needed to encode the human faces of a given population, Turk and
Pentland's PCA face detection method greatly reduced the amount of data that had to be
processed to detect a face. Pentland in 1994 defined Eigenface features, including eigen
eyes, eigen mouths and eigen noses, to advance the use of PCA in facial recognition. In
1997, the PCA
Eigenface method of face recognition was improved upon by using linear discriminant
analysis (LDA) to produce Fisherfaces. LDA-based Fisherfaces became the dominant
approach in PCA-feature-based face recognition, while Eigenfaces were also used for face
reconstruction. In these approaches, no global structure of the face linking the facial
features or parts is calculated.
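The Eigenface idea described above, computing principal components of mean-centred face vectors and encoding each face as a small set of weights, can be illustrated with NumPy. The data here are random stand-ins for flattened face images, so only the mechanics, not the recognition quality, are demonstrated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face" vectors: 10 flattened 8x8 images (rows = samples).
faces = rng.normal(size=(10, 64))

# Eigenfaces: principal components of the mean-centred image matrix.
mean_face = faces.mean(axis=0)
centred = faces - mean_face
# SVD of the centred matrix gives the principal directions (rows of vt).
u, s, vt = np.linalg.svd(centred, full_matrices=False)
k = 4                      # keep a few eigenfaces
eigenfaces = vt[:k]        # shape (k, 64)

# Encode every face as k weights instead of 64 pixels.
weights = centred @ eigenfaces.T        # shape (10, k)

# Approximate reconstruction from the compressed representation.
recon = mean_face + weights @ eigenfaces
err = np.linalg.norm(faces - recon) / np.linalg.norm(faces)
print(weights.shape, err < 1.0)
```

Recognition then compares the weight vectors (e.g. by Euclidean distance) rather than raw pixels, which is the data reduction Turk and Pentland exploited.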
Purely feature-based approaches to facial recognition were overtaken in the late 1990s by
the Bochum system, which used Gabor filters to record the face's features and computed
a grid of the face structure to link the features. Christoph von der Malsburg and his research
team at the University of Bochum developed Elastic Bunch Graph Matching in the mid-1990s
to extract a face out of an image using skin segmentation. By 1997, the face detection
method developed by von der Malsburg outperformed most other facial detection systems
on the market. The so-called "Bochum system" of face detection was sold commercially as
ZN-Face to operators of airports and other busy locations. The software was "robust enough
to make identifications from less-than-perfect face views. It can also often see through such
impediments to identification as mustaches, beards, changed hairstyles and glasses—even
sunglasses".
CHAPTER – 3
TECHNIQUES OF FACE RECOGNITION
Traditional
Some face recognition algorithms identify facial features by extracting landmarks, or
features, from an image of the subject's face. For example, an algorithm may analyse the
relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw. These features
are then used to search for other images with matching features.
Other algorithms normalize a gallery of face images and then compress the face data, only
saving the data in the image that is useful for face recognition. A probe image is then
compared with the face data. One of the earliest successful systems is based on template
matching techniques applied to a set of salient facial features, providing a sort of
compressed face representation.
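A toy version of template matching over a salient feature might look like the following, using normalized cross-correlation; the image and template here are synthetic assumptions, standing in for a stored facial-feature template and a probe image.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between an image patch and a template."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(t)
    return float((p * t).sum() / denom) if denom else 0.0

def best_match(image, template):
    """Slide the template over the image; return the top-left position
    with the highest correlation score."""
    th, tw = template.shape
    H, W = image.shape
    scores = {(r, c): ncc(image[r:r + th, c:c + tw], template)
              for r in range(H - th + 1) for c in range(W - tw + 1)}
    return max(scores, key=scores.get)

img = np.zeros((6, 6))
img[2, 3] = 1.0            # a diagonal "feature" at rows 2-3, cols 3-4
img[3, 4] = 1.0
tmpl = np.array([[1.0, 0.0],
                 [0.0, 1.0]])
print(best_match(img, tmpl))  # (2, 3)
```

Normalization by mean and norm is what makes the score robust to uniform brightness and contrast changes between the probe and the stored template.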
Recognition algorithms can be divided into two main approaches: geometric, which looks at
distinguishing features, or photometric, which is a statistical approach that distils an image
into values and compares those values with templates to eliminate variances. Some classify
these algorithms into two broad categories: holistic and feature-based models. The former
attempts to recognize the face in its entirety, while feature-based models subdivide the face
into components and analyse each feature as well as its spatial location with respect to the
other features.
The use of face hallucination techniques improves the performance of high-resolution facial
recognition algorithms and may be used to overcome the inherent limitations of super-
resolution algorithms. Face hallucination techniques are also used to pre-treat imagery in
which faces are disguised: the disguise, such as sunglasses, is removed and the face
hallucination algorithm is applied to the image.
3-dimensional recognition
Three-dimensional face recognition uses 3D sensors to capture information about the shape
of a face. This information is then used to identify distinctive features on the surface of the
face, such as the contours of the eye sockets, nose, and chin. One advantage of 3D face
recognition is that it is not affected by changes in lighting in the way other techniques are,
and it can identify a face from a range of viewing angles, including a profile view. Three-
dimensional data points from a face vastly improve the precision of face recognition.
Thermal cameras
A different way of acquiring input data for face recognition is to use thermal cameras; with
this procedure the camera detects only the shape of the head and ignores accessories such
as glasses, hats, or makeup. Unlike conventional cameras, thermal cameras can capture
facial imagery even in low-light and night-time conditions without using a flash and
exposing the position of the camera. However, databases of thermal face images are
limited. Efforts to build such databases date back to 2004. By 2016, several databases
existed, including the IIITD-PSE and the Notre Dame thermal face databases. Current
thermal face recognition systems are not able to reliably detect a face in a thermal image
taken in an outdoor environment.
In 2018, researchers from the U.S. Army Research Laboratory (ARL) developed a technique
that allows them to match facial imagery obtained with a thermal camera against databases
captured with conventional cameras. Known as a cross-spectrum synthesis method, because
it bridges facial recognition across two different imaging modalities, it synthesizes a single
image by analysing multiple facial regions and details. It consists of a non-linear regression
model that maps a specific thermal image into a corresponding visible facial image, and an
optimization problem that projects the latent projection back into the image space. ARL
scientists have noted that the approach works by combining global information with local
information. According to performance tests conducted at ARL, the multi-region
cross-spectrum synthesis model demonstrated a performance improvement of about 30%
over baseline methods and about 5% over state-of-the-art methods.
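The idea of a learned mapping from the thermal to the visible domain can be illustrated with a toy regression. This is only a stand-in for ARL's far more sophisticated multi-region model: the feature expansion, ridge penalty and synthetic data below are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def poly_features(x):
    """Simple non-linear feature expansion: [x, x^2, 1]."""
    return np.hstack([x, x ** 2, np.ones((x.shape[0], 1))])

# Synthetic paired data: "visible" features are a non-linear function of
# "thermal" features plus a little noise (stand-ins for patch descriptors).
thermal = rng.normal(size=(200, 5))
visible = np.tanh(thermal) @ rng.normal(size=(5, 5)) \
          + 0.01 * rng.normal(size=(200, 5))

# Ridge regression on the expanded features learns the cross-domain map.
phi = poly_features(thermal)
lam = 1e-3
w = np.linalg.solve(phi.T @ phi + lam * np.eye(phi.shape[1]), phi.T @ visible)

pred = poly_features(thermal) @ w
resid = np.linalg.norm(visible - pred) / np.linalg.norm(visible)
print(resid < 0.5)  # the regression captures most of the mapping
```

A matcher would then compare the synthesized "visible" features against a gallery captured with conventional cameras.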
CHAPTER – 4
TYPES OF FACE RECOGNITION
Here the complete design of the composite task was used to measure holistic processing of
unfamiliar faces. In the complete design, the top and bottom halves of the two face
composites on each trial could be either identical or different.
CHAPTER – 5
ARCHITECTURE OF FACE RECOGNITION
Recognition confidences are updated online for each new frame. Additionally, the face
recognition system is able to learn new persons.
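A minimal sketch of such an online recognizer, a running-mean embedding per person that is refined on every frame and can enroll new persons on the fly, might look like this. The class name, threshold and 2-D embeddings are hypothetical stand-ins for the real architecture.

```python
import numpy as np

class OnlineFaceGallery:
    """Minimal online recogniser sketch: running-mean embeddings per person,
    updated every frame, with on-the-fly enrolment of new persons."""

    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.centroids = {}   # name -> mean embedding
        self.counts = {}

    def update(self, name, embedding):
        """Fold a new frame's embedding into the person's running mean."""
        emb = np.asarray(embedding, dtype=float)
        if name not in self.centroids:      # enrol a new person
            self.centroids[name], self.counts[name] = emb.copy(), 1
        else:
            self.counts[name] += 1
            self.centroids[name] += (emb - self.centroids[name]) / self.counts[name]

    def identify(self, embedding):
        """Return the nearest enrolled person within threshold, else 'unknown'."""
        emb = np.asarray(embedding, dtype=float)
        if not self.centroids:
            return "unknown"
        name = min(self.centroids,
                   key=lambda n: np.linalg.norm(self.centroids[n] - emb))
        dist = np.linalg.norm(self.centroids[name] - emb)
        return name if dist <= self.threshold else "unknown"

gallery = OnlineFaceGallery(threshold=1.0)
gallery.update("ayse", [0.0, 0.0])
gallery.update("ayse", [0.2, 0.0])           # estimate refined online
print(gallery.identify([0.1, 0.1]))           # ayse
print(gallery.identify([5.0, 5.0]))           # unknown
```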
CHAPTER – 6
APPLICATIONS OF FACE RECOGNITION
Social media
Founded in 2013, Looksery went on to raise money for its face modification app on
Kickstarter. After successful crowdfunding, Looksery launched in October 2014. The
application allows users to video-chat with others through a special face filter that modifies
their appearance. Image-augmenting applications already on the market, such as Facetune
and Perfect365, were limited to static images, whereas Looksery applied augmented reality
to live video. In late 2015, Snapchat purchased Looksery, which then became its landmark
Lenses feature. Snapchat's filter applications use face detection technology: on the basis of
the facial features identified in an image, a 3D mesh mask is layered over the face.
DeepFace is a deep-learning facial recognition system created by a research group
at Facebook. It identifies human faces in digital images. It employs a nine-layer neural
network with over 120 million connection weights, and was trained on four million images
uploaded by Facebook users. The system is said to be 97% accurate, compared to 85% for
the FBI's Next Generation Identification system.
TikTok's algorithm has been regarded as especially effective, but many were left to wonder
at the exact programming that caused the app to be so effective in guessing the user's
desired content. In June 2020, TikTok released a statement regarding the "For You" page,
and how they recommended videos to users, which did not include facial recognition. In
February 2021, however, TikTok agreed to a $92 million settlement to a US lawsuit which
alleged that the app had used facial recognition in both user videos and its algorithm to
identify age, gender and ethnicity.
ID verification
An emerging use of facial recognition is in ID verification services. Many companies now
operate in this market, providing these services to banks, ICOs, and other e-businesses.
Face recognition has been leveraged as a form of
biometric authentication for various computing platforms and devices; Android 4.0 "Ice
Cream Sandwich" added facial recognition using a smartphone's front camera as a means
of unlocking devices, while Microsoft introduced face recognition login to its Xbox
360 video game console through its Kinect accessory, as well as Windows 10 via its
"Windows Hello" platform (which requires an infrared-illuminated camera).
Face ID
The facial pattern is not accessible to Apple. The system will not work with eyes closed, in
an effort to prevent unauthorized access. The technology learns from changes in a user's
appearance, and therefore works with hats, scarves, glasses, many sunglasses, beards and
makeup. It also works in the dark. This is done using a "Flood Illuminator", a dedicated
infrared flash that projects invisible infrared light onto the user's face so that the system can
properly read the 30,000 facial points.
Healthcare
Facial recognition algorithms can help in diagnosing some diseases using specific
features of the nose, cheeks and other parts of the human face. Relying on developed data
sets, machine learning has been used to identify genetic abnormalities based on facial
dimensions alone. FRT has also been used to verify patients before surgical procedures.
In March 2022, according to a publication by Forbes, the AI development company FDNA
claimed that over the space of 10 years it had worked with geneticists to develop a
database of about 5,000 diseases, of which about 1,500 can be detected with facial
recognition algorithms.
CHAPTER – 7
CURRENT PROBLEMS & FUTURE WORK
Illumination
Illumination refers to light variations. Even slight changes in lighting conditions pose a
significant challenge for automated face recognition and can have a significant impact on its
results. If the illumination varies, then even when the same individual is captured with the
same sensor and with an almost identical facial expression and pose, the resulting images
may appear quite different.
Illumination changes the appearance of a face drastically: it has been found that the
difference between two images of the same face under different illumination can be greater
than the difference between two different faces taken under the same illumination.
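One common way to reduce illumination variation before recognition is histogram equalization, sketched below with NumPy. The report does not name a specific normalization method, so this choice is an illustrative assumption.

```python
import numpy as np

def equalize_histogram(gray):
    """Histogram equalization: spread grey levels over the full range so two
    shots of the same face under different lighting become more comparable."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Map each grey level through the normalised cumulative distribution.
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]

# A dark, low-contrast "face" (values squeezed into 40..80) ...
dark = np.linspace(40, 80, 64, dtype=np.uint8).reshape(8, 8)
eq = equalize_histogram(dark)
# ... is stretched to use the full 0..255 range.
print(int(eq.min()), int(eq.max()))  # 0 255
```

More robust pipelines combine this with gamma correction or difference-of-Gaussians filtering, but the principle, normalizing intensity statistics before matching, is the same.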
Pose
Face recognition systems are highly sensitive to pose variations. The pose of a face varies
with head movement and with the viewing angle of the camera. These changes in head
orientation or camera viewpoint alter the face's appearance and generate intra-class
variations, making automated face recognition rates drop drastically. It becomes a challenge
to identify the real face as the rotation angle grows, which may result in faulty recognition
or no recognition at all if the database holds only the frontal view of the face.
Expressions
The face is one of the most important biometrics because its unique features play a crucial
role in conveying both identity and emotion. Different situations cause different moods,
which show as various emotions and, eventually, as changes in facial expression.
Low Resolution
The minimum resolution for a standard face image is commonly taken as 16×16 pixels; an
image below this is considered low resolution. Low-resolution images typically come from
small standalone cameras such as street CCTV cameras, ATM cameras and supermarket
security cameras. Because such a camera captures only a small part of the scene and is not
close to the face, it may capture a face region of less than 16×16 pixels. Such a low-resolution
image provides little information, as most of the facial detail is lost, and this is a major
challenge in the process of recognizing faces.
Ageing
Facial appearance and texture change over time, appearing as ageing, which is yet another
challenge for facial recognition systems. With increasing age, facial features, shapes, lines
and other aspects also change, which matters for visual observation and for image retrieval
after a long period.
To check accuracy, datasets of the same people across different age groups over a period of
time are evaluated. Here, the recognition process depends on extracting basic features such
as wrinkles, marks, eyebrows and hairstyles.
Model Complexity
Existing state-of-the-art facial recognition methods rely on very deep Convolutional Neural
Network (CNN) architectures, which are complex and unsuitable for real-time performance
on embedded devices.
CHAPTER 8
SUMMARY
The process of face recognition involves capturing images or videos of faces using cameras
or other imaging devices. These visual data are then subjected to preprocessing, which
includes tasks such as normalization, alignment, and feature extraction. During feature
extraction, specific facial attributes such as the distance between eyes, nose shape, and the
contours of the face are analyzed to create a unique facial signature for each individual.
One of the key advantages of face recognition lies in its non-intrusive nature. Unlike other
biometric methods, such as fingerprint or iris scans, face recognition does not require
physical contact, making it more user-friendly and suitable for applications like surveillance
and crowd monitoring. Moreover, the technology has witnessed advancements with the
integration of artificial intelligence and machine learning algorithms, enhancing its accuracy
and efficiency.
However, face recognition also raises ethical and privacy concerns, particularly regarding
data security and the potential for misuse. Issues related to consent, data storage, and
unauthorized access have sparked debates about the responsible deployment of this
technology. Striking a balance between the benefits of face recognition and the protection
of individuals' privacy remains a critical challenge as the technology continues to evolve and
find new applications in our daily lives.