Face
The process starts by sliding a filter designed to detect certain features over
the input image, a process known as the convolution operation (hence the
name "convolutional neural network"). The result of this process is a feature
map that highlights the presence of the detected features in the image. This
feature map then serves as input for the next layer, enabling a CNN to
gradually build a hierarchical representation of the image.
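To make the operation concrete, here is a minimal sketch (assuming NumPy and SciPy) of a hand-crafted 3x3 filter sliding over a toy image to produce a feature map; a learned CNN filter plays the same role as this fixed kernel:

```python
# A minimal sketch of the convolution operation: sliding a 3x3
# edge-detection filter over a grayscale image to get a feature map.
import numpy as np
from scipy.signal import convolve2d

# Toy 6x6 "image": a bright square on a dark background.
image = np.zeros((6, 6))
image[2:4, 2:4] = 1.0

# A simple vertical-edge filter (a hand-crafted stand-in for a learned kernel).
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)

# 'valid' mode slides the filter only where it fully overlaps the image.
feature_map = convolve2d(image, kernel, mode="valid")
print(feature_map)  # High magnitudes mark the square's vertical edges.
```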
Initial filters usually detect basic features, such as lines or simple textures.
Subsequent layers' filters are more complex, combining the basic features
identified earlier on to recognize more complex patterns. For example, after
an initial layer detects the presence of edges, a deeper layer could use that
information to start identifying shapes.
Between these layers, the network takes steps to reduce the spatial
dimensions of the feature maps, which cuts computation and makes the
learned features more robust to small shifts in the input. In the
final layers of a CNN, the model makes a final decision -- for example,
classifying an object in an image -- based on the output from the previous
layers.
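As an illustration of that dimension-reduction step, here is a minimal sketch (assuming NumPy) of 2x2 max pooling, one common choice:

```python
# 2x2 max pooling halves each spatial dimension of a feature map
# while keeping the strongest activation in each window.
import numpy as np

feature_map = np.array([[1, 3, 2, 4],
                        [5, 6, 1, 2],
                        [7, 2, 9, 0],
                        [1, 8, 3, 4]], dtype=float)

h, w = feature_map.shape
# Group the map into non-overlapping 2x2 windows and take each window's max.
pooled = feature_map.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
print(pooled)
# [[6. 4.]
#  [8. 9.]]
```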
Evolution:
Early Stages:-
1960s-1990s : Initial research focused on simple geometric models,
comparing distances between facial features like the eyes, nose, and
mouth. These methods were quite rudimentary and had limited
accuracy. The earliest pioneers of facial recognition were Woody
Bledsoe, Helen Chan Wolf and Charles Bisson. In 1964 and 1965,
Bledsoe, along with Wolf and Bisson, began work using computers to
recognise the human face.
Because the project's funding came from an unnamed intelligence
agency, much of their work was never published. However, it was later
revealed that their initial work involved the manual marking of various
“landmarks” on the face, such as the eye centres and mouth. These were then
mathematically rotated by a computer to compensate for pose variation. The
distances between landmarks were also automatically computed and
compared between images to determine identity.
These earliest steps into Facial Recognition by Bledsoe, Wolf and Bisson were
severely hampered by the technology of the era, but they remain an important
first step in proving that Facial Recognition was a viable biometric.
Carrying on from the initial work of Bledsoe, the baton was picked up in the
1970s by Goldstein, Harmon and Lesk, who extended the work to 21
specific subjective markers, including hair colour and lip thickness, in order
to automate the recognition.
While accuracy improved, the marker measurements and locations still had
to be computed manually, which proved extremely labour-intensive; even so,
this represented an advance on Bledsoe's RAND Tablet technology.
It wasn’t until the late 1980s that we saw further progress with the
development of Facial Recognition software as a viable biometric for
businesses. In 1988, Sirovich and Kirby began applying linear algebra to the
problem of facial recognition.
Their system, which came to be known as Eigenfaces, showed that feature
analysis on a collection of facial images could form a set of basic features.
They were also able to show that fewer than one hundred values were
required to accurately code a normalized facial image.
In 1991, Turk and Pentland carried on the work of Sirovich and Kirby by
discovering how to detect faces within an image, which led to the earliest
instances of automatic facial recognition. This significant breakthrough was
hindered by technological and environmental factors; however, it paved the
way for future developments in Facial Recognition technology.
Statistical Methods:-
1990s-2000s : The introduction of statistical methods such as Eigenfaces
and Fisherfaces marked significant progress. These techniques used
principal component analysis (PCA) and linear discriminant analysis (LDA)
to improve recognition accuracy by reducing the dimensionality of facial
data.
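As an illustration of the Eigenfaces idea, here is a hedged sketch using scikit-learn's PCA; the random array is a placeholder standing in for a real matrix of flattened, normalized face images:

```python
# A sketch of the Eigenfaces idea: PCA compresses each face image into a
# small vector of component weights. Random data stands in here for a real
# dataset of flattened, normalized face images (one row per image).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
faces = rng.random((200, 64 * 64))   # 200 images of 64x64 pixels, flattened

# Keep 100 components, echoing Sirovich and Kirby's finding that fewer
# than one hundred values can code a normalized face.
pca = PCA(n_components=100)
weights = pca.fit_transform(faces)   # each face is now a 100-value code
print(weights.shape)                 # (200, 100)

# Reconstruct a face from its 100 weights to check how much is preserved.
reconstructed = pca.inverse_transform(weights[:1])
print(reconstructed.shape)           # (1, 4096)
```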
The Defense Advanced Research Projects Agency (DARPA) and the
National Institute of Standards and Technology (NIST) rolled out the Face
Recognition Technology (FERET) programme in the early 1990s in order to
encourage a commercial facial recognition market. The project involved
creating a database of facial images; the test set included 2,413
still facial images representing 856 people. The hope was that a large
database of test images for facial recognition would inspire innovation
and result in more powerful facial recognition technology.
Working:
Face authentication relies on sophisticated sensors, computer vision
capabilities, and artificial intelligence.
Overfitting examples:
Consider a use case where a machine learning model has to analyze photos
and identify the ones that contain dogs. If the model was trained on a
dataset consisting mostly of photos showing dogs outside in parks, it may
learn to use grass as a feature for classification, and may not recognize a
dog inside a room.
Another overfitting example is a machine learning algorithm that predicts a
university student's academic performance and graduation outcome by
analyzing several factors like family income, past academic performance,
and the academic qualifications of the parents. However, if the training data
only includes candidates from a specific gender or ethnic group, overfitting
causes the algorithm's prediction accuracy to drop for candidates whose
gender or ethnicity falls outside of the training dataset.
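To see this gap numerically, here is a small sketch (assuming scikit-learn) in which an unconstrained decision tree nearly memorizes noisy synthetic training data but scores noticeably lower on held-out test data:

```python
# Overfitting on synthetic data: an unconstrained decision tree memorizes
# the training set but generalizes worse to unseen test data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# flip_y adds label noise, which an overfit model ends up memorizing.
X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                           flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("train accuracy:", tree.score(X_train, y_train))  # ~1.0
print("test accuracy:", tree.score(X_test, y_test))     # noticeably lower
```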
· Output Layer: The output from the fully connected layers is then
fed into a classification function such as sigmoid or softmax, which
converts the raw score for each class into a probability.
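For example, a minimal NumPy sketch of softmax turning raw class scores into probabilities:

```python
# Softmax: raw class scores (logits) from the fully connected layer
# become probabilities that sum to one.
import numpy as np

def softmax(logits):
    shifted = logits - np.max(logits)   # subtract the max for numerical stability
    exps = np.exp(shifted)
    return exps / exps.sum()

scores = np.array([2.0, 1.0, 0.1])      # e.g. logits for three classes
print(softmax(scores))                  # ~[0.659 0.242 0.099], sums to 1
```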
Example:
Let’s consider an image and apply the convolution layer, activation layer,
and pooling layer operations to extract its features; a sketch implementing
these steps follows the list below.
Steps:
· Import the necessary libraries.
· Set the parameters.
· Define the kernel.
· Load the image and plot it.
· Reformat the image.
· Apply the convolution layer operation and plot the output image.
· Apply the activation layer operation and plot the output image.
· Apply the pooling layer operation and plot the output image.
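Here is one possible sketch of those steps, assuming NumPy, SciPy, and Matplotlib; a synthetic grayscale image stands in for a loaded photo so the example is self-contained, and the kernel and pooling size are illustrative choices:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import convolve2d

# Set the parameters and define the kernel (a vertical-edge detector).
pool_size = 2
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)

# "Load" the image: a grayscale gradient with a bright vertical bar.
image = np.tile(np.linspace(0, 1, 64), (64, 1))
image[:, 28:36] = 1.0

# Convolution layer: slide the kernel over the image.
conv_out = convolve2d(image, kernel, mode="valid")

# Activation layer: ReLU keeps only positive responses.
relu_out = np.maximum(conv_out, 0)

# Pooling layer: 2x2 max pooling halves each spatial dimension.
h, w = relu_out.shape
h, w = h - h % pool_size, w - w % pool_size   # trim to a multiple of 2
pooled = relu_out[:h, :w].reshape(h // pool_size, pool_size,
                                  w // pool_size, pool_size).max(axis=(1, 3))

# Plot each stage.
fig, axes = plt.subplots(1, 4, figsize=(12, 3))
stages = [image, conv_out, relu_out, pooled]
titles = ["input", "convolution", "ReLU", "max pooling"]
for ax, img, title in zip(axes, stages, titles):
    ax.imshow(img, cmap="gray")
    ax.set_title(title)
    ax.axis("off")
plt.show()
```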
· OpenCV: Contains pre-trained classifiers for face, eye, and smile
detection. OpenCV can also be used to draw bounding boxes around
detected faces.
· Haar cascades: A machine learning-based algorithm that trains a cascade
function with a set of input data.
· ArcFace: A loss function designed to improve the discriminative power of
face recognition models.
· DeepFace: A deep learning method for face recognition that uses a general
3D shape model to align all faces to be frontal.
· MTCNN: A deep learning model for face detection that can create bounding
boxes around detected faces.
· VGGFace: A pre-trained model that can be used in facial recognition
systems.
· DeepID: A face verification algorithm that uses deep learning and
convolutional neural networks.
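As a usage illustration of the first two entries, here is a minimal sketch of Haar-cascade face detection with OpenCV; "group.jpg" is a placeholder path for any image containing faces:

```python
# Face detection with OpenCV's bundled Haar cascade, drawing a bounding
# box around each detected face.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("group.jpg")                  # placeholder input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # cascades work on grayscale

# Detect faces and draw a green bounding box around each one.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("group_detected.jpg", image)
```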
vii. Evaluate The Model:
Architecture
The BiometricPrompt API covers all biometric authentication modalities,
including face, fingerprint, and iris. The Face HAL interacts with the
components of the biometric framework.
FUTURE WITH FACE AUTHENTICATION:
· Growing market
The facial recognition industry is expected to grow rapidly, with an
estimated value of $13.4 billion by 2028. Adoption of the technology
is increasing because it improves workflow efficiency and automates
processes. Moreover, the application of face recognition
technology has become widespread across industries. For example, it
is widely used in the healthcare industry to link biometric
identities with insurance policies and ensure that the account holder
receives the stated benefits. Similarly, in the education and
corporate sectors, it has become a highly efficient method of
verifying an individual's real-time presence at a location when used
as an attendance management system. Likewise, face biometrics help to
identify people enlisted on a watchlist, which enhances organizational
security. Many enterprises also use the technology for access
management and intrusion prevention.
A critical practice that has not yet become common in care settings such as
hospitals and healthcare centers is combining face biometrics with
body posture detection. Together, the two could identify a patient who was
advised to keep their back straight but is slouching, a person who has
fallen and needs assistance, someone facing a critical health problem, and
so on.
Another key driver of growth, beyond the applications already covered, is
the use of IP camera face detection systems in forensic investigation. With
such systems, tracking criminals and solving crimes becomes much faster.
Education:
Although many educational institutions have recognized the value of face
biometric attendance systems, a much broader scope remains unknown
to them. For example, a face recognition solution can quickly identify a
person holding a gun on the premises.
Under such circumstances, law enforcement agencies can easily use that
information to communicate with the student, staff member, parent, visitor,
or other person listed in the database.
Conclusion:
In conclusion, using Convolutional Neural Networks (CNNs) for facial
authentication offers several significant advantages: