CHAPTER 1
INTRODUCTION
Face recognition is a biometric software application that identifies individuals by detecting and
tracking their faces. The main intention of this project is to recognize the faces of people. This
approach can be applied practically in crowded areas such as airports, railway stations, universities
and malls for security. Face recognition is the task of identifying an already detected face as known
or unknown, and it is often confused with the problem of face detection. Face recognition, unlike
detection, decides whether the detected face belongs to someone known or unknown, using a
database of faces to validate the input face.
There are two predominant approaches to the face recognition problem: Geometric
(feature based) and photometric (view based). As researcher interest in face recognition
continued, many different algorithms were developed, three of which have been well
studied in face recognition literature.
Face detection involves separating image windows into two classes: one containing faces and one
containing the background (clutter). It is difficult because, although
commonalities exist between faces, they can vary considerably in terms of age, skin
colour and facial expression. The problem is further complicated by differing lighting
conditions, image qualities and geometries, as well as the possibility of partial occlusion
and disguise. An ideal face detector would therefore be able to detect the presence of
any face under any set of lighting conditions, upon any background. The face detection
task can be broken down into two steps. The first step is a classification task that takes
some arbitrary image as input and outputs a binary value of yes or no, indicating
whether there are any faces present in the image. The second step is the face localization
task that aims to take an image as input and output the location of any face or faces
within that image as some bounding box with (x, y, width, height).
The face detection system can be divided into the following steps:
1. Pre-processing: To reduce the variability in the faces, the images are processed before they are
fed into the network. All positive examples, that is the face images, are obtained by cropping images
with frontal faces to include only the front view. All the cropped images are then corrected for
lighting through standard algorithms.
2. Classification: A classifier (a neural network in this description) is trained on the pre-processed
face and non-face examples so that it can decide whether a given image window contains a face.
3. Localization: The trained classifier is then used to search for faces in an image and, if present,
localize them in a bounding box.
The features of the face on which this work has been done are position, scale, orientation and
illumination.
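As a concrete illustration of the two steps above, the following minimal Python sketch uses OpenCV's bundled frontal-face Haar cascade (the classifier actually adopted later in this report) rather than a neural network; the image file name and the detector parameters such as scaleFactor and minNeighbors are illustrative assumptions, not values prescribed by this project.

import cv2

# Frontal-face Haar cascade bundled with the opencv-python package.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

def detect_faces(image_path):
    """Return (face_present, bounding_boxes) for the image at image_path."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)  # simple lighting correction (pre-processing)
    # detectMultiScale classifies the scanned windows and localizes the accepted
    # ones, returning them as (x, y, width, height) boxes.
    boxes = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                          minSize=(30, 30))
    return len(boxes) > 0, boxes

found, boxes = detect_faces("sample.jpg")   # "sample.jpg" is an assumed file name
print(found, boxes)

Here a non-empty box list answers the classification question (step one), and the boxes themselves are the localization output (step two).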
CHAPTER 2
LITERATURE REVIEW
[1] Li Cuimei, Qi Zhiliang. “Human face detection algorithm via Haar cascade classifier
with three additional classifiers”, 13th IEEE International Conference on Electronic
Measurement & Instruments, pp. 01-03, 2017.
Human face detection has been a challenging issue in the areas of image processing and pattern recognition. A
new human face detection algorithm, combining the primitive Haar cascade algorithm with three additional weak
classifiers, is proposed in this paper. The three weak classifiers are based on skin hue histogram matching, eyes
detection and mouth detection. First, images of people are processed by a primitive Haar cascade classifier,
with nearly no wrong rejection of human faces (a very low false negative rate) but with some wrong acceptances
(false positives). Secondly, to get rid of these wrongly accepted non-human faces, a weak classifier based on face
skin hue histogram matching is applied and a majority of non-human faces are removed. Next, another weak
classifier based on eyes detection is appended and some residual non-human faces are determined and rejected.
Finally, a mouth detection operation is applied to the remaining candidates and the false positive rate is
further decreased. With the help of OpenCV, test results on images of people under different occlusions,
illuminations and some degree of orientation and rotation, in both the training set and the test set, show that the
proposed algorithm is effective and achieves state-of-the-art performance. Furthermore, it is efficient because of
its ease and simplicity of implementation.
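The verification chain described in [1] can be approximated with OpenCV's bundled cascades. The sketch below is an illustrative approximation and not the authors' implementation: the skin-hue thresholds, the 0.3 acceptance ratio and the use of the bundled eye cascade are assumptions, and the mouth check is only indicated by a comment because the opencv-python package does not ship a mouth cascade.

import cv2

face_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def skin_ratio(bgr_patch):
    """Fraction of pixels inside a rough skin-hue band (thresholds are assumed)."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    return float(mask.mean()) / 255.0

def verified_faces(image):
    """Keep only Haar face candidates that also pass the skin-hue and eye checks."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    kept = []
    for (x, y, w, h) in face_cc.detectMultiScale(gray, 1.1, 5):
        patch = image[y:y + h, x:x + w]
        if skin_ratio(patch) < 0.3:                       # weak classifier 1: skin hue
            continue
        if len(eye_cc.detectMultiScale(gray[y:y + h, x:x + w])) == 0:
            continue                                      # weak classifier 2: eyes
        # weak classifier 3 (mouth detection) would be applied here with a mouth cascade
        kept.append((x, y, w, h))
    return kept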
[2] Kushsairy Kadir, Mohd Khairi Kamaruddin, Haidawati Nasir, Sairul I. Safie, Zulkifli Abdul
Kadir Bakti. “A comparative study between LBP and Haar-like features for Face Detection using
OpenCV”, 4th International Conference on Engineering Technology and Technopreneuship
(ICE2T), 2014.
Face detection is an important step in any face recognition system, for the purpose of localizing and extracting
the face region from the rest of the image. Many techniques have been proposed, from simple edge detection to
advanced techniques such as pattern recognition approaches. This paper evaluates two methods of face detection,
Haar features and Local Binary Pattern features, based on detection hit rate and detection speed. The algorithms
were tested on Microsoft Visual C++ 2010 Express with the OpenCV library. The experimental results show that
Local Binary Pattern features are more efficient and reliable for the implementation of a real-time face detection
system.
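The paper's comparison was carried out in C++; a rough way to reproduce the hit-rate and speed measurement in Python is sketched below. The path to the LBP cascade file (shipped in OpenCV's source tree under data/lbpcascades), the list of test images and the timing method are assumptions made for illustration.

import time
import cv2

HAAR_PATH = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
LBP_PATH = "lbpcascade_frontalface.xml"   # assumed location of the LBP cascade file

detectors = {
    "Haar": cv2.CascadeClassifier(HAAR_PATH),
    "LBP": cv2.CascadeClassifier(LBP_PATH),
}

def benchmark(gray_images):
    """Print a rough hit rate and total detection time for each detector."""
    for name, cc in detectors.items():
        start = time.perf_counter()
        hits = sum(len(cc.detectMultiScale(g, 1.1, 5)) > 0 for g in gray_images)
        elapsed = time.perf_counter() - start
        print(f"{name}: {hits}/{len(gray_images)} images with a detection in {elapsed:.3f}s")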
One of the major research areas attracting much interest is face recognition. This is due to the growing need for
detection and recognition in modern industrial applications. However, this need comes with the high
performance standards that these applications require in terms of speed and accuracy. In this work we present a
comparison between two main techniques of face recognition in unconstrained scenes. The first one is
Edge-Orientation Matching and the second technique is Haar-like feature selection combined with cascade classifiers.
[4] Jiwen Lu, Jie Zhou. “Learning Compact Binary Face Descriptor (CBFD) for Face Recognition”,
IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 37, Issue 10, pp. 10-12, Oct. 2015.
Binary feature descriptors such as local binary patterns (LBP) and its variations have been widely used in many
face recognition systems due to their excellent robustness and strong discriminative power. However, most
existing binary face descriptors are hand-crafted, which require strong prior knowledge to engineer them by
hand. In this paper, we propose a compact binary face descriptor (CBFD) feature learning method for face
representation and recognition. Given each face image, we first extract pixel difference vectors (PDVs) in local
patches by computing the difference between each pixel and its neighboring pixels. Then, we learn a feature
mapping to project these pixel difference vectors into low-dimensional binary vectors in an unsupervised
manner, where 1) the variance of all binary codes in the training set is maximized, 2) the loss between the
original real-valued codes and the learned binary codes is minimized, and 3) binary codes evenly distribute at
each learned bin, so that the redundancy information in PDVs is removed and compact binary codes are
obtained. Lastly, we cluster and pool these binary codes into a histogram feature as the final representation for
each face image. Moreover, we propose a coupled CBFD (C-CBFD) method by reducing the modality gap of
heterogeneous faces at the feature level to make our method applicable to heterogeneous face recognition.
Extensive experimental results on five widely used face datasets show that our methods outperform state-of-the-
art face descriptors.
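The first stage of CBFD, extracting pixel difference vectors, is easy to state in code. The sketch below illustrates only that stage under an assumed 8-neighbour layout; the learned binary projection and the histogram pooling described in the paper are not reproduced here.

import numpy as np

def pixel_difference_vectors(patch):
    """Return an (N, 8) array of PDVs: for each interior pixel of a 2-D grayscale
    patch, the differences between its 8 neighbours and the pixel itself."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    h, w = patch.shape
    pdvs = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            centre = int(patch[i, j])
            pdvs.append([int(patch[i + di, j + dj]) - centre for di, dj in offsets])
    return np.asarray(pdvs)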
CHAPTER 3
SYSTEM ANALYSIS
EXISTING METHOD
To improve the ability of detection, we usually generate a small set of features from the original input variables
by feature extraction. Conventional Convolutional Neural Networks are used to produce detections in large
images, but they need more advanced techniques to get better results.
DISADVANTAGES:
Low efficiency.
Time consuming.
High complexity.
Resource consuming.
PROPOSED METHOD:
The proposed technique processes two streams of images: the input images and the images captured through
live streaming. Both streams undergo four common procedures, namely face acquisition, pre-processing, face
detection using the Haar cascade classifier, and feature extraction using the Local Binary Pattern (LBP)
algorithm to compute LBP values. These values are stored in the database only when an input image is being
processed. Finally, the values in the database are compared with the values computed from the live stream, and
the human face is recognized as known or unknown based on the matching.
Flow of the project:
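A minimal Python sketch of this flow is given below. It is illustrative rather than this project's actual code: OpenCV's LBPH recognizer (from the opencv-contrib-python package) stands in for the LBP stage and its trained model plays the role of the database, while the enrolment file names, the webcam index and the distance threshold of 60 are assumptions.

import cv2
import numpy as np

face_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()   # requires opencv-contrib-python

def crop_face(frame):
    """Face acquisition + pre-processing: detect the first face and return it as a
    fixed-size grayscale patch, or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = face_cc.detectMultiScale(gray, 1.1, 5)
    if len(boxes) == 0:
        return None
    x, y, w, h = boxes[0]
    return cv2.resize(gray[y:y + h, x:x + w], (200, 200))

# Input-image branch: LBP values of the enrolled faces are stored in the model (the "database").
enrolment = [("person1.jpg", 0), ("person2.jpg", 1)]          # assumed file names and labels
pairs = [(crop_face(cv2.imread(path)), label) for path, label in enrolment]
faces = [f for f, _ in pairs if f is not None]
labels = np.array([lbl for f, lbl in pairs if f is not None])
recognizer.train(faces, labels)

# Live-streaming branch: each detected face is compared against the stored values.
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    face = crop_face(frame)
    if face is not None:
        label, distance = recognizer.predict(face)
        verdict = f"known (id {label})" if distance < 60 else "unknown"  # threshold assumed
        print(verdict, distance)
    cv2.imshow("live", frame)
    if cv2.waitKey(1) == 27:   # press Esc to stop
        break
cap.release()
cv2.destroyAllWindows()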
ADVANTAGES:
High efficiency.
Time Saving.
Inexpensive.
Low complexities.
CHAPTER 4
SYSTEM DESIGN
UML DIAGRAMS
UML stands for Unified Modeling Language. UML is a standardized general-purpose modeling language in
the field of object-oriented software engineering. The standard is managed, and was created, by the Object
Management Group.
The goal is for UML to become a common language for creating models of object-oriented computer
software. In its current form UML comprises two major components: a meta-model and a notation. In the
future, some form of method or process may also be added to, or associated with, UML.
The Unified Modeling Language is a standard language for specifying, visualizing, constructing and
documenting the artifacts of a software system, as well as for business modeling and other non-software
systems.
The UML represents a collection of best engineering practices that have proven successful in the modeling
of large and complex systems.
The UML is a very important part of developing object-oriented software and the software development
process. The UML uses mostly graphical notations to express the design of software projects.
GOALS:
1. Provide users a ready-to-use, expressive visual modeling language so that they can develop and
exchange meaningful models.
2. Provide extensibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the OO tools market.
6. Support higher-level development concepts such as collaborations, frameworks, patterns and
components.
7. Integrate best practices.
A use case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram defined by and
created from a Use-case analysis. Its purpose is to present a graphical overview of the functionality provided by
a system in terms of actors, their goals (represented as use cases), and any dependencies between those use
cases. The main purpose of a use case diagram is to show what system functions are performed for which actor.
Roles of the actors in the system can be depicted.
[Use case diagram: Upload Dataset, Receive Dataset, Preprocessing, View Preprocessing, Correlation, View Correlation, Encoding, View Encoding, Variance, View Variance, PCA, View PCA, Results, View Result]
CLASS DIAGRAM:
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of static structure
diagram that describes the structure of a system by showing the system's classes, their attributes, operations (or
methods), and the relationships among the classes. It explains which class contains information.
[Class diagram: System and User classes]
SEQUENCE DIAGRAM:
A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram that shows how
processes operate with one another and in what order. It is a construct of a Message Sequence Chart. Sequence
diagrams are sometimes called event diagrams, event scenarios, and timing diagrams.
[Sequence diagram: System and User exchanging Upload Dataset, Receive Dataset, Preprocessing, View Preprocessing, Correlation, View Correlation, Encoding/Standardisation/PCA, Eigen Values and Vectors, View Results]
COLLABORATION DIAGRAM:
In collaboration diagram the method call sequence is indicated by some numbering technique as shown below.
The number indicates how the methods are called one after another. We have taken the same order management
system to describe the collaboration diagram. The method calls are similar to that of a sequence diagram. But
the difference is that the sequence diagram does not describe the object organization whereas the collaboration
diagram shows the object organization.
[Collaboration diagram: System and User with numbered calls 1: Upload Dataset, 2: Receive Dataset, 3: Preprocessing, 4: View Preprocessing, 5: Correlation, 6: View Correlation, 7: Encoding/Standardisation/PCA, 8: View Performed Results, 9: Eigen Values/Vectors, 10: View Results]
DEPLOYMENT DIAGRAM
A deployment diagram represents the deployment view of a system. It is related to the component diagram
because the components are deployed using deployment diagrams. A deployment diagram consists of nodes,
which are nothing but the physical hardware used to deploy the application.
[Deployment diagram: System and User nodes]
ACTIVITY DIAGRAM:
Activity diagrams are graphical representations of workflows of stepwise activities and actions with support for
choice, iteration and concurrency. In the Unified Modeling Language, activity diagrams can be used to describe
the business and operational step-by-step workflows of components in a system. An activity diagram shows the
overall flow of control.
[Activity diagram: System and User with activities Upload Dataset, Receive Dataset, Encoding, View Encoding, Standardisation, View Standardisation, Variance, View Variance, PCA, View PCA]
COMPONENT DIAGRAM
A component diagram, also known as a UML component diagram, describes the organization and wiring of the
physical components in a system. Component diagrams are often drawn to help model implementation details
and double-check that every aspect of the system's required function is covered by planned development.
[Component diagram: User and System components]
ER DIAGRAM:
An Entity–relationship model (ER model) describes the structure of a database with the help of a diagram,
which is known as Entity Relationship Diagram (ER Diagram). An ER model is a design or blueprint of a
database that can later be implemented as a database. The main components of E-R model are: entity set and
relationship set.
An ER diagram shows the relationship among entity sets. An entity set is a group of similar entities and these
entities can have attributes. In terms of DBMS, an entity is a table or attribute of a table in database, so by
showing relationship among tables and their attributes, ER diagram shows the complete logical structure of a
database. Let’s have a look at a simple ER diagram to understand this concept.
DFD DIAGRAM:
A Data Flow Diagram (DFD) is a traditional way to visualize the information flows within a system. A neat and
clear DFD can depict a good amount of the system requirements graphically. It can be manual, automated, or a
combination of both. It shows how information enters and leaves the system, what changes the information and
where information is stored. The purpose of a DFD is to show the scope and boundaries of a system as a whole.
It may be used as a communication tool between a systems analyst and any person who plays a part in the
system, and it acts as a starting point for redesigning a system.
CHAPTER 5
SYSTEM IMPLEMENTATION
CHAPTER 6
SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault
or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies,
assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the
software system meets its requirements and user expectations and does not fail in an unacceptable manner.
There are various types of tests. Each test type addresses a specific testing requirement.
TYPES OF TESTS
Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly,
and that program inputs produce valid outputs. All decision branches and internal code flow should be validated.
It is the testing of individual software units of the application. It is done after the completion of an individual
unit and before integration. This is structural testing that relies on knowledge of the unit's construction and is
invasive. Unit tests perform basic tests at component level and test a specific business process, application,
and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately
to the documented specifications and contains clearly defined inputs and expected results.
Integration testing
Integration tests are designed to test integrated software components to determine if they actually run as one
program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration
tests demonstrate that although the components were individually satisfactory, as shown by successful unit
testing, the combination of components is correct and consistent. Integration testing is specifically aimed at
exposing the problems that arise from the combination of components.
Functional test
Functional tests provide systematic demonstrations that functions tested are available as specified by the
business and technical requirements, system documentation, and user manuals.
Organization and preparation of functional tests is focused on requirements, key functions, or special test cases.
In addition, systematic coverage pertaining to identified business process flows, data fields, predefined processes,
and successive processes must be considered for testing. Before functional testing is complete, additional tests
are identified and the effective value of current tests is determined.
SYSTEM TEST
System testing ensures that the entire integrated software system meets requirements. It tests a configuration to
ensure known and predictable results. An example of system testing is the configuration-oriented system
integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links
and integration points.
White Box Testing is testing in which the software tester has knowledge of the inner workings, structure and
language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box
level.
Black Box Testing is testing the software without any knowledge of the inner workings, structure or language of
the module being tested. Black box tests, as most other kinds of tests, must be written from a definitive source
document, such as a specification or requirements document. It is testing in which the software under test is
treated as a black box: you cannot “see” into it. The test provides inputs and responds to outputs without
considering how the software works.
Unit testing is usually conducted as part of a combined code and unit test phase of the software lifecycle,
although it is not uncommon for coding and unit testing to be conducted as two distinct phases.
Test strategy and approach
Field testing will be performed manually and functional tests will be written in detail.
Test objectives
Features to be tested
Software integration testing is the incremental integration testing of two or more integrated software
components on a single platform to produce failures caused by interface defects.
The task of the integration test is to check that components or software applications, e.g. components in a
software system or – one step up – software applications at the company level – interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
6.3 Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant participation by the end user.
It also ensures that the system meets the functional requirements.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
CHAPTER 7
EXPERIMENTAL RESULTS
CHAPTER 8
8.1 CONCLUSION
In this project, we have successfully created a process to decrease the dimensionality of large
datasets. The system collects information from the user in order to decrease the dimensions. In the
last 20 years, facial recognition technology has come a long way. Today we can check identity
information automatically with regard to safe transactions, tracking, security purposes and building
access control. Such systems normally work in controlled environments, and recognition algorithms
may exploit environmental constraints to achieve high recognition accuracy. Yet next-generation
face-recognition technologies will be commonly used in smart settings where computers and
machines are most likely supportive helpers.
REFERENCES
[2] Hsu, Daniel; Kakade, Sham M.; Zhang, Tong (2008). “A Spectral Algorithm for Learning Hidden Markov
Models”. arXiv:0811.4413. Bibcode:2008arXiv0811.4413H.
[3] Chachlakis, Dimitris G.; Prater-Bennette, Ashley; Markopoulos, Panos P. (22 November 2019). “L1-norm
Tucker Tensor Decomposition”. IEEE Access.
[4] Markopoulos, Panos P.; Karystinos, George N.; Pados, Dimitris A. (October 2014). "Optimal Algorithms for
L1-subspace Signal Processing". IEEE Transactions on Signal Processing.
[5] Gary Bradski and Adrian Kaehler. Learning OpenCV: Computer Vision with the OpenCV Library. O’Reilly
Media, Kindle Edition.
[6] M.A. Turk and A.P. Pentland, “Face Recognition Using Eigenfaces”, IEEE Conf. on Computer Vision and
Pattern Recognition, pp. 586-591, 1991.
[9] G. B. Huang, H. Lee, E. Learned-Miller, “Learning hierarchical representations for face verification with
convolutional deep belief networks”, Proceedings of the International Conference on Computer Vision and
Pattern Recognition, pp. 223-226, 2012.
[11] Learning OpenCV: Computer Vision with the OpenCV Library 1st Edition, Kindle Edition
[14] Paul Viola, Michael Jones. “Rapid object detection using a boosted cascade of simple features”, IEEE
Computer Society Conference on Computer Vision and Pattern Recognition, 2001.