
DESIGN AND IMPLEMENTATION OF FACE DETECTION AND

RECOGNITION SYSTEM

(A CASE STUDY OF EBY OIL SECURITY AND CUSTOMER

IDENTIFICATION NIGERIA LTD)

BY

NWANKWO KIZITO KOSISOCHUKWU

23EF050122299

FACULTY OF SCIENCE AND TECHNOLOGY,

DEPARTMENT OF COMPUTER SCIENCE.

ECOLE SUPERIEURE DE TECHNOLOGIES AVANCEES ET DE

MANAGEMENT (ESTAM UNIVERSITY),

COTONOU, BENIN REPUBLIC.

IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE


AWARD OF BACHELOR OF SCIENCE (B.Sc.) DEGREE IN COMPUTER
SCIENCE

MAY 2025

DECLARATION
I, NWANKWO KIZITO KOSISOCHUKWU, hereby declare that the project work titled Design

and Implementation of face detection and recognition system (A Case Study of Eby Oil Security

and Customer Identification Nigeria Ltd) is a record of an original work done by me, as a result of

my research effort carried out in the Faculty of Science and Technology, Department of Computer

Science, ESTAM University, Benin Republic, under the supervision of Engr. Raymond Ejem.

This work has not been previously submitted for any degree or diploma in any institution.

……………………….. ………………………
Nwankwo Kizito Kosisochukwu Signature/Date

CERTIFICATION
This is to certify that this research project titled “Design and Implementation of Face Detection and

Recognition System” is an original work undertaken by Nwankwo Kizito Kosisochukwu

(23EF050122299) under the supervision of Engr. Raymond Ejem and has been prepared in

accordance with the regulations governing the preparation of projects in the Faculty of Science and

Technology, Department of Computer Science, ESTAM University, Benin Republic.

…………………… ……………………..

ENGR. RAYMOND EJEM DATE

(B.Sc., M.Sc.,)

(Project Supervisor)

…………………………… …………………….

ENGR. RAYMOND EJEM DATE

(B.Sc., M.Sc.)

(Head of Department)

……………………… …………………….

External Examiner DATE

DEDICATION

This project is dedicated to my parents, family, lecturers, guardians, mentors, and God. Their
unwavering support, encouragement, and guidance have been instrumental in my academic
journey.

ACKNOWLEDGEMENT
I sincerely acknowledge the efforts and guidance of my supervisor, Engr. Raymond Ejem, whose
patience, expertise, and constructive feedback were invaluable in the completion of this project.

I also extend my gratitude to the faculty and staff of the Department of Computer Science, ESTAM
University, for providing a conducive learning environment.

My heartfelt appreciation goes to my family, friends, and colleagues for their unwavering support,
encouragement, and motivation throughout this research work.

Above all, I thank God for the wisdom, strength, and perseverance to successfully complete this
project.

ABSTRACT
Face detection and recognition systems play a significant role in modern security
and authentication mechanisms. This project explores the design and
implementation of a computerized face detection and recognition system,
focusing on its application in security and identification. The system employs
OpenCV and deep learning models to accurately detect and recognize faces in
real-time. The methodology involves collecting image datasets, preprocessing
images, and implementing facial recognition algorithms using machine learning
techniques such as Principal Component Analysis (PCA), Local Binary Patterns
Histograms (LBPH), and Convolutional Neural Networks (CNNs). The project is
developed in Python using libraries like OpenCV, TensorFlow, and NumPy for
efficient image processing. The system is tested on a dataset of facial images,
achieving a high recognition accuracy rate. This work highlights the potential of
face recognition technology in security, authentication, and surveillance, while
also addressing challenges like lighting conditions, pose variations, and
computational efficiency. The study concludes with recommendations for
improving accuracy and real-time performance in future implementations.
Keywords: Face Detection, Face Recognition, OpenCV, Machine Learning,
Artificial Intelligence, Security Systems.

RÉSUMÉ
La conception et la mise en œuvre d’un système de détection et de reconnaissance
faciale visent à améliorer la sécurité et l’authentification dans divers domaines,
notamment l’accès aux bâtiments, la surveillance et les systèmes informatiques.
Ce système utilise des algorithmes avancés de vision par ordinateur et
d’apprentissage profond pour identifier et authentifier les visages en temps réel.
Grâce à l'intégration de modèles CNN et d'OpenCV, le système capture, traite et
compare les images faciales avec une base de données préenregistrée, permettant
ainsi une reconnaissance rapide et fiable. Il dispose d’une interface utilisateur
intuitive pour enregistrer de nouveaux visages, gérer les accès et générer des
rapports détaillés sur les tentatives de reconnaissance. En automatisant
l’identification des individus, ce système renforce la sécurité, réduit le besoin de
surveillance manuelle et améliore l’efficacité des contrôles d’accès. Il peut être
utilisé dans des environnements tels que les entreprises, les institutions éducatives
et les applications domestiques intelligentes, garantissant ainsi une solution
moderne et fiable pour la gestion de l’identité et la surveillance.

TABLE OF CONTENTS

DECLARATION

CERTIFICATION

DEDICATION

ACKNOWLEDGEMENT

ABSTRACT

RÉSUMÉ

CHAPTER ONE: INTRODUCTION

1.0 Introduction

1.1 Background of the research

1.2 Statement of the research problem

1.3 Objectives of the study

1.4 Significance of the study

1.5 Scope of the study

1.6 Limitations of the study

1.7 Definition of terms

CHAPTER TWO: LITERATURE REVIEW

2.0 Introduction

2.1 Review of concepts

2.2 Empirical study

2.3 Review of related work

2.4 System architectural structure/framework

2.5 Summary of the review

CHAPTER THREE: SYSTEM ANALYSIS AND METHODOLOGY

3.1 Introduction

3.2 Method of data collection

3.3 Program structure

3.4 File maintenance error handling module

3.5 Systems/process architecture

3.6 Components of system model

3.7 Activity diagram

3.8 Program flow chart/diagram

3.9 System structural design

3.10 System flowcharts

3.11 Description of the new system

CHAPTER FOUR: SYSTEM IMPLEMENTATION AND TESTING

4.1 Language choice

4.2 Choice of the environment

4.3 Output specification and design

4.4 System requirements

4.5 Coding

4.6 Levels of testing

4.7 Program pseudo code

4.8 System documentation

CHAPTER FIVE: SUMMARY, CONCLUSION AND RECOMMENDATION

5.1 Summary

5.2 Conclusion

5.3 Recommendation

APPENDIX I: SOURCE LISTING

APPENDIX II: SCREENSHOTS

CHAPTER ONE

1.0 INTRODUCTION

A face recognition system is an application for identifying someone from images or videos. Face recognition comprises three stages: face detection, feature extraction, and face recognition. Face detection is a difficult task in image analysis. It involves detecting objects, analyzing the face, and localizing the face before recognition, and it is used in many applications such as new communication interfaces and security. Face detection is employed to detect faces in images or videos; its main goal is to detect human faces in different images or videos. The face detection algorithm converts the input images from a camera into a binary pattern and then selects face location candidates using the AdaBoost algorithm. The proposed system describes a face detection approach based on the AdaBoost algorithm, which selects the best set of Haar features and applies them in a cascade to decrease detection time. Such a face detection system can also be designed using Verilog and ModelSim and implemented on an FPGA.

The purpose of a face detection system is to detect faces in images or videos, which is a demanding task. In a face recognition system, face detection is the primary stage, and face detection is now making significant progress in real-world applications.

Face recognition is a pattern recognition technique and one of the most important biometrics; it is used in a broad spectrum of applications. Accuracy alone does not determine the performance of an automatic face recognition system; in real-time environments the time factor is also a major consideration. Modern computer architectures, represented by multi-core CPUs and many-core GPUs, can be employed to address the time problem by performing various tasks in parallel. However, harnessing the current advancements in computer architecture is not without difficulties. Motivated by this challenge, this research proposes a Face Detection and Recognition System (FDRS). In doing so, this research work provides the architectural design, detailed design, and four variant implementations of the FDRS.

1.1 BACKGROUND OF THE RESEARCH

Face recognition has gained substantial attention over the past decades due to its increasing demand in security applications such as video surveillance and biometric surveillance. Modern facilities such as hospitals, airports, banks and many other organizations are being equipped with security systems that include face recognition capability. Despite this success, research is still ongoing in this field to make facial recognition systems faster and more accurate. The accuracy of any face recognition system depends strongly on its face detection stage: the stronger the face detection system, the better the recognition system will be. A face detection system can successfully detect human faces in a given image containing one or more faces and in live video involving human presence. The main methods used today for face detection are feature-based and image-based. Feature-based methods separate human features such as skin color and facial features, whereas image-based methods use face patterns and processed training images to distinguish between faces and non-faces. The feature-based method has been chosen because it is faster than the image-based method and its implementation is far simpler. Face detection from an image is achieved through image processing. Locating faces in images is not a trivial task, because images contain not just human faces but also non-face objects in cluttered scenes. Moreover, there are other issues in face recognition such as lighting conditions, face orientations and skin colors. For these reasons, the accuracy of any face recognition system cannot be 100%.

Face recognition is one of the most important biometric methods. Despite the fact that there are more reliable biometric recognition techniques, such as fingerprint and iris recognition, these techniques are intrusive and their success depends highly on user cooperation. Therefore, face recognition seems to be the most universal, non-intrusive, and accessible system. It is easy to use and can be applied efficiently to mass scanning, which is quite difficult with other biometrics. It is also natural and socially accepted.

Moreover, technologies that require multiple individuals to use the same

equipment to capture their biological characteristics probably expose the user to

the transmission of germs and impurities from other users. However, face

recognition is completely non-intrusive and does not carry any such health

dangers.

Biometrics is a rapidly developing branch of information technology. Biometric

technologies are automated methods and means for identification based on

biological and behavioral characteristics of an individual. There are several

advantages of biometric technologies compared to traditional identification

methods. To take adequate measures against increasing security risks in modern

world, countries are considering these advantages and are shifting to new

generation identification systems based on biometric technologies.

1.2 STATEMENT OF RESEARCH PROBLEM

Biometric systems are becoming an important element (gateway) for information

security systems. Therefore, biometric systems themselves have to satisfy high

security requirements. Unfortunately, producers of biometric technologies do not

always consider security precautions. In publications regarding biometric

technologies, drawbacks and weaknesses of these technologies have been

discussed. Since biometrics form the technology basis for large scale and very

sensitive identification systems (e.g. passports, identification cards), the problem

of adequate evaluation of the security of biometric technologies is a current issue.

Another issue with face detection and recognition systems concerns individuals with nearly

identical faces, such as identical twins; in situations like this it is possible for the system to make

mistakes when processing a person's image and deciding whether to grant access to the rightful user.

1.3 OBJECTIVES OF THE STUDY

The objective of this project is to implement a face recognition system which first detects the faces present in single image frames and then identifies the particular person by comparing each detected face with an image database.

In addition to the main objective of this research work, the researcher also added other features to the new system, which are as follows:

1. Design a system that will help the organization maintain strong security in the work environment.

2. Highlight areas of vulnerability in the new system.

3. Develop a robust and secure database for the organization to enable it to secure its sensitive data and records.

1.4 SIGNIFICANCE OF THE STUDY

This study is primarily aimed at increasing efficiency in security; this research work will help users in maintaining data. The system will reduce the rate of fraudulent activities, as it keeps track of registered users and grants them access upon successful face recognition.

The knowledge obtained from this research will also assist the management of the organization, and this research work will be of help to upcoming researchers and students working in this field of study.

1.5 SCOPE OF THE STUDY

The scope of this study covers face detection and recognition, accessing previous records and matching them against captured data, updating records, and deleting records.

1.6 LIMITATION OF THE STUDY

Many limitations were encountered in the process of gathering information for the development of this project work. It was not an easy task; many constraints were encountered during the collection of data.

The limitations include the following constraints:

i. FINANCIAL CONSTRAINTS: The cost of sourcing the information and data involved in this work is high, in the sense that, as we all know, information is money.

ii. TIME: A lot of time was involved in writing and developing this work.

iii. POWER SUPPLY: Irregularities in power supply also dealt harshly with the researcher.

1.7 DEFINITION OF TERMS

Analysis: Breaking a problem into successively manageable parts for individual study.

Attribute: A data item that characterizes an object.

Data flow: Movement of data in a system from a point of origin to a specific destination, indicated by a line and arrow.

Data Security: Protection of data from loss, disclosure, modification or destruction.

Design: The process of developing the technical and operational specification of a candidate system for implementation.

File: A collection of related records organized for a particular purpose; also called a dataset.

Flow Chart: A graphical picture of the logical steps and sequence involved in a procedure or a program.

Form: A physical carrier of data or information.

Implementation: In system development, the phase that focuses on user training, site preparation and file conversion for installing a candidate system.

Maintenance: Restoring a system to its original condition.

Normalization: A process of replacing a given file with its logical equivalent; the objective is to derive simple files with no redundant elements.

Operating System: In a database context, machine-based software that facilitates the availability of information or reports through the DBMS.

Password: An identity authenticator; a key that allows access to a program, system or procedure.

Record: A collection of aggregates or related items of data treated as a unit.

Source Code: A procedure or format that allows enhancements on a software package.

CHAPTER TWO

LITERATURE REVIEW

2.0 INTRODUCTION

Biometrics is a rapidly developing branch of information technology. Biometric

technologies are automated methods and means for identification based on

biological and behavioral characteristics of an individual.

This chapter focuses on the ongoing challenges in the field of face recognition and some basic concepts of imaging applications. The empirical study, the review of related work, the system architectural framework, the challenges and the imaging concepts are all described in detail in this chapter.

2.1 REVIEW OF CONCEPTS

FACE IMAGE DETECTION MODEL

Face detection is the elementary step in a face recognition system and acts as a cornerstone for all facial analysis algorithms. Many algorithms exist to implement face detection; each has its own weaknesses and strengths. The majority of these algorithms suffer from the same difficulty: they are computationally expensive. An image is a combination of color or light intensity values, and analyzing these pixels for face detection is time consuming and hard to implement because of the enormous diversity of shape and pigmentation in the human face. Viola and Jones proposed an algorithm, called the Haar-cascade detector or simply Viola-Jones, to quickly detect any object, including human faces, using AdaBoost classifier cascades that are based on Haar-like features rather than raw pixels. The Viola-Jones algorithm is widely used in studies involving face processing because of its real-time capability, high accuracy, and availability as open-source software under the Open Computer Vision Library (OpenCV) [8]. Viola-Jones detectors can be trained to recognize any kind of solid object, including human faces and facial features such as eyes and mouths. OpenCV implements Viola-Jones and provides a pre-trained Haar cascade for face detection.
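As an illustration, a minimal sketch of using OpenCV's pre-trained Haar cascade for face detection might look as follows (the test image path and output file name are assumptions for illustration only):

import cv2

# Load the pre-trained frontal-face Haar cascade shipped with OpenCV
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

# Read a test image (hypothetical path) and convert it to grayscale,
# since the detector works on intensity values rather than color
image = cv2.imread('test.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detectMultiScale scans the image at several scales and returns
# bounding boxes (x, y, width, height) for every detected face
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite('detected.jpg', image)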

INFORMATION MANAGEMENT

Data can be defined as individual facts or raw values about something that can be organized to generate useful information for decision-making. Information is a stimulus that has meaning in some context for its receiver. When data is entered into and stored in a computer, it is generally referred to as information.

Graham (2001) noted that, with the move from local applications to web-based ones, the data we create and access will also need to undergo profound changes. Data and information undergo a management process to maintain their consistency and quality. Physical and logical security and quality assurance are emphasized to ensure the rational utilization and reliability of data and information.

DATABASES

According to Lane and William (2004), a database is part of a data management system. They define a database as a container of data files, such as product catalogs, inventories and item/customer records. They say that every business would be a failure without a secure and reliable data management system. They further say that information systems are at the heart of most businesses worldwide. According to them, it is not easy to have a secure system, but a system developer must ensure that this is achieved. They advise system developers to have clear subject areas, requirements and plans before they start designing systems. Advanced database systems have been written by leading specialists who have made significant contributions to the development of these technology areas.

According to Thierry (2006), the term database design can be used to describe many different parts of the design of an overall system. Principally, and most correctly, it can be thought of as the logical design of the database structures used to store data in the relational model: the tables and views. However, database design can also refer to the overall process of designing not just the base data structures but also the forms and queries used as part of the overall database application within the database management system (DBMS).

2.2 EMPIRICAL STUDY

2.2.1 CHALLENGES IN FACE RECOGNITION

Over the years, face recognition has gained rapid success with the development of new approaches and techniques. Due to this success, the rate of successful face recognition has increased to well above 90%. Despite all this success, face recognition techniques usually suffer from common challenges of image visibility. These challenges are variations in lighting conditions, skin color and face angle. The challenges are explained in a descriptive manner below.

2.2.1.1 Difference in Lighting Conditions

The lighting conditions in which pictures are taken are not always similar because of variations in time and place. An example of lighting variation is the difference between pictures taken inside a room and pictures taken outside. Due to these variations, the same person with similar facial expressions may appear differently in different pictures. As a result, if a person has only a single image stored in the face recognition database, matching a face detected under different lighting conditions can be difficult.

2.2.1.2 Skin Color Variations

Another challenge for skin-based face recognition systems is the difference in skin color due to the different races of people. Because of this variation, true skin pixels are sometimes filtered out along with the noise present in an image. In addition, false skin background/non-background pixels are not completely filtered out during the noise filtering process. It is therefore a tough task to choose a filter that covers the entire range of skin tones of different people while rejecting false skin noise.

2.2.1.3 Variation in Face Angle (Orientation Variation)

The angle of the human face from camera can be different in different situations.

A frontal face detection algorithm can’t work on non-frontal faces present in the

image because the geometry of facial features in frontal view is always different

than the geometry of facial features in non-frontal view. This is why orientation

variation remains a difficult challenge in face detection systems.

2.2.2 IMPORTANT CONCEPTS OF IMAGING APPLICATIONS

2.2.2.1 Image Processing

Image processing is a method of processing image values, more precisely the pixels in the case of digital images. The purpose of image processing is to modify the input image such that the output image changes parametrically, for example in colors and representation. Image processing is a basic part of face recognition involving digital images. The processing can change the image representation from one color space to another. It can also assign different color values to targeted pixels in order to keep areas of interest in the output image. Image processing is also used to increase or decrease image brightness and contrast and to perform other morphological operations.

2.2.2.2 Color Space

A color space is the representation of image colors in two or more color components. Typical examples of color spaces are the RGB, YCbCr and HSI color spaces. In each of these color spaces, the color of a pixel at any point in an image is a combination of three color components. These color components vary from 0 up to a maximum value that depends on the number of bits per pixel; with 8 bits per pixel, the different values in the range give different colors from black (0) to white (255).

2.2.2.3 RGB Color Space

The RGB color space is the combination of red, green and blue color components. With 24 bits per pixel, the range of R, G and B each varies from 0 to 255. If R, G and B are all 0, the resulting color is black; if R, G and B are all 255, the output color is white. The concept of the RGB color space is illustrated in Figure 2.2.3. Here the x-axis represents the blue color range, the y-axis represents the green color range and the z-axis represents the red color range. As explained above, black is represented at the origin and white is represented at the opposite corner where red, green and blue are each 255. Similarly, other color values appear at different corners of the cube corresponding to different RGB values.

Figure 2.2.3: RGB color model
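As a small illustration of these color space ideas, the sketch below converts an image from RGB (OpenCV's BGR ordering) to grayscale and YCrCb and inspects the component values of one pixel (the image path is an assumption):

import cv2

image = cv2.imread('sample.jpg')           # loaded as BGR, 8 bits per channel
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
ycrcb = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)

# Each component ranges from 0 (black) to 255 (white) with 8 bits per pixel
b, g, r = image[0, 0]
print("Blue:", b, "Green:", g, "Red:", r)
print("Gray level:", gray[0, 0])
print("Y, Cr, Cb:", ycrcb[0, 0])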

2.3 REVIEW OF RELATED WORK

Wayman has studied the technical testing of biometric devices and divided it into five subsystems: data collection, signal processing, transmission, data storage and decision. This makes the potential attack points clearer. Later work introduces three more components: administrative management, information technologies, and presentation of a token. In total, 20 potential attack points and 22 vulnerabilities are identified. All biometric systems require administrative supervision to some extent. The level of supervision may vary between systems, but it is not difficult to imagine the related vulnerabilities. Vulnerabilities in this area may devalue even the best planned system. A biometric system may or may not be related to an IT environment, but it is usually part of a larger system. Interaction with an IT environment may introduce new vulnerabilities not existing in the previous scheme. A token is required in some biometric systems that make final decisions based on the presented biometric characteristic and information on the token. A token may introduce a potential attack point to the biometric system. A smart card containing biometric information is an example of a token used in this kind of system. There are several other schemes for vulnerability classification [8]. Considering them as well, a generalized list of vulnerabilities of biometric systems is suggested.

Administration: Intentional or unintentional administrative mistakes.

User: A legitimate user wants to upgrade his privileges to the administrative


level.

Enrollment: Breaking registration procedures.

Spoofing: A fake biometric is used for authentication as a legitimate user.

Mimicry: Attacker mimics the biometric characteristics of the legitimate user.

Undetected: Attacks undetected by the system may encourage new attacks.

Fail secure: Result of abnormal utilization conditions of biometric system or IT

environment.

Power: Power cuts.

Bypass: Bypassing biometric system for access. This can be achieved by

surpassing physical barriers, forcing a legitimate user to present his biometric to

the sensor, or by cooperation of legitimate user.

Corrupt attack: Weakening the system by making changes in the IT

environment or biometric system. Modification or replacement of system

parameters is an example.

Degrade: Certain software in the IT environment decreases the system’s

security level.

Tamper: Counterfeiting the hardware of the system.

Residual: Latent fingerprints may be used to make artificial fingerprints or

accepted directly by the sensor.

Cryptological attack: Encryption can be broken in data transmission and this

biometric data can be used for another type of attack (e.g. replay attack).

Consequently, there are many attack points and vulnerabilities in biometric

systems. Using the given list of them, vulnerabilities for specific systems can be

identified. A biometric system may not have all of the vulnerabilities or attack

points. The list is general enough and can be applied to any system easily. For a

specific system, it is essential to consider the properties of the system in order to

identify the

vulnerabilities.

The aim of vulnerability analysis is to determine the possibility of weaknesses of biometric systems being exploited in an application environment. Penetration tests are carried out to determine whether an imposter with a certain attack potential can exploit vulnerabilities in the application environment.

2.4 SYSTEM ARCHITECTURAL STRUCTURE/FRAMEWORK

This section describes the architectural design of the FDRS, which involves the Mono (sequential) and Parallel face recognition concepts.

Face recognition is the hardest algorithm because it has many steps before the actual recognition starts. A face must be detected to increase the possibility of recognition and to speed up the process by focusing on one location in the image. To detect a face, two steps must be completed before recognition. The first step is to resize the image to a standard size (determined by the administrator), apply some filters to improve quality, and convert the image into a compatible form. The next step is face detection, in which the image to be recognized is loaded into memory together with an Extensible Markup Language (XML) cascade file used to detect the face. Finally comes the recognition step, in which the extracted face is compared against the training faces after they have been loaded into memory and their face features extracted by a recognition algorithm.
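A minimal sketch of this detect-then-recognize pipeline, assuming OpenCV's Haar cascade XML file for detection and the LBPH recognizer used later in this project (the standard size, file paths and function name are illustrative assumptions):

import cv2

STANDARD_SIZE = (200, 200)  # assumed size chosen by the administrator

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read('trainer.yml')  # previously trained model, assumed to exist

def recognize(image_path):
    # Step 1: resize to the standard size, filter, and convert to grayscale
    image = cv2.imread(image_path)
    image = cv2.resize(image, STANDARD_SIZE)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)

    # Step 2: detect the face using the Haar cascade loaded from the XML file
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return None

    # Step 3: compare the extracted face against the training faces
    x, y, w, h = faces[0]
    return recognizer.predict(gray[y:y+h, x:x+w])  # (label, confidence)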

Any operating system (OS) has multiple ways to deal with a process, depending on its structure. Some processes have a single thread, while others have a multithreaded architecture (threads can run simultaneously).

Every biometric system is composed of four main modules:

Sensor module: A sensor acquires the biometric characteristic of an individual and makes a digital description of it.

Feature extraction module: The input sample is processed to generate a compressed representation called a template. The template is stored in a database or on a smart card.

Matching module: This module compares the presented biometric sample with the template. In verification mode only one match is performed, resulting in a single matching score; in identification mode the presented characteristic is matched against many templates, generating many matching scores.

Decision module: This module accepts or rejects the user depending on the matching score or security threshold.
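A minimal sketch of the matching and decision modules, contrasting verification (a single comparison) with identification (comparison against many templates); the distance function and threshold value are illustrative assumptions, not part of this project's code:

import numpy as np

THRESHOLD = 0.6  # assumed security threshold

def matching_score(sample, template):
    # A smaller distance between feature vectors means a better match
    return float(np.linalg.norm(np.asarray(sample) - np.asarray(template)))

def verify(sample, template):
    # Verification mode: one match, one score, accept or reject
    return matching_score(sample, template) < THRESHOLD

def identify(sample, templates):
    # Identification mode: match against many templates and keep the best
    scores = {name: matching_score(sample, t) for name, t in templates.items()}
    best = min(scores, key=scores.get)
    return best if scores[best] < THRESHOLD else None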

Figure 1 presents such a system and possible attack points.

Figure 1: Attack points of a biometric system

Ratha et al. identify eight attack points in this scheme. Let us briefly describe the

characteristics of these attacks, denoted by numbers 1-8 [5]:

1. Presenting a fake biometric sample to the sensor: A fake biometric sample

such as a fake finger, image of a signature, or a face mask is presented to the

sensor in order to get into the system.

2. Replay of stored digital biometric signals: A stored signal is replayed into

the system ignoring the sensor. For instance, replay of an old copy of a fingerprint

image or a recorded audio signal.

3. Denial of feature extraction: A feature set is formed by the imposter using

a Trojan horse attack.

4. Spoofing the biometric feature: Features extracted from input signal are

replaced by a fake set of features.

5. Attacking matching module: Attacks on matching module result in

replacement of matching scores by fake ones.


6. Spoofing templates in database: Database of saved templates can be local

or distant. The attacker tries to fake one or more biometric templates in the

database. As a result, either a fake identity is authorized or a rightful user faces a

denial of service.

7. Attacking the channel between the template database and matching

module: Stored templates are transmitted through a communication channel to

the matching module. Data in the channel can be changed by attacker.

8. Attacking the final decision process: If the final decision can be inserted or

blocked by the hacker, then the function of the authentication system will be overridden.

Structure, architecture, production or implementation of a system may introduce

a vulnerability to the biometric system. In some cases a secondary system may be

integrated to the biometric system which possibly makes the biometric system

vulnerable. There are five points of vulnerabilities:

1. Operating systems;

2. Database management systems (and application software);

3. Biometric application software;

4. Software for sensor;


5. Hardware and drivers.

Other main aspects can be categorized as follows:

1. management of operations;

2. management of parameters (especially FAR/FRR parameters);

3. system configuration.

CPU Parallel Face Recognition

In the parallel face recognition process, two tasks can be done simultaneously: the process of loading the training face images into memory and the process of extracting face features from the training face images.

Figure 3. CPU Parallel Face Recognition
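A minimal sketch of this idea in Python, loading training images and extracting a simple stand-in feature (a grayscale histogram) concurrently with a thread pool; the helper names and paths are illustrative assumptions, not the project's actual code:

import concurrent.futures
import cv2

def load_and_extract(path):
    # Load one training image in grayscale and compute its histogram,
    # standing in here for a face-feature vector
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    features = cv2.calcHist([gray], [0], None, [256], [0, 256])
    return path, features

def parallel_training(image_paths):
    # Loading and feature extraction run concurrently across images
    # using a pool of worker threads
    with concurrent.futures.ThreadPoolExecutor() as pool:
        return list(pool.map(load_and_extract, image_paths))

# Example usage with assumed paths:
# features = parallel_training(['dataset/User.1.1.jpg', 'dataset/User.1.2.jpg'])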

2.5 SUMMARY OF THE REVIEW

The FDRS has been implemented in four variants (CPU Mono, CPU Parallel, Hybrid Mono and Hybrid Parallel). The Fisherface algorithm is employed for the recognition phase and the Haar-cascade algorithm for the detection phase. In addition, these implementations are based on industry-standard tools, involving Open Computer Vision (OpenCV) version 2.4.8, Microsoft .NET Framework 4, the VB programming language, EmguCV (Windows universal CUDA) version 2.9.0.1922, and heterogeneous processing units. The experiment consists of applying 400 images of 40 persons' faces (10 images per person) and defining, training and recognizing these images on the four variants; the experiment took place in the same environment (a laptop computer with an Intel Core i7 processor at 2.2 GHz, an Nvidia GeForce GT 630M GPU and 7 GB RAM). The speed-up factor is measured with respect to the CPU Mono implementation (the slowest of all four variants). The practical results demonstrated that the Hybrid Parallel recognition is the fastest variant of all, giving an overall speed-up of around 82 times. The CPU Parallel variant gives an overall speed-up of around 71 times. Finally, the Hybrid Mono variant gives a small improvement of about 1.04 times. Thus, employing parallel processing on modern computer architectures can accelerate a face recognition system.

CHAPTER THREE

SYSTEM ANALYSIS AND METHODOLOGY

3.1 INTRODUCTION

The methodology applied in this research is an Artificial Intelligence (AI) and

Computer Vision-based Methodology. This involves acquiring image data,

applying face detection algorithms (e.g., Haar cascades or deep learning models),

preprocessing, and applying face recognition techniques using machine learning

models.

3.2 METHOD OF DATA COLLECTION

The procedures used in data collection and information gathering are outlined and analyzed here. Data was carefully collated and objectively evaluated in order to define, and ultimately provide solutions to, the problems on which the research work is based.

During the research work, data collection was carried out in many places. In

gathering and collecting necessary data and information needed for system

analysis, two major fact-finding techniques were used in this work and they are:

a. Primary source

b. Secondary source

Primary source:

The primary source refers to sources of original data; the researcher made use of empirical approaches such as personal interviews and questionnaires.

This involved a series of orally conducted interviews with selected users. Some users were also interviewed with a view to getting information about their opinions on how best to develop the system.

Secondary Source:

Perusals through online journals and e-books, as well as visits to relevant websites, dictionaries and other research materials, increased my knowledge and aided my comprehension of the processes involved.

3.3 PROGRAM STRUCTURE

3.4 FILE MAINTENANCE ERROR HANDLING MODULE

The researcher took time to design an error handling module for the new system: an algorithm describing how errors will be tracked down so as to handle exception cases. This module handles exceptions in sequence, so that each unit in turn tries to handle the error; if the error cannot be handled by any of the units, it propagates as an outbound error.
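A minimal sketch of such sequential exception handling in Python; the unit names are illustrative assumptions, not the project's actual modules:

class CaptureUnit:
    def try_handle(self, error):
        # Handles camera-related errors only (illustrative)
        return isinstance(error, IOError)

class RecognitionUnit:
    def try_handle(self, error):
        # Handles value errors raised by the recognizer (illustrative)
        return isinstance(error, ValueError)

def handle_in_sequence(error, units):
    # Each unit tries in turn to handle the error; if none succeeds,
    # the error is re-raised as an outbound error
    for unit in units:
        if unit.try_handle(error):
            return True
    raise error

# Example usage:
# handle_in_sequence(ValueError("bad frame"), [CaptureUnit(), RecognitionUnit()])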

3.5 SYSTEMS/PROCESS ARCHITECTURE

A system architecture or systems architecture is the conceptual model that defines

the structure, behavior, and other views of a system. An architecture description

is a formal description and representation of a system, organized in a way that

supports reasoning about the structures and behaviors of the system.

A system architecture can comprise system components, the externally visible

properties of those components, the relationships (e.g. the behavior) between

them. It can provide a plan from which products can be procured, and systems

developed, that will work together to implement the overall system. There have

been efforts to formalize languages to describe system architecture; collectively,

these are called architecture description languages (ADLs).

Process architecture is the structural design of general process systems and applies

to fields such as computers (software, hardware, networks, etc.), business

processes (enterprise architecture, policy and procedures, logistics, project

management, etc.), and any other process system of varying degrees of

complexity.

Processes are defined as having inputs, outputs and the energy required to

transform inputs to outputs. Use of energy during transformation also implies a

passage of time: a process takes real time to perform its associated action. A

process also requires space for input/output objects and transforming objects to

exist: a process uses real space.

A process system is a specialized system of processes. Processes are composed of

processes. Complex processes are made up of several processes that are in turn

made up of several processes. This results in an overall structural hierarchy of

abstraction. If the process system is studied hierarchically, it is easier to

understand and manage; therefore, process architecture requires the ability to

consider process systems hierarchically. Graphical modeling of process

architectures can be done with Dualistic Petri nets. Mathematical treatment of

process architectures may be found in CCS and the π-calculus.

Figure 3.5 process design structure

3.6 COMPONENTS OF SYSTEM MODEL

The internal architectural design of computers differs from one system model to

another. However, the basic organization remains the same for all computer

systems. The following five units (also called "The functional units") correspond

to the five

basic operations performed by all computer systems.

Input Unit

Data and instructions must enter the computer system before any computation can

be performed on the supplied data. The input unit that links the external

environment with the computer system performs this task. Data and instructions

enter input units in forms that depend upon the particular device used. For

example, data is entered from a keyboard in a manner similar to typing, and this

differs from the way in which data is entered through a mouse, which is another

type of input device. However, regardless of the form in which they receive their

inputs, all input devices must provide a computer with data that are transformed

into the binary codes that the primary memory of the computer is designed to

accept. This transformation is accomplished by units called input interfaces.

Input interfaces are designed to match the unique physical or electrical

characteristics of input devices to the requirements of the computer system.

In short, an input unit performs the following functions.

1. It accepts (or reads) the list of instructions and data from the outside world.

2. It converts these instructions and data into a computer-acceptable format.

3. It supplies the converted instructions and data to the computer system for

further processing.

Output Unit

The job of an output unit is just the reverse of that of an input unit. It supplies

information and results of computation to the outside world. Thus it links the

computer with the external environment. As computers work with binary code,

the results produced are also in the binary form. Hence, before supplying the

results to the outside world, it must be converted to human acceptable (readable)

form. This task is accomplished by units called output interfaces.

In short, the following functions are performed by an output unit.

1. It accepts the results produced by the computer which are in coded form
and

hence cannot be easily understood by us.

2. It converts these coded results to human acceptable (readable) form.

3. It supplies the converted results to the outside world.

Storage Unit

The data and instructions that are entered into the computer system through input

units have to be stored inside the computer before the actual processing starts.

Similarly, the results produced by the computer after processing must also be kept

somewhere inside the computer system before being passed on to the output units.

Moreover, the intermediate results produced by the computer must also be

preserved for ongoing processing. The Storage Unit or the primary / main storage

of a computer system is designed to do all these things. It provides space for

storing data and instructions, space for intermediate results and also space for the

final results.

In short, the specific functions of the storage unit are to store:

1. All the data to be processed and the instruction required for processing

(received from input devices).

2. Intermediate results of processing.


3. Final results of processing before these results are released to an output
device.

3.7 ACTIVITY DIAGRAM

Activity diagrams are graphical representations of workflows of stepwise

activities and actions with support for choice, iteration and concurrency. In the

Unified Modeling Language (UML), activity diagrams are intended to model both

computational and organizational processes (i.e. workflows). Activity diagrams

show the overall flow of control. In FDRS, the main activity is the training phase

and the other one is the recognition phase. The activity diagram for the training

phase is shown in Figure 1, and the corresponding steps' activities are tabulated

in Table 1. Similarly, the activity diagram for the recognition phase is shown in

Figure 2, and the corresponding steps' activities are tabulated in Table 2. Also, the

pseudo code for

the training and the recognition phases are shown in Figure 3 and Figure 4

respectively.

Figure 1. Activity Diagram of Training Phase

Figure 2. Activity Diagram of Recognition Phase

Table 1. Activity Table of Training Phase in FDRS

No  Activities

1   The image is sent from the folder to start the module's work.

2   The image is resized to a fixed size (determined by the administrator); then some image processing filters are applied to increase image quality.

3   The Haar-cascade algorithm is applied to the new image to detect the face and remove the other parts of the image.

4   The image is cloned and the subject name is inserted.

5   A file is created on the hard disk for the new image, and the image path and image name are saved in the XML database.

Table 2. Activity Table of Recognition Phase in FDRS

No  Activities

1   The image is sent from the folder to start the module's work.

2   The image is resized to a fixed size (determined by the administrator); then some image processing filters are applied to increase image quality.

3   The new image goes to the GPU, where the Haar-cascade algorithm is applied to detect the face and remove the other parts of the image; the result is then returned to the CPU.

4   The training images are loaded from the database into RAM and their features are extracted in parallel. Next, the CPU extracts the face features from the new face and compares them with the face features of the training images.

5   If a face from the database is close to the detected face, the system recognizes the person by displaying the name in the GUI; otherwise, the system displays a message for an unknown person.
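A minimal sketch of step 5 of the training phase, saving the cropped face to disk and recording its path and subject name in an XML database; the file and directory names are illustrative assumptions:

import os
import cv2
import xml.etree.ElementTree as ET

def save_training_face(face_img, subject_name, db_path='faces_db.xml',
                       out_dir='training_faces'):
    # Write the cropped face image to the hard disk
    os.makedirs(out_dir, exist_ok=True)
    image_path = os.path.join(out_dir, f'{subject_name}.jpg')
    cv2.imwrite(image_path, face_img)

    # Load (or create) the XML database and append the path and name
    if os.path.exists(db_path):
        tree = ET.parse(db_path)
        root = tree.getroot()
    else:
        root = ET.Element('faces')
        tree = ET.ElementTree(root)
    entry = ET.SubElement(root, 'face')
    ET.SubElement(entry, 'name').text = subject_name
    ET.SubElement(entry, 'path').text = image_path
    tree.write(db_path)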

3.8 PROGRAM FLOW CHART/DIAGRAM

Figure 3.8.1: Program flow chart

3.9 SYSTEM STRUCTURAL DESIGN

3.9.1 FILE DESIGN

An Access database was used for storing the information in this project. The database was integrated into the system so that the program can access and update the files.

In the course of the design, a table was created in the database.

Table design

Field Name   Type      Size
Id           Int       11
User Name    Text      20
Sex          Text      12
Image        Object    10
Template     Object    3 MB
Age          Integer   2

Fig 3.9.1: Structure of the Database File
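As an illustration only, an equivalent table could be created with Python's built-in sqlite3 module; the project itself uses an Access database, so this is just a stand-in to show the schema:

import sqlite3

# Stand-in schema mirroring the table above; BLOB columns hold the
# face image and the recognition template
conn = sqlite3.connect('face_system.db')
conn.execute("""
    CREATE TABLE IF NOT EXISTS users (
        Id        INTEGER PRIMARY KEY,
        UserName  TEXT,
        Sex       TEXT,
        Image     BLOB,
        Template  BLOB,
        Age       INTEGER
    )
""")
conn.commit()
conn.close()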

3.10 SYSTEM FLOWCHARTS

Fig 3.10: System Flowcharts

3.11 DESCRIPTION OF THE NEW SYSTEM

Face detection is the process of finding a human face in an image and, if one is present, returning its location. It is a special case of object detection. Objects can be anything present in an image, including human and non-human things like trees, buildings, cars and chairs; however, objects other than human beings are less often used in advanced applications. Finding and locating the human face in an image is therefore an interesting and important application of modern times. However, locating a face in an image is not an easy task, since images do not contain only faces but other objects too. Moreover, some scenes are very complex, and filtering out the unwanted information remains a tough task.

Once the presence and the location of the face are found, this information is used to implement more sophisticated applications such as recognition and video surveillance. Hence, the success of these applications depends heavily on the detection rate of the face detection system. Face detection in an image can be done by two classes of methods, image-based and feature-based. Image-based methods treat the whole image as a group of patterns, and each region is classified as face or non-face. In this method a window is scanned across the parts of the image. On every scan an output value is computed and compared with a threshold; if it is above the threshold, the current part of the image is considered to be a face. The size of this window is fixed and chosen according to experiments with the sizes of the training images. The advantage of image-based methods is a higher face detection hit rate; however, they are slow in terms of computation compared with feature-based methods. Eigenfaces and neural networks are two examples of the image-based approach. In feature-based methods, features such as skin color, eyes, nose and mouth are first separated from the rest of the image regions. With the extraction of these features, the uninteresting regions of the image do not need to be processed further, and the processing time is therefore significantly reduced. In feature-based methods the skin-color pixels are separated first, because color processing is faster and leads to the separation of the other features. The advantage of feature-based methods is fast results, although they are less accurate than image-based methods. Another advantage is the ease of implementation in real-time applications.

In this project, the feature-based method is implemented. The initial step is to get the image as input, and then the feature-based algorithm is applied to detect the face or faces. Once the face region is determined, the next step is to mark a boundary around it.
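As an illustration of the feature-based idea of separating skin-color pixels first, a minimal OpenCV sketch using a YCrCb threshold might look like this; the threshold values are common rule-of-thumb assumptions, not values taken from this project:

import cv2
import numpy as np

def skin_mask(bgr_image):
    # Convert to YCrCb, where skin tones cluster in a compact range
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Remove small noise blobs before locating candidate face regions
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Example usage with an assumed image path:
# image = cv2.imread('input.jpg')
# mask = skin_mask(image)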

CHAPTER FOUR

SYSTEM IMPLEMENTATION AND TESTING

This chapter details the implementation of the face detection and recognition
system. The system was developed using Python and OpenCV, a powerful
computer vision library. The implementation process involved collecting facial
data, training the recognition model, and real-time face recognition through a
webcam interface.

The implementation was divided into three major modules:

1. Data Collection – capturing face images from the webcam and storing
them.
2. Model Training – training the LBPH (Local Binary Pattern Histogram)
recognizer.
3. Real-time Recognition – identifying faces and labeling them in real-time.

4.1 LANGUAGE CHOICE

The system was developed using Python, an interpreted high-level programming

language with extensive support for image processing and computer vision

libraries. The OpenCV (Open Source Computer Vision) library was used for

image capture, face detection, and recognition due to its efficiency and support

for various algorithms.

Advantages of Python:

• Simple syntax and readability.


• Rich library support.
• Cross-platform compatibility.
• Easy integration with databases and GUI frameworks.
4.2 CHOICE OF THE ENVIRONMENT

• Development Environment: Visual Studio Code (VSCode) was used for


code writing and debugging. It provides extensions for Python, syntax
highlighting, and terminal integration for ease of execution.
• Runtime Environment: The system runs on Windows 10 with Python
installed, along with OpenCV and NumPy libraries.

4.3 OUTPUT SPECIFICATION AND DESIGN

• Output 1: During data collection, face images are saved in the dataset
directory as User.ID.sample#.jpg.
• Output 2: After training, a model file trainer.yml is generated and
stored.
• Output 3: During real-time recognition, detected faces are displayed with
bounding boxes and labels (“User X” or “Unknown”) depending on
recognition accuracy.

4.4 SYSTEM REQUIREMENTS

In order to realize this project, the following hardware and software components were used.

4.4.1 Hardware Requirements

Component     Specification
Processor     Intel Core i3 or higher
RAM           Minimum 4 GB
Hard Disk     Minimum 50 GB free space
Camera        Integrated/External Webcam

4.4.2 Software Requirements

The software requirements include:

Software              Version
Operating System      Windows 10 or higher
Python Interpreter    Python 3.8 or higher
OpenCV                OpenCV 4.x
NumPy                 NumPy 1.19+
Visual Studio Code    Latest Version
4.5 CODING

1. collect_faces(user_id) – collects 20 face samples.


2. train_recognizer() – trains the LBPH recognizer with collected
images.
3. recognize_faces() – detects and identifies faces in real-time.

The code ensures modularity and error handling for robustness.

4.5.1 Code Testing

Each module was tested independently:

• Data collection was tested by capturing various facial positions and lighting
conditions.
• Model training was verified by ensuring trainer.yml is generated.
• Real-time recognition was tested with multiple users and verified for
accuracy.
4.5.2 Testing

1. Unit Testing: Each function (face detection, image saving, recognition) was
tested independently.
2. Integration Testing: Ensured seamless interaction between data collection,
training, and recognition modules.
3. System Testing: The complete system was tested to validate functionality
under real-world conditions.

4.6 LEVELS OF TESTING FLOW CHART
Fig 4.6.1: Unit Testing

Unit testing focuses verification effort on the smallest unit of software, i.e. the module. Using the detailed design and the process specification, testing is done to uncover errors within the boundary of the module. All modules must pass their unit tests before integration testing begins. In this project, each service can be thought of as a module.

4.6.1 Integration Testing

The system was integrated and tested with multiple users, ensuring accurate

detection and correct labeling. The model successfully differentiated known users

from unknown ones in real-time video feed.

4.6.2 System Testing

Here the entire software system is tested. The reference document for this process is the requirements document, and the goal is to see whether the software meets its requirements. The complete face detection and recognition system was tested to confirm that the requirements of the project have been satisfied.

4.6.3 Database Implementation

Although the system is file-based, images are stored systematically in a local

dataset directory. Each image is labeled with a user ID to track user identity.

Model data is stored in trainer.yml, which acts as a reference for recognition.

4.7 PROGRAM PSEUDO CODE

Figure 4.1. The Pseudo Code for Training Phase
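The pseudo code figure is not reproduced here; a minimal Python sketch of the training phase it describes, consistent with the appendix listing (the paths and file-name format follow this project's code), would be roughly:

import os
import cv2
import numpy as np

def train(dataset_dir='dataset', model_path='trainer.yml'):
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    samples, labels = [], []
    # For every stored face image, read it in grayscale and take its
    # user ID from the file name (format: User.<id>.<sample>.jpg)
    for name in os.listdir(dataset_dir):
        gray = cv2.imread(os.path.join(dataset_dir, name), cv2.IMREAD_GRAYSCALE)
        samples.append(gray)
        labels.append(int(name.split('.')[1]))
    # Train the LBPH recognizer and persist the model to disk
    recognizer.train(samples, np.array(labels))
    recognizer.write(model_path)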

4.8 SYSTEM DOCUMENTATION

User Guide

1. Launching the System:


a. Open VSCode.
b. Run the script via terminal: python face_recognition.py.

2. Collecting Data:
a. Select option 1.
b. Enter a numeric User ID.
c. Position face in front of the camera until 20 samples are collected.
3. Training Model:
a. Select option 2.
b. The system will train and save trainer.yml.
4. Recognition:
a. Select option 3.
b. The camera detects and identifies known faces.

CHAPTER FIVE

SUMMARY, CONCLUSION AND RECOMMENDATION

5.1 SUMMARY

Recognition is the process of identifying the particular person present in the

image. Recognition is the most powerful and interesting application of image

processing and has gained rapid success for over 30 years. Basically, the face

detection system locates the face present in the image and after comparing with

the database, if present, the identity of that person is revealed. For this purpose,

sample images are already stored in the database.

The location of the database is set in the recognition system so that each and every image present in the database gets searched. If there is a match, the image is taken from the database and displayed at the output along with the detected-face image. In addition, the name of the matching person is displayed. On the other hand, if there is no hit, no image is displayed and a no-match message is shown instead.

The method used for recognition is the Image Search method. It takes as input an image in which a face was found and a directory of images. A signature is calculated for the input image, and then a signature is also calculated for each image in the search directory. The digital signature attempts to assign a unique value to each image based on the contents of the image. A distance is then calculated for each image in the search directory in relation to the input image; the distance is the difference between the signature of the source image and the signature of the image it is being checked against. The image with the smallest distance is returned as the matching image and displayed at the output.
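A minimal sketch of such a signature-and-distance search, using a normalized grayscale histogram as the signature; this is an illustrative choice, since the report does not specify the exact signature function:

import os
import cv2
import numpy as np

def signature(image_path):
    # The signature assigns a value to the image based on its contents;
    # a normalized grayscale histogram stands in for it here
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256])
    return cv2.normalize(hist, hist).flatten()

def find_closest(input_path, search_dir):
    # Compute the distance of every image in the directory to the input
    # image and return the one with the smallest distance as the match
    target = signature(input_path)
    best_name, best_dist = None, float('inf')
    for name in os.listdir(search_dir):
        dist = np.linalg.norm(target - signature(os.path.join(search_dir, name)))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name, best_dist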

5.2 CONCLUSION

Image face detection is implemented first, and the same system is then used to detect faces from video sources. The recognition system has also been implemented on image files. The accuracy achieved by the system is above 80%. The system performs well on pictures of people of different races and skin colors. It detects the frontal faces present in image files well, but it is not able to detect side-view faces. Failure to detect faces in pictures with very dark background colors is also a limitation of the system, as with other systems. Overall, it is a good project through which I have gained valuable knowledge of image processing and the steps required for successful face detection. As a future goal, most parts of the project could be further automated for surveillance and vision-based applications.

5.3 RECOMMENDATION

At the end of this research work, the researcher finds this work interesting and recommends it to any security information management institution. The researcher also recommends that any further work carried out on this topic should consider adding real-time facial recognition and voice detection to enhance the security level of the system.

APPENDIX I SOURCE LISTING

import cv2
import numpy as np
import os

# Paths for face images and trained model
dataset_path = 'dataset'
if not os.path.exists(dataset_path):
    os.makedirs(dataset_path)

recognizer = cv2.face.LBPHFaceRecognizer_create()

# Initialize face detector
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades +
                                     'haarcascade_frontalface_default.xml')


# -------- STEP 1: Data Collection --------
def collect_faces(user_id):
    cam = cv2.VideoCapture(0)
    sample_count = 0
    while True:
        ret, frame = cam.read()
        if not ret:
            break

        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)

        for (x, y, w, h) in faces:
            sample_count += 1
            # Save the cropped grayscale face as User.<id>.<sample>.jpg
            cv2.imwrite(f"{dataset_path}/User.{user_id}.{sample_count}.jpg",
                        gray[y:y+h, x:x+w])
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)

        cv2.imshow('Collecting Faces', frame)

        if cv2.waitKey(1) == 27 or sample_count >= 20:  # ESC to quit or 20 samples collected
            break

    cam.release()
    cv2.destroyAllWindows()


# -------- STEP 2: Train Recognizer --------
def train_recognizer():
    image_paths = [os.path.join(dataset_path, f) for f in os.listdir(dataset_path)]
    face_samples = []
    ids = []

    for image_path in image_paths:
        gray_img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        # The user ID is the second field of the file name (User.<id>.<sample>.jpg)
        id = int(os.path.split(image_path)[-1].split('.')[1])
        faces = face_cascade.detectMultiScale(gray_img)

        for (x, y, w, h) in faces:
            face_samples.append(gray_img[y:y+h, x:x+w])
            ids.append(id)

    recognizer.train(face_samples, np.array(ids))
    recognizer.write('trainer.yml')
    print("[INFO] Training completed and model saved!")


# -------- STEP 3: Real-time Face Recognition --------
def recognize_faces():
    recognizer.read('trainer.yml')
    cam = cv2.VideoCapture(0)

    while True:
        ret, frame = cam.read()
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)

        for (x, y, w, h) in faces:
            id, confidence = recognizer.predict(gray[y:y+h, x:x+w])
            # Lower LBPH confidence values mean a closer match
            label = f"User {id}" if confidence < 70 else "Unknown"
            color = (0, 255, 0) if label != "Unknown" else (0, 0, 255)
            cv2.putText(frame, label, (x, y-10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.9, color, 2)
            cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)

        cv2.imshow('Face Recognition', frame)
        if cv2.waitKey(1) == 27:  # ESC to quit
            break

    cam.release()
    cv2.destroyAllWindows()


# --------- MAIN ---------
print("Choose an option:")
print("1. Collect New Face Data")
print("2. Train Recognizer")
print("3. Real-Time Recognition")
choice = input("Enter choice (1/2/3): ")

if choice == '1':
    user_id = input("Enter user ID (use numbers only): ")
    collect_faces(user_id)
elif choice == '2':
    train_recognizer()
elif choice == '3':
    recognize_faces()
else:
    print("Invalid choice!")

APPENDIX II SCREEN SHOTS

