KNUCKLE PATTERN DETECTION

USING CNN

A PROJECT REPORT

Submitted by

S. Jeslin Jaicy (811517104047)


A. M. Mahaboob Nisha (811517104062)

in partial fulfillment for the award of the

degree of

BACHELOR OF ENGINEERING

IN

COMPUTER SCIENCE AND ENGINEERING

K.RAMAKRISHNAN COLLEGE OF
ENGINEERING (AUTONOMOUS),
SAMAYAPURAM, TIRUCHIRAPPALLI – 621 112.

APRIL 2021

K.RAMAKRISHNAN COLLEGE OF ENGINEERING

BONAFIDE CERTIFICATE

Certified that this project report “KNUCKLE PATTERN DETECTION


USING CNN” is the bonafide work of “S.Jeslin Jaicy, A.Mahaboob Nisha”
who carried out the project work under my supervision.

SIGNATURE
Dr. T.M. NITHYA, M.E., Ph.D.,
HEAD OF THE DEPARTMENT
Department of Computer Science & Engg.
K.Ramakrishnan College of Engineering,
Samayapuram, Trichy-621112.

SIGNATURE
Dr. B. KIRAN BALA, M.E., M.B.A., Ph.D.,
Assistant Professor
SUPERVISOR
Department of Computer Science & Engg.
K.Ramakrishnan College of Engineering,
Samayapuram, Trichy-621112.

Submitted for the Project Viva-Voce Examination held on ……………

INTERNAL EXAMINER EXTERNAL EXAMINER


ACKNOWLEDGEMENT

We thank the almighty GOD, without whom it would not have been
possible for us to complete our project.

We wish to address our profound gratitude to


Dr.K.RAMAKRISHNAN, Chairman, K.Ramakrishnan College of
Engineering, who encouraged and gave us all help throughout the course.

We express our hearty gratitude and thanks to our honourable

Executive Director Dr. S. KUPPUSAMY, B.Sc., MBA., Ph.D.,
K.Ramakrishnan College of Engineering.

We are glad to thank our principal Dr.D.SRINIVASAN,M.E., Ph.D.,


FIE., MIIW., MISTE., MISAE., C.Engg., for giving us permission to
carry out this project.

We wish to convey our sincere thanks to Dr. T.M. NITHYA, M.E.,

Ph.D., Head of the Department, Computer Science and Engineering, for
giving us constant encouragement and advice throughout the course.

We are grateful to Dr. B. KIRAN BALA, M.E., M.B.A., Ph.D., Assistant

Professor, Department of Computer Science and Engineering,
K.Ramakrishnan College of Engineering, for her guidance and valuable
suggestions during the course of study.

Finally, we sincerely thank all our staff members, colleagues,

parents and friends for their co-operation and help at various stages of
this project work.

DECLARATION

I hereby declare that the work entitled “KNUCKLE PATTERN

DETECTION USING CNN”, submitted in partial fulfillment of the
requirement for the award of the degree of B.E., Anna University,
Chennai, is a record of my own work carried out during the
academic year 2020-2021 under the supervision and guidance of Dr.
B. KIRAN BALA, M.E., M.B.A., Ph.D., Assistant Professor, Department
of Computer Science and Engineering, K.Ramakrishnan College of
Engineering. The extent and source of information derived from the
existing literature have been indicated through the dissertation at the
appropriate places. The matter embodied in this work is original and has
not been submitted for the award of any degree or diploma, either in this or
any other University.

JESLIN JAICY. S
(811517104047)

I certify that the declaration made by the above candidate is true.

Dr.B.KIRAN BALA M.E., M.B.A.,


Ph.D.,
Assistant Professor/CSE
DECLARATION

I hereby declare that the work entitled “KNUCKLE PATTERN

DETECTION USING CNN”, submitted in partial fulfillment of the
requirement for the award of the degree of B.E., Anna University,
Chennai, is a record of my own work carried out during the
academic year 2020-2021 under the supervision and guidance of
Dr. B. KIRAN BALA, M.E., M.B.A., Ph.D., Assistant Professor,
Department of Computer Science and Engineering, K.Ramakrishnan
College of Engineering. The extent and source of information derived
from the existing literature have been indicated through the
dissertation at the appropriate places. The matter embodied in this work is
original and has not been submitted for the award of any degree or diploma,
either in this or any other University.

MAHABOOB NISHA A.M
(811517104062)

I certify that the declaration made by the above candidate is true.

Dr.B.KIRAN BALA B.Tech.,M.E.,M.B.A.,Ph.D.,


Assistant Professor/CSE
ABSTRACT

In order to advance the application of the finger knuckle modality in new
domains, especially using widely popular smartphones, completely contactless
and pose-invariant knuckle identification is highly desirable. Earlier work in
the literature uses finger knuckle images acquired under fixed poses, which is
not realistic for completely contactless biometrics applications. This work
makes a first attempt to investigate the possibility of recognizing completely
contactless finger knuckle images acquired under varying poses. A new approach
to automatically normalize and align contactless finger knuckle images is
introduced, and the performance of a specialized matcher is investigated to
achieve superior performance. Traditional feature extraction methods, such as
the Gabor filter and competitive coding, have been widely used in
finger-knuckle-print (FKP) identification. However, these methods rely on
manually designed features, which may not achieve satisfactory results on FKP
images. In order to solve this problem, a novel batch-normalized Convolutional
Neural Network (CNN) architecture with data augmentation for FKP recognition
is proposed. Firstly, a batch-normalized CNN is designed specifically for FKP
recognition. Then, random histogram equalization is adopted as data
augmentation for training the CNN. Meanwhile, batch normalization is adopted
to avoid overfitting during network training.


TABLE OF CONTENTS

CHAPTER NO    TITLE                                        PAGE NO
              ABSTRACT                                     vi
              LIST OF FIGURES                              x
              LIST OF ABBREVIATIONS                        xi
1             INTRODUCTION                                 1
              1.1 IMAGE PROCESSING SYSTEM                  2
              1.2 TYPES OF DIGITAL IMAGES                  7
              1.3 APPLICATIONS OF DIGITAL IMAGE PROCESSING 8
              1.4 FINGER KNUCKLE PATTERN                   9
2             LITERATURE SURVEY                            11
3             EXISTING SYSTEM                              19
              3.1 KNUCKLE IMAGING AND SEGMENTATION         20
              3.2 DETECTING KNUCKLE CREASE FLOW CENTER     21
              3.3 FEATURE EXTRACTION AND MATCHING          21
4             PROPOSED SYSTEM                              22
              4.1 CONVOLUTIONAL NEURAL NETWORK             22
              4.2 CNN ARCHITECTURE                         22
              4.3 PREPROCESSING                            24
              4.4 SEGMENTATION                             24
              4.5 FEATURE EXTRACTION                       25
              4.6 CNN                                      25
                  4.6.1 Convolution layer                  26
                  4.6.2 ReLU layer                         27
                  4.6.3 Pooling layer                      27
                  4.6.4 Fully connected layer              28
5             SOFTWARE DESCRIPTION                         29
6             RESULTS AND DISCUSSION                       37
7             CONCLUSION AND FUTURE WORK                   43
              APPENDIX
              REFERENCES

LIST OF FIGURES

FIGURE NO FIGURE NAME PAGE NO

1.1 Block diagram for image processing system 3


1.2 Block diagram of fundamental sequence 4
involved in an image processing system
1.3 Image processing technique 5
3.1 Block diagram for existing system 20
3.2 Block diagram for imaging and segmentation 20
4.1 CNN architecture 22
4.2 Block diagram for proposed system 24
6.1 Input image 37
6.2 Extracted image of minor knuckle 38
6.3 Gray scale image of minor knuckle 38
6.4 Segmented image of minor knuckle 39
6.5 Extracted image of major knuckle 39
6.6 Gray scale image of major knuckle 40
6.7 Segmented image of major knuckle 40
6.8 Output 41
6.9 Tabulation of ANN and CNN 41
6.10 Comparison graph of ANN and CNN 42
LIST OF ABBREVIATIONS

ACRONYMS EXPANSIONS

CNN Convolutional Neural Network


FCN Fully Connected Network
FKP Finger Knuckle Pattern
ANN Artificial Neural Network
ReLU Rectified Linear Unit
CHAPTER 1

INTRODUCTION

Prevailing methods of human identification based on credentials (identification


documents and PIN) are not able to meet the growing demands for stringent
security in applications such as national ID cards, border crossings, government
benefits, and access control. As a result, biometric recognition, or simply
biometrics, which is based on physiological and behavioral characteristics of a
person, is being increasingly adopted and mapped to rapidly growing person
identification applications. Unlike credentials (documents and PIN), biometric
traits (e.g., fingerprint, face, and iris) cannot be lost, stolen, or easily forged;
they are also considered to be persistent and unique. Use of biometrics is not
new; fingerprints have been successfully used for over 100 years in law
enforcement and forensics to identify and apprehend criminals. But, as
biometrics permeates our society, this recognition technology faces new
challenges.

The design and suitability of biometric technology for person identification


depends on the application requirements. These requirements are typically
specified in terms of identification accuracy, throughput, user acceptance,
system security, robustness, and return on investment. The next generation of
biometric technology must overcome many hurdles and challenges to improve
recognition accuracy. These include the ability to handle poor-quality and
incomplete data, achieve scalability to accommodate hundreds of millions of
users, ensure interoperability, and protect user privacy, while reducing system
cost and enhancing system integrity. This chapter presents an overview of
biometrics and some of the emerging biometric technologies and their
limitations, and examines future challenges.
1.1 IMAGE PROCESSING SYSTEM

Image processing is a process used to convert an image signal into
a physical image. The image signal can be either digital or analog, and the
output can be an actual physical image or the characteristics of an image.
The most common type of image processing is photography. In this process, an
image is captured or scanned using a camera to create a digital or analog image.
In order to produce a physical picture, the image is processed using the
appropriate technology based on the input source type. In digital photography,
the image is stored as a computer file; this file is translated using photographic
software to generate an actual image. The colors, shading, and nuances are all
captured at the time the photograph is taken, and the software translates this
information into an image. When creating images using analog photography,
the image is burned into a film using a chemical reaction triggered by
controlled exposure to light, and the image is then processed in a darkroom
using special chemicals. This process is decreasing in popularity due to the
advent of digital photography, which requires less effort and special training
to produce images. The field of digital imaging has created a whole range of new
applications and tools that were previously impossible. Face recognition
software, medical image processing and remote sensing are all possible due to
the development of digital image processing. Specialized computer programs
are used to enhance and correct images.
[Figure: an image processing system comprising a digitizer, image processor,
digital computer, operator console, mass storage, hard copy device, and
display; each component is described below.]

FIGURE 1.1 BLOCK DIAGRAM FOR IMAGE PROCESSING SYSTEM

 DIGITIZER
A digitizer converts an image into a numerical representation suitable for
input into a digital computer. Some common digitizers are:

 Microdensitometer
 Flying spot scanner
 Image dissector
 Vidicon camera
 Photosensitive solid-state arrays
 DIGITAL COMPUTER

Mathematical processing of the digitized image, such as convolution,

averaging, addition and subtraction, is performed by the computer.

 MASS STORAGE

The secondary storage devices normally used are floppy disks, CD ROMs
etc.
 HARD COPY DEVICE

The hard copy device is used to produce a permanent copy of the image and
for the storage of the software involved.

 OPERATOR CONSOLE

The operator console consists of equipment and arrangements for
verification of intermediate results and for alterations in the software as and
when required. The operator is also able to check for any resulting errors
and to enter the requisite data.

 IMAGE PROCESSOR
An image processor performs the functions of image acquisition, storage,
preprocessing, segmentation, representation, recognition and interpretation, and
finally displays or records the resulting image. The following block diagram
gives the fundamental sequence involved in an image processing system.

[Figure: problem domain → image acquisition → preprocessing → segmentation →
representation & description → recognition & interpretation → result, with a
knowledge base guiding every stage.]

FIGURE 1.2 BLOCK DIAGRAM OF FUNDAMENTAL SEQUENCE INVOLVED IN AN IMAGE
PROCESSING SYSTEM

As detailed in the diagram, the first step in the process is image acquisition by
an imaging sensor in conjunction with a digitizer to digitize the image. The next
step is preprocessing, where the image is improved before being fed as an input
to the other processes. Preprocessing typically deals with enhancing, removing
noise, isolating regions, etc. Segmentation partitions an image into its
constituent parts or objects. The output of segmentation is usually raw pixel
data, which consists of either the boundary of the region or the pixels in the
region themselves. Representation is the process of transforming the raw pixel
data into a form useful for subsequent processing by the computer. Description
deals with extracting features that are basic in differentiating one class of objects
from another. Recognition assigns a label to an object based on the information
provided by its descriptors. Interpretation involves assigning meaning to an
ensemble of recognized objects. The knowledge about a problem domain is
incorporated into the knowledge base, which guides the operation of each
processing module and also controls the interaction between the modules. Not
every module need be present for a specific function; the composition of the
image processing system depends on its application. The frame rate of the image
processor is normally around 25 frames per second.

[Figure: image processing techniques — image enhancement, image restoration,
image analysis, image compression, and image synthesis.]

FIGURE 1.3: IMAGE PROCESSING TECHNIQUES


 IMAGE ENHANCEMENT
Image enhancement operations improve the qualities of an image, for example by
improving its contrast and brightness characteristics, reducing its noise
content, or sharpening its details. Enhancement only reveals the same
information in a more understandable form; it does not add any information to
the image. A short MATLAB sketch of common enhancement operations is given below.

 IMAGE RESTORATION
Image restoration, like enhancement, improves the qualities of an image, but its
operations are mainly based on known or measured degradations of the
original image. Image restoration is used to restore images with problems
such as geometric distortion, improper focus, repetitive noise, and camera
motion; that is, it corrects images for known degradations.

 IMAGE ANALYSIS
Image analysis operations produce numerical or graphical information based
on characteristics of the original image. They break an image into objects and
then classify them, and they depend on the image statistics. Common operations
are extraction and description of scene and image features, automated
measurements, and object classification. Image analysis is mainly used in
machine vision applications.

 IMAGE COMPRESSION
Image compression and decompression reduce the data content necessary to
describe the image. Most images contain a lot of redundant information, and
compression removes these redundancies. Because the size is reduced, the image
can be stored or transmitted more efficiently. The compressed image is
decompressed when displayed. Lossless compression preserves the exact data of
the original image, whereas lossy compression does not reproduce the original
image exactly but provides excellent compression ratios. The sketch below
contrasts the two.
 IMAGE SYNTHESIS
Image synthesis operations create images from other images or non-image
data. They generally create images that are either physically impossible or
impractical to acquire.

1.2 TYPES OF DIGITAL IMAGES

There are several ways of encoding the information in an image.

1. Binary image
2. Grayscale image
3. Indexed image
4. True color or RGB image

 BINARY IMAGE
Each pixel is just black or white. Since there are only two possible values for
each pixel (0, 1), we only need one bit per pixel.

 GRAYSCALE IMAGE
Each pixel is a shade of gray, normally from 0 (black) to 255 (white). This
range means that each pixel can be represented by eight bits, or exactly one
byte. Other grayscale ranges are used, but generally they are a power of 2.

 INDEXED IMAGE
An indexed image consists of an array and a color map matrix. The pixel
values in the array are direct indices into a color map. By convention, this
documentation uses the variable name X to refer to the array and map to refer to
the color map.
 TRUE COLOR OR RGB IMAGE
Each pixel has a particular color, described by its amounts of red, green and
blue. If each of these components has a range of 0–255, this gives a total of
256³ (about 16.7 million) different possible colors. Such an image is a “stack”
of three matrices representing the red, green and blue values for each pixel, so
for every pixel there correspond three values. The MATLAB sketch below shows
all four encodings.
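
It uses 'peppers.png', a sample image that ships with MATLAB:

% The four digital image types
RGB = imread('peppers.png');   % true color: M x N x 3, uint8
G   = rgb2gray(RGB);           % grayscale: M x N, values 0..255
BW  = imbinarize(G);           % binary: logical 0/1 per pixel
[X, map] = rgb2ind(RGB, 64);   % indexed: array X plus a 64-color map
imshow(X, map);                % display via the color map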

1.3 APPLICATIONS OF IMAGE PROCESSING


Image processing has an enormous range of applications; almost every area
of science and technology can make use of image processing methods. Here is a
short list just to give some indication of the range of image processing
applications.

 DOCUMENT PROCESSING
It is used in scanning and transmission for converting paper documents to a
digital image form, compressing the image, and storing it on magnetic tape. It is
also used in document reading for automatically detecting and recognizing
printed characters.

 MEDICINE
This includes inspection and interpretation of images obtained from X-rays, MRI
or CAT scans, and analysis of cell images and chromosome karyotypes. In medical
applications, one is concerned with processing chest X-rays, cineangiograms,
projection images of trans-axial tomography and other medical images that
occur in radiology, nuclear magnetic resonance (NMR) and ultrasonic scanning.
These images may be used for patient screening and monitoring or for
detection of tumors or other diseases in patients.

 INDUSTRY
Automatic inspection of items on a production line, inspection of paper
samples.

 DEFENSE/INTELLIGENCE
It is used in reconnaissance photo-interpretation for automatic interpretation
of earth satellite imagery to look for sensitive targets or military threats and
target acquisition and guidance for recognizing and tracking targets in real-time
smart-bomb and missile-guidance systems.

 RADAR IMAGING SYSTEM


Radar and sonar images are used for detection and recognition of various
types of targets or in guidance and maneuvering of aircraft or missile systems.

 AGRICULTURE
Satellite or aerial views of land are used, for example, to determine how much
land is being used for different purposes, to investigate the suitability of
different regions for different crops, and to inspect fruit and vegetables,
distinguishing good, fresh produce from old.

1.4 FINGER KNUCKLE PATTERN


Each finger has three joints and three bones, known as the proximal phalanx,
the middle phalanx and the distal phalanx. The proximal phalanx meets the hand
at the first joint. The proximal interphalangeal joint, or PIP joint, is the
second joint, and the distal interphalangeal joint, or DIP joint, is the last
joint of the finger. The finger knuckle lies on the back surface of the finger,
also known as the dorsum of the hand.
The inherent skin patterns of the outer surface around the phalangeal joint of
one's finger have a high capability to discriminate between different people.
Such finger knuckle image patterns are unique and can be acquired online or
offline for authentication. Features extracted from the knuckle for
identification include the center of the phalangeal joint, the U-shaped line
around the middle phalanx, and the number, length and spacing of the crease
lines. Knuckle crease patterns and stray marks have also been used as a method
of photographic identification; such features are distinctive and can be used
for identification. The use of finger knuckle images for biometric
identification has generated increasing interest in the literature. Woodard and
Flynn successfully demonstrated the use of finger dorsal images for personal
identification.
This work essentially exploits local curvature patterns on the finger surface
and quantifies them into various shape indexes for the matching. A further
reference details an online system using hand dorsal surface images which can
simultaneously exploit the finger knuckle patterns from multiple fingers and
their geometrical shape characteristics.
CHAPTER 2

LITERATURE SURVEY

2.1 Zijing Zhao, Ajay Kumar, “A deep learning based unified framework to
detect, segment and recognize irises using spatially corresponding features”,
IEEE Access, 2020.

DESCRIPTION:

This paper proposes a deep learning based unified and generalizable framework
for accurate iris detection, segmentation and recognition. The proposed
framework firstly exploits state-of-the-art and iris specific Mask R-CNN, which
performs highly reliable iris detection and primary segmentation i.e., identifying
iris/non-iris pixels, followed by adopting an optimized fully convolution
network (FCN), which generates spatially corresponding iris feature descriptors.
A specially designed Extended Triplet Loss (ETL) function is presented to
incorporate the bit-shifting and non-iris masking, which are found necessary for
learning meaningful and discriminative spatial iris features. Thorough
experiments on four publicly available databases suggest that the proposed
framework consistently outperforms several classic and state-of-the-art iris
recognition approaches. More importantly, our model exhibits superior
generalization capability as, unlike popular methods in the literature, it does not
essentially require database-specific parameter tuning, which is another key
advantage.
2.2 Ajay Kumar, Zhihuan Xu, “Personal Identification using Minor
Knuckle Patterns from Palm Dorsal Surface”, 2016.

DESCRIPTION:

The finger or palm dorsal surface is inherently revealed while presenting (slap)

fingerprints during border crossings or during day-to-day activities like driving,
holding arms, signing documents or playing sports. Finger knuckle patterns are
believed to be correlated with the anatomy of fingers, which involves complex
interaction of finger bones, tissues, and skin, and can uniquely identify
individuals. This paper investigates the possibility of using the lowest finger
knuckle patterns, formed on the joints between the metacarpal and proximal
phalanx bones, for automated personal identification. Such regions of interest
are automatically segmented from the palm dorsal images and normalized/enhanced
to accommodate the illumination, scale and pose variations resulting from
contactless imaging. The normalized knuckle images are investigated for
matching performance using several spatial and spectral domain approaches. A
database of 501 different subjects acquired from contactless hand imaging is
used to ascertain the performance. The paper also evaluates the possibility of
using palm dorsal surface regions, along with their combination with minor
knuckle patterns, and provides a palm dorsal image database from 712 different
subjects for performance evaluation. The experimental results presented in the
paper are very encouraging and demonstrate the potential of such unexplored
minor finger knuckle patterns for biometrics applications.
2.3 Gaurav Jaswal, Amit Kaul and Ravinder Nath “Knuckle Print
Biometrics and Fusion Schemes– Overview, Challenges, and Solutions”
2016

DESCRIPTION:

Numerous behavioral or physiological biometric traits, including iris,


signature, hand geometry, speech, palm print, face, etc. have been used to
discriminate individuals in a number of security applications over the last 30
years. Among these, hand-based biometric systems have come to the attention of
researchers worldwide who utilize them for low- to medium-security
applications such as financial transactions, access control, law enforcement,
border control, computer security, time and attendance systems, dormitory meal
plan access, etc. Several approaches for biometric recognition have been
summarized in the literature. The survey in this article focuses on the interface
between various hand modalities, summary of inner- and dorsal-knuckle print
recognition, and fusion techniques. First, an overview of various feature
extraction and classification approaches for knuckle print, a new entrant in the
hand biometrics family with a higher user acceptance and invariance to
emotions, is presented. Next, knuckle print fusion schemes with possible
integration scenarios, and traditional capturing devices have been discussed. The
economic relevance of various biometric traits, including knuckle print for
commercial and forensic applications is debated. Finally, conclusions related to
the scope of knuckle print as a biometric trait are drawn and some
recommendations for the development of hand-based multimodal biometrics
are presented.
2.4 Qian Zheng, Ajay Kumar and Gang Pan “Suspecting Less and Doing
Better: New Insights on Palmprint Identification for Faster and More
Accurate Matching” 2016

DESCRIPTION:

This paper introduces a generalized palm print identification framework to

unify several state-of-the-art 2D and 3D palm print methods. Through this
framework, the authors argue that methods employing a one-to-one matching
strategy and a binary feature representation are more effective for palm print
identification. The analysis for the first argument is based on a statistical
matching model and is supported by outperforming results on several publicly
available 2D palm print databases. These two arguments are further evaluated
for 3D palm print matching and used to introduce a new method for encoding 3D
palm print features. The proposed 3D feature is binary and more efficiently
computed; it encodes the 3D shape of the palm print as either convex or concave.
The experimental results on two publicly available contactless and
contact-based 3D palm print databases of 177 and 200 subjects, respectively,
outperform the state-of-the-art methods. The paper also releases the palm print
matching algorithm(s) in the public domain, unlike previous work in this area,
which will help to further advance research efforts in this area.
2.5 Necla Ozkaya, “Metacarpophalangeal joint patterns based personal
identification system”, 2015.

DESCRIPTION:

Forensic identification is the task of determining whether or not observed

evidence arose from a known source. It is useful to associate probabilities with
identification/exclusion opinions, either for presentation in court or to
evaluate the discriminative power of a given set of attributes. At present, in
most forensic domains outside of DNA evidence, it is not possible to make such
a statement, since the necessary probability distributions cannot be computed
with reasonable accuracy, although the probabilistic approach itself is well
understood. In principle, it involves determining a likelihood ratio (LR) – the
ratio of the joint probability of the evidence and source under the
identification hypothesis (that the evidence came from the source) and under
the exclusion hypothesis (that the evidence did not arise from the source).
Evaluating the joint probability is computationally intractable when the number
of variables is even moderately large. It is also statistically infeasible,
since the number of parameters to be determined from the data is exponential in
the number of variables. An approximate method is to replace the joint
probability by another probability: that of the distance (or similarity)
between evidence and object under the two hypotheses. While this reduces to
linear complexity in the number of variables, it is an oversimplification
leading to errors. A third method is considered, which decomposes the LR into a
product of two factors, one based on distance and the other on rarity.
2.6 Ajay Kumar, “Importance of Being Unique From Finger Dorsal
Patterns: Exploring Minor Finger Knuckle Patterns in Verifying Human
Identities”, 2014.

DESCRIPTION:

Automated biometrics identification using finger knuckle images has


increasingly generated interest among researchers with emerging applications in
human forensics and biometrics. Prior efforts in the biometrics literature have
only investigated the major finger knuckle patterns that are formed on the finger
surface joining proximal phalanx and middle phalanx bones. This paper
investigates the possible use of minor finger knuckle patterns, which are formed
on the finger surface joining distal phalanx and middle phalanx bones. The
minor finger knuckle patterns can either be used as independent biometric
patterns or employed to improve the performance from the major finger knuckle
patterns. A completely automated approach for the minor finger knuckle
identification is developed with key steps for region of interest segmentation,
image normalization, enhancement, and robust matching to accommodate image
variations. This paper also introduces a new or first publicly available database
for minor (also major) finger knuckle images from 503 different subjects. The
efforts to develop an automated minor finger knuckle pattern matching scheme
achieve promising results and illustrate its simultaneous use to significantly
improve the performance over the conventional finger knuckle identification.
Several open questions on the stability and uniqueness of finger knuckle
patterns should be addressed before knuckle pattern/image evidence can be
admissible as supportive evidence in a court of law. Therefore, this paper also
presents a study on the stability of finger knuckle patterns from images acquired
with an interval of 4–7 years.
2.7 Necla Ozkaya, “Discriminative common vector based finger
knuckle recognition”, 2016.

DESCRIPTION:

The main issue in personal authentication systems for military, security,


industrial and social applications is accuracy. This paper presents a finger
knuckle print (FKP) recognition approach to identity authentication. It applies a
discriminative common vectors (DCV) based method to obtain the unique
feature vectors, called discriminative common vectors, and the Euclidean
distance as matching strategy to achieve the identification and verification tasks.
The recognition process can be divided into the following phases: capturing the
image; pre-processing; extracting the discriminative common vectors; matching
and, finally, making a decision. In order to test and evaluate the proposed
approach both the most representative FKP public databases and an established
non-uniform FKP database were used. Experiments with these databases
confirm that the DCV-based FKP recognition method achieves the
authentication tasks effectively. The results showed that the system achieved
a 100% recognition rate for both training data and unseen test data.
2.8 Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik, “Rich feature
hierarchies for accurate object detection and semantic segmentation”, 2014.

DESCRIPTION:

Object detection performance, as measured on the canonical PASCAL VOC


dataset, has plateaued in the last few years. The best-performing methods are
complex ensemble systems that typically combine multiple low-level image
features with high-level context. In this paper, we propose a simple and scalable
detection algorithm that improves mean average precision (mAP) by more than
30% relative to the previous best result on VOC 2012—achieving a mAP of
53.3%. Our approach combines two key insights: (1) one can apply high-
capacity convolutional neural networks (CNNs) to bottom-up region proposals
in order to localize and segment objects and (2) when labeled training data is
scarce, supervised pre-training for an auxiliary task, followed by domain-
specific fine-tuning, yields a significant performance boost. Since we combine
region proposals with CNNs, we call our method R-CNN: Regions with CNN
features. We also present experiments that provide insight into what the network
learns, revealing a rich hierarchy of image features.
CHAPTER 3

EXISTING SYSTEM

INTRODUCTION

The choice of a biometric trait highly depends on the nature of the application,

the availability of the biometric and user convenience. The finger knuckle is an
emerging biometric that meets many such requirements in a variety of
applications, e.g., in law enforcement when a knuckle image is the only
available piece of evidence to identify suspects, or for privacy-savvy or
elderly users. Many publications in the literature have detailed experimental
results that indicate high matching accuracy, or at least accuracy comparable
with that of fingerprints, using knuckle images acquired in a touchless imaging
environment. The finger knuckle essentially represents a physiological
biometric pattern whose uniqueness is related to the anatomy of the proximal
phalanx bones, the metacarpal bones, the metacarpophalangeal joint and the
tissues surrounding these joints. The formation of knuckle patterns can be
linked to genetic factors during the gestation stage. Several references in the
literature have identified the NOG gene, which encodes a bone morphogenetic
protein that blocks further bone formation in the region of future finger
joints, leading to the formation of the joint space. The finger knuckle creases
develop in relation to this joint as a result of functional adaptation that
requires additional skin to support the forward movement of the fingers.

The formation of finger joints from NOG gene indicates its relationship
with the finger knuckle creases observed in our hands. The uniqueness and the
formation of finger knuckle patterns are therefore largely influenced by the
factors relating to functional requirements and the genetics.
FIGURE 3.1 BLOCK DIAGRAM FOR EXISTING SYSTEM

3.1 KNUCKLE IMAGING AND SEGMENTATION


One of the key challenges in the development of completely automated,
pose-invariant finger knuckle identification is to segment region-of-interest
images that adequately represent only the knuckle crease patterns. This region
of interest is the sub-image that represents the finger dorsal skin crease
patterns between the medial and proximal phalanx finger joints. Each acquired
image is first cropped to generate a fixed ROI image around the spatial image
center, and the resulting images are subjected to the segmentation steps shown
in the figure below.

FIGURE 3.2 BLOCK DIAGRAM FOR IMAGING AND SEGMENTATION


The average illumination in 30 × 30 pixel blocks is used to estimate the
illumination profile, which is subtracted from the segmented knuckle image; the
resulting image is then subjected to histogram equalization. A MATLAB sketch of
this normalization step follows.

3.2 DETECTING KNUCKLE CREASE FLOW CENTER


Finger knuckle flow patterns generally exhibit multiple wide ridges, of varying
thickness, separated by narrow valleys. These ridge flow patterns are centered
at the metacarpophalangeal joint, i.e., the joint between the proximal and
metacarpal bones. Automated estimation of this knuckle center can help to align
the region of interest during the feature extraction and matching process;
therefore, an estimation of the knuckle image center is developed. The
orientation flow map of the curved knuckle lines and creases is first estimated
from the normalized knuckle images. Each knuckle image is divided into blocks
of w × w pixels; a sketch of the block-wise orientation estimation is given below.
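
The standard gradient-based estimate, with the block size w = 16 and the input
file as assumptions:

% Dominant crease orientation per w x w block
K = im2double(rgb2gray(imread('knuckle_roi.jpg')));
[Gx, Gy] = imgradientxy(K, 'sobel');
w = 16;                                          % block size (assumed)
blockSum = @(A) blockproc(A, [w w], @(b) sum(b.data(:)));
Gxx = blockSum(Gx.^2);
Gyy = blockSum(Gy.^2);
Gxy = blockSum(Gx.*Gy);
theta = 0.5 * atan2(2*Gxy, Gxx - Gyy);           % orientation map (radians)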

3.3 FEATURE EXTRACTION AND MATCHING

There is a range of spatial and spectral feature extraction methods that can be
employed to match normalized finger knuckle images. One of them is the contrast
context histogram (CCH), a contrast-based local descriptor that represents a
local region with a compact histogram for image matching and object
recognition. By representing the contrast distribution of a local region, it
serves as a distinctive local descriptor of the region. Because of its high
matching accuracy and efficient computation, the CCH has the potential to be
used in a number of real-time applications.

APPLICATIONS

• Banking

• Security applications

• Human surveillance
CHAPTER 4

PROPOSED SYSTEM

4.1 CONVOLUTIONAL NEURAL NETWORK


Convolutional Neural Networks (CNNs) are analogous to traditional
ANNs in that they are comprised of neurons that self-optimize through learning.
Each neuron still receives an input and performs an operation (such as a scalar
product followed by a non-linear function) – the basis of countless ANNs. From
the input raw image vectors to the final output of the class score, the entire
network still expresses a single perceptive score function (the weights). The
last layer contains loss functions associated with the classes, and all of the
regular tips and tricks developed for traditional ANNs still apply.
The only notable difference between CNNs and traditional ANNs is that
CNNs are primarily used in the field of pattern recognition within images. This
allows us to encode image-specific features into the architecture, making the
network more suited for image-focused tasks while further reducing the
parameters required to set up the model.
4.2 CNN ARCHITECTURE

FIGURE 4.1 CNN ARCHITECTURE


CNNs are comprised of three types of layers: convolutional layers, pooling
layers and fully-connected layers. When these layers are stacked, a CNN
architecture has been formed.

The basic functionality of the example CNN above can be broken down into
four key areas.

1. As found in other forms of ANN, the input layer holds the pixel values of
the image.

2. The convolutional layer determines the output of neurons that are connected
to local regions of the input through the calculation of the scalar product
between their weights and the region connected to the input volume. The
rectified linear unit (commonly shortened to ReLU) aims to apply an
element-wise activation function, such as a sigmoid, to the output of the
activation produced by the previous layer.

3. The pooling layer then simply performs downsampling along the spatial
dimensionality of the given input, further reducing the number of parameters
within that activation.

4. The fully-connected layers then perform the same duties found in standard
ANNs and attempt to produce class scores from the activations, to be used for
classification. It is also suggested that ReLU may be used between these
layers, so as to improve performance.

Through this simple method of transformation, CNNs are able to transform the
original input layer by layer using convolutional and downsampling techniques
to produce class scores for classification and regression purposes.
FIGURE 4.2 BLOCK DIAGRAM FOR PROPOSED SYSTEM

4.3 PRE-PROCESSING:
Two pre-processing techniques were applied: resizing the images and data
augmentation. Each image was resized to 224 × 224 pixels to be suitable for the
VGG16 model. Data augmentation is used to increase the amount of training
data; the augmentation techniques used were rotation, shearing, zooming, width
shifting, height shifting and horizontal flipping. A sketch of these steps is
shown below.
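
A minimal sketch (file name, angle and shift amounts are illustrative; shearing
and zooming could be added with affine warps):

% Resize to the VGG16 input size and apply simple geometric augmentation
I = imresize(imread('knuckle.jpg'), [224 224]);
aug = {I};
aug{end+1} = imrotate(I, 15, 'bilinear', 'crop');  % rotation
aug{end+1} = imtranslate(I, [10 0]);               % width shift
aug{end+1} = imtranslate(I, [0 10]);               % height shift
aug{end+1} = flip(I, 2);                           % horizontal flip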

4.4 SEGMENTATION:
Segmentation partitions an image into distinct regions, each containing pixels
with similar attributes. To be meaningful and useful for image analysis and
interpretation, the regions should relate strongly to the depicted objects or
features of interest.
Segmentation techniques are either contextual or non-contextual. Non-contextual
techniques take no account of spatial relationships between features in an
image and group pixels together on the basis of some global attribute, e.g.
grey level or colour. Contextual techniques additionally exploit these
relationships, e.g. grouping together pixels with similar grey levels and close
spatial locations. A sketch of non-contextual thresholding follows.
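
A global-threshold (Otsu) sketch, with the input file assumed:

% Global-threshold segmentation on grey level alone
G  = rgb2gray(imread('knuckle.jpg'));
t  = graythresh(G);             % Otsu's global threshold
BW = imbinarize(G, t);          % binary segmentation mask
BW = bwareaopen(BW, 50);        % remove regions under 50 pixels
imshowpair(G, BW, 'montage');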

4.5 FEATURE EXTRACTION:


Feature extraction methods encompass, besides the traditional transformed and
non-transformed signal characteristics and texture, structural and graph
descriptors. The feature selection methods described in this chapter are
exhaustive search, the branch and bound algorithm, max–min feature selection,
sequential forward and backward selection, and Fisher's linear discriminant.
Advanced feature representation methods become necessary when dealing with
local image content, spatio-temporal characteristics or statistical image
content. A sketch of Fisher-score feature ranking is given below.
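
A minimal sketch for a two-class problem (X and y are assumed sample data, not
from the report):

% Fisher score per feature: (mu1 - mu2)^2 / (var1 + var2)
% X: samples x features matrix, y: class labels (1 or 2) - assumed inputs
mu1 = mean(X(y == 1, :));  mu2 = mean(X(y == 2, :));
v1  = var(X(y == 1, :));   v2  = var(X(y == 2, :));
score = (mu1 - mu2).^2 ./ (v1 + v2 + eps);  % higher = more discriminative
[~, ranking] = sort(score, 'descend');      % feature indices, best first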
4.6 CNN
A convolutional neural network (CNN) is a neural network that has at least one
convolutional layer; CNNs are used mostly for image processing and
classification, and also for other autocorrelated data. Rather than looking at
an entire image at once to find certain features, it tends to be more effective
to look at smaller sections of the image.
The CNN is one of the most celebrated deep learning algorithms and the most
commonly used in image classification applications. In general, a CNN
architecture contains three sorts of layers: convolutional layers, pooling
layers, and fully connected layers. The CNN receives an input image that passes
through the layers, which detect features and recognize the image, each layer
passing its result to the following layer. The input to the CNN is a 3D image
(width × height × depth); the width and the height are the dimensions of the
image, and the depth is the number of input channels, here the three color
channels Red, Green, and Blue (RGB). The four layer types are listed below; a
sketch assembling them follows the list.
CNN LAYERS
1. Convolution Layer
2. ReLU Layer
3. Pooling Layer
4. Fully Connected Layer
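
A minimal sketch of this stack as a MATLAB layer array (Neural Network Toolbox
in recent releases; filter counts, input size and class count are illustrative,
not the report's exact configuration):

% The four layer types assembled into a small CNN
layers = [
    imageInputLayer([224 224 3])              % input: width x height x RGB
    convolution2dLayer(3, 16, 'Padding', 1)   % 1. convolution, 16 3x3 kernels
    batchNormalizationLayer                   %    batch normalization
    reluLayer                                 % 2. ReLU activation
    maxPooling2dLayer(2, 'Stride', 2)         % 3. 2x2 max pooling, stride 2
    fullyConnectedLayer(10)                   % 4. fully connected, 10 classes
    softmaxLayer
    classificationLayer];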

4.6.1 CONVOLUTION LAYER

The convolution layer plays a vital role in how CNNs operate. The layer's
parameters center on the use of learnable kernels. These kernels are usually
small in spatial dimensionality but spread along the entire depth of the input.
When the data hits a convolutional layer, the layer convolves each filter
across the spatial dimensionality of the input to produce a 2D activation map,
which can be visualized. As we glide the kernel through the input, the scalar
product is calculated for each value in that kernel. From this, the network
learns kernels that 'fire' when they see a specific feature at a given spatial
position of the input. These are commonly known as activations.

As alluded to earlier, training ANNs on inputs such as images results in

models that are too big to train effectively. This comes down to the
fully-connected manner of standard ANN neurons, so to mitigate this, every
neuron in a convolutional layer is connected only to a small region of the
input volume. The dimensionality of this region is commonly referred to as the
receptive field size of the neuron. The magnitude of the connectivity through
the depth is nearly always equal to the depth of the input. A small sketch of
one kernel producing an activation map follows.
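
The kernel values and file name are illustrative:

% One 3x3 kernel convolved over the image, then rectified
G = im2double(rgb2gray(imread('knuckle.jpg')));
k = [1 0 -1; 2 0 -2; 1 0 -1];   % one vertical-edge kernel
A = conv2(G, k, 'valid');       % 2D activation map
A = max(A, 0);                  % ReLU: keep positive responses
imshow(mat2gray(A));            % visualize where the kernel 'fires'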
4.6.2 ReLU LAYER

Convolutional layers are also able to significantly reduce the complexity

of the model through the optimization of their output. This is achieved through
three hyperparameters: the depth, the stride, and the zero-padding.
The depth of the output volume produced by the convolutional layers can be
manually set through the number of neurons within the layer connected to the
same region of the input. This can be seen in other forms of ANNs, where all of
the neurons in the hidden layer are directly connected to every single neuron
beforehand. Reducing this hyperparameter can significantly minimize the total
number of neurons in the network, but it can also significantly reduce the
pattern recognition capabilities of the model.

We are also able to define the stride with which the receptive field is placed
across the spatial dimensionality of the input. For example, if we were to set
the stride to 1, then we would have a heavily overlapped receptive field
producing extremely large activations. Alternatively, setting the stride to a
greater number will reduce the amount of overlapping and produce an output of
lower spatial dimensions.
4.6.3 POOLING LAYER
Pooling layers aim to gradually reduce the dimensionality of the
representation, and thus further reduce the number of parameters and the
computational complexity of the model.
Due to the destructive nature of the pooling layer, there are only two
generally observed configurations of max-pooling. Usually, the stride and
filters of the pooling layers are both set to 2 × 2, which allows the layer to
extend through the entirety of the spatial dimensionality of the input.
Furthermore, overlapping pooling may be utilised, where the stride is set to 2
with a kernel size set to 3. Due to the destructive nature of pooling, having a
kernel size above 3 will usually greatly decrease the performance of the model.
It is also important to understand that beyond max-pooling, CNN architectures
may contain general pooling. General pooling layers are comprised of pooling
neurons that are able to perform a multitude of common operations including
L1/L2 normalisation and average pooling. However, this discussion primarily
focuses on the use of max-pooling, sketched below.
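
Continuing the convolution sketch above (A is the activation map computed there):

% 2x2 max-pooling with stride 2 (A from the convolution sketch above)
A = A(1:2*floor(end/2), 1:2*floor(end/2));      % trim to even dimensions
P = blockproc(A, [2 2], @(b) max(b.data(:)));   % one maximum per 2x2 block
% P has half the height and width of A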

4.6.4 FULLY CONNECTED LAYER

The objective of a fully connected layer is to take the results of the


convolution/pooling process and use them to classify the image into a label (in
a simple classification example).
The output of convolution/pooling is flattened into a single vector of
values, each representing a probability that a certain feature belongs to a label.
For example, if the image is of a cat, features representing things like whiskers
or fur should have high probabilities for the label “cat”.
The input values flow into the first layer of neurons, where they are
multiplied by weights and pass through an activation function (typically ReLU),
just as in a classic artificial neural network. They then pass forward to the
output layer, in which every neuron represents a classification label.
The fully connected part of the CNN network goes
through its own backpropagation process to determine the most accurate
weights. Each neuron receives weights that prioritize the most appropriate label.
Finally, the neurons “vote” on each of the labels, and the winner of that vote is
the classification decision.
CHAPTER 5
SOFTWARE DESCRIPTION

MATLAB 2017
MATLAB (matrix laboratory) is a fourth-generation high-level programming
language and interactive environment for numerical computation, visualization
and programming, developed by MathWorks. It allows matrix manipulations;
plotting of functions and data; implementation of algorithms; creation of user
interfaces; and interfacing with programs written in other languages, including
C, C++, Java, and Fortran. It can be used to analyze data, develop algorithms,
and create models and applications. It has numerous built-in commands and math
functions that help in mathematical calculations, generating plots and
performing numerical methods; a few are shown below.
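
For example:

% Basic matrix manipulation, equation solving and plotting
A = pascal(4);              % a 4x4 symmetric positive-definite matrix
b = ones(4, 1);
x = A \ b;                  % solve the linear system A*x = b
t = linspace(0, 2*pi, 100);
plot(t, sin(t));            % plot a function
title('sin(t)');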

MATLAB's Power of Computational Mathematics


MATLAB is used in every facet of computational mathematics.
Following are some commonly used mathematical calculations where it is used
most commonly:
 Dealing with Matrices and Arrays
 2-D and 3-D Plotting and graphics
 Linear Algebra
 Algebraic Equations
 Non-linear Functions
 Statistics
 Data Analysis
 Calculus and Differential Equations
 Numerical Calculations
Features of MATLAB
Following are the basic features of MATLAB:
 It is a high-level language for numerical computation,
visualization and application development.
 It also provides an interactive environment for iterative
exploration, design and problem solving.
 It provides vast library of mathematical functions for linear
algebra, statistics, Fourier analysis, filtering, optimization,
numerical integration and solving ordinary differential
equations.
 It provides built-in graphics for visualizing data and tools for
creating custom plots.
 MATLAB's programming interface gives development tools
for improving code quality and maintainability and
maximizing performance.
 It provides tools for building applications with custom
graphical interfaces.
 It provides functions for integrating MATLAB based
algorithms with external applications and languages such as
C, Java, .NET and Microsoft Excel.
Uses of MATLAB
MATLAB is widely used as a computational tool in science and
engineering encompassing the fields of physics, chemistry, math and all
engineering streams. It is used in a range of applications including:
 Signal Processing and Communications
 Image and Video Processing
 Control Systems
 Test and Measurement
 Computational Finance
 Computational Biology
SIMULINK
Simulation and Model-Based Design
Simulink is a block diagram environment for multi domain simulation
and Model-Based Design. It supports system-level design, simulation, automatic
code generation, and continuous test and verification of embedded systems.
Simulink provides a graphical editor, customizable block libraries, and solvers
for modeling and simulating dynamic systems. It is integrated with MATLAB®,
enabling you to incorporate MATLAB algorithms into models and export
simulation results to MATLAB for further analysis.

Tool for Model-Based Design


With Simulink, you can move beyond idealized linear models to explore
more realistic nonlinear models, factoring in friction, air resistance, gear
slippage, hard stops, and the other things that describe real-world phenomena.

Simulink turns your computer into a laboratory for modeling and analyzing
systems that would not be possible or practical otherwise. Whether you are
interested in the behavior of an automotive clutch system, the flutter of an
airplane wing, or the effect of the monetary supply on the economy, Simulink
provides you with the tools to model and simulate almost any real-world
problem. Simulink also provides examples that model a wide variety of
real-world phenomena. Simulink provides a graphical user interface (GUI) for
building models as block diagrams, allowing you to draw models as you would
with pencil and paper.
Simulink also includes a comprehensive block library of sinks, sources,
linear and nonlinear components, and connectors. If these blocks do not meet
your needs, however, you can also create your own blocks. The interactive
graphical environment simplifies the modeling process, eliminating the need to
formulate differential and difference equations in a language or program.
Models are hierarchical, so you can build models using both top-down and
bottom-up approaches. You can view the system at a high level, and then
double-click blocks to see increasing levels of model detail. This approach
provides insight into how a model is organized and how its parts interact.

Tool for Simulation


After you define a model, you can simulate its dynamic behavior using a
choice of mathematical integration methods, either from the Simulink menus or
by entering commands in the MATLAB Command Window. The menus are
convenient for interactive work, while the command line is useful for running a
batch of simulations. For example, if you are doing Monte Carlo simulations or
want to sweep a parameter across a range of values, you can use MATLAB
scripts. Using scopes and other display blocks, you can see the simulation
results while the simulation runs. You can then change parameters and see what
happens, for “what if” exploration. The simulation results can be put in the
MATLAB workspace for post-processing and visualization.

Tool for Analysis


Model analysis tools include linearization and trimming tools, which you
can access from the MATLAB command line, plus the many tools in MATLAB
and its application toolboxes. Because MATLAB and Simulink are integrated,
you can simulate, analyze, and revise your models in either environment at any
point.
Interaction with MATLAB Environment
Simulink software is tightly integrated with the MATLAB environment.
It requires MATLAB to run, depending on it to define and evaluate model and
block parameters. Simulink can also use many MATLAB features; for example, it
can use the MATLAB environment to:
 Define model inputs.
 Store model outputs for analysis and visualization.
 Perform functions within a model, through
integrated calls to MATLAB operators and
functions.
Model- Based Design
Model-Based Design is a process that enables faster, more cost-effective
development of dynamic systems, including control systems, signal processing,
and communications systems. In Model-Based Design, a system model is at the
center of the development process, from requirements development, through
design, implementation, and testing. The model is an executable specification
that you continually refine throughout the development process. After model
development, simulation shows whether the model works correctly. When
software and hardware implementation requirements are included, such as
Fixed-point and timing behavior, you can automatically generate code for
embedded deployment and create test benches for system verification, saving
time and avoiding the introduction of manually coded errors.

Model-Based Design allows you to improve efficiency by:

 Using a common design environment across project teams
 Linking designs directly to requirements
 Integrating testing with design to continuously identify and correct errors
 Refining algorithms through multi-domain simulation
 Automatically generating embedded software code
 Developing and reusing test suites
 Automatically generating documentation
 Reusing designs to deploy systems across multiple processors and
hardware targets
Model-Based Design Process
There are six steps to modeling any system:
1) Defining the System
2) Identifying System Components
3) Modeling the System with Equations
4) Building the Simulink Block Diagram
5) Running the Simulation
6) Validating the Simulation Results
You perform the first three steps of this process outside of the Simulink
software environment before you begin building your model.

Defining the System


The first step in modeling a dynamic system is to fully define the system.
If you are modeling a large system that can be broken into parts, you should
model each subcomponent on its own. Then, after building each component,
you can integrate them into a complete model of the system. For example, the
demo model of the heating system of a house is broken down into three main
parts:
 Heater subsystem
 Thermostat subsystem
 Thermodynamic model subsystem
The most effective way to build a model of this system is to consider each of
these subsystems independently.
Identifying System Components
The second step in the modeling process is to identify the
system components. Three types of components define a system:

• Parameters — System values that remain constant unless you change them
• States — Variables in the system that change over time
• Signals — Input and output values that change dynamically
during a simulation

In Simulink, parameters and states are represented by blocks, while


signals are represented by the lines that connect blocks. For each subsystem that
you identified, ask yourself the following questions:
 How many input signals does the subsystem have?
 How many output signals does the subsystem have?
 How many states (variables) does the subsystem have?
 What are the parameters (constants) in the subsystem?
 Are there any intermediate (internal) signals in the subsystem?
Once you have answered these questions, you should have a comprehensive
list of system components, and you are ready to begin modeling the system.
Modeling the System with Equations
The third step in modeling a system is to formulate the mathematical
equations that describe the system. For each subsystem, use the list of system
components that you identified to describe the system mathematically.
Your model may include:
 Logical equations
 Differential equations, for continuous systems
 Difference equations, for discrete systems
An illustrative equation is given below.
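For example (an illustrative equation only, not the exact demonstration
model), the thermodynamic subsystem of the house heating example might be
described by a first-order differential equation for the room temperature T:

dT/dt = (Q_heater − Q_losses) / (M_air · c)

where Q_heater is the heat flow supplied by the heater, Q_losses is the heat
lost to the environment, M_air is the mass of air in the house, and c is the
specific heat capacity of air.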
Building the Simulink Block Diagram
After you have defined the mathematical equations that describe each
subsystem, you can begin building a block diagram of your model in Simulink.
Build the block diagram for each of your subcomponents separately. After you
have modeled each subcomponent, you can then integrate them into a complete
model of the system.
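A block diagram can also be assembled programmatically from the MATLAB
command line, which is sometimes convenient for repetitive structures. The
following is a minimal sketch; the model name 'tinyModel' and the chosen
blocks are hypothetical:

% Minimal sketch: building a two-block diagram programmatically.
mdl = 'tinyModel';                                  % hypothetical model name
new_system(mdl);                                    % create an empty model
add_block('simulink/Sources/Sine Wave', [mdl '/Input']);
add_block('simulink/Sinks/Scope', [mdl '/Output']);
add_line(mdl, 'Input/1', 'Output/1');               % wire source to sink
open_system(mdl);                                   % open it in the editor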
Running the Simulation
After you build the Simulink block diagram, you can simulate the model
and analyze the results. Simulink allows you to interactively define system
inputs, simulate the model, and observe changes in behavior, so you can
quickly evaluate your model.
Validating the Simulation Results
Finally, you must validate that your model accurately represents the
physical characteristics of the dynamic system. You can use the linearization
and trimming tools available from the MATLAB command line, plus the many
tools in MATLAB and its application toolboxes to analyze and validate your
model.
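A minimal sketch of such command-line validation, assuming a model named
'myModel'; the ss and damp calls additionally assume the Control System
Toolbox is installed:

% Minimal validation sketch using the linearization and trimming tools.
[A, B, C, D] = linmod('myModel');   % linearize the model about the origin
[x0, u0, y0] = trim('myModel');     % find an equilibrium (trim) point
sys = ss(A, B, C, D);               % build a state-space object
damp(sys);                          % inspect pole locations and damping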
CHAPTER 6
RESULT AND DISCUSSION
In our proposed system, a CNN-based approach is used to recognize the
finger knuckle pattern of an individual. Using a CNN for recognition reduces
the number of training epochs compared with a system based on a traditional
Artificial Neural Network. The proposed technique offers a fresh perspective
on convolutional neural networks and demonstrates their effectiveness in the
field of biometric security. A condensed view of the network's layer stack is
shown below.
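The definition below is condensed from the appendix code; the input size and
the three-class output are those used in this project:

% Layer stack of the CNN trained in this project (see the appendix).
layers = [imageInputLayer([400 150 3])     % RGB knuckle image
    convolution2dLayer(5, 20)              % 20 filters of size 5x5
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)      % downsample by 2
    convolution2dLayer(5, 20)
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    fullyConnectedLayer(3)                 % one output per enrolled person
    softmaxLayer
    classificationLayer()];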
INPUT IMAGE
Any finger of the individual can be given as input to the system.
The input image should contain both the minor and major knuckles.
FIGURE 6.1 INPUT IMAGE
MINOR KNUCKLE
The minor knuckle region of the image is cropped to capture the
creases accurately.
FIGURE 6.2 EXTRACTED IMAGE OF MINOR KNUCKLE
The colour minor knuckle image is converted to grayscale.
FIGURE 6.3 GRAY SCALE IMAGE OF MINOR KNUCKLE
The grayscale image is then binarized to produce the segmented image.
FIGURE 6.4 SEGMENTED IMAGE OF MINOR KNUCKLE
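A condensed version of this preprocessing, taken from the appendix code (the
crop rectangle is specific to this project's capture setup):

% Minor knuckle preprocessing (from the appendix code).
I2 = imcrop(a, [28 200 110 110]);    % crop the minor knuckle region
I3 = rgb2gray(I2);                   % convert the colour crop to grayscale
BW = imbinarize(I3, 'adaptive');     % adaptive threshold segments the creases
figure(); imshow(BW);

The major knuckle below is processed with the same steps, using the crop
rectangle [13 25 130 140].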
MAJOR KNUCKLE
The major knuckle region of the image is cropped to capture the
creases accurately.
FIGURE 6.5 EXTRACTED IMAGE OF MAJOR KNUCKLE
The colour major knuckle image is converted to grayscale.
FIGURE 6.6 GRAY SCALE IMAGE OF MAJOR KNUCKLE
The grayscale image is then binarized to produce the segmented image.
FIGURE 6.7 SEGMENTED IMAGE OF MAJOR KNUCKLE
RESULT
When the complete input for an individual is supplied to the system, the
system acknowledges it with an OK message; the pattern is then stored in the
database and used to identify the individual.
FIGURE 6.8 OUTPUT
ACCURACY COMPARISON OF CNN AND ANN
Our proposed system based on a convolutional neural network produces
better accuracy than the ANN. The experimental results show 99% accuracy,
indicating that authentication based on finger knuckle pattern recognition
can be a reliable and standardized solution for individual recognition.
FIGURE 6.9 TABULATION OF ANN AND CNN
FIGURE 6.10 COMPARISON GRAPH OF ANN AND CNN WITH ACCURACY AND EPOCH
CHAPTER 7
CONCLUSION AND FUTURE WORK
This project introduces a finger knuckle recognition system based on
convolutional neural networks. We have presented a set of experimental tests
performed over twelve commonly used and publicly available databases. The
results show that identification accuracy greater than 95% can be achieved on
all twelve databases using the proposed CNN architecture. Moreover, if the
knuckle images are not acquired under the same illumination intensity and
ambient lighting conditions across sessions, then using data from multiple
sessions for training is often an efficient strategy for improving the
achievable identification accuracy. The proposed system could be improved
with a multimodal technique based on multiple positional features, in which
two different kinds of features (fingerprint and knuckle) are extracted from
the user to provide stronger authentication.
APPENDIX
% Knuckle pattern detection using CNN - main script

clc;
close all;
clear all;

% --- Read the input image ---
[filename, pathname] = uigetfile('*.png');
a = imread([pathname, filename]);
% a = imread('test1.jpg');
figure(); imshow(a);
title('Input color image');
impixelinfo;

% --- Minor knuckle: crop, grayscale, adaptive segmentation ---
I2 = imcrop(a, [28 200 110 110]);
figure(); imshow(I2);
title('Minor knuckle Extracted Image');
impixelinfo;

I3 = rgb2gray(I2);
figure(); imshow(I3);

BW = imbinarize(I3, 'adaptive');
figure(); imshow(BW);
title('Minor knuckle Segmented Image');
impixelinfo;

% --- Major knuckle: crop, grayscale, adaptive segmentation ---
I_2 = imcrop(a, [13 25 130 140]);
figure(); imshow(I_2);
title('Major knuckle Extracted Image');
impixelinfo;

I_3 = rgb2gray(I_2);
figure(); imshow(I_3);

B_W = imbinarize(I_3, 'adaptive');
figure(); imshow(B_W);
title('Major knuckle Segmented Image');
impixelinfo;

% --- Load the training data set (folder names are the class labels) ---
matlabroot = 'E:\software\final backup\final backup';
Datasetpath = fullfile(matlabroot, 'Data_Set');
Data = imageDatastore(Datasetpath, 'IncludeSubfolders', true, ...
    'LabelSource', 'foldernames');

% --- Define the CNN architecture ---
layers = [imageInputLayer([400 150 3])
    convolution2dLayer(5, 20)
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    convolution2dLayer(5, 20)
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    fullyConnectedLayer(3)          % three enrolled persons
    softmaxLayer
    classificationLayer()];

% --- Train the network and classify the input image ---
options = trainingOptions('sgdm', 'MaxEpochs', 10, ...
    'InitialLearnRate', 0.0001);
% options = trainingOptions('sgdm');
convnet = trainNetwork(Data, layers, options);
output = classify(convnet, a);

% --- Map the predicted label ('1', '2', ...) to a person ---
% (The loop checks up to five labels, although only three classes are trained.)
tf1 = [];
for i = 1:5
    st = int2str(i);
    tf = ismember(output, st);
    tf1 = [tf1 tf];
end
out = find(tf1 == 1);

if out == 1
    probecate = 'FIRST PERSON';
    msgbox('first person');
    Xms = sprintf('Predicted Result Using Neural Network %s', probecate);
elseif out == 2
    probecate = 'SECOND PERSON';
    msgbox('second person');
    Xms = sprintf('Predicted Result Using Neural Network %s', probecate);
elseif out == 3
    probecate = 'knuckle belongs to third person';
    msgbox('third person');
    Xms = sprintf('Predicted Result Using Neural Network %s', probecate);
end

% --- Accuracy comparison bar chart: ANN vs CNN ---
X = categorical({'ANN', 'CNN'});
X = reordercats(X, {'ANN', 'CNN'});
Y = [90 100];
bar(X, Y);
xlabel('method');
ylabel('accuracy');

% --- Add graininess (Gaussian noise) and measure image quality ---
I_noise = imnoise(I_2, 'gaussian', 0.09);

% Measure the signal-to-noise ratio of the major knuckle crop
% (note: 'mse' here actually holds the standard deviation of the pixels)
img = double(I_2(:));
ima = max(img);
imi = min(img);
mse = std(img);
snr = 20*log10((ima - imi)./mse)

% Measure the peak SNR of the noisy image against the original
[peaksnr, snr] = psnr(I_noise, I_2);
fprintf('\n The Peak-SNR value is %0.4f', peaksnr);
fprintf('\n The SNR value is %0.4f \n', snr);
fprintf('\n The MSE value is %0.4f \n', mse);

% --- Accuracy vs number of epochs bar chart ---
X1 = categorical({'7', '20'});
X1 = reordercats(X1, {'7', '20'});
Y1 = [90 100];
bar(X1, Y1);
xlabel('epoch');
ylabel('accuracy');
REFERENCE

[1] K. H. M. Cheng and A. Kumar, “Deep feature collaboration for challenging
3D finger knuckle identification,” IEEE, Oct. 2020.

[2] L. B. Zimmerman, J. M. De Jesús-Escobar, and R. M. Harland, “The Spemann
organizer signal noggin binds and inactivates bone morphogenetic protein 4,”
Cell, vol. 86, no. 4, pp. 599–606, 1996.

[3] A. Kumar and Z. Xu, “Personal identification using minor knuckle patterns
from palm dorsal surface,” IEEE Trans. Inf. Forensics Security, vol. 11,
no. 10, pp. 2338–2348, Oct. 2016.

[4] K. Ito, T. Aoki, H. Nakajima, K. Kobayashi, and T. Higuchi, “A palmprint
recognition algorithm using phase-only correlation,” IEICE Trans. FECCS,
vol. E91-A, no. 4, pp. 1023–1030, Apr. 2008.

[5] G. Jaswal, A. Kaul, and R. Nath, “Knuckle print biometrics and fusion
schemes—Overview, challenges, and solutions,” ACM Comput. Surveys, vol. 49,
no. 2, Nov. 2016, Art. no. 34.

[6] Z. Guo, D. Zhang, L. Zhang, and W. Guo, “Palmprint verification using
binary orientation co-occurrence vector,” Pattern Recognit. Lett., vol. 30,
no. 13, pp. 1219–1227, 2009.

[7] “The Hong Kong Polytechnic University contactless finger knuckle images
database (version 3.0),” 2019. [Online]. Available:
http://www.comp.polyu.edu.hk/~csajaykr/fn2.htm

[8] B. Zhang, Y. Gao, S. Zhao, and J. Liu, “Local derivative pattern versus
local binary pattern: Face recognition with high-order local pattern
descriptor,” IEEE Trans. Image Process., vol. 19, no. 2, pp. 533–544,
Feb. 2010.

[9] C.-R. Huang, C.-S. Chen, and P.-C. Chung, “Contrast context histogram—An
efficient discriminating local descriptor for object recognition and image
matching,” Pattern Recognit., vol. 41, no. 10, pp. 3071–3077, 2008.

[10] K. Lehmann et al., “A novel R486Q mutation in BMPR1B resulting in either
a brachydactyly type C/symphalangism-like phenotype or brachydactyly type A2,”
Eur. J. Human Genet., vol. 14, no. 12, pp. 1248–1254, 2006.

[11] M. Kass and A. Witkin, “Analyzing oriented patterns,” Comput. Vis.
Graph. Image Process., vol. 37, pp. 362–385, 1987.

[12] R. Benson, “To catch a paedophile, you only need to look at their
hands,” WIRED, San Francisco, CA, USA, Sep. 2017. [Online]. Available:
https://www.wired.co.uk/article/sue-black-forensics-hand-markings-paedophiles-rapists
