"Biometric System": Seminar Report
"Biometric System": Seminar Report
On
“BIOMETRIC SYSTEM”
Submitted by
ASHOK KUMAR (8516115)
BACHELOR OF TECHNOLOGY
(Computer Science & Engineering)
Introduction
As technology advances, information and intellectual property are increasingly sought by unauthorized personnel. As a result, many organizations have been searching for more secure authentication methods for user access. Furthermore, security has always been an important concern to many people. From the Immigration and Naturalization Service (INS) to banks, industrial, military and personal systems, these are typical fields where security is highly valued. Many soon realized that traditional security and identification methods are not sufficient, and that a new authentication system is needed in the face of this new technological reality.
Conventional security and identification systems are either knowledge based, like a social security number or a password, or token based, such as keys and ID cards. Conventional systems can be easily breached: ID cards and passwords can be lost, stolen or duplicated. In other words, they are not unique and do not necessarily represent the rightful user. For this reason, biometric systems are under intensive research.
What is Biometrics
Humans have recognized each other by their various characteristics for ages. People recognize others by their face when they meet and by their voice during conversation. These are forms of biometric identification used naturally by people in their daily life. Biometrics relies on "something you are or something you do": one of any number of unique characteristics that you cannot lose or forget. It is identity verification of living human individuals based on physiological and behavioral characteristics. In general, a biometric is not easily duplicated and is unique to each individual. It is a step forward from identifying by something you have or something you know, to something you are.
Table 1. Biometric characteristics.

Physical characteristics: chemical composition of body odor; facial features and thermal emissions; features of the eye (retina and iris); fingerprints; hand geometry; skin pores; wrist/hand veins.

Personal traits: handwritten signature; keystrokes or typing; voiceprint.
As with many recognition systems, a general biometric system consists of the following sections: data collection, transmission, signal processing, storage and decision, see Figure 1. Each section can be considered to function independently, and errors can be introduced at each point in an additive way.
Data collection consists of sensors that obtain the raw biometric of the subject and can output a one-dimensional or multi-dimensional signal. Usually, data are obtained in a normalized fashion: fingerprints are moderately pressed and rotation is minimized, faces are captured in frontal or profile view, etc. Data storage is usually separated from the point of access, so the data have to be transmitted or distributed via a communication channel. Due to bandwidth limits, data compression may be required. The signal processing module takes the original biometric data and converts it into feature vectors. Depending on the application, raw data may be stored as well as the obtained feature vectors. The decision subsystem compares the measured feature vectors with the stored data using the implemented system decision policy. If the measures indicate a close relationship between the feature vector and the compared template, a match is declared.
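To make the decision step concrete, here is a minimal sketch in Python; the distance measure, the threshold value and the helper names are illustrative assumptions, not any particular system's policy.

import numpy as np

def decide(feature_vector, template, threshold=0.35):
    """Toy decision policy: declare a match when the normalized
    Euclidean distance between the measured feature vector and the
    stored template falls below a system-defined threshold."""
    v = np.asarray(feature_vector, dtype=float)
    t = np.asarray(template, dtype=float)
    distance = np.linalg.norm(v - t) / np.sqrt(v.size)
    return distance < threshold, distance

# Example: compare a fresh measurement against an enrolled template.
enrolled = np.array([0.12, 0.80, 0.45, 0.33])
measured = np.array([0.10, 0.82, 0.47, 0.30])
is_match, score = decide(measured, enrolled)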
False matching and false non-matching errors can occur. Although the error equations vary between systems, a general formulation can be developed [1]. Let M be the number of independent biometric measures; the probability of a false match FMR_SR against any single record is given by

FMR_{SR} = \prod_{j=1}^{M} FMR_j(\tau_j)

where FMR_j(\tau_j) is the single-comparison false match rate for the jth biometric at threshold \tau_j. The probability of not making any false match in comparisons against multiple records can be expressed as

1 - FMR_{SYS} = (1 - FMR_{SR})^{PN}

where FMR_SYS is the system false match rate, N is the number of records, and P is the percentage of the database to be searched. For the single-record false non-match rate,

1 - FNMR_{SR} = \prod_{j=1}^{M} \left[ 1 - FNMR_j(\tau_j) \right]
More commonly used biometric system reliability indexes are the FRR (False Reject Rate), the statistical probability that the system fails to recognize an enrolled person, and the FAR (False Accept Rate), the statistical probability that an impostor is recognized as an enrolled person. FRR and FAR are inversely dependent on each other; within modern biometric identification systems, typically

FAR ∈ [0.0001%, 0.1%]
FRR ∈ [0.0001%, 5%]
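These error-rate relations are easy to evaluate numerically. The following sketch assumes the per-biometric rates are known; the function names and the sample values are illustrative only.

import numpy as np

def single_record_fmr(fmr_per_biometric):
    """FMR_SR = product of the per-biometric false match rates: all M
    independent measures must falsely match for a record-level false match."""
    return float(np.prod(fmr_per_biometric))

def system_fmr(fmr_sr, n_records, penetration):
    """FMR_SYS = 1 - (1 - FMR_SR)^(P*N): probability of at least one
    false match when searching a fraction P of N records."""
    return 1.0 - (1.0 - fmr_sr) ** (penetration * n_records)

def single_record_fnmr(fnmr_per_biometric):
    """FNMR_SR = 1 - prod(1 - FNMR_j): a false non-match occurs if any
    of the M measures falsely non-matches."""
    return 1.0 - float(np.prod(1.0 - np.asarray(fnmr_per_biometric)))

# Two combined biometrics searched against 10% of a 1,000,000-record database.
fmr_sr = single_record_fmr([1e-4, 1e-3])
print(system_fmr(fmr_sr, n_records=1_000_000, penetration=0.1))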
Image-based biometric techniques
There are many biometric systems based on different characteristics and different parts of the human body. However, one should look for the following properties in a biometric system:
Universality - which means that each person should have the characteristic
Uniqueness - which indicates that no two persons should be the same in
terms of the characteristic
Permanence - which means that the characteristic should not be changeable
Collectability - which indicates that the characteristic can be measured
quantitatively
Based on the above, various image-based biometric techniques have been intensively studied. This paper discusses the following techniques: face, fingerprints, hand geometry, hand veins, iris, retina and signature.
Face
Face recognition technology (FRT) has applications in a wide range of fields, including commercial and law enforcement applications. These can be separated into two major categories. The first is static matching, for example passports, credit cards, photo IDs, driver's licenses, etc. The second is real-time matching, such as surveillance video, airport control, etc. Psychophysical and neuroscientific research has also enlightened engineers designing algorithms and systems for machine recognition of human faces. A general face recognition problem can be stated as: given still or video images of a scene, identify one or more persons in the scene using a stored database of faces [2]. Additional information such as race, age and gender can help to reduce the search. Recognition of the face is a relatively cheap and straightforward method. Identification is based on acquiring a digital image of the target person and analyzing and extracting the facial characteristics for comparison with the database.
The Karhunen-Loeve (KL) expansion for the representation and recognition of faces is said to have generated a lot of interest. Local descriptors are derived from regions that contain the eyes, mouth, nose, etc., using approaches such as deformable templates or eigen-expansion. Singular value decomposition (SVD) is described as the deterministic counterpart of the KL transform. After feature extraction, recognition is performed; early approaches used feature point distances or the nearest neighbor rule, and later eigenpictures using eigenfaces and a Euclidean distance approach were examined. Other methods use the HyperBF network (a neural network approach), dynamic link architecture, and Gabor wavelet decomposition.
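As a minimal sketch of the eigenface idea (a KL/PCA representation followed by a nearest-neighbor rule), assuming face images are already cropped, aligned and flattened into equal-length vectors; the function names are illustrative.

import numpy as np

def eigenfaces(train_images, k=20):
    """Karhunen-Loeve (PCA) representation: project mean-centered face
    vectors onto the top-k principal directions ("eigenfaces")."""
    X = np.asarray(train_images, dtype=float)       # shape (n, h*w)
    mean = X.mean(axis=0)
    # SVD of the centered data: rows of vt are the eigenfaces.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

def project(image, mean, basis):
    """Code a face as its k coefficients in eigenface space."""
    return basis @ (np.asarray(image, dtype=float) - mean)

def nearest_face(probe_code, gallery_codes):
    """Nearest-neighbor rule with Euclidean distance in eigenface space."""
    d = np.linalg.norm(gallery_codes - probe_code, axis=1)
    return int(np.argmin(d)), float(d.min())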
The Gabor wavelet is another widely used face recognition approach; it can be described by the equation,
\psi_{k,\phi}(\mathbf{x}) = \exp\left( -\frac{k^2 |\mathbf{x}|^2}{2\sigma^2} \right) \exp(i\, \mathbf{k} \cdot \mathbf{x})

where k is the oscillating frequency of the wavelet, \phi the direction of the oscillation (so the wave vector is \mathbf{k} = k(\cos\phi, \sin\phi)), and \sigma the rate at which the wavelet collapses to zero as one moves outward from its center.
The main idea is to describe an arbitrary two-dimensional image function I(x,y) as a linear combination of a set of wavelets. The x, y plane is first subdivided into a grid of non-overlapping regions. At each grid point, the local image is decomposed into a set of wavelets chosen to represent a range of frequencies, directions and extents that "best" characterize that region. By limiting k to a few values, the resulting coefficients become almost invariant to translation, scale and angle. The finite wavelet set at a particular point forms a feature vector called a jet, which characterizes the image. With an elastically distorted grid, the best match between two images can also be obtained.
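A rough sketch of how a jet could be computed at one grid point, using the wavelet form given above; the grid size, frequency set and patch handling are assumptions made for illustration.

import numpy as np

def gabor_wavelet(size, k, phi, sigma=np.pi):
    """Sample exp(-k^2 |x|^2 / (2 sigma^2)) * exp(i k.x) on a size x size
    grid, with wave vector k = k (cos phi, sin phi)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    envelope = np.exp(-(k ** 2) * (x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.exp(1j * k * (x * np.cos(phi) + y * np.sin(phi)))
    return envelope * carrier

def jet(image, cx, cy, freqs=(np.pi / 2, np.pi / 4), n_dirs=8, size=31):
    """A 'jet': magnitudes of the responses of a small wavelet family
    at one grid point (assumes the patch lies fully inside the image)."""
    half = size // 2
    patch = image[cy - half:cy + half + 1, cx - half:cx + half + 1]
    responses = []
    for k in freqs:
        for d in range(n_dirs):
            w = gabor_wavelet(size, k, d * np.pi / n_dirs)
            responses.append(abs(np.sum(patch * np.conj(w))))
    return np.asarray(responses)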
Fingerprints
One of the oldest biometric techniques is fingerprint identification. Fingerprints were used as a means of positively identifying a person as the author of a document and are used in law enforcement. Fingerprint recognition has many advantages: a fingerprint is compact, unique for every person, and stable over a lifetime. A predominant approach to fingerprint recognition is the use of minutiae, see Figure 2.
Traditional fingerprints are obtained by placing an inked fingertip on paper; now compact solid-state sensors are used. Solid-state sensors can obtain patterns at 300 x 300 pixels at 500 dpi, and an optical sensor can have an image size of 480 x 508 pixels at 500 dpi. A typical algorithm for fingerprint feature extraction contains four stages, as seen in Figure 3.
Feature extraction first binarizes the ridges in a fingerprint image using masks that are capable of adaptively accentuating local maximum gray-level values along a direction normal to the ridge direction. Minutiae are determined as points that have one neighbor (ridge endings) or more than two neighbors (bifurcations) in the skeletonized image.
Feature extraction approaches differ between papers. One simple minutiae extraction applies the following filter, where a result of 1 means an ending, 2 a ridge, and 3 a bifurcation:

Minutiae Filter =
| 1 1 1 |
| 1 0 1 |
| 1 1 1 |

evaluated over the 8-neighborhood R1..R8 of each skeleton pixel M:

| R1 R2 R3 |
| R8 M  R4 |
| R7 R6 R5 |
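A small sketch of this neighbor-counting filter, assuming the input is already binarized and skeletonized; the helper name and the use of scipy's convolution are illustrative choices.

import numpy as np
from scipy.ndimage import convolve

def classify_minutiae(skeleton):
    """Count the 8-neighbors of every ridge pixel in a skeletonized
    fingerprint: a sum of 1 marks a ridge ending, 2 an ordinary ridge
    pixel, and 3 or more a bifurcation."""
    skel = (np.asarray(skeleton) > 0).astype(np.uint8)
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbors = convolve(skel, kernel, mode="constant", cval=0)
    neighbors = neighbors * skel            # evaluate ridge pixels only
    endings = np.argwhere(neighbors == 1)
    bifurcations = np.argwhere(neighbors >= 3)
    return endings, bifurcations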
More complicated feature extraction methods apply Gabor filters. One approach uses a bank of 8 Gabor filters with the same frequency, 0.1 pixel^-1, but different orientations (0° to 157.5° in steps of 22.5°). The frequency is chosen based on the average inter-ridge distance in a fingerprint, which is ~10 pixels. There are therefore 8 feature values for each cell in the tessellation, which are concatenated to form an 81 x 8 feature vector. In another approach the frequency is set to the average ridge frequency (1/K), where K is the average inter-ridge distance. The Gabor filter parameters δx and δy are set to 4.0, and the orientation is tuned to 0°, because the extracted region is aligned with the minutiae direction. In general the result can be seen in Figure 4.
Iris
Another non-invasive biometric system uses the colored ring around the pupil on the surface of the eye. The iris contains unique texture that is complex enough to be used as a biometric signature. Compared with other biometric features such as the face and fingerprint, iris patterns are more stable and reliable. The iris is unique to each person and stable with age [5]. Figure 5 shows a typical example of an iris and an extracted texture image.
Figure 5. (a) Iris image (b) iris localization (c) unwrapped texture image
The iris is highly randomized, and its suitability as an exceptionally accurate biometric derives from its
extremely data-rich physical structure
genetic independence, no two eyes are the same
stability over time
physical protection by a transparent window (the cornea) that does not inhibit external viewability
There is a wide range of extraction and encoding methods, such as the Daugman method, multi-channel Gabor filtering, the dyadic wavelet transform, etc. The iris code is calculated using circular bands that have been adjusted to conform to the iris and pupil boundaries. Daugman's method was the first to describe the extraction and encoding process. The system contains eight circular bands and generates a 512-byte iris code, see Figure 6.
Figure 6. a) Daugman system: top, 8 circular bands; bottom, iris code. b) Demodulation code.
After the boundaries have been located, any occluding eyelids detected, and reflections or eyelashes excluded, the isolated iris is mapped to size-invariant coordinates and demodulated to extract its phase information using quadrature 2D Gabor wavelets. A given area of the iris is projected onto a complex-valued 2D Gabor wavelet using,
h_{\{Re,Im\}} = \mathrm{sgn}_{\{Re,Im\}} \int_{\rho}\int_{\phi} I(\rho,\phi)\, e^{-i\omega(\theta_0 - \phi)}\, e^{-(r_0 - \rho)^2/\alpha^2}\, e^{-(\theta_0 - \phi)^2/\beta^2}\, \rho\, d\rho\, d\phi
where h_{Re,Im} can be regarded as a complex-valued bit whose real and imaginary parts are each 1 or 0 (sgn) depending on the sign of the 2D integral. I(\rho,\phi) is the raw iris image in a dimensionless polar coordinate system that is size- and translation-invariant and also corrects for pupil dilation. \alpha and \beta are the multi-scale 2D wavelet size parameters, spanning an 8-fold range from 0.15 mm to 1.2 mm on the iris, and \omega is the wavelet frequency, spanning 3 octaves in inverse proportion to \beta. (r_0, \theta_0) represent the polar coordinates of each region of the iris for which the phasor coordinates h_{Re,Im} are computed. 2,048 such phase bits (256 bytes) are computed for each iris, and an equal number of masking bits are computed to signify whether any region is obscured by eyelids, eyelashes, specular reflections, boundary artifacts or poor signal-to-noise ratio.
The Hamming distance is used to measure the similarity between any two irises, whose two phase code bit vectors are denoted {codeA, codeB} and whose mask bit vectors are {maskA, maskB}, with the Boolean operators,
HD = \frac{\left\| (\mathrm{codeA} \otimes \mathrm{codeB}) \cap \mathrm{maskA} \cap \mathrm{maskB} \right\|}{\left\| \mathrm{maskA} \cap \mathrm{maskB} \right\|}
For two identical iris codes, the HD is zero; for two perfectly unmatched iris codes, the HD is 1. For different irises, the average HD is about 0.5. The observed mean HD was p = 0.499 with standard deviation σ = 0.0317, which is a close fit to the theoretical values. Generally, an HD threshold of 0.32 can reliably differentiate authentic users from impostors.
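The Hamming distance test is straightforward to express in code. Below is a minimal sketch in which random bit vectors stand in for real 2,048-bit iris codes; unrelated random codes give an HD near 0.5, as the text describes.

import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Normalized HD: fraction of disagreeing bits among the bits that
    are valid (not masked out by eyelids, lashes, reflections) in both."""
    valid = mask_a & mask_b
    disagreeing = (code_a ^ code_b) & valid
    return disagreeing.sum() / valid.sum()

rng = np.random.default_rng(0)
code_a = rng.integers(0, 2, 2048).astype(bool)
code_b = rng.integers(0, 2, 2048).astype(bool)
mask = np.ones(2048, dtype=bool)
hd = hamming_distance(code_a, code_b, mask, mask)   # ~0.5 for different irises
is_same_iris = hd < 0.32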
An alternative approach to this iris system is the use of multi-channel Gabor filtering and the wavelet transform. The boundaries can be taken as two circles, usually not concentric. Compared with other parts of the eye, the pupil is much darker; therefore, the inner boundary between the pupil and the iris is determined by means of thresholding. The outer boundary is determined by maximizing changes of the perimeter-normalized sum of gray-level values along the circle. Since the size of the pupil can vary, the iris region is normalized to a rectangular block of fixed size. Local histogram equalization is also performed to reduce the effect of non-uniform illumination, see Figure 6. The multi-channel Gabor filtering technique involves a number of cortical channels; each cortical channel is modeled by a pair of Gabor filters with opposite symmetry to each other,
h_e(x, y) = g(x, y)\,\cos[2\pi f (x \cos\theta + y \sin\theta)]
h_o(x, y) = g(x, y)\,\sin[2\pi f (x \cos\theta + y \sin\theta)]
where g(x,y) is a 2D Gaussian function, and f and θ are the central frequency and orientation. The central frequencies used are 2, 4, 8, 16, 32 and 64 cycles/degree. For each central frequency f, filtering is performed at θ = 0°, 45°, 90° and 135°. This produces 24 output images (4 for each frequency), from which the iris features are extracted. These features are the mean and the standard deviation of each output image. Therefore, 48 features per input image are calculated, and all 48 features are used for testing.
A 2D wavelet transform can be treated as two separate 1D wavelet transforms. A set of sub-images at different resolution levels is obtained after applying the wavelet transform. The mean and variance of each wavelet sub-image are extracted as texture features. Only five low resolution levels, excluding the coarsest level, are used. This makes the 26 extracted features robust in a noisy environment. The Weighted Euclidean Distance (WED) is used as the classifier,
WED(k) = \sum_{i=1}^{N} \frac{\left( f_i - f_i^{(k)} \right)^2}{\left( \delta_i^{(k)} \right)^2}
where f_i denotes the ith feature of the unknown iris, f_i^(k) and δ_i^(k) denote the ith feature of iris k and its standard deviation, and N is the total number of features extracted from a single iris. It is found that a classification rate of 93.8% was obtained when either all 48 features were used or features at f = 2, 4, 8, 16, 32 were used, and the wavelet transform can obtain an accuracy of 82.5%. Other methods such as circular symmetric filters [6] can obtain correct classification rates of 93.2% to 99.85%.
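A minimal sketch of the WED classifier defined above; the template layout (one mean feature vector and one standard-deviation vector per enrolled iris) is an assumption for illustration.

import numpy as np

def wed(features, template, template_std):
    """Weighted Euclidean Distance between an unknown iris feature
    vector and the stored template of one enrolled iris."""
    f = np.asarray(features, dtype=float)
    t = np.asarray(template, dtype=float)
    s = np.asarray(template_std, dtype=float)
    return float(np.sum((f - t) ** 2 / s ** 2))

def classify(features, templates, template_stds):
    """Assign the unknown iris to the enrolled class with minimum WED."""
    scores = [wed(features, t, s) for t, s in zip(templates, template_stds)]
    return int(np.argmin(scores))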
Retina
A retina-based biometric involves analyzing the pattern of blood vessels captured by using a low-intensity light source through an optical coupler to scan the unique patterns at the back of the eye. The retina is not directly visible, so a coherent infrared light source is necessary to illuminate it. The infrared energy is absorbed faster by blood vessels in the retina than by the surrounding tissue. Retinal scanning can be quite accurate but does require the user to look into a receptacle and focus on a given point. It is not convenient for users who wear glasses or who are concerned about close contact with the reading device. The most important drawback of the retina scan is its intrusiveness: the light source must be directed through the cornea of the eye, and operation of the retina scanner is not easy. However, in healthy individuals, the vascular pattern in the retina does not change over the course of an individual's life [7]. Although the retina scan is more susceptible to some diseases than the iris scan, such diseases are relatively rare. Because it is inherently not user-friendly and is expensive, it is rarely used today. A typical retinal scanned image is shown in Figure 7.
Figure 7. Retinal scanned image
Paper [8] proposes a general framework of adaptive local thresholding using a verification-based multithreshold probing scheme. It is assumed that, given a binary image BT resulting from some threshold T, a decision can be made whether any region in BT can be accepted as an object by means of a classification procedure. A pixel with intensity lower than or equal to T is marked as a vessel candidate and all other pixels as background. Vessels are considered to be curvilinear structures in BT, i.e., lines or curves of limited width. The approach to vessel detection in BT consists of three steps: 1) perform a Euclidean distance transform on BT to obtain a distance map; 2) prune the vessel candidates using the distance map, retaining only the center-line pixels of curvilinear bands; 3) reconstruct the curvilinear bands from their center-line pixels. The reconstructed curvilinear bands give the part of the vessel network that is made visible by the particular threshold T.
A fast algorithm for the Euclidean distance transform is applied. For each candidate vessel point, the resulting distance map contains the distance to its nearest background pixel and the position of that background pixel. The pruning operation uses two measures, α and d, to quantify the likelihood that a vessel candidate is a center-line pixel of a curvilinear band of limited width, see Figure 8.
Figure 8. Quantities for testing curvilinear bands.
where p and n represent a vessel candidate and one of the eight neighbors from its neighborhood Np, respectively, and ep and en are their corresponding nearest background pixels. The two measures are defined by,
\alpha = \max_{n \in N_p} \mathrm{angle}\left( \vec{pe_p}, \vec{pe_n} \right) = \max_{n \in N_p} \frac{180}{\pi} \arccos\left( \frac{\vec{pe_p} \cdot \vec{pe_n}}{\left|\vec{pe_p}\right| \left|\vec{pe_n}\right|} \right)

d = \max_{n \in N_p} \left\| e_p - e_n \right\|
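A hedged sketch of the first two steps (distance transform, then pruning toward center-line pixels); the local-maximum test used here is a simplification standing in for the α and d measures above, not the paper's exact procedure.

import numpy as np
from scipy.ndimage import distance_transform_edt, maximum_filter

def centerline_candidates(binary_bt, max_width=6.0):
    """Distance-transform BT, then keep vessel candidates that look like
    center-line pixels of curvilinear bands of limited width."""
    bt = np.asarray(binary_bt, dtype=bool)
    # Distance of each candidate to its nearest background pixel, plus
    # the coordinates of that nearest background pixel (e_p in the text).
    dist, nearest_bg = distance_transform_edt(bt, return_indices=True)
    within_width = dist <= max_width / 2.0            # band of limited width
    local_max = dist >= maximum_filter(dist, size=3)  # crude center-line test
    return bt & within_width & local_max, dist, nearest_bg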
Signature
Signature differs from the biometrics mentioned above in that it is a behavioral trait characterizing a single individual. Signature verification analyzes the way a user signs his or her name. Signature systems can be put into two categories: on-line and off-line methods. On-line methods take into consideration signing dynamics such as speed, velocity, rhythm and pressure, which are as important as the finished signature's static shape. Off-line methods, by contrast, classify a signature signed on a sheet of paper and then scanned. People are used to signatures as a means of transaction-related identity verification, and most would see nothing unusual in extending this to encompass biometrics. Signature verification devices are reasonably accurate in operation and obviously lend themselves to applications where a signature is an accepted identifier. Various kinds of devices are used to capture the signature dynamics, such as traditional tablets or special-purpose devices. Special pens are able to capture movements in all 3 dimensions. Tablets are used to capture 2D coordinates and pressure, but they have two significant disadvantages: usually the resulting digitized signature looks different from the usual user signature, and sometimes while signing the user does not see what has been written so far. This is a considerable drawback for many (inexperienced) users.
the straight direction line is finished. The direction vector usually has 300 elements.
In the recognition stage, for an input signature vector sequence X, each HMM model λi, i = {1, 2, …, M}, with M equal to the number of different signatures, estimates the "a posteriori" probability P(X | λi), and the input sequence X is assigned to the signature j which provides the maximum score (maxnet),
X \in \lambda_j \quad \text{if} \quad j = \arg\max_{i = 1, 2, \ldots, M} P(X \mid \lambda_i)
The performance of the resulting system decreases greatly as the number of signatures increases. For 30 signatures, the recognition and verification rates are 76.6% and 92%, respectively.
On-line signature verification methods can be further divided into two groups: direct methods (using the raw functions of time) and indirect methods (using parameters) [9]. With direct methods, the signature is stored as a discrete function to be compared to a standard from the same writer, previously computed during an enrolment stage. Such methods simplify data acquisition, but the comparison can become a hard task. Indirect methods require a lot of effort to prepare the data to be processed, but the comparison is quite simple and efficient. One direct-method system relies on three pseudo-distance measures (shape, motion and writing pressure) derived from the coordinate and writing-pressure functions through the application of a technique known as Dynamic Time Warping (DTW). It is reported to have over a 90% success rate. Another approach is the use of the Fast Fourier Transform as an alternative to time warping. It is suggested that working in the frequency domain would eliminate the need to worry about temporal misalignments between the functions to be compared. It is concluded that the FFT can be useful as a method for the selection of features for signature verification.
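Since Dynamic Time Warping underlies both the pseudo-distance system above and the wavelet method below, here is a minimal textbook implementation; the synthetic signals stand in for the x(t) functions of a reference and a test signature.

import numpy as np

def dtw_distance(a, b):
    """Classic O(n*m) DTW: minimal cumulative cost of nonlinearly
    aligning two time-sampled signature functions."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Compare the x(t) coordinate of a test signature against a reference.
reference_x = np.sin(np.linspace(0, 3, 120))
test_x = np.sin(np.linspace(0, 3, 100) + 0.05)
print(dtw_distance(reference_x, test_x))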
An alternative approach is a wavelet-based method, where the signature to be tested is collected from an electronic pad as two functions in time (x(t), y(t)). It is numerically processed to generate numbers that represent the distance between it and a reference signature (standard), computed in a previous enrolment stage. The numerical treatment includes resampling to a uniform mesh, correction of elementary distortions between curves (such as spurious displacements and rotations), applying wavelet transforms to produce features, and finally nonlinear comparison in time (Dynamic Time Warping). The decomposition of the functions x(t) and y(t) with the wavelet transform generates approximations and details like those shown in Figure 11 for an example of x(t).
At each zero-crossing zc_k of the detail curve at the 4th level of resolution (this level was chosen empirically, by trial and error), three parameters are extracted: its abscissa zc_k; the integral of the detail between consecutive zero-crossings,

I_k = \int_{zc_k}^{zc_{k+1}} WD_4(t)\, dt

and the corresponding amplitude at the same abscissa in the approximation function at the 3rd level,

va_k = WA_3(zc_k)
It is found that the Dynamic Time Warping algorithm, applied to features extracted with wavelet transforms, is suitable for on-line signature verification. Furthermore, it is only with the inclusion of the wavelet transform that the proposed system can prevent trained forgeries from being accepted (0% FAR).
Multiple biometrics
In practice, a biometric characteristic that satisfies the requirements mentioned in the section on image-based biometric techniques may not always be feasible for a practical biometric system. In a practical biometric system, a number of other issues should be considered, including:
1. Performance, which refers to the achievable identification accuracy, speed,
robustness, the resource requirements to achieve the desired identification
accuracy and speed, as well as operational or environmental factors that affect
the identification accuracy and speed.
2. Acceptability, which indicates the extent to which people are willing to accept a
particular biometrics in their daily life.
3. Circumvention, which reflects how easy it is to fool the system by fraudulent
methods.
A single biometric system also has some limitations, such as noisy data and limited degrees of freedom. In searching for a better, more reliable and cheaper solution, fusion techniques, also known as multi-modal biometrics, have been examined by many researchers. Fusion can address the problem of non-universality through wider coverage, and provides anti-spoofing measures by making it difficult for an intruder to "steal" multiple biometric traits. Commonly used classifier combination schemes such as the product rule, sum rule, min rule, max rule, median rule and the majority rule were derived from a common theoretical framework under different assumptions and using different approximations.
It has been discussed that different thresholds or weights can be given to different users, to reduce the importance of less reliable biometric traits. It is found that by doing this the FRR can be improved; assigning smaller weights to noisy biometrics can also reduce the failure-to-enroll problem. A proposed integration of face and fingerprints overcomes the limitations of both face-recognition systems and fingerprint-verification systems. The decision-fusion scheme formulated in that system enables performance improvement by integrating multiple cues with different confidence measures, with an FRR of 9.8% and an FAR of 0.001%. Other fusion techniques include Bayes theory and clustering algorithms such as fuzzy K-means, fuzzy vector quantization and median radial basis functions. Support vector machines using polynomial kernels and Bayesian classifiers (also used for multisensor fusion) are said to outperform Fisher's linear discriminant. Fusion is not limited to combining different biometrics: fusing multiple experts within the same biometric system can also improve overall performance, such as the fusion of multiple experts in face recognition.
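A small sketch of score-level fusion using the combination rules named above; the score values, weights and acceptance threshold are illustrative assumptions.

import numpy as np

def fuse_scores(scores, weights=None, rule="sum"):
    """Combine normalized match scores (each in [0, 1]) from several
    biometric matchers, e.g. face + fingerprint."""
    s = np.asarray(scores, dtype=float)
    if rule == "sum":                      # (weighted) sum rule
        w = np.ones_like(s) / s.size if weights is None else np.asarray(weights)
        return float(np.sum(w * s))
    if rule == "product":                  # product rule
        return float(np.prod(s))
    if rule == "max":                      # max rule
        return float(s.max())
    if rule == "min":                      # min rule
        return float(s.min())
    if rule == "median":                   # median rule
        return float(np.median(s))
    raise ValueError(rule)

# Down-weighting a noisier trait (here the second matcher) can lower FRR.
fused = fuse_scores([0.91, 0.62], weights=[0.7, 0.3], rule="sum")
accept = fused > 0.5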
Conclusion
Depending on the application, some biometric systems will be more suitable than others. It is known that there is no single best biometric technology; different applications require different biometrics. Some will be more reliable in exchange for cost, and vice versa, see Figure.
Proper design and implementation of a biometric system can indeed increase overall security. Furthermore, multiple biometrics can be fused to obtain a relatively cheaper and more reliable solution. The image-based biometrics utilize many similar functions, such as Gabor filters and wavelet transforms. Image-based biometrics can be combined with other biometrics to give more reliable results, such as liveness (ECG biometrics), thermal imaging, or gait-based biometric systems. A summary comparison of biometrics is shown in the table below.
References
1. Wayman, J.L. 1999. "Error rate equations for the general biometric system." IEEE Robotics & Automation Magazine, 6(1), Mar, 35-48.
2. Chellappa, R., Wilson, C.L., Sirohey, S. 1995. "Human and machine recognition of faces: a survey." Proceedings of the IEEE, 83(5), May, 705-741.
3. Barrett, W.A. 1997. "A survey of face recognition algorithms and testing results." Conference Record of the Thirty-First Asilomar Conference on Signals, Systems & Computers, 1, 2-5 Nov, 301-305.
4. Huvanandana, S., Kim, C., Hwang, J.-N. 2000. "Reliable and fast fingerprint identification for security applications." Proceedings of the 2000 International Conference on Image Processing, 2, 503-506.
5. Zhu, Y., Tan, T., Wang, Y. 2000. "Biometric personal identification based on iris patterns." Proceedings of the 15th International Conference on Pattern Recognition, 2, 801-804.
6. Ma, L., Wang, Y., Tan, T. 2002. "Iris recognition using circular symmetric filters." Proceedings of the 16th International Conference on Pattern Recognition, 2, 414-417.
7. Podio, F.L. 2002. "Personal authentication through biometric technologies." Proceedings of the 2002 IEEE 4th International Workshop on Networked Appliances, 57-66.