Fingerprint Recognition Using MATLAB
Graduation project
Prepared by:
Zain S. Barham
Supervised by:
Dr. Allam Mousa
5/17/2011
Acknowledgement
I would like to extend my heartiest gratitude to Dr. Allam A.
Mousa, my project supervisor, for his invaluable guidance,
inspirations and timely suggestions which facilitated the entire
process of bringing out this project on fingerprint recognition
using MATLAB.
Contents

Abstract
Acknowledgement
Chapter one: Introduction
1.1 Introduction
1.2 Biometrics
1.3 Biometrics Authentication Techniques
1.4 How Biometric Technologies Work
1.4.1 Enrollment
1.4.2 Verification
1.4.3 Identification
1.4.4 Matches Are Based on Threshold Settings
1.5 Leading Biometric Technologies
1.6 Fingerprints as a Biometric
1.6.1 Fingerprint Representation
1.6.2 Minutiae
Chapter two: Motivation for the project
2.1 Problem Definition
2.2 Motivation for the Project
2.3 About the Project
Chapter three: System design
3.1 System Level Design
3.2 Algorithm Level Design
Chapter four: Fingerprint image preprocessing
4.1 Fingerprint Image Enhancement
4.1.1 Histogram Equalization
4.1.2 Fingerprint Enhancement by Fourier Transform
4.2 Fingerprint Image Binarization
4.3 Fingerprint Image Segmentation (orientation flow estimate)
4.3.1 Block direction estimation
4.3.2 ROI extraction by Morphological operations
Chapter five: Minutiae extraction
5.1 Fingerprint Ridge Thinning
5.2 Minutia Marking
Chapter six: Minutiae post-processing
False Minutia Removal
Chapter seven: Minutiae match
7.1 Alignment Stage
7.2 Match Stage
Chapter eight: System evaluation and conclusion
8.1 Evaluation of the system
8.2 Conclusion
Appendix
Abstract
Human fingerprints are rich in details called minutiae, which can be
used as identification marks for fingerprint verification. The goal of this
project is to develop a complete system for fingerprint verification
through extracting and matching minutiae. To achieve good minutiae
extraction in fingerprints with varying quality, preprocessing in the form of
image enhancement and binarization is first applied to the fingerprints
before they are evaluated. Several methods are combined to build a minutia
extractor and a minutia matcher. Minutia marking with false-minutiae
removal is used in this work. An alignment-based elastic matching algorithm
has been developed for minutia matching.
This algorithm is capable of finding the correspondences between input
minutia pattern and the stored template minutia pattern without
resorting to exhaustive search. Performance of the developed system is
then evaluated on a database with fingerprints from different people.
Chapter one:
Introduction
1.1 Introduction
1.2 Biometrics
To facilitate matching, the raw digital representation is usually further
processed by a feature extractor to generate a compact but expressive
representation, called a template.
Depending on the application, the template may be stored in a
central database. Biometrics can be used in one of two modes:
verification or identification. Verification (also called authentication) is
used to verify a person's identity, that is, to authenticate that individuals
are who they say they are. Identification is used to establish a person's
identity, that is, to determine who a person is.
is. Although biometric technologies measure different characteristics in
substantially different ways, all biometric systems start with an
enrollment stage followed by a matching stage that can use either
verification or identification.
1.4.1 Enrollment
In enrollment, a biometric system is trained to identify a specific
person. The person first provides an identifier, such as an identity card.
The biometric is linked to the identity specified on the identification
document. He or she then presents the biometric (e.g., fingertips, hand,
or iris) to an acquisition device. The distinctive features are located and
one or more samples are extracted, encoded, and stored as a reference
template for future comparisons. Depending on the technology, the
biometric sample may be collected as an image, a recording, or a record
of related dynamic measurements. How biometric systems extract
features and encode and store information in the template is based on
the system vendor's proprietary algorithms. Template size varies
depending on the vendor and the technology. Templates can be stored
remotely in a central database or within a biometric reader device itself;
their small size also allows for storage on smart cards or tokens.
Minute changes in positioning, distance, pressure, environment, and
other factors influence the generation of a template. Consequently, each
time an individual's biometric data are captured, the new template is
likely to be unique. Depending on the biometric system, a person may
need to present biometric data several times in order to enroll.
Either the reference template may then represent an amalgam of the
captured data or several enrollment templates may be stored. The
quality of the template or templates is critical in the overall success of
the biometric application. Because biometric features can change over
time, people may have to reenroll to update their reference template.
Some technologies can update the reference template during matching
operations. The enrollment process also depends on the quality of the
identifier the enrollee presents. The reference template is linked to the
identity specified on the identification document. If the identification
document does not specify the individual's true identity, the reference
template will be linked to a false identity.
1.4.2 Verification
In verification systems, the step after enrollment is to verify that a
person is who he or she claims to be (i.e., the person who enrolled).
After the individual provides an identifier, the biometric is presented,
which the biometric system captures, generating a trial template that is
based on the vendor's algorithm. The system then compares the trial
biometric template with this person's reference template, which was
stored in the system during enrollment, to determine whether the
individual's trial and stored templates match.
Verification is often referred to as 1:1 (one-to-one) matching.
Verification systems can contain databases ranging from dozens to
millions of enrolled templates but are always predicated on matching an
individual's presented biometric against his or her reference template.
Nearly all verification systems can render a match–no-match decision in
less than a second.
One of the most common applications of verification is a system that
requires employees to authenticate their claimed identities before
granting them access to secure buildings or to computers.
1.4.3 Identification
In identification systems, the step after enrollment is to identify who
the person is. Unlike verification systems, no identifier is provided. To
find a match, instead of locating and comparing the person's reference
template against his or her presented biometric, the trial template is
compared against the stored reference templates of all individuals
enrolled in the system. Identification systems are referred to as 1:M
(one-to-M, or one-to-many) matching because an individual's biometric
is compared against multiple biometric templates in the system's
database. There are two types of identification systems: positive and
negative. Positive identification systems are designed to ensure that an
individual's biometric is enrolled in the database. The anticipated result
of a search is a match. A typical positive identification system controls
access to a secure building or secure computer by checking anyone who
seeks access against a database of enrolled employees. The goal is to
determine whether a person seeking access can be identified as having
been enrolled in the system. Negative identification systems are
designed to ensure that a person's biometric information is not present
in a database. The anticipated result of a search is a no match.
Comparing a person's biometric information against a database of all
who are registered in a public benefits program, for example, can ensure
that this person is not "double dipping" by using fraudulent
documentation to register under multiple identities. Another type of
negative identification system is a watch list system. Such systems are
designed to identify people on the watch list and alert authorities for
appropriate action. For all other people, the system is to check that they
are not on the watch list and allow them normal passage. The people
whose biometrics are in the database in these systems may not have
provided them voluntarily. For instance, for a surveillance system, the
biometric may be faces captured from mug shots provided by a law
enforcement agency.
1.5 Leading Biometric Technologies
The leading biometric technologies are:

Facial Recognition
Fingerprint Recognition
Hand Geometry
Iris Recognition
Signature Recognition
Speaker Recognition
1.6 Fingerprints as a Biometric
Among all biometric traits, fingerprints have one of the highest levels
of reliability and have been extensively used by forensic experts in
criminal investigations. A fingerprint refers to the flow of ridge patterns
in the tip of the finger. The ridge flow exhibits anomalies in local regions
of the fingertip (Figure), and it is the position and orientation of these
anomalies that are used to represent and match fingerprints.
1.6.1 Fingerprint Representation
1.6.2 Minutiae
Minutiae also refer to any small or otherwise incidental details. When
matching, however, the focus is only on the two main minutiae types:
ridge endings and ridge bifurcations.
Chapter two: Motivation
for the project
2.1 Problem Definition
2.2 Motivation for the Project
A ridge ending is defined as the point where the ridge ends abruptly
and the ridge bifurcation is the point where the ridge splits into two or
more branches. Automatic minutiae detection becomes a difficult task in
low quality fingerprint images where noise and contrast deficiency result
in pixel configurations similar to those of minutiae. This is an important
aspect that has been taken into consideration in this project, in order to
extract the minutiae with minimum error in a particular location.
[Figure: block diagram of a generic biometric system. A data-acquisition
module (biometric sensor or scanner) captures a sample, which passes over a
transmission channel through quality control and feature extraction; signal
processing produces a template/model that is kept in data storage. During
matching, pattern matching of the extracted features against the stored
templates yields a matching score, and the decision stage outputs a
match/non-match result and an accept/reject decision.]
Chapter three: System
design
3.1 System Level Design
They have high efficiency and acceptable accuracy, except for some cases
where the user's finger is too dirty or dry. However, the testing database for
my project consists of fingerprints scanned using the ink-and-paper
technique, because this method introduces a high level of noise to the image.

The minutia extractor and minutia matcher modules are explained in detail
in the following chapters.
3.2 Algorithm Level Design
stage. The fingerprint image is binarized using the locally adaptive threshold
method. Image segmentation is achieved through a three-step approach:
block direction estimation, segmentation by direction intensity, and Region
of Interest (ROI) extraction by morphological operations.

The minutia marking is a relatively simple task. For the post-processing
stage, false minutiae are removed.
The minutia matcher chooses any two minutiae as a reference minutia pair
and then matches their associated ridges first. If the ridges match well, the
two fingerprint images are aligned and matching is conducted for all the
remaining minutiae.
Chapter four:
Fingerprint image
preprocessing
4.1 Fingerprint Image Enhancement
Fingerprint image enhancement is used to make the image clearer for
subsequent operations. Since fingerprint images acquired from a scanner or
any other medium are not assured of perfect quality, enhancement methods
that increase the contrast between ridges and valleys and connect the falsely
broken points of ridges (due to an insufficient amount of ink) are needed
before further processing.
Originally, the enhancement step was supposed to be done using the Canny
edge detector. After trials, however, it turned out that the result of an edge
detector is an image with only the borders of the ridges highlighted. Using
edge detection would require an extra step to fill in the ridge shapes, which
would consume more processing time and increase the complexity of the
system.
So, for this part of the project, two methods are adopted for the image
enhancement stage: the first is histogram equalization; the second is the
Fourier transform.
4.1.1 Histogram Equalization

Histogram equalization expands the pixel-value distribution of an image so
as to increase its perceptional information. The right side of Figure 4.1.1.3
shows the output after histogram equalization.
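As a minimal sketch (assuming the Image Processing Toolbox and a grayscale
input; the file name fingerprint.tif is a placeholder), histogram equalization
takes a single call:

I = imread('fingerprint.tif');            % placeholder file name
J = histeq(I);                            % spread intensities over the full range
subplot(1,2,1); imshow(I); title('original');
subplot(1,2,2); imshow(J); title('equalized');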
4.1.2 Fingerprint Enhancement by Fourier Transform
We divide the image into small processing blocks (32 by 32 pixels) and
perform the Fourier transform on each block according to:

F(u,v) = Σ_{x=0..M-1} Σ_{y=0..N-1} f(x,y) · exp(−j2π(ux/M + vy/N))   (1)

for u = 0, 1, ..., 31 and v = 0, 1, ..., 31.

In order to enhance a specific block by its dominant frequencies, we
multiply the FFT of the block by its magnitude a set of times, where the
magnitude of the original FFT is abs(F(u,v)) = |F(u,v)|. The enhanced block
is then obtained as:

g(x,y) = F⁻¹{ F(u,v) · |F(u,v)|^k }   (2)

where F⁻¹ is the inverse Fourier transform:

f(x,y) = (1/MN) Σ_{u=0..M-1} Σ_{v=0..N-1} F(u,v) · exp(j2π(ux/M + vy/N))   (3)

for x = 0, 1, ..., 31 and y = 0, 1, ..., 31. The k in formula (2) is an
experimentally determined constant. A higher k improves the
appearance of the ridges, filling up small holes in ridges, but having too
high a k can result in falsely joined ridges, so that a termination might
become a bifurcation. The enhanced image after FFT has the improvement
of connecting some falsely broken points on ridges and removing some
spurious connections between ridges.
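A minimal sketch of the per-block operation (this mirrors Program no.1 in
the Appendix; the value k = 0.45 is only an illustrative choice, since the
report determines k experimentally):

blk = double(I(1:32, 1:32));          % one 32x32 processing block
F = fft2(blk);                        % formula (1)
k = 0.45;                             % example value of the constant k
g = abs(ifft2(F .* abs(F).^k));       % formulas (2) and (3)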
29
4.2 Fingerprint Image Binarization
The binarization step is basically stating the obvious: the true
information that can be extracted from a print is simply binary, ridges vs.
valleys. But it is a very important step in the process of ridge extraction,
since the gray-level values on the ridges still vary in intensity. Binarization
therefore transforms the image from a grayscale image to a binary one, in
which each pixel is white or black depending on its label (black for 0,
white for 1).

The difficulty in performing binarization is that not all the fingerprint images
have the same contrast characteristics, so a single global threshold cannot
be used; a locally adaptive binarization method is used instead on the
fingerprint image. In this method, the image is divided into blocks (16x16),
the mean intensity value is calculated for each block, and then each pixel is
turned into 1 if its intensity value is larger than the mean intensity value of
the block to which it belongs.
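A minimal sketch of this block-wise adaptive threshold (assuming the Image
Processing Toolbox for mean2; note that Program no.2 in the Appendix
instead marks pixels darker than 0.8 times the block mean, so that ridges
become 1):

I = double(imread('fingerprint.tif'));    % placeholder file name
W = 16;                                   % block size
B = zeros(size(I));
for i = 1:W:size(I,1)-W+1
    for j = 1:W:size(I,2)-W+1
        blk = I(i:i+W-1, j:j+W-1);
        B(i:i+W-1, j:j+W-1) = blk > mean2(blk);   % 1 = above block mean
    end
end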
Figure 4.2.1: The fingerprint image after adaptive binarization.
4.3 Fingerprint Image Segmentation (orientation flow
estimate)
In general, only a Region of Interest (ROI) is useful to recognize for each
fingerprint image. The image area without effective ridges is first discarded,
since it only holds background information and probably noise. Then the
bound of the remaining effective area is sketched out, since the minutiae in
the bound region are confusing with the false minutiae that are generated
where the ridges run out of the sensor area.

To extract the ROI, a two-step method is used. The first step is block
direction estimation and direction variety check, while the second is done
using morphological methods.
4.3.1 Block direction estimation

The direction of each block of the fingerprint image is estimated as follows:

I. Calculate the gradient values along the x-direction (gx) and the
y-direction (gy) for each pixel of the block. Two Sobel filters are used to
fulfill the task.

II. For each block, use the following formula to get the Least Square
approximation of the block direction:

tan(2β) = 2 Σ(gx · gy) / Σ(gx² − gy²), over all the pixels in each block.

The formula is easy to understand if one regards the gradient values
along the x-direction and y-direction as cosine and sine values; the
tangent value of the block direction is then estimated nearly the same
way as in the identity tan(2θ) = 2 sinθ cosθ / (cos²θ − sin²θ).
After finishing the estimation of each block direction, blocks without
significant ridge information are discarded based on the following
certainty level, written here to match Program no.3 in the Appendix:

E = [ (Σ(gx · gy))² + (Σ(gx² − gy²))² ] / [ W·W · Σ(gx² + gy²) ]

For each block, if its certainty level (E) is below a threshold, then the block
is regarded as a background block.
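A condensed sketch of steps I and II for one image block blk (a double
matrix; Program no.3 in the Appendix implements the full version with the
certainty check):

gfilt = fspecial('sobel');       % Sobel kernel
gy = filter2(gfilt, blk);        % gradient along one direction
gx = filter2(gfilt', blk);       % gradient along the other (transposed kernel)
theta = 0.5 * atan2(2*sum(sum(gx.*gy)), sum(sum(gx.^2 - gy.^2)));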
Figure 4.3.1.1: Direction map.

4.3.2 ROI extraction by Morphological operations

Two morphological operations, OPEN and CLOSE, are adopted. The OPEN
operation can expand images and remove peaks introduced by background
noise, while the CLOSE operation can shrink images and eliminate small
cavities.
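A minimal sketch of this step (mirroring the tail of Program no.3 in the
Appendix), where blockIndex is the binary map of foreground blocks:

y = bwmorph(blockIndex, 'close');   % CLOSE: eliminate small cavities
z = bwmorph(y, 'open');             % OPEN: remove noise peaks
p = bwperim(z);                     % the bound of the region of interest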
Figure 4.3.2.1: ROI + Bound.
Figure [4.3.2.1] shows the fingerprint image area of interest and its
bound. The bound is the subtraction of the closed area from the opened
area. The leftmost, rightmost, uppermost and bottommost blocks outside
the bound are then discarded so as to get the tightly bounded region
containing just the bound and the inner area.
Chapter five: Minutiae
extraction
5.1 Fingerprint Ridge Thinning
Ridge Thinning is to eliminate the redundant pixels of ridges till the ridges
are just one pixel wide. An iterative, parallel thinning algorithm is used. In
each scan of the full fingerprint image, the algorithm marks down redundant
pixels in each small image window (3x3) and finally removes all those marked
pixels after several scans. The thinned ridge map is then filtered by three
more morphological operations to remove H-breaks, isolated points and
spikes. In this step, any single points, whether they are single-point ridges
or single-point breaks in a ridge, are eliminated and considered processing
noise.
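A minimal sketch of this stage (assuming the binarized image bin with ridge
pixels equal to 1; bwmorph performs the operations named above):

thin = bwmorph(bin, 'thin', Inf);    % thin until ridges are one pixel wide
out = bwmorph(thin, 'hbreak');       % remove H-breaks
out = bwmorph(out, 'spur');          % remove spikes
out = bwmorph(out, 'clean');         % remove isolated single points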
Figure 5.1.2: Image after removing H-breaks and spikes.
5.2 Minutia Marking
After the fingerprint ridge thinning, marking minutia points is relatively
easy. The concept of Crossing Number (CN) is widely used for extracting the
minutiae.
In general, for each 3x3 window, if the central pixel is 1 and has exactly 3
one-value neighbors, then the central pixel is a ridge branch [Figure 4.2.1]. If
the central pixel is 1 and has only 1 one-value neighbor, then the central
pixel is a ridge ending.
Ridge bifurcation:      Ridge ending:
0 1 0                   0 0 0
0 1 0                   0 1 0
1 0 1                   0 0 1

0 1 0
0 1 1
1 0 0
Figure 5.2.3 Triple counting branch
Figure 5.2.3 illustrates a special case in which a genuine branch is triple
counted. If both the uppermost pixel with value 1 and the rightmost pixel
with value 1 have another neighbor outside the 3x3 window, the two pixels
will also be marked as branches, although only one branch actually exists
in this small region.
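A minimal sketch of the neighbor count at one candidate pixel (i, j) of the
thinned image thin (this mirrors the test used in Program no.6 in the
Appendix):

win = thin(i-1:i+1, j-1:j+1);      % 3x3 window around the candidate pixel
nbrs = sum(win(:)) - win(2,2);     % number of one-value neighbors
if nbrs == 1
    % central pixel is a ridge ending
elseif nbrs == 3
    % candidate ridge branch, subject to the triple-count check above
end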
The average inter-ridge width D is also estimated at this stage. The average
inter-ridge width refers to the average distance between two neighboring
ridges. The way to approximate the D value is simple: scan a row of the
thinned ridge image and sum up all pixels in the row whose values are one;
then divide the row length by the above summation to get an inter-ridge
width. For more accuracy, such a row scan is performed upon several other
rows, and column scans are also conducted; finally, all the inter-ridge widths
are averaged to get the D value.

Together with the minutia marking, all thinned ridges in the fingerprint
image are labeled with a unique ID for further operations.
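A minimal sketch of one row scan of the D estimate described above (thin
is the thinned binary image; the full estimate in Program no.5 averages
several row and column scans):

row = thin(round(size(thin,1)/2), :);    % one row of the thinned image
D = numel(row) / max(sum(row), 1);       % row length / ridge-pixel count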
Figure 5.2.4: Image after minutiae marking.
Chapter six: Minutiae
post-processing
False Minutia Removal
The preprocessing stage does not usually fix the fingerprint image entirely.
For example, false ridge breaks due to an insufficient amount of ink, and
ridge cross-connections due to over-inking, are not totally eliminated.
Actually, all the earlier stages themselves occasionally introduce artifacts
which later lead to spurious minutiae. These false minutiae will significantly
affect the accuracy of matching if they are simply regarded as genuine.
Figure 6.1 False Minutia Structures. m1 is a spike piercing into a valley. In the m2
case a spike falsely connects two ridges. m3 has two near bifurcations located in the same ridge.
The two ridge broken points in the m4 case have nearly the same orientation and a short
distance. m5 is like the m4 case, with the exception that one part of the broken ridge is so short
that another termination is generated. m6 extends the m4 case but with the extra property that
a third ridge is found in the middle of the two parts of the broken ridge. m7 has only one short
ridge found in the threshold window.
The procedure for the removal of false minutia consists of the following
steps:
1. If the distance between one bifurcation and one termination is less than
D and the two minutiae are in the same ridge (m1 case), remove both of
them.
2. If the distance between two bifurcations is less than D and they are in the
same ridge, remove the two bifurcations (m2, m3 cases).
3. If two terminations are within a distance D, their directions are
coincident with a small angle variation, and they suffice the condition
that no other termination is located between the two terminations,
then the two terminations are regarded as false minutiae derived from a
broken ridge and are removed (m4, m5, m6 cases).
4. If two terminations are located in a short ridge with length less than D,
remove the two terminations (m7 case).
This false minutia removal procedure has two advantages. One is that the
ridge ID is used to distinguish minutiae, and the seven types of false minutiae
are strictly defined. The second advantage is that the order of the removal
steps is well considered to reduce computational complexity, by utilizing
the relations among the false minutia types. For example, procedure 3
solves the m4, m5 and m6 cases in a single check routine, and after
procedure 3, the number of false minutiae satisfying the m7 case is
significantly reduced.
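A minimal sketch of the distance test that drives these rules (minutiae is an
N x 3 list of [row, col, type], as built in Program no.7 in the Appendix; D is
the average inter-ridge width):

for i = 1:size(minutiae,1)-1
    for j = i+1:size(minutiae,1)
        d = norm(minutiae(i,1:2) - minutiae(j,1:2));   % Euclidean distance
        if d < D
            % apply rules 1-4 according to the two minutia types
        end
    end
end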
Chapter seven: Minutiae
match
Given two set of minutia of two fingerprint images, the minutia match
algorithm determines whether the two minutia sets are from the same finger
or not.
The matching algorithm consists of two consecutive stages: one is the
alignment stage and the second is the match stage.

1. Alignment stage: Given two fingerprint images to be matched, choose
any one minutia from each image and calculate the similarity of the two
ridges associated with the two referenced minutia points. If the similarity
is larger than a threshold, transform each set of minutiae to a new
coordinate system whose origin is at the referenced point and whose
x-axis is coincident with the direction of the referenced point.

2. Match stage: After we get the two sets of transformed minutia points, we
use an elastic match algorithm to count the matched minutia pairs, by
assuming that two minutiae having nearly the same position and direction
are identical.
7.1 Alignment Stage
The ridge associated with each minutia is represented as a series of
coordinates (x1, x2, ..., xn) of the points on the ridge. A point is sampled per
ridge length L starting from the minutia point, where L is the average
inter-ridge length. And n is set to 10 unless the total ridge length is less
than 10·L.
The similarity of correlating the two ridges is then derived from:

S = Σ_{i=0..m} (x_i · X_i) / [ Σ_{i=0..m} x_i² · X_i² ]^0.5

where (x1 ... xn) and (X1 ... XN) are the sets of ridge points for the two
fingerprint images, respectively, and m is the minimal one of the n and N
values. If the similarity score is larger than 0.8, then go to step 2; otherwise
continue to match the next pair of ridges.
2. For each fingerprint, translate and rotate all other minutiae with respect
to the reference minutia according to the following formula:

[xi_new; yi_new; θi_new] = TM * [xi − x; yi − y; θi − θ]

where (x, y, θ) are the parameters of the reference minutia, and TM is:

TM = [ cosθ  −sinθ  0
       sinθ   cosθ  0
       0      0     1 ]
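A minimal sketch of this transformation for one reference minutia
ref = [x, y, theta] applied to an N x 3 minutia list M (this parallels Program
no.9 in the Appendix; the names are illustrative):

N = size(M, 1);
d = (M - ref(ones(N,1), :))';              % translate by the reference minutia
t = ref(3);
TM = [cos(t) -sin(t) 0; sin(t) cos(t) 0; 0 0 1];
Mn = (TM * d)';                            % rotated positions and directions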
This method uses the rotation angle calculated earlier by tracing a short
ridge, of length D, starting from the minutia. Since the rotation angle is
already calculated and saved along with the coordinates of each minutia,
this saves some processing time. The following step is to transform each
minutia according to its own reference minutia and then match them in a
unified coordinate system.
7.2 Match Stage
The matching algorithm for the aligned minutia patterns needs to be
adaptive, since a strict match requires that all parameters (x, y, θ) are the
same for two identical minutiae, which is impossible to obtain when using
biometric-based matching.
The elastic matching of minutiae is achieved by placing a bounding box
around each template minutia: if the minutia to be matched is within the
rectangular box and the direction difference between them is very small,
then the two minutiae are regarded as a matched minutia pair. Each minutia
in the template image either has no matched minutia or has exactly one
corresponding minutia.
The final match ratio for two fingerprints is the number of total matched
pairs divided by the number of minutiae in the template fingerprint.
The score is 100 × ratio and ranges from 0 to 100. If the score is larger than a
pre-specified threshold (typically 80%), the two fingerprints are deemed to
come from the same finger.
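A minimal sketch of the bounding-box test for one aligned pair, with
hypothetical tolerances boxW, boxH and angTol (the report does not fix
their values here); T holds the template minutiae and Q the query minutiae,
each as rows of [x, y, theta]:

dx = abs(T(i,1) - Q(j,1));
dy = abs(T(i,2) - Q(j,2));
da = abs(T(i,3) - Q(j,3));
if dx < boxW && dy < boxH && da < angTol
    matched = matched + 1;            % count this minutia pair as matched
end
score = 100 * matched / size(T,1);    % final match score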
Chapter eight: System
evaluation and
conclusion
8.1 Evaluation of the system
As we can see in the graph shown below, when a step is eliminated from the
process, the performance of the system is affected.

Observations:

1. The system was tested once using the adaptive threshold and a second
time using a fixed threshold value of 80. As we can see from the graph, the
fixed value gave better results in this test. Still, it remains better to use the
adaptive threshold, since not all fingerprint images share the same contrast
characteristics.
3. If we try to remove the H-breaks step, the system wouldn't be greatly
affected.
8.2 Conclusion
Appendix
Program no.1
(Image Enhancement)
function [final] = fftenhance(image, f)
% Block-wise FFT enhancement: each 32x32 block is multiplied in the
% frequency domain by its own magnitude raised to the power f.
I = 255 - double(image);        % invert so ridges become bright
[w,h] = size(I);
%out = I;
w1 = floor(w/32)*32;            % crop to a multiple of the block size
h1 = floor(h/32)*32;
inner = zeros(w1,h1);
for i = 1:32:w1
    for j = 1:32:h1
        a = i+31;
        b = j+31;
        F = fft2(I(i:a, j:b));              % formula (1)
        factor = abs(F).^f;                 % |F(u,v)|^k
        block = abs(ifft2(F.*factor));      % formulas (2) and (3)
        larv = max(block(:));
        if larv == 0
            larv = 1;                       % avoid division by zero
        end;
        block = block./larv;                % normalize the block to [0,1]
        inner(i:a, j:b) = block;
    end;
end;
final = inner*255;
final = histeq(uint8(final));               % equalize the enhanced image
Program no.2
(Image Binarization)
function [o] = adaptiveThres(a, W, noShow)
% Locally adaptive binarization: the image a is divided into WxW blocks
% and each pixel is compared against 80% of its block's mean intensity.
% NOTE: the function header was missing from the source; it has been
% reconstructed from the call adaptiveThres(image,16,0) in Program no.3.
[w,h] = size(a);
o = zeros(w,h);
%separate the image into WxW blocks
for i = 1:W:w
    for j = 1:W:h
        mean_thres = mean2(a(i:i+W-1, j:j+W-1));
        mean_thres = 0.8*mean_thres;        % scale the block mean
        o(i:i+W-1, j:j+W-1) = a(i:i+W-1, j:j+W-1) < mean_thres;
    end;
end;
if nargin == 2          % display the result unless a third argument is given
    imagesc(o);
    colormap(gray);
end;
Program no.3
(Block Direction Estimation)

function [center, blockIndex, p] = direction(image, blocksize, noShow)
% NOTE: the function header was missing from the source; the name,
% outputs and arguments are reconstructed from the code below and
% are assumptions.
%image=adaptiveThres(image,16,0);
[w,h] = size(image);
direct = zeros(w,h);
gradient_times_value = zeros(w,h);
gradient_sq_minus_value = zeros(w,h);
gradient_for_bg_under = zeros(w,h);
W = blocksize;
theta = 0;
sum_value = 1;
bg_certainty = 0;
blockIndex = zeros(ceil(w/W),ceil(h/W));
%directionIndex = zeros(ceil(w/W),ceil(h/W));
times_value = 0;
minus_value = 0;
center = [];
filter_gradient = fspecial('sobel');
I_horizontal = filter2(filter_gradient,image);
filter_gradient = transpose(filter_gradient);
I_vertical = filter2(filter_gradient,image);
gradient_times_value=I_horizontal.*I_vertical;
gradient_sq_minus_value=(I_vertical-
I_horizontal).*(I_vertical+I_horizontal);
gradient_for_bg_under = (I_horizontal.*I_horizontal) +
(I_vertical.*I_vertical);
for i = 1:W:w
    for j = 1:W:h
        if j+W-1 < h & i+W-1 < w
            % block sums (these lines were truncated in the source and
            % have been reconstructed to match the variables above)
            times_value = sum(sum(gradient_times_value(i:i+W-1, j:j+W-1)));
            minus_value = sum(sum(gradient_sq_minus_value(i:i+W-1, j:j+W-1)));
            sum_value = sum(sum(gradient_for_bg_under(i:i+W-1, j:j+W-1)));
            bg_certainty = 0;
            theta = 0;
            if sum_value ~= 0 & times_value ~= 0
                % certainty level E of the block direction
                bg_certainty = (times_value*times_value + minus_value*minus_value)/(W*W*sum_value);
                if bg_certainty > 0.05   % threshold reconstructed; the value is an assumption
                    blockIndex(ceil(i/W),ceil(j/W)) = 1;
                    %tan_value = atan2(minus_value,2*times_value);
                    tan_value = atan2(2*times_value,minus_value);
                    theta = (tan_value)/2;
                    theta = theta+pi/2;
                    center = [center; [round(i+(W-1)/2), round(j+(W-1)/2), theta]];
                end;
            end;
        end;
        times_value = 0;
        minus_value = 0;
        sum_value = 0;
    end;
end;
if nargin == 2
imagesc(direct);
hold on;
[u,v] = pol2cart(center(:,3),8);
quiver(center(:,2),center(:,1),u,v,0,'g');
hold off;
end;
x = bwlabel(blockIndex,4);   % label the connected foreground blocks
y = bwmorph(x,'close');      % CLOSE: eliminate small cavities
z = bwmorph(y,'open');       % OPEN: remove noise peaks
p = bwperim(z);              % bound of the region of interest
Program no.4
(ROI Extraction)

function [roiImg, roiBound, roiArea] = drawROI(in, inBound, inArea, noShow)
% NOTE: the function header was missing from the source; the name,
% outputs and arguments are reconstructed from the variables used below.
[iw,ih]=size(in);
tmplate = zeros(iw,ih);
[w,h] = size(inArea);
tmp=zeros(iw,ih);
left = 1;
right = h;
upper = 1;
bottom = w;
le2ri = sum(inBound);
roiColumn = find(le2ri>0);
left = min(roiColumn);
right = max(roiColumn);
tr_bound = inBound';
up2dw=sum(tr_bound);
roiRow = find(up2dw>0);
upper = min(roiRow);
bottom = max(roiRow);
% template intensity levels: 0 (background), 100 (inner area), 200 (bound)
for i = upper:1:bottom
    for j = left:1:right
        if inBound(i,j) == 135              % bound block
            tmplate(16*i-15:16*i, 16*j-15:16*j) = 200;
            tmp(16*i-15:16*i, 16*j-15:16*j) = 1;
        elseif inArea(i,j) == 1             % inner-area block (branch reconstructed)
            tmplate(16*i-15:16*i, 16*j-15:16*j) = 100;
            tmp(16*i-15:16*i, 16*j-15:16*j) = 1;
        end;
    end;
end;
in=in.*tmp;
roiImg = in(16*upper-15:16*bottom,16*left-15:16*right);
roiBound = inBound(upper:bottom,left:right);
roiArea = inArea(upper:bottom,left:right);
%inner area
if nargin == 3
colormap(gray);
imagesc(tmplate);
end;
Program no.5
(Ridge Thinning)

function [edgeDistance] = interRidgeWidth(image, inROI, blkSize)
% NOTE: the function header was missing from the source; the name and
% arguments are reconstructed from the call interRidgeWidth(in,inArea,blkSize)
% in Program no.6. This routine estimates the average inter-ridge width D
% described in section 5.2 (the block size appears hardcoded as 16 below).
[w,h] = size(image);
% locate the ROI columns (inROI is the block-level ROI map)
a = sum(inROI);
b = find(a>0);
c = min(b);
d = max(b);
i = round(w/5);
m = 0;
% scan 4 rows and count ridge pixels across the ROI width
for k = 1:4
    m = m + sum(image(k*i, 16*c:16*d));
end;
e = (64*(d-c))/m;          % 4 rows x 16(d-c) pixels / ridge-pixel count
% repeat with 4 column scans
a = sum(inROI,2);
b = find(a>0);
c = min(b);
d = max(b);
i = round(h/5);
m = 0;
for k = 1:4
    m = m + sum(image(16*c:16*d, k*i));
end;
m = (64*(d-c))/m;
edgeDistance = round((m+e)/2);   % average of row and column estimates
Program no. 6
(Minutia marking)
function [end_list, branch_list, ridgeOrderMap, edgeWidth] = mark_minutia(in, inBound, inArea, block)
[w,h] = size(in);
[ridgeOrderMap,totalRidgeNum] = bwlabel(in);
imageBound = inBound;
imageArea = inArea;
blkSize = block;
%innerArea = im2double(inArea)-im2double(inBound);
edgeWidth = interRidgeWidth(in,inArea,blkSize);
end_list = [];
branch_list = [];
for n=1:totalRidgeNum
[m,n] = find(ridgeOrderMap==n);
b = [m,n];
ridgeW = size(b,1);
for x = 1:ridgeW
i = b(x,1);
j = b(x,2);
%if imageArea(ceil(i/blkSize),ceil(j/blkSize))==1 & imageBound(ceil(i/blkSize),ceil(j/blkSize)) ~= 1
if inArea(ceil(i/blkSize),ceil(j/blkSize)) == 1
neiborNum = 0;
neiborNum = sum(sum(in(i-1:i+1, j-1:j+1)));
neiborNum = neiborNum - 1;   % exclude the center pixel (line reconstructed)
if neiborNum == 1
    end_list = [end_list; [i,j]];   % ridge ending (line reconstructed)
elseif neiborNum == 3
%if two neighbors among the three are connected directly,
%three branches may be counted in the neighboring three cells
tmp=in(i-1:i+1,j-1:j+1);
tmp(2,2)=0;
[abr,bbr]=find(tmp==1);
t=[abr,bbr];
if isempty(branch_list)
branch_list = [branch_list;[i,j]];
else
for p = 1:3
    % check whether this neighbor is already a recorded branch
    % (the original search line was truncated; reconstructed)
    cbr = find(branch_list(:,1) == t(p,1)+i-2 & branch_list(:,2) == t(p,2)+j-2);
if ~isempty(cbr)
p=4;
break;
end;
end;
if p==3
branch_list = [branch_list;[i,j]];
end;
end;
end;
end;
end;
end;
Program no.7
(False Minutia Removal)

function [final_end, final_branch, pathMap] = remove_spurious_Minutia(in, end_list, branch_list, inArea, ridgeOrderMap, edgeWidth)
% NOTE: the output list of the function header was truncated in the source;
% the outputs shown are reconstructed from the variables built below.
[w,h] = size(in);
final_end = [];
final_branch =[];
direct = [];
pathMap = [];
end_list(:,3) = 0;
branch_list(:,3) = 1;
minutiaeList = [end_list;branch_list];
finalList = minutiaeList;
[numberOfMinutia,dummy] = size(minutiaeList);
suspectMinList = [];
for i= 1:numberOfMinutia-1
for j = i+1:numberOfMinutia
d = ((minutiaeList(i,1) - minutiaeList(j,1))^2 + (minutiaeList(i,2) - minutiaeList(j,2))^2)^0.5;
if d < edgeWidth
suspectMinList =[suspectMinList;[i,j]];
end;
end;
end;
[totalSuspectMin,dummy] = size(suspectMinList);
for k = 1:totalSuspectMin
typesum = minutiaeList(suspectMinList(k,1),3) + minutiaeList(suspectMinList(k,2),3);
if typesum == 1
if ridgeOrderMap(minutiaeList(suspectMinList(k,1),1), minutiaeList(suspectMinList(k,1),2)) == ridgeOrderMap(minutiaeList(suspectMinList(k,2),1), minutiaeList(suspectMinList(k,2),2))
    % a termination and a bifurcation on the same ridge, closer than D (m1)
    finalList(suspectMinList(k,1),1:2) = [-1,-1];
    finalList(suspectMinList(k,2),1:2) = [-1,-1];
end;
elseif typesum == 2
if ridgeOrderMap(minutiaeList(suspectMinList(k,1),1), minutiaeList(suspectMinList(k,1),2)) == ridgeOrderMap(minutiaeList(suspectMinList(k,2),1), minutiaeList(suspectMinList(k,2),2))
    % two bifurcations on the same ridge, closer than D (m2, m3)
    finalList(suspectMinList(k,1),1:2) = [-1,-1];
    finalList(suspectMinList(k,2),1:2) = [-1,-1];
end;
elseif typesum == 0
    % two terminations closer than D (m4, m5, m6, m7 cases)
    a = minutiaeList(suspectMinList(k,1),1:3);
    b = minutiaeList(suspectMinList(k,2),1:3);
    if ridgeOrderMap(a(1),a(2)) ~= ridgeOrderMap(b(1),b(2))
        [thetaA,pathA,dd,mm] = getLocalTheta(in,a,edgeWidth);
        [thetaB,pathB,dd,mm] = getLocalTheta(in,b,edgeWidth);
        % direction of the line connecting the two terminations
        % (this line was lost in the source and has been reconstructed)
        thetaC = atan2(a(1)-b(1), a(2)-b(2));
        angleAB = abs(thetaA-thetaB);
        angleAC = abs(thetaA-thetaC);
        % remove the pair if the two directions are nearly coincident
        % (condition reconstructed around the surviving fragments)
        if ((angleAB < pi/3) | (abs(angleAB - pi) < pi/3)) & ...
           ((angleAC < pi/3) | (abs(angleAC - pi) < pi/3))
            finalList(suspectMinList(k,1),1:2) = [-1,-1];
            finalList(suspectMinList(k,2),1:2) = [-1,-1];
        end;
    else
        % two terminations on the same short ridge (m7 case)
        finalList(suspectMinList(k,1),1:2) = [-1,-1];
        finalList(suspectMinList(k,2),1:2) = [-1,-1];
    end;
end;
end;
for k =1:numberOfMinutia
if finalList(k,1:2) ~= [-1,-1]
if finalList(k,3) == 0
[thetak,pathk,dd,mm] = getLocalTheta(in,finalList(k,:),edgeWidth);
final_end=[final_end;[finalList(k,1:2),thetak]];
[id,dummy] = size(final_end);
pathk(:,3) = id;
pathMap = [pathMap;pathk];
end;
else
final_branch=[final_branch;finalList(k,1:2)];
[thetak, path1, path2, path3] = getLocalTheta(in, finalList(k,:), edgeWidth);
% keep the branch only if all three emanating ridges are long enough
% (the first part of this condition was truncated; reconstructed)
if size(path1,1) >= edgeWidth & size(path2,1) >= edgeWidth & size(path3,1) >= edgeWidth
final_end=[final_end;[path1(1,1:2),thetak(1)]];
[id,dummy] = size(final_end);
path1(:,3) = id;
pathMap = [pathMap;path1];
final_end=[final_end;[path2(1,1:2),thetak(2)]];
path2(:,3) = id+1;
pathMap = [pathMap;path2];
final_end=[final_end;[path3(1,1:2),thetak(3)]];
path3(:,3) = id+2;
pathMap = [pathMap;path3];
end;
end;
end;
end;
Program no.8
(Alignment stage)
% Fragment: align the ridge associated with minutia k (the enclosing
% function header was lost in the source).
theta = real_end(k,3);
if theta < 0
    theta = 2*pi + theta;    % normalize into [0, 2*pi) (reconstructed)
end;
theta1 = pi/2 - theta;       % rotation aligning the minutia direction
rotate_mat=[cos(theta1),-sin(theta1);sin(theta1),cos(theta1)];
%x1 x2 x3...
%y1 y2 y3...
toBeTransformedPointSet = ridgeMap(min(pathPointForK):max(pathPointForK),1:2)';
tonyTrickLength = size(toBeTransformedPointSet,2);
pathStart = real_end(k,1:2)';
translatedPointSet = toBeTransformedPointSet - pathStart(:, ones(1,tonyTrickLength));
newXY = rotate_mat*translatedPointSet;
theta = real_end(k,3);
if theta < 0
    theta = 2*pi + theta;    % normalize into [0, 2*pi) (reconstructed)
end;
theta1 = pi/2 - theta;
rotate_mat=[cos(theta1),-sin(theta1),0;sin(theta1),cos(theta1),0;0,0,1];
toBeTransformedPointSet = real_end';
tonyTrickLength = size(toBeTransformedPointSet,2);
pathStart = real_end(k,:)';
translatedPointSet = toBeTransformedPointSet - pathStart(:, ones(1,tonyTrickLength));
newXY = rotate_mat*translatedPointSet;
for i = 1:tonyTrickLength
    if or(newXY(3,i) > pi, newXY(3,i) < -pi)
        % wrap the transformed angle back into (-pi, pi] (body reconstructed)
        newXY(3,i) = newXY(3,i) - sign(newXY(3,i))*2*pi;
    end;
end;
Program no.9
(Minutiae matching)
theta = real_end(k,3);
if theta < 0
    theta = 2*pi + theta;    % normalize into [0, 2*pi) (reconstructed)
end;
theta1 = pi/2 - theta;
rotate_mat=[cos(theta1),-sin(theta1),0;sin(theta1),cos(theta1),0;0,0,1];
toBeTransformedPointSet = real_end';
tonyTrickLength = size(toBeTransformedPointSet,2);
pathStart = real_end(k,:)';
translatedPointSet = toBeTransformedPointSet - pathStart(:, ones(1,tonyTrickLength));
newXY = rotate_mat*translatedPointSet;
for i = 1:tonyTrickLength
    if or(newXY(3,i) > pi, newXY(3,i) < -pi)
        % wrap the transformed angle back into (-pi, pi] (body reconstructed)
        newXY(3,i) = newXY(3,i) - sign(newXY(3,i))*2*pi;
    end;
end;