
JOURNAL OF COMPUTING, VOLUME 4, ISSUE 6, JUNE 2012, ISSN (Online) 2151-9617, WWW.JOURNALOFCOMPUTING.ORG

An Efficient Iris Segmentation Approach to Develop an Iris Recognition System

Md. Selim Al Mamun, S.M. Tareeq and Md. Hasanuzzaman

Abstract—Iris recognition is regarded as the most stable and accurate biometric identification system. An iris recognition system basically consists of four steps: segmentation, normalization, encoding and matching. This paper proposes an efficient approach for iris segmentation. The segmentation approach uses a modified Canny edge detection algorithm that combines gradient finding, non-maximum suppression and hysteresis thresholding for the best results. This paper also presents an automated iris recognition system based on the proposed segmentation approach. The approach proved to be very successful: about 95% of the images of the dataset [CASIA database version 1.0] are segmented successfully. The iris recognition system resulted in a False Reject Rate (FRR) of 5.222% and a False Accept Rate (FAR) of 1.932%.

Index Terms— Iris Segmentation, Iris Recognition, Hough Transformation, False Reject Rate, False Accept Rate.

——————————  ——————————

1 INTRODUCTION

A biometric system refers to the identification and verification of individuals based on certain physiological traits of a person. Commonly used biometric features for this purpose include facial features, voice, fingerprint, handwriting, the retina and, most importantly, the iris. Person identification using the iris is a newly emergent technique in the world of biometric systems. It is gaining a lot of attention due to its accuracy, reliability and simplicity compared to other biometric systems.

The iris is an externally visible, yet protected organ located behind the cornea. The features of the iris include the trabecular meshwork, crypts, the pigment spots (moles and freckles) and the color of the iris. These visible patterns are unique to all individuals, and it has been found that the probability of finding two individuals with identical iris patterns is almost zero. Even the left and right irises of a given person, although they correlate with genetic determination, are different from each other [1].

An iris recognition system is an automated person identification technique in which the iris pattern of the individual is used for identification. Some prototype iris recognition systems had been proposed earlier, but the topic received little attention until the Cambridge researcher John Daugman [2] practically implemented a working iris recognition system. This was the first working automated iris recognition system. Besides J. Daugman's [2] system, some other systems have been developed; the most notable include the systems of Wildes et al. [3], Boles and Boashash [4], Lim et al. [5] and Noh et al. [6]. The systems implemented by different researchers differ in the segmentation process, in iris code generation and also in the matching techniques. Wildes et al.'s [3] system employed the Hough transform, a standard computer vision algorithm used to determine the parameters of simple geometric objects. However, it requires threshold values to be chosen for edge detection, which may remove some critical edge points and result in failure to detect circles/arcs. J. Daugman [2] used an integro-differential operator for locating the circular iris region. Daugman's algorithm does not suffer from the thresholding problems of the Hough transform, but it may fail if there is noise (from reflections) in the image. Boles and Boashash [4] used an active contour model for segmentation, which suffers from high time consumption.

This paper consists of five sections. Section 2 describes the proposed iris recognition system, Section 3 describes the implementation of this system, Section 4 presents experimental results and discussions, and Section 5 concludes this paper.

2 PROPOSED IRIS RECOGNITION SYSTEM

Fig. 1. Proposed system architecture: image acquisition, followed by segmentation (image preprocessing, edge detection, noise filtering, iris localization, pupil detection, removal of eyelids and eyelashes), normalization, feature extraction and matching.

Fig. 1 shows the proposed system architecture. The proposed system is composed of five modules: (i) Image Acquisition; (ii) Segmentation: locating the iris region in an eye image; (iii) Normalization: creating a dimensionally consistent representation of the iris region; (iv) Feature Extraction: creating a template containing only the most discriminating features of the iris; and (v) Matching: matching a test template with the stored templates.

The proposed segmentation approach includes image preprocessing, edge detection on the eye image, noise filtering, iris localization, pupil detection, and removal of eyelids and eyelashes.

3 SYSTEM DESCRIPTION AND IMPLEMENTATION

3.1 Image Acquisition
This step is one of the most important and deciding factors for obtaining a good result. A good and clear image eliminates the need for noise removal and also helps to avoid errors in calculation. This paper uses the CASIA (Chinese Academy of Sciences Institute of Automation) iris database [7]. It contains 756 iris images from 108 subjects. All iris images are 8-bit gray-level JPEG files, collected under near-infrared illumination and free from specular reflection.

3.2 Image Segmentation
The image segmentation module includes several steps. The following subsections sequentially introduce the image preprocessing, edge detection, noise filtering, iris localization, pupil detection, and eyelid and eyelash removal methods.

3.2.1 Image Preprocessing
To make the computation faster, the image is scaled down by a factor of 0.40. The images of CASIA are already preprocessed for iris research, so there is very little to clean up in the image. The images are filtered using a Gaussian smoothing filter [8], which blurs the image and reduces the effects of noise. The degree of smoothing is decided by the standard deviation, which in this case is chosen as 2.0.

Fig. 2. Result of Preprocessing
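As an illustration only (the authors' implementation is in MATLAB, see Section 4.1), the preprocessing step described above might be sketched in Python as follows; the scale factor and standard deviation are the values given in the text.

# Illustrative sketch, not the paper's MATLAB code: scale the eye image to 40%
# of its size and smooth it with a Gaussian filter of standard deviation 2.0.
import numpy as np
from scipy import ndimage

def preprocess(eye_image: np.ndarray, scale: float = 0.40, sigma: float = 2.0) -> np.ndarray:
    """Down-scale a gray-level eye image and apply Gaussian smoothing."""
    small = ndimage.zoom(eye_image.astype(float), scale)  # spline resampling to 40%
    return ndimage.gaussian_filter(small, sigma=sigma)    # blur to suppress noise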

3.2.2 Edge Detection
This paper uses a modified Canny edge detection algorithm for detecting edges in the eye image. The modified edge detection algorithm involves three steps: finding the gradient, non-maximum suppression and hysteresis thresholding. Kovesi's [9] algorithm is used for finding the gradients in the image, and it is modified following Wildes's [3] suggestion: for the iris area the vertical gradient is weighted by 1.0 and the horizontal gradient by 0.0, while for the pupil both the vertical and horizontal gradients are weighted by 1.0. This gradient image is used to find peaks using the non-maximum suppression method proposed by Kovesi [9]. For a pixel (x, y) in the gradient image with orientation θ(x, y), the edge intersects two of its 8-connected neighbors; the point at (x, y) is a maximum if its value is not smaller than the values at the two intersection points. The third step is hysteresis thresholding, implemented with the same hysteresis thresholding method used in J. Canny's algorithm [10]: any pixel with a value greater than a high threshold is taken as an edge pixel, all pixels below a low threshold are eliminated, and pixels between the two thresholds are also considered edge pixels if they are connected to edge pixels (pixels above the high threshold) through a chain of pixels all above the low threshold.

Fig. 3. Result of Edge Detection
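A minimal sketch of the weighted-gradient and hysteresis steps is given below (Python, rather than the modified Kovesi MATLAB code actually used). The reference implementation does not spell out which derivative it labels "vertical", so the sketch names the weights by the edge orientation they respond to, following the stated intent that horizontally aligned eyelid edges are suppressed for the iris/sclera boundary; non-maximum suppression is omitted for brevity.

# Illustrative sketch, not the authors' modified Kovesi/Canny implementation.
import numpy as np
from scipy import ndimage

def weighted_gradient(img, w_vert_edges=1.0, w_horz_edges=0.0):
    """Gradient magnitude with separately weighted derivative components.

    w_vert_edges scales the response to vertically oriented edges (iris/sclera
    boundary); w_horz_edges scales the response to horizontally aligned edges
    (eyelids). Use (1.0, 0.0) for the iris boundary and (1.0, 1.0) for the pupil.
    """
    d_cols = ndimage.sobel(img, axis=1)   # responds to vertically oriented edges
    d_rows = ndimage.sobel(img, axis=0)   # responds to horizontally aligned edges
    return np.hypot(w_vert_edges * d_cols, w_horz_edges * d_rows)

def hysteresis_threshold(mag, low, high):
    """Canny-style hysteresis: keep strong pixels and weak pixels connected to them."""
    strong = mag > high
    weak = mag > low
    labels, _ = ndimage.label(weak)                    # connected regions of weak pixels
    keep = np.isin(labels, np.unique(labels[strong]))  # regions containing a strong pixel
    return keep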

3.2.3 Noise Filtering
A median filter [11] is used in order to reduce the extraneous data found in the edge detection stage. This can remove some pixels on the circle boundary, but the boundary can still be localized successfully even with a few pixels missing. The filtering not only makes the circle localization accurate but also makes it computationally faster, since fewer boundary pixels remain for the calculation.

Fig. 4. Result of Noise Filtering

3.2.4 Iris Localization
To detect the outer circle at the iris/sclera boundary, a modified circular Hough transform algorithm [12] is used. The range of radius values is set manually; the iris radius ranges from 90 to 150 pixels. For each edge point, circles with different radii are drawn, the points on the circles surrounding it at the different radii are taken, and their weights are increased if they are also edge points. These weights are added to an accumulator array. Once the circle points for the different radii have been considered for all edge points, the maximum of the accumulator array is used to find the center of the circle and its radius.
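The noise filtering and circle search of Sections 3.2.3 and 3.2.4 can be sketched as below. This is an illustrative Python version, not the authors' MATLAB code; the median filter size and angular sampling are assumed values, while the radius ranges are the ones stated in the text.

# Illustrative sketch of the median filtering plus brute-force circular Hough voting.
import numpy as np
from scipy import ndimage

def circular_hough(edge_map, r_min, r_max):
    """Return (row, col, radius) of the strongest circle in a binary edge map."""
    edges = ndimage.median_filter(edge_map.astype(np.uint8), size=3) > 0  # Section 3.2.3
    rows, cols = edge_map.shape
    radii = np.arange(r_min, r_max + 1)
    theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    accumulator = np.zeros((len(radii), rows, cols))
    ys, xs = np.nonzero(edges)
    for i, r in enumerate(radii):
        # every edge point votes for all candidate centres lying at distance r from it
        cy = np.rint(ys[:, None] - r * np.sin(theta)[None, :]).astype(int)
        cx = np.rint(xs[:, None] - r * np.cos(theta)[None, :]).astype(int)
        ok = (cy >= 0) & (cy < rows) & (cx >= 0) & (cx < cols)
        np.add.at(accumulator[i], (cy[ok], cx[ok]), 1)
    i, y, x = np.unravel_index(np.argmax(accumulator), accumulator.shape)
    return y, x, radii[i]

# e.g. iris boundary: y, x, r = circular_hough(edge_map, 90, 150)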


Fig. 5. Result of Iris Localization

3.2.5 Pupil Detection
For pupil detection the circular Hough transform algorithm is applied again. The radius range for the pupil is 28 to 75 pixels. In order to make the pupil detection process more efficient and accurate, the Hough transform for the iris/pupil boundary is performed within the iris region instead of the whole eye region, since the pupil is always within the iris region. After this process a circle is clearly present along the pupil boundary.

Fig. 6. Result of Pupil Detection
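Assuming the circular_hough sketch given earlier, restricting the pupil search to the localized iris region might look like the following; the variable names are illustrative, and the 28 to 75 pixel radius range is the one reported in the text.

# Illustrative continuation of the circular_hough sketch: search for the pupil only
# inside the bounding box of the previously found iris circle.
import numpy as np

def detect_pupil(edge_map, iris_row, iris_col, iris_r, r_min=28, r_max=75):
    rows, cols = edge_map.shape
    top = max(iris_row - iris_r, 0)
    left = max(iris_col - iris_r, 0)
    bottom = min(iris_row + iris_r, rows)
    right = min(iris_col + iris_r, cols)
    sub = edge_map[top:bottom, left:right]        # iris region only, not the whole eye
    y, x, r = circular_hough(sub, r_min, r_max)   # defined in the earlier sketch
    return top + y, left + x, r                   # map back to full-image coordinates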

3.2.6 Removal of Eyelids and Eyelashes
To isolate the eyelids from the rest of the image, lines are fitted to the upper and lower eyelids using the linear Hough transform. The lines are fitted exterior to the pupil region and interior to the iris region. The points above the upper line and below the lower line are marked as NaN. The eyelashes are very dark compared to the rest of the iris image, so they are easily removed using a simple thresholding technique; those pixels are also marked as NaN. Fig. 10 shows the result.
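The eyelash step reduces to a threshold and a NaN mask, as in the sketch below; the threshold value used here is an assumption for illustration (the paper does not report one), and the eyelid lines themselves would come from a linear Hough transform that is not reproduced here.

# Illustrative eyelash masking; the gray-level threshold is an assumed value.
import numpy as np

def mask_eyelashes(iris_region: np.ndarray, threshold: float = 80.0) -> np.ndarray:
    """Mark very dark pixels (eyelashes) as NaN in a float copy of the iris region."""
    masked = iris_region.astype(float).copy()
    masked[masked < threshold] = np.nan   # eyelashes are much darker than the iris
    return masked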
3.3 Normalization
The next step is to normalize the segmented iris region, to enable generation of the iris template and to represent it in a generalized way for comparisons. For this purpose, a technique based on Daugman's [2] rubber sheet model is employed. The center of the pupil is considered as the reference point, and a remapping formula is used to convert the points on the Cartesian scale to the polar scale:

r' = \sqrt{\alpha}\,\beta \pm \sqrt{\alpha\beta^{2} - \alpha + r_{I}^{2}}    (1)

where \alpha = o_{x}^{2} + o_{y}^{2} and \beta = \cos\!\left(\pi - \arctan\!\left(\frac{o_{y}}{o_{x}}\right) - \theta\right).

The displacement of the center of the pupil relative to the center of the iris is given by o_{x}, o_{y}; r' is the distance between the edge of the pupil and the edge of the iris at an angle \theta around the region, and r_{I} is the radius of the iris.

Fig. 8. Result of Normalization
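A simplified Python sketch of the rubber sheet remapping follows. For each angle it computes the distance r' from the pupil centre to the iris circle (the quantity in Eq. (1), obtained here by solving the circle intersection directly) and samples radially between the pupil edge and the iris edge. The angular and radial resolutions (240 x 20) and the nearest-neighbour sampling are assumptions, not values from the paper.

# Illustrative rubber sheet normalization sketch (the paper's implementation is MATLAB).
import numpy as np

def rubber_sheet(img, pupil_c, pupil_r, iris_c, iris_r, n_theta=240, n_rad=20):
    """Unwrap the iris ring into an n_rad x n_theta rectangular block.

    pupil_c and iris_c are (row, col) centres; pupil_r and iris_r are radii in pixels.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    u = np.stack([np.sin(theta), np.cos(theta)])            # unit ray directions (row, col)
    v = np.asarray(pupil_c, float) - np.asarray(iris_c, float)
    b = u[0] * v[0] + u[1] * v[1]
    # distance r' from the pupil centre to the iris circle along each ray (cf. Eq. (1))
    r_prime = -b + np.sqrt(b * b - np.dot(v, v) + iris_r ** 2)
    radii = np.linspace(0.0, 1.0, n_rad)[:, None]           # 0 = pupil edge, 1 = iris edge
    rr = pupil_c[0] + (pupil_r + radii * (r_prime - pupil_r)) * u[0]
    cc = pupil_c[1] + (pupil_r + radii * (r_prime - pupil_r)) * u[1]
    rows = np.clip(rr.round().astype(int), 0, img.shape[0] - 1)
    cols = np.clip(cc.round().astype(int), 0, img.shape[1] - 1)
    return img[rows, cols]                                   # nearest-neighbour sampling

NaN pixels produced by the eyelid and eyelash masking simply propagate through the sampling, so the normalized block carries its own noise information into the next stage.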

3.4 Feature Extraction
Encoding is done using the Gabor filter [13] by breaking up the 2D normalized pattern into a number of 1D signals and then convolving these signals with 1D Gabor wavelets. The output of the filter is then phase quantized to four levels using Daugman's [2] method, with each filter producing two bits of data for each phasor. The iris code is formed by assigning 2 bits to each pixel of the normalized pattern; each bit is 1 or 0 depending on the sign (+ or −) of the real and imaginary parts, respectively.

Fig. 9. Result of Feature Extraction: (a) Original image, (b) Template of the eye image and (c) Mask of the eye image

3.5 Matching

3.5.1 Hamming Distance
This paper uses the modified Hamming distance used by J. Daugman [2] for matching, which also incorporates noise masking so that only significant bits are used in calculating the Hamming distance between two iris templates. The modified Hamming distance used for matching is given below:

HD = \frac{\sum_{j=1}^{N} (X_{j} \oplus Y_{j}) \cap \overline{Xn_{j}} \cap \overline{Yn_{j}}}{N - \sum_{k=1}^{N} (Xn_{k} \cup Yn_{k})}    (2)

where X_{j} and Y_{j} are the two bit-wise templates to compare, Xn_{j} and Yn_{j} are the corresponding noise masks for X_{j} and Y_{j}, and N is the number of bits represented by each template.
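Eq. (2) translates directly into array operations; the sketch below assumes the boolean template/mask layout produced by the encoding sketch above, with True mask bits marking noise.

# Illustrative masked Hamming distance of Eq. (2).
import numpy as np

def hamming_distance(x, y, x_mask, y_mask):
    """Fraction of disagreeing bits, counted only where neither template is noisy."""
    usable = ~(x_mask | y_mask)            # ignore bits marked as noise in either template
    n_usable = np.count_nonzero(usable)
    if n_usable == 0:
        return 1.0                         # no usable bits: treat as a complete mismatch
    disagreements = np.count_nonzero((x ^ y) & usable)
    return disagreements / n_usable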

3.5.2 Rotation Variation Adaptation
In order to account for rotational inconsistencies, when the Hamming distance of two templates is calculated, one template is shifted bit-wise to the left and right and the Hamming distance is recalculated for each shift. This bit-wise shifting in the horizontal direction corresponds to a rotation of the original iris region by an angle given by the angular resolution used when encoding the iris.
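The shifting can be sketched on top of the Hamming distance function above: the template and its mask are rolled along the angular axis and the smallest distance over the shift range is kept (up to 8 shifts in each direction in the experiments of Section 4.3.3).

# Illustrative rotation compensation using the hamming_distance sketch given earlier.
import numpy as np

def shifted_distance(x, y, x_mask, y_mask, max_shift=8):
    """Minimum masked Hamming distance over horizontal (angular) template shifts."""
    return min(
        hamming_distance(np.roll(x, s, axis=1), y, np.roll(x_mask, s, axis=1), y_mask)
        for s in range(-max_shift, max_shift + 1)
    )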

4 EXPERIMENTAL RESULTS AND DISCUSSIONS
In this section the performance of the developed system is evaluated. Different types of tests are conducted to evaluate the accuracy of the system; these include decidability, False Accept Rate (FAR), False Reject Rate (FRR), Equal Error Rate (EER) and the number of shifts needed to make the system rotation invariant. This paper uses iris images from the CASIA database (version 1.0) to verify the uniqueness of the iris pattern and to evaluate the performance of the proposed system.

4.1 Experimental Setup
The proposed system is implemented using MATLAB 7.50. For statistical analysis the statistical tool 'R' is used. The system runs on an Intel Core 2 Duo 2.13 GHz machine with 2 GB RAM (DDR2, 800 MHz bus). The system is tested using the CASIA dataset, which contains 108 subjects with 7 samples each.

4.2 Result of Segmentation

Fig. 10. Result of Segmentation: (a) Original Image, (b) After removing eyelids and (c) After removing eyelashes

The proposed segmentation approach proved to be very successful. The new segmentation approach successfully segmented 681 out of 756 eye images of CASIA (version 1.0), which corresponds to a success rate of around 90%. There are some cases where segmentation of the iris fails; the problem images had small intensity differences between the iris region and the pupil region. This situation is shown in Fig. 11.

Fig. 11. Cases where segmentation fails

4.3 Performance Evaluation
The key issue in all pattern recognition problems is to find a unique separation point between intra-class and inter-class variability. An individual can be reliably classified only if the variability among different instances of a given class is less than the variability between different classes. So a threshold value must be chosen so that a decision can be made as to whether two templates were created from the same individual or from different individuals. From Fig. 12 it is easily visible that the common region between the intra-class and inter-class distributions is very small, which indicates a good result.

Fig. 12. Distribution of Hamming Distance (HD vs. Density)

Fig. 13. Distribution of Hamming Distance (HD vs. Frequency)
4.3.1 Decidability
A popular metric for determining the threshold value for pattern recognition or identification is 'decidability'. It is evaluated from the means and standard deviations of the intra-class and inter-class distributions. The decidability is defined as

d' = \frac{\lvert \mu_{S} - \mu_{D} \rvert}{\sqrt{(\sigma_{S}^{2} + \sigma_{D}^{2})/2}}    (3)

where \mu_{S}, \sigma_{S} and \mu_{D}, \sigma_{D} are the means and standard deviations of the intra-class (same) and inter-class (different) Hamming distance distributions, respectively.


The higher the decidability, the greater the separation between the intra-class and inter-class distributions, which is the key to the iris recognition system. The decidability calculated from the results of each test is found to be near 5.0 or greater, which is a good result.
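Eq. (3) can be evaluated directly from the two sets of Hamming distance scores; a short sketch with hypothetical variable names is given below.

# Illustrative computation of decidability d' (Eq. (3)) from intra-class and
# inter-class Hamming distance samples.
import numpy as np

def decidability(intra_hd, inter_hd):
    mu_s, mu_d = np.mean(intra_hd), np.mean(inter_hd)
    sigma_s, sigma_d = np.std(intra_hd), np.std(inter_hd)
    return abs(mu_s - mu_d) / np.sqrt((sigma_s ** 2 + sigma_d ** 2) / 2.0)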

4.3.2 FAR, FRR and EER
The False Accept Rate and False Reject Rate are defined as

FAR = \frac{\text{Number of false acceptances}}{\text{Total number of impostor comparisons}} \times 100\%    (4)

FRR = \frac{\text{Number of false rejections}}{\text{Total number of genuine comparisons}} \times 100\%    (5)

The False Accept Rate and False Reject Rate are inversely related. An important way to judge the system is to measure its FAR at an FRR of 5%.

Fig. 14. Threshold vs. FAR and FRR

Fig. 15. Threshold vs. FAR and FRR (close-up view)

At threshold 0.39, the FAR is 1.932% and the FRR is 5.222%; at threshold 0.394, FAR = FRR = 4.8%, which is the Equal Error Rate (EER). The corresponding accuracy is 95.2%.
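FAR and FRR at a given threshold follow directly from Eqs. (4) and (5); the hypothetical helper below counts impostor scores that fall at or below the threshold and genuine scores that fall above it, expressed as percentages as in the paper.

# Illustrative FAR/FRR computation from genuine (intra-class) and impostor
# (inter-class) Hamming distance scores at a given decision threshold.
import numpy as np

def far_frr(genuine_hd, impostor_hd, threshold):
    genuine_hd = np.asarray(genuine_hd)
    impostor_hd = np.asarray(impostor_hd)
    far = 100.0 * np.count_nonzero(impostor_hd <= threshold) / impostor_hd.size
    frr = 100.0 * np.count_nonzero(genuine_hd > threshold) / genuine_hd.size
    return far, frr

# e.g. far, frr = far_frr(intra_scores, inter_scores, threshold=0.39)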
4.3.3 Rotation Variation Adaptation
A robust representation for iris pattern recognition must be invariant to changes in size, position and rotation. To compensate for rotation variation, the templates of the iris images are shifted 8 bits to both sides, left and right, and the Hamming distance is then taken.

Fig. 16. Density of Intra-Class and Inter-Class Distributions with (a) 0 shifts and (b) 8 shifts

Due to rotational inconsistencies, a significant number of templates are not aligned without shifting, so the common area between the distributions is large, which means a larger false rate. With 8 shifts the values become much more closely distributed around the mean and the common area is decreased. Without considering rotation variation, at a threshold value of 0.44 the FRR is found to be 5.397% and the FAR 33.609%. Considering rotation variation, at a threshold value of 0.39 the FRR is 5.222% and the FAR is 1.932%.

5 CONCLUSION
This paper proposes a new segmentation approach comprising image preprocessing, edge detection, noise filtering, iris localization, pupil detection, and removal of eyelids and eyelashes. For edge detection, Kovesi's [9] edge detection algorithm with the modification of Wildes [3] is used, so that gradients can be weighted to find edges in the horizontal and vertical directions, which is important for locating the iris region. As eyelids and eyelashes occlude the upper portion of the iris, gradients are biased vertically so that the circle is visible between the two eyelids. For pupil detection, gradients are weighted equally so that the pupil boundary is visible in both the horizontal and vertical directions. To make the circle detection easier, a median filter is used to remove extraneous data that is randomly present near the circle. For iris and pupil detection the circular Hough transform is used.

To make the algorithm faster, the image is scaled down to 40%, and to make it more efficient and accurate, pupil detection is applied only within the iris circle, because the pupil is always within the iris region. To remove the eyelids, the linear Hough transform is used to fit lines to the upper and lower eyelids, and the eyelashes are removed by a simple thresholding method.

Experimental results show that the segmentation accuracy is around 90%, which is better than Kovesi [9], where the segmentation accuracy was 83% for the same dataset (CASIA ver. 1). This paper also implements an iris recognition system using the new segmentation approach, and experimental results show that the FAR is 1.932% and the FRR is 5.222%, which is satisfactory.

There are still some issues that need to be considered. To make the system fully automated, an iris acquisition camera should be included rather than using a set of iris images from a database. Most of the computation time is spent performing the Hough transform and calculating Hamming distance values.

REFERENCES
[1] H. M. El-Bakry, "Human Iris Detection Using Fast Cooperative Modular Neural Nets," Proceedings of the International Joint Conference on Neural Networks (IJCNN '01), vol. 1, 2001, pp. 577-582.
[2] J. Daugman, "How Iris Recognition Works," Proceedings of the 2002 International Conference on Image Processing, vol. 1, 2002.
[3] R. Wildes, J. Asmuth, G. Green, S. Hsu, R. Kolczynski, J. Matey and S. McBride, "A Machine-Vision System for Iris Recognition," Machine Vision and Applications, vol. 9, 1996.
[4] W. Boles and B. Boashash, "A Human Identification Technique Using Images of the Iris and Wavelet Transform," IEEE Transactions on Signal Processing, vol. 46, 1998.
[5] S. Lim, K. Lee, O. Byeon and T. Kim, "Efficient Iris Recognition through Improvement of Feature Vector and Classifier," ETRI Journal, vol. 23, no. 2, Korea, 2001.
[6] S. Noh, K. Pae, C. Lee and J. Kim, "Multiresolution Independent Component Analysis for Iris Identification," The 2002 International Technical Conference on Circuits/Systems, Computers and Communications, Phuket, Thailand, 2002.
[7] CASIA (The Chinese Academy of Sciences Institute of Automation) Iris Image Database (version 1.0), http://www.cbsr.ia.ac.cn/IrisDatabase.htm.
[8] R. Gonzalez and R. Woods, Digital Image Processing, Addison-Wesley Publishing Company, 1992, p. 191.
[9] P. Kovesi, MATLAB Functions for Computer Vision and Image Processing: What Are Log-Gabor Filters?
[10] J. Canny, "A Computational Approach to Edge Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679-698, 1986.
[11] R. Boyle and R. Thomas, Computer Vision: A First Course, Blackwell Scientific Publications, 1988, pp. 32-34.
[12] A. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, 1989, ch. 9.
[13] H. G. Feichtinger and T. Strohmer, Gabor Analysis and Algorithms, Birkhäuser, 1998, ISBN 0817639594.

Md. Selim Al Mamun received his B.Sc. (Hons) and M.S. degrees from the Department of Computer Science & Engineering, University of Dhaka, Bangladesh. His research interests include Pattern Recognition, Image Processing, Artificial Intelligence and Bioinformatics.

S.M. Tareeq is working as an Associate Professor in the Department of Computer Science & Engineering, University of Dhaka, Bangladesh. He has published many international journal and conference papers and participated in many international conferences. His research interests include Artificial Intelligence, Fuzzy Logic, Pattern Recognition, Image Processing and Robotics.

Md. Hasanuzzaman is working as an Associate Professor in the Department of Computer Science & Engineering, University of Dhaka, Bangladesh. He has published many international journal and conference papers and participated in many international conferences. His research interests include Artificial Intelligence, Fuzzy Logic, Pattern Recognition, Image Processing and Robotics.
