
2015 International Conference on Computer Communication and Informatics (ICCCI-2015), Jan. 08 – 10, 2015, Coimbatore, INDIA

An Efficient Iris Segmentation Model Based on Eyelids and Eyelashes Detection in Iris Recognition System
Prajoy Podder1,*, Tanvir Zaman Khan2, Mamdudul Haque Khan3, M. Muktadir Rahman4,
Rafi Ahmed5 and Md. Saifur Rahman6
1,2,3,4,5 Department of ECE at Khulna University of Engineering & Technology
6 Department of EEE at Khulna University of Engineering & Technology
Khulna-9203, Bangladesh
[email protected], [email protected], [email protected], [email protected],
[email protected], [email protected]

Abstract— This paper presents an efficient noise reduction scheme, based on radial suppression, to remove localized high frequency information from the segmented iris region for personal authentication. Eyelashes and eyelids in the localized iris area are considered noisy information. The accuracy of an iris recognition system generally depends on accurate segmentation and noise reduction. The proposed method not only removes eyelashes and eyelids by suppressing localized frequency components through radial suppression but also detects the pupil and iris centres accurately and localizes the iris and pupil regions. Finally, this paper also presents a prototype of an automated iris recognition system for personal authentication. For iris feature extraction, a one dimensional Log-Gabor wavelet has been used, where the feature vector length has been reduced with little loss of information. It is also shown that the proposed detection model has a stable matching score. The proposed automated iris recognition system with maximum suppression of eyelashes has a lower equal error rate, which indicates its superior performance compared to other existing methods.

Keywords- eyelashes detection; edge detection; image acquisition; iris normalization; iris template; Hough transform

I. INTRODUCTION

In the modern age, security is always an important issue in sectors like banking, international airports, internet based marketing, etc. Accurate and reliable personal identification arrangements and biometrics have become an important technology for security in the modern advanced world. A biometric system offers automatic recognition and verification of an individual person based on some sort of distinctive features or characteristics. Several types of biometric systems are available, such as fingerprints, face recognition, voice recognition, hand geometry, handwriting, the retina and the iris. Most of the existing methods have limited capabilities in recognizing relatively complex features in realistic real-world situations. Iris recognition has been contemplated as one of the most trustworthy biometric technologies in recent years [1], [2]. The iris has distinctive features and is sufficiently complex to be used as a biometric signature [3]. This means that the probability of finding two people with identical iris patterns is almost zero [4]. This paper is arranged as follows. The next section familiarizes the basic concepts of biometric technology. In the third section the internal iris structure is conferred. Related works on iris recognition systems have been discussed in the fourth section. The proposed methods have been discussed with the necessary algorithms in the fifth section. The sixth and seventh sections illustrate the feature extraction and matching processes, whereas the experimental results are presented in the eighth section. Finally, conclusions are given in Section IX.

II. BIOMETRIC TECHNOLOGY

The word "bios" means life and "metric" means to measure. A biometric system normally measures physiological and behavioral characteristics.

A. Verification vs. Identification

The most common uses of biometric systems are authentication and verification [1]. Identification mode means a one-to-many match. On the other hand, verification mode means a one-to-one match.

B. Biometric error analysis

Two types of error mainly occur in a biometric system: false acceptance rate (FAR) and false rejection rate (FRR) [1], [3]. FAR happens when the biometric system authenticates an imposter. FRR occurs when the biometric system rejects a valid user. Iris recognition has a low false acceptance rate. The iris pattern is unique and stable.

III. STRUCTURE OF IRIS

The iris is a sensitive part comprising a number of layers. The epithelium layer makes the iris opaque because of its pigmented characteristics.

Figure 1. Iris structure (a), (b)

On the other hand, the stromal surface, containing blood vessels, pigment tissue and the two iris muscles, contracts the pupil [5]. There are two zones, i.e. the outer ciliary zone and the inner pupillary zone, which are divided by the collarette, which looks like a zigzag pattern. The eye image is shown in figure 1(a) and 1(b). Many differentiated and idiosyncratic features can be contained in its compound and multiplex arrangement, such as arching ligaments, furrows, ridges, crypts, rings, corona, freckles, etc. The features of the iris are random in nature.

IV. RELATED WORKS

The concept of an automated iris recognition system was first proposed by Flom and Safir in 1987. Daugman proposed complex valued wavelets to demodulate the texture phase structure information of the iris and an integro-differential operator for iris inner and outer boundary localization [2], [3]. Many researchers used wavelets for iris encoding, and some of them reported excellent performance on a diverse database of many images [8], [9]. In [4], zero crossings of the wavelet transform at multiple resolution levels were calculated. In [16], an iris recognition approach based on characterizing key local intensity variations was proposed. Wildes [10], Kong and Zhang [11], Tisse et al. [12], and Ma et al. [13] described automatic iris recognition systems for personal verification. All these algorithms are based on grey images, and color information was not used. In this paper, a new algorithm has been proposed for locating the iris inner boundary and outer boundary by modifying the existing algorithm so that the iris area can be segmented very smoothly with less detection of eyelids and eyelashes. Then, for feature extraction, Gabor wavelets have been applied. The encoded binary iris template that has been obtained gives a better result than the previous research work.

V. FLOW DIAGRAM OF THE PROPOSED METHOD

The proposed method is described in figure 3, where an iris template is generated by performing the stated necessary steps of iris identification and later matched against the stored database of iris templates of many persons. The iris identification is basically divided into the four steps shown in figure 2 (a minimal pipeline sketch follows the figure captions below):
1. Iris image acquisition
2. Iris image pre-processing: segmentation, normalization and enhancement
3. Feature extraction
4. Matching

Figure 2. Iris recognition system
Figure 3. Overview of the proposed system
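The Python sketch below outlines these four stages as a minimal pipeline skeleton. It is illustrative only: the helper names (localize_iris, normalize_iris, encode_iris, hamming_distance), the sample rates and the 0.35 decision threshold are assumptions made for this sketch, not values taken from the paper. The individual stages are sketched in the later sections.

import numpy as np

def localize_iris(eye_img: np.ndarray):
    """Step 2a: return the pupil circle and the iris circle (centre and radius each)."""
    raise NotImplementedError  # e.g. Canny edges followed by a circular Hough transform

def normalize_iris(eye_img, pupil_circle, iris_circle, M=240, N=20):
    """Step 2b: unwrap the annular iris region into an N x M polar rectangle."""
    raise NotImplementedError  # Daugman rubber sheet model (Section V)

def encode_iris(normalized_iris):
    """Step 3: filter and phase-quantize the normalized pattern into a bit template."""
    raise NotImplementedError  # e.g. 1-D Log-Gabor filtering (Section VI)

def hamming_distance(template_a, template_b):
    """Step 4: fraction of disagreeing bits between two templates."""
    return float(np.mean(template_a != template_b))

def recognize(eye_img, stored_templates, threshold=0.35):
    """Run acquisition -> pre-processing -> encoding -> matching for one probe image."""
    pupil, iris = localize_iris(eye_img)
    polar = normalize_iris(eye_img, pupil, iris)
    probe = encode_iris(polar)
    scores = {pid: hamming_distance(probe, t) for pid, t in stored_templates.items()}
    best_id, best_hd = min(scores.items(), key=lambda kv: kv[1])
    # accept the best match only if its Hamming distance is below the chosen threshold
    return (best_id, best_hd) if best_hd < threshold else (None, best_hd)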


Algorithm 1: Iris inner and outer boundary localization

Step 1. Coarse localization of the pupil centre of the eye;
Step 2. Choose a small block of the input image and extract edge information based on an edge detection operator like the Canny operator;
Step 3. Apply the Hough transform for pupil area localization, i.e. inner boundary localization;
Step 4. Extract edge information from a small image block based on the line's grey gradient value;
Step 5. The Radon transformation is used to localize the iris outer boundary;
Step 6. Apply the radial non-maxima suppression technique to detect and remove the eyelids and eyelashes, which increases the segmentation accuracy because the radial edges are always noise (such as edges of eyelashes and eyelids).

Algorithm 2: Radial non-maxima suppression

Input: A (the modulus image) and (xc, yc) (the approximate center point of the edge).
Output: m (the modulus image after applying radial non-maxima suppression).
1. for all points Pj of the modulus image A do
2.   determine the radial direction of point Pj, i.e. the direction from the approximate center point to Pj;
3.   determine two points X1 and X2 in the eight-pixel neighborhood of Pj along the radial direction;
4.   if A(Pj) > A(X1) and A(Pj) > A(X2) then m(Pj) ← A(Pj)
5.   else m(Pj) ← 0
6.   end if
7. end for
8. return m.
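A minimal Python sketch of Algorithm 2 is given below. The function name and the use of NumPy are assumptions made for illustration; the loop follows the pseudocode above, stepping one pixel outward and one pixel inward along the radial direction from the approximate centre.

import numpy as np

def radial_nonmax_suppression(A: np.ndarray, xc: float, yc: float) -> np.ndarray:
    """Sketch of Algorithm 2.

    A        : edge-modulus image (e.g. gradient magnitude), indexed A[y, x]
    (xc, yc) : approximate centre of the circular edge (column, row)
    returns  : modulus image m with radial non-maxima set to zero
    """
    rows, cols = A.shape
    m = np.zeros_like(A)
    for y in range(1, rows - 1):              # skip the one-pixel border
        for x in range(1, cols - 1):
            dx, dy = x - xc, y - yc           # radial direction: centre -> Pj
            norm = np.hypot(dx, dy)
            if norm == 0:
                continue
            # step to the two 8-neighbours X1, X2 along the radial direction
            sx = int(round(dx / norm))
            sy = int(round(dy / norm))
            x1, y1 = x + sx, y + sy           # outward neighbour X1
            x2, y2 = x - sx, y - sy           # inward neighbour X2
            if A[y, x] > A[y1, x1] and A[y, x] > A[y2, x2]:
                m[y, x] = A[y, x]             # keep values that are maxima along the radial direction
            # otherwise m[y, x] stays 0 (suppressed)
    return m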

A. Iris image acquisition

A good and clear image eliminates the process of noise removal as well as helps in avoiding errors in calculation. A digital camera with good resolution or an IR sensor can be used. Here, the CASIA [8] iris image database of different versions has been used for performing our tasks, because images of this database [13] do not contain specular reflections.

B. Iris image pre-processing

The acquired image contains irrelevant parts like the eyelids, eyelashes, pupil, etc. They should be removed. This stage is composed of two steps: iris localization/segmentation and normalization [14].

C. Iris localization

The purpose of iris localization is to localize the part of the eye image that corresponds to the iris. The iris region can be estimated by two circles. One is at the iris/sclera boundary, which can be called the outer boundary, and the other is at the iris/pupil boundary. The upper part of the iris area is mostly occluded by the eyelashes and eyelids.

D. Iris Normalization

After successfully segmenting the iris region, the normalization process will produce iris regions with constant dimensions. The Daugman rubber sheet model can be used for the iris normalization process. The centre of the pupil is considered as the reference point; radial vectors pass through the iris area, as illustrated in figure 4. The number of radial lines around the iris region is called the angular resolution. Since the pupil is non-concentric with the iris, a remapping formula is required to rearrange points depending on the direction around the circle.

Figure 4 specifies the Daugman rubber sheet model applied to normalize the iris region. The whole iris region is designated by the grey values of its pixels. This information can be determined from the coordinates of the inner boundary and the outer boundary. This model rescales each individual point inside the iris region to a pair of polar coordinates (r, θ), where r lies in the interval [0, 1] and θ is an angle in the range of 0 to 360 degrees.

If s(x,y) is an iris image represented in Cartesian coordinates and s(r,θ) is its representation in polar coordinates, and if (xi,yi) and (xo,yo) are the corresponding points on the inner boundary and the outer boundary in Cartesian coordinates respectively, then the rubber sheet mapping is

x(r,θ) = (1 − r)·xi(θ) + r·xo(θ)
y(r,θ) = (1 − r)·yi(θ) + r·yo(θ)

In the above equations, the angle is sampled as θ_p = 2πp/M for p = 1, 2, …, M and the radius over N steps, where M and N are the sample rates along the angular and radial directions respectively. The normalization algorithm can be described as follows:

Step 1: Obtain the parameters (xi,yi,ri) and (xo,yo,ro) based on iris boundary localization of the iris image s(x,y), where the subscripts i and o denote the inner and outer boundary.
Step 2: The distance between the pupil centre and the iris centre is calculated.
Step 3: The connection direction angle is also calculated.

Step 4: Select the centre of the pupil as the pole. In polar coordinates about this pole, the inner boundary and the iris outer boundary are expressed as functions of the angle θ.
Step 5: Every pixel's grey value of the normalized iris is obtained from the grey values at the corresponding (x,y) positions, applying the rubber sheet mapping equations given above.

Figure 4. Daugman rubber sheet model
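The normalization procedure can be summarized in the following Python sketch, a minimal implementation of the rubber sheet remapping described above. It assumes circular inner and outer boundaries; the default sample rates M and N and the nearest-pixel sampling are illustrative choices, not parameters reported in the paper.

import numpy as np

def rubber_sheet_normalize(s, xi, yi, ri, xo, yo, ro, M=240, N=20):
    """Map the annular iris region of grey image s[y, x] to an N x M polar rectangle.

    (xi, yi, ri) : pupil (inner boundary) centre and radius
    (xo, yo, ro) : iris (outer boundary) centre and radius
    M, N         : angular and radial sample rates (assumed values)
    """
    rows, cols = s.shape
    out = np.zeros((N, M), dtype=s.dtype)
    for p in range(M):
        theta = 2.0 * np.pi * p / M
        # boundary points along this angle on the inner and outer circles
        x_in = xi + ri * np.cos(theta)
        y_in = yi + ri * np.sin(theta)
        x_out = xo + ro * np.cos(theta)
        y_out = yo + ro * np.sin(theta)
        for q in range(N):
            r = (q + 0.5) / N                       # r in (0, 1)
            # linear interpolation between the two boundaries (rubber sheet model)
            x = (1.0 - r) * x_in + r * x_out
            y = (1.0 - r) * y_in + r * y_out
            xx = min(max(int(round(x)), 0), cols - 1)   # clamp to image bounds
            yy = min(max(int(round(y)), 0), rows - 1)
            out[q, p] = s[yy, xx]                   # nearest-pixel grey value
    return out

Because the interpolation is taken between the two boundary circles separately for every angle, the mapping remains valid when the pupil and iris circles are not concentric.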

VI. FEATURE EXTRACTION / ENCODING

The features of one person's iris are not the same as those of another person's iris. In order to recognize an individual person accurately, the discriminating features present in the iris region must be extracted. Only the important features of the iris must be encoded so that comparisons between iris templates can be made [15], [16].

A matching metric is normally generated in the feature encoding process. It gives a measure of similarity between two iris templates. There are two types of class comparisons. Intra-class comparisons are made when comparing iris codes produced from the same eye, and the metric should give one range of values for them. On the other hand, inter-class comparisons, made when comparing iris codes created from different eyes, should give another range of values. These two classes are very important for evaluating the feature extraction process. After the encoding technique is applied to the iris pattern, the feature vector takes values between -1 and 1. The conversion of the vector into binary form consists of converting these values to 0 and 1. The iris recognition scheme does not use the color of the iris. The procedure of converting a grey level vector signal into a black and white vector signal is called binarization. All values above the threshold point are set to 1 and all values below that point are set to 0. Figure 5 shows the feature encoding process using a Gabor filter and the phase quantization method.

Figure 5. Feature Encoding process
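A minimal sketch of one way to implement this encoding is shown below. It assumes a 1-D Log-Gabor filter applied row-wise to the normalized pattern, followed by phase quantization of the complex response into two bits per sample; the wavelength and bandwidth values are illustrative assumptions, not parameters reported in the paper.

import numpy as np

def log_gabor_encode(polar, wavelength=18.0, sigma_on_f=0.5):
    """Encode an N x M normalized iris pattern into a binary template.

    polar      : N x M normalized iris pattern (rows = radial tracks)
    wavelength : centre wavelength of the Log-Gabor filter in pixels (assumed)
    sigma_on_f : bandwidth parameter sigma/f0 of the filter (assumed)
    returns    : boolean template of shape (N, M, 2), two phase bits per sample
    """
    n_rows, n_cols = polar.shape
    f = np.fft.fftfreq(n_cols)                     # signed sample frequencies
    f0 = 1.0 / wavelength
    gabor = np.zeros(n_cols)
    pos = f > 0                                    # Log-Gabor is defined for f > 0 only
    gabor[pos] = np.exp(-(np.log(f[pos] / f0) ** 2) /
                        (2.0 * np.log(sigma_on_f) ** 2))
    template = np.zeros((n_rows, n_cols, 2), dtype=bool)
    for i in range(n_rows):
        row = polar[i].astype(float)
        row -= row.mean()                          # remove the DC component before filtering
        response = np.fft.ifft(np.fft.fft(row) * gabor)   # complex filter output
        # phase quantization: one bit for the real part, one for the imaginary part
        template[i, :, 0] = response.real > 0
        template[i, :, 1] = response.imag > 0
    return template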
VII. MATCHING

In this method, only significant bits are used in computing the Hamming distance between two iris templates. When taking the value of the Hamming distance, only those bits in the normalized iris patterns that correspond to '0' bits in the noise masks of both iris templates are used in the calculation. If A and B are two bitwise iris templates, the Hamming distance (HD) can be stated as the normalized summation of the exclusive-OR between A and B, where m defines the total number of bits that constitute the template:

HD = (1/m) Σ_{j=1}^{m} A_j ⊕ B_j

The matching score is then calculated from an equation in which i and j denote the two irises being compared and S is a set of small regions.

After the comparison of two iris templates, a Hamming distance close to 0 means that the irises are the same. When the value of the Hamming distance is 1, the irises are totally different. Figure 6 shows that the value of HD for a matched condition is 0.2455.
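A minimal Python sketch of this masked Hamming distance is given below, assuming boolean templates and noise masks of identical shape and using the convention from the text that a '0' mask entry marks a usable bit. As one common variant, this sketch normalizes over the usable bits rather than over the full template length; that choice is an assumption of the sketch.

import numpy as np

def masked_hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits over the bits that are clean in both templates.

    code_a, code_b : boolean iris templates of identical shape
    mask_a, mask_b : noise masks of the same shape; 0 marks a usable bit,
                     1 marks an eyelid/eyelash/reflection bit
    """
    usable = (mask_a == 0) & (mask_b == 0)       # bits usable in both templates
    m = np.count_nonzero(usable)
    if m == 0:
        return 1.0                               # nothing comparable: treat as a non-match
    disagreements = np.count_nonzero(np.logical_xor(code_a, code_b) & usable)
    return disagreements / m

# A small HD indicates the same iris; a large HD indicates different irises.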


VIII. EXPERIMENTAL RESULTS

Figure 7(a) shows input eye images taken from the iris database. Eye images are also indicated by their IDs. When an eye image is captured and given to the program to process, it decides the matched or non-matched condition. After taking the eye image (normally a grey level image is used), it is necessary to find the appropriate centre of that eye. Figure 7(b) indicates the inner boundary collarette region detection and the circular Hough transform operation used to find the centre coordinates of the iris and pupil regions respectively. Black regions shown in figure 7(b) denote detected eyelid and eyelash regions. Sometimes some lighter eyelashes may not be detected, but these undetected areas are very minor when compared with the dimension of the iris area. The proposed system also isolates the specular reflections that are present in some eye images. Figure 7(c) shows the iris segmentation process. Figure 7(d) shows the unwrapped normalized iris image.

Figure 6. Hamming distance of matched conditions
Figure 7. Illustration of the iris segmentation and normalization process: (a) eye image; (b) iris boundary localization and eyelash suppression; (c) iris region segmentation into equal tracks; (d) unwrapping the normalized iris

Table I compares the success rate of finding the inner and outer boundary of the iris, with eyelash removal, of the proposed method against previously implemented methods. Histogram equalization is a method that improves the contrast as well as the intensity of images, and it has been established as one of the most important methods in image enhancement.

TABLE I. SUCCESS RATE OF FINDING THE BOUNDARIES AND REMOVING THE EYELASH

Success rate       Previous method    Proposed method
Inner boundary     98.10%             98.9%
Outer boundary     90.52%             92.8%
Eyelash removal    60.52%             78.42%

Figure 8. Illustration of the image enhancement process
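As a minimal sketch of this enhancement step, the following Python function applies plain histogram equalization to the unwrapped grey-level iris image. The 256-level assumption and the lookup-table formulation are illustrative choices, not details taken from the paper.

import numpy as np

def histogram_equalize(img, levels=256):
    """Histogram equalization of a 2-D uint8 grey image (e.g. the unwrapped iris)."""
    hist = np.bincount(img.ravel(), minlength=levels)        # grey-level histogram
    cdf = hist.cumsum().astype(float)                         # cumulative distribution
    cdf_min = cdf[np.nonzero(cdf)][0]                         # first non-zero bin
    denom = cdf[-1] - cdf_min
    if denom == 0:
        return img.copy()                                     # constant image: nothing to equalize
    # map each grey level so that the output histogram is approximately flat
    lut = np.round((cdf - cdf_min) / denom * (levels - 1))
    lut = np.clip(lut, 0, levels - 1).astype(np.uint8)
    return lut[img]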


Figure 9. Bit representation of the iris template (after applying the Gabor filter)

Figure 8 describes the image enhancement process, in which the normalized unwrapped iris image is enhanced by applying the histogram equalization technique. Figure 9 depicts the binarization technique using the Gabor wavelet. Table II shows the Hamming distance values obtained by comparing eye images belonging to the same person, i.e. intra-class comparisons, using our proposed method.

TABLE II. THE HD OF COMPARING EYE IMAGES BELONGING TO THE SAME PERSON

No.   Image 1   Image 2   HD
1     01        02        0.3854
2     01        03        0.3007
3     02        03        0.405
4     01        04        0.4284
5     02        04        0.44512
6     02        05        0.46
7     02        06        0.41654
8     03        06        0.43214

Figure 10. FAR and FRR for different threshold values

Figure 10 shows the percentage of FAR and FRR for different threshold values on the CASIA database. A lower threshold value decreases FAR and increases FRR. Table III shows that the matching score stays stable, in about the 0.050-0.060 range, with our proposed method, no matter how much the percentage of undesirable eyelashes covering the iris increases. In one special case, the score of M16 with the proposed detection model is almost half of that of M16 using only the traditional model. The experimental results demonstrate that the proposed detection model is needed for accurate iris feature extraction and higher recognition accuracy. When the false acceptance and false rejection rates are equal, the error is referred to as the equal error rate (EER). Table IV shows that our proposed method has a lower EER than the other methods.

TABLE III. COMPARISON OF RECOGNITION SCORE FROM THE TRADITIONAL IRIS SEGMENT MODEL WITH AND WITHOUT OUR PROPOSED DETECTION MODEL

Match up   Matching score of previous iris segment model   Matching score of proposed detection model
M12        0.050                                           0.050
M13        0.064                                           0.061
M14        0.060                                           0.057
M15        0.075                                           0.053
M16        0.098                                           0.054

TABLE IV. COMPARISON OF EER

Methods             Equal error rate (%)
                    CASIA Iris Interval    CASIA Iris Lamp
Bouraoui et al.     1.74                   18.21
Ma et al.           2.62                   —
Daugman             1.80                   3.47
Masek and Kovesi    0.584                  —
Daugman             —                      1.05
Wildes et al.       —                      0.86
He et al.           —                      0.75
Proposed            0.54                   0.82
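The FAR/FRR trade-off behind Figure 10 and Table IV can be computed from intra-class and inter-class Hamming distances, as in the minimal Python sketch below. The function name, the threshold grid and the impostor scores in the usage comment are illustrative assumptions; only the intra-class values in the comment are taken from Table II.

import numpy as np

def far_frr_curves(genuine_hd, impostor_hd, thresholds):
    """Compute FAR and FRR over a grid of decision thresholds on the Hamming distance.

    genuine_hd  : Hamming distances of intra-class (same-eye) comparisons
    impostor_hd : Hamming distances of inter-class (different-eye) comparisons
    thresholds  : candidate decision thresholds
    returns     : (far, frr, threshold at which FAR and FRR are closest, EER estimate)
    """
    genuine_hd = np.asarray(genuine_hd, dtype=float)
    impostor_hd = np.asarray(impostor_hd, dtype=float)
    far = np.array([np.mean(impostor_hd <= t) for t in thresholds])   # impostors accepted
    frr = np.array([np.mean(genuine_hd > t) for t in thresholds])     # genuine users rejected
    eer_index = int(np.argmin(np.abs(far - frr)))                     # point where FAR is closest to FRR
    return far, frr, thresholds[eer_index], 0.5 * (far[eer_index] + frr[eer_index])

# Usage example with the intra-class scores of Table II and hypothetical impostor scores:
# far, frr, t_eer, eer = far_frr_curves([0.3854, 0.3007, 0.405, 0.4284],
#                                       [0.47, 0.49, 0.51],
#                                       np.linspace(0.0, 0.6, 61))

Lowering the threshold reduces FAR and raises FRR, which is the behaviour described for Figure 10.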
IX. CONCLUSIONS

Eyelash and eyelid elimination is always a significant step in an iris recognition system. Due to the elliptical shape of the iris, unwanted eyelashes can be present in the segmented iris region. They are considered noisy information, which is undesirable in iris feature extraction, as they do not carry any significant textural information. In order to extract unique features of the iris, the normalized iris must be minimally affected by eyelashes and eyelids. The algorithm presented in this paper is able to remove a significant portion of this unwanted information. The proposed scheme not only eradicates the presence of noisy details but is also capable of determining the iris and pupil boundaries accurately. The phase quantization of the output of the Gabor wavelet (real and imaginary parts) represents the result as a binary template known as the iris code. This algorithm was tested on CASIA V1.0 and CASIA V3.0, and the resultant FAR and FRR are 0.001% and 37.880% respectively. The higher value of FRR is acceptable for high-security applications, where a very low FAR is essential. The achieved EER is 0.54 for the CASIA Iris Interval database and 0.82 for the CASIA Iris Lamp database.

REFERENCES

[1] R. P. Wildes, “Iris recognition: an emerging biometric technology,” Proceedings of the IEEE, Vol. 85, No. 9, pp. 1348-1363, 1997.
[2] J. Daugman, “How iris recognition works,” Proceedings of International Conference on Image Processing, Vol. 1, pp. 33-36, 2002.
[3] J. Daugman, “Biometric personal identification system based on iris analysis,” US Patent No. 5,291,560, 1994.


[4] W. Boles and B. Boashash, “A human identification technique using images of the iris and wavelet transform,” IEEE Transactions on Signal Processing, Vol. 46, No. 4, pp. 1185-1188, 1998.
[5] Haeng-kon Kim, Tai-hoon Kim and Akingbehin Kium, “Advances in Security Technology,” International Conference, SecTech 2008, and Its Special Sessions, Sanya, Hainan Island, China, December 13-15, 2008.
[6] Y. Zhu, T. Tan and Y. Wang, “Biometric personal identification based on iris patterns,” Proceedings of the 15th International Conference on Pattern Recognition, Spain, Vol. 2, pp. 801-804, 2000.
[7] Chinese Academy of Sciences – Institute of Automation, “Database of 756 Greyscale Eye Images,” http://www.sinobiometrics.com, Version 1.0, 2003.
[8] M. Jafar, M. H. Ali and Aboul Ella Hassanien, “An Iris Recognition System to Enhance E-security Environment Based on Wavelet Theory,” AMO - Advanced Modeling and Optimization, Vol. 5, No. 2, pp. 93-104, 2003.
[9] S. Liu and M. Silverman, “A practical guide to biometric security technology,” IT Professional, Vol. 3, pp. 27-32, 2001.
[10] R. Wildes, J. Asmuth, G. Green, S. Hsu, R. Kolczynski, J. Matey and S. McBride, “A system for automated iris recognition,” IEEE Workshop on Applications of Computer Vision, Sarasota, FL, pp. 121-128, 1994.
[11] W. Kong and D. Zhang, “Accurate iris segmentation based on novel reflection and eyelash detection model,” Proceedings of International Symposium on Intelligent Multimedia, Video and Speech Processing, Hong Kong, 2001.
[12] C. Tisse, L. Martin, L. Torres and M. Robert, “Person identification technique using human iris recognition,” International Conference on Vision Interface, Canada, 2002.
[13] Li Ma, Tieniu Tan, Yunhong Wang and Dexin Zhang, “Personal identification based on iris texture analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 25, No. 12, pp. 1519-1533, 2003.
[14] Hanho Sung, Jaekyung Lim, Ji-hyun Park and Yillbyung Lee, “Iris Recognition Using Collarette Boundary Localization,” Proceedings of the 17th International Conference on Pattern Recognition, Vol. 4, pp. 857-860, 2004.
[15] Mark S. Nixon and Alberto S. Aguado, “Feature Extraction and Image Processing,” Academic Press, 2008.
[16] Li Ma, Tieniu Tan, Yunhong Wang and Dexin Zhang, “Efficient iris recognition by characterizing key local variations,” IEEE Transactions on Image Processing, Vol. 13, No. 6, pp. 739-750, 2004.

