
JOURNAL OF COMPUTER SCIENCE AND ENGINEERING, VOLUME 13, ISSUE 1, MAY 2012

Feature Extraction of an Iris for Pattern Recognition


Sulochana B. Sonkamble and Dr. Ravindra C. Thool
Abstract — In this paper, we propose a new approach for feature extraction of an iris. Feature extraction is the most important step in improving the accuracy of a biometric-based person identification system. Iris-pattern-based systems have recently shown very high accuracy in verifying an individual's identity. This paper presents a new approach to extracting the region of interest of an iris and its features. We selected 100 iris images from the CASIA database and 100 images from our own database, captured from 50 different male and female volunteers using the system setup in our research laboratory. The proposed system consists of five modules: iris localization, segmentation using gradient vectors, normalization, feature extraction, and matching. The system uses the Canny edge detection algorithm and the circular Hough transform to detect the inner and outer boundaries for iris localization. An efficient iris segmentation technique using gradient vectors is used to extract the iris region. The extracted iris region is normalized into a rectangular block of fixed dimensions. The Gabor wavelet transform is applied to the data set to obtain the iris feature vectors used for recognition. The extracted features are stored in a vector that contains the biometric information of an individual. The Euclidean distance method is proposed for comparing iris patterns. Two iris templates are used for testing. The performance of the system can be increased by training on more feature vectors of extracted iris images.

Index Terms — Biometric Identification, Localization, Segmentation, Feature Extraction, Pattern Recognition.

1 INTRODUCTION

Nowadays the world is becoming more dependent on computer-based systems; hence, computer security has arisen to protect information using passwords. The security and authentication of individuals' information are necessary in many different areas of our lives, and most people carry their identity as ID cards. IDs and passwords can be stolen or forgotten. Biometric identification provides a valid alternative to traditional authentication mechanisms such as ID cards and passwords, while overcoming many of the shortfalls of these methods; it makes it possible to identify an individual based on "who I am" rather than "which identity I possess". Biometrics is the only high-confidence method for recognizing a person using features of the individual instead of his or her knowledge, such as a password, or belongings, such as an ID card [1]. Biometrics has played an important role in recent years in developing security systems. Iris-biometric systems have proved the best among all modalities: fingerprint, palm, face, and voice [3]. Iris recognition is becoming a fundamental component of the computerized world, with application areas including national ID cards, banking, passports, credit cards, smart cards, PINs, access control, and network security. Iris-pattern-based systems have recently shown very high accuracy in verifying an individual's identity.

Sulochana Balwant Sonkamble is with the Information Technology Department at Marathwada Mitra Mandal's College of Engineering, Pune, M.S., India. Dr. Ravindra C. Thool is with the Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, M.S., India.

The human iris, located between the pupil and the sclera, has a complex pattern. This iris pattern is unique to each person and to each eye, and clinical evidence shows that it remains stable over a person's lifetime. The body of this paper details the steps of iris recognition, including localization, segmentation, normalization, feature extraction, and classification [4]. The performance of an iris recognition system depends on good image quality and extremely clear iris texture details. We are thankful to CASIA for providing a carefully designed iris image database of sufficient size for our experiments. CASIA-IrisV3 contains a total of 22,051 iris images from more than 700 subjects and 1,500 eyes. All iris images are 8-bit gray-level JPEG files collected under infrared illumination. In our experiment we selected five different iris images from CASIA-IrisV3-Interval, which were captured by a self-developed iris camera in two sessions with at least a one-month interval [2]. Figure 1 highlights the parts of an iris image. In this paper, we present an efficient iris segmentation method using gradient vectors for high-confidence visual recognition of a person. Section 2 describes the iris localization and segmentation method using gradient vectors. Section 3 describes the iris normalization process. Section 4 describes feature extraction. Section 5 presents the experimental results. Section 6 gives the conclusion; references and biographies are given at the end of the paper.

2012 JCSE www.journalcse.co.uk


point. For a specific curve f(x, a) = 0, with parameter vector a, form an array A(a), initially set to zero; this array is termed the accumulator array [18]. For each edge pixel x, compute all a satisfying equations (2) and (3) and increment the corresponding accumulator array entries by one. After each edge pixel x has been considered, local maxima in the array A correspond to curves of f in the image.

f(x, a) = 0    (2)

(df/dx)(x, a) = 0    (3)
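To make the accumulator idea concrete, the voting scheme for the circular case can be sketched as below. This is an illustrative pure-Python sketch, not the paper's MATLAB implementation; the 10-degree angle step and integer bin granularity are arbitrary example choices.

```python
import math

def circular_hough(edge_points, radii, height, width):
    """Vote in (xc, yc, r) parameter space: each edge point (x, y) adds one
    vote to every circle center it could lie on, for each candidate radius.
    The accumulator entry with the most votes gives the detected circle."""
    acc = {}
    for (x, y) in edge_points:
        for r in radii:
            for t in range(0, 360, 10):          # sample the locus circle
                xc = int(round(x - r * math.cos(math.radians(t))))
                yc = int(round(y - r * math.sin(math.radians(t))))
                if 0 <= xc < width and 0 <= yc < height:
                    acc[(xc, yc, r)] = acc.get((xc, yc, r), 0) + 1
    return max(acc, key=acc.get)   # parameters with the most votes
```

For edge points sampled from a circle of radius 5 centered at (10, 10), the accumulator maximum recovers approximately those parameters.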

Fig. 1. Sample Iris Image.

2 IRIS LOCALIZATION AND SEGMENTATION


The main purpose of this process is to locate the iris in the image and isolate the region of interest from the rest of the eye image for further processing. The iris image shown in figure 1 contains parts such as the pupil, iris, sclera, and eyelids [3], so the captured iris image cannot be used directly. Preprocessing is performed after image capture to isolate the region of interest. The iris region can be delimited by two circles: an outer one for the iris-sclera boundary and an inner one for the iris-pupil boundary [9], [10]. The eye images are used to find the iris, with precise localization of its boundaries using the center coordinates and radii of both the iris and the pupil. The pupil center and iris center are not always the same, and the pupil radius can range from 0.1 to 0.9 of the iris radius. The pupil circle must be separated from the iris to obtain the region of interest. For the image I(x, y), any circle with center coordinates (x_o, y_o) and radius r is defined by equation (1). The Hough transform is a standard computer vision algorithm that can be used to find shapes in an image. To extract the features of the iris image, it is essential to find simple shapes such as straight lines, circles, and ellipses. To search for the desired shapes, one must be able to detect a group of pixels that lie on a straight line or a smooth curve, which can be achieved using the Hough transform [17]. The circular Hough transform is applied to find the center coordinates (x_o, y_o) and radius r of the circular pupil and iris regions, as shown in equation (1).
x_o² + y_o² = r²    (1)

Edges characterize the boundaries of the pupil and iris in an image; they are areas of high intensity contrast from one pixel to the next. There are many ways to perform edge detection; we use the basic gradient and Laplacian methods. The gradient method detects edges by considering the maxima and minima in the first derivative of the image. The Laplacian method searches for zero crossings in the second derivative of the image to find edges. An edge has the one-dimensional shape of a ramp, and calculating the derivative of the image highlights its location. A gradient-based algorithm is developed to overcome these limitations and increase the accuracy of iris image segmentation compared with existing methods. Image segmentation using gradient vectors is a type of edge detection called the gradient method. First, a Gaussian low-pass filter is applied to the iris image to obtain a smoothed image for the Sobel method. A pixel location with a higher intensity value is called an edge if the gradient value reaches some threshold. Where the first derivative is at a maximum, the second derivative is zero; therefore, in the second derivative, the edge is located at a zero crossing, and this method is called the Laplacian. After that, the image is convolved with two 3x3 masks, called the horizontal and vertical masks, which give the horizontal and vertical gradient values of the image in the x (columns) and y (rows) directions respectively [2]. The gradient magnitude is calculated from these horizontal and vertical values by taking the square root of the sum of their squares. The iris image is smoothed with a Gaussian low-pass filter to reduce noise and unwanted data using equation (4); equation (5) gives the value of G.


The parameter space of the circular Hough transform is three-dimensional, (x_o, y_o, r). Each point in the image gives rise to a locus of voting points in the 3D Hough space, which forms a surface. For a given radius r, the locus of possible circle centers is itself a circle of radius r centered at (x_o, y_o). Therefore, in the 3D space, the locus of possible parameter values can be used to improve the efficiency of the edge-strength voting in Hough space. The three parameters (x_o, y_o, r) are arranged so that the resulting loci pass through the same parameter space; therefore, many circles will intersect at a common

g(x, y) = G(x, y) ∗ f(x, y)    (4)

where

G(x, y) = (1 / (2πσ²)) e^(−(x² + y²) / (2σ²))    (5)
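The smoothing step of equations (4) and (5) can be sketched as follows. This is an illustrative pure-Python version (the paper's experiments use MATLAB); the kernel size and sigma are arbitrary example values.

```python
import math

def gaussian_kernel(size=5, sigma=1.0):
    """Sampled 2-D Gaussian, exp(-(x^2 + y^2) / (2*sigma^2)), normalized so
    the weights sum to 1 (the 1/(2*pi*sigma^2) factor cancels out)."""
    half = size // 2
    k = [[math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
          for x in range(-half, half + 1)]
         for y in range(-half, half + 1)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]

def convolve2d(img, kernel):
    """Valid-mode 2-D convolution of a grayscale image (list of lists)."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(img), len(img[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = 0.0
            for u in range(kh):
                for v in range(kw):
                    acc += kernel[u][v] * img[i + u][j + v]
            row.append(acc)
        out.append(row)
    return out
```

Because the kernel weights sum to 1, smoothing a region of constant intensity leaves its values unchanged, which is a convenient sanity check.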


Fig. 2. (a) Original Image (b) Horizontal Edge Map (c) Vertical Edge Map (d) Edge Map.

The gradient of g(x, y) is computed using the Sobel operator. The Sobel operator performs a 2-D spatial gradient measurement on an iris image; the approximate absolute gradient magnitude at each point is then calculated. The Sobel operator uses a pair of 3x3 convolution masks, one to calculate the gradient in the x-direction and the other in the y-direction [17]. The gradient magnitude is calculated using equation (7) and approximated by the sum in equation (6); the angle theta is calculated using equation (8).

The elemental regions are then iteratively combined, forming a hierarchy of sub-regions. The neighboring sub-regions with the smallest difference in average separating edge gradient and average intensity are combined first, and the average characteristics are recalculated for the new sub-region. For the specific application of iris segmentation, the combination rule includes functions of edge scale, absolute region intensity, location within the image, and shape characteristics of the bounding edges [2]. The iris images are collected from the CASIA (Institute of Automation, Chinese Academy of Sciences) database, which contains a total of 22,051 iris images from more than 700 subjects and 1,500 eyes [2]. These images are used for segmentation, implemented using MATLAB functions. The iris radius ranges from 90 to 150 pixels and the pupil radius from 28 to 75 pixels [9]. The circular Hough transform was performed within the iris region, after which six parameters are stored: the x-y center and radius of the iris, and the x-y center and radius of the pupil. These parameters are stored in a vector as the shape features of an iris image. The top and bottom eyelids are isolated using the linear Hough transform. For isolating eyelashes, a simple thresholding technique was used, as they are dark compared with the rest of the iris region. The segmentation results are shown in figure 2, which shows successful segmentation of most CASIA images.
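The eyelash thresholding mentioned above can be sketched as below. The threshold value here is a hypothetical placeholder, since the paper does not state the gray level it uses.

```python
def mask_eyelashes(img, threshold=60):
    """Eyelashes are darker than the surrounding iris, so any pixel whose
    gray level falls below a fixed threshold is flagged as occluded (True).
    The default threshold of 60 is an assumed example value."""
    return [[pixel < threshold for pixel in row] for row in img]
```

In practice the flagged pixels would be excluded from normalization and matching, since they carry eyelash rather than iris texture.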

3 IRIS NORMALIZATION
Once the iris is isolated from the rest of the eye region, it is transformed into a rectangle of fixed dimensions to allow comparison. It is necessary to convert it from radial to polar form, a step known as normalization. The Cartesian-to-polar transform is based on Daugman's rubber sheet model. Each point of the iris image is mapped to a pair of polar coordinates (r, θ), where the radius r ∈ [0, 1] and the angle θ ∈ [0, 2π]. Regions with high occlusion are not considered, and the amount of occlusion-free area can be used as a quality measure. The concentric iris image is unwrapped as shown in figure 4. The diameters of the pupil and iris are not constant across images, so it is necessary to normalize the distance between pupil and iris using the transform equations given in (10) and (11). The mapping of the concentric iris region from (x, y) coordinates to the normalized polar representation is given by equation (9).

G = |Gx| + |Gy|    (6)

M = √(Gx² + Gy²)    (7)

θ = tan⁻¹(Gy / Gx)    (8)
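Equations (6) through (8) can be illustrated with a small sketch that computes the Sobel response at a single pixel. This is a pure-Python illustration, not the paper's MATLAB code.

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient mask
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient mask

def sobel_at(img, i, j):
    """Gradient estimates Gx and Gy at pixel (i, j), plus the exact
    magnitude (7), the absolute-sum approximation (6), and the angle (8)."""
    gx = gy = 0.0
    for u in range(3):
        for v in range(3):
            p = img[i + u - 1][j + v - 1]
            gx += SOBEL_X[u][v] * p
            gy += SOBEL_Y[u][v] * p
    mag = math.hypot(gx, gy)              # equation (7)
    approx = abs(gx) + abs(gy)            # equation (6)
    theta = math.atan2(gy, gx)            # equation (8)
    return gx, gy, mag, approx, theta
```

On a vertical step edge the vertical-mask response is zero and the gradient direction points horizontally across the edge, as expected.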

The highest gradient values in the image fall on edges. The approach uses an edge detection algorithm to locate the edges in the iris images. All resulting edges are linked to form a segmented image, and the region of interest is extracted. The edge detection algorithm locates and follows edge segments, across the image and across scales, using a large set of oriented Difference of Gaussian (DoG) filters to estimate derivative directions and maxima. Maxima are estimated by zero-crossings of second-order derivative estimates [18]. The derived edges are linked by extending dangling edges in the direction of maximal gradient until an edge intersection is reached. Canny edge detection is used to create an edge map, and the circular Hough transform is implemented to find the inner and outer circles. The results of the gradient-based algorithm are shown in figure 2. Each elemental region enclosed by edges is then characterized by the average value of its central area and the average gradient between adjacent regions.

I(x(r, θ), y(r, θ)) → I(r, θ)    (9)

where I(x, y) is the iris image, (x, y) are the original Cartesian coordinates, and (r, θ) are the corresponding polar coordinates. The pupil boundary coordinates are (x_p, y_p) and the iris boundary coordinates are (x_i, y_i) along the θ direction.


ω₀ = √(u₀² + v₀²)    (13)

The iris pattern is convolved with the complex-valued Gabor wavelet and its phase is quantized, as represented by equation (14). Figure 3(b) shows the real component, an even-symmetric filter characterized by a cosine modulated by a Gaussian, and the imaginary component, an odd-symmetric filter characterized by a sine modulated by a Gaussian [17].

h_{Re,Im} = sgn_{Re,Im} ∫∫ I(ρ, φ) e^(−iω(θ₀ − φ)) e^(−(r₀ − ρ)²/α²) e^(−(θ₀ − φ)²/β²) ρ dρ dφ    (14)
Fig. 3. (a) Phase Quadrant Demodulation Code of the 2D Gabor Wavelet. (b) Real component (cosine modulated by a Gaussian) and imaginary component (sine modulated by a Gaussian) of the filter.

x(r, θ) = (1 − r) x_p(θ) + r x_i(θ)    (10)

y(r, θ) = (1 − r) y_p(θ) + r y_i(θ)    (11)

The iris images are first scaled to obtain a constant distance between the pupil and iris regions. When comparing two images, one is considered the reference image. Once the two images have the same dimensions, the features are extracted from the iris region by considering the intensity values along concentric circles with their origin at the center of the pupil. The unwrapped iris region is shown in figure 4. For normalization of the iris region, the center of the pupil is taken as the reference point, and radial lines passing through the iris region define the angular resolution. The normalization process creates a 2D array whose horizontal dimension is the angular resolution and whose vertical dimension is the radial resolution. The data points are chosen from the radial and angular positions in the normalized iris pattern; when matching two irises, these patterns are compared. The rectangular iris pattern is shown in figure 4.
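The rubber-sheet unwrapping described by equations (9) through (11) can be sketched as follows. This is a simplified illustration rather than the authors' code; the radial and angular resolutions and the boundary-circle parameters are example values, and nearest-neighbor sampling stands in for interpolation.

```python
import math

def normalize_iris(img, pupil, iris, radial_res=8, angular_res=16):
    """Daugman rubber-sheet model: sample I(x(r, t), y(r, t)) on a fixed
    radial_res x angular_res grid. For each angle t, (x, y) is linearly
    interpolated between the pupil and iris boundary circles, as in
    equations (10) and (11)."""
    px, py, pr = pupil   # pupil center and radius
    ix, iy, ir = iris    # iris center and radius
    out = []
    for k in range(radial_res):
        r = k / (radial_res - 1)          # r in [0, 1]
        row = []
        for a in range(angular_res):
            theta = 2.0 * math.pi * a / angular_res
            # boundary points at this angle
            xp = px + pr * math.cos(theta)
            yp = py + pr * math.sin(theta)
            xi = ix + ir * math.cos(theta)
            yi = iy + ir * math.sin(theta)
            x = (1 - r) * xp + r * xi     # equation (10)
            y = (1 - r) * yp + r * yi     # equation (11)
            row.append(img[int(round(y))][int(round(x))])
        out.append(row)
    return out
```

The output is the fixed-size rectangular block used for feature extraction, regardless of the pupil and iris diameters in the input image.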

4 FEATURE EXTRACTION
Once the normalized iris region is obtained in 2D rectangular form, it can be used to extract features. The iris pattern is demodulated to extract phase information using a 2D wavelet. A 2D Gabor filter for an image I(x, y) is given by equation (12), where (x₀, y₀) gives the position in the image, (α, β) denote the width and length, and (u₀, v₀) denote the modulation. Figure 3 shows the phase quadrant demodulation code of the 2D Gabor wavelet. The demodulation generates complex-valued coefficients whose real and imaginary parts specify the coordinates of a phasor in the complex plane [16]. The frequency ω₀ is given by equation (13).
G(x, y) = e^(−π[(x − x₀)²/α² + (y − y₀)²/β²]) e^(−2πi[u₀(x − x₀) + v₀(y − y₀)])    (12)
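For illustration, the 2D Gabor filter of equation (12) can be evaluated pointwise as below. The default parameter values are hypothetical examples, not the filter parameters used in the paper.

```python
import math
import cmath

def gabor(x, y, x0=0.0, y0=0.0, alpha=2.0, beta=2.0, u0=0.1, v0=0.0):
    """Complex 2-D Gabor value at (x, y): a Gaussian envelope of widths
    alpha and beta centered at (x0, y0), modulated by a complex exponential
    with spatial frequencies (u0, v0)."""
    envelope = math.exp(-math.pi * (((x - x0) ** 2) / alpha ** 2
                                    + ((y - y0) ** 2) / beta ** 2))
    carrier = cmath.exp(-2j * math.pi * (u0 * (x - x0) + v0 * (y - y0)))
    return envelope * carrier
```

At the filter center the envelope and carrier are both 1, and away from the center the magnitude decays as the pure Gaussian envelope, since the carrier has unit modulus.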

Here h_{Re,Im} can be regarded as a complex-valued bit whose real and imaginary parts are each 1 or 0, depending on the sign of the 2D integral. The iris image in the polar coordinate system is denoted by I(ρ, φ); α and β are the multi-scale 2D wavelet size parameters, ω is the wavelet frequency, and (r₀, θ₀) represent the polar coordinates of the iris region. The phase information is used for recognizing the iris, whereas the amplitude information is used for estimating contrast and camera gain. After feature extraction, an iris image is converted into feature vectors, and the iris template is stored for matching. Two irises can be compared using their feature vectors by calculating the difference between the two iris vectors. The proposed matching algorithm, Euclidean distance, records results for iris images; the comparison is performed on the intra-class database as well as the inter-class database, and the results are shown in table 1. The encoded iris pattern values generated by the encoding process for each filter are recorded in table 3 with the filter parameters: frequency, bandwidth, and multiplicative factor. The template is used for comparison of different iris patterns. The normalization of the iris region determines the radial and angular resolutions that are used for encoding the iris pattern to create the iris template.
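A minimal sketch of the phase quantization and the proposed Euclidean-distance comparison follows, under the assumption that each complex Gabor response is reduced to the two sign bits of equation (14); the example coefficient values are invented for illustration.

```python
import math

def quantize_phase(z):
    """Phase quadrant code: two bits per complex coefficient, set by the
    signs of the real and imaginary parts of the 2D integral."""
    return (1 if z.real >= 0 else 0, 1 if z.imag >= 0 else 0)

def encode(coeffs):
    """Flatten a list of complex Gabor responses into a template bit vector."""
    bits = []
    for z in coeffs:
        bits.extend(quantize_phase(z))
    return bits

def euclidean_distance(a, b):
    """Distance between two feature vectors; matching thresholds this value
    (identical templates give distance 0)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```

Comparing a template with itself yields distance 0, and templates differing in one bit yield distance 1, which is the kind of separation the intra-class versus inter-class tests measure.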

5 EXPERIMENTAL RESULTS
The performance of the iris recognition system is tested at various stages. Tests were performed for image preprocessing, segmentation, normalization, feature extraction, and matching. The results of all the tests are plotted, and some of them are shown in this paper. The performance efficiency at each stage is calculated and improved by minimizing the errors. The first and most important step in iris recognition is image acquisition, as recognition efficiency depends on the quality of the image. The images are captured using the system setup with a camera installed in our College Research Laboratory. The collected database has a total of 1,000 images of right and left eyes with very good image quality and extremely clear iris texture details. These images were collected from 50 different male and female



volunteers. The images were captured with a high-resolution camera at a constant distance

Table 1. The images used for segmentation from each database.
Table 2. Efficiency of segmentation results.

Fig. 4. (a) Sample Normalized Iris and Encoded Iris Pattern. (b) Iris Image Capturing System setup.

Fig. 5. (a) Sample CASIA Iris Images. (b) Sample Iris Images captured in our Research Laboratory.

and keeping a constant natural lighting effect in our laboratory. The images are color images that are converted to gray scale for further processing. Here, we selected only 100 iris images from the CASIA database and 100 images from our captured set for testing. The images from our database were selected as those giving good segmentation results; the images from the CASIA database are standard and were likewise selected based on good segmentation results. The iris images from both datasets were tested for intra-class and inter-class comparison, given in table 1. Figure 5 shows some of the iris images selected for preprocessing. Once the iris images are captured, they are preprocessed for iris localization. The image is smoothed, and the horizontal and vertical gradients are calculated using the Sobel operator. The pupil and iris boundaries are located by applying the Canny edge detection algorithm, and the Hough transform is used for segmenting the circular iris region; details of the segmentation are given in section 2. The images from the CASIA database as well as the images acquired in our laboratory were tested for segmentation, and the results are given in table 1. The segmentation efficiency for our acquired images is lower due to various factors affecting image capture, such as lighting and the distance between the camera and the person. Figure 6 shows the segmentation results for some of the iris images. After segmentation of the iris images, the extracted iris region is normalized: the iris region is converted to polar form, as explained in section 3, to obtain a fixed-size rectangular iris region from which to extract features. Figure 6 shows some of the normalized polar iris results. The wavelet transform is applied to the normalized iris region to encode the iris pattern; the Gabor wavelet is convolved with the iris pattern. The detailed feature extraction process using wavelets is explained in section 4. Table 3 gives some of the iris pattern results for 20 iris images.

6 CONCLUSION
This paper, "Feature Extraction of an Iris for Pattern Recognition", focused on feature extraction and encoding of the iris pattern. Nearly 200 iris images, from the CASIA database and our own captured set, were tested for intra-class and inter-class comparison, as shown in the above section.


TABLE 3

Fig. 6. (a) Some of the Iris Segmentation Results. (b) Polar Iris extracted for feature extraction.

The segmentation efficiency for the CASIA database is very good, and that for our own captured images can be increased by capturing the images with a higher-resolution camera. In this paper, efficient and effective methods are proposed for segmentation and feature extraction of an iris image, giving accurate results. The fixed-size rectangular iris pattern was encoded to generate an iris template, and the iris code was used for inter-class and intra-class comparison of iris patterns. The proposed system is found to be successful and able to recognize a person using his or her iris images. The experimental results show better performance for the proposed system.

Table 3. Sample Feature Values.

REFERENCES
[1] Sulochana Sonkamble, Ravindra Thool, Balwant A. Sonkamble, "An Effective Machine-Vision System for Information Security and Privacy using Iris Biometrics", WMSCI-2008, Orlando, USA, ISBN 978-1-934272-31-2.
[2] Sulochana Sonkamble, Ravindra Thool, Balwant Sonkamble, "Efficient Iris Segmentation Methodology Using Gradient Vector for the High Confidence Visual Recognition of a Person", IPCV, 2009.
[3] Sulochana Sonkamble, Ravindra Thool, Balwant Sonkamble, "Survey of Biometric Recognition Systems and Their Applications", Journal of Theoretical and Applied Information Technology, ISSN: 1992-8645, EISSN: 1817-3195, 2010.
[4] Joseph Lewis, University of Maryland, Bowie State University, "Biometrics for Secure Identity Verification: Trends and Developments", January 2002.
[5] Lia Ma, Yunhong Wang, Tieniu Tan, "Iris Recognition Based on Multichannel Gabor Filtering", ACCV2002: The 5th Asian Conference on Computer Vision, 23-25 January 2002, Melbourne, Australia.
[6] Muhammad Khurram Khan, Jiashu Zhang, Shi-Jinn Horng, "An Effective Iris Recognition System for Identification of Humans", IEEE, 2004.
[7] Libor Masek, The University of Western Australia, "Recognition of Human Iris Patterns for Biometric Identification", 2003.
[8] Yong Wang, Jiu-Qiang Han, "Iris Recognition using Independent Component Analysis", IEEE, 2005.
[9] Hugo Proenca, Luis A. Alexandre, "A Method for the Identification of Inaccuracies in Pupil Segmentation", IEEE, 2006.
[10] Kresimir Delac, Mislav Grgic, University of Zagreb, Croatia, "A Survey of Biometric Recognition Methods", 46th International Symposium Electronics in Marine, ELMAR-2004, 16-18 June 2004, Zadar, Croatia.
[11] The Institute of Automation, Chinese Academy of Sciences, "Note on CASIA-IrisV3", October 2008.
[12] Hugo Proenca, Luis A. Alexandre, "UBIRIS: A Noisy Iris Image Database", ICIAP, September 2005.
[13] Shinyoung Lim, Kwanyong Lee, Okhwan Byeon, Taiyun Kim, "Efficient Iris Recognition through Improvement of Feature Vector and Classifier", ETRI Journal, 2001.
[14] John Daugman, "How Iris Recognition Works", invited paper, IEEE, 2004.
[15] John Daugman, "High Confidence Recognition of Persons by Iris Pattern", University of Cambridge, The Computer Laboratory, Cambridge CB2 3QG, UK, IEEE, 2001.


[16] Anil K. Jain, Patrick Flynn, Arun A. Ross, Handbook of Biometrics, Springer.
[17] Rafael C. Gonzalez, Richard E. Woods, Digital Image Processing, Addison-Wesley.
[18] William K. Pratt, Digital Image Processing, Wiley.

Mrs. Sulochana Balwant Sonkamble received the B.E. degree in Computer Science and Engineering from Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra state, India, in 1996, and the M.E. in 2002. She is pursuing a Ph.D. in Computer Science and Engineering from Swami Ramanand Teerth Marathwada University, Nanded, M.S., India. She is a distinguished Assistant Professor in the Information Technology Department and is presently working as Head of the Department of Information Technology at Marathwada Mitra Mandal's College of Engineering, Pune, Maharashtra state, India. She became a Member of IEEE in 2006, and is a member of the Computer Society of India and a life member of the Indian Society for Technical Education. The author has published two papers in international journals, five papers at the national level, and twelve papers at the international level. The author has received a research grant from the Board of College and University Development, and funds for organizing state-level and district-level workshops from the University of Pune, M.S., India. Her research interests include computer vision, iris biometrics, image processing, neural networks, and pattern recognition.

Dr. Ravindra Thool received the B.E. degree in Electronics Engineering from Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra state, India, in 1986, the M.E. in 1991, and the Ph.D. in Electronics and Computer Science in 2003 from Swami Ramanand Teerth Marathwada University, Nanded, M.S., India. He has been a distinguished Professor in the Information Technology Department since 1986 at Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra state, India, and is presently working as Head of the Department. His research interests include computer vision, image processing, neural networks, and pattern recognition. He has published seven papers in international journals and twenty-six papers at international conferences. He is a life member of the Indian Society for Technical Education and a member of the American Society for Agricultural Engineering. He has worked as Chairman of the Board of Studies in Information Technology and as a Member of the Board of Studies in Computer Science and Engineering.
