Iris Based Authentication System: R.Shanthi, B.Dinesh
I. INTRODUCTION
Biometric systems now play a central role in many applications, such as smart cards, passports, security systems, and network and database access. Biometric authentication identifies human beings from their physiological and behavioral characteristics, and is carried out automatically by computer. In this paper the authentication is based on the iris, a physiological characteristic of the human eye: under infrared illumination the iris texture is highly discriminating and stable, which makes it well suited to personal identification. This paper describes the algorithm used for an iris authentication system working on images of the human eye.

The CASIA database, developed by a biometrics research group, provides thousands of eye images captured with an infrared camera and includes both the left and right eyes of each subject. It has been released in versions V1, V2, V3, and V4. In this paper the CASIA-IrisV3 database is used, with its three subsets CASIA-Iris-Interval, CASIA-Iris-Lamp, and CASIA-Iris-Twins.

The recognition system is automated by capturing the eye image of an individual, as illustrated in Fig. 1, and manipulating it sequentially with various techniques implemented through a MATLAB interface. Authentication is performed by comparing the iris texture pattern of the present image, expressed as a mathematical pattern, with the pre-existing images stored in the database. The stages involved are segmentation, normalization, feature encoding, and template matching. In the first stage, segmentation, the edges are detected with the canny edge detection method and the circular iris boundary is located by applying the circular Hough transform. In the second stage, normalization, Daugman's rubber sheet model unwraps the detected iris texture to a fixed size, since the pupil dilates as the light falling on the iris region changes.
In the third stage, feature encoding, the normalized iris is convolved with 1D Log-Gabor wavelets to obtain the binarized iris code. Finally, the resulting template is matched against the templates in the database using the Hamming distance. The overall system is shown in Fig. 2.

Fig. 2. Iris recognition system.

1.1 ANATOMY OF IRIS
The iris is the ring of colored tissue surrounding the pupil; the circular ridge dividing its inner and outer zones is known as the collarette. The iris controls the amount of light entering the eye, much like the aperture of a camera, and the round black spot at its center is the pupil. Both are regulated dynamically according to the lighting conditions reaching the eye: the pupil constricts in bright light and dilates in dim light, so the size of the pupil is controlled by the iris in response to the illumination. The iris has a circular muscle layer that constricts the pupil and a radial muscle layer that dilates it. The iris is flat and divides the front of the eye (anterior chamber) from the back of the eye (posterior chamber). Its color comes from the pigment melanin. The color, texture, and pattern of each person's iris are as unique as a fingerprint [2].

1.2 CASIA DATABASE
The CASIA Iris Image Database (CASIA-Iris) was introduced by a research group of the international biometrics community and has been updated from version CASIA-IrisV1 to CASIA-IrisV4. More than 3,000 users from about 70 countries or regions have downloaded CASIA-Iris, and much excellent work on iris recognition has been carried out with these iris image databases. CASIA-IrisV4 is an extension of CASIA-IrisV3 and contains six subsets: the three subsets carried over from CASIA-IrisV3, namely CASIA-Iris-Interval, CASIA-Iris-Lamp, and CASIA-Iris-Twins, together with the three new subsets CASIA-Iris-Distance, CASIA-Iris-Thousand, and CASIA-Iris-Syn. The CASIA database is used in this paper to evaluate the recognition system and its suitability for authentication and identification.

The remainder of this paper is organized as follows: Section II presents related work on identity recognition based on iris biometric images, Section III analyzes the methodology, and Section IV sketches the implementation of our proposal and presents experimental results that quantify its performance.
II. RELATED WORK
Image acquisition is performed by capturing the eye image with an infrared camera. Segmentation based on the circular Hough transform has been proposed by Wildes et al. [4], Kong and Zhang [5], Tisse et al. [6], and Ma et al. [7]. Kong and Zhang [5] additionally detect eyelashes with an eyelash detection technique, and the whole segmentation stage of the proposed methodology builds on these ideas. Normalization maps the iris into an unwrapped rectangular block of polar coordinates, as devised in Daugman's rubber sheet model [3]. Field [8] introduced the Log-Gabor filter, whose response to the iris texture yields a highly discriminating pattern, and Boles and Boashash [9] made use of 1D wavelets to encode the iris pattern of a dataset. Matching is performed with the Hamming distance technique introduced by Daugman, in which the bits of the templates are compared over a range of shifts.
III. METHODOLOGY
3.1 IRIS ACQUISITION
The human iris image is captured with an infrared camera, without any laser scanning, to obtain a high quality picture. Under infrared illumination several features are more evident than in the visible range: the iris ridges, nerves, and crypts stand out, and the boundary between the iris and the pupil is better defined. The captured images are stored in the database for further processing.

3.2 PRE-PROCESSING
Pre-processing reduces the noise present in the image and corrects lens aberrations by enhancing the image before the later stages of the recognition system. The original image is passed through a median filter to remove noise, and histogram equalization is then applied to obtain a well contrasted image for the subsequent processing steps.

3.3 SEGMENTATION
Segmentation follows image acquisition and pre-processing. Two major steps are required to segment the iris region of the eye image. The first step is to build an edge map of the iris region using Kovesi's modified MATLAB implementation [10] of the canny edge detection method, and then to locate the circular boundaries by applying the circular Hough transform [4] to the edge map. The circular Hough transform detects circles in the binary edge image through the relation

$(x - x_c)^2 + (y - y_c)^2 - r^2 = 0$   (1)

where $(x, y)$ lies on the circumference of a circle with center $(x_c, y_c)$ and radius $r$. Following Wildes et al. [4], the gradients are weighted towards the vertical direction for the outer sclera/iris boundary, while for the inner iris/pupil boundary the gradients are weighted equally in the vertical and horizontal directions. For the CASIA database [11], the iris radius ranges from 90 to 150 pixels and the pupil radius from 28 to 75 pixels.

The second step is the eyelash and eyelid detection of Kong and Zhang [5], which isolates the eyelashes and eyelids after the borders of the iris region have been fitted. The upper and lower eyelids are detected by fitting lines to their borders with the linear Hough transform; a horizontal border is then isolated where these lines intersect the region near the iris and pupil, and this is done for both the top and bottom eyelids. The linear Hough transform is implemented with the MATLAB functions of [10]. If the maximum in the Hough space is below a threshold, no border is fitted, which corresponds to an eyelid that does not occlude the iris. The fitted lines are restricted to the region outside the pupil and inside the iris. The linear Hough transform has the advantage over a parabolic version that fewer parameters have to be deduced, which makes the computation less demanding. To isolate the eyelashes, a thresholding technique is applied: since the eyelashes and eyelids have darker pixel intensities than the rest of the eye image, thresholding the pixels isolates them and leaves the part of the image needed to detect the iris.
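A minimal sketch of the pre-processing and segmentation stages is given below. It substitutes the Image Processing Toolbox routines medfilt2, histeq, edge, and imfindcircles for the Kovesi canny implementation and the hand-written circular Hough transform used in the paper; the file name, the circle-detector sensitivities, and the grey-level threshold for the eyelashes are illustrative assumptions, while the radius ranges follow the CASIA values quoted above.

eye = imread('casia_eye.bmp');          % hypothetical CASIA grayscale eye image
den = medfilt2(eye, [3 3]);             % median filter removes impulse noise
enh = histeq(den);                      % histogram equalization improves contrast

edges = edge(enh, 'canny');             % edge map; the paper feeds this to a circular Hough transform

% Substitute circular Hough search: pupil radius 28-75 px, iris radius 90-150 px.
% (imfindcircles works on the grey-level image directly; assumes a circle is found.)
[pupilC, pupilR] = imfindcircles(enh, [28 75],  'ObjectPolarity', 'dark', 'Sensitivity', 0.95);
[irisC,  irisR]  = imfindcircles(enh, [90 150], 'ObjectPolarity', 'dark', 'Sensitivity', 0.98);

% Eyelash/eyelid isolation by intensity thresholding: eyelash pixels are darker
% than the rest of the eye image, so mark them as noise.
noiseMask = enh < 60;                   % assumed grey-level threshold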
In summary, the steps taken to detect the boundary of the iris are: the edge map is extracted with the canny edge detection method, the circular Hough transform is applied to find the iris and pupil circles, and the eyelashes and eyelids are isolated to obtain the intermediate iris image from the selected eye image of the database.

3.4 NORMALIZATION
To normalize the segmented iris image, which at this point is an intermediate template, Daugman's rubber sheet model [3] is applied. The rubber sheet model makes the segmented iris invariant to its size, its position, and the dilation of the pupil inside the eye. Normalization assigns to each pair of real Cartesian coordinates a pair of dimensionless polar coordinates, remapping the wrapped circular iris region into a rectangular block of fixed size (the unwrapped form). The center of the pupil is taken as the reference point, and radial vectors pass through the iris region around the pupil. Data points are selected along each radial line, and these radial samples of the iris region are normalized to produce the template. Daugman's rubber sheet model is expressed by the following equation:
$I\big(x(r,\theta),\, y(r,\theta)\big) \rightarrow I_{\mathrm{unwrapped}}(r,\theta)$   (2)

with

$x(r,\theta) = (1 - r)\,x_p(\theta) + r\,x_i(\theta), \qquad y(r,\theta) = (1 - r)\,y_p(\theta) + r\,y_i(\theta)$

where $(x, y)$ are the Cartesian coordinates of the iris region, $(r, \theta)$ are the corresponding normalized polar coordinates, and $(x_p, y_p)$ and $(x_i, y_i)$ are the Cartesian coordinates of the pupil and iris boundaries in the direction $\theta$. This normalization is applied to remove the effect of the dilation that occurs in the iris and pupil when the eye is illuminated by the camera; the doughnut-shaped iris region is unwrapped into a block of fixed size. In this application the angular resolution is 240 units (1 unit = 1.5 degrees) and the radial resolution is 20, which yields an unwrapped iris image I_unwrapped of 240 x 20 (angular x radial) samples. The normalization maps I_original to I_unwrapped as shown in Fig. 3. The unwrapped pattern is produced by backtracking from the polar coordinates to the Cartesian coordinates of the original image and gathering a data point at each radial and angular position, so that normalization yields a 2D array whose horizontal dimension is the angular resolution and whose vertical dimension is the radial resolution. A second 2D array of the same size is created to mark the reflections, eyelashes, and eyelids detected in the segmentation stage. To prevent non-iris data from corrupting the normalized representation, data points lying on the pupil border and the iris border are discarded.
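A minimal sketch of the rubber sheet normalization of equation (2), continuing from the variables produced in the segmentation sketch above (enh, noiseMask, pupilC, pupilR, irisC, irisR); the 20 x 240 resolution is the one quoted in the text.

radialRes  = 20;                             % radial resolution used in the paper
angularRes = 240;                            % angular resolution (1 unit = 1.5 degrees)

theta = linspace(0, 2*pi, angularRes + 1);  theta(end) = [];
r     = linspace(0, 1, radialRes)';          % boundary samples (r = 0, r = 1) could be dropped as in the text

% Boundary points (xp, yp) on the pupil circle and (xi, yi) on the iris circle.
xp = pupilC(1,1) + pupilR(1) .* cos(theta);   yp = pupilC(1,2) + pupilR(1) .* sin(theta);
xi = irisC(1,1)  + irisR(1)  .* cos(theta);   yi = irisC(1,2)  + irisR(1)  .* sin(theta);

% x(r,theta) = (1 - r)*xp(theta) + r*xi(theta), and likewise for y, as in equation (2).
X = (1 - r) * xp + r * xi;                   % radialRes x angularRes sampling grid
Y = (1 - r) * yp + r * yi;

Iunwrapped = interp2(double(enh), X, Y);     % 20 x 240 normalized iris pattern
polarMask  = interp2(double(noiseMask), X, Y) > 0;   % noise mask carried into polar coordinates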
Fig. 3. Normalization process: unwrapping the iris into a rectangular block.

3.5 FEATURE ENCODING
Feature encoding is implemented by convolving the normalized iris pattern with 1D Log-Gabor wavelets. The 2D normalized pattern is broken into 1D signals, each signal corresponding to one circular ring of the iris, and these signals are convolved with the 1D wavelets. To limit the effect of noise on the filter output, the intensity values at known noise positions in the normalized pattern are set according to the intensity of the surrounding pixels. The output of the filtering is then phase-quantized into four levels using the method of Daugman [1], with each phase producing two bits of data, so the intermediate output is a Gray code: moving from one quadrant to an adjacent one changes only a single bit, so when two intra-class patterns are slightly misaligned only a small number of bits disagree and recognition is still achieved. Lee [12] performed the encoding with 2D Gabor wavelets, using the amplitude and phase of the filter response. The 2D Gabor wavelet is built from an even-symmetric and an odd-symmetric (cosine and sine) modulation of a Gaussian envelope, as given in equations (3), (4), and (5):

$w(x,y) = \exp\!\left(-\pi\left[\frac{(x - x_0)^2}{\alpha^2} + \frac{(y - y_0)^2}{\beta^2}\right]\right)$   (3)

$m(x,y) = \exp\!\left(-2\pi i\left[u_0(x - x_0) + v_0(y - y_0)\right]\right)$   (4)

$\psi(x,y) = w(x,y)\, m(x,y)$   (5)

where $(x_0, y_0)$ is the position of the wavelet, $\alpha$ its effective width, $\beta$ its effective length, $(u_0, v_0)$ the modulation wave vector, and $\omega_0 = \sqrt{u_0^2 + v_0^2}$ the spatial frequency.
The amplitude $A(x,y)$ and phase $\phi(x,y)$ of the response are defined as

$A(x,y) = \sqrt{\operatorname{Re}\{\psi(x,y)\}^2 + \operatorname{Im}\{\psi(x,y)\}^2}$   (6)

$\phi(x,y) = \arctan\!\left(\operatorname{Im}\{\psi(x,y)\} / \operatorname{Re}\{\psi(x,y)\}\right)$   (7)

The encoding is generated by demodulating the phase into the resultant bit pattern, and this pattern is used in the matching procedure. The encoding process therefore produces a bitwise template together with a noise mask that flags the bits of the template corrupted by the regions of the iris pattern identified as noise. Since the phase information is meaningless at regions where the amplitude is zero, these regions are also marked in the noise mask.
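A minimal sketch of the 1D Log-Gabor encoding described above, continuing from the Iunwrapped and polarMask arrays of the normalization sketch. It follows the style of Masek's open MATLAB implementation: each row of the normalized pattern is filtered in the frequency domain with a one-sided Log-Gabor filter, and the phase of the complex response is quantized to two bits. The wavelength, the sigma-on-f ratio, and the near-zero amplitude threshold are assumed values, not parameters reported in the paper.

[rows, cols] = size(Iunwrapped);
waveLength = 18;   sigmaOnf = 0.5;           % assumed filter parameters

freq = (0:cols/2) / cols;                    % non-negative frequency samples (up to 0.5)
f0   = 1 / waveLength;                       % center frequency of the Log-Gabor filter
G    = zeros(1, cols);
G(2:cols/2 + 1) = exp(-(log(freq(2:end) / f0)).^2 / (2 * log(sigmaOnf)^2));
% G(1) stays 0 (no DC) and negative frequencies stay 0, so the filtered signal is complex.

template = false(rows, 2 * cols);            % two phase bits per angular sample
bitNoise = false(rows, 2 * cols);            % noise mask aligned with the bit template
for k = 1:rows
    response = ifft(fft(Iunwrapped(k, :)) .* G);        % Log-Gabor response of one iris ring
    template(k, 1:2:end) = real(response) > 0;          % first phase bit (quadrant)
    template(k, 2:2:end) = imag(response) > 0;          % second phase bit (quadrant)
    rowNoise = polarMask(k, :) | abs(response) < 1e-4;  % phase is meaningless where amplitude ~ 0
    bitNoise(k, 1:2:end) = rowNoise;
    bitNoise(k, 2:2:end) = rowNoise;
end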
3.6 MATCHING
The Hamming distance is chosen as the matching metric of the iris recognition system, since the comparison of templates is essentially bitwise. The noise masks are incorporated into the Hamming distance, so that only the significant bits, those not flagged as noise in either template, are used when a template is compared with the database images. The Hamming distance is given as

$HD = \dfrac{\sum_{j=1}^{N} (X_j \oplus Y_j) \wedge \overline{Xn_j} \wedge \overline{Yn_j}}{N - \sum_{k=1}^{N} (Xn_k \vee Yn_k)}$   (8)

In equation (8), $X_j$ and $Y_j$ are the two bitwise templates being compared, $Xn_j$ and $Yn_j$ are their corresponding noise masks, and $N$ is the number of bits in each template. Although normalization removes differences in scale and position, it does not compensate for rotational inconsistencies, so the Hamming distance is computed for several relative rotations of the templates: one template is shifted bitwise to the left and to the right and the Hamming distance is evaluated for each successive shift. This procedure, demonstrated by Daugman [1], is applied after normalization, and the lowest Hamming distance over all shifts is finally used to match the template with the database.
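A minimal sketch of the masked, shift-compensated Hamming distance of equation (8), for example saved as match_hamming.m (a hypothetical function name). The templates are circularly shifted by two bits per angular step, since each angular sample contributes two bits, and the lowest distance over all shifts is kept; the number of shifts is left to the caller.

function bestHD = match_hamming(X, Xn, Y, Yn, maxShift)
    % X, Y     : logical bit templates of equal size
    % Xn, Yn   : corresponding noise masks (true = corrupted bit)
    % maxShift : number of angular steps to try in each direction
    bestHD = 1;                                   % worst possible distance
    for s = -maxShift:maxShift
        Xs  = circshift(X,  [0, 2*s]);            % rotate template X by s angular steps
        Xns = circshift(Xn, [0, 2*s]);            % rotate its noise mask identically
        valid  = ~Xns & ~Yn;                      % bits flagged by neither noise mask
        nValid = nnz(valid);
        if nValid == 0, continue; end
        hd = nnz(xor(Xs, Y) & valid) / nValid;    % masked Hamming distance, equation (8)
        bestHD = min(bestHD, hd);
    end
end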
IV. PERFORMANCE EVALUATION
The recognition system is tested with the CASIA database of iris images, which is maintained and verified by a biometrics research group. The first stages of the recognition process are image acquisition and segmentation: the image is localized to obtain the iris region of the eye, the eyelashes and eyelids are detected and isolated using the canny edge method and the circular Hough transform, and the occluded regions are excluded from further processing. Because the segmented images are dimensionally inconsistent, normalization with Daugman's rubber sheet model is applied next, unwrapping the iris into a rectangular block in polar coordinates with the noise regions masked out. Feature encoding then produces the bitwise template by filtering the normalized iris with the 1D Log-Gabor filter and quantizing the phase into four quadrants. Finally, the template is matched against the database templates using the Hamming distance, shifting the bits from left to right to compensate for rotation. The recognition rate of 94.3% obtained in the experiments shows that the proposed method is efficient for iris based biometric recognition.
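The sketches given in Section III can be chained as shown below; the stored template, its noise mask, the number of shifts, and the acceptance threshold are illustrative assumptions (the paper reports the recognition rate but not a threshold value).

% storedTemplate / storedNoise: hypothetical enrolled template and noise mask from the database.
hd = match_hamming(template, bitNoise, storedTemplate, storedNoise, 8);   % 8 shifts each way (assumed)
accepted = hd < 0.4;                                                      % assumed decision threshold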
V. EXPERIMENTAL RESULTS
The results obtained for the recognition system were produced by running the iris code in MATLAB [10], as shown below.
VI. CONCLUSION
This paper presented an iris based authentication system, tested with the CASIA database of iris images. The recognition proceeds through image acquisition, segmentation of the iris region with the canny edge method and circular Hough transform (including the isolation of the eyelashes and eyelids), normalization with Daugman's rubber sheet model, Log-Gabor feature encoding, and Hamming distance matching. As an enhancement, the system will be extended to recognize both the left and right eye images of an individual as a template sequence, corresponding to the images available in the CASIA database.
REFERENCES
[1] J. Daugman, "How iris recognition works," Proceedings of the 2002 International Conference on Image Processing, Vol. 1, 2002.
[2] J. Daugman, "Biometric personal identification system based on iris analysis," United States Patent No. 5,291,560, 1994.
[3] J. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 11, 1993.
[4] R. Wildes, J. Asmuth, G. Green, S. Hsu, R. Kolczynski, J. Matey, and S. McBride, "A system for automated iris recognition," Proceedings of the IEEE Workshop on Applications of Computer Vision, Sarasota, FL, pp. 121-128, 1994.
[5] W. Kong and D. Zhang, "Accurate iris segmentation based on novel reflection and eyelash detection model," Proceedings of the 2001 International Symposium on Intelligent Multimedia, Video and Speech Processing, Hong Kong, 2001.
[6] C. Tisse, L. Martin, L. Torres, and M. Robert, "Person identification technique using human iris recognition," International Conference on Vision Interface, Canada, 2002.
[7] L. Ma, Y. Wang, and T. Tan, "Iris recognition using circular symmetric filters," National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, 2002.
[8] D. Field, "Relations between the statistics of natural images and the response properties of cortical cells," Journal of the Optical Society of America, 1987.
[9] W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform," IEEE Transactions on Signal Processing, Vol. 46, No. 4, 1998.
[10] P. Kovesi, MATLAB Functions for Computer Vision and Image Analysis. Available at: http://www.cs.uwa.edu.au/~pk/Research/MatlabFns/index.html
[11] Chinese Academy of Sciences Institute of Automation, Database of 756 Grayscale Eye Images, http://www.sinobiometrics.com, Version 1.0, 2003.
[12] T. S. Lee, "Image representation using 2D Gabor wavelets," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 18, No. 10, 1996.