A Human Iris Recognition Technique to Enhance the E-Security Environment Using the Wavelet Transform
ABSTRACT In this paper, efficient biometric security techniques for an iris recognition system with high performance and high confidence are described. The system is based on an empirical analysis of the iris image and is split into several steps using local image properties. The system steps are: capturing iris patterns; determining the location of the iris boundaries; converting the iris boundary to the stretched polar coordinate system; extracting the iris code based on texture analysis using wavelet transforms; and classifying the iris code. The proposed system uses wavelet transforms for texture analysis and depends heavily on knowledge of the general structure of a human iris. The system was implemented and tested using a dataset of 240 samples of iris data with different contrast quality. The classification rate is compared with well-known methods. KEYWORDS User authentication, E-security, Biometrics, Iris recognition, Segmentation, Wavelet, Classification, E-business.
1. INTRODUCTION
Today's e-security systems are in critical need of accurate, secure and cost-effective alternatives to passwords and personal identification numbers (PINs), as financial losses from computer-based fraud such as hacking and identity theft increase dramatically year over year [15]. Biometric solutions address these fundamental problems, because an individual's biometric data is unique and cannot be transferred. Biometrics is the automated identification of a person, or verification of a person's identity, based on a physiological or behavioral characteristic. Examples of physiological characteristics include hand or finger images, facial characteristics, and iris patterns. Behavioral characteristics are traits that are learned or acquired; dynamic signature verification, speaker verification, and keystroke dynamics are examples [2,3]. A biometrics system uses hardware to capture the biometric information and software to maintain and manage the system. In general, the system translates these measurements into a mathematical, computer-readable format. When a user first creates a biometric profile, known as a template, that template is stored in a database. The biometrics system then compares this template to the new image created every time the user accesses the system. For an enterprise, biometrics provides value in two ways. First, a biometric device automates entry into secure locations, relieving or at least reducing the need for full-time monitoring by personnel. Second, when rolled into an authentication scheme, biometrics adds a strong layer of verification for user names and passwords. Biometrics adds a unique identifier to network authentication, one that is extremely difficult to duplicate. Smart cards and tokens also provide a unique identifier, but biometrics has an advantage over these devices: a user cannot lose or forget his or her fingerprint, retina, or voice.
The practical applications for biometrics are diverse and expanding, ranging from healthcare to government, financial services, transportation, and public safety and justice [2,3]. Such applications include online identification for E-Commerce, access control of a certain building or restricted area, off-line personal identification, financial ATM (Automated Teller Machine) services, on-line ticket purchase, internet kiosks, and
military-area access control. Using iris recognition at an ATM [5,6,7,12,13,14,16], a customer simply walks up to the ATM and looks into a sensor camera to access his or her accounts. The camera instantly photographs the iris of the customer; if the customer's iris data matches the record stored in a database, access is granted. At the ATM, a positive authentication can be read through glasses, contact lenses and most sunglasses. Iris recognition has proved to be a highly accurate, easy-to-use and virtually fraud-proof means of verifying the identity of the customer. In this paper we present an iris recognition system using wavelet theory. The proposed system uses wavelet transforms for texture analysis, and it depends heavily on knowledge of the general structure of a human iris. The paper is organized as follows. Section 2 discusses the proposed system in detail. Results are discussed in Section 3. Conclusions are given in Section 4.
Iris localization: determine the pupillary (inner) iris boundary; determine the limbic (outer) iris boundary.
Figure (2) shows the device configuration for acquiring human eye images.
(Distances of 8 cm and 12 cm are indicated; halogen lamp, 50 W.)
The human eye should be 9 cm away from the camera, as shown above. The halogen lamp is kept in a fixed position to give the same illumination effect over all the images, which makes it easier to exclude the illuminated part of the iris when computing the iris code. To acquire clearer images through the CCD camera and to minimize the effect of light reflected from the surrounding illumination, we arrange two halogen lamps as the surrounding lights; both lamps should be placed in front of the eye.
The algorithm applies a smoothing function G(x, y) to the image I(x, y) at each position (x, y) to acquire the image information, where G(x, y) is a Gaussian smoothing function of scale σ that smoothes the image so as to select the spatial scale of edges:

    G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))                    (1)
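As an illustration of this smoothing step, the Gaussian of equation (1) can be discretized and applied with plain NumPy. This is a minimal sketch, not the paper's implementation; the kernel size and σ value are illustrative assumptions, and the function names are ours:

```python
import numpy as np

def gaussian_kernel(size=9, sigma=2.0):
    """Discrete 2D Gaussian G(x, y) of scale sigma, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return g / g.sum()

def smooth(image, sigma=2.0, size=9):
    """Convolve the image with G(x, y), same-size output, edge padding."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(np.asarray(image, dtype=float), pad, mode='edge')
    out = np.zeros(np.asarray(image).shape, dtype=float)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(padded[i:i+size, j:j+size] * k)
    return out
```

The smoothed image is then suitable for edge detection at the chosen spatial scale; a finer σ is used for the pupillary boundary and a coarser σ for the limbus, as described below.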
The edge-detection result should be enhanced using a nonlinear method such as the median filter to remove the noise around the pupil, giving a clean pupil region from which an accurate centre can be determined. The centre of the pupil is found by counting the number of black pixels (zero value) in each column and row, and then taking the row and the column that have the maximum number of black pixels. The centre is determined by a simple calculation in the image coordinates, and consequently the radius of the pupil can be determined. Thus we can find the pupillary (inner) boundary. A similar procedure, using a coarser scale, locates the outer boundary (limbus), which can be made apparent by using the mid-point algorithms for the circle and ellipse. By merging the existing edge segments into boundaries through edge linking, we can precisely isolate the iris boundary from the eye. The proposed iris-boundary isolation algorithm consists of the following steps:
Step-1: Edge detection: Localize the pupillary boundary using a finer scale, then apply zero-crossing detection to each pixel, comparing neighbouring pixels so that all pupil values become zero and the boundary is easy to determine.
Step-2: Edge linking: Using a coarse-to-fine scale, obtain boundaries by merging the existing edge segments; this is done by edge linking.
Step-3: Enhancement: The results of Step-1 and Step-2 should be enhanced using a median filter.
Step-4: Pupil/limbus centre:
Determine the center (x0, y0) of the pupil by counting the number of black pixels (zero value) [17] as follows: count the black pixels in each row and take the row with the maximum count. Get the positions (x1, y1) and (x2, y2) of the first and last black pixels of this row, respectively. Then find the center of this row by x0 = (x1 + x2) / 2. Similarly, apply the previous steps to the column with the maximum number of black pixels to obtain y0 = (y1 + y2) / 2. In practice we cannot obtain only one point, so we select the center point as the most frequently crossed point. Consequently, the radius of the virtual circle of the pupil can be determined.
Step-5: Isolate the iris boundary: Segment the image of the iris from the eye by applying a boundary-detection technique to localize the pupillary boundary. This technique is based on merging the existing edge segments (those with the maximum number of edge points) into boundaries by edge linking [17] as follows: define a neighbourhood of size 5x5; link similar points (those having close values), processing the entire image while keeping a list of linked points. When the process is complete, the boundary is determined by the linked list, and can be made apparent by using the mid-point algorithms for the circle and ellipse. Similar steps can be extended using a coarse scale to locate the outer boundary (limbus).
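The center-finding part of Step-4 can be sketched as follows. This is a minimal NumPy version under the assumption that the pupil pixels have already been thresholded to zero; the function name is ours:

```python
import numpy as np

def pupil_center(binary_eye):
    """Estimate the pupil center of a binarized eye image.

    Pupil pixels are assumed to be zero (black); all others non-zero.
    Following Step-4: find the row and column with the most black pixels,
    then take the midpoint of the first and last black pixels in each.
    """
    black = (np.asarray(binary_eye) == 0)
    row = int(np.argmax(black.sum(axis=1)))   # row with max black pixels
    col = int(np.argmax(black.sum(axis=0)))   # column with max black pixels
    xs = np.nonzero(black[row, :])[0]         # black pixel positions in that row
    ys = np.nonzero(black[:, col])[0]         # black pixel positions in that column
    x0 = (xs[0] + xs[-1]) // 2                # x0 = (x1 + x2) / 2
    y0 = (ys[0] + ys[-1]) // 2                # y0 = (y1 + y2) / 2
    radius = (xs[-1] - xs[0]) // 2            # radius of the virtual circle
    return int(x0), int(y0), int(radius)
```

On a synthetic image with a black disk, the midpoints of the longest black row and column recover the disk center and radius, which is exactly the "virtual circle" of Step-4.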
The localized iris area is then converted to the stretched polar coordinate system (p, θ), with p ∈ [0, 1] and θ ∈ [0, 2π]. The remapping of the iris image I(x, y) from raw Cartesian coordinates to the dimensionless polar coordinate system can be represented as

    I(x(p, θ), y(p, θ)) → I(p, θ)                                      (2)

where x(p, θ) and y(p, θ) are defined as linear combinations of the pupillary boundary points (xp(θ), yp(θ)) and the limbic boundary points (xi(θ), yi(θ)):

    x(p, θ) = (1 − p) · xp(θ) + p · xi(θ)                              (3)
    y(p, θ) = (1 − p) · yp(θ) + p · yi(θ)                              (4)
    xp(θ) = xp0 + rp · cos(θ)                                          (5)
    yp(θ) = yp0 + rp · sin(θ)                                          (6)
    xi(θ) = xi0 + ri · cos(θ)                                          (7)
    yi(θ) = yi0 + ri · sin(θ)                                          (8)

where rp and ri are respectively the radius of the pupil and the iris, (xp0, yp0) and (xi0, yi0) are the centers of the pupillary and limbic boundaries, and (xp(θ), yp(θ)) and (xi(θ), yi(θ)) are the coordinates of the pupillary and limbic boundaries in the direction θ.
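This remapping can be sketched in a few lines of NumPy. The sketch below is a simplified version that assumes concentric circular boundaries sharing one center, uses nearest-neighbour sampling, and exposes the output size as parameters (the function name and defaults are ours, not the paper's):

```python
import numpy as np

def unwrap_iris(image, cx, cy, r_pupil, r_iris, n_theta=450, n_radial=60):
    """Map the iris ring to a stretched polar image I(p, theta).

    Each output column is one direction theta; each row is one radial
    fraction p in [0, 1] between the pupillary and limbic boundaries.
    """
    image = np.asarray(image)
    out = np.zeros((n_radial, n_theta))
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    ps = np.linspace(0, 1, n_radial)
    for j, theta in enumerate(thetas):
        # boundary points in direction theta (equations (5)-(8))
        xp = cx + r_pupil * np.cos(theta)
        yp = cy + r_pupil * np.sin(theta)
        xi = cx + r_iris * np.cos(theta)
        yi = cy + r_iris * np.sin(theta)
        for i, p in enumerate(ps):
            x = (1 - p) * xp + p * xi        # x(p, theta), equation (3)
            y = (1 - p) * yp + p * yi        # y(p, theta), equation (4)
            out[i, j] = image[int(round(y)) % image.shape[0],
                              int(round(x)) % image.shape[1]]
    return out
```

With the paper's default 450x60 output, every iris is normalized to the same rectangular size regardless of pupil dilation, which is what makes the later wavelet feature extraction comparable across images.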
Figure (3) shows the result of the Haar transform, where H and L denote the high-pass and low-pass filter, respectively, so that HH means the high-pass filter is applied to the signals of both directions. The Haar transform results in four types of coefficients: (a) coefficients that result from a convolution with g in both directions (HH) represent diagonal features of the image; (b) coefficients that result from a convolution with g on the columns after a convolution with h on the rows (HL) correspond to horizontal structures; (c) coefficients from high-pass filtering on the rows, followed by low-pass filtering of the columns (LH), reflect vertical information; (d) the coefficients from low-pass filtering in both directions are further processed in the next step. The following MATLAB code illustrates the Haar decomposition process:

    function [s,d] = dwthaar(Signal)
    N = length(Signal);
    s = zeros(1, N/2);
    d = s;
    % the actual transform
    for n = 1:N/2
        s(n) = 1/2*(Signal(2*n-1) + Signal(2*n));
        d(n) = Signal(2*n-1) - s(n);
    end

    % wavelet decomposition using the Haar transform
    function T = wavelet_decomp(Signal)
    N = size(Signal,2);
    J = log2(N);
    if rem(J,1)
        error('Signal must be of length 2^N.');
    end
    T = zeros(J+1, N);
    T(1,:) = Signal;
    for j = 1:J
        Length = 2^(J+1-j);
        [s, d] = dwthaar( T(j, 1:Length) );
        T(j+1, 1:Length) = [s, d];
        T(j+1, Length+1:N) = T(j, Length+1:N);
    end

For the 450x60 iris image in polar coordinates, we apply the wavelet transform 4 times in order to get the 28x3 sub-image (i.e. 84 features). By combining these 84 features of the HH sub-image of the fourth transform (HH4) with the average value of each of the three remaining high-pass filter areas (HH1, HH2, HH3), the dimension of the resulting feature vector is 87. Each of the 87 dimensions has a real value between -1.0 and 1.0. Each real value is quantized into binary form by converting positive values to 1 and negative values to 0. Therefore, we can represent an iris image with only 87 bits.
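The per-level Haar transform in the MATLAB code above, together with the sign-based quantization just described, maps directly to a short NumPy sketch (the function names are ours):

```python
import numpy as np

def dwt_haar(signal):
    """One level of the Haar transform: averages s (low-pass) and details d
    (high-pass), matching s(n) = (x(2n-1)+x(2n))/2 and d(n) = x(2n-1)-s(n)."""
    signal = np.asarray(signal, dtype=float)
    s = 0.5 * (signal[0::2] + signal[1::2])   # low-pass: pairwise averages
    d = signal[0::2] - s                      # high-pass: pairwise details
    return s, d

def quantize(features):
    """Binarize wavelet features: positive -> 1, non-positive -> 0."""
    return (np.asarray(features) > 0).astype(np.uint8)
```

Applying `dwt_haar` along rows and then columns gives one 2D decomposition level (LL, LH, HL, HH); repeating it four times on the LL band and quantizing the selected 87 features yields the 87-bit iris code.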
Let Aj and Bj be two iris codes to be compared; the Hamming distance function can be calculated as:

    HD = (1/87) · Σ (j = 1 to 87) Aj ⊕ Bj                              (9)

with ⊕ denoting the exclusive-OR operator. (The exclusive-OR is a Boolean operator that equals one if and only if the two bits Aj and Bj are different.)
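Equation (9) can be sketched directly with a bitwise XOR over the two 87-bit codes (a minimal NumPy version; the function name is ours):

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Normalized Hamming distance between two equal-length iris codes:
    HD = (1/N) * sum_j (A_j XOR B_j), per equation (9) with N = 87."""
    a = np.asarray(code_a, dtype=np.uint8)
    b = np.asarray(code_b, dtype=np.uint8)
    return np.sum(a ^ b) / a.size
```

Identical codes give HD = 0, and statistically independent codes give HD near 0.5, which is what makes the Hamming distance a usable match criterion.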
For j = 1 to 87 do:
  o Compare, bit by bit, the input code Aj with the code Bj in the database.
  o If the result of the XOR is 0, the two bits are the same, so count it among the zeros.
  o Else do not count it and continue to the next bit.
Repeat for each code until reaching the final code in the database. Calculate the similarity (matching) ratio by the following formula:
    MR = (Nz / Tz) · 100                                               (10)

where Nz and Tz are the number of zeros and the total number of bits in each code, respectively, and MR is the matching ratio.
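The matching loop above, with the ratio of equation (10), can be sketched as follows (a minimal NumPy version; the function names and the list-based database are our assumptions):

```python
import numpy as np

def matching_ratio(code_a, code_b):
    """MR = (Nz / Tz) * 100: percentage of agreeing bits (XOR == 0)."""
    a = np.asarray(code_a, dtype=np.uint8)
    b = np.asarray(code_b, dtype=np.uint8)
    nz = np.sum((a ^ b) == 0)    # Nz: positions where the codes agree
    return 100.0 * nz / a.size   # Tz: total number of bits

def best_match(probe, database):
    """Compare the probe code against every registered code and return
    the index and matching ratio of the best match."""
    ratios = [matching_ratio(probe, code) for code in database]
    best = int(np.argmax(ratios))
    return best, ratios[best]
```

A decision threshold on MR (equivalently, on the Hamming distance, since MR = (1 - HD) * 100) then accepts or rejects the claimed identity.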
Table (1) shows the classification rate compared with the two well-known methods of Wildes and Daugman [9,14]. In the two test modes, their classification rate is a little better than ours. In fact, the dimensionality of the feature vector in both methods is much higher than ours: the feature vector consists of 2048 components in Daugman's method, but only 84 in our method. In addition, these methods extract features in much smaller local regions. This makes their results a little better than ours. We are now working on representing the variation of the iris texture in local regions more precisely while reducing the dimensionality of the feature vector, and we expect this to further improve the performance of the current method.
4. CONCLUSION
We have described in this paper efficient techniques for an iris recognition system with high performance from the practical point of view. These techniques are: a method of evaluating the quality of an image in the image-acquisition step and excluding it from subsequent processing if it is not appropriate; a computer-graphics algorithm for detecting the centre of the pupil and localizing the iris area in an eye image; a transformation of the localized iris area into a simple coordinate system; a compact and efficient feature-extraction method based on the 2D multiresolution wavelet transform; and a matching process based on the Hamming distance function between the input code and the registered iris codes. The recognition rate of the system is about 97.3%.
REFERENCES
1. A. A. Onsy et al., 2001. A New Algorithm for Locating the Boundaries of the Human Iris. 1st IEEE International Symposium on Signal Processing and Information Technology, December 28-30, Hilton Ramses, Cairo, Egypt.
2. A. Julian, 2000. Biometrics: Advanced Identity Verification. Springer-Verlag.
3. B. J. Erik, 1999. Overview of the Biometric Identification Technology Industry. A presentation to the IBIA Conference: Defending Cyberspace '99, http://www.ibia.org
4. F. Kagan et al., 1998. A Compact High-Speed Hamming Distance Comparator for Pattern Matching Applications. http://turquoise.wpi.edu
5. G. Kee et al., 2001. Improved Techniques for an Iris Recognition System with High Performance. Lecture Notes in Artificial Intelligence, LNAI 2256, pp. 177-181.
6. G. O. Williams, 1997. Iris Recognition Technology. IEEE Aerospace and Electronics Systems Magazine, vol. 12, no. 4, pp. 23-29.
7. J. G. Daugman, 1994. Biometric Personal Identification System Based on Iris Analysis. U.S. patent 5.
8. J. G. Daugman, 1993. High Confidence Visual Recognition of Persons by a Test of Statistical Independence. IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, no. 11, pp. 1148-1161.
9. J. G. Daugman, 1998. Recognizing Persons by their Iris Patterns. In Biometrics: Personal Identification in Networked Society. Kluwer, pp. 103-121.
10. J. L. Wayman, 1999. Technical Testing and Evaluation of Biometric Identification Devices. In Biometrics: Personal Identification in Networked Society (A. Jain, R. Bolle, S. Pankanti, editors), Kluwer, Dordrecht, pp. 345-368.
11. Li Ma et al., 2002. Iris Recognition Based on Multichannel Gabor Filtering. ACCV2002: The 5th Asian Conference on Computer Vision, 23-25 January, Melbourne, Australia.
12. P. Jablonski et al., 2002. People Identification on the Basis of Iris Pattern Image Processing and Preliminary Analysis. International Conference MIEL'2002.
13. P. W. Hallinan, 1991. Recognizing Human Eyes. SPIE Proc. Geometric Methods in Computer Vision, 1570, pp. 214-226.
14. R. Wildes, 1997. Iris Recognition: An Emerging Biometric Technology. Proceedings of the IEEE, vol. 85, no. 9, September.
15. R. Kevin, 2001. E-Security for E-Government. A Kyberpass Technical White Paper, April 2001, www.kyberpass.com
16. S. Lim et al., 2001. Efficient Iris Recognition through Improvement of Feature Vector and Classifier. ETRI Journal, vol. 23, no. 2, June.
17. S. E. Umbaugh, 1998. Computer Vision and Image Processing: A Practical Approach Using CVIPtools. NJ: Prentice-Hall.
18. W. W. Boles et al., 1998. A Human Identification Technique Using Images of the Iris and Wavelet Transform. IEEE Trans. on Signal Processing, vol. 46, pp. 1185-1188, April.