
2017 IEEE International Conference on Telecommunications and Photonics (ICTP)

26-28 December, 2017, Dhaka, Bangladesh

Iris Recognition using Machine Learning from Smartphone Captured Images in Visible Light

Md. Fahim Faysal Khan,* Ahnaf Akif, and M. A. Haque
Department of Electrical and Electronic Engineering
Bangladesh University of Engineering and Technology
Dhaka-1205, Bangladesh
*[email protected]

Abstract—This work shows the applicability and feasibility of different machine learning techniques for iris recognition from smartphone-captured eye images. First, the iris is localized using the popular Daugman's method, and the eyelids are suppressed with the Canny edge detection technique. Then normalization of the extracted iris region is performed, and occluding noise is handled in a novel way by setting an adaptive threshold. Next, the normalized image is decomposed using Haar wavelets to obtain the feature vectors. Histogram equalization is performed for better classification accuracy. After that, different classifiers are trained using the extracted feature vectors, yielding about 99.7% accuracy for training and 97% accuracy for testing. Finally, the results are compared with other methods previously applied to the same dataset, and the proposed method is found to outperform most of them.

Index Terms—Iris recognition, machine learning, visible spectrum, eyelash removal, smartphone

I. INTRODUCTION

The human iris is well known for its uniqueness, stability and non-invasiveness [1]. Hence iris recognition is a very popular problem for researchers in the fields of bioinformatics, cryptography, computational intelligence, etc. Many successful approaches have been taken so far. These approaches can be classified into two categories based on whether or not they use machine learning. A significant point to notice is that the datasets used in these approaches consist of iris images captured by NIR (near-infrared) cameras, as such cameras offer very good visibility of iris texture, even for heavily pigmented irises [2]. As a result, the extracted iris region provides more accurate information, implying better chances for recognition. However, the setup with the above-mentioned cameras is complex, especially when portability and simplicity are at issue. On the other hand, smartphones with cameras are within everyone's reach nowadays. The only problem with these cameras is that they capture images in the visible light spectrum, resulting in less detailed iris images compared to NIR cameras. So the usual question arises: "Are they good enough for iris recognition?"

A good number of studies [2,8] replied positively to that question. One thing to notice, however, is that almost all the above-mentioned approaches paid little or no attention to the feasibility or usability of machine learning techniques on smartphone-captured iris databases. This is important because machine learning techniques have provided very good results on NIR-camera-captured datasets [3]. As iris images in visible light are likely to offer relatively less detail, the question of the applicability of ML techniques in this case still remains unanswered. One study, by Raja et al. [4], used a Sparse Reconstruction Classifier with K-means clustering, which gave a very low EER (Equal Error Rate). This is, in other words, a very good indication, but it does not compare or give any further information on the feasibility of other machine learning techniques.

In our work, we investigate further the use of machine learning techniques for iris recognition using smartphone-captured iris images in the visible light spectrum. To do so, we develop a complete segmentation and feature extraction technique and use the same set of extracted features to train different classifiers. Finally, we compare the classification accuracy of the trained classifiers and decide whether machine learning techniques are feasible for a smartphone-captured database.

II. RELATED WORKS

Several works have used the publicly available datasets UBIRISv1 [5], UBIRISv2 [6], MICHE [7], etc., containing iris images in the visible light spectrum. The challenges of iris recognition with unconstrained iris images in visible light were discussed by Proenca et al. [8]; noisy iris images and independent segmentation and noise recognition procedures are most likely the sources of errors. Santos et al. [9] explored the best illumination configurations for visible light iris images.

Raja et al. [10] used deep sparse filtering with the visible spectrum iris datasets VSSIRIS and BIPLab and obtained a very promising result (EER less than 2%).

Another interesting study, by Trokielewicz et al. [2], for which a completely new dataset was created, showed that iris images captured with a mobile phone offer sufficient visibility of iris texture details for all levels of pigmentation. They also showed that these images are readily usable with already available iris recognition solutions such as VeriEye [11], MIRLIN [12], OSIRIS [13], IriCore [14], etc. All of these algorithms offered more than 95% accuracy on the dataset.

Machine learning techniques have also proved to be very successful in iris recognition. A study by De Marsico et al. [3] compared different machine learning techniques for iris recognition. These studies mostly used the CASIA-Iris [15] dataset, which is created from images taken with an NIR camera. Among the different approaches, Rai and Yadav [16] were able to obtain 99% accuracy with a combination of Support Vector Machines and Hamming distance.

III. DATASET

In our study, the dataset created by Trokielewicz [2] and their group was used. In total, 70 people participated, and the photos were taken with an iPhone 5s (8 megapixels, f/2.2). The final dataset comprises about 3192 images acquired in 2 sessions. We used these images for iris recognition. To the best of our knowledge, no one has used machine learning techniques on this dataset before.
in other words a very good indication but it does not compare

978-1-5386-3374-8/17/$31.00 ©2017 IEEE



Fig. 1 The typical components in an eye image [20].

IV. METHODOLOGY

A. Image Pre-processing
The images provided in the dataset are in RGB format. We had to convert them to a single channel to proceed. When converting, the red channel was used: since the wavelengths corresponding to red light (closest to near infrared) are the longest in the visible spectrum, the iris pattern should be most visible this way [2].

B. Iris Localization
For extracting the iris region, the classic Daugman integro-differential operator is used first. The integro-differential operator [17] is defined as

    max(r, x₀, y₀) | G_σ(r) ∗ ∂/∂r ∮(r, x₀, y₀) I(x, y)/(2πr) ds |    (1)

where I(x, y) is the eye image, r is the radius of the search, G_σ(r) is a Gaussian smoothing function, and s is the contour of the circle given by (r, x₀, y₀), i.e. the circle of radius r whose centre is at (x₀, y₀). The operator searches for the circular path where the maximum change in intensity occurs by varying the radius and the centre coordinates x and y of the circular contour. First, the iris boundary is localized, as the maximum gradient is usually there. Then a fine search detects the pupillary boundary. We used variance σ = 0.5 for the Gaussian smoothing function. For a faster run, the image was scaled down to find (r, x₀, y₀); these values were then rescaled to get the coordinates and radius in the original image. Under normal circumstances, this operator successfully localized the iris region. However, if there is any reflection, it might fail locally. So for the fine search, instead of brute force, we adaptively ran the search over a selected set of points inside the iris region, with the pupil radius varying from 10% to 90% of that of the iris. In this way, we were able to successfully localize the iris region of every eye image in our database.

Fig. 2 Iris localization (left: red channel; right: localized iris)

C. Eyelid Suppression
The visible portion of the iris is not exactly circular: it is partly covered by the eyelids, which need to be suppressed. To do so, we followed an approach inspired by Masek [18]. The total search region was divided into two parts, upper eyelid and lower eyelid. The width of the search region is exactly the difference between the iris and pupil radii. First, the edges were detected using Canny edge detection, followed by gamma adjustment and hysteresis thresholding. Finally, the edge image was Radon transformed to obtain the eyelid line for both the upper and lower sections.

Fig. 3 Eyelid suppression (upper and lower eyelid)

D. Normalization
So far we have successfully segmented the iris and suppressed the eyelids. Now we have to transform the result into fixed dimensions for further processing. To do that, we used the very popular homogeneous rubber sheet model introduced by Daugman [17]. In the homogeneous rubber sheet model, each point within the iris region is remapped to a pair of polar coordinates (r, θ), where r is on the interval [0, 1] and θ is an angle in the range [0, 2π].

Fig. 4 Daugman's rubber sheet model

The remapping can be modelled as

    I(x(r, θ), y(r, θ)) → I(r, θ)    (2)

with

    x(r, θ) = (1 − r) x_p(θ) + r x_i(θ)    (3)
    y(r, θ) = (1 − r) y_p(θ) + r y_i(θ)    (4)

where I(x, y) is the iris region, (x, y) are the original Cartesian coordinates, (r, θ) are the corresponding normalized polar coordinates, and (x_p, y_p) and (x_i, y_i) are the coordinates of the pupil and iris boundaries along the θ direction.

Fig. 5 Normalized segmented iris

E. Eyelash Removal
Even though eyelash removal is a part of noise cancellation, in our work it was done after normalization. Developing a method for it was a tough job, as the eyelashes differ largely from image to image. The most obvious option was applying a threshold. But if a hard threshold value is applied, there is no guarantee that it will work for every image: in some images the iris region is darker than in others. So we had to develop an adaptive algorithm to set the threshold value for each image separately. We did this by analysing the histogram of the normalized iris image. As the eyelashes are usually the darkest parts of the image, histogram inspection led us to the pixel values of those parts. These pixel values were used as thresholds to detect the occluded regions. The detected pixels were set to "0" at first and then restored from the non-occluded regions in the neighbourhood of those pixels.
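As a concrete illustration, the coarse circular search of Eq. (1) can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation: it samples the contour integral at discrete angles, differentiates the radial profile, smooths it with a small Gaussian (σ = 0.5, as in the text), and returns the circle with the strongest response. The synthetic image, candidate grid and radius range are assumptions chosen for illustration.

```python
import numpy as np

def contour_mean(img, x0, y0, r, n_angles=120):
    """Mean intensity over the circle of radius r centred at (x0, y0):
    the discretized contour integral I/(2*pi*r) ds of Eq. (1)."""
    t = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    xs = np.clip(np.rint(x0 + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.rint(y0 + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def daugman_search(img, centers, radii, sigma=0.5):
    """Coarse integro-differential search: maximize the Gaussian-smoothed
    derivative of the contour integral with respect to the radius."""
    k = np.exp(-0.5 * (np.arange(-2, 3) / sigma) ** 2)  # small G_sigma kernel
    k /= k.sum()
    best, best_score = None, -np.inf
    for (x0, y0) in centers:
        profile = np.array([contour_mean(img, x0, y0, r) for r in radii])
        deriv = np.abs(np.convolve(np.diff(profile), k, mode='same'))
        i = int(np.argmax(deriv))
        if deriv[i] > best_score:
            best_score = deriv[i]
            best = (x0, y0, radii[min(i + 1, len(radii) - 1)])
    return best

# Synthetic "eye": a dark disc (iris) of radius 30 at (50, 50) on a bright field.
yy, xx = np.mgrid[0:100, 0:100]
img = np.where((xx - 50) ** 2 + (yy - 50) ** 2 <= 30 ** 2, 60.0, 200.0)
centers = [(x, y) for x in range(44, 57, 2) for y in range(44, 57, 2)]
x0, y0, r = daugman_search(img, centers, list(range(10, 45)))
```

The same routine, restricted to points inside the found iris and to radii of 10–90% of the iris radius, would serve as the fine pupillary search described above.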
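The remapping of Eqs. (2)–(4) is straightforward to prototype. The sketch below is ours, not the paper's code; nearest-neighbour sampling and concentric pupil/iris circles are simplifying assumptions. It unwraps the annulus between the pupil and iris boundaries into the fixed 64 x 512 strip used later.

```python
import numpy as np

def rubber_sheet(img, cx, cy, r_pupil, r_iris, n_r=64, n_theta=512):
    """Daugman's rubber sheet model: remap the iris annulus to an
    n_r x n_theta rectangle via Eqs. (2)-(4), assuming the pupil and
    iris circles share the centre (cx, cy)."""
    r = np.linspace(0.0, 1.0, n_r).reshape(-1, 1)            # r in [0, 1]
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    # Boundary points x_p(theta), y_p(theta) and x_i(theta), y_i(theta).
    xp, yp = cx + r_pupil * np.cos(theta), cy + r_pupil * np.sin(theta)
    xi, yi = cx + r_iris * np.cos(theta), cy + r_iris * np.sin(theta)
    # Eqs. (3) and (4): linear interpolation between the two boundaries.
    x = (1.0 - r) * xp + r * xi
    y = (1.0 - r) * yp + r * yi
    xs = np.clip(np.rint(x).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.rint(y).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]        # Eq. (2): I(x(r, theta), y(r, theta)) -> I(r, theta)

# Toy image whose value is the distance from (60, 60), so each output row
# of the strip should be roughly constant at its sampling radius.
yy, xx = np.mgrid[0:120, 0:120]
img = np.sqrt((xx - 60.0) ** 2 + (yy - 60.0) ** 2)
strip = rubber_sheet(img, 60, 60, r_pupil=15, r_iris=50)
```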
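A minimal version of the adaptive eyelash threshold can be sketched as follows. The paper only states that the per-image threshold is read off the histogram of the normalized image, so the specific rule used here (cutting at the first empty bin after the dark mode) and the row-wise neighbour refill are our assumptions.

```python
import numpy as np

def remove_eyelashes(strip, n_bins=64):
    """Adaptive eyelash suppression: pick a per-image threshold from the
    dark end of the histogram, mark pixels below it as occluded, then
    refill them from the non-occluded pixels of the same row."""
    hist, edges = np.histogram(strip, bins=n_bins)
    dark_mode = int(np.argmax(hist[: n_bins // 4]))       # eyelashes are darkest
    # Assumed rule: threshold just past the first valley after the dark mode.
    valley = dark_mode + int(np.argmin(hist[dark_mode : n_bins // 2]))
    thresh = edges[valley + 1]
    mask = strip < thresh                                  # occluded pixels
    out = strip.astype(float).copy()
    for i, j in zip(*np.nonzero(mask)):
        row = out[i][~mask[i]]
        if row.size:
            out[i, j] = row.mean()      # restore from non-occluded neighbours
    return out, mask

rng = np.random.default_rng(0)
strip = rng.uniform(120, 180, size=(64, 512))              # iris texture
strip[:8, 100:140] = rng.uniform(5, 20, size=(8, 40))      # dark "eyelash" band
clean, mask = remove_eyelashes(strip)
```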

Fig. 6 Eyelash removal (removed eyelashes filled with approximate values)

F. Histogram Equalization
Once the iris region has been segmented and normalized and the noise has been removed, the relevant texture and intensity information needs to be extracted to train a classifier. But before doing that, histogram equalization was performed on the normalized images. This is because histogram analysis of the normalized images revealed that the image intensities were congested in a very small region, making it harder for the classifier to differentiate. In our study, we found that histogram equalization improved the recognition and training accuracy by over 2%.

Fig. 7 Before histogram equalization

Fig. 8 After histogram equalization

G. Feature Extraction
A typical iris consists of many complex patterns, such as arching ligaments, furrows, ridges, crypts, rings, corona, freckles and a zigzag collarette. These complex patterns are very complicated to extract; that is why we chose to train with the image itself. So far we have a normalized image of size 64 x 512 pixels. If we want to train the classifier with this amount of data, it will be too heavy and training will take a very long time. So we need to scale it down. But scaling may result in loss of important information.

The solution to this problem lies in wavelet decomposition, as wavelets carry localized frequency data, i.e. features having the same resolution can be matched up. When a 2-D wavelet transformation is applied to an image, it decomposes it into 4 segments: LL, LH, HL and HH. LL is called the approximation of the image; LH is the horizontal detail, HL is the vertical detail, and HH represents the diagonal detail of the image. Most of the energy and information is contained within the LL coefficients, so these are our desired values. Figure 9 shows the wavelet-decomposed iris image after two stages. Finally, we were able to make the image ready for training after 3 successive stages of wavelet decomposition. The LL3 coefficients were taken, which contained 8 x 64 = 512 features. For decomposition, Haar wavelets were used.

Fig. 9 Wavelet decomposition after 2 stages (LL2 coefficients)

The extracted feature vector was 2-D with dimension 8 x 64. Before training the classifier, it was converted to a 1-D vector of length 512 by placing the rows side by side (Fig. 10).

Fig. 10 1-D feature vector of size 512 (the rows A–H of the 8 x 64 matrix concatenated into 1 x 512)

H. Training Classifier
In the given database, we had eye images of 70 people. For training the model, we took 5 images per person, and the rest of the images were kept aside for testing the classifier. For training, we used the 5-fold cross-validation method so that each image in the training set is tested once against the others. We tried several classifiers, and among them support vector machines, k-nearest neighbours, linear discriminants, etc. showed great promise. The results are summarized in the next section.
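Histogram equalization itself (Section F) is standard; a self-contained CDF-based version for 8-bit images, written here as an illustrative sketch rather than the authors' code, is:

```python
import numpy as np

def hist_equalize(img):
    """Classic histogram equalization for an 8-bit greyscale image:
    remap intensities through the normalized cumulative histogram so
    that the occupied levels spread across the 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())     # normalize to [0, 1]
    lut = np.rint(255.0 * cdf).astype(np.uint8)           # lookup table
    return lut[img]

# Intensities congested in a narrow band, as observed in the paper.
rng = np.random.default_rng(1)
img = rng.integers(100, 140, size=(64, 512)).astype(np.uint8)
eq = hist_equalize(img)
```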
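The three-stage decomposition and flattening of Section G can be reproduced with plain NumPy: one Haar analysis step halves each dimension, and keeping only the approximation band amounts (up to a normalization factor, which we drop here for simplicity) to averaging non-overlapping 2 x 2 blocks. This is our sketch, not the paper's code.

```python
import numpy as np

def haar_ll(img):
    """One 2-D Haar analysis step, keeping only the LL (approximation)
    band: the mean of each non-overlapping 2x2 block (unnormalized)."""
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def iris_features(norm_img, levels=3):
    """64 x 512 normalized iris -> LL3 (8 x 64) -> 1-D vector of
    8 * 64 = 512 features, rows placed side by side as in Fig. 10."""
    ll = norm_img.astype(float)
    for _ in range(levels):
        ll = haar_ll(ll)        # LL1: 32x256, LL2: 16x128, LL3: 8x64
    return ll.reshape(-1)       # row-major flatten: 1 x 512

norm_img = np.arange(64 * 512, dtype=float).reshape(64, 512)
feat = iris_features(norm_img)
```

Three successive 2 x 2 averages reduce each 8 x 8 block of the input to its mean, so the first feature of this ramp image is the mean of the top-left 8 x 8 block.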
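The paper trains SVM, KNN and LDA classifiers with toolbox implementations; as a dependency-free stand-in, the sketch below shows the 5-images-per-subject split of Section H together with a minimal nearest-neighbour classifier over 512-dimensional feature vectors. The synthetic per-subject prototypes and noise level are assumptions for illustration only.

```python
import numpy as np

def split_per_subject(X, y, n_train=5):
    """Take the first n_train samples of each subject for training and
    keep the rest for testing, as in Section H."""
    tr, te = [], []
    for s in np.unique(y):
        idx = np.nonzero(y == s)[0]
        tr.extend(idx[:n_train])
        te.extend(idx[n_train:])
    return np.array(tr), np.array(te)

def nn_predict(Xtr, ytr, Xte):
    """1-nearest-neighbour by squared Euclidean distance."""
    d = ((Xte ** 2).sum(1)[:, None] + (Xtr ** 2).sum(1)[None, :]
         - 2.0 * Xte @ Xtr.T)
    return ytr[np.argmin(d, axis=1)]

# Synthetic stand-in: 70 subjects, 8 images each, 512 features per image.
rng = np.random.default_rng(2)
proto = rng.normal(size=(70, 512))                     # one prototype per iris
y = np.repeat(np.arange(70), 8)
X = proto[y] + 0.1 * rng.normal(size=(70 * 8, 512))    # small intra-class noise
tr, te = split_per_subject(X, y, n_train=5)
acc = float((nn_predict(X[tr], y[tr], X[te]) == y[te]).mean())
```

The same split and feature vectors would feed directly into any off-the-shelf SVM or LDA implementation with 5-fold cross-validation on the training portion.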



I. Results
For training and testing, several classifiers were used: decision trees, discriminant analysis, support vector machines, RUSBoosted trees, k-nearest neighbours, subspace KNN, etc. The following table summarizes the accuracy of the best-performing classifiers:

TABLE I
CLASSIFIER ACCURACY

Classifier            Train Accuracy (%)   Test Accuracy (%)
SVM (Linear Kernel)   99.1                 96.46
SVM (Quadratic)       99.7                 97
KNN                   99.4                 95.1
LDA                   99.4                 94.28

It is evident from the above data that support vector machines give the best results. K-nearest neighbours also performs very well; though its accuracy is slightly lower than that of the SVMs, it takes much less time to train and test. The same is true of the linear discriminant classifier (LDA). The ROC (Receiver Operating Characteristic) curve attained during training for the best model, i.e. the SVM with quadratic kernel, is given below:

Fig. 11 ROC curve for SVM model

It is evident from the results that our initial doubt about the feasibility and applicability of machine learning techniques for iris recognition from smartphone-captured visible light images is well answered by this work.

If we compare our approach and its results with the other approaches already applied to the same dataset, we see that our approach indeed shows great promise.

TABLE II
DIFFERENT APPROACHES' RESULTS ON THE SAME DATASET

Classifier        Accuracy (%)
VeriEye [11]      94.57 [2]
MIRLIN [12]       95.63 [2]
OSIRIS [13]       95.25 [2]
IriCore [14]      99.67 [2]
Proposed Method   97

J. Sources of Errors
The major source of errors in our findings is failure in the segmentation of eye images. Despite our efforts, there were one or two images whose segmentation could not be done properly, and these are the images that were falsely labelled.

Fig. 12 Failure in correct segmentation

Another source of errors is eye images with extremely dark pigments, from which it becomes very difficult to extract distinct information. Other noise sources might be blurred images or images with excessive eyelid/eyelash occlusion.

V. CONCLUSION
In this paper, a machine-learning-based approach to iris recognition from smartphone-captured images is proposed. With the results above, this paper successfully shows that for smartphone-captured visible spectrum iris images, machine learning techniques are as good as the other approaches, and in some cases even better. Still, accuracy can be further improved. In our findings, accuracy largely depends on accurate segmentation, so more robust approaches may be taken to improve the segmentation result. In our approach, we tried to stick to basic segmentation techniques, keeping their easy implementation in mind. As today's smartphones are equipped with very good cameras, the whole recognition system shows great promise for implementation on smartphones for recognition, security and identification purposes. Samsung [19] has already developed a built-in iris scanner which works for the enrolled user. Our next task would be to develop a cloud-based server to which iris data can easily be sent from the smartphone; the classifier will run on the server, and the sent data will be matched and verified. Thus, by just using smartphones, it will be possible to build a full security system.

ACKNOWLEDGEMENT
This work was performed in the laboratories of the Dept. of EEE of BUET. The authors would like to thank the concerned authorities of BUET for providing the facilities and for help in getting the datasets.

REFERENCES
[1] J. Daugman, "How iris recognition works," IEEE Transactions on Circuits and Systems for Video Technology, 2004.
[2] M. Trokielewicz, "Iris Recognition with a Database of Iris Images Obtained in Visible Light Using Smartphone Camera," in The IEEE International Conference on Identity, Security and Behavior Analysis (ISBA 2016), Sendai, Japan, Feb. 2016.
[3] M. De Marsico, A. Petrosino and S. Ricciardi, "Iris recognition through machine learning techniques: A survey," Pattern Recognition Letters, vol. 82, pp. 106-115, 2016.

[4] K. B. Raja, R. Raghavendra and C. Busch, "Smartphone based robust iris recognition in visible spectrum using clustered K-means features," in Biometric Measurements and Systems for Security and Medical Applications (BIOMS) Proceedings, 2014 IEEE Workshop on, IEEE, 2014, pp. 15-21.
[5] H. Proença and L. A. Alexandre, "UBIRIS: A noisy iris image database," in 13th International Conference on Image Analysis and Processing - ICIAP 2005, Cagliari, Italy, Springer, 2005, pp. 970-977.
[6] H. Proença, S. Filipe, R. Santos, J. Oliveira and L. A. Alexandre, "The UBIRIS.v2: A Database of Visible Wavelength Iris Images Captured On-The-Move and At-A-Distance," IEEE Trans. PAMI, vol. 32, pp. 1529-1535, 2010.
[7] M. De Marsico, M. Nappi, D. Riccio and H. Wechsler, "Mobile Iris Challenge Evaluation (MICHE)-I, biometric iris dataset and protocols," Pattern Recognition Letters, vol. 57, pp. 17-23, 2015.
[8] H. Proença and L. A. Alexandre, "The NICE.I: Noisy Iris Challenge Evaluation - Part I," in Biometrics: Theory, Applications, and Systems, IEEE, 2007, pp. 1-4.
[9] G. Santos, M. V. Bernardo, H. Proença and P. T. Fiadeiro, "Iris Recognition: Preliminary Assessment about the Discriminating Capacity of Visible Wavelength Data," in 2010 IEEE International Symposium on Multimedia, IEEE, 2010, pp. 324-329.
[10] K. B. Raja, R. Raghavendra, V. K. Vemuri and C. Busch, "Smartphone based visible iris recognition using deep sparse filtering," Pattern Recognition Letters, vol. 57, pp. 33-42, 2015.
[11] Neurotechnology, "VeriEye SDK, version 4.3".
[12] Smart Sensors Ltd., "MIRLIN SDK, version 2.23," 2013.
[13] G. Sutra, B. Dorizzi, S. Garcia-Salicetti and N. Othman, "A biometric reference system for iris: OSIRIS," 2014.
[14] IriTech Inc., "IriCore Software Developers Manual," 2013.
[15] "CASIA Iris Image Database," http://www.sinobiometrics.com.
[16] H. Rai and A. Yadav, "Iris recognition using combined support vector machine and Hamming distance approach," Expert Systems with Applications, vol. 41, pp. 588-593, 2014.
[17] J. Daugman, "High confidence visual recognition of persons by a test of statistical independence," 1993.
[18] L. Masek and P. Kovesi, "MATLAB Source Code for a Biometric Identification System Based on Iris Patterns," The School of Computer Science and Software Engineering, The University of Western Australia, 2003.
[19] http://www.samsung.com/global/galaxy/galaxy-s8/security/
[20] Courtesy: https://www.pinterest.com/
