
2021 9th International Conference on Information and Education Technology

Design and Analysis of Deep-Learning Based Iris Recognition Technologies by Combination of U-Net and EfficientNet
Cheng-Shun Hsiao, Chih-Peng Fan, and Yin-Tsung Hwang
978-1-6654-1933-8/21/$31.00 ©2021 IEEE | DOI: 10.1109/ICIET51873.2021.9419589

Department of Electrical Engineering, National Chung Hsing University, Taiwan


email: [email protected], [email protected], [email protected]

Abstract—In this paper, an effective deep-learning based methodology is developed for iris biometric authentication. Firstly, based on the U-Net model, the proposed system uses semantic segmentation technology to localize and extract the region of interest (ROI) of the iris. After the ROI of the iris in the eye image is revealed, the input eye image is cropped to a small-size eye image that just fits the iris ROI. Then, the iris features of the cropped eye image are optionally strengthened by adaptive histogram equalization or a Gabor filtering process. Finally, the cropped iris image is classified by the EfficientNet model. On the Chinese Academy of Sciences Institute of Automation (CASIA) v1 database, the proposed deep-learning based iris recognition scheme reaches recognition accuracies of up to 98%. Compared with previous works, the proposed technology can provide effective iris recognition accuracy for biometric applications that use iris information.

Keywords—deep learning, U-Net, EfficientNet, iris recognition, biometric authentication

I. INTRODUCTION

With the development of AI, the digitization of signatures has become an important link, and personal identification plays a vital role in authentication. Because personal information may be revealed on the Internet, data security has also become highly valued. In the past, the traditionally used identification methods included keys, magnetic cards, RFID, etc.; however, these methods were far less unique than biometric identification and were easy to forge. Recently, biometric identification has been widely applied for authentication. These technologies use biometric information, including finger veins, fingerprints, iris, sclera, etc.; among them, the iris is the most representative because it is not easy to forge, and the personal iris feature has strong stability and is also highly unique.

Fig. 1 shows the traditional authentication design with iris recognition. The process of the traditional iris recognition method is: firstly, it locates the iris position, obtains the iris information, and finally performs the feature matching. The design procedure was developed by Daugman [1], [2]. The located and extracted iris image was processed by a Gabor filter, and then the Hamming distance was used as the authentication judgment. After that, several different methods sprang up to improve the accuracy or reliability [3]-[14].

Fig. 1. The traditional iris recognition design flow

Most of the traditional methods used edge detection to locate the pupil and iris boundaries; some researchers modeled the pupil and iris as concentric circles and then used the Hough circle transform to detect the position of the iris, while others used an ellipse fitting method. Regardless of the method, the main idea of the previous designs was to find the position of the iris. After finding the position of the iris, the iris was normalized into a rectangular image. Some methods were applied for extracting the iris zone, e.g. combining the fan-shaped iris areas on both sides together to become the target image, or removing the eyelid area from the annular iris, performing normalization, and then comparing the extracted features of the iris images. Fig. 2 illustrates the iris regions that will be used for normalization by the method in [14], and Fig. 3 reveals the normalization result by the method in [14]. With the development of machine learning, some researchers matched iris features by using SVM classification.

Besides the traditional methods, with the vigorous development of deep learning technologies in recent years, some researchers have used deep learning methods to provide the identification function, and the deep learning methods in [11]-[17] perform the functions of classification, object detection, semantic segmentation, etc. A classifier recognizes the types of objects, but if too many types appear in the same image, it cannot judge them; an object detector can find the location of a marked object in an image, but it cannot accurately find the boundary of the object; semantic segmentation can find an accurate contour of the object, but it needs a relatively large amount of computation. In this work, deep-learning based designs are studied for iris recognition by a combination of U-Net and EfficientNet. The remaining details are described in the following sections.

Fig. 2. The iris regions that will be used for normalization by the method in [14]

Fig. 3. The normalization result by the method in [14]

II. RELATED WORK

Recently, many classifiers based on CNN networks have been proposed, such as VGG16 [11], EfficientNet [12], ResNet50 [13], etc. These CNN models show very high accuracy in classification tasks. In [9], not only is a classifier used to classify the iris, but the iris image is also normalized in different ways. After locating the pupil's position, and according to the pupil's edge position, the iris zone on both sides is normalized into a square image, which is suitable for iris classification.

In the deep learning process, in addition to classification, object detection and semantic segmentation also play very important roles. Semantic segmentation identifies the precise position of an object in an image and assigns each pixel a category label. Since object detection only needs to predict the approximate position of an object, semantic segmentation requires more computation than object detection, but it provides a more accurate object contour. Semantic segmentation can be applied to fields such as autonomous driving, geological inspection, and clothing classification [15], [16].

III. THE PROPOSED DEEP-LEARNING BASED SCHEME

The steps of the proposed iris recognition method are described as follows. Firstly, the system extracts and segments the region of interest (ROI) of the iris by the deep-learning based U-Net [17] detector. Then, based on the segmented iris zone, the iris center is estimated for the optional normalization process. Next, the system strengthens the features of the iris image by histogram equalization [18], contrast limited adaptive histogram equalization (CLAHE) [19], or a Gabor filtering process [20]. Finally, the proposed system classifies the pre-processed iris image by the deep-learning based EfficientNet. Fig. 4 depicts the processing flow of the proposed deep-learning based methodology, and the CASIA-v1 [21] dataset of eye images is used for the experimental inputs.

Fig. 4. The proposed iris recognition design flow: input eye image (256x256 pixels) → segmentation of iris ROI (U-Net) → location of iris center → crop/normalization → features enhancement → classification (EfficientNet)
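For illustration, the following is a minimal Python sketch of the flow in Fig. 4. It is a sketch under assumptions rather than the exact implementation: unet and effnet stand for already-trained U-Net and EfficientNet models (hypothetical names), the ROI crop is taken as the bounding box of the predicted mask, and CLAHE is used as the optional enhancement; the individual stages are detailed in the following subsections.

import cv2
import numpy as np

def recognize_iris(eye_bgr, unet, effnet):
    """Sketch of the proposed flow: U-Net ROI segmentation, cropping,
    optional enhancement, and EfficientNet classification."""
    # 1) Resize the eye image to the U-Net input size (256x256), convert to grayscale.
    gray = cv2.cvtColor(cv2.resize(eye_bgr, (256, 256)), cv2.COLOR_BGR2GRAY)

    # 2) Semantic segmentation of the iris ROI with the U-Net model.
    prob = unet.predict(gray[None, ..., None] / 255.0, verbose=0)[0, ..., 0]
    mask = (prob > 0.5).astype(np.uint8)

    # 3) Crop the eye image to the just-fitted bounding box of the iris ROI.
    ys, xs = np.nonzero(mask)
    roi = gray[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

    # 4) Optional features enhancement (CLAHE here; HistEq/Gabor are alternatives).
    roi = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(roi)

    # 5) Classify the pre-processed iris image with EfficientNet (224x224 input);
    #    scale the pixels according to how the classifier was trained.
    roi = cv2.resize(roi, (224, 224))
    x = np.repeat(roi[None, ..., None], 3, axis=-1).astype(np.float32)
    return int(np.argmax(effnet.predict(x, verbose=0)[0]))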

A. Extraction of Iris Zone & Location of Iris Center

The image size of the CASIA-v1 dataset is 320x280 pixels. Firstly, the eye image is reduced to 256x256 pixels to fit the U-Net input size. Fig. 5 illustrates the network architecture of U-Net [17]. The U-Net is a well-known CNN-based model for semantic segmentation, named after its U-shaped overall architecture. The U-Net achieved good recognition results on the ISBI dataset, which is a biological dataset; because the iris also has biological features, the U-Net model is selected for this work. However, the prediction result of the semantic segmentation occasionally includes some noise, so we use a morphology process to filter out the noise shown in Fig. 6(a). After the noise filtering process, the pupil zone is found completely, and Fig. 6(b) shows the result after the morphology filtering process. Then, the pupil's center and radius can be estimated through an eye filter with Haar features. Compared with traditional methods of estimating the pupil's center, this scheme is less susceptible to interference from light, shadow, or noise.
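A minimal OpenCV sketch of this step is given below. It assumes the binary mask predicted by U-Net as input and, as a simplification, replaces the Haar-feature eye filter with a minimum-enclosing-circle fit on the largest connected region; the kernel size is an illustrative assumption.

import cv2

def clean_mask(mask):
    """Morphology filtering of the U-Net prediction (cf. Fig. 6):
    opening removes small false-positive specks, closing fills holes."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)

def locate_pupil_center(mask):
    """Estimate the pupil center and radius from the cleaned mask; the
    paper uses a Haar-feature based eye filter, this sketch does not."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), r = cv2.minEnclosingCircle(largest)
    return int(cx), int(cy), int(r)

# Example usage with a 0/255 mask derived from the U-Net prediction:
# cx, cy, r = locate_pupil_center(clean_mask(mask))
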
B. Normalization Process

The separated iris zone is then normalized into a square image according to the pupil's position. The system separates the iris image by removing the unnecessary black background. Since the position of the pupil and the boundary of the iris are already available, the iris zone can be easily normalized. According to the location of the pupil, the system extracts a certain amount of iris pixels to the left and right, and merges the selected iris regions on both sides to form a normalized rectangular image. Fig. 7 reveals the result of the normalization process.
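A rough Python sketch of this normalization is shown below; it assumes the pupil center (cx, cy) and radius r estimated in the previous step, and the strip width band is a hypothetical parameter that is not specified in the paper.

import numpy as np

def normalize_iris(gray, cx, cy, r, band=32):
    """Sketch: take rectangular iris strips immediately to the left and
    right of the pupil (avoiding the eyelid/eyelash regions above and
    below) and merge them into one normalized rectangle image."""
    h, w = gray.shape
    top, bottom = max(cy - r, 0), min(cy + r, h)

    left = gray[top:bottom, max(cx - r - band, 0):max(cx - r, 0)]
    right = gray[top:bottom, min(cx + r, w):min(cx + r + band, w)]

    # Merge the two side regions into a single rectangle (cf. Fig. 7).
    return np.hstack([left, right])
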
C. Features Enhancement

Next, histogram equalization, CLAHE, or a Gabor filtering process is used to enhance the iris features, and we compare the performance differences caused by the different features-enhancement methods. The advantage of the normalization process is that, by merging the partial iris rectangle zones on both sides, it does not cause deformation and it avoids interference from the eyelids and eyelashes. After this process, the system reduces the normalized image to 224x224 pixels for the subsequent feature matching by the CNN-based classifier.
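A compact OpenCV sketch of the three enhancement modes (plus the no-enhancement case) is given below; the CLAHE clip limit and the Gabor kernel parameters are illustrative assumptions, since the paper does not list the exact values.

import cv2

def enhance(img, mode="clahe"):
    """Sketch of the compared features-enhancement options for an 8-bit
    grayscale iris image; parameter values are assumptions."""
    if mode == "histeq":                      # global histogram equalization
        return cv2.equalizeHist(img)
    if mode == "clahe":                       # contrast limited adaptive HE
        return cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(img)
    if mode == "gabor":                       # Gabor filtering
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=0.0,
                                    lambd=10.0, gamma=0.5, psi=0.0)
        return cv2.filter2D(img, -1, kernel)
    return img                                # "NonFE": no enhancement

# Example: enhanced = cv2.resize(enhance(norm_img, "gabor"), (224, 224))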

Fig. 5. The network architecture of U-Net: a contracting path of 3x3 convolutions (64, 128, 256, 512, and 1024 filters) with 2x2 max-pooling, and an expanding path of 2x upsampling, up-convolutions, and concatenations with the corresponding encoder features, ending in a 1x1 convolution output layer
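For reference, a minimal Keras sketch that matches the structure of Fig. 5 is given below; the filter counts (64 to 1024) follow the figure, while the padding, activations, and the sigmoid single-channel mask output are assumptions not stated in the paper.

import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions, as in each stage of Fig. 5.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(input_shape=(256, 256, 1)):
    inputs = layers.Input(shape=input_shape)
    skips, x = [], inputs
    for f in (64, 128, 256, 512):                     # contracting path
        x = conv_block(x, f)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, 1024)                           # bottleneck
    for f, skip in zip((512, 256, 128, 64), reversed(skips)):  # expanding path
        x = layers.UpSampling2D(2)(x)
        x = layers.Conv2D(f, 2, padding="same", activation="relu")(x)  # up-conv
        x = layers.Concatenate()([x, skip])
        x = conv_block(x, f)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)    # iris-mask output
    return Model(inputs, outputs)

unet = build_unet()
unet.compile(optimizer="adam", loss="binary_crossentropy")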

Fig. 6. (a) Before and (b) after the morphology filtering process

Fig. 7. The result of the normalization process

D. Features Classification

After the above image processing, the eye image is fed forward to the CNN-based classifier, i.e. EfficientNet, for feature classification and identification. Finally, the system judges whether or not the features of the input iris match, according to the predicted probability. Fig. 8 depicts the backbone architecture of the used EfficientNet.

Fig. 8. Backbone of EfficientNet: a 3x3 convolution, a stack of MBConv blocks with 3x3 and 5x5 kernels, a 1x1 convolution, and a softmax output layer
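A minimal Keras sketch of this classification stage is shown below. It assumes the EfficientNet-B0 variant with a softmax identity head; the paper does not state which EfficientNet scale or which training hyper-parameters were used, so these choices are illustrative.

import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import EfficientNetB0

def build_iris_classifier(num_classes):
    """EfficientNet backbone (the MBConv stack of Fig. 8) followed by a
    softmax layer that predicts the iris identity."""
    backbone = EfficientNetB0(include_top=False, weights="imagenet",
                              input_shape=(224, 224, 3), pooling="avg")
    x = layers.Dropout(0.2)(backbone.output)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return Model(backbone.input, outputs)

# CASIA-v1 is commonly reported to contain 108 eye classes, so the head
# here predicts one of 108 identities; the predicted probability is then
# used for the accept/reject judgment.
model = build_iris_classifier(num_classes=108)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])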

IV. COMPARISONS AND EXPERIMENTAL RESULTS

In our experiments, after semantic segmentation, the image data before feature classification is divided into three types: (1) the segmented iris ROI image with the black border (i.e. not cropped), (2) the segmented iris ROI image without the black border (i.e. cropped), and (3) the segmented iris ROI image with the normalization process. These three types of images are then feature-enhanced with three different image processing methods: histogram equalization, CLAHE, and Gabor filtering.
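The combinations therefore form a 3 x 4 experimental grid: each of the three image types is tested without enhancement and with each of the three enhancement methods. A trivial Python snippet that enumerates the twelve modes reported in Table II is shown below; the naming follows the table rows.

from itertools import product

image_types = ("NonCrop", "Crop", "Normalization")
enhancements = ("NonFE", "HistEq", "CLAHE", "Gabor")

# Twelve experimental modes, matching the rows of Table II.
modes = [t + "(NonFE)" if e == "NonFE" else t + "+" + e
         for t, e in product(image_types, enhancements)]
print(modes)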

TABLE I. THE PROCESSING RESULTS BY THE PROPOSED METHODOLOGIES
(sample output images, omitted here; rows: NonCrop, Crop, Normalization process; columns: Non-Features-Enhancement (NonFE), Histogram Equalization, CLAHE, Gabor Filtering)

TABLE II. COMPARISON RESULTS OF RECOGNITION PERFORMANCE BY THE PROPOSED METHODOLOGIES

  Experimental method        Recognition Accuracy (%)
  NonCrop(NonFE)             96.5
  NonCrop+HistEq             96.2
  NonCrop+CLAHE              97.6
  NonCrop+Gabor              96.5
  Crop(NonFE)                97.9
  Crop+HistEq                97.2
  Crop+CLAHE                 96.5
  Crop+Gabor                 92.1
  Normalization(NonFE)       95.5
  Normalization+HistEq       93.5
  Normalization+CLAHE        72.6
  Normalization+Gabor        96.2

The processing results are listed in Table I, and Table II lists the performance comparison results of the proposed methodologies. According to the experimental results in Table II, the cropped iris image without features enhancement, i.e. the "Crop(NonFE)" mode, retains the most iris feature information for matching personal identities. When the iris image is feature-enhanced, some of the detailed iris feature information of the original eye image may be lost. In addition, the normalized iris image with Gabor filtering, i.e. the "Normalization+Gabor" mode, also performs effectively; however, after normalization, some iris texture information is lost, and the recognition accuracy of the "Normalization+Gabor" mode is therefore slightly lower than that of the "Crop(NonFE)" mode.

V. CONCLUSIONS

In this paper, by a combination of U-Net and EfficientNet, an effective deep-learning based scheme is studied for iris biometric authentication. Firstly, the system extracts and segments the ROI of the iris by the U-Net based detector. Then, from the segmented iris zone, the iris center is estimated for the optional normalization process. Next, the system strengthens the features of the iris image. Finally, the system classifies the pre-processed iris image by the EfficientNet-based model. On the CASIA-v1 database, the proposed deep-learning based iris recognition scheme reaches recognition accuracies of up to 98%. According to the experimental results, the cropped iris image without features enhancement retains more iris information for matching iris features; if the iris image is feature-enhanced, some of the detailed iris features of the original eye image may be lost.

ACKNOWLEDGMENT

This work was financially supported by the Ministry of Science and Technology (MOST) under Grant No. MOST 108-2218-E-005-017.

REFERENCES

[1] J. G. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1148-1161, Nov. 1993, doi: 10.1109/34.244676.
[2] J. G. Daugman, "How iris recognition works," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21-30, 2004.
[3] Y. Nakashima and Y. Kuroki, "SIFT feature point selection by using image segmentation," International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), pp. 275-280, 2017.
[4] T. D. Yang et al., "Effective Scale-Invariant Feature Transform Based Iris Matching Technology for Identity Identification," IEEE International Conference on Consumer Electronics-Taiwan, Taichung City, Taiwan, May 2018.
[5] Feddaoui Nadia and Kamel Hamrouni, "An efficient and reliable algorithm for iris recognition based on Gabor filter," International Multi-Conference on Systems, Signals and Devices, April 2009.
[6] Hunny Mehrotra, Banshidhar Majhi, and Pankaj Kumar Sa, "Unconstrained iris recognition using F-SIFT," 8th International Conference on Information, Communications & Signal Processing, Singapore, 13-16 Dec. 2011.
[7] Z. Zhao and A. Kumar, "An Accurate Iris Segmentation Framework Under Relaxed Imaging Constraints Using Total Variation Model," IEEE International Conference on Computer Vision (ICCV), Santiago, 2015, pp. 3828-3836, doi: 10.1109/ICCV.2015.436.
[8] S. Arora and M. P. S. Bhatia, "A Computer Vision System for Iris Recognition Based on Deep Learning," IEEE 8th International Advance Computing Conference (IACC), Greater Noida, India, 2018, pp. 157-161, doi: 10.1109/IADCC.2018.8692114.
[9] A. Uhl and P. Wild, "Weighted adaptive Hough and ellipsopolar transforms for real-time iris segmentation," The 5th IAPR International Conference on Biometrics (ICB), New Delhi, 2012, pp. 283-290, doi: 10.1109/ICB.2012.6199821.
[10] A. A. B. Shirazi and L. Nasseri, "A Novel Algorithm to Classify Iris Image Based on Differential of Fractal Dimension by Using Neural Network," International Conference on Advanced Computer Theory and Engineering, Phuket, 2008, pp. 181-185, doi: 10.1109/ICACTE.2008.148.
[11] S. Liu and W. Deng, "Very deep convolutional neural network based image classification using small training sample size," The 3rd IAPR Asian Conference on Pattern Recognition (ACPR), Kuala Lumpur, 2015, pp. 730-734, doi: 10.1109/ACPR.2015.7486599.
[12] Mingxing Tan and Quoc V. Le, "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks," International Conference on Machine Learning, 2019, arXiv:1905.11946.
[13] M. Tan, R. Pang, and Q. V. Le, "EfficientDet: Scalable and Efficient Object Detection," IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 2020, pp. 10778-10787, doi: 10.1109/CVPR42600.2020.01079.
[14] T. Le-Tien, H. Phan-Xuan, P. Nguyen-Duy, and L. Le-Ba, "Iris-based Biometric Recognition using Modified Convolutional Neural Network," International Conference on Advanced Technologies for Communications (ATC), Ho Chi Minh City, 2018, pp. 184-188, doi: 10.1109/ATC.2018.8587560.
[15] Yingda Xia et al., "Synthesize then Compare: Detecting Failures and Anomalies for Semantic Segmentation," arXiv:2003.08440v2 [cs.CV], 8 Sep. 2020.
[16] Guolei Sun et al., "Mining Cross-Image Semantics for Weakly Supervised Semantic Segmentation," arXiv:2007.01947v2 [cs.CV], 8 Jul. 2020.
[17] Olaf Ronneberger et al., "U-Net: Convolutional Networks for Biomedical Image Segmentation," arXiv:1505.04597v1 [cs.CV], 18 May 2015.
[18] Histogram equalization. [Online]. Available: https://en.wikipedia.org/wiki/Histogram_equalization
[19] Contrast limited adaptive histogram equalization (CLAHE). [Online]. Available: https://en.wikipedia.org/wiki/Adaptive_histogram_equalization
[20] Gabor filter. [Online]. Available: https://en.wikipedia.org/wiki/Gabor_filter
[21] Chinese Academy of Sciences Institute of Automation. (Aug. 2017). CASIA Iris Image Database. [Online]. Available: http://biometrics.idealtest.org
