UniNet Huda 2020
Shaimaa Hameed
University of Technology, Iraq
Abstract
Iris recognition is one of the most important biometric solutions used for individual identification because of the uniqueness of iris features, so there is a continuing need to improve and enhance the recognition methods used for that purpose. The proposed method aims to use an enhanced iris detection method that ensures iris detection accuracy with acceptable speed for large numbers of images of different types and from different sources. The data are processed using a set of preprocessing steps and normalized, and then a deep learning technique known as UniNet is applied, which works as a feature extractor for the iris part. The network also has another sub-network that is responsible for detecting the non-iris parts that may affect the recognition result, so both the iris and non-iris parts are taken into account during the matching stage. The proposed method gave promising results compared with some high-benchmark methods, where the accuracy was 96.56%, 99.26%, and 99.8% for the CASIA, IITD, and captured images respectively.
Keywords: Biometric, Deep Learning, Fully Connected Network, Iris Identification, UniNet.
Introduction
Individual identification and authentication have been improved significantly by biometric systems, which play an important role in security for private and public purposes (1)(2). Biometrics can be defined as "the automated use of biological properties to identify a person" (3). Biometric systems depend on physical and behavioral characteristics to identify persons, and they are preferred as a security solution because their traits cannot be stolen, lost or forgotten (4). The best-known biometric types used for authentication include fingerprint, face, iris, signature, voice, palm print and others (5)(6). For identification, the iris is considered one of the best traits because its features remain stable for an individual throughout life (7). Typically, each biometric type has its own advantages and disadvantages or challenges in use. A biometric system generally consists of enrolment, preprocessing, matching and decision making; there are several algorithms and procedures for each stage, especially the final stage, which is crucial for producing the result (8). Neural networks are used for this purpose, and recently deep learning has emerged as a recognition and classification method that gives very good results. The problem with classic neural network approaches is that they require a great deal of preprocessing and parameter tuning to give good results on a certain data set, and there is no guarantee of an efficient solution on other biometrics or on a different data set for the same biometric. To overcome this problem, researchers work hard to find suitable features. Deep neural networks have achieved good results on several data sets. In the deep learning framework, the images are the input of a multi-layer neural network, and the network learns the best way to extract features from the image in order to obtain an accurate identification result (9).
In recent years, the use of deep learning has attracted researchers' attention because of the successful results obtained with it, so there is a large body of work in this direction that tries to use deep learning neural networks for iris identification systems in different ways. Some of these works are reviewed in this paper. Mohamed et al. in 2016 (10) built a system for iris classification using a feed-forward neural network trained with two algorithms, particle swarm optimization and the gravitational search algorithm; their results showed that the second algorithm is better than the first for training the neural network. Alaa S. Al-Waisy et al. in 2017 (9) designed an iris recognition system using deep learning based on a convolutional neural network and a softmax classifier (multimodal); by using different databases they tried to solve the overfitting problem, and the system gave good results with high speed but still needs improvement. Also in 2017, Shabab Bazrafkan and Peter Corcoran (11) introduced a method that depends on using deep learning as a segmentation tool for iris images taken by handheld devices, using different databases that also include low-quality images. In 2018, Rongrong Shi et al. (12) tried to solve the problem of incomplete and unclear images used for iris recognition; they extracted part of the iris through a normalization operation and then trained on these parts using a convolutional neural network. The work was good but suffered from overfitting and needed more preprocessing operations. In 2019, Ming Liu et al. (13) aimed to increase the speed of convergence and the accuracy by applying a fuzzy filter before training the convolutional neural network, in order to improve its performance for iris recognition; the results were good, but different membership functions should be tried to improve the fuzzy filter. Also in 2019, Kien Nguyen et al. (14) proposed a new neural network design with lower computation and memory requirements for iris recognition, placing constraints on the new model that take model size and computation into account.
Preprocessing step
Preprocessing plays a very important role in the identification or recognition process, and it includes the following steps:
First: Region detection
Two methods are used to detect the iris region. The first detects the region within a window, and then pupil and iris detection is carried out inside that window. If this method does not succeed in finding a good result, the second method is used automatically; it treats the whole eye image as the working region, with similar steps but without cropping. Using these two methods together makes the work more accurate. The steps of this process are shown in fig. (2).
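The two-stage fallback described above can be sketched in Python as follows; the helper names detect_eye_window and find_pupil_and_iris are hypothetical placeholders for illustration, not functions from the paper, and any boundary-fitting routine could be plugged in.

```python
# A minimal sketch of the two-stage region detection described above
# (assumed structure; helper names are placeholders, not from the paper).

def detect_eye_window(eye_image):
    """Placeholder: return a cropped window around the eye, or None on failure."""
    return None

def find_pupil_and_iris(image):
    """Placeholder: return pupil/iris boundary parameters, or None on failure."""
    return None

def locate_iris_region(eye_image):
    # Method 1: restrict the search to a detected window,
    # then locate the pupil and iris inside it.
    window = detect_eye_window(eye_image)
    if window is not None:
        boundaries = find_pupil_and_iris(window)
        if boundaries is not None:
            return boundaries
    # Method 2 (fallback): apply the same steps to the whole eye image, without cropping.
    return find_pupil_and_iris(eye_image)
```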
Second: Normalization
Here I(x, y) is the intensity value at (x, y) in the iris-region image. The parameters x_p, x_l, y_p, and y_l are the coordinates of the pupil and iris boundaries along the θ direction. The result of this process is shown in fig. (4).
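The mapping described by these symbols is consistent with Daugman's rubber-sheet model; a sketch of that standard form, written here under this assumption rather than quoted from the paper, is:

$$I\big(x(r,\theta),\, y(r,\theta)\big) \rightarrow I(r,\theta),$$
$$x(r,\theta) = (1-r)\,x_p(\theta) + r\,x_l(\theta), \qquad y(r,\theta) = (1-r)\,y_p(\theta) + r\,y_l(\theta),$$

where r ∈ [0, 1] runs from the pupil boundary to the iris boundary.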
Feature extraction
The UniNet feature maps are trained with a triplet loss, where N is the number of triplets; f_A^i, f_P^i, and f_N^i are the feature maps of the anchor, positive, and negative images in the i-th triplet respectively; the symbol [·]_+ corresponds to max(·, 0); and α is a margin parameter that controls the desired gap between the anchor-positive distance and the anchor-negative distance. Minimizing the typical loss function decreases the anchor-positive distance and increases the anchor-negative distance until their gap is larger than the chosen margin.
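A standard form of this triplet loss, consistent with the symbols above and written here as a sketch, is:

$$\mathcal{L}_{triplet} = \frac{1}{N}\sum_{i=1}^{N}\Big[\, \big\|f_A^i - f_P^i\big\|^2 - \big\|f_A^i - f_N^i\big\|^2 + \alpha \,\Big]_{+}$$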
The spatially corresponding feature map, which has the same resolution as the input, is used as the dissimilarity measure; the matching procedure therefore has to deal with non-iris masking and horizontal (bit-wise) shifting, which are normally encountered with iris templates. For this reason, the typical triplet loss function is extended into what is referred to as the Extended Triplet Loss (ETL).
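Following the formulation of ETL in (15), this extension can be sketched as replacing the Euclidean distances of the typical triplet loss with a shifted, masked distance D:

$$ETL = \frac{1}{N}\sum_{i=1}^{N}\Big[\, D\big(f_A^i, f_P^i\big) - D\big(f_A^i, f_N^i\big) + \alpha \,\Big]_{+}$$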
Here D(f^1, f^2) denotes the minimum shifted and masked distance between two feature maps.
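A sketch of this distance, assuming b ranges over a set of allowed horizontal shifts and M_b is the set of pixel positions that are unmasked in both maps after shifting, is:

$$D\big(f^1, f^2\big) = \min_{-B \le b \le B} \; \frac{1}{|M_b|} \sum_{(x,y)\in M_b} \Big( f^1_{b}(x,y) - f^2(x,y) \Big)^2,$$

where f^1_b is f^1 shifted horizontally by b pixels; the exact masking and shift range used in (15) may differ in detail.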
A set of mathematical operations, including the derivations (gradients) needed for back-propagation, is implemented so that the feature map becomes more accurate; the feature map is produced after learning features that cover both the accepted iris part and the non-iris part.
Matching Process
First of all, the features, which are real-valued, are converted to binary so that they take less storage; it is also known that binary values are more resistant to brightness variation, blurring and other incidental noise. The fractional Hamming distance is then applied to the binarized feature maps together with the extended masks for matching. It is observed that the performance does not decrease when binarized values are used compared with the real-valued features, and this choice even yields minor improvements in some cross-dataset settings.
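The matching step can be illustrated with a minimal sketch of a masked, shift-compensated fractional Hamming distance; the array shapes, shift range, and function name below are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def fractional_hamming_distance(code1, mask1, code2, mask2, max_shift=8):
    """Fractional Hamming distance between two binarized feature maps.

    code1, code2 : 2-D arrays of {0, 1} bits (binarized features).
    mask1, mask2 : 2-D bool arrays, True where the pixel is a valid iris pixel.
    max_shift    : maximum horizontal (column) shift, compensating eye rotation.
    Returns the minimum fractional distance over all tested shifts.
    """
    best = 1.0
    for shift in range(-max_shift, max_shift + 1):
        c1 = np.roll(code1, shift, axis=1)   # shift columns of the first code
        m1 = np.roll(mask1, shift, axis=1)
        valid = m1 & mask2                   # compare only jointly unmasked bits
        n_valid = valid.sum()
        if n_valid == 0:
            continue
        disagreements = np.logical_xor(c1, code2) & valid
        best = min(best, disagreements.sum() / n_valid)
    return best

# Example usage with random stand-ins for real binarized feature maps and masks:
rng = np.random.default_rng(0)
codeA = rng.integers(0, 2, (64, 512), dtype=np.uint8)
codeB = rng.integers(0, 2, (64, 512), dtype=np.uint8)
maskA = rng.random((64, 512)) > 0.1
maskB = rng.random((64, 512)) > 0.1
print(fractional_hamming_distance(codeA, maskA, codeB, maskB))
```

The smaller the returned value, the more similar the two iris codes; a threshold on this distance yields the accept/reject decision.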
Result analysis
The method was trained and tested on different data sets, as well as on images captured by a special camera. The CASIA v4 data set, for example, comprises 2,446 samples from 142 subjects. Each sample covers part of the face and contains images of both eyes together. The pictures are acquired at a distance of three meters. All the right-eye images from all subjects are used as the training set, while all the left-eye images are used as the test set. The test set produces 20,705 genuine pairs and 2,969,534 impostor pairs. The IITD database holds 2,240 image samples from 224 subjects. All of the right-eye images are used as the training set, while seven images of the left eyes are used as the test set. The test set holds 2,240 genuine pairs and 623,400 impostor pairs.
Cross-DB and Within-DB configurations are used to test the work in order to extend the evaluation. Cross-DB means that training is done on a data set that differs from the tested data set; the aim of the Cross-DB setting is to examine the generalization capability of the proposed structure under the challenging situation where only limited training samples are available. In the Within-DB setting, the network trained on one data set is fine-tuned using the independent training set from the target database, and the adjusted network is then evaluated on the corresponding test set. The fine-tuned models from the Within-DB setting are expected to perform better than the Cross-DB models, owing to the higher consistency of image quality between the training set and the test set. It must be noted that the training set and test set are completely disjoint, meaning that none of the iris images overlap between them. Generally, the test results are produced under an all-to-all matching protocol; the proposed feature extraction not only achieves higher accuracy but also exhibits outstanding generalization capability, without additional parameter tuning.
Results Comparison
The results of the proposed method are compared with the results of some high-benchmark approaches for iris identification. The comparison includes OSIRIS (16) and the Gabor-filter-based IrisCode (17) against the results of the proposed method for both the cross-database and within-database settings; the results of the comparison are shown in Table (1).
Table (1): Results comparison

                       CASIA             IITD              Captured images
                       FRR      EER      FRR      EER      FRR      EER
  OSIRIS               19.93%   6.39%    1.61%    1.11%    -        -
  Gabor IrisCode       20.72%   7.71%    1.81%    1.38%    -        -
  Ours (Cross-DB)      13.22%   4.32%    0.80%    0.61%    0.50%    0.23%
  Ours (Within-DB)     11.9%    3.44%    1.16%    0.71%    0.92%    0.32%
From the above table, the rates obtained by the proposed method show that it competes with other state-of-the-art methods in the iris recognition field.
Conclusion
The use of UniNet proved successful for iris recognition, owing to the use of an enhanced iris detection method and of two sub-networks, one for feature extraction and one for masking the non-iris part; determining the non-iris part and including it in the matching stage makes the matching result more accurate. The other reason for these promising results is the good preprocessing applied to provide the UniNet input, where segmentation is done using two methods instead of one to ensure that the exact iris-region information is obtained. These techniques result in high accuracy; however, more data sets are needed for training and testing, and data sets with higher resolution are also needed.
References
1. Severo, E., Laroca, R., Bezerra, C. S., Zanlorensi, L. A., Weingaertner, D., Moreira, G., &
Menotti, D. (2018). A Benchmark for Iris Location and a Deep Learning Detector Evaluation.
Proceedings of the International Joint Conference on Neural Networks, 2018-July.
https://doi.org/10.1109/IJCNN.2018.8489638
2. Menotti, D., Chiachia, G., Pinto, A., Schwartz, W. R., Pedrini, H., Falcão, A. X., & Rocha, A.
(2015). Deep Representations for Iris, Face, and Fingerprint Spoofing Detection. IEEE
Transactions on Information Forensics and Security, 10(4), 864–879.
https://doi.org/10.1109/TIFS.2015.2398817
3. Buciu, I., & Gacsadi, A. (2016). Biometrics systems and technologies: A survey.
International Journal of Computers, Communications and Control, 11(3), 315–330.
https://doi.org/10.15837/ijccc.2016.3.2556
4. Ortiz, N., Hernandez, R. D., Jimenez, R., Mauledeoux, M., & Aviles, O. (2018). Survey of
biometric pattern recognition via machine learning techniques. Contemporary Engineering
Sciences, 11(34), 1677–1694. https://doi.org/10.12988/ces.2018.84166
5. Faundez-Zanuy, M. (2006). Biometric security technology. IEEE Aerospace and Electronic
Systems Magazine, 21(6), 15–26. https://doi.org/10.1109/MAES.2006.1662038
6. Liu, S., & Silverman, M. (2001). Practical guide to biometric security technology. IT
Professional, 3(1), 27–32. https://doi.org/10.1109/6294.899930
7. Bhateja, A. K., Sharma, S., Chaudhury, S., & Agrawal, N. (2016). Iris recognition based on
sparse representation and k-nearest subspace with genetic algorithm. Pattern Recognition
Letters, 73(December 2015), 13–18. https://doi.org/10.1016/j.patrec.2015.12.009
8. Jain, A. K., Flynn, P., & Ross, A. A. (2007). Handbook of Biometrics Handbook of Biometrics.
Retrieved from http://www.springer.com/computer/image+processing/book/978-0-387-71040-2
9. Al-Waisy, A. S., Qahwaji, R., Ipson, S., Al-Fahdawi, S., & Nagem, T. A. M. (2018). A multi-
biometric iris recognition system based on a deep learning approach. Pattern Analysis and
Applications, 21(3), 783–802. https://doi.org/10.1007/s10044-017-0656-1
10. Rizk, M. R. M., Farag, H. H. A., & Said, L. A. A. (2016). Neural Network Classification for
Iris Recognition Using Both Particle Swarm Optimization and Gravitational Search Algorithm.
Proceedings - 2016 World Symposium on Computer Applications and Research, WSCAR 2016,
(June), 12–17. https://doi.org/10.1109/WSCAR.2016.10
11. Bazrafkan, S., & Corcoran, P. (2018). Enhancing iris authentication on handheld devices
using deep learning derived segmentation techniques. 2018 IEEE International Conference on
Consumer Electronics, ICCE 2018, 2018-Janua(November 2017), 1–2.
https://doi.org/10.1109/ICCE.2018.8326219
12. Li, Y. H., Huang, P. J., & Juan, Y. (2019). An Efficient and Robust Iris Segmentation
Algorithm Using Deep Learning. Mobile Information Systems, 2019.
https://doi.org/10.1155/2019/4568929
13. Liu, M., Zhou, Z., Shang, P., & Xu, D. (2020). Fuzzified Image Enhancement for Deep
Learning in Iris Recognition. IEEE Transactions on Fuzzy Systems, 28(1), 92–99.
https://doi.org/10.1109/TFUZZ.2019.2912576
14. Nguyen, K., Fookes, C., & Sridharan, S. (2019). Constrained Design of Deep Iris Networks.
arXiv preprint, 1–9. Retrieved from http://arxiv.org/abs/1905.09481
15. Zhao, Z., & Kumar, A. (2017). Towards Iris Recognition That is More Accurate Using
Deeply Learned Spatially Corresponding Features. Proceedings of the IEEE International
Conference on Computer Vision, 2017-Octob, 3829–3838.
https://doi.org/10.1109/ICCV.2017.411
16. Othman, N., Dorizzi, B., & Garcia-Salicetti, S. (2016). OSIRIS: An open source iris
recognition software. Pattern Recognition Letters, 82, 124–131.
https://doi.org/10.1016/j.patrec.2015.09.002
17. Daugman, J. (2004). How iris recognition works. IEEE Transactions on Circuits and Systems
for Video Technology, 14(1), 21–30.
https://ieeexplore.ieee.org/document/1262028