
International Journal of Electrical and Electronics Research (IJEER)
Open Access | Rapid and quality publishing | Research Article | Volume 10, Issue 2 | Pages 57-61 | e-ISSN: 2347-470X

Performance Analysis of Feature Extraction Approach: Local Binary Pattern and Principal Component Analysis for Iris Recognition System
C D Divya¹ and Dr. A B Rajendra²
¹AP, DoCS, VVCE, Mysuru, Karnataka, India
²Prof, DoIS, VVCE, Mysuru, Karnataka, India

*Correspondence: C D Divya; Email: [email protected]

░ ABSTRACT- Many techniques have been proposed for iris recognition. Most of them are single-resolution techniques, which results in poor performance. In this paper, the feature extraction approaches Local Binary Pattern (LBP) and Principal Component Analysis (PCA) are presented and compared. For classification, a Support Vector Machine (SVM) has been used. This paper compares the efficiency of the two popular feature extraction methods, PCA and LBP, using two different iris databases, CASIA and UBIRIS. The models were tested using 200 iris images. Statistical parameters such as F1 score and accuracy are evaluated for different threshold values. The proposed method obtains accuracies of 94% and 92% using Local Binary Pattern for the CASIA and UBIRIS datasets respectively. The Receiver Operating Characteristic (ROC) curve has been drawn and the Area Under the Curve (AUC) has also been calculated. The experiment has been extended by varying the dataset sizes. The results show that LBP achieves better performance than PCA with both the CASIA and UBIRIS databases.

General Terms: Iris Recognition, True Positive, True Negative.

Keywords: Area Under Curve, Local Binary Pattern, Iris, Principal Component Analysis, Feature Extraction, F1 Score, Receiver Operating Characteristic Curve, Support Vector Machine.

ARTICLE INFORMATION
Author(s): C D Divya and Dr. A B Rajendra;
Received: 26/03/2022; Accepted: 27/04/2022; Published: 05/05/2022;
e-ISSN: 2347-470X;
Paper Id: IJEER220326;
Citation: 10.37391/IJEER.100201
Webpage-link: https://ijeer.forexjournal.co.in/archive/volume-10/ijeer-100201.html

Publisher's Note: FOREX Publication stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

░ 1. INTRODUCTION
The authentication of a person is of high importance in modern days [1]. Other than passwords and magnetic cards, authentication using a biometric system is based on the bodily or behavioural characteristics of a person. Physical characteristics such as the palm print, fingerprint, face and iris have proved to be accurate and fast to recognise. These characteristics are unique to an individual and remain stable throughout life, and the high pattern variability among different persons makes the iris very attractive for use in a biometric system. A biometric system enables the identification of a person using a distinct feature or characteristic exhibited by that person. Biometric systems are designed for recognition by taking various physical traits as input, such as palm prints, fingerprints, the face, and the iris [2]. The iris is the coloured circle located around the pupil; it contains a number of randomly distributed structures which are not mutable, and hence the iris is quite different from other biometric traits [3]. Iris biometric recognition systems have been investigated since roughly 2001 [4]. The original algorithm for iris identification works with texture analysis [5]. The algorithm uses codes which are generated using a two-dimensional Gabor wavelet, and the accuracy achieved is higher compared to other methods. The important work by Wildes [6] adopted an algorithm which applies Gaussian filters for the purpose of recognition; here, the eyelids were modelled using parabolic arcs. Boashash and Boles [7] proposed a method based on zero-crossings. In the wavelet transform, zero crossings are computed at different resolutions of concentric circles of the iris. One-dimensional signals are correlated with some important features using various dissimilarity functions. A similar approach, which adopted the discrete dyadic wavelet transform, showed better efficiency [8]. The Multi-resolution Independent Component Identification (M-ICA) has an acceptable ability to generate iris features; according to the authors, the accuracy of the system is comparatively low since M-ICA does not handle class separability efficiently. Chen and Yuan proposed a new approach for generating iris features using the fractal dimension [9]. The iris region is divided into small parts, and the features, in the form of iris templates, are generated. These generated templates, known as the iris code, are then applied to neural networks for matching. Robert et al. proposed a method for achieving localization and feature extraction using integro-differential operators along with a Hough Transform [10]. Here, the iris code has been generated using both emergent frequency and instantaneous phase.

Li Ma et al. [11] adopted the Haar wavelet transform for feature extraction in the iris. The transformation is applied to the image to generate a feature vector. To classify the vectors, two approaches, namely weight vector initialization and winner selection, are adopted [12].


The recognition capability of classification methods depends on feature quality and the size of the data used for training. The features are generated from the shape and texture of segmented parts. Karu et al. [13] presented an approach to achieve automatic identification of textured parts, where the categorization uses statistical measures. The co-occurrence approach is claimed to be the best for texture classification by Conners and Harlow [14]. The iris structure can be classified with the help of coherent Fourier spectra using optical transmission [15]. The iris biometric system designed by Hamed Ranjzad [16] adopted Principal Component Analysis (PCA) for feature extraction; here, the authors applied different illumination and noise levels during the iris image acquisition process.

Furthermore, local-based approaches require a smaller number of samples for analysis [24]. The literature illustrates that if Local Binary Pattern (LBP) descriptors are used directly, the analysis proves to be more effective for face images. Since this method was utilized for face representation [23], there has been a growing interest in LBP-based features for other representations as well [25].

Here, we offer a method for extracting the best features of iris images for iris template classification using PCA and LBP. The main purpose of using LBP in conjunction with PCA is to minimize the iris template resolution.
░ 2. MATERIALS AND METHODS
Fig. 1 shows the overall architecture of the work carried out. For feature extraction, two methods have been used and compared: one is PCA [17, 18] and the other is LBP. Here, the CASIA and UBIRIS datasets have been used in the experiment. For classification, the Support Vector Machine (SVM) approach has been utilized.

Figure 1: Overall architecture

2.1 Principal Component Analysis (PCA)
PCA is one of the most important tools of linear algebra and is widely adopted for analysis purposes. Since it is a simple and non-parametric method, it can be used in neuroscience as well as in computer graphics. PCA guides how to reduce high-dimensional data to a lower-dimensional representation that retains only the sufficient and important features. Extraction of the important features corresponding to the available data is one of the major aspects of PCA [26]. During this process, the dimensionality of data consisting of many correlated variables is reduced. This is achieved by converting the data to a set of variables called the principal components (PCs). These PCs are independent and are arranged so that the first few components represent the most important features present in all the original variables.
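As a concrete illustration of how PCA-based feature extraction could look in this pipeline, the sketch below flattens resized iris images and projects them onto the leading principal components. This is a minimal sketch using scikit-learn; the image size of 37×50 (used later in the experiments), the number of components and all variable names are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_features(images: np.ndarray, n_components: int = 50):
    """Flatten grayscale iris images (n, 37, 50) and keep the top principal components."""
    X = images.reshape(len(images), -1).astype(np.float64)  # (n_samples, 1850)
    pca = PCA(n_components=n_components)                    # retains the highest-variance directions
    features = pca.fit_transform(X)                         # (n_samples, n_components)
    return features, pca

if __name__ == "__main__":
    # Random data standing in for 200 real iris images resized to 37x50.
    rng = np.random.default_rng(0)
    dummy = rng.integers(0, 256, size=(200, 37, 50))
    feats, model = pca_features(dummy)
    print(feats.shape, model.explained_variance_ratio_[:5])
```

The transformed feature vectors, rather than the raw 1850-pixel images, are what would be handed to the classifier described in Section 2.3.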

2.2 Local Binary Pattern (LBP)
A number of methods have been developed to extract useful features from iris images in order to perform iris biometric recognition, and LBP [19, 20] is one of them. LBP makes it possible to represent image texture and shape.

Figure 2: Local Binary Pattern

The concept of LBP was originally proposed by Ojala et al. [21]. The LBP operator uses a pixel's neighbours and takes the centre pixel value as a threshold, as shown in Fig. 2. If a neighbour pixel has a gray value greater than or equal to the centre pixel, a '1' is assigned to the corresponding position, otherwise a '0' is assigned. The final LBP code is evaluated by combining the neighbouring binary digits.

The value of the LBP code of a pixel is given by Eq. (1) and Eq. (2):

LBP(gc) = Σ (m = 0 to 7) l(gm − gc) · 2^m    (1)

l(x) = 1 if x ≥ 0, otherwise l(x) = 0    (2)

where gm is the neighbour pixel value, gc is the centre pixel value used as the threshold (T), and m is the digit index running from 0 to 7. Applying l(·) over the whole image yields the LBP feature matrix.
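To make the operator of Eq. (1) and Eq. (2) concrete, the following sketch computes the 8-neighbour LBP code for every interior pixel of a grayscale image and then builds the 256-bin histogram that could serve as a feature vector. It is a plain NumPy illustration under assumed conventions (neighbour ordering, ≥ comparison); a library routine such as scikit-image's local_binary_pattern could be used instead, and the function and variable names are illustrative.

```python
import numpy as np

def lbp_image(img: np.ndarray) -> np.ndarray:
    """Basic 3x3 LBP code (Eq. 1 and Eq. 2) for each interior pixel."""
    img = img.astype(np.int32)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    # The 8 neighbours, ordered clockwise from the top-left corner; which neighbour
    # receives weight 2^m is a convention and may differ from Fig. 2.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:h - 1, 1:w - 1]
    for m, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (neighbour >= centre).astype(np.int32) * (2 ** m)   # l(gm - gc) * 2^m
    return codes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(37, 50))       # stand-in for a resized iris image
    codes = lbp_image(img)                          # (35, 48) array of values in 0..255
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    print(codes.shape, hist.sum())
```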

2.3 Support Vector Machine (SVM)
An SVM [22] used to classify data that are linearly separable is called a linear SVM, which is depicted in Fig. 3. A linear SVM searches for a hyper-plane with the maximum margin, and hence a linear SVM is also called a maximal margin classifier (MMC) [27]. The steps involved in finding the maximum margin hyper-plane are discussed below; an illustrative code sketch follows after Fig. 3.

1. Consider a binary classification problem consisting of n training data.


2. Each tuple is represented by (Xi, Yi), where Xi = (xi1, xi2, ..., xim) corresponds to the attribute set of the ith tuple (a point in m-dimensional space) and Yi ∈ {+, −} denotes its class label.

3. Given {(Xi, Yi)}, a hyper-plane is generated which separates all Xi into two sides of it.

4. Consider two-dimensional training data with attributes A1 and A2, so that X = (x1, x2), where x1 and x2 are the values of attributes A1 and A2, respectively, for X.

5. The equation of a line in 2-D space can be written as w0 + w1x1 + w2x2 = 0 [e.g., ax + by + c = 0], where w0, w1 and w2 are constants defining the slope and intercept of the line. Any point lying above such a hyper-plane satisfies w0 + w1x1 + w2x2 > 0; similarly, any point lying below the hyper-plane satisfies w0 + w1x1 + w2x2 < 0.

6. An SVM hyper-plane is an n-dimensional generalization of a straight line in two dimensions.

7. The Euclidean equation of a hyper-plane in R^m is w1x1 + w2x2 + ... + wmxm = b (3), where the wi are real numbers and b is a real constant.

8. In matrix form, a hyper-plane can thus be expressed as W·X + b = 0 (4), where W = [w1, w2, ..., wm], X = [x1, x2, ..., xm] and b is a real constant.

Figure 3: Computation of MMH
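The short sketch below illustrates steps 1-8 numerically: it fits a linear SVM to a small two-dimensional toy problem and reads back the hyper-plane coefficients W and b of Eq. (4). It uses scikit-learn's SVC with a linear kernel purely for illustration; the toy data and parameter choices are assumptions, not the configuration used in this paper.

```python
import numpy as np
from sklearn.svm import SVC

# Toy 2-D binary classification problem (attributes A1, A2; labels +1 / -1).
X = np.array([[1.0, 2.0], [2.0, 3.0], [2.5, 3.5],    # class +1
              [6.0, 1.0], [7.0, 2.0], [7.5, 0.5]])   # class -1
y = np.array([1, 1, 1, -1, -1, -1])

# Linear SVM; for separable data it behaves as a maximal margin classifier.
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

W = clf.coef_[0]        # [w1, w2] in W.X + b = 0
b = clf.intercept_[0]   # bias term b
print("Hyper-plane: %.3f*x1 + %.3f*x2 + %.3f = 0" % (W[0], W[1], b))

# Points with W.X + b > 0 fall on one side of the hyper-plane, < 0 on the other.
print(clf.predict([[2.0, 2.5], [7.0, 1.0]]))   # one point near each cluster
```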

░ 3. RESULTS AND DISCUSSION


For the computation process we have used an Intel Core i5/i7 generation CPU with 8 GB of system memory. Further, we have considered CASIA and UBIRIS as two different iris datasets. The image size for CASIA is 320×280 and for UBIRIS it is 200×150. At the time of feature extraction, the images of both sets are resized to 37×50. 100 intruder and 100 genuine iris images have been used to conduct the experiment.
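For orientation, a compressed sketch of how such an experiment could be wired together is shown below: resize the images to 37×50, extract either PCA or LBP-histogram features, and train a linear SVM. Dataset loading is omitted, and the function names, parameters and the 50/50 train/test split are illustrative assumptions rather than the authors' exact code (the LBP histogram here uses scikit-image's local_binary_pattern).

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def run_experiment(images: np.ndarray, labels: np.ndarray, method: str = "pca"):
    """images: (n, 37, 50) grayscale iris images; labels: 1 = genuine, 0 = intruder."""
    if method == "pca":
        flat = images.reshape(len(images), -1).astype(float)
        feats = PCA(n_components=50).fit_transform(flat)
    else:  # "lbp": 256-bin histogram of 8-neighbour LBP codes per image
        feats = np.stack([
            np.histogram(local_binary_pattern(img, P=8, R=1, method="default"),
                         bins=256, range=(0, 256))[0]
            for img in images
        ]).astype(float)

    X_tr, X_te, y_tr, y_te = train_test_split(feats, labels, test_size=0.5,
                                              stratify=labels, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
    clf.fit(X_tr, y_tr)
    return clf.score(X_te, y_te), clf, (X_te, y_te)
```

Calling run_experiment with method="lbp" or method="pca" on each database would then mirror the four method/database combinations evaluated below.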
The True Positive Rate (TPR), False Positive Rate (FPR) and accuracy have been presented in Table 1. The accuracy has been evaluated over different probability threshold values.

░ Table 1: Accuracy for different probability threshold values

The ROC curve has been drawn for the different combinations of methods and databases. Fig. 4 shows the ROC curves.
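As a hedged illustration of how the threshold sweep, F1 score, ROC curve and AUC reported in this section could be computed, the snippet below evaluates any classifier that exposes predicted probabilities (such as the pipeline from the sketch above). The metric functions are from scikit-learn; the threshold grid is an assumption.

```python
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score, roc_curve

def evaluate(clf, X_test, y_test, thresholds=(0.3, 0.4, 0.5, 0.6, 0.7)):
    """Accuracy and F1 at several probability thresholds, plus the ROC curve and AUC."""
    proba = clf.predict_proba(X_test)[:, 1]          # P(genuine) for each test image
    for t in thresholds:
        pred = (proba >= t).astype(int)              # apply the probability threshold
        print("t=%.1f  accuracy=%.3f  F1=%.3f"
              % (t, accuracy_score(y_test, pred), f1_score(y_test, pred)))
    fpr, tpr, _ = roc_curve(y_test, proba)           # points of the ROC curve (Fig. 4)
    auc = roc_auc_score(y_test, proba)               # Area Under the Curve (Table 2)
    return fpr, tpr, auc
```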


Figure 4: ROC-Curve

The Area Under the Curve (AUC) has been calculated for the different combinations of feature extraction method and classifier along with the two databases listed. The results obtained are presented in Table 2.

░ Table 2: Area Under Curve (AUC)

Method                   AUC
PCA + SVM + CASIA        0.9125
PCA + SVM + UBIRIS       0.5518
LBP + SVM + CASIA        0.9395
LBP + SVM + UBIRIS       0.8962

The experiment has been conducted using different dataset sizes of 200, 400, 600 and 800 images. In each case, 50% of the images are intruder images and the remaining 50% are genuine. The results obtained under these conditions are presented in Table 3 and plotted in Fig. 5. For the different dataset sizes, the accuracy remains high for the combinations PCA + SVM + CASIA, LBP + SVM + CASIA and LBP + SVM + UBIRIS.

░ Table 3: Accuracy and AUC for different dataset sizes

Figure 5: Accuracy graph (accuracy versus dataset size of 200, 400, 600 and 800 images for PCA + SVM + CASIA, PCA + SVM + UBIRIS, LBP + SVM + CASIA and LBP + SVM + UBIRIS)

░ 4. CONCLUSION
An experiment has been conducted to test the performance of PCA and LBP with two popular iris databases, CASIA and UBIRIS. The Support Vector Machine, a well-known feature-based classification approach, has been adopted to carry out our research for both methods. The models were tested using 200 iris images. In the case of LBP, the AUC values obtained are 0.9395 and 0.8962 for the CASIA and UBIRIS datasets respectively; in the case of PCA, the AUC values are 0.9125 and 0.5518 for the CASIA and UBIRIS datasets. The results show that LBP achieves better performance with both CASIA and UBIRIS compared to PCA. The experiment was extended to different dataset sizes of 400, 600 and 800. The results show that LBP outperforms PCA with both the CASIA and UBIRIS datasets. The work may be further extended to other approaches to iris recognition, such as the wavelet transform and suitable hybrid approaches.

░ 5. REFERENCES
[1] Jain A. K., Ross A., Prabhakar S. An introduction to biometric recognition. IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 4-20, January 2004. ISSN 1051-8215.
[2] Sharma, Abhilash. (2015). Biometric System - A Review. International Journal of Computer Science and Information Technologies. 6. 4616-4619.
[3] Sevugan, Prabu & Swarnalatha, P. & Gopu, Magesh & Sundararajan, Ravee. (2017). Iris recognition system. International Research Journal of Engineering and Technology.
[4] Kak, Neha & Gupta, Rishi & Mahajan, Sanchit. (2010). Iris Recognition System. International Journal of Advanced Computer Sciences and Applications. 1. 10.14569/IJACSA.2010.010106.
[5] Richard Yew Fatt Ng, Yong Haur Tay and Kai Ming Mok, "Iris recognition algorithms based on texture analysis," 2008 International Symposium on Information Technology, 2008, pp. 1-5, doi: 10.1109/ITSIM.2008.4631667.
[6] R. P. Wildes, "Iris recognition: an emerging biometric technology," Proceedings of the IEEE, vol. 85, no. 9, pp. 1348-1363, Sep. 1997.
[7] Boles, Wageeh & Boashash, Boualem. (1998). A Human Identification Technique Using Images of the Iris and Wavelet Transform. Signal Processing, IEEE Transactions on. 46. 1185-1188. 10.1109/78.668573.
[8] Yankui Sun, Yong Chen and Hao Feng. Two-dimensional stationary dyadic wavelet transform, decimated dyadic discrete wavelet transform and the face recognition application. 9(3), 397-416, 2011.


[9] XiaoZhou Chen, ChangYin Wu, LiangLin Xiong, Fan Yang, The
Optimal Matching Algorithm for Multi-Scale Iris Recognition, Energy
Procedia, Volume 16, Part B, 2012.
[10] Hassanein, Allam S. et al. “A Survey on Hough Transform, Theory,
Techniques and Applications.” ArXiv abs/1502.02160 (2015).
[11] Ma, Li & Tan, Tieniu & Zhang, Dai. (2004). Efficient Iris Recognition
by Characterizing Key Local Variations. IEEE transactions on image
processing: a publication of the IEEE Signal Processing Society. 13.
739-50. 10.1109/TIP.2004.827237.
[12] Sousa, Celso. (2016). An overview on weight initialization methods for
feedforward neural networks. 10.1109/IJCNN.2016.7727180.
[13] J. Daugman, “How Iris Recognition Works,” IEEE Trans. Circuits and
Systems for Video Technology, vol. 14, no. 1, pp. 21-30, Jan. 2004.
[14] Conners RW, Harlow CA. A theoretical comparison of texture
algorithms. IEEE Trans Pattern Anal Mach Intell. 1980 Mar; 2(3):204-
22. doi: 10.1109/tpami.1980.4767008. PMID: 21868894.
[15] Hoxha, Julian & Stoja, Endri & Domnori, Elton & Cincotti, Gabriella.
(2017). Multicarrier Digital Fractional Fourier Transform For Coherent
Optical Communications. 10.1109/EUROCON.2017.8011073.
[16] Ranjzad, Hamed & Ebrahimi, Afshin & Ebrahimnezhad, Hossein.
(2008). Improving feature vectors for iris recognition through design and
implementation of new filter bank and locally compound using of PCA
and ICA. 10.1109/ISABEL.2008.4712612.
[17] Mishra, Sidharth & Sarkar, Uttam & Taraphder, Subhash & Datta,
Sanjoy & Swain, Devi & Saikhom, Reshma & Panda, Sasmita &
Laishram, Menalsh. (2017). Principal Component Analysis. International
Journal of Livestock Research. 1. 10.5455/ijlr.20170415115235.
[18] Karamizadeh, Sasan & Abdullah, Shahidan & Manaf, Azizah & Zamani,
Mazdak & Hooman, Alireza. (2013). An Overview of Principal
Component Analysis. Journal of Signal and Information Processing.
10.4236/jsip.2013.43B031.
[19] Song, Ke-Chen & YAN, Yun-Hui & CHEN, Wen-Hui & Zhang, Xu.
(2013). Research and Perspective on Local Binary Pattern. Acta
Automatica Sinica. 39. 730–744. 10.1016/S1874-1029(13)60051-8.
[20] Huang, di & Shan, Caifeng & Ardabilian, Mohsen & Chen, Liming.
(2011). Local Binary Patterns and Its Application to Facial Image
Analysis: A Survey. IEEE Transactions on Systems, Man, and
Cybernetics, Part C. 41. 765-781. 10.1109/TSMCC.2011.2118750.
[21] Ojala T., Pietikäinen M. and Harwood D., "A comparative study of texture measures with classification based on feature distributions", Pattern Recognition, Vol. 29, No. 1, pp. 51-59, 1996.
[22] Srivastava, Durgesh & Bhambhu, Lekha. (2010). Data classification
using support vector machine. Journal of Theoretical and Applied
Information Technology. 12. 1-7.
[23] T. Ahonen, A. Hadid, and M. Pietikäinen, “Face recognition with local
binary patterns,” in Proc. Euro. Conf. Computer Vision (ECCV), 2004,
pp. 469–481.
[24] X. Tan, S. Chen, Z. Zhou, and F. Zhang, “Face recognition from a single
image per person: a survey”, Pattern Recognition, vol. 39, no. 9, pp.
1725-1745, 2006.
[25] Kumar, G., Chowdhury, D. P., Bakshi, S., & Sa, P. K. (2020). Person
Authentication Based on Biometric Traits Using Machine Learning
Techniques. In IoT Security Paradigms and Applications (pp. 165-192).
CRC Press.
[26] Omran, Maryim, and Ebtesam N. AlShemmary. "An iris recognition
system using deep convolutional neural network." In Journal of Physics:
Conference Series, vol. 1530, no. 1, p. 012159. IOP Publishing, 2020.
[27] Soliman, R.F., Amin, M., El-Samie, A. and Fathi, E., 2020. Cancelable
Iris recognition system based on comb filter. Multimedia Tools and
Applications, 79(3), pp.2521-2541.
© 2022 by C D Divya and Dr. A B Rajendra. Submitted for possible open access publication under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
