
2024 Eighth IEEE International Conference on Robotic Computing (IRC)

HIGH-PERFORMANCE FACE IDENTIFICATION USING HYPERSPECTRAL IMAGING TO COUNTERACT DEEP FAKE BIOMETRICS

Sota Furusawa and Chinthaka Premachandra
Dept. of Electrical Engineering and Computer Science, Graduate School of Engineering and Science,
Shibaura Institute of Technology, Tokyo, Japan
[email protected] [email protected]

979-8-3315-2155-4/24/$31.00 ©2024 IEEE | DOI: 10.1109/IRC63610.2024.00020

Abstract—This research addresses the security challenges posed by Generative Adversarial Networks (GANs) in biometric authentication, focusing particularly on the use of Hyperspectral Imaging (HSI) to counteract the threat of deep fake biometrics created by GANs. We sought solutions to the issues identified in a previous study, namely the inability to handle images of individuals not previously trained and the high time consumption required for identification. In this paper, we first reduced the dimensionality of the hyperspectral face data used for training, and then conducted a binary classification of "target subject (class)" versus "non-target subjects (classes)." A discriminator based on the Least Squares Generative Adversarial Network (LSGAN) model was developed from HSI for each class for binary classification. The proposed method significantly reduced identification time and achieved very high identification accuracy.

Keywords—hyperspectral data, LSGAN, PCA, face identification, discriminator

I. INTRODUCTION

With recent advances in information technology, the importance of protecting personal information has increased significantly. Biometric authentication is therefore used as a personal authentication method with higher security strength than passwords [1]. Since biometric data is difficult to duplicate, it can be used to counter attacks such as brute-force attacks. However, biometric authentication also has its challenges, namely vulnerabilities that arise from the use of common RGB images and limited data [2]. With advances in artificial intelligence technology, methods now exist that can mimic biometric information relatively easily. A prime example of such a technique is the generative adversarial network (GAN) [3]; GAN models can generate imitation biometric information from a limited number of images [4]. The problem is that the generated biometric information can be used to perform malicious authentication and can be exploited for criminal purposes.

Some research has been done on face identification methods that counter the threat of GANs by using hyperspectral imaging (HSI) [5]. In this paper, HSI refers to generated images that visualize hyperspectral data. Hyperspectral data is characterized by its multidimensional array structure and the expensive equipment required to acquire it, making it difficult to generate with a GAN or to duplicate. HSI is therefore an effective alternative to RGB images for biometrics. Two problems exist in the previous study [5] that used HSI to counteract deep fake biometrics. First, the amount of processing is large because 17 classifiers are generated and the final face identification judgment is based on majority voting. Second, even if a face image of an untrained subject (class) is input into the classifier, it will always be classified as one of the subjects (classes) used for training, since the multi-class classifier does not include space to accommodate untrained data. In other words, binary classification of image data into trained and untrained classes is not possible, which poses a problem in application.

This study aims to solve these two problems. We reduced the amount of processing by using Principal Component Analysis (PCA) to reduce the 51-dimensional hyperspectral data to 3 dimensions and by using a single classifier to perform face identification judgments. The least-squares generative adversarial network (LSGAN) model is used to generate discriminators because it uses the mean squared error as the loss function, which prevents gradient loss and stabilizes the training process. In this study, we generate an LSGAN discriminator for each class. Each discriminator is then used to conduct binary classification between the class used to generate it and all other classes. Through this approach, untrained class data is also classified as belonging to the other classes. In the experiments, face identification was conducted using eight classes of hyperspectral data. The results showed that the proposed method performed well through the developed binary classifiers, and the time required for face identification was reduced to less than one-fifth of that in the previous study. The proposed method could thus address the issues identified in the previous study, namely the inability to handle images of individuals not previously trained and the high time consumption required for identification.

II. HYPERSPECTRAL IMAGING

In this study, a hyperspectral capturing device (commonly referred to as a hyperspectral camera) was used to capture the subjects' faces [5]. A hyperspectral camera can acquire multidimensional channels that can be measured as bandwidth information for each wavelength by dividing the reflected light from the object into very fine spectra [6,7].
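As a minimal sketch, such a capture can be handled as a data cube in which every pixel carries a full spectrum; the shape below follows the paper's data, but the values are random placeholders, not real measurements:

```python
import numpy as np

# A hyperspectral capture is a data cube: one 2-D image per spectral band.
# Shape follows the paper's (bands, height, width) = (51, 275, 290).
cube = np.random.default_rng(1).random((51, 275, 290))

# Unlike an RGB pixel (3 values), each pixel here carries a 51-point
# spectrum of reflected light across fine wavelength bands.
spectrum = cube[:, 100, 120]
print(spectrum.shape)  # (51,)
```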
Measuring the wavelength of the object and analyzing its spectral information enables the evaluation of chemical properties and states of the object that cannot be identified by common RGB cameras. As a result, hyperspectral cameras have been used in various industries such as medicine [8,9], agriculture [10-14], marine science [15], and forestry [16]. They are also used in applications such as detection of foreign objects, quality control, component analysis, and environmental surveys. In this study, data was captured using Cubert's ULTRIS-S5 hyperspectral camera [5].

III. FACIAL IDENTIFICATION OVERVIEW

A. Dataset Details
In this study, we utilized eight classes of hyperspectral data obtained from previous research, with about 320 images for each class [5]. The dimensions of the hyperspectral data from the prior study were (dimension, height, width) = (51, 275, 290). Since the LSGAN model used for generating discriminators is primarily designed for 3-dimensional images, and the cumulative contribution ratio up to the third principal component is approximately 0.99, we reduced the dimensionality of the hyperspectral data from 51 to 3 dimensions using PCA [17,18]. After this reduction and resizing to 128 × 128, the shape of the hyperspectral data became (dimension, height, width) = (3, 128, 128). The purpose of the resizing is to reduce the amount of processing required for face identification. Then, normalization and imaging were performed on the dimensionality-reduced and resized hyperspectral data.

B. Normalization and Imaging of Dimensionality-Reduced Data
The distribution range of hyperspectral data values is large, and therefore the dimension-reduced values also have a large distribution range. Fig. 1 shows the distribution of hyperspectral data after dimensionality reduction. The values of hyperspectral data processed by PCA include very large magnitudes (around -4000 to 4000), which lead to high computational cost and hinder the proper training of the LSGAN model. To mitigate this, Min-Max normalization is applied as a preprocessing step to the dimensionally reduced hyperspectral data. By doing this, the range of the hyperspectral data after PCA is transformed to 0 to 255 so that it can be treated as an RGB image. The Min-Max normalization is given in Equations (1) and (2) below.

x_i' = (x_i - min(h)) / (max(h) - min(h))    (1)

pix_i = round(x_i' × 255)    (2)

Let x_i be the i-th hyperspectral point value in the data array after PCA and x_i' be the normalized value of x_i, where min(h) and max(h) represent the minimum and maximum values of the hyperspectral data array after PCA. Following Equation (2), x_i' is multiplied by 255 and rounded to the nearest non-negative integer to obtain each 8-bit pixel value, generating a pseudo-RGB image of the data array for input into the LSGAN.

Fig. 1. Distribution of Hyperspectral Data Values after PCA

C. LSGAN Short Description
This study used LSGAN [19]. This model is characterized by the fact that it generates larger gradients when updating the generator and prevents gradient loss, thus enabling stable training [20,21]. Fig. 2 shows the network structure of the LSGAN model used in this study. The generator is shown in Fig. 2 (a) and the discriminator in Fig. 2 (b).

Fig. 2. Generator and Discriminator network structure: (a) generator, (b) discriminator

D. Method of Determining the Identification Boundary
The following describes the process of binary classification between the class used to generate the discriminator and the other classes. The training data of the class used to train the LSGAN discriminator is input to the trained discriminator and the output values are obtained. The mean and standard deviation are calculated from these output values, and the discriminant boundaries are determined. We chose the 95% confidence interval as the discriminant boundary because of the small number of outliers in the data. The equation used to calculate the 95% confidence interval is shown in Equation (3) below.

95% confidence interval = μ ± 1.96 × σ    (3)

Let μ be the mean value and σ be the standard deviation of the output values when the training images of the trained class are given to the trained discriminator. In this study, the discriminator's output value for a test image is classified as the same class as the trained class if it falls within the interval, and as a different class if it falls outside the interval. The confidence intervals are obtained separately for each class in the same manner.
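The preprocessing of Secs. III-A and III-B (PCA reduction, then Min-Max normalization to 8-bit pseudo-RGB per Equations (1)-(2)) can be sketched as follows. This is an illustrative NumPy version, not the authors' implementation; the random cube stands in for real face data.

```python
import numpy as np

def pca_reduce(cube, n_components=3):
    """Project a (bands, H, W) cube onto its top principal components.

    Each pixel is treated as a bands-dimensional sample; a sketch of the
    PCA step in Sec. III-A, not the paper's exact implementation.
    """
    bands, h, w = cube.shape
    x = cube.reshape(bands, -1).T                # (H*W, bands) pixel samples
    x = x - x.mean(axis=0)                       # center each band
    cov = np.cov(x, rowvar=False)                # band covariance matrix
    vals, vecs = np.linalg.eigh(cov)             # ascending eigenvalues
    order = np.argsort(vals)[::-1]               # sort descending
    components = vecs[:, order[:n_components]]
    reduced = x @ components                     # (H*W, n_components)
    return reduced.T.reshape(n_components, h, w)

def to_pseudo_rgb(reduced):
    """Min-Max normalize PCA output to 8-bit pixels, Equations (1)-(2)."""
    lo, hi = reduced.min(), reduced.max()
    x_norm = (reduced - lo) / (hi - lo)          # Equation (1): scale to [0, 1]
    return np.rint(x_norm * 255).astype(np.uint8)  # Equation (2): 8-bit pixels

# Toy cube standing in for a (51, 275, 290) hyperspectral face image.
cube = np.random.default_rng(0).normal(size=(51, 32, 32)) * 4000
rgb = to_pseudo_rgb(pca_reduce(cube))
print(rgb.shape, rgb.min(), rgb.max())           # (3, 32, 32) 0 255
```

The resulting three-band array can then be treated as an ordinary RGB image for LSGAN training, as described above.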

E. Environment for Creating Discriminators
We created the face identification model based on the LSGAN model in the computer environment shown in TABLE I.

TABLE I. COMPUTER SPECIFICATION

CPU: Intel Core i7-13700K @ 3.40 GHz
GPU: NVIDIA GeForce RTX 3060 Ti
RAM: 64 GB
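The boundary rule of Sec. III-D (Equation (3)) amounts to a simple interval test on the discriminator's output. A sketch with hypothetical output values standing in for a trained LSGAN discriminator:

```python
import numpy as np

# Hypothetical discriminator outputs on the trained class's own training
# images; in the paper these come from the per-class LSGAN discriminator.
train_outputs = np.array([0.02, -0.05, 0.01, 0.03, -0.02, 0.00, 0.04, -0.01])

mu, sigma = train_outputs.mean(), train_outputs.std()
lower, upper = mu - 1.96 * sigma, mu + 1.96 * sigma   # Equation (3)

def same_class(output):
    """Accept as the trained class iff the output falls in the 95% CI."""
    return lower <= output <= upper

print(same_class(0.01), same_class(5.0))  # True False
```

One such interval is computed per class, so an image of an untrained person falls outside every interval and is rejected by all discriminators.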

IV. EXPERIMENTAL RESULTS AND DISCUSSION

A. Representative Example Results
In the binary classification results, we focus on two of the eight classes as representative examples: Class 1 and Class 7. Fig. 3 illustrates the distribution of identification results for the Class 1 test data, evaluated using the Class 1 classifier, which was trained for 5000 epochs. Similarly, Fig. 4 shows the distribution of identification results for the Class 7 test data, evaluated using the Class 7 classifier, also trained for 5000 epochs.

Fig. 3. Class 1 Identification Result

Fig. 4. Class 7 Identification Result

In Figs. 3 and 4, it can be observed that the output values of hyperspectral face images belonging to the same class as the trained class are closely distributed around 0, while the output values of hyperspectral face images belonging to other classes are distributed away from 0. This means that no overlap exists in the distribution of output values between the trained class and the other classes, indicating highly accurate classification.

TABLE II shows the confusion matrix for the Class 1 test data, and TABLE III shows the confusion matrix for the Class 7 test data. The confusion matrix presents the results of the binary classification, with the vertical axis representing the true label and the horizontal axis representing the predicted label.

TABLE II. CLASS 1 CONFUSION MATRIX

                               Predicted label
                               same person    different person
True label  same person             149              8
            different person          0           1095

TABLE III. CLASS 7 CONFUSION MATRIX

                               Predicted label
                               same person    different person
True label  same person             152              2
            different person          0           1098

Based on the data from TABLE II, for Class 1, the accuracy is 0.994, the recall is 0.949, and the precision is 1.000. Similarly, as shown in TABLE III, for Class 7, the accuracy is 0.998, the recall is 0.987, and the precision is 1.000. These three evaluation metrics [5] indicate that both Class 1 and Class 7 exhibit exceptionally high discrimination accuracy. Furthermore, upon examining TABLES II and III, it is evident that there are no data items in the false positive (FP) category. This signifies that the biometric authentication system did not erroneously accept different individuals, resulting in an extremely high level of security.

B. Overall Results
TABLE IV provides a summary of the accuracy, recall, and precision for all eight classes of identification results. As TABLE IV shows, across all eight classes the average accuracy is 0.994, the average recall is 0.949, and the average precision is 1.000. These results show that highly accurate face identification was achieved in all eight classes. It can be asserted that the reduction of hyperspectral data from 51 dimensions to 3 dimensions, coupled with the utilization of LSGAN for classifier construction, which facilitates stable training, contributed to the remarkably accurate identification.

C. Time Required for Identification
TABLE V shows the average time required for face identification in the previous study [5] and in this study. It indicates that the identification time has been significantly reduced, to less than one-fifth.
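The per-class metrics reported in Sec. IV-A follow directly from the confusion-matrix counts; for example, recomputing the Class 1 values from TABLE II:

```python
# Class 1 confusion-matrix counts from TABLE II: 149 same-person images
# accepted, 8 rejected; 0 different-person images accepted, 1095 rejected.
tp, fn, fp, tn = 149, 8, 0, 1095

accuracy = (tp + tn) / (tp + fn + fp + tn)   # fraction of all correct decisions
recall = tp / (tp + fn)                      # accepted fraction of genuine images
precision = tp / (tp + fp)                   # genuine fraction of accepted images

print(round(accuracy, 3), round(recall, 3), round(precision, 3))  # 0.994 0.949 1.0
```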

TABLE IV. IDENTIFICATION RESULTS FOR 8 CLASSES

          Accuracy   Recall   Precision
class1      0.994     0.949     1.000
class2      0.999     0.994     1.000
class3      0.996     0.969     1.000
class4      0.994     0.954     1.000
class5      0.983     0.865     1.000
class6      0.996     0.968     1.000
class7      0.998     0.987     1.000
class8      0.988     0.904     1.000
Average     0.994     0.949     1.000

TABLE V. AVERAGE IDENTIFICATION TIME

Previous Study: approx. 198 seconds
This Study: approx. 34 seconds

V. CONCLUSION

This paper focused on the security problems that deep-fake images generated by GANs pose for face recognition among various biometrics. Specifically, we focused on HSI to overcome the threat of deep fakes generated by GANs. We addressed two important problems encountered in the previous study: the high identification time and the inability to distinguish between trained and untrained person image data. In our proposed method, we introduced the LSGAN model to generate discriminators for each class using hyperspectral face images that had undergone dimensionality reduction through PCA. These discriminators were then employed for binary classification on the test data, distinguishing between the class used to generate each discriminator and the other classes. The results of our study demonstrated high-performance identification. Additionally, the average identification time was significantly reduced in our proposed method compared to the previous study. In the future, we will verify the effectiveness of this method by increasing the number of face identification classes and using hyperspectral data acquired in various shooting environments.

REFERENCES

[1] A. K. Jain, A. Ross, and S. Prabhakar, "An introduction to biometric recognition", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, no. 1, pp. 4-20, Jan. 2004.
[2] A. Peña, A. Morales, I. Serna, J. Fierrez, and A. Lapedriza, "Facial Expressions as a Vulnerability in Face Recognition", Proc. of 2021 IEEE International Conference on Image Processing (ICIP), pp. 2988-2992, Sep. 2021.
[3] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. W.-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative Adversarial Nets", Commun. of the ACM, Vol. 63, Issue 11, pp. 139-144, Oct. 2020.
[4] S. Minaee and A. Abdolrashidi, "Finger-GAN: Generating Realistic Fingerprint Images Using Connectivity Imposed GAN", arXiv, Dec. 2018.
[5] R. Nakazawa and C. Premachandra, "AI Based Biometrics Recognition with a Hyperspectral Image Sensor", Proc. of 2022 2nd International Conference on Robotics, Automation and Artificial Intelligence (RAAI), pp. 240-243, Dec. 2022.
[6] F. Vasefi, N. MacKinnon, and D. L. Farkas, "Hyperspectral and Multispectral Imaging in Dermatology", Imaging in Dermatology, Academic Press, pp. 187-201, 2016.
[7] S. Li, W. Song, L. Fang, Y. Chen, P. Ghamisi, and J. A. Benediktsson, "Deep Learning for Hyperspectral Image Classification: An Overview", IEEE Transactions on Geoscience and Remote Sensing, Vol. 57, no. 9, pp. 6690-6709, Sep. 2019.
[8] S. Prigent, X. Descombes, D. Zugaj, and J. Zerubia, "Spectral analysis and unsupervised SVM classification for skin hyper-pigmentation classification", Proc. of 2010 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, pp. 1-4, Jun. 2010.
[9] S. Yamamoto, K. Ogawa-Ochiai, T. Nakaguchi, N. Tsumura, T. Namiki, and Y. Miyake, "Detecting hyper-/hypothyroidism from tongue color spectrum", Proc. of IEEE 10th International Workshop on Biomedical Engineering, pp. 1-3, Oct. 2011.
[10] C. J. Perera, C. Premachandra, and H. Kawanaka, "Low Pixel Resolution Hyperspectral Image Mosaics Generation Using Learning-Based Feature Matching", IEEE Access, vol. 11, pp. 104084-104093, Sep. 2023.
[11] C. J. Perera, C. Premachandra, and H. Kawanaka, "Enhancing Feature Detection and Matching in Low-Pixel-Resolution Hyperspectral Images Using 3D Convolution-Based Siamese Networks", Sensors, 23(18), 8004, Sep. 2023.
[12] C. J. Perera, C. Premachandra, and H. Kawanaka, "Feature Detection and Matching for Low-Resolution Hyperspectral Images", Proc. of 2023 IEEE International Conference on Consumer Electronics-Taiwan (IEEE ICCE-TW2023), July 2023.
[13] C. J. Perera, C. Premachandra, and H. Kawanaka, "Comparison of Light Weight Hyperspectral Camera Spectral Signatures with Field Spectral Signatures for Agricultural Applications", Proc. of 2023 IEEE International Conference on Consumer Electronics (ICCE), Jan. 2023.
[14] Y. Yanmin, W. Na, C. Youqi, H. Yingbin, and T. Pengqin, "Soil moisture monitoring using hyper-spectral remote sensing technology", Proc. of 2010 Second IITA International Conference on Geoscience and Remote Sensing, pp. 373-376, Aug. 2010.
[15] G. Peiyuan, F. Yan, X. Lingzi, B. Man, and C. Xinghai, "Research on Marine and Freshwater Fish Identification Model Based on Hyper-spectral Imaging Technology", Proc. of 2013 5th International Conference on Intelligent Human-Machine Systems and Cybernetics, pp. 369-372, Aug. 2013.
[16] B. Hoshino, G. Kudo, T. Yabuki, M. Kaneko, and S. Ganzorig, "Investigation on the water stress in alpine vegetation using Hyperspectral sensors", Proc. of 2009 IEEE International Geoscience and Remote Sensing Symposium, pp. III-554-III-556, Jul. 2009.
[17] T. Li, J. Zhang, and Y. Zhang, "Classification of hyperspectral image based on deep belief networks", Proc. of 2014 IEEE International Conference on Image Processing (ICIP), pp. 5132-5136, Oct. 2014.
[18] A. Shwetank, Neeraj, Jitendra, Jitendra, Vikesh, and K. Jain, "Pixel Based Supervised Classification of Hyperspectral Face Images for Face Recognition", Procedia Computer Science, Vol. 132, pp. 706-717, Jun. 2018.
[19] X. Mao, Q. Li, H. Xie, R. Y. K. Lau, Z. Wang, and S. P. Smolley, "Least Squares Generative Adversarial Networks", Proc. of 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2813-2821, Oct. 2017.
[20] C. Dewi, R. C. Chen, Y. T. Liu, and H. Yu, "Various Generative Adversarial Networks Model for Synthetic Prohibitory Sign Image Generation", Applied Sciences, Vol. 11, Issue 7, pp. 2913-2927, Mar. 2021.
[21] R. Gupta and V. Gupta, "Performance Analysis of Different GAN Models: DC-GAN and LS-GAN", Proc. of 2021 7th International Conference on Signal Processing and Communication (ICSC), pp. 222-227, Nov. 2021.
