Few-Shot Learning For Palmprint Recognition Via Meta-Siamese Network
Abstract— Palmprint is one of the discriminant biometric modalities of humans. Recently, deep learning-based palmprint recognition algorithms have improved the accuracy and robustness of recognition results to a new level. Most of them require a large amount of labeled training samples to guarantee satisfactory performance. However, getting enough labeled data is difficult due to time consumption and privacy issues. Therefore, in this article, a novel meta-Siamese network (MSN) is proposed to exploit few-shot learning for small-sample palmprint recognition. During each episode-based training iteration, a few images are selected as sample and query sets to simulate the support and testing sets in the test set. Specifically, the model is trained episodically with a flexible framework to learn both the feature embedding and a deep similarity metric function. In addition, two distance-based losses are introduced to assist the optimization. After training, the model learns the ability to obtain similarity scores between two images for few-shot testing. Adequate experiments conducted on several constrained and unconstrained benchmark palmprint databases show that MSN can obtain competitive improvements compared with baseline methods, where the best accuracy can be up to 100%.

Index Terms— Biometrics, few-shot learning, information security, meta-learning, palmprint recognition.

I. INTRODUCTION

… widely applied in daily life, such as face recognition [2] and fingerprint recognition [3]. As one of the unique technologies of biometrics, palmprint recognition has received much research attention recently [4], [5]. Generally, researchers have applied signal processing methods to analyze the patterns of palmprint for personal authentication [6]. So far, local texture [7] and principal lines [8] have been exploited for feature representation. They are time-invariant with large interclass variance and low intraclass variance. Therefore, promising recognition results have been achieved with high universality, stability, and uniqueness.

The typical procedure of palmprint recognition consists of image acquisition, preprocessing, feature extraction, and matching [9], [10]. Palmprint images are usually acquired by optical cameras. Preprocessing is mainly adopted to implement noise reduction and region of interest (ROI) extraction. Then, several categories of feature extraction and matching methods have been proposed to separate different identities, e.g., encoding-based methods, structure-based methods, statistics-based methods, and subspace-based methods [9]. So far, deep learning techniques have emerged as effective tools for automatic visual understanding and obtained the state of the art in …
5009812 IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 70, 2021
labeled training data. It is similar to the practical palmprint recognition scenario, where a few images are registered in the database and the query images need to be matched with the registered images to determine the tester's identity. One of the effective solutions is meta-learning [16], which aims to train deep neural networks (DNNs) to generalize across different tasks. Inspired by it, in this article, we propose a few-shot palmprint recognition method called the meta-Siamese network (MSN).

In order to help the DNN generalize to new palmprint images, our MSN follows the structure of the Siamese network (SN) [17], which first extracts features from image pairs using a weight-shared convolutional neural network (CNN). Then, the extracted feature vectors are concatenated together and input to a follow-up decision network to obtain their similarity. Finally, mean square error (MSE)-based similarity losses are adopted and backpropagated so that the network can verify whether palmprint image pairs are from the same individual.

However, different from SN, meta-learning is introduced to improve the generalization ability, and the model is trained through episode-based iteration. Specifically, images of N categories are first randomly sampled from the training set, and for each category, k images are selected, which are matched with the images in the query set (denoted as an N-way, k-shot episode task). In the testing stage, a similar strategy is adopted to select the support set and testing set from the test set, where the former is labeled and the latter is unlabeled, and N-way, k-shot tasks are also randomly sampled. The model is trained on a large number of N-way, k-shot tasks in the training set, and finally it can adapt to new N-way, k-shot tasks to obtain the similarity scores of different palmprint image pairs. In order to help optimize the model and improve the performance, two distance-based losses, contrastive loss [18] and binomial deviance (BD) [19], are introduced to constrain the distance between features in the feature space. Specially, to increase the flexibility, convolutional blocks are incorporated in the decision network instead of purely stacked fully connected (FC) layers so that the entire network can adapt quickly. Experiments on several popular benchmark palmprint datasets reveal the outperforming accuracy and generalization ability of our model. The details can be found in Section III. The overview of MSN is shown in Fig. 1.

The contributions can be summarized as follows.
1) MSN is proposed for efficient few-shot palmprint recognition. Its core is to directly imitate the identification task of the test in the training phase to improve the accuracy. After the episode-based training on the tasks in the training set, the model can be applied to the test set for new few-shot palmprint recognition tasks.
2) MSE-based similarity loss is applied to measure the similarity scores of palmprint image pairs to determine their categories. Distance-based losses are further constructed to assist in training the model and improve the accuracy.
3) Adequate experiments are conducted on several constrained and unconstrained benchmark palmprint databases. From the results, MSN can obtain promising performance, and the best accuracy can be up to 100%. Furthermore, compared with the previous models, MSN can outperform others to obtain state-of-the-art palmprint recognition.

Compared with our previous work in [20], we have made some significant improvements. First, in addition to the previous similarity loss, two other losses are constructed to constrain the distance of image pairs in the feature space directly, i.e., contrastive loss and BD. Though obtaining the category relations between two images matched through neural networks can reduce the impact of manual intervention, the distance constraints on them are beneficial and improve the performance, as shown in the results. Here, distance-based losses make positive matching features closer and negative matching features farther apart in the feature space. Second, eight new unconstrained palmprint databases and four benchmark palmprint databases are introduced in the experiments to verify the effectiveness of our modified algorithms. Third, more adequate analyses and comparisons are conducted with the state-of-the-art algorithms, especially the recent few-shot palmprint recognition methods, to demonstrate the superiority of our algorithms.

The remainder of this article is structured as follows. Section II reviews some related works. Our methods are described in Section III in detail. Section IV presents our experiments and results on several databases. Analysis of the results is in Section V. Section VI gives a conclusion for this article.

II. RELATED WORK

A. Palmprint Recognition

Traditional palmprint recognition algorithms mainly extract the rich main line, texture, and wrinkle features of the palm. One kind of commonly used methods is based on orientation codes. They convolve palmprint images with a list of Gabor filters at several orientations and convert the responses into codes as features, such as competitive code [21], binary orientation co-occurrence vector (BOCV) [22], extended BOCV (E-BOCV) [23], double-orientation code (DOC) [24], discriminative and robust competitive code (DRCC) [25], and so on. Using multiplication and addition schemes, Fei et al. [26] fused the apparent and latent direction features of palmprint and proposed a unique double-layer direction extraction method, called apparent and latent direction code (ALDC). Luo et al. [27] proposed local line directional patterns (LLDP), which operate in local line-geometry space for palmprint recognition. Zhang et al. [6] established a contactless palmprint database and proposed CR_CompCode for palmprint identification with low computational complexity. Fei et al. [26] extracted six discriminant direction binary codes (DDBCs) for each pixel of a palmprint image and concatenated them as a global feature vector, called the discriminant direction binary palmprint descriptor (DDBPD). Toward more accurate direction representations, Jia et al. [28] extracted the direction features of palmprint on more levels, such as multiscale, multidirection, and multiregion levels. Zhang et al. [6] proposed a unique local descriptor to extract both direction and thickness features, called local microstructure tetra pattern (LMTrP).
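The orientation-coding family above can be illustrated with a short sketch: convolve the image with a small bank of real Gabor filters at evenly spaced orientations and keep, at each pixel, the index of the strongest (winner-take-all) response. This is a minimal numpy illustration of the general idea only, not the exact formulation of competitive code or any cited variant; the filter parameters and the argmin rule (lines are darker than the background) are illustrative choices.

```python
import numpy as np

def gabor_bank(size=9, sigma=2.0, freq=0.2, n_orient=6):
    """Build n_orient real Gabor filters at evenly spaced orientations."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    bank = []
    for i in range(n_orient):
        theta = i * np.pi / n_orient
        xr = x * np.cos(theta) + y * np.sin(theta)
        g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)
        bank.append(g - g.mean())  # zero-DC so flat regions give no response
    return np.stack(bank)

def orientation_code(img, bank):
    """Winner-take-all orientation index per pixel (argmin: dark lines respond most)."""
    k = bank.shape[-1]
    patches = np.lib.stride_tricks.sliding_window_view(
        np.pad(img, k // 2, mode="edge"), (k, k))
    responses = np.einsum("ijxy,oxy->oij", patches, bank)  # (n_orient, H, W)
    return np.argmin(responses, axis=0)
```

The resulting integer map can then be binarized and matched by Hamming-style distance, which is the step the encoding-based methods above refine in different ways.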
SHAO et al.: FEW-SHOT LEARNING FOR PALMPRINT RECOGNITION 5009812
Fig. 1. Overview of our MSN (5-way, 1-shot). Sample and query images are selected from the training set randomly to imitate the N-way, k-shot recognition tasks in the test set, where support and testing images are selected randomly during testing. Sample and query images are input into weight-shared feature extractors to get feature vectors. Then, they are concatenated and input into the decision network to obtain a similarity score. A similarity loss and a distance loss are constructed to optimize the model. After training, the network can be used for new N-way, k-shot tasks to distinguish the identity of other images in the test set.
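The episodic sampling shown in Fig. 1 can be sketched as follows. This is a minimal illustration under assumed data structures: the dataset is a dict mapping each class label to a list of image identifiers, and the function name `sample_episode` is hypothetical, not from the paper's code.

```python
import random

def sample_episode(dataset, n_way, k_shot, n_query=1):
    """Draw one N-way, k-shot episode: a labeled sample set and a query set."""
    classes = random.sample(sorted(dataset), n_way)
    sample_set, query_set = [], []
    for label in classes:
        # Draw k_shot + n_query distinct images of this class without overlap.
        images = random.sample(dataset[label], k_shot + n_query)
        sample_set += [(img, label) for img in images[:k_shot]]
        query_set += [(img, label) for img in images[k_shot:]]
    return sample_set, query_set
```

During training, each episode plays the role of a small classification task: every query image is paired with every sample image, and the pair with the highest predicted similarity decides the query's identity.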
TABLE I
SOME DETAILS OF DIFFERENT PALMPRINT DATABASES
TABLE II
FEW-SHOT RECOGNITION ACCURACIES (%) ON MULTISPECTRAL DATABASE

TABLE III
FEW-SHOT RECOGNITION ACCURACIES (%) ON TONGJI PALMPRINT DATABASE

TABLE IV
FEW-SHOT RECOGNITION ACCURACIES (%) ON XJTU-UP DATABASE
TABLE V
FEW-SHOT RECOGNITION ACCURACIES (%) ON HF DATABASE WITH DIFFERENT HYPERPARAMETERS

TABLE VI
FEW-SHOT RECOGNITION ACCURACIES (%) ON HF DATABASE WITH DIFFERENT LOSSES

Fig. 7. Some typical samples of XJTU-UP database. (a) and (b) Original images in HF and HN. (c) and (d) ROIs in HF and HN.

Fig. 8. Results of different hyperparameters. (a) and (b) "Con." (c) and (d) "BD." In each subgraph, the horizontal axis represents w, and the vertical axis represents the recognition accuracy.
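The two distance-based losses compared in Table VI can be written, in one common formulation, as below. This is a minimal numpy sketch; the margin `margin` and the BD scaling parameters `alpha` and `lam` are illustrative defaults, not the paper's exact settings.

```python
import numpy as np

def contrastive_loss(d, y, margin=1.0):
    """Contrastive loss on feature distances d; y = 1 for genuine pairs, 0 for impostor.
    Pulls genuine pairs together and pushes impostor pairs beyond the margin."""
    return np.mean(y * d**2 + (1 - y) * np.maximum(margin - d, 0.0)**2)

def binomial_deviance_loss(s, y, alpha=2.0, lam=0.5):
    """Binomial deviance on similarity scores s; the sign m = +1 (genuine) or -1
    (impostor) flips the logistic penalty around the threshold lam."""
    m = np.where(y == 1, 1.0, -1.0)
    return np.mean(np.log1p(np.exp(-alpha * (s - lam) * m)))
```

Both terms act directly in the feature space: genuine pairs are rewarded for small distances (high similarities), impostor pairs for large ones, which is the "positive matchings closer, negative matchings farther" behavior described in the text.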
V. EVALUATION AND ANALYSES

A. Comparisons Between Different Settings

1) Comparisons Between Different "Ways" and "Shots": In the experiments, 5-way/1-shot, 5-way/3-shot, 5-way/5-shot, 15-way/1-shot, 15-way/3-shot, and 15-way/5-shot recognition tasks are performed. From the results, the accuracies of 5-way tasks are better than those of 15-way tasks. In this article, meta-learning strategies are adopted: in each episode-based training iteration, the network effectively tries to classify the images in the sample/query sets. In 15-way few-shot recognition, the knowledge acquired by sampling tasks during training may not be specific enough for handling a classification problem among larger numbers of classes. The 5-way setting has fewer categories, so the classification task is easier and the accuracy is higher. Furthermore, if there are more labeled images in the sample set (more "shots"), most of the results will also be better, because more knowledge of a certain class is obtained by MSN. It can also be observed that the accuracy of 3-shot is higher than that of 5-shot on some datasets, such as Red in Table II. This may be because there is more variation between images, and more "shots" increase the difficulty of learning.

2) Comparisons Between Different Databases: In this article, three benchmarks are adopted. The PolyU multispectral database and the Tongji contactless database are constrained databases, which are collected in closed spaces with additional illumination. So their quality is better and they are easier to identify. The XJTU-UP database consists of images collected by mobile phones in an unconstrained manner, so they contain more noise. From the results, the performance on constrained images outperforms that on unconstrained images, but the latter is also good. However, unconstrained acquisition is more suitable for mobile terminal application scenarios.

B. Comparisons With Other Models

For comparison, we present the results of some baseline methods in the 5-way, 1-shot recognition, namely SN [17], model-agnostic meta-learning (MAML) [39], Prototypical Nets (P-Net) [37], DHN [57], Matching Net (M-Net) [49], GNN [29], DRCC [25], ALDC [58], local discriminant direction binary pattern (LDDBP) [59], DDBPD [26], PCANet [30], [60], PalmNet [11], TPN [44], LGM-Net [47], ABLM [46], lifted structure (LS) loss [61], and multisimilarity (MS) loss [62]. The results are shown in Tables VIII and IX, and the top-1 accuracy is highlighted in bold.

1) SN [17] uses two identical networks to extract features and a decision network to get the similarity scores of matched images.
2) MAML [39] is based on meta-learning and aims to explicitly train a network on a number of learning tasks so that it can adapt to new learning tasks.
3) P-Net [37] adopts a distance-based loss to learn a metric space and achieves classification by obtaining the distances to prototype representations of every category.
4) M-Net [49] is based on deep metric learning and augments neural networks with external memories to adapt to new tasks.
5) DHN [57] converts palmprint images into binary codes, which can improve the efficiency of authentication, and obtains the state of the art in the traditional palmprint recognition scenario.
6) GNN [29] uses nodes to represent image features and edges to represent their positive or negative relations.
7) DRCC [25] adopts a more accurate dominant orientation representation of palmprint by weighting the orientation information of a neighboring area.
8) ALDC [58] extracts the apparent and latent direction features and pools them as a histogram feature descriptor.
9) LDDBP [59] adopts a novel exponential and Gaussian fusion model (EGM) to present the discriminative power of different directions of palmprint.
10) DDBPD [26] concatenates several binary DDBC feature codes as a global feature vector to perform recognition.
11) PCANet [30], [60] applies cascaded PCA, binary hashing, and block-wise histograms to extract features.
12) PalmNet [11] combines Gabor responses and CNN, and is trained by an unsupervised procedure.
13) TPN [44] learns a graph construction module to propagate labels from labeled instances to unlabeled test instances.
14) LGM-Net [47] learns transferable knowledge across different tasks and produces network parameters for similar unseen tasks through TargetNet and MetaNet.
15) ABLM [46] amortizes hierarchical variational inference across tasks and learns a prior distribution over neural network weights.
16) LS loss [61] samples an equal number of negative pairs as positive pairs to take full advantage of training batches.
17) MS loss [62] adopts two iterative steps with sampling and weighting to improve the performance.

Note that all of the modules are implemented using similar hyperparameters, with slight differences in each model to reach its best performance. LS loss and MS loss are adopted to train deep metric models to extract discriminative features, and ResNet-18 is used as the backbone [63]. The experiment settings are kept consistent, such as the split of the training and test data and the evaluation method. From the tables, our model can achieve competitive results compared with several popular low-shot recognition methods, namely M-Net, P-Net, MAML, GNN, TPN, LGM-Net, and ABLM. Compared with the state-of-the-art palmprint recognition models using traditional training strategies without special design for low-shot recognition, our model performs better on all datasets. ALDC, LDDBP, and DDBPD are handcrafted palmprint recognition methods. Though they can obtain relatively high accuracy on constrained databases, they are not as good as deep learning-based methods, such as PalmNet. LS loss and MS loss can obtain satisfactory performance when there are enough training data. However, compared with MSN, due to the lack of labeled training data, their performance is also limited. SN and DHN are supervised algorithms, but there are not enough labeled samples here, so their performance drops significantly. In the SSS palmprint recognition
TABLE VIII
COMPARATIVE RESULTS (%) OF FEW-SHOT RECOGNITION ON DIFFERENT MODELS

TABLE IX
COMPARATIVE RESULTS (%) OF FEW-SHOT RECOGNITION ON DIFFERENT METHODS
scenario (only a few labeled samples can be used for training and registration), which is more common in practical applications, these previous methods do not work well, and this shows the effectiveness of our proposed methods. From the results, few-shot learning-based methods can generally achieve better performance, especially on constrained databases. However, our MSN, combining similarity loss with distance loss, can obtain higher accuracy in few-shot palmprint recognition.

VI. CONCLUSION

In this article, a novel few-shot model, MSN, is proposed for palmprint recognition using only a few labeled images. On the basis of the classical SN, meta-episode training is introduced for better generalization performance. Two weight-shared networks are adopted to extract the features, which are measured by a decision network to obtain their similarities. In the similarity metric learning stage, the initial model is modified to compare two palmprint images flexibly by introducing CNN
blocks. Inspired by meta-learning, the training set is split into sample/query sets and the test set is split into support/testing sets. Furthermore, two distance-based losses are adopted to assist the optimization, which makes the positive matchings closer and the negative matchings farther apart in the feature space. Finally, the model learns the ability to measure the similarity between two palmprint images on the learning tasks during training, which can adapt to the new recognition tasks in the test set. Experiments on several benchmarks, including constrained and unconstrained palmprint databases, show that our algorithms can outperform other methods to be the state of the art, and the accuracies can be up to 100%. Moreover, our model is very suitable for hand-based practical personal authentication scenarios when the size of the training or registration set is small or when only a small part of the palmprint images are labeled in the acquisition stage. In the future, we will extend our model to the zero-shot recognition scenario by introducing the semantic features of palmprint.

REFERENCES

[1] A. K. Jain, A. Ross, and S. Prabhakar, "An introduction to biometric recognition," IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 1, pp. 4–20, Jan. 2004.
[2] M. Kopaczka, R. Kolk, J. Schock, F. Burkhard, and D. Merhof, "A thermal infrared face database with facial landmarks and emotion labels," IEEE Trans. Instrum. Meas., vol. 68, no. 5, pp. 1389–1401, May 2019.
[3] K. Cao and A. K. Jain, "Automated latent fingerprint recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 41, no. 4, pp. 788–800, Apr. 2019.
[4] A.-S. Ungureanu, S. Salahuddin, and P. Corcoran, "Toward unconstrained palmprint recognition on consumer devices: A literature review," IEEE Access, vol. 8, pp. 86130–86148, 2020.
[5] W. M. Matkowski, T. Chai, and A. W. K. Kong, "Palmprint recognition in uncontrolled and uncooperative environment," IEEE Trans. Inf. Forensics Security, vol. 15, pp. 1601–1615, 2020.
[6] L. Zhang, L. Li, A. Yang, Y. Shen, and M. Yang, "Towards contactless palmprint recognition: A novel device, a new benchmark, and a collaborative representation based identification approach," Pattern Recognit., vol. 69, pp. 199–212, Sep. 2017.
[7] L. Fei, B. Zhang, S. Teng, Z. Guo, S. Li, and W. Jia, "Joint multiview feature learning for hand-print recognition," IEEE Trans. Instrum. Meas., vol. 69, no. 12, pp. 9743–9755, Dec. 2020.
[8] N. B. Mahfoudh, Y. B. Jemaa, and F. Bouchhima, "A robust palmprint recognition system based on both principal lines and Gabor wavelets," Int. J. Image, Graph. Signal Process., vol. 5, no. 7, pp. 1–8, 2013.
[9] D. Zhong, X. Du, and K. Zhong, "Decade progress of palmprint recognition: A brief survey," Neurocomputing, vol. 328, pp. 16–28, Feb. 2019.
[10] A. Kong, D. Zhang, and M. Kamel, "A survey of palmprint recognition," Pattern Recognit., vol. 42, no. 7, pp. 1408–1418, Jul. 2009.
[11] A. Genovese, V. Piuri, K. N. Plataniotis, and F. Scotti, "PalmNet: Gabor-PCA convolutional networks for touchless palmprint recognition," IEEE Trans. Inf. Forensics Security, vol. 14, no. 12, pp. 3160–3174, Dec. 2019.
[12] H. Shao and D. Zhong, "Towards cross-dataset palmprint recognition via joint pixel and feature alignment," IEEE Trans. Image Process., vol. 30, pp. 3764–3777, 2021, doi: 10.1109/TIP.2021.3065220.
[13] C. Yan, B. Shao, H. Zhao, R. Ning, Y. Zhang, and F. Xu, "3D room layout estimation from a single RGB image," IEEE Trans. Multimedia, vol. 22, no. 11, pp. 3014–3024, Nov. 2020.
[14] H. Shao and D. Zhong, "One-shot cross-dataset palmprint recognition via adversarial domain adaptation," Neurocomputing, vol. 432, pp. 288–299, Apr. 2021.
[15] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Regularized discriminant analysis for the small sample size problem in face recognition," Pattern Recognit. Lett., vol. 24, no. 16, pp. 3079–3087, Dec. 2003.
[16] C. Lemke, M. Budka, and B. Gabrys, "Metalearning: A survey of trends and technologies," Artif. Intell. Rev., vol. 44, no. 1, pp. 117–130, Jun. 2015.
[17] D. Zhong, Y. Yang, and X. Du, "Palmprint recognition using siamese network," in Proc. Chin. Conf. Biometric Recognit., Urumqi, China, 2018, pp. 48–55.
[18] S. Chopra, R. Hadsell, and Y. LeCun, "Learning a similarity metric discriminatively, with application to face verification," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. (CVPR), San Diego, CA, USA, Jun. 2005, pp. 539–546.
[19] D. Yi, Z. Lei, S. Liao, and S. Z. Li, "Deep metric learning for person re-identification," in Proc. 22nd Int. Conf. Pattern Recognit., Stockholm, Sweden, Aug. 2014, pp. 1–11.
[20] X. Du, D. Zhong, and P. Li, "Low-shot palmprint recognition based on meta-siamese network," in Proc. IEEE Int. Conf. Multimedia Expo (ICME), Shanghai, China, Jul. 2019, pp. 79–84.
[21] F. Yue, W. Zuo, D. Zhang, and K. Wang, "Orientation selection using modified FCM for competitive code-based palmprint recognition," Pattern Recognit., vol. 42, no. 11, pp. 2841–2849, Nov. 2009.
[22] Z. Guo, D. Zhang, L. Zhang, and W. Zuo, "Palmprint verification using binary orientation co-occurrence vector," Pattern Recognit. Lett., vol. 30, no. 13, pp. 1219–1227, Oct. 2009.
[23] L. Zhang, H. Li, and J. Niu, "Fragile bits in palmprint recognition," IEEE Signal Process. Lett., vol. 19, no. 10, pp. 663–666, Oct. 2012.
[24] L. Fei, Y. Xu, W. Tang, and D. Zhang, "Double-orientation code and nonlinear matching scheme for palmprint recognition," Pattern Recognit., vol. 49, pp. 89–101, Jan. 2016.
[25] Y. Xu, L. Fei, J. Wen, and D. Zhang, "Discriminative and robust competitive code for palmprint recognition," IEEE Trans. Syst., Man, Cybern. Syst., vol. 48, no. 2, pp. 232–241, Feb. 2018.
[26] L. Fei, B. Zhang, Y. Xu, Z. Guo, J. Wen, and W. Jia, "Learning discriminant direction binary palmprint descriptor," IEEE Trans. Image Process., vol. 28, no. 8, pp. 3808–3820, Aug. 2019.
[27] Y.-T. Luo et al., "Local line directional pattern for palmprint recognition," Pattern Recognit., vol. 50, pp. 26–44, Feb. 2016.
[28] W. Jia et al., "Palmprint recognition based on complete direction representation," IEEE Trans. Image Process., vol. 26, no. 9, pp. 4483–4498, Sep. 2017.
[29] H. Shao and D. Zhong, "Few-shot palmprint recognition via graph neural networks," Electron. Lett., vol. 55, no. 16, pp. 890–891, Aug. 2019.
[30] A. Meraoumia, F. Kadri, H. Bendjenna, S. Chitroub, and A. Bouridane, "Improving biometric identification performance using PCANet deep learning and multispectral palmprint," in Biometric Security and Privacy. Cham, Switzerland: Springer, 2017, pp. 51–69.
[31] S. Zhao and B. Zhang, "Deep discriminative representation for generic palmprint recognition," Pattern Recognit., vol. 98, pp. 1–11, Feb. 2020.
[32] H. Shao, D. Zhong, and Y. Li, "PalmGAN for cross-domain palmprint recognition," in Proc. IEEE Int. Conf. Multimedia Expo (ICME), Shanghai, China, Jul. 2019, pp. 1390–1395.
[33] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. S. Torr, and T. M. Hospedales, "Learning to compare: Relation network for few-shot learning," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., Salt Lake City, UT, USA, Jun. 2018, pp. 1199–1208.
[34] A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap, "Meta-learning with memory-augmented neural networks," in Proc. Int. Conf. Mach. Learn., New York, NY, USA, 2016, pp. 1842–1850.
[35] T. Munkhdalai and H. Yu, "Meta networks," in Proc. 34th Int. Conf. Mach. Learn., Sydney, NSW, Australia, 2017, pp. 2554–2563.
[36] L. Bertinetto, J. F. Henriques, J. Valmadre, P. H. S. Torr, and A. Vedaldi, "Learning feed-forward one-shot learners," in Proc. 30th Conf. Neural Inf. Process. Syst. (NIPS), Barcelona, Spain, 2016, pp. 523–531.
[37] J. Snell, K. Swersky, and R. S. Zemel, "Prototypical networks for few-shot learning," in Proc. Adv. Neural Inf. Process. Syst., Long Beach, CA, USA, 2017, pp. 4080–4090.
[38] M. Ren et al., "Meta-learning for semi-supervised few-shot classification," in Proc. 6th Int. Conf. Learn. Represent. (ICLR), Vancouver, BC, Canada, 2018, pp. 1–15.
[39] C. Finn, P. Abbeel, and S. Levine, "Model-agnostic meta-learning for fast adaptation of deep networks," in Proc. 34th Int. Conf. Mach. Learn., Sydney, NSW, Australia, 2017, pp. 1126–1135.
[40] B. Kang, Z. Liu, X. Wang, F. Yu, J. Feng, and T. Darrell, "Few-shot object detection via feature reweighting," in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), Seoul, South Korea, Oct. 2019, pp. 8419–8428.
[41] H. Yao et al., "Graph few-shot learning via knowledge transfer," in Proc. 34th AAAI Conf. Artif. Intell., New York, NY, USA, 2020, pp. 6656–6663.
[42] V. G. Satorras and J. B. Estrach, "Few-shot learning with graph neural networks," in Proc. 6th Int. Conf. Learn. Represent. (ICLR), Vancouver, BC, Canada, 2018, pp. 1–13.
[43] C. Jiang, H. Xu, X. Liang, and L. Lin, "Hybrid knowledge routed modules for large-scale object detection," in Proc. Adv. Neural Inf. Process. Syst., Montreal, QC, Canada, 2018, pp. 1552–1563.
[44] Y. Liu et al., "Learning to propagate labels: Transductive propagation network for few-shot learning," in Proc. 7th Int. Conf. Learn. Represent. (ICLR), New Orleans, LA, USA, 2019, pp. 1–14.
[45] Y. Wang, C. Xu, C. Liu, L. Zhang, and Y. Fu, "Instance credibility inference for few-shot learning," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Seattle, WA, USA, Jun. 2020, pp. 12833–12842.
[46] S. Ravi and A. Beatson, "Amortized Bayesian meta-learning," in Proc. 7th Int. Conf. Learn. Represent. (ICLR), New Orleans, LA, USA, 2019, pp. 1–14.
[47] H. Li, W. Dong, X. Mei, C. Ma, F. Huang, and B. Hu, "LGM-Net: Learning to generate matching networks for few-shot learning," in Proc. 36th Int. Conf. Mach. Learn. (ICML), Long Beach, CA, USA, 2019, pp. 3825–3834.
[48] G. Koch, R. Zemel, and R. Salakhutdinov, "Siamese neural networks for one-shot image recognition," in Proc. ICML Deep Learn. Workshop, Lille, France, 2015, pp. 1–30.
[49] O. Vinyals, C. Blundell, T. Lillicrap, K. Kavukcuoglu, and D. Wierstra, "Matching networks for one shot learning," in Proc. Adv. Neural Inf. Process. Syst., Barcelona, Spain, 2016, pp. 3630–3638.
[50] E. Ustinova and V. S. Lempitsky, "Learning deep embeddings with histogram loss," in Proc. Adv. Neural Inf. Process. Syst., Barcelona, Spain, 2016, pp. 4170–4178.
[51] D. Zhang, Z. Guo, G. Lu, L. Zhang, and W. Zuo, "An online system of multispectral palmprint verification," IEEE Trans. Instrum. Meas., vol. 59, no. 2, pp. 480–490, Feb. 2010.
[52] H. Shao, D. Zhong, and X. Du, "Efficient deep palmprint recognition via distilled hashing coding," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Workshops (CVPRW), Long Beach, CA, USA, Jun. 2019, pp. 714–723.
[53] H. Shao, D. Zhong, and X. Du, "Deep distillation hashing for unconstrained palmprint recognition," IEEE Trans. Instrum. Meas., vol. 70, pp. 1–13, 2021, doi: 10.1109/TIM.2021.3053991.
[54] A.-S. Ungureanu, S. Thavalengal, T. E. Cognard, C. Costache, and P. Corcoran, "Unconstrained palmprint as a smartphone biometric," IEEE Trans. Consum. Electron., vol. 63, no. 3, pp. 334–342, Aug. 2017.
[55] Z. Sun, T. Tan, Y. Wang, and S. Li, "Ordinal palmprint represention for personal identification [represention read representation]," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., Orlando, FL, USA, Jun. 2005, pp. 279–284.
[56] A. Kumar and S. Shekhar, "Personal identification using multibiometrics rank-level fusion," IEEE Trans. Syst., Man, Cybern., C (Appl. Rev.), vol. 41, no. 5, pp. 743–752, Sep. 2011.
[57] D. Zhong, H. Shao, and X. Du, "A hand-based multi-biometrics via deep hashing network and biometric graph matching," IEEE Trans. Inf. Forensics Security, vol. 14, no. 12, pp. 3140–3150, Dec. 2019.
[58] F. Ma, X. Zhu, C. Wang, H. Liu, and X.-Y. Jing, "Multi-orientation …

Huikai Shao (Graduate Student Member, IEEE) received the B.Sc. degree from Chongqing University, Chongqing, China, in 2017. He is currently pursuing the Ph.D. degree with the School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, China. His main research interests are biometrics and computer vision.

Dexing Zhong (Member, IEEE) received the B.Sc. and Ph.D. degrees from Xi'an Jiaotong University, Xi'an, China, in 2005 and 2010, respectively. He was a Visiting Scholar with the University of Illinois at Urbana–Champaign, Champaign, IL, USA. He is currently an Associate Professor with the School of Automation Science and Engineering, Xi'an Jiaotong University. His main research interests are biometrics and computer vision.

Xuefeng Du received the B.Eng. degree from the School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, China, in 2020. He is currently pursuing the Ph.D. degree (major in computer science) with the University of Wisconsin–Madison, Madison, WI, USA. His main research interests are computer vision and deep learning.

Shaoyi Du (Member, IEEE) received the dual bachelor's degrees in computational mathematics and computer science, the M.S. degree in applied mathematics, and the Ph.D. degree in pattern recognition and intelligence systems from Xi'an Jiaotong University, Xi'an, China, in 2002, 2005, and 2009, respectively. He is currently a Professor with Xi'an Jiaotong University. His current research interests include computer vision, machine learning, and pattern …
and multi-scale features discriminant learning for palmprint recognition,” recognition.
Neurocomputing, vol. 348, pp. 169–178, Jul. 2019.
[59] L. Fei, B. Zhang, Y. Xu, D. Huang, W. Jia, and J. Wen, “Local
discriminant direction binary pattern for palmprint representation and Raymond N. J. Veldhuis (Senior Member, IEEE)
recognition,” IEEE Trans. Circuits Syst. Video Technol., vol. 30, no. 2, received the degree from the University of Twente,
pp. 468–481, Feb. 2020. Twente, The Netherlands, in 1981, and the Ph.D.
[60] T.-H. Chan, K. Jia, S. Gao, J. Lu, Z. Zeng, and Y. Ma, “PCANet: A degree from Nijmegen University, Nijmegen, The
simple deep learning baseline for image classification?” IEEE Trans. Netherlands, on a thesis entitled Adaptive Restora-
Image Process., vol. 24, no. 12, pp. 5017–5032, Dec. 2015. tion of Lost Samples in Discrete-Time Signals and
[61] H. O. Song, Y. Xiang, S. Jegelka, and S. Savarese, “Deep metric Digital Images, in 1988.
learning via lifted structured feature embedding,” in Proc. IEEE Conf. From 1982 to 1992, he was a Researcher
Comput. Vis. Pattern Recognit. (CVPR), Seattle, WA, USA, Jun. 2016, with Philips Research Laboratories, Eindhoven, The
pp. 4004–4012. Netherlands, in various areas of digital signal
[62] X. Wang, X. Han, W. Huang, D. Dong, and M. R. Scott, “Multi- processing. From 1992 to 2001, he was involved in
similarity loss with general pair weighting for deep metric learning,” the field of speech processing. He is currently a Full Professor in biometric
in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Long pattern recognition with the University of Twente, where he is leading a
Beach, CA, USA, Jun. 2019, pp. 5017–5025. research team in this field. The main research topics are face recognition (2-D
[63] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for and 3-D), fingerprint recognition, vascular pattern recognition, multibiometric
image recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. fusion, and biometric template protection. The research is both applied and
(CVPR), Seattle, WA, USA, Jun. 2016, pp. 770–778. fundamental.