A Survey of Deep Learning For Hyperspectral Image Classification
A Survey of Deep Learning For Hyperspectral Image Classification
A Survey of Deep Learning For Hyperspectral Image Classification
Neurocomputing
journal homepage: www.elsevier.com/locate/neucom
a r t i c l e i n f o a b s t r a c t
Article history: With the rapid development of deep learning technology and improvement in computing capability, deep
Received 7 February 2021 learning has been widely used in the field of hyperspectral image (HSI) classification. In general, deep
Revised 4 March 2021 learning models often contain many trainable parameters and require a massive number of labeled sam-
Accepted 11 March 2021
ples to achieve optimal performance. However, in regard to HSI classification, a large number of labeled
Available online 23 March 2021
Communicated by Zidong Wang
samples is generally difficult to acquire due to the difficulty and time-consuming nature of manual label-
ing. Therefore, many research works focus on building a deep learning model for HSI classification with
few labeled samples. In this article, we concentrate on this topic and provide a systematic review of the
Keywords:
Hyperspectral image classification
relevant literature. Specifically, the contributions of this paper are twofold. First, the research progress of
Deep learning related methods is categorized according to the learning paradigm, including transfer learning, active
Transfer learning learning and few-shot learning. Second, a number of experiments with various state-of-the-art
Few-shot learning approaches has been carried out, and the results are summarized to reveal the potential research direc-
tions. More importantly, it is notable that although there is a vast gap between deep learning models
(that usually need sufficient labeled samples) and the HSI scenario with few labeled samples, the issues
of small-sample sets can be well characterized by fusion of deep learning methods and related tech-
niques, such as transfer learning and a lightweight model. For reproducibility, the source codes of the
methods assessed in the paper can be found at https://github.com/ShuGuoJ/HSI-Classification.git.
Ó 2021 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license
(http://creativecommons.org/licenses/by/4.0/).
https://doi.org/10.1016/j.neucom.2021.03.035
0925-2312/Ó 2021 The Author(s). Published by Elsevier B.V.
This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
Fig. 1. Illustration of the massive gap between practical situations (i.e., few labeled samples) and a large number of labeled samples of deep learning-based methods. Here,
the widely used Kennedy Space Center (KSC) hyperspectral image is employed, which contains 13 land covers and 5211 labeled samples (detailed information can be found in
the experimental section). Generally, sufficient samples are required to well train a deep learning model (as illustrated in the right figure), which is hard to be achieved in
practice due to the difficulty of manually labeling (as shown in the left figure).
Fig. 2. The category of deep learning-based methods for hyperspectral image classification. The left is from the model architecture point of view, while the right is from the
learning paradigm point of view. It is worth noting that the both kinds of methods can be combined arbitrarily.
tra contain much redundant information and the relation between tional machine learning model, deep learning technology does
spectra and ground objects is non-linear, which enlarges the diffi- not need to artificially design feature patterns and can automati-
culty of the model classification. Therefore, most later methods cally learn patterns from data. Therefore, it has been successfully
give more attention to dimension reduction and feature extraction applied in the fields of natural language processing, speech recog-
to learn the more discriminative feature. For the approaches based nition, semantic segmentation, autonomous driving, and object
on dimension reduction, principle component analysis [17], inde- detection, and gained excellent performance. Recently, it also has
pendent component analysis [18], linear discriminant analysis been introduced into the field of HSI classification. Researchers
[19], and low-rank [20] are widely used. Nevertheless, the perfor- have proposed a number of new deep learning-based HIS classifi-
mance of those models is still unsatisfactory. Because, there is a cation approaches, as shown in the left part of Fig. 2. Currently,
common phenomenon in the hyperspectral image which is that all methods, based on the joint spectral-spatial feature, can be
different surface objects may have the same spectral characteristic divided into two categories–Two-Stream and Single-Stream,
and, otherwise, the same surface objects may have different spec- according to whether they simultaneously extract the joint
tral characteristics. The variability of spectra of ground objects is spectral-spatial feature. The architecture of two-stream usually
caused by illumination, environmental, atmospheric, and temporal includes two branches–spectral branch and spatial branch. The for-
conditions. Those enlarge the probability of misclassification. Thus, mer is to extract the spectral feature of the pixel, and the latter is to
those methods are only based on spectral information, and ignore capture the spatial relation of the central pixel with its neighbor
spatial information, resulting in unsatisfactory classification per- pixels. And the existing methods have covered all deep learning
formance. The spatial characteristic of ground objects supply abun- modules, such as fully connected layer, convolutional layer, and
dant information of shape, context, and layout about ground recurrent unit.
objects, and neighboring pixels belong to the same class with high In the general deep learning framework, a large number of
probability, which is useful for improving classification accuracy training samples should be provided to well train the model and
and robustness of methods. Then, a large number of feature extrac- tune the numerous parameters. However, in practice, manually
tion methods that integrate the spatial structural and texture infor- labeling is often very time-consuming and expensive due to the
mation with the spectral features have been developed, including need for expert knowledge, and thus, a sufficient training set is
morphological [21–23], filtering [24,25], coding [26], etc. Since often unavailable. As shown in Fig. 1,2 (here the widely used Ken-
deep learning-based methods are mainly concerned in this paper, nedy Space Center (KSC) hyperspectral image is utilized for illus-
the readers are referred to [27] for more details on these conven- tration), the left figure randomly selects 10 samples per class and
tional techniques. contains 130 labeled samples in total, which is very scattered
In the past decade, deep learning technology has developed and can hardly be seen. Alternatively, the right figure in Fig. 1 dis-
rapidly and received widespread attention. Compared with tradi- plays 50% of labeled samples, which is more suitable for deep
180
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
Fig. 3. The architecture of the autoencoder. The solid line represents training, while the dashed line represents inference.
learning-based methods. Hence, there is a vast gap between the 2.1. Autoencoder for HSI classification
training samples required by deep learning models and the labeled
samples that can be collected in practice. And there are many An autoencoder [30] is a classic neural network, which consists
learning paradigms proposed for solving the problem of few label of two parts: an encoder and a decoder. The encoder pencoder ðh j xÞ
samples, as shown in the right part of Fig. 2. In Section 2, we will maps the input x as a hidden representation h, and then, the deco-
discuss them in detail. And they can be integrated with any model ^ j hÞ reconstructs x
der pdecoder ðx ^ from h. It aims to make the input
architecture. Some pioneering works such as [28] started the topic and output as similar as possible. The loss function can be formu-
by training a deep model with good generalization only using few lated as follows:
labeled samples. However, there are still many challenges for this
topic. ^Þ ¼ min j x x
Lðx; x ^j ð1Þ
In this paper, we hope to provide a comprehensive review of the
where L is the similarity measure. If the dimension of h is smaller
state-of-the-art deep learning-based methods for HSI classification
than x, the autoencoder procedure is undercomplete and can be
with few labeled samples. First, instead of separating the various
used to reduce the data dimension. Evidently, if there is not any
methods according to feature fusion manner, such as spectral-
constraint on h, the autoencoder is the simplest identical function.
based, spatial-based, and joint spectral-spatial-based methods,
In other words, the network does not learn anything. To avoid such
the research progress of methods related to few training samples
a situation, the usual way is to add the normalization term XðhÞ to
is categorized according to the learning paradigm, including trans-
the loss. In [31,32], the normalization of the autoencoder, referred
fer learning, active learning, and few-shot learning. Second, a num- P
as a sparse autoencoder, is XðhÞ ¼ k i hi , which will make most of
ber of experiments with various state-of-the-art approaches have
the parameters of the network very close to zero. Therefore, it is
been carried out, and the results are summarized to reveal the
equipped with a certain degree of noise immunity and can produce
potential research directions. Further, it should be noted that dif-
the sparsest representation of the input. Another way to avoid the
ferent from the previous review papers [12,29], this paper mainly
identical mapping is by adding some noise into x to make the dam-
focuses on the few labeled sample issue, which is considered as
aged input xnoise and then forcing the decoder to reconstruct the x.
the most challenging problem in the HSI classification scenario.
In this situation, it becomes the denoising autoencoder [33], which
For reproducibility, the source codes of the methods conducted
can remove the additional noise from xnoise and produce a powerful
in the paper can be found at the web site for the paper1.
hidden representation of the input. In general, the autoencoder
The remainder of this paper is organized as follows. Section 2
plays the role of feature extractor [34] to learn the internal pattern
introduces the deep models that are popular in recent years. In Sec-
of data without labeled samples. Fig. 3 illustrates the basic architec-
tion 3, we divide the previous works into four mainstream learning
ture of the autoencoder model.
paradigms, including transfer learning, active learning, and few-
Therefore, Chen et al. [35] used an autoencoder for the first time
shot learning. In Section 4, we performed many experiments, and
for feature extraction and classification of HSIs. First, in the pre-
a number of representative deep learning-based classification
training stage, the spectral vector of each pixel directly inputs
methods are compared on several real hyperspectral image data
the encoder module, and then, the decoder is used to reconstruct
sets. Finally, conclusions and suggestions are provided in Section 5.
it so that the encoder has the ability to extract spectral features.
Alternatively, to obtain the spatial features, principal component
analysis (PCA) is utilized to reduce the dimensionality of the
2. Deep learning models for HSI classification hyperspectral image, and then, the image patch is flattened into
a vector. Another autoencoder is employed to learn the spatial fea-
In this section, three classical deep learning models, including tures. Finally, the spatial-spectral joint information obtained above
the autoencoder, convolutional neural network (CNN), and recur- is fused and classified. Subsequently, a large number of hyperspec-
rent neural network (RNN), for HSI classification are respectively tral image classification methods [36,37] based on autoencoders
described, and the relevant references are reviewed. appeared. Most of these methods adopt the same training strategy
as [35], which is divided into two modules: fully training the enco-
1
https://github.com/ShuGuoJ/HSI-Classification.git der in an unsupervised manner and fine-tuning the classifier in a
181
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
supervised manner. Each of these methods attempts different by the pooling operation in the output, and thus, more abstract fea-
types of encoders or preprocessing methods to adapt to HSI classi- tures can be reserved.
fication under the condition of small samples. For example, Xing In the early works of applying CNNs for HSI classification, two-
et al. [37] stack multiple denoising autoencoders to form a feature dimensional convolution was the most widely used method, which
extractor, which has a stronger anti-noise ability to extract more is mainly employed to extract spatial texture information
robust representations. Given that the same ground objects may [48,28,49], but the redundant bands greatly enlarge the size of
have different spectra while different ground objects may exhibit the convolutional kernel, especially the channel dimensionality.
similar spectra, spectral-based classification methods often fail to Later, a combination of one-dimensional convolution and two-
achieve satisfactory performance, and spatial structural informa- dimensional convolution appeared [50] to solve the above prob-
tion of objects provides an effective supplement. To gain a better lem. Concretely, one-dimensional and two-dimensional convolu-
spatial description of an object, some autoencoder models com- tions are responsible for extracting spectral and spatial features,
bined with convolutional neural networks (CNNs) have been devel- respectively. The two types of features are then fused before being
oped [38,39]. Concretely, the autoencoder module is able to extract input to the classifier. For the small training sample problem, due
spectral features on large unlabeled samples, while the CNN is pro- to insufficient labeled samples, it is difficult for CNNs to learn effec-
ven to be able to extract spatial features well. After fusion, the tive features. For this reason, some researchers usually introduced
spatial-spectral features can be achieved. Further, to reduce the traditional machine learning methods, such as attribute profiles
number of trainable parameters, some researchers use the light- [51], GLCM [52], hash learning [53], and Markov Random fields
weight models, such as SVMs [40,41], random forests [42,43] or [54], to introduce prior information to the convolutional network
logistic regression [35,44], to serve as the classifier. and improve the performance of the network. Similar to the trend
Due to the three-dimensional (3D) pattern of hyperspectral of autoencoder-based classification methods, three-dimensional
images, it is desirable to simultaneously investigate the spectral CNN models have also been applied to HSI classification in recent
and spatial information such that the joint spatial-spectral correla- years and have shown better feature fusion capabilities [55,56].
tion can be better examined. Some three-dimensional operators However, due to the large number of parameters, three-
and methods have been proposed. In the preprocessing stage, Li dimensional convolution is not suitable for solving small-sample
et al. [45] utilized the 3D Gabor operator to fuse spatial informa- classification problems under supervised learning. To reduce the
tion and spectral information to obtain spatial-spectral joint fea- number of parameters of 3D convolution, Fang et al. [57] designed
tures, which were then fed into the autoencoder to obtain more a 3D separable convolution. In contrast, Mou et al. [58,59] intro-
abstract features. Mei et al. [41] used a 3D convolutional operator duced an autoencoder scheme into the three-dimensional convolu-
to construct an autoencoder to extract spatial-spectral features tion module to solve this problem. By a combination with the
directly. In addition, image segmentation has been introduced to classic autoencoder training method, the three-dimensional con-
characterize the region structure of objects to avoid misclassifica- volution autoencoder can be trained in an unsupervised learning
tion of pixels at the boundary [46]. Therefore, Liu et al. [47] utilized manner, and then, the decoder is replaced with a classifier, while
superpixel segmentation technology as a postprocessing method to the parameters of the encoder are frozen. Finally, a small classifier
perform boundary regularization on the classification map. is trained by supervised learning. Moreover, due to the success of
ResNet [60], scholars have studied the HSI classification problem
2.2. Convolutional Neural Networks (CNNs) for HSI classification based on convolutional residuals [58,59,62,62]. These methods
try to use jump connections to enable the network to learn com-
In theory, the CNN uses a group of parameters that refer to a plex features with a small number of labeled samples. Similarly,
kernel function or kernel to scan the image and produce a specified CNNs with dense connections have also been introduced into this
feature. It has three main characteristics that make it very power- field [63,64]. In addition, the attention mechanism is another hot-
ful for feature representation, and thus, the CNN has been success- pot for fully mining sample features. Concretely, Haut and Xiong
fully applied in many research fields. The first one is the local et al. [65,66] incorporated the attention mechanism with CNNs
connection that greatly decreases the number of trainable param- for HSI classification. Although the above models can work well
eters and makes itself suitable for processing large images. This is on HSI, they cannot overcome the disadvantage of the low spatial
the most obvious difference from the fully connected network, resolution of HSIs, which may cause mixed pixels. To make up
which has a full connection between two neighboring neural layers for this shortcoming, multimodality CNN models have been pro-
and is unfriendly for large spatial images. To further reduce the posed. These methods [67–69] combine HSIs and LiDAR data
number of parameters, the same convolutional kernel shares the together to increase the discriminability of sample features. More-
same parameters, which is the second characteristic of CNNs. In over, to achieve good performance under the small-sample scenar-
contrast, in the traditional neural network, the parameters of the io, Yu et al. [28] enlarged the training set through data
output are independent from each other. However, the CNN augmentation by implementing rotation and flipping. On the one
applies the same parameters for all of the output to cut back the hand, this method increases the number of samples and improves
number of parameters, leading to the third characteristic: shift their diversity. On the other hand, it enhances the model’s ability of
invariance. It means that even if the feature of an object has shifted rotation invariance, which is important in some fields such as
from one position to another, the CNN model still has the capacity remote sensing. Subsequently, Li et al. [70,71] designed a data aug-
to capture it regardless of where it appears. Specifically, a common mentation scheme for HSI classification. They combined the sam-
convolutional layer consists of three traditional components: lin- ples in pairs so that the model no longer learns the
ear mapping, the activation function and the pooling function. Sim- characteristics of the samples themselves but learns the differ-
ilar to other modern neural network architectures, activation ences between the samples. Different combinations make the scale
functions are used to bring a nonlinear mapping feature into the of the training set larger, which is more conducive for model
network. Generally, the rectified linear unit (ReLU) is the prior training.
choice. Pooling makes use of the statistical characteristic of the
local region to represent the output of a specified position. Taking 2.3. Recurrent neural network (RNN) for HSI classification
the max pooling step as an example, it employs the max value to
replace the region of input. Clearly, the pooling operation is robust Compared with other forms of neural networks, recurrent neu-
to small changes and noise interfere, which could be smoothed out ral networks (RNNs) [72] have memory capabilities and can record
182
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
the context information of sequential data. Because of this memory neighborhood are sequentially input into the network to deeply
characteristic, recurrent neural networks are widely used in tasks mine the context information in the spatial neighborhood. In [76],
such as speech recognition and machine translation. More pre- Zhou et al. used two LSTMs to extract spectral features and spatial
cisely, the input of a recurrent neural network is usually a features. In particular, for the extraction of spatial features, PCA is
sequence of vectors. At each time step t, the network receives an first used to extract principal components from the sample rectan-
element xt in a sequence and the state ht1 of the previous time gular space neighborhood. Then, the first principal component is
step, and produces an output yt and a state ht representing the con- divided into several lines to form a set of sequence data, and grad-
text information at the current moment. This process can be for- ually input into the network. In contrast, Ma and Zhang et al.
mulated as: [77,78] measures the similarity between the sample point in the
spatial neighborhood and the center point. The sample points in
ht ¼ f ðWhh ht1 þ Wxh xt þ bÞ ð2Þ
the neighborhood will be reordered according to the similarity
where Wxh represents the weight matrix from the input layer to the and then input into the network step by step. This approach allows
hidden layer, Whh denotes the state transition weight in the hidden the network to focus on learning sample points that are highly sim-
layer, and b is the bias. It can be seen that the current state of the ilar to the center point, and the memory of the internal hidden state
recurrent neural network is controlled by both the state of the pre- can thus be enhanced. Erting Pan et al. [79] proposed an effective
vious time step and the current input. This mechanism allows the tiny model for spectral-spatial classification on HSIs based on a sin-
recurrent neural network to capture the contextual semantic infor- gle gate recurrent unit (GRU). In this work, the rectangular space
mation implicitly between the input vectors. For example, in the neighborhood is flattened into a vector, which is used to initialize
machine translation task, it can enable the network to understand the hidden vector h0 of GRU, and the center point pixel vector is
the semantic relationship between words in a sentence. input into the network to learn features.
However, the classic RNN is prone to encounter gradient explo- In addition, Wu and Saurabh argue that it is difficult to dig out
sion or gradient vanishing problems during the training process. the internal features of the sample by directly inputting a single
When there are too many inputs, the derivation chain of the RNN original spectral vector into the RNN [80,81]. The authors use a
will become too long, making the gradient value close to infinity one-dimensional convolution operator to extract multiple feature
or zero. Therefore, the classic RNN model is replaced by a long vectors from the spectrum vector, which form a feature sequence
short-term memory (LSTM) network [72] or a gated recurrent unit and are then input to the RNN. Finally, the fully connected layer
(GRU) [73] in the HSI classification task. and the softmax function are adopted to obtain the classification
Both LSTM and GRU use gating technology to filter the input result. It can be seen that only using recurrent neural networks
and the previous state so that the network can forget unnecessary or one-dimensional convolution to extract the spatial-spectrum
information and retain the most valuable context. LSTM maintains joint features is actually not efficient because this will cause the
an internal memory state, and there are three gates: input gate it , loss of spatial structure information. Therefore, some researchers
forget gate f t and output gate ot , which are formulated as: combine two-dimensional/three-dimensional CNNs with an RNN
and use convolution operators to extract spatial-spectral joint fea-
it ¼ rðWi ½xt ; ht1 Þ ð3Þ tures. For example, Hao et al. [82] utilized U-Net to extract features
and input them into an LSTM or GRU so that the contextual infor-
f t ¼ rðWf ½xt ; ht1 Þ ð4Þ mation between features could be explored. Moreover, Shi et al.
[83] introduced the concept of the directional sequence to fully
ot ¼ rðWio ½xt ; ht1 Þ ð5Þ extract the spatial structure information of HSIs. First, the rectan-
It can be found that the three gates are generated based on the cur- gular area of the sampling point is divided into nine overlapping
rent input and the previous state. First, the current input and the patches. Second, the patch will be mapped to a set of feature vec-
previous state will be spliced and mapped to a new input g t accord- tors through a three-dimensional convolutional network, and the
ing to the following formula: relative position of the patch can generate 8 combinations of direc-
tions (for example, top, middle, bottom, left, center, and right) to
g t ¼ tanhðWg ½xt ; ht1 Þ ð6Þ form a direction sequence. Finally, the sequence is input into the
Subsequently, the input gate, the forget gate, the new input g t and LSTM or GRU to obtain the classification result. In this way, the
^ t1 update the internal memory state spatial distribution and structural characteristics of the features
the internal memory unit h
can be explored.
tegother. In this process, LSTM discards invalid information and
adds new semantic information.
^t ¼ f t h
^ t1 þ it g 3. Deep learning paradigms for HSI classification with few
h t ð7Þ labeled samples
Finally, the new internal memory state is filtered by the output gate
to form the output of the current time step Although different HSI classification methods have different
specific designs, they all follow some learning paradigms. In this
^t Þ
ht ¼ ot tanhðh ð8Þ section, we mainly introduce several learning paradigms that are
applied to HSI classification with few labeled training samples.
Concerning HSI processing, each spectral image is a high-
These learning paradigms are based on specific learning theories.
dimensional vector and can be regarded as a sequence of data.
We hope to provide a general guide for researchers to design
There are many works using LSTM for HSI classification tasks. For
algorithms.
instance, Mou et al. [74] proposed an LSTM-based HSI classification
method for the first time, and their work only focused on spectral
information. For each sample pixel vector, each band is input into 3.1. Deep transfer learning for HSI classification
the LSTM step by step. To improve the performance of the model,
spatial information is considered in subsequent research. For exam- Transfer learning [84] is an effective method to deal with the
ple, Liu et al. fully considered the spatial neighborhood of the sam- small-sample problem. Transfer learning tries to transfer knowl-
ple and used a multilayer LSTM to extract spatial spectrum features edge learned from one domain to another. First, there are two data
[75]. Specifically, in each time step, the sampling points of the sets/domains, one is called a source domain that contains abun-
183
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
dant labeled samples, and the other is called a target domain and learning. The network is named DANN (domain-adversarial neural
only contains few labeled samples. To facilitate the subsequent network), which is different from DaNN proposed by Ghifary et al.
description, we define the source domain as Ds , the target domain [91]. The generator Gf and the discriminator Gd compete with each
as Dt , and their label spaces as Ys and Yt , respectively. Usually, the other until they have converged. In transfer learning, the data in
data distribution of the source domain and the target domain are one of the domains (usually the target domain) are regarded as
inconsistent: PðX s Þ – PðX t Þ. Therefore, the purpose of transfer the generated sample. The generator aims to learn the characteris-
learning is to use the knowledge learned from Ds to identify the tics of the target domain sample so that the discriminator cannot
labels of samples in Dt . distinguish which domain the sample comes from to achieve the
Fine-tuning is a general method in transfer learning that uses Ds purpose of domain adaptation. Therefore, Gf is used to represent
to train the model and adjust it by Dt . Its original motivation is to the feature extractor here.
reduce the number of samples needed during the training process. Elshamli et al. [95] first introduced the concept of DANN to the
Since deep learning models generally contain a vast number of task of hyperspectral image classification. Compared to general
parameters and if it is trained on the target domain Dt , it is easy GNN, it has two discriminators. One is the class discriminator pre-
to overfit and perform poorly in practice. However, fine-tuning dicting the class labels of samples, and the other is the domain dis-
allows the model parameters to reach a suboptimal state, and a criminator predicting the source of the samples. Different from the
small number of training samples of the target domain can tune two-stage method, DANN is an end-to-end model that can perform
the model to reach the optimal state. It involves two steps. First, representation learning and classification tasks simultaneously.
the specific model will be fully trained on the source domain Ds Moreover, it is easy to train. Further, it outperforms two-stage
with abundant labeled samples to make the model parameters frameworks such as the denoising autoencoder and traditional
arrive at a good state. Then, the model is transferred to the target approaches such as PCA in hyperspectral image classification.
domain Dt , except for some task-related modules, and slightly
tuned on Dt so that the model fits the data distribution of the target 3.2. Deep Active Learning for HSI classification
domain Dt .
Because the fine-tuning method is relatively simple, it is widely Active learning [96] in the supervised learning method can effi-
used in the transfer learning method for hyperspectral image clas- ciently deal with small-sample problems. It can effectively learn
sification. To our knowledge, Yang et al. [85] are the first to com- discriminative features by autonomously selecting representative
bine deep learning with transfer learning to classify or high-information samples from the training set, especially when
hyperspectral images. The model consists of two convolutional the labeled samples are scarce. Generally speaking, active learning
neural networks, which are used to extract spectral features and consists of five components, A ¼ ðC; L; U; Q ; SÞ. Among them, C rep-
spatial features. Then, the joint spectral-spatial feature will be resents one or a group of classifiers. L and U represent the labeled
input into the fully connected layer to gain a final result. According samples and unlabeled samples, respectively. Q is the query func-
to fine-tuning, the model is first fully trained on the hyperspectral tion, which is used to query the samples with a large amount of
image of the source domain. Next, the fully connected layer is information among the unlabeled samples. S is an expert and can
replaced and the parameters of the convolutional network are label unlabeled samples. In general, active learning has two stages.
reserved. Finally, the transfer model will be trained on the target The first stage is the initialization stage. In this stage, a small num-
hyperspectral image to adapt to the new data distribution. The ber of samples will be randomly selected to form the training set L
later transfer learning models based on fine-tuning basically follow and be labeled by experts to train the classifier. The second stage is
that architecture [86–89]. It is worth noting that Deng et al. [90] the iterative query. Q will select new samples from the unlabeled
combined transfer learning with active learning to classify HSI. sample set U for S to mark them based on the results of the previ-
Data distribution adaptation is another commonly used transfer ous iteration and add them to the training set L. The active learning
learning method. The basic idea of this theory is that in the original method applied to hyperspectral image classification is mainly
feature space, the data probability distributions of the source based on the active learning algorithm of the committee and the
domain and the target domain are usually different. However, they active learning algorithm based on the posterior probability. In
can be mapped to a common feature space together. In this space, the committee-based active learning algorithm, the EQB method
their data probability distributions become similar. In 2014, Ghi- uses entropy to measure the amount of information in unlabeled
fary et al. [91] first proposed a shadow neural network-based samples. Specifically, the training set L will be divided into k sub-
domain adaptation model, called DaNN. The innovation of this sets to train k classifiers and then use these k classifiers to classify
work is that a maximum mean discrepancy (MMD) adaptation all unlabeled samples. Therefore, each unlabeled sample corre-
layer is added to calculate the distance between the source domain sponds to k predicted labels. The entropy value is calculated from
and the target domain. Moreover, the distance is merged into the this:
loss function to reduce the difference between the two data distri-
butions. Subsequently, Tzeng et al. [92] extended this work with a HEQB ðxi Þ
deeper network and proposed deep domain confusion to solve the xEQB ¼ arg min ð9Þ
xi 2U logðNi Þ
adaptive problem of deep networks. Wang et al. [93] introduced
the deep domain adaptation model to the field of hyperspectral where H represents the entropy value, and Ni represents the num-
image classification for the first time. In [93], two hyperspectral ber of classes predicted by the sample xi . Samples with large
images from different scenes will be mapped to two low- entropy will be selected and manually labeled [97]. In [98], the deep
dimensional subspaces by the deep neural network, in which the belief network is used to generate the mapping feature h of the
samples are represented as manifolds. MMD is used to measure input x in an unsupervised way, and then, h will be used to calculate
the distance between two low-dimensional subspaces and is added the information entropy. At the same time, sparse representation is
to the loss function to make two low-dimensional subspaces have used to estimate the representations of the sample. In the process of
high similarity. In addition, they still add the sum of the distances selecting samples for active learning, the information entropy and
between samples and their neighbor into the loss function to representations of the samples are comprehensively considered.
ensure that the low-dimensional manifold is discriminative. In contrast, the active learning method based on posterior prob-
Motivated by the excellent performance of generative adversar- ability [99–101] is more widely used. Breaking ties belongs to the
ial net (GAN), Yaroslav et al. [94] first introduced it into transfer active learning method of posterior probability, which is widely
184
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
used in hyperspectral classification tasks. This method first uses The relation network [108] is another metric-based model of
specifies models, such as convolutional networks, maximum likeli- few-shot learning. In general, it has two modules: the embedding
hood estimation classifiers, support vector machines, etc., to esti- function f / : RD ! RM and relation function f w : R2M ! R. The func-
mate the posterior probabilities of all samples in the candidate tion of the embedding module is the same as the prototype net-
pool. Then, the approach uses the posterior probability to input work, and its key idea is the relation module. The relation
the following formula to produce a measure of sample uncertainty: module is to calculate the similarity of samples. It is a learnable
module that is different from the Euclidean distance or cosine dis-
xBT ¼ arg min maxp yi ¼ wjxi maxþ pðyi ¼ wjxi Þ ð10Þ tance. In other words, the relation network introduces a learnable
xi 2U w2N w2Nnw
metric function based on the prototype network. The relation mod-
In the above formula, we first calculate the difference between the ule can more precisely describe the difference of samples by the
largest probability and the second-largest probability among the study. During inference, the query embedding f w ðxi Þ will be com-
posterior probabilities of all candidate samples and select the sam- bined with the support embedding f w ðxj Þ as Cðf w ðxi Þ; f w ðxj ÞÞ. Usu-
ple with the minimum difference to join the valuable data set. The ally, Cð; Þ is a concatenation operation. Then, the relation
lower xBT is, the more uncertain is the sample. In [99], Li et al. first function will transform the splicing vector to a relation score ri;j ,
used an autoencoder to construct an active learning model for which indicates the similarity between xi and xj .
hyperspectral image classification tasks. At the same time, Sun
et al. [100] also proposed a similar method. However, this method ri;j ¼ f w ðCðf w ðxi Þ; f w ðxj ÞÞÞ ð12Þ
only uses spectral features. Because of the effectiveness of spatial
Several works have introduced the relation network into hyper-
information, in [102], when generating the posterior probability,
spectral image classification to solve the small sample set problem.
the space-spectrum joint features are considered at the same time.
Deng et al. [109] first introduced the relation network into HSI. They
In contrast, Cao et al. [101] use convolutional neural networks to
use a 2-dimensional convolutional neural network to construct
generate the posterior probability.
both the embedding function and relation function. Gao et al.
In general, the active learning method can automatically select
[110] and Ma et al. [111] have also proposed a similar architecture.
effective samples according to certain criteria, reduce inefficient
In [112], to extract the joint spatial-spectral feature, Rao et al.
redundant samples, and thus well alleviate the problem of missing
implemented the embedding function with a 3-dimensional convo-
training samples in the small-sample problem.
lutional neural network.
The Siamese network [113–115] is a typical network in few-
3.3. Deep few-shot learning for HSI classification shot learning. Compared with the above network, its input is a
sample pair. Thus, it is composed by two parallel subnetworks
Few-shot learning is among meta-learning approaches and aims
f /1 : RD ! RM with the same structure and sharing parameters.
to study the difference between the samples instead of directly
The subnetworks respectively accept an input sample and map it
learning what the sample is, different from most other deep learn-
to a low-dimensional metric space to generate their own embed-
ing methods. It makes the model learn to learn. In few-shot classi-
ding f /1 ðxi Þ and f /1 ðxj Þ. The Euclidean distances Dðxi ; xj Þ is used
fication, given a small support set with N labeled samples
to measure their similarity.
SkN ¼ fðx1 ; y1 Þ; ; ðxN ; yN Þg, which have k categories, the classifier
will mask the query sample with the label of the largest similarity Dðxi ; xj Þ ¼ kf /1 ðxi Þ f /1 ðxj Þk2 ð13Þ
sample among SkN . To achieve this target, many learning frame-
The higher the similarity between the two samples is, the more
works have been proposed and they can be divided into two cate-
likely they are to belong to the same class. Recently, the Siamese
gories: meta-based model and metric-based model.
network was introduced into HSI classification. Usually, a 2-
The prototype network [103] is one of the metric-based models
dimensional convolutional neural network [116,117] is used to
of few-shot learning. Its basic idea is that every class can be
serve as the embedding function, as in the above two networks.
depicted by a prototype representation, and the samples that
In the same way, several methods combined the 1-dimensional
belong to the same category should be around the class prototype.
convolution neural network with the 2-dimensional one [118,119]
First, all samples will be transformed into a metric space through
or use a 3-dimensional network [120] for the joint spectral-spatial
an embedding function f / : RD ! RM and represented by the
feature. Moreover, Miao et al. [121] have tried to use the stack
embedding vector ck 2 RM . Due to the powerful ability of the con- autoencoder to construct the embedding function f /1 . After training,
volutional network, it is used as the embedding function. More-
the model has the ability to identify the difference between sam-
over, the prototype vector is usually the mean of the embedding
ples. To obtain the final classification result, we still need a classifier
vector of the samples in the support set for each class ci .
to classify the embedding feature of the sample, which is different
1 X from the prototype network and the relation network. To avoid
ci ¼ i
f / ðxj Þ ð11Þ overfitting under limited labeled samples, an SVM is usually used
jS j ðx ;y Þ2Si
j j as a classifier since it is famous for its lightweight.
In [104], Liu et al. simply introduce the prototype network into
hyperspectral image classification task and use ResNet [60] to serve 4. Experiments
as a feature extractor that maps the samples into a metric space.
Then, the prototype network is significantly improved for the In most papers, comprehensive experiments and analysis are
hyperspectral image classification task by [105]. In the paper, the introduced to describe the advantages and disadvantages of the
spatial-spectral feature is first integrated by the local pattern cod- methods in the paper. However, the problem is that different
ing, and the 1D-CNN converts it to an embedding vector. The proto- papers may choose different experimental settings. For example,
type is the weighted mean of these embedding vectors, which is the same number of samples for training or test is used in the
contrary to the general prototype network. In [106] Xi et al. replace experiments, and the chosen samples are normally different since
the mapping function with hybrid residual attention [107] and they are chosen randomly. To evaluate different methods fairly, we
introduce a new loss function to force the network to increase the should use the exact same experimental setting. That is the reason
interclass distance and decrease the intraclass distance. why we design experiments to evaluate different methods.
185
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
Table 4
Originators of model implementations. F denotes that the code of the model comes from the original paper. T denotes our implemented model.
186
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
SAE_LR [34]. This is the first paper to introduce the autoencoder assigned to the corresponding clusters. Finally, a simple KNN
into hyperspectral image classification, opening a new era of is used to classify the query sample. In the experiment, the
hyperspectral image processing. It adopts a raw autoencoder neighbor region of the pixel is fixed as 5 5 and the feature
composed of linear layers to extract the feature. The size of dimension is set to 64.
3DCAE [40]. This is a 3D convolutional autoencoder adopting a
3D convolution layer to extract the joint spectral-spatial fea-
ture. First, 3DCAE is trained by the traditional method, and then,
an SVM classifier is adopted to classify the hidden features on
the top of 3DCAE. In the experiment, the neighbor region of
the pixel is set to 5 5 and 90% of the samples are used to train
the 3D autoencoder. There are two different hyperparameter
settings corresponding to Salinas and PaviaU, and the model
has not been tested on KSC in [122]. Therefore, on the KSC,
the model uses the same hyperparameter configuration as on
the Salinas because they are collected by the same sensor.
SSDL [37]. This is a typical two-stream structure extracting the
spectral and spatial feature separately through two different
branches and merging them at the end. Inspired by [35], the
author adopts a 1D autoencoder to extract the spectral feature.
In the branch of spatial feature extraction, the model uses a spa-
tial pyramid pooling layer to replace the traditional pooling
layer on the top convolutional layer. The spatial pyramid pool-
ing layer enables the deep convolutional neural network to gen-
Fig. 4. Flowchart of the fine-tuning method. The solid line represents pretraining,
and the dashed line represents fine-tuning. f x is a learning function. erate a fixed-length feature. On the one hand, it enables the
model to convert the input of different sizes into a fixed-
length, which is good for the module that is sensitive to the
the neighbor region is 5 5, and the first 4 components of input size; on the other hand, it is useful for the model to better
PCA are chosen. Subsequently, we can gain a spatial feature vec-
tor. Before inputting into the model, the raw spatial feature and
the spatial feature are stacked to form a joint feature. To reduce
the difficulty of training, it uses a greedy layerwise pretraining
method to train each layer, and the parameters of the encoder
and decoder are symmetric. Then, the encoder concatenates a
linear classifier for fine tuning. According to [35], the hidden
size is set to 60, 20, and 20 for PaviaU, Salinas, and KSC,
respectively.
S-DMM [121]. This is a relation network that contains an
embedding module and relation module implemented by 2D
convolutional networks. The model aims to make samples in
the feature space have a small intraclass distance and a large
interclass distance through a learnable feature embedding func-
tion and a metric function. After training, all samples will be Fig. 7. Architecture of a prototype network.
187
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
Table 5
PaviaU. Classification accuracy obtained by S-DMM [122], 3DCAE [41], SSDL [38], TwoCnn [123], 3DVSCNN [124], SSLstm [76], CNN_HSI [28] and SAE_LR [35] on PaviaU. The best
accuracies are marked in bold. The ‘‘size” in the first line denotes the sample size per category.
Size Classes S-DMM 3DCAE SSDL TwoCnn 3DVSCNN SSLstm CNN_HSI SAE_LR
10 1 94.34 49.41 68.33 71.80 63.03 72.59 84.60 66.67
2 73.13 51.60 72.94 88.27 69.22 68.86 67.57 56.68
3 86.85 54.06 53.71 47.58 71.77 48.08 72.80 46.37
4 95.04 94.81 88.58 96.29 85.10 79.06 93.65 80.10
5 99.98 99.86 97.21 94.99 98.61 93.80 99.84 98.81
6 85.58 57.40 66.21 49.75 75.17 62.53 78.35 55.87
7 98.55 80.34 68.17 58.65 65.61 65.39 92.14 81.42
8 86.47 57.97 64.07 66.95 55.77 67.60 78.17 66.83
9 99.81 98.76 98.83 97.15 96.48 97.02 98.92 98.90
AA 91.08 71.58 75.34 74.60 75.64 72.77 85.12 72.40
OA 84.55 60.00 74.79 78.61 75.17 69.59 82.13 66.05
50 1 97.08 80.76 76.11 88.50 90.60 82.96 93.66 78.83
2 90.09 63.14 87.39 86.43 93.68 82.42 94.82 65.36
3 95.15 62.57 70.28 69.21 90.64 81.59 94.87 65.50
4 97.35 97.33 89.27 98.80 93.47 91.31 94.49 92.43
5 100.00 100.00 98.14 99.81 99.92 99.67 100.00 99.47
6 96.32 80.15 75.12 84.93 94.15 82.58 88.14 72.30
7 99.31 88.45 75.80 83.12 94.98 92.34 97.21 86.04
8 92.97 75.11 70.57 83.57 91.55 84.75 87.52 79.74
9 99.98 99.69 99.61 99.91 98.72 99.39 99.78 99.29
AA 96.47 83.02 82.48 88.25 94.19 88.56 94.50 82.10
OA 94.04 64.17 84.92 90.69 94.23 84.50 95.21 77.42
100 1 97.11 83.05 85.59 92.21 94.38 90.84 94.44 78.64
2 91.64 73.45 86.17 76.86 95.90 83.26 97.75 74.28
3 94.23 73.02 80.29 72.24 95.96 80.66 95.37 79.87
4 98.70 97.87 97.14 99.28 97.65 92.54 95.88 93.54
5 100.00 100.00 99.06 99.89 99.95 99.57 99.99 99.24
6 93.51 86.82 83.16 95.90 97.92 87.61 91.01 69.83
7 99.21 90.17 94.08 89.88 98.39 93.45 98.37 89.42
8 92.73 88.31 88.43 90.03 94.21 90.08 92.41 85.05
9 99.99 99.82 99.65 99.98 99.85 99.80 99.70 99.55
AA 96.35 88.06 90.40 90.70 97.13 90.87 96.10 85.49
OA 94.65 70.15 89.33 94.76 97.05 87.19 97.35 81.44
188
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
Table 6
Salinas. Classification accuracy obtained by S-DMM [122], 3DCAE [41], SSDL [38], TwoCnn [123], 3DVSCNN [124], SSLstm [76], CNN_HSI [28] and SAE_LR [35] on Salinas. The best
accuracies are marked in bold. The ‘‘size” in the first line denotes the sample size per category.
Size Classes S-DMM 3DCAE SSDL TwoCnn 3DVSCNN SSLstm CNN_HSI SAE_LR
10 1 99.45 99.28 76.01 88.22 97.92 79.38 98.80 86.01
2 99.21 59.04 69.24 78.09 99.71 72.49 98.77 44.21
3 96.70 66.54 69.89 74.80 95.09 86.83 95.48 44.72
4 99.56 98.65 94.96 98.19 99.28 99.45 98.36 97.40
5 97.12 81.94 89.43 96.54 93.35 94.95 92.55 83.93
6 89.64 98.52 96.19 98.96 99.81 93.65 99.96 87.28
7 99.82 97.31 76.83 92.52 96.73 87.82 99.61 96.94
8 70.53 68.11 42.58 54.35 67.89 61.64 77.51 41.58
9 99.02 95.06 89.58 81.22 99.42 90.47 97.19 78.45
10 91.13 9.43 76.40 75.18 91.75 86.66 89.23 30.75
11 97.56 72.26 93.04 92.26 95.26 91.37 95.45 23.52
12 99.87 72.16 86.60 86.40 96.65 95.38 99.96 82.63
13 99.25 99.78 95.46 98.18 96.64 96.90 99.22 92.88
14 96.30 89.93 90.50 96.10 99.68 91.68 96.80 62.40
15 72.28 56.98 65.40 55.60 83.86 75.55 72.03 57.10
16 95.29 44.35 75.89 92.39 92.03 88.43 94.07 76.75
AA 93.92 75.58 80.50 84.94 94.07 87.04 94.06 67.91
OA 89.69 71.50 74.29 77.54 90.18 81.20 91.31 67.43
50 1 99.97 98.81 92.70 97.99 99.99 94.18 99.20 85.37
2 99.84 86.97 88.30 91.35 99.94 92.34 99.57 92.51
3 99.84 54.83 87.50 94.87 99.74 97.02 99.62 81.25
4 99.93 98.87 99.41 99.96 99.89 99.95 99.63 98.40
5 99.40 95.62 95.83 98.96 99.38 98.34 98.79 95.12
6 99.92 99.62 98.95 99.87 100.00 98.78 99.98 98.86
7 99.92 98.17 96.47 96.60 99.85 97.80 99.78 98.55
8 68.92 81.74 62.99 68.05 85.35 77.17 77.93 46.04
9 99.76 94.87 95.34 86.01 99.99 96.15 99.71 94.84
10 97.18 12.87 95.31 93.94 98.23 97.23 97.33 77.69
11 99.57 75.82 97.73 97.10 98.59 97.71 99.54 77.14
12 99.90 58.18 97.51 97.16 99.89 98.88 99.84 96.87
13 99.84 99.98 98.55 98.60 100.00 99.12 99.87 97.33
14 98.15 93.80 97.54 99.37 99.91 99.24 99.53 91.49
15 76.12 41.84 69.04 67.21 88.77 86.24 83.39 65.15
16 98.87 69.00 94.34 97.78 98.55 97.64 98.15 91.94
AA 96.07 78.81 91.72 92.80 98.00 95.49 96.99 86.78
OA 90.92 74.73 85.79 87.01 95.30 91.37 95.08 79.49
100 1 99.86 98.81 98.22 98.74 99.99 97.86 99.77 92.44
2 99.74 91.88 96.54 96.70 99.99 97.74 99.86 89.46
3 99.99 63.20 95.40 97.47 99.16 98.91 99.79 92.05
4 99.84 99.12 99.29 99.95 99.85 99.78 99.44 99.03
5 99.58 98.24 98.09 99.61 99.70 98.89 99.54 96.32
6 99.99 99.95 99.12 99.79 100.00 99.62 100.00 98.96
7 99.93 98.71 97.14 97.94 99.88 98.97 99.86 98.42
8 67.88 71.43 59.51 66.83 90.54 86.00 79.90 39.73
9 99.81 95.51 94.87 90.65 99.98 98.15 99.75 96.34
10 96.54 22.92 96.97 96.21 97.77 98.55 97.29 84.35
11 99.75 76.67 99.28 99.25 99.82 99.39 99.70 92.76
12 100.00 64.12 99.39 98.01 99.99 99.84 99.99 96.97
13 99.87 99.98 98.74 99.34 99.98 99.38 99.75 97.48
14 98.66 94.73 98.62 99.72 99.91 99.44 99.67 93.52
15 78.73 63.65 83.03 70.16 91.31 86.77 91.86 69.09
16 99.27 79.70 96.65 99.26 99.26 98.69 99.10 93.21
AA 96.21 82.41 94.43 94.35 98.57 97.37 97.83 89.38
OA 91.56 76.61 88.67 90.25 96.89 94.41 96.28 81.95
adapt to objects of different scales, and the output will include pines-Salinas, and Indian pines-KSC. In [123], they also did
features from coarse to fine, achieving multiscale feature fusion. not test the model on KSC. Thus, we regard Indian pines as
Then, a simple logistic classifier is used to classify the spectra- the source domain for KSC, given that both data sets come from
spatial feature. In the experiment, 80% of the data are used to the same type of sensor. The neighbor region of the pixel is set
train the autoencoder through the method of greedy layer- to 21*21. Additionally, it averages along the spectral channel to
wise pretraining. Moreover, in the spatial branch, the size of reduce the input dimension, instead of PCA. In the pretraining
the neighbor region is set to 42*42 and PCA is used to extract process, 15% of samples of each category of Pavia and 90% of
the first component. Then, the overall model is trained together. samples of each category of Indian pines are treated as the
TwoCnn [122]. This is a two-stream structure based on fine- training data set, and the rest serve as the test data set. To make
tuning. In the spectral branch, it adopts a 1D convolutional layer the number of bands in the source data set and target data set
to capture local information of spectral features, which is the same, we filter out the band that has the smaller variance.
entirely different from SSDL. In particular, transfer learning is According to [123], all other layers are transferred except for
used to pretrain parameters of the model and endow it with the softmax layer. Finally, the model is fine-tuned on the target
good robustness on limited samples. The pairs of the source data set with the same configuration.
data set and target data set are Pavia Center–PavaU, Indian
189
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
Table 7
KSC. Classification accuracy obtained by S-DMM [122], 3DCAE [41], SSDL [38], TwoCnn [123], 3DVSCNN [124], SSLstm [76], CNN_HSI [28] and SAE_LR [35] on KSC. The best
accuracies are marked in bold. The ‘‘size” in the first line denotes the sample size per category.
Size Classes S-DMM 3DCAE SSDL TwoCnn 3DVSCNN SSLstm CNN_HSI SAE_LR
10 1 93.49 35.46 79.21 67.11 95.33 73.58 92.17 83.95
2 89.74 49.40 67.68 58.37 40.39 68.45 81.67 69.01
3 95.16 40.41 76.87 77.20 75.41 81.59 86.91 50.61
4 58.72 5.54 70.33 75.12 35.87 76.16 60.83 20.21
5 87.95 33.38 81.26 88.08 47.42 87.22 64.37 23.11
6 93.42 51.05 79.18 66.44 64.29 76.71 66.16 45.39
7 98.63 16.32 95.26 92.74 57.79 96.42 96.00 63.58
8 97.93 46.44 72.42 61.92 71.88 52.95 85.77 58.05
9 94.88 86.25 87.00 92.31 79.00 90.65 91.06 76.24
10 98.12 8.76 72.59 86.27 56.57 89.04 85.13 63.12
11 97.51 76.21 88.68 78.17 86.99 89.32 95.60 89.98
12 93.69 8.54 83.65 78.09 60.79 83.96 89.66 69.59
13 100.00 46.95 99.98 100.00 84.92 100.00 99.95 97.90
AA 92.25 38.82 81.09 78.60 65.90 82.00 84.25 62.36
OA 94.48 49.73 83.71 82.29 77.40 83.07 91.13 72.68
50 1 97.99 22.53 96.12 72.95 98.45 96.77 94.40 88.21
2 98.24 30.98 94.56 94.04 39.90 98.19 91.50 78.50
3 98.69 45.10 96.55 90.10 99.13 99.47 94.47 83.06
4 78.22 3.86 93.51 92.33 74.01 98.32 76.49 43.07
5 92.16 40.54 96.94 97.12 64.32 99.55 87.03 53.33
6 98.49 62.07 96.70 93.80 77.21 99.72 70.89 51.90
7 98.36 18.00 99.64 97.82 20.36 100.00 98.00 84.73
8 99.21 43.04 91.92 90.60 96.25 97.40 93.86 77.77
9 99.96 89.77 98.57 89.55 63.91 98.83 98.77 86.47
10 99.92 12.12 93.70 95.56 54.72 99.52 91.67 85.28
11 98.62 80.38 97.86 98.40 90.95 99.11 87.75 96.56
12 99.07 19.85 94.99 95.01 87.37 99.67 89.54 82.19
13 100.00 91.24 100.00 90.00 96.77 99.46 98.95 99.44
AA 96.84 43.04 96.24 92.10 74.10 98.92 90.25 77.73
OA 98.68 54.01 96.88 96.61 96.03 98.72 97.39 84.93
100 1 98.17 19.03 97.41 96.51 98.94 99.74 93.93 89.77
2 98.74 34.13 98.60 99.58 56.50 99.79 89.93 80.77
3 99.55 57.18 96.67 99.42 99.81 99.23 98.33 82.88
4 88.29 1.38 97.96 98.68 88.29 99.14 85.86 53.95
5 93.11 52.46 99.51 100.00 76.23 100.00 93.77 58.52
6 99.61 59.77 98.68 97.36 80.62 99.53 74.96 58.22
7 100.00 8.00 100.00 100.00 32.00 100.00 98.00 86.00
8 99.79 51.81 95.53 98.07 98.91 99.40 97.37 83.96
9 99.74 87.40 98.74 98.74 63.93 99.12 99.76 91.95
10 100.00 13.16 98.22 99.61 72.47 100.00 97.70 91.28
11 99.91 83.76 99.06 99.97 94.42 99.81 99.84 97.81
12 99.33 24.94 97.99 99.03 94.32 99.80 95.31 85.73
13 100.00 90.07 99.96 99.94 97.62 99.94 99.85 99.58
AA 98.17 44.85 98.33 98.99 81.08 99.65 94.20 81.57
OA 98.96 59.63 98.75 99.15 98.55 99.68 98.05 89.15
3DVSCNN [123]. This is a general CNN-based image classifica- spatial joint features directly, instead of 3D convolution. At
tion model, but it uses a 3D convolutional network to extract the same time, it also adopts a dropout layer and data augmen-
spectral-spatial features simultaneously followed by a fully tation, including rotation and flipping, to improve the general-
connected network for classification. The main idea of [124] is ization of the model and reduce overfitting. After data
the usage of active learning. The process can be divided into augmentation, an image can generate eight different orientation
two steps: the selection of valuable samples and the training images. Moreover, the model removes the linear classifier to
of the model. In [124], an SVM serves as a selector to iteratively decrease the number of trainable parameters. According to
select some of the most valuable samples according to Eq.(10). [28], the dropout rate is set to 0.6, the size of the neighbor
Then, the 3DVSCNN is trained on the valuable data set. The size region is 5 5, and the batch size is 16 in the experiment.
of its neighbor region is set to 13*13. During data preprocessing, SSLstm [75]. Unlike the above methods, SSLstm adopts recur-
it uses PCA to extract the top 10 components for PaviaU and rent networks to process spectral and spatial features simulta-
Salinas, and the top 30 components for KSC, which contain more neously. In the spectral branch, called SeLstm, the spectral
than 99% of the original spectral information and still keep a vector is seen as a sequence. In the spatial branch, called
clear spatial geometry. In the experiment, 80% of samples will SaLstm, it treats each line of the image patch as a sequence ele-
be picked by the SVM to form a valuable data set for 4 samples ment. Therefore, along the column direction, the image patch
in each iteration. Then, the model is trained on the valuable data can be well converted into a sequence. In particular, it fuses
set. the predictions of the two branches in the label space to obtain
CNN_HSI [28]. The model combines multilayer 1 1 2D convo- the final prediction result, which is defined as
lutions followed by local response normalization to capture the
Pðy ¼ jjxi Þ ¼ wspe Pspe ðy ¼ jjxi Þ þ wspa Pspa ðy ¼ jjxi Þ ð14Þ
feature of hyperspectral images. To avoid the loss of informa-
tion after PCA, it uses 2D convolution to extract spectral and
190
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
Fig. 10. Change in accuracy over the number of samples for each category. (a) PaviaU. (b) Salinas. (c) KSC.
where Pðy ¼ jjxi Þ denotes the final posterior probability, 85.12% and 82.13%, 72.40% and 66.05% for 3DCAE, SSDL, TwoCnn,
Pspe ðy ¼ jjxi Þ and P spa ðy ¼ jjxi Þ denote the posterior probabilities 3DVSCNN, SSLstm, CNN_HSI and SAE_LR. Besides, S-DMM has the
from spectral and spatial modules, respectively, and wspe and largest number of class classification accuracy. When the sample
wspa are fusion weights that satisfy the sum of 1. In the experi- size is 50, S-DMM and CNN_HSI have the highest AA and OA
ment, the size of the neighbor region is set to 32*32 for PaviaU respectively, which are 96.47% and 95.21%. In the last group,
and Salinas. In addition, for KSC, it is set to 64*64. Next, the first 3DVSCNN and CNN_HSI have the highest AA and OA, which are
component of PCA is reserved on all data sets. The number of 97.13% and 97.35%. According to the other two tables, we can con-
hidden nodes of the spectral branch and the spatial branch are clude with a similar result (see Figs. 20–22).
128 and 256, respectively. In addition, wspe and wspa are set to As shown in Tables 5,6,7, we can conclude that most models’
0.5 and 0.5 separately. performance on KSC, except for 3DCAE, is better than the other
two data sets. Especially when the data set contains few samples,
the accuracy of S-DMM is up to 94%, superior to other data sets.
4.3. Experimental results and analysis This is because the surface objects on the KSC itself have a discrim-
inating border between each other, regardless of its higher spatial
The accuracy of the test data set is shown in Tables 5,6,7. Corre- resolution than that of the other data sets, as shown in
sponding classification maps are shown in Figs. 10–19. The final Figs. 17,18,19. In the other data sets, models easily misclassify
classification result of the pixel is decided by the voting result of the objects that have a similar spatial structure, as illustrated in
10 experiments. Meadows (class 2) and Bare soil (class 6) in PaviaU and Fallow
Taking Table 5 as an example, the experiment is divided into rough plow (class 4) and Grapes untrained (class 8) in Salinas, as
three groups, and the sample sizes in each group are 10, 50, and shown in Figs. 11–16. The accuracy of all models on Grapes
100, respectively. The aforementioned models are conducted 10 untrained is lower than other classes in Salinas. In Fig. 4.3, on all
times in every experiment sets. Then, we count the average of their data sets, as the number of samples increases, the accuracy of all
class classification accuracy, AA, and OA for comparing their per- models will improve together.
formance. When sample size is 10, S-DMM has the highest AA As shown in Fig. 4.3, when the sample size of each category is
and OA, which are 91.08% and 84.45% respectively, in comparison 10, S-DMM and CNN_HSI have achieved stable and excellent per-
with the AA and OA of 71.58% and 60.00%, 75.34 % and 74.79%, formance on all data sets. They are not sensitive to the size of
74.60% and 78.61%, 75.64% and 75.17%, 72.77% and 69.59%, the data set. In Fig. 10b and c, with increasing sample size, the
191
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
Fig. 11. Classification maps on the PaviaU data set (10 samples per class). (a) Original. (b) S-DMM. (c) 3DCAE. (d) SSDL. (e) TwoCnn. (f) 3DVSCNN. (g) SSLstm. (h) CNN_HSI. (i)
SAE_LR.
accuracy of S-DMM and CNN_HSI have improved slightly, but their racy of class 7 decreasing. On almost all data sets, autoencoder-
increase is lower than that of others. In Fig. 10a, when the sample based models achieve poor performance compared with other
size increases from 50 to 100, we can obtain the same conclusion. models. Although unsupervised learning does not need to label
This result shows that both of them can be applied to solve the samples, if there are no constraints, the autoencoder might actually
small-sample problem in hyperspectral images. Especially for S- learn nothing. Moreover, since it has a symmetric architecture, it
DMM, it has gained the best performance on the metric of AA would result in a vast number of parameters and increase the dif-
and OA on Salinas and KSC in the experiment with a sample size ficulty of training. Therefore, SSDL and SAE_LR use a greedy layer-
of 10. On PaviaU, it still wins the third place. This result also proves wise pretraining method to solve this problem. However, 3DCAE
that it can work well on a few samples. Although TwoCnn, does not adopt this approach and achieves the worst performance
3DVSCNN, and SSLstm achieve good performance on all data sets, on all data sets. As shown in Fig. 4.3, it still has considerable room
when the data set contains fewer samples, they will not work well. for improvement.
It is worth mentioning that 3DVSNN uses fewer samples to train Overall, classification results based on few-shot learning, active
than other models for selecting valuable samples. This approach learning, transfer learning, and data augmentation are better than
may not be beneficial for those classes with few samples. As shown autoencoder-based unsupervised learning methods on the limited
in 7, 3DVSCNN has a good performance on OA, but a bad perfor- sample in all experiments. Few-shot learning benefits from the
mance on AA. For class 7, when its sample size increases from 10 exploration of the relationship between samples to find a discrim-
to 50 and 100, its accuracy drops. This is because the total sample inative decision boarder. Active learning benefits from the selec-
size of it is the smallest on KSC. Therefore, it contains few valuable tion of valuable samples, which enables the model to focus more
samples. Moreover, the step of selecting valuable samples would attention to indistinguishable samples. Transfer learning makes
cause an imbalance between the classes, which leads to the accu- good use of the similarity between different data sets, which
192
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
Fig. 12. Classification maps on the PaviaU data set (50 samples per class). (a) Original. (b) S-DMM. (c) 3DCAE. (d) SSDL. (e) TwoCnn. (f) 3DVSCNN. (g) SSLstm. (h) CNN_HSI. (i)
SAE_LR.
reduces the quantity of data required for training and trainable solely uses a 1 1 kernel to process an image. Moreover, it uses
parameters, improving the model’s robustness. According to raw a 1 1 convolution layer to serve as a classifier instead of the lin-
data, the method of data augmentation generates more samples ear layer, which greatly reduces the number of trainable parame-
to expand the diversity of samples. Although the autoencoder ters. The next is the S-DMM. This also explains why S-DMM and
can learn the internal structure of the unlabeled data set, the final CNN_HSI are less affected by augmentation in sample size but very
feature representation might not have task-related characteristics. effective on few samples. Additionally, the problem of overfitting is
This is the reason why its performance on a small-sample data set of little concern in these approaches. Stacking the spectral and spa-
is inferior to supervised learning. tial feature to generate the final fused feature is the main reason
for the large number of parameters of TwoCnn. However, regard-
less of its potentially millions of trainable parameters, it can work
4.4. Model parameters well on limited samples, benefiting from transfer learning, which
decreases trainable parameters and achieves good performance
To further explore the reasons why the model has achieved dif- on all target data sets. Next, the models with the most parameters
ferent results on the benchmark data set, we also counted the are successively 3DCAE and SSLstm. 3DCAE’s trainable parameters
number of trainable parameters of each framework (including are at most eight times those of SSDL, which contains not only a 1D
the decoder module) on different data sets, which are shown in autoencoder in the spectral branch but also a spatial branch based
Table 8. On all data sets, the model with the least number of train- on a 2D convolutional network, but 3DCAE is still worse than SSDL.
ing parameters is the SAE_LR, the second is the CNN_HSI and the Although 3D convolutional and pooling modules can greatly avoid
most is the TwoCnn. SAE_LR is a lightweight architecture in all the problem of data structure information loss caused by the flat-
models for the simple linear layer, but its performance is poor. Dif- tening operation, the complexity of the 3D structure and the sym-
ferent from other 2D convolution approaches in HSI, CNN_HSI
193
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
Fig. 13. Classification maps on the PaviaU data set (100 samples per class). (a) Original. (b) S-DMM. (c) 3DCAE. (d) SSDL. (e) TwoCnn. (f) 3DVSCNN. (g) SSLstm. (h) CNN_HSI. (i)
SAE_LR.
metric structure of the autoencoder increase the number of model 4.5. The speed of model convergence
parameters, which make it easy to overfit the model. 3DVSCNN
also uses a 3D convolutional module and is better than 3DCAE, In addition, we compare the convergence speed of the model
which first reduces the number of redundant bands by PCA. That according to the changes in training loss of each model in the first
may also be applied to 3DCAE to decrease the number of model 200 epochs on each group of experiments (see Figs. 4.5–4.5).
parameters and make good use of characteristics of 3D convolu- Because the autoencoder and classifier of 3DCAE are be trained
tion, extracting spectral and spatial information simultaneously. separately, and all data are used during training the autoencoder,
The main contribution of the parameter of SSLstm comes from it is not comparable to other models. Therefore, it is not be listed
the spatial branch. Although the gate structure of LSTM improves here. On all data sets, S-DMM has the fastest convergence speed.
the model’s capabilities of long and short memory, it increases After approximately 3 epochs, the training loss tends to become
the complexity of the model. When the number of hidden layer stable given its fewer parameters. Although CNN_HSI has a similar
units increases, the model’s parameters will also skyrocket greatly. performance to S-DMM and fewer parameters, the learning curve
Perhaps it is the coupling between the spectral features and recur- of CNN_HSI’s convergence rate is slower than that of S-DMM and
rent network that make performance of SSLstm not as bad as that is sometimes accompanied by turbulence. The second place
of 3DCAE on all data sets, which has a similar number of parame- regarding performance is held by TwoCnn, which is mainly due
ters and even achieved superior results on KSC. Moreover, there are to transfer learning to better position the initial parameters, and
no methods that were adopted for solving the problem of few sam- it actually has fewer parameters requiring training. Thus, it just
ples. This finding also shows that supervised learning is better than needs a few epochs to fine-tune on the target data set. Moreover,
unsupervised learning in some tasks. the training curve of most models stabilizes after 100 epochs.
The training loss of the SSLstm has severe oscillations in all data
194
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
Fig. 14. Classification maps on the Salinas data set (10 samples per class). (a) Original. (b) S-DMM. (c) 3DCAE. (d) SSDL. (e) TwoCnn. (f) 3DVSCNN. (g) SSLstm. (h) CNN_HSI. (i)
SAE_LR.
sets. This is especially noted in the SeLstm, where the loss some- 5. Conclusions
times has difficulty in decreasing. When the sequence is very long,
the challenge might be that the recurrent neural network is more In this paper, we introduce the current research difficulties,
susceptible to a vanishing or exploding gradient. Moreover, the namely, few samples, in the field of hyperspectral image classifica-
pixels of the hyperspectral image usually contain hundreds of tion and discuss popular learning frameworks. Furthermore, we
bands, which is the reason why the training loss has difficulty also introduce several popular learning algorithms to solve the
decreasing or oscillations occur in SeLstm. In the spatial branch, small-sample problem, such as autoencoders, few-shot learning,
it does not have this serious condition because the length of the transfer learning, activate learning, and data augmentation.
spatial sequence depending on patch size is shorter than spectral According to the above methods, we select some representative
sequences. During training, the LSTM-based model spent a consid- models to conduct experiments on hyperspectral benchmark data
erable amount of time because it cannot train in parallel. sets. We developed three different experiments to explore the per-
195
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
Fig. 15. Classification maps on the Salinas (50 samples per class). (a) Original. (b) S-DMM. (c) 3DCAE. (d) SSDL. (e) TwoCnn. (f) 3DVSCNN. (g) SSLstm. (h) CNN_HSI. (i) SAE_LR.
formance of the models on small-sample data sets and docu- Autoencoders, including linear autoencoders and 3D convolu-
mented their changes with increasing sample size, finally evaluat- tional autoencoders, have been widely explored and applied
ing their effectiveness and robustness through AA and OA. Then, to solve the sample problem in HSI. Nevertheless, their perfor-
we also compared the number of parameters and convergence mance does not approach excellence. The future development
speeds of various models to further analyze their differences. Ulti- trend should be focused on few-shot learning, transfer learning,
mately, we also highlight several possible future directions of and active learning.
hyperspectral image classification on small samples: We can fuse some learning paradigms to make good use of the
advantages of each approach. For example, regarding the fusion
of transfer learning and active learning, such an approach can
196
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
Fig. 16. Classification maps on the Salinas data set (100 samples per class). (a) Original. (b) S-DMM. (c) 3DCAE. (d) SSDL. (e) TwoCnn. (f) 3DVSCNN. (g) SSLstm. (h) CNN_HSI.
(i) SAE_LR.
select the valuable samples on the source data set and transfer Graph convolution network has been growing more and more
the model to the target data set to avoid the imbalance of the interested in hyperspectral image classification. Fully connected
class sample size. network, convolution network, and recurrent network are just
According to the experimental results, the RNN is also suitable suitable for processing the euclidean data and do not solve with
for hyperspectral image classification. However, there is little the non-euclidean data directly. And image can be regarded as a
work focused on combining the learning paradigms with RNN. special case of the euclidean-data. Thus, there are many
Recently, the transformer, as an alternative to the RNN that is researches [125–127] utilizing graph convolution networks to
capable of processing in parallel, has been introduced into the classify HSI.
computer vision domain and has achieved good performance The reason for requiring a large amount of label samples is the
on some tasks such as object detection. Therefore, we can also tremendous trainable parameters of the deep learning model.
employ this method in hyperspectral image classification and There are many methods proposed, such as group convolution
combine it with some learning paradigms.
197
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
Fig. 17. Classification maps on the KSC data set (10 samples per class). (a) Original. (b) S-DMM. (c) 3DCAE. (d) SSDL. (e) TwoCnn. (f) 3DVSCNN. (g) SSLstm. (h) CNN_HSI. (i)
SAE_LR.
[128], to light the weight of a deep neural network. So, how to to avoid the over-fitting and improve model’s generalization is
construct a light-weight model further is also a future direction. the huge challenge of HSI few label classification in the application
potential.
Although few label classification can save much time and labor
force to collect and label diverse samples, the models are easy to
suffer from over-fit and gaining a weak generalization. Thus, how
Fig. 18. Classification maps on the KSC data set (50 samples per class). (a) Original. (b) S-DMM. (c) 3DCAE. (d) SSDL. (e) TwoCnn. (f) 3DVSCNN. (g) SSLstm. (h) CNN_HSI. (i)
SAE_LR.
198
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
Fig. 19. Classification maps on the KSC data set (100 samples per class). (a) Original. (b) S-DMM. (c) 3DCAE. (d) SSDL. (e) TwoCnn. (f) 3DVSCNN. (g) SSLstm. (h) CNN_HSI. (i)
SAE_LR.
Fig. 20. Training Loss on the PaviaU data set. (a) 10 samples per class. (b) 50 samples per class. (c) 100 samples per class.
199
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
Fig. 21. Training Loss on the Salinas data set. (a) 10 samples per class. (b) 50 samples per class. (c) 100 samples per class.
Fig. 22. Training Loss on the KSC data set. (a) 10 samples per class. (b) 50 samples per class. (c) 100 samples per class.
200
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
Table 8 [10] J. Li, Y. Pang, Z. Li, W. Jia, Tree species classification of airborne hyperspectral
The number of trainable parameters. image in cloud shadow area, in: International Symposium of Space Optical
Instrument and Application, Springer, 2018, pp. 389–398.
PaviaU Salinas KSC [11] Z. Du, M. Jeong, S. Kong, Band selection of hyperspectral images for automatic
detection of poultry skin tumors, IEEE Trans. Autom. Sci. Eng. 4 (3) (2007)
S-DMM 33921 40385 38593
332–339, https://doi.org/10.1109/TASE.2006.888048.
3DCAE 256563 447315 425139
[12] S. Li, W. Song, L. Fang, Y. Chen, J. Benediktsson, Deep learning for
SSDL 35650 48718 44967 hyperspectral image classification: an overview, IEEE Trans. Geosci. Remote
TwoCnn 1379399 1542206 1501003 Sens. PP (99) (2019) 1–20..
3DVSCNN 42209 42776 227613 [13] A. Plaza, J. Plaza, G. Martin, Incorporation of spatial constraints into spectral
SSLstm 367506 370208 401818 mixture analysis of remotely sensed hyperspectral data, Machine Learning for
CNN_HSI 22153 33536 31753 Signal Processing.mlsp.ieee International Workshop on (2009) 1–6..
SAE_LR 21426 5969 5496 [14] F. Melgani, L. Bruzzone, Classification of hyperspectral remote sensing images
with support vector machines, IEEE Trans. Geosci. Remote Sens. 42 (8) (2004)
1778–1790, https://doi.org/10.1109/TGRS.2004.831865.
[15] Y. Zhong, L. Zhang, An adaptive artificial immune network for supervised
classification of multi-hyperspectral remote sensing imagery, IEEE Trans.
Geosci. Remote Sens. 50 (3) (2011) 894–909.
CRediT authorship contribution statement [16] J. Li, J. Bioucas-Dias, A. Plaza, Semisupervised hyperspectral image
classification using soft sparse multinomial logistic regression, IEEE Geosci.
Sen Jia: Writing - original draft, Funding acquisition. Shuguo Remote Sens. Lett. 10 (2) (2012) 318–322.
[17] G. Licciardi, P. Marpu, J. Chanussot, J. Benediktsson, Linear versus nonlinear
Jiang: Software, Writing - review & editing. Zhijie Lin: Writing - pca for the classification of hyperspectral data based on the extended
review & editing. Nanying Li: Writing - review & editing. Meng morphological profiles, IEEE Geosci. Remote Sens. Lett. 9 (3) (2011) 447–451.
Xu: Writing - review & editing. Shiqi Yu: Supervision, Funding [18] A. Villa, J. Chanussot, C. Jutten, J. Benediktsson, S. Moussaoui, On the use of
ICA for hyperspectral image analysis, in: Proc. Geoscience and Remote
acquisition, Writing - review & editing.
Sensing Symp., 2009 IEEE Int., IGARSS 2009, vol. 4, 2009, pp. IV–97.
doi:10.1109/IGARSS.2009.5417363..
[19] C. Zhang, Y. Zheng, Hyperspectral remote sensing image classification based
Declaration of Competing Interest on combined SVM and LDA, SPIE Asia Pac. Remote Sens. (2014) 92632P.
[20] L. He, J. Li, A. Plaza, Y. Li, Discriminative low-rank Gabor filtering for spectral-
spatial hyperspectral image classification, IEEE Trans. Geosci. Remote Sens. PP
The authors declare that they have no known competing finan- (99) (2016) 1–15. doi:10.1109/TGRS.2016.2623742..
cial interests or personal relationships that could have appeared [21] M.D. Mura, J.A. Benediktsson, B. Waske, L. Bruzzone, Extended profiles with
to influence the work reported in this paper. morphological attribute filters for the analysis of hyperspectral data, Int. J.
Remote Sens. 31 (22) (2010) 5975–5991.
[22] N. Falco, J. Atli Benediktsson, L. Bruzzone, Spectral and spatial classification of
hyperspectral images based on ICA and reduced morphological attribute
Acknowledgments profiles, IEEE Trans. Geosci. Remote Sens. 53 (11) (2015) 6223–6240.
[23] M. Dalla Mura, A. Villa, J. Atli Benediktsson, J. Chanussot, L. Bruzzone,
Classification of hyperspectral images by using extended morphological
The work is supported by the National Natural Science Founda-
attribute profiles and independent component analysis, IEEE Geosci. Remote
tion of China (Grant No. 41971300, 61901278 and 61976144), the Sens. Lett. 8 (3) (2011) 542–546.
National Key Research and Development Program of China (Grant [24] S. Jia, L. Shen, Q. Li, Gabor feature-based collaborative representation for
No. 2020AAA0140002), the Program for Young Changjiang Schol- hyperspectral imagery classification, IEEE Trans. Geosci. Remote Sens. 53 (2)
(2015) 1118–1129.
ars, the Key Project of Department of Education of Guangdong Pro- [25] Y. Qian, M. Ye, J. Zhou, Hyperspectral image classification based on structured
vince (Grant No. 2020ZDZX3045) and the Shenzhen Scientific sparse logistic regression and three-dimensional wavelet texture features,
Research and Development Funding Program under (Grant No. IEEE Trans. Geosci. Remote Sens. 51 (4) (2013) 2276–2291.
[26] W. Li, C. Chen, H. Su, Q. Du, Local binary patterns and extreme learning
JCYJ20180305124802421 and JCYJ20180305125902403). machine for hyperspectral imagery classification, IEEE Trans. Geosci. Remote
Sens. 53 (7) (2015) 3681–3693.
[27] P. Ghamisi, M. Dalla Mura, J. Atli Benediktsson, A survey on spectral–spatial
References classification techniques based on attribute profiles, IEEE Trans. Geosci.
Remote Sens. 53 (5) (2015) 2335–2353.
[28] S. Yu, S. Jia, C. Xu, Convolutional neural networks for hyperspectral image
[1] M. Teke, H. Deveci, O. Haliloğlu, S. Gürbüz, U. Sakarya, A short survey of
classification, Neurocomputing 219 (2017) 88–98.
hyperspectral remote sensing applications in agriculture, in: 2013 6th
[29] M. Paoletti, J. Haut, J. Plaza, A. Plaza, Deep learning classifiers for
International Conference on Recent Advances in Space Technologies (RAST),
hyperspectral imaging: a review, ISPRS J. Photogramm. Remote Sens. 158
IEEE, 2013, pp. 171–176.
(Dec.) (2019) 279–317.
[2] I. Strachan, E. Pattey, J. Boisvert, Impact of nitrogen and environmental
[30] G. Hinton, R. Salakhutdinov, Reducing the dimensionality of data with neural
conditions on corn as detected by hyperspectral reflectance, Remote Sens.
networks, Science 313 (5786) (2006) 504–507.
Environ. 80 (2) (2002) 213–224.
[31] A. Coates, A. Ng, H. Lee, An analysis of single-layer networks in unsupervised
[3] A. Bannari, A. Pacheco, K. Staenz, H. McNairn, K. Omari, Estimating and
feature learning, J. Mach. Learn. Res. 15 (2011) 215–223.
mapping crop residues cover on agricultural lands using hyperspectral and
[32] N. Zeng, H. Zhang, B. Song, W. Liu, Y. Li, A. Dobaie, Facial expression
ikonos data, Remote Sens. Environ. 104 (4) (2006) 447–459.
recognition via learning deep sparse autoencoders, Neurocomputing 273
[4] C. Sabine, M. Robert, S. Thomas, R. Manuel, E. Paula, P. Marta, P. Alicia,
(2018) 643–649.
Potential of hyperspectral imagery for the spatial assessment of soil erosion
[33] P. Vincent, H. Larochelle, Y. Bengio, P. Manzagol, Extracting and composing
stages in agricultural semi-arid spain at different scales, in: 2014 IEEE
robust features with denoising autoencoders, in: International Conference on
Geoscience and Remote Sensing Symposium, IEEE, 2014, pp. 2918–2921.
Machine Learning, 2008, pp. 1096–1103..
[5] P. Kuflik, S. Rotman, Band selection for gas detection in hyperspectral images,
[34] L. Windrim, R. Ramakrishnan, A. Melkumyan, R. Murphy, A. Chlingaryan,
in: 2012 IEEE 27th Convention of Electrical and Electronics Engineers in Israel,
Unsupervised feature-learning for hyperspectral data with autoencoders,
2012, pp. 1–4. doi: 10.1109/EEEI.2012.6376973..
Remote Sens. 11 (7) (2019) 864.
[6] S. Foudan, K. Menas, E. Tarek, G. Richard, Y. Ruixin, Hyperspectral image
[35] Y. Chen, Z. Lin, X. Zhao, G. Wang, Y. Gu, Deep learning-based classification of
analysis for oil spill detection, in: Summaries of NASA/JPL Airborne Earth
hyperspectral data, IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 7 (6)
Science Workshop, Pasadena, CA, 2001, pp. 5–9.
(2014) 2094–2107.
[7] A. Mohamad, Sea water chlorophyll-a estimation using hyperspectral images
[36] A. Ghasem, S. Farhad, R. Peter, Spectral–spatial feature learning for
and supervised artificial neural network, Ecol. Inf. 24 (2014) 60–68, https://
hyperspectral imagery classification using deep stacked sparse autoencoder,
doi.org/10.1016/j.ecoinf.2014.07.004.
J. Appl. Remote Sens. 11 (4) (2017) 042604.
[8] J. Sylvain, G. Mireille, A novel maximum likelihood based method for mapping
[37] C. Xing, L. Ma, X. Yang, Stacked denoise autoencoder based feature extraction
depth and water quality from hyperspectral remote-sensing data, Remote
and classification for hyperspectral images, J. Sens. (2016).
Sens. Environ. 147 (2014) 121–132, https://doi.org/10.1016/j.
[38] J. Yue, S. Mao, M. Li, A deep learning framework for hyperspectral image
rse.2014.01.026.
classification using spatial pyramid pooling, Remote Sens. Lett. 7 (9) (2016)
[9] C. Jänicke, A. Okujeni, S. Cooper, M. Clark, P. Hostert, S. van der Linden,
875–884.
Brightness gradient-corrected hyperspectral image mosaics for fractional
vegetation cover mapping in northern california, Remote Sens. Lett. 11 (1)
(2020) 1–10.
201
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
[39] S. Hao, W. Wang, Y. Ye, T. Nie, B. Lorenzo, Two-stream deep architecture for [67] Q. Feng, D. Zhu, J. Yang, B. Li, Multisource hyperspectral and lidar data fusion
hyperspectral image classification, IEEE Trans. Geosci. Remote Sens. 56 (4) for urban land-use mapping based on a modified two-branch convolutional
(2017) 2349–2361. neural network, ISPRS Int. J. Geo-Inf. 8 (1) (2019) 28.
[40] X. Sun, F. Zhou, J. Dong, F. Gao, Q. Mu, X. Wang, Encoding spectral and spatial [68] X. Xu, W. Li, Q. Ran, Q. Du, L. Gao, B. Zhang, Multisource remote sensing data
context information for hyperspectral image classification, IEEE Geosci. classification based on convolutional neural network, IEEE Trans. Geosci.
Remote Sens. Lett. 14 (12) (2017) 2250–2254. Remote Sens. 56 (2) (2017) 937–949.
[41] S. Mei, J. Ji, Y. Geng, Z. Zhang, X. Li, Q. Du, Unsupervised spatial–spectral [69] H. Li, G. Pedram, S. Uwe, X. Zhu, Hyperspectral and lidar fusion using deep
feature learning by 3d convolutional autoencoder for hyperspectral three-stream convolutional neural networks, Remote Sens. 10 (10) (2018)
classification, IEEE Trans. Geosci. Remote Sens. 57 (9) (2019) 6808–6820. 1649.
[42] C. Zhao, X. Wan, G. Zhao, B. Cui, W. Liu, B. Qi, Spectral-spatial classification of [70] W. Li, C. Chen, M. Zhang, H. Li, Q. Du, Data augmentation for hyperspectral
hyperspectral imagery based on stacked sparse autoencoder and random image classification with deep cnn, IEEE Geosci. Remote Sens. Lett. 16 (4)
forest, Eur. J. Remote Sens. 50 (1) (2017) 47–63. (2018) 593–597.
[43] X. Wan, C. Zhao, Y. Wang, W. Liu, Stacked sparse autoencoder in [71] W. Wei, J. Zhang, L. Zhang, C. Tian, Y. Zhang, Deep cube-pair network for
hyperspectral data classification using spectral-spatial, higher order hyperspectral imagery classification, Remote Sens. 10 (5) (2018) 783.
statistics and multifractal spectrum features, Infrared Phys. Technol. 86 [72] S. Hochreiter, J. Schmidhuber, Long short-term memory, Neural Comput. 9 (8)
(2017) 77–89. (1997) 1735–1780.
[44] C. Wang, P. Zhang, Y. Zhang, L. Zhang, W. Wei, A multi-label hyperspectral [73] C. Kyunghyun, V. Bart, G. Caglar, B. Dzmitry, B. Fethi, S. Holger, B. Yoshua,
image classification method with deep learning features, in: Proceedings of Learning phrase representations using rnn encoder-decoder for statistical
the International Conference on Internet Multimedia Computing and Service, machine translation, arXiv preprint arXiv:1406.1078..
2016, pp. 127–131. [74] L. Mou, P. Ghamisi, X. Zhu, Deep recurrent neural networks for hyperspectral
[45] J. Li, B. Lorenzo, S. Liu, Deep feature representation for hyperspectral image image classification, IEEE Trans. Geosci. Remote Sens. 55 (7) (2017) 3639–
classification, in: 2015 IEEE International Geoscience and Remote Sensing 3655.
Symposium (IGARSS), IEEE, 2015, pp. 4951–4954. [75] B. Liu, X. Yu, A. Yu, P. Zhang, G. Wan, Spectral-spatial classification of
[46] M. Atif, L. Tao, Efficient deep auto-encoder learning for the classification of hyperspectral imagery based on recurrent neural networks, Remote Sens.
hyperspectral images, in: 2016 International Conference on Virtual Reality Lett. 9 (12) (2018) 1118–1127.
and Visualization (ICVRV), IEEE, 2016, pp. 44–51.. [76] F. Zhou, R. Hang, Q. Liu, X. Yuan, Hyperspectral image classification using
[47] Y. Liu, G. Cao, Q. Sun, S. Mel, Hyperspectral classification via learnt features, spectral-spatial lstms, Neurocomputing 328 (2019) 39–47.
in: 2015 IEEE International Conference on Image Processing (ICIP), IEEE, 2015, [77] M. Andong, F.A. M, Z. Wang, Z. Yin, Hyperspectral image classification using
pp. 2591–2595.. similarity measurements-based deep recurrent neural networks, Remote
[48] H. Lee, K. Heesung, Going deeper with contextual cnn for hyperspectral image Sens. 11 (2) (2019) 194..
classification, IEEE Trans. Image Process. 26 (10) (2017) 4843–4855. [78] X. Zhang, Y. Sun, K. Jiang, C. Li, L. Jiao, H. Zhou, Spatial sequential recurrent
[49] J. Leng, T. Li, G. Bai, Q. Dong, H. Dong, Cube-cnn-svm: a novel hyperspectral neural network for hyperspectral image classification, IEEE J. Sel. Topics Appl.
image classification method, in: 2016 IEEE 28th International Conference on Earth Observ. Remote Sens. 11 (11) (2018) 4141–4155.
Tools with Artificial Intelligence (ICTAI), IEEE, 2016, pp. 1027–1034. [79] E. Pan, X. Mei, Q. Wang, Y. Ma, J. Ma, Spectral-spatial classification for
[50] H. Zhang, Y. Li, Y. Zhang, Q. Shen, Spectral-spatial classification of hyperspectral image based on a single gru, Neurocomputing 387 (2020) 150–
hyperspectral imagery using a dual-channel convolutional neural network, 160.
Remote Sens. Lett. 8 (5) (2017) 438–447. [80] H. Wu, P. Saurabh, Semi-supervised deep learning using pseudo labels for
[51] E. Aptoula, M. Ozdemir, B. Yanikoglu, Deep learning with attribute profiles for hyperspectral image classification, IEEE Trans. Image Process. 27 (3) (2017)
hyperspectral image classification, IEEE Geosci. Remote Sens. Lett. 13 (12) 1259–1270.
(2016) 1970–1974. [81] H. Wu, P. Saurabh, Convolutional recurrent neural networks forhyperspectral
[52] W. Zhao, S. Li, A. Li, B. Zhang, Y. Li, Hyperspectral images classification with data classification, Remote Sens. 9 (3) (2017) 298.
convolutional neural network and textural feature using limited training [82] S. Hao, W. Wang, S. Mathieu, Geometry-aware deep recurrent neural
samples, Remote Sens. Lett. 10 (5) (2019) 449–458. networks for hyperspectral image classification, IEEE Trans. Geosci. Remote
[53] C. Yu, M. Zhao, M. Song, Y. Wang, F. Li, R. Han, C. Chang, Hyperspectral image Sens..
classification method based on cnn architecture embedding with hashing [83] C. Shi, P. Chi-Man, Multi-scale hierarchical recurrent neural networks for
semantic feature, IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 12 (6) hyperspectral image classification, Neurocomputing 294 (2018) 82–93.
(2019) 1866–1881. [84] S. Pan, Q. Yang, A survey on transfer learning, IEEE Trans. Knowl. Data Eng. 22
[54] C. Qing, J. Ruan, X. Xu, J. Ren, J. Zabalza, Spatial-spectral classification of (10) (2010) 1345–1359, https://doi.org/10.1109/tkde.2009.191.
hyperspectral images: a deep learning framework with markov random fields [85] J. Yang, Y. Zhao, J. Chan, C. Yi, Hyperspectral image classification using two-
based modelling, IET Image Proc. 13 (2) (2018) 235–245. channel deep convolutional neural network, in: 2016 IEEE International
[55] Z. Zhong, J. Li, Z. Luo, M. Chapman, Spectral-spatial residual network for Geoscience and Remote Sensing Symposium (IGARSS), IEEE, 2016, pp. 5079–
hyperspectral image classification: a 3-d deep learning framework, IEEE 5082.
Trans. Geosci. Remote Sens. 56 (2) (2017) 847–858. [86] J. Yang, Y. Zhao, J. Chan, Learning and transferring deep joint spectral–spatial
[56] B. Liu, X. Yu, P. Zhang, X. Tan, R. Wang, L. Zhi, Spectral–spatial classification of features for hyperspectral classification, IEEE Trans. Geosci. Remote Sens. 55
hyperspectral image using three-dimensional convolution network, J. Appl. (8) (2017) 4729–4742.
Remote Sens. 12 (1) (2018) 016005. [87] L. Lin, C. Chen, J. Yang, S. Zhang, Deep transfer hsi classification method based
[57] B. Fang, Y. Li, H. Zhang, J. Chan, Collaborative learning of lightweight on information measure and optimal neighborhood noise reduction,
convolutional neural network and deep clustering for hyperspectral image Electronics 8 (10) (2019) 1112.
semi-supervised classification with limited training samples, ISPRS J. [88] H. Zhang, Y. Li, Y. Jiang, P. Wang, Q. Shen, C. Shen, Hyperspectral classification
Photogramm. Remote Sens. 161 (2020) 164–178. based on lightweight 3-d-cnn with transfer learning, IEEE Trans. Geosci.
[58] L. Mou, P. Ghamisi, X. Zhu, Unsupervised spectral–spatial feature learning via Remote Sens. 57 (8) (2019) 5813–5828.
deep residual conv–deconv network for hyperspectral image classification, [89] Y. Jiang, Y. Li, H. Zhang, Hyperspectral image classification based on 3-d
IEEE Trans. Geosci. Remote Sens. 56 (1) (2017) 391–406. separable resnet and transfer learning, IEEE Geosci. Remote Sens. Lett. 16 (12)
[59] A. Sellami, M. Farah, I. Farah, B. Solaiman, Hyperspectral imagery (2019) 1949–1953.
classification based on semi-supervised 3-d deep neural network and [90] C. Deng, Y. Xue, X. Liu, C. Li, D. Tao, Active transfer learning network: a unified
adaptive band selection, Expert Syst. Appl. 129 (2019) 246–259. deep joint spectral–spatial feature learning model for hyperspectral image
[60] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: classification, IEEE Trans. Geosci. Remote Sens. 57 (3) (2018) 1741–1754.
Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2016, pp. 770–778.. [91] M. Ghifary, W. Kleijn, M. Zhang, Domain adaptive neural networks for object
[61] M. Paoletti, J. Haut, R. Fernandez-Beltran, J. Plaza, A. Plaza, F. Pla, Deep recognition, in: Pacific Rim International Conference on Artificial Intelligence,
pyramidal residual networks for spectral–spatial hyperspectral image Springer, 2014, pp. 898–904..
classification, IEEE Trans. Geosci. Remote Sens. 57 (2) (2018) 740–754. [92] E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, T. Darrell, Deep domain confusion:
[62] X. Ma, A. Fu, J. Wang, H. Wang, B. Yin, Hyperspectral image classification maximizing for domain invariance, arXiv preprint arXiv:1412.3474..
based on deep deconvolution network with skip architecture, IEEE Trans. [93] Z. Wang, B. Du, Q. Shi, W. Tu, Domain adaptation with discriminative
Geosci. Remote Sens. 56 (8) (2018) 4781–4791. distribution and manifold embedding for hyperspectral image classification,
[63] M. Paoletti, J. Haut, J. Plaza, A. Plaza, Deep&dense convolutional neural IEEE Geosci. Remote Sens. Lett. 16 (7) (2019) 1155–1159.
network for hyperspectral image classification, Remote Sens. 10 (9) (2018) [94] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M.
1454. Marchand, V. Lempitsky, Domain-adversarial training of neural networks, J.
[64] W. Wang, S. Dou, Z. Jiang, L. Sun, A fast dense spectral–spatial convolution Mach. Learn. Res. 17 (1) (2016) 2030–2096.
network framework for hyperspectral images classification, Remote Sens. 10 [95] A. Elshamli, G. Taylor, A. Berg, S. Areibi, Domain adaptation using
(7) (2018) 1068. representation learning for the classification of remote sensing images, IEEE
[65] J. Haut, M. Paoletti, J. Plaza, A. Plaza, J. Li, Visual attention-driven J. Sel. Top. Appl. Earth Observ. Remote Sens. 10 (9) (2017) 4198–4209.
hyperspectral image classification, IEEE Trans. Geosci. Remote Sens. 57 (10) [96] B. Settles, Active learning literature survey, Tech. rep., University of
(2019) 8065–8080. Wisconsin-Madison Department of Computer Sciences (2009)..
[66] Z. Xiong, Y. Yuan, Q. Wang, Ai-net: attention inception neural networks for [97] J. Haut, M. Paoletti, J. Plaza, J. Li, A. Plaza, Active learning with convolutional
hyperspectral image classification, in: IGARSS 2018-2018 IEEE International neural networks for hyperspectral image classification using a new bayesian
Geoscience and Remote Sensing Symposium, IEEE, 2018, pp. 2647–2650. approach, IEEE Trans. Geosci. Remote Sens. 56 (11) (2018) 6440–6461.
202
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
[98] P. Liu, H. Zhang, K. Eom, Active deep learning for classification of [124] L. Hu, X. Luo, Y. Wei, Hyperspectral image classification of convolutional
hyperspectral images, IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 10 neural network combined with valuable samples, J. Phys. Conf. Ser. 1549
(2) (2016) 712–724. (2020) 052011.
[99] J. Li, Active learning for hyperspectral image classification with a stacked [125] S. Wan, C. Gong, P. Zhong, B. Du, L. Zhang, J. Yang, Multiscale dynamic graph
autoencoders based neural network, in: 2015 7th Workshop on convolutional network for hyperspectral image classification, IEEE Trans.
Hyperspectral Image and Signal Processing: Evolution in Remote Sensing Geosci. Remote Sens. 58 (5) (2019) 3162–3177.
(WHISPERS), IEEE, 2015, pp. 1–4.. [126] B. Liu, K. Gao, A. Yu, W. Guo, R. Wang, X. Zuo, Semisupervised graph
[100] Y. Sun, J. Li, W. Wang, P. Antonio, Z. Chen, Active learning based autoencoder convolutional network for hyperspectral image classification, J. Appl. Remote
for hyperspectral imagery classification, in: 2016 IEEE International Sens. 14 (2) (2020) 026516.
Geoscience and Remote Sensing Symposium (IGARSS), IEEE, 2016, pp. 469– [127] S. Wan, C. Gong, P. Zhong, S. Pan, G. Li, J. Yang, Hyperspectral image
472. classification with context-aware dynamic graph convolutional network,
[101] X. Cao, J. Yao, Z. Xu, D. Meng, Hyperspectral image classification with IEEE Trans. Geosci. Remote Sens. 59 (1) (2020) 597–612.
convolutional neural network and active learning, IEEE Trans. Geosci. Remote [128] A. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M.
Sens.. Andreetto, H. Adam, Mobilenets: efficient convolutional neural networks for
[102] C. Deng, Y. Xue, X. Liu, C. Li, D. Tao, Active transfer learning network: a unified mobile vision applications, arXiv preprint arXiv:1704.04861..
deep joint spectral–spatial feature learning model for hyperspectral image
classification, IEEE Trans. Geosci. Remote Sens. 57 (3) (2018) 1741–1754.
[103] J. Snell, K. Swersky, R. Zemel, Prototypical networks for few-shot learning, in:
Advances in Neural Information Processing Systems, 2017, pp. 4077–4087.. Sen Jia received his B.E. and Ph.D degrees from College
[104] Y. Liu, M. Su, L. Liu, C. Li, Y. Peng, J. Hou, T. Jiang, Deep residual prototype of Computer Science, Zhejiang University in 2002 and
learning network for hyperspectral image classification, Second Target 2007, respectively. He is currently an Associate Profes-
Recognition and Artificial Intelligence Summit Forum, vol. 11427, sor with the College of Computer Science and Software
International Society for Optics and Photonics, 2020, p. 1142705. Engineering, Shenzhen University, China. His research
[105] H. Tang, Y. Li, X. Han, Q. Huang, W. Xie, A spatial–spectral prototypical interests include hyperspectral image processing, signal
network for hyperspectral remote sensing image, IEEE Geosci. Remote Sens. and image processing, pattern recognition and machine
Lett. 17 (1) (2019) 167–171. learning.
[106] B. Xi, J. Li, Y. Li, R. Song, Y. Shi, S. Liu, Q. Du, Deep prototypical networks with
hybrid residual attention for hyperspectral image classification, IEEE J. Sel.
Top. Appl. Earth Observ. Remote Sens. 13 (2020) 3683–3700.
[107] A. Muqeet, M. Iqbal, S. Bae, Hran: hybrid residual attention network for single
image super-resolution, IEEE Access 7 (2019) 137020–137029.
[108] F. Sung, Y. Yang, L. Zhang, T. Xiang, P.H.S. Torr, T.M. Hospedales, Learning to
compare: relation network for few-shot learning, in: Proc. IEEE Conf. Comput.
Vis. Pattern Recognit., 2018, pp. 1199–1208..
[109] B. Deng, D. Shi, Relation network for hyperspectral image classification, in: Shuguo Jiang received the B.E. degree in software
2019 IEEE International Conference on Multimedia & Expo Workshops engineering from Xiamen University of Technology,
(ICMEW), IEEE, 2019, pp. 483–488. Xiamen, China, in 2020. He is currently pursuing the
[110] K. Gao, B. Liu, X. Yu, J. Qin, P. Zhang, X. Tan, Deep relation network for master?s degree in software engineering with the Col-
hyperspectral image few-shot classification, Remote Sens. 12 (6) (2020) 923. lege of Computer Science and Software Engineering,
[111] X. Ma, S. Ji, J. Wang, J. Geng, H. Wang, Hyperspectral image classification Shenzhen University, Shenzhen, China. His research
based on two-phase relation learning network, IEEE Trans. Geosci. Remote interests include hyperspectral image classification,
Sens. 57 (12) (2019) 10398–10409. machine learning, and pattern recognition.
[112] M. Rao, P. Tang, Z. Zhang, Spatial–spectral relation network for hyperspectral
image classification with limited training samples, IEEE J. Sel. Top. Appl. Earth
Observ. Remote Sens. 12 (12) (2019) 5086–5100.
[113] B. Jane, G. Isabelle, L. Yann, S. Eduard, S. Roopak, Signature verification using a
siamese time delay neural network, in: Advances in Neural Information
Processing Systems, 1994, pp. 737–744..
[114] C. Sumit, H. Raia, L. Yann, Learning a similarity metric discriminatively, with
application to face verification, in: Proc. IEEE Conf. Comput. Vis. Pattern
Recognit., vol. 1, IEEE, 2005, pp. 539–546.. Zhijie Lin received the B.E. degree from the Guangzhou
[115] M. Norouzi, D. Fleet, R. Salakhutdinov, Hamming distance metric learning, in: Medical University of Information Management and
Advances in Neural Information Processing Systems, 2012, pp. 1061–1069.. Information System, Guangzhou, China, in 2017. He is
[116] B. Liu, X. Yu, P. Zhang, A. Yu, Q. Fu, X. Wei, Supervised deep feature extraction currently pursuing the master?s degree in computer
for hyperspectral image classification, IEEE Trans. Geosci. Remote Sens. 56 (4) technology with the College of Computer Science and
(2017) 1909–1921. Software Engineering, Shenzhen University, Shenzhen,
[117] B. Liu, X. Yu, A. Yu, G. Wan, Deep convolutional recurrent neural network
China. His research interests include hyperspectral
with transfer learning for hyperspectral image classification, J. Appl. Remote
image classification, machine learning, and pattern
Sens. 12 (2) (2018) 026028.
[118] Z. Li, X. Tang, W. Li, C. Wang, C. Liu, J. He, A two-stage deep domain recognition.
adaptation method for hyperspectral image classification, Remote Sens. 12
(7) (2020) 1054.
[119] L. Huang, Y. Chen, Dual-path siamese cnn for hyperspectral image
classification with limited training samples, IEEE Geosci. Remote Sens. Lett..
[120] M. Rao, P. Tang, Z. Zhang, A developed siamese cnn with 3d adaptive spatial-
spectral pyramid pooling for hyperspectral image classification, Remote Sens.
12 (12) (2020) 1964. Nanying Li received the B.S. degree in automation from
[121] J. Miao, B. Wang, X. Wu, L. Zhang, B. Hu, J. Zhang, Deep feature extraction the Hunan Institute of Science and Technology,
based on siamese network and auto-encoder for hyperspectral image Yueyang, China, in 2017, and is currently working
classification, in: IGARSS 2019-2019 IEEE International Geoscience and
toward the M.Sc. degree in information and communi-
Remote Sensing Symposium, IEEE, 2019, pp. 397–400..
cation engineering with the same Institute. Her research
[122] B. Deng, S. Jia, D. Shi, Deep metric learning-based feature embedding for
interests include hyperspectral image classification and
hyperspectral image classification, IEEE Trans. Geosci. Remote Sens. 58 (2)
(2020) 1422–1435. anomaly detection.
[123] J. Yang, Y. Zhao, J. Chan, Learning and transferring deep joint spectral-spatial
features for hyperspectral classification, IEEE Trans. Geosci. Remote Sens. 55
(8) (2017) 4729–4742, https://doi.org/10.1109/TGRS.2017.2698503.
203
S. Jia, S. Jiang, Z. Lin et al. Neurocomputing 448 (2021) 179–204
Meng Xu received the B.S. and M.E. degrees in electrical Shiqi Yu is currently an associate professor in the
engineering from the Ocean University of China, Qing- Department of Computer Science and Engineering,
dao, China, in 2011 and 2013, respectively, and the Ph.D. Southern University of Science and Technology, Shen-
degree from the University of New South Wales, Can- zhen, China. He received his B.E. degree in computer
berra, ACT, Australia, in 2017. She is currently an science and engineering from the Chu Kochen Honors
Associate Research Fellow with the College of Computer College, Zhejiang University in 2002, and Ph.D. degree in
Science and Software Engineering, Shenzhen University, pattern recognition and intelligent systems from the
Shenzhen, China. Her research interests include cloud Institute of Automation, Chinese Academy of Sciences in
removal and remote sensing image processing. 2007. He worked as an assistant professor and an
associate professor at Shenzhen Institutes of Advanced
Technology, Chinese Academy of Sciences from 2007 to
2010, and as an associate professor at Shenzhen
University from 2010 to 2019. His research interests include computer vision,
pattern recognition and artificial intelligence.
204