Hybrid Adaptive Neural Network for Remote Sensing Image Classification
Corresponding Author:
Natya Sathyanarayana
Department of Electronics and Communication Engineering, Presidency University
Bengaluru 560064, Karnataka, India
Email: [email protected]
1. INTRODUCTION
Addressing the public's diverse and expanding requirements within a constrained land base will be one
of the key challenges of the future. Success in this undertaking requires efficient land management and the competence to
allot land effectively to a wide range of applications. Three components are said to be necessary for resource
management to be effective: knowledge about the natural resources, clear regulations on how the
resource may be managed (such as laws, policies, and administrative processes), and involvement from everyone
with a stake in the area [1].
When it comes to delivering geographical data on the physical properties of the land, remote sensing
(RS) and its associated technologies are of utmost importance. RS is the process of gathering and analysing
data about an entity or the phenomena using electromagnetic radiation without directly contacting them (from
aircraft or satellite). The two primary categories of RS are optical and microwave. Electromagnetic radiation
having a wavelength between 1 cm and 1 m is used in microwave RS as a measuring technique. Since
microwaves have a longer wavelength than visible and infrared radiation, they have the valuable feature of
penetrating clouds, fog, and ash. Optical RS collects radiation reflected and emitted from the surfaces being
examined, focusing on the region of the electromagnetic spectrum with wavelengths ranging from the
visible through the near infrared up to the thermal infrared [2]–[4].
A remote sensing image (RSI) is characterized by its temporal, spectral, radiometric, and spatial resolutions.
Spectral resolution is the ability of a sensor to resolve fine wavelength intervals; the narrower the
wavelength range for a given channel or band, the higher the spectral resolution [5], [6]. Spatial
resolution refers to the linear dimension on the ground represented by each pixel, that is, the portion of the ground
seen within the sensor's instantaneous field of view, and it determines the smallest objects the sensor can discern.
The range of electromagnetic energy that a sensor is sensitive to each time it takes an image determines the radiometric resolution.
Radiometric resolution is the ability of an imaging system to identify incredibly small changes in energy. A
sensor with higher radiometric resolution can be used to identify subtle variations in the energy being radiated
[7]. The time interval between two consecutive images taken by the sensor at the same ground location is
referred to as the temporal resolution. Based on their orbit, satellite-based sensors may continually monitor a
region or return to the same region every few days [8]. The temporal characteristic is useful for tracking change
in land usage.
Land cover (LC) and land use (LU) mapping is one of the most important and common uses of RS.
The LC describes the top layer of the earth, including any flora, bodies of water, soil, snow, desert, or other
surface coverings. The LU indicates how a piece of land is used, for example as animal habitat, farmland, or recreation.
LU applications involve baseline mapping and ongoing monitoring, since information must be conveyed
in a timely manner to track land accurately and to identify how it changes
over time. This information will be useful in creating plans to balance development demands, competing
uses, and conservation. The destruction of agricultural land, encroachments in water bodies, the expansion of
cities, and the reduction of forest cover are issues that are driving LU research [9]. RSI often has a high spatial
resolution and a poor spectral resolution, or vice versa, as a result of technology compromises linked to data
quantity and signal-to-noise ratio (SNR) constraints. The classification procedure is used to identify various
feature types on the surface of the earth based on the notion that each has a unique spectral reflectance and
emission property [10], [11].
The practical requirement of utilizing RSI across various applications serves as the motivation behind
this research. By developing an adaptive artificial neural network (ANN) model, this research aims to streamline
the process and improve the speed with which RSI can be accurately classified into a class/category regardless of source,
resolution, or size. This method seeks to provide important insights into the existence and importance of defined
classes/categories, thereby enhancing decision-making across several domains. In conclusion, the primary
driving force behind the research is to maximize the potential of RSI through automated classification for better
decision support.
There are six sections to this research. The first section discusses the core concepts of RSI, LU, and
LC while summarizing the motivation and contributions of the study. The second section includes a thorough
literature review. The suggested solution and a block diagram for resolving the identified problem are presented
in the third section. The emphasis on experimental work in the fourth section allows for a deeper
comprehension of the image characteristics used and makes it easier to train the model. The fifth section is
devoted to providing the test image's results and an explanation. The sixth and last section provides a
conclusion of the research.
2. RELATED WORK
ANN categorization has been extensively used in RS applications. There are several different ANN
types, each of which aims to improve a certain aspect of categorization effectiveness. Conjugate gradient is one
technique with a low memory requirement and good efficiency. Zhang and Yu [12] applied the conjugate gradient
approach to Landsat thematic mapper (TM) images, classifying three components obtained through principal
component analysis, and found that this ANN outperforms a traditional classifier in
both quantitative and visual assessment. Han and Liu [13] presented an extreme-learning-machine (ELM)
ensemble approach, in which feature segmentation and non-negative matrix factorization were first applied
to the RSI in order to enhance the ensemble's diversification; ELM was then used as the main classifier to increase
classification accuracy [13]. Kaichang et al. [14] recommended two learning granularities for inductive
learning (IL) from spatial data, namely spatial object granularity and pixel granularity; their findings suggest that
spectral uncertainty can be largely resolved through IL.
Dong et al. [15] suggested a classification system based on the Hopfield neural network, designed around
the features of RSI; their findings indicate that its accuracy surpasses maximum likelihood (ML).
He and Tong [16] proposed an adaptive ant colony algorithm (ACA)-based hyperspectral image classification method
to address the issues of slow convergence, poor accuracy, and low speed. With the aim of achieving high
classification precision in complicated ground object environments and avoiding local convergence, a modified
technique was devised that addresses the RSI classification challenge using gene expression programming
based on a grouping technique [17]. In order to improve efficiency, Tan et al. [18] presented a method for
classifying RSI based on support vector machine (SVM) and object semantics. Rough set theory is a recently
developed soft computing technique for handling ambiguity and uncertainty. Dong et al. [19] highlighted its
fundamental theory, nature, and contemporary applications, and then incorporated rough set theory into the
processing of RSI classification.
The Gustafson-Kessel and Gath-Geva algorithms were devised [20] to enhance the conventional fuzzy c-
means (FCM) algorithm, which uses the Euclidean distance norm; these algorithms are utilized to
improve classification accuracy and overcome the drawbacks of FCM in conventional clustering techniques.
Wan et al. [21] provided a self-adaptive adjustment of the clustering centre strategy after evaluating
the spectral characteristics of multispectral RSI of LU, which consists of many different
surface object categories and for which it is difficult to develop a multi-distribution framework of class spectral components.
As stated in [22], a modified granular Hough transform improves the recommended granular watershed algorithm's
ability to differentiate lines of different widths and lengths in RSI, while the latter performs considerably more
reliably with respect to human visual qualities in segmentation.
Over recent years, much effort has gone into creating various scene categorization algorithms for
RSI; Chen et al. [23] used sparsity-regularized feature learning to solve multi-class classification problems with
satisfactory performance. A comparison of various supervised learning techniques for classifying RSI was
reported in [24]; the analysis focused primarily on categorising LU and LC, with SVM found to be the
most successful of all the approaches assessed. A hyperspectral image (HSI) is a type of image that can
capture a large range of finely tuned small spectral bands between the infrared and visible spectrum. The vast
volume of spectral data offers significant LC information that assists in the accurate categorization of LU and
LC on the surface. However, obtaining labelled training data from HSI necessitates labor- and time-intensive
processes. Therefore, a classifier design based on active learning that employs as few labelled samples as feasible was
suggested [25]. The objects and features of RSI typically appear against ambiguous backgrounds
and therefore fail to produce satisfying results, and the significant intraclass differences make it more challenging to
identify the RSI appropriately. In order to resolve this problem and capture characteristics from three particular domains
for the scene categorization challenge, a multi-view feature learning network has been presented [26]. To
address the aforementioned issues with classic deep learning (DL) networks, an ensemble of prototype
networks and model-agnostic meta-learning was introduced [27]; this method applied meta-learning to the
RSI categorization issue. A prominent subject of study is the multiresolution classification of panchromatic
and multispectral images, where the key issue is how to properly analyse the data and extract
characteristics that increase classification accuracy. For the categorization of multi-resolution RSI, a concept for
an adaptive hybrid fusion (AHF) network that includes both data fusion and feature fusion was presented in [28].
3. PROPOSED SOLUTION
The aim is to develop a computationally inexpensive yet efficient ANN model for categorizing LC in
RSI of any size. The primary objective is to identify the main class/category to which the image belongs and then
calculate each class/category's percentage contribution. It is essential that the design be template-free in order to
avoid the need for template storage and reduce memory utilization.
To address this aim, an intelligent solution technique is to be implemented whose performance
justifies its compute and memory demands. This is possible if the dimensionality of the
image information is reduced without using a complex compression algorithm. The resulting quantitative
information should be representative of the available image content characteristics and should be of the same
size irrespective of the image size. An intelligent classifier platform that is both simple
and efficient, and that supports effective learning, must then be developed.
To accomplish this goal, the entire image classification issue has been framed as a one-dimensional
pattern recognition problem, and the solution was designed in two stages. First, the
quantitative availability of contents is determined using the histogram approach; the pixel density-
based distribution, which provides a one-dimensional pattern of the same array length for different
image sizes, is generated through normalization. Second, the universal approximation capability of the
multilayer feed-forward architecture is exploited to learn the pixel density distribution (PDD) pattern
efficiently. Gradient learning is made more effective by including an adaptive slope in
the activation function of the neural network.
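As a minimal sketch of the adaptive-slope idea (the slope symbol `p`, the function names, and the use of NumPy are assumptions made for illustration, not the paper's notation), the unipolar sigmoid and the two gradients needed to learn both the connection weights and the slope can be written as:

```python
import numpy as np

def sigmoid(x, p=1.0):
    """Unipolar sigmoid with adjustable slope p: f(x) = 1 / (1 + exp(-p * x))."""
    return 1.0 / (1.0 + np.exp(-p * x))

def dsigmoid_dx(x, p=1.0):
    """Derivative with respect to the net input x, used for the weight updates."""
    f = sigmoid(x, p)
    return p * f * (1.0 - f)

def dsigmoid_dp(x, p=1.0):
    """Derivative with respect to the slope p, used for the adaptive-slope update."""
    f = sigmoid(x, p)
    return x * f * (1.0 - f)
```

Increasing `p` narrows the active region and enlarges the saturation regions, while decreasing it does the opposite, which is the behaviour sketched in Figure 1(b).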
A multilayer perceptron (MLP) with three layers was considered, with the hidden and output layers
carrying the activation function, a unipolar sigmoid. The weights were updated using the
gradient descent method, with a momentum term included to boost the speed of convergence. If the slope
of the sigmoid function is fixed, the operating point may shift during learning into the saturation regions R1 and R3,
as illustrated in Figure 1(a), where varying the inputs produces little to no change in the output. Such a situation can make
correct and fast learning harder, since the operating point may not shift out of those regions or may need a significant
number of iterations to do so.
Hence, in the proposed solution the slope of the activation function is adaptive during training: a change
in the slope changes the extent of the active and saturation
regions, as shown in Figure 1(b). This provision gives the learner three degrees of freedom: hidden layer
weights, output layer weights, and activation function slope. The learning technique that was used is illustrated
as follows.
a) The network connection weights are initialized with random values in the range [-1, 1], and the slopes of all
activation functions are set to 1.
b) An input is taken from the PDD training set and processed through the network to deliver the
outputs.
c) The obtained outputs are then compared to the pre-defined class/category targets, and the error is
evaluated using (1) and (2); a sketch of the corresponding update step is given after Figure 1.
For the output layer,
Figure 1. Sigmoid function: (a) different regions under the unipolar sigmoid function and (b) change in the
area of active and saturation regions with different ‘p’ values (or slopes)
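To make the three degrees of freedom concrete, the following is a minimal sketch of one gradient-descent step with momentum in which the hidden-layer weights, the output-layer weights, and the activation slopes are all updated. The per-layer scalar slopes `p1` and `p2`, the variable names, and the squared-error objective are illustrative assumptions; the paper's exact update equations (1) and (2) are not reproduced here.

```python
import numpy as np

def train_step(x, t, W1, W2, p1, p2, vel, eta=0.2, alpha=0.1):
    """One gradient-descent step with momentum, updating weights and slopes.

    x   : input NPD pattern, shape (n_in,)
    t   : target vector for the class/category, shape (n_out,)
    W1  : hidden-layer weights, shape (n_hidden, n_in)
    W2  : output-layer weights, shape (n_out, n_hidden)
    p1, p2 : activation slopes of the hidden and output layers
    vel : dict holding the momentum terms for W1 and W2
    """
    # Forward pass through the two sigmoid layers
    a1 = W1 @ x
    h = 1.0 / (1.0 + np.exp(-p1 * a1))
    a2 = W2 @ h
    y = 1.0 / (1.0 + np.exp(-p2 * a2))

    # Output error and local gradients (standard backpropagation form)
    e = t - y
    d2 = e * p2 * y * (1.0 - y)
    back_h = W2.T @ d2                      # error back-propagated to the hidden layer
    d1 = back_h * p1 * h * (1.0 - h)

    # Adaptive-slope updates: gradient of the squared error w.r.t. p2 and p1
    p2 += eta * np.sum(e * y * (1.0 - y) * a2)
    p1 += eta * np.sum(back_h * h * (1.0 - h) * a1)

    # Weight updates with momentum to speed up convergence
    vel["W2"] = eta * np.outer(d2, h) + alpha * vel["W2"]
    vel["W1"] = eta * np.outer(d1, x) + alpha * vel["W1"]
    W2 += vel["W2"]
    W1 += vel["W1"]

    return W1, W2, p1, p2, vel, np.mean(e ** 2)
```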
The complete flow diagram of the suggested solution for the training and testing phases is given in Figure 2.
First, the individual red (R), green (G), and blue (B) colour matrix pixels of each RS colour image are extracted,
and a corresponding colour histogram for each is generated. A self-maximum normalizing technique is applied to
each histogram so that every image's PDD has the same scale along the horizontal and vertical axes, with all values
falling within the range 0 to 1. To create the training data set for each colour, a set of PDDs from images of the same
class/category was assembled. The output layer has the same number of nodes as the number of image classes/categories.
The gradient technique was used in training to provide the appropriate weights and slope values for each class/category
and colour; these weights and slope values were stored, which required much less memory than storing templates.
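As an illustration of this stage, the sketch below (the function name and the use of NumPy are assumptions) converts an RGB image of any size into three fixed-length NPD patterns by building a 256-bin histogram per colour matrix and applying self-maximum normalization:

```python
import numpy as np

def npd_patterns(rgb_image):
    """Convert an RGB image of any size into three 256-length NPD patterns.

    rgb_image : uint8 array of shape (H, W, 3).
    Returns a dict with one normalized pixel-density vector per colour channel.
    """
    patterns = {}
    for idx, channel in enumerate(("red", "green", "blue")):
        pixels = rgb_image[:, :, idx].ravel()
        # 256-bin histogram: pixel count at each intensity level 0..255
        hist, _ = np.histogram(pixels, bins=256, range=(0, 256))
        # Self-maximum normalization: scale so the peak density equals 1,
        # making patterns comparable across images of different sizes
        patterns[channel] = hist / hist.max()
    return patterns
```

Because the histogram always has 256 bins and is scaled by its own maximum, images of different sizes produce patterns of identical length and range, which is what allows a single fixed network input size.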
As a result, three neural networks are developed, each capable of recognizing the normalized
pixel density (NPD) pattern of one of the RGB colour pixel matrices. During testing, the colour matrix pixels are extracted
first, and after obtaining the histogram for each colour matrix, the normalized pixel density distribution (NPDD)
is obtained. The same three architectures used during training process the input for each colour pixel matrix.
The class/category whose learned network parameters correlate most strongly with the image PDD yields the
maximum output. If more than one major content contributor exists in the image, the outputs of different nodes
in the network take values in similar proportion, because each network architecture has been trained to recognize
the NPDD pattern of a different class/category. This
proportional outcome can be used to determine whether certain class/category contents are present in the input images.
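The testing stage can be summarised by the hedged sketch below, where `forward` stands for a pass through one trained colour network (its exact form follows the training sketches above) and the class list mirrors the five categories used in this work:

```python
import numpy as np

CLASSES = ["snow cover", "bare land", "water and land structure", "vegetation", "city"]

def classify(patterns, networks, forward):
    """Sum the three colour networks' outputs into per-class decision values.

    patterns : dict of NPD vectors keyed by "red", "green", and "blue"
    networks : dict of trained parameters, one entry per colour
    forward  : function (params, x) -> output vector of length len(CLASSES)
    """
    decision = np.zeros(len(CLASSES))
    for colour, x in patterns.items():
        decision += forward(networks[colour], x)
    best = CLASSES[int(np.argmax(decision))]
    # The relative sizes of the decision values indicate the proportion of
    # other class/category contents present in the image
    proportions = decision / decision.sum()
    return best, decision, proportions
```

The image is assigned to the class/category with the largest summed decision value, while the remaining values indicate other contents present in the image.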
4. EXPERIMENTAL WORK
Five different image affiliation types, (i) snow covered, (ii) bare land, (iii) water body with land structure,
(iv) vegetation, and (v) city, have been considered in this work, based on what was assessed
to be the majority of each image's content. No limitations were placed on the type or resolution of the images,
so varied image sizes have been considered. The individual colour matrix images
(R, G, and B) were extracted from each image, and a histogram was created for each colour matrix.
After normalisation, the matching colour matrices from the various categories were arranged in a block with their NPD.
As a consequence, there are three blocks, each containing the pixel density distributions for the various
categories. Each colour matrix has a PDD of length 256, and there were 5 distinct categories in total. Therefore,
a feed-forward neural network with an input size of 256 neurons and an output size of 5 neurons has been taken
into consideration for training. A total of 16 hidden nodes were used; at less than 10% of the input
dimension, this number of nodes was found to offer the best generalisation. At the active nodes, the
unipolar sigmoid function was used as the transfer function. To make learning
faster and more effective, gradient descent was used to learn the neuron weights
and the transfer function slope simultaneously, and a momentum term was incorporated to speed up
learning. The learning and momentum constants were 0.2 and 0.1, respectively; these low values
ensure slow, smooth learning, and a maximum of 500 iterations was set. To make the suggested solution
visually verifiable, two images (image-1 and image-2) were considered from each
of the five classes/categories, as illustrated in Figure 3, and Table 1 gives the sizes of these two training
images for each class/category. It is obvious from the histograms in Figure 4
(image-1 and image-2 from each category) and their normalised histograms in Figure 5 (image-1 and
image-2 from each category) that this image classification problem may be effectively
handled using pattern recognition techniques and neural networks. For each colour matrix (R, G, and B),
the learning convergence with the proposed neural architecture is depicted in Figure 6. It
is evident that the mean square error (MSE) was reduced quickly and nearly to zero.
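Tying the earlier sketches to the reported configuration, a hypothetical training driver for one colour network could look as follows; the stopping tolerance and random seed are assumptions, and `train_step` refers to the update sketch given after Figure 1:

```python
import numpy as np

# Network and training configuration reported in the experiments
N_IN, N_HIDDEN, N_OUT = 256, 16, 5     # input, hidden, and output layer sizes
ETA, ALPHA, MAX_ITER = 0.2, 0.1, 500   # learning rate, momentum, iteration cap

def train_network(train_x, train_t, seed=0):
    """Train one colour network on rows of NPD patterns (train_x) and targets (train_t)."""
    rng = np.random.default_rng(seed)
    W1 = rng.uniform(-1.0, 1.0, (N_HIDDEN, N_IN))    # hidden weights in [-1, 1]
    W2 = rng.uniform(-1.0, 1.0, (N_OUT, N_HIDDEN))   # output weights in [-1, 1]
    p1 = p2 = 1.0                                    # activation slopes start at 1
    vel = {"W1": np.zeros_like(W1), "W2": np.zeros_like(W2)}
    for _ in range(MAX_ITER):
        mse = 0.0
        for x, t in zip(train_x, train_t):
            W1, W2, p1, p2, vel, err = train_step(x, t, W1, W2, p1, p2, vel, ETA, ALPHA)
            mse += err
        mse /= len(train_x)
        if mse < 1e-4:   # assumed tolerance; MSE approaches zero in Figure 6
            break
    return {"W1": W1, "W2": W2, "p1": p1, "p2": p2}
```

Only the learned weights and slope values need to be stored afterwards, which is the memory saving described relative to storing templates.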
Figure 3. Images used in training from five different categories/classes: (a) snow cover, (b) bare land,
(c) water and land structure, (d) vegetation, and (e) city
Table 1. Size of the images used in training from five different categories/classes
Categories/classes Image-1 (size in pixels) Image-2 (size in pixels)
Snow cover 888×572 508×595
Bare land 421×297 421×262
Water and land structure 136×128 136×128
Vegetation 789×555 1024×768
City 600×309 2847×2817
Figure 4. Histogram corresponding to each image in the categories/classes used in training: (a) snow cover,
(b) bare land, (c) water and land structure, (d) vegetation, and (e) city
Figure 5. NPD vs pixel distribution corresponding to each image in the categories/classes used in training:
(a) snow cover, (b) bare land, (c) water and land structure, (d) vegetation, and (e) city
Figure 6. Learning convergence with the proposed neural architecture: (a) red matrix learning with NPD, (b)
green matrix learning with NPD, and (c) blue matrix learning with NPD
5. RESULTS
To verify the performance of the proposed solution, a new image (the test case image) was
considered, as depicted in Figure 7(a). The corresponding NPD and histogram of the test case image
are shown in Figures 7(b) and 7(c), respectively. Testing of this image begins with the extraction of the
colour matrix pixels (R, G, and B), and the NPDD is produced after each colour matrix's histogram is
obtained. The same three architectures from training were applied to this image. Every network
architecture has been trained to identify the NPDD pattern of a different class/category, so if there are
multiple important content contributors in the image, the outputs from the various nodes in the network
take values in a similar proportion. This proportionate result can be used to ascertain whether specific
class/category contents are present in this image.
Figure 7. Test case result: (a) test case image considered, (b) NPD vs pixel distribution of test case image,
and (c) RGB histogram of test case image
The test case image clearly shows that the majority of the information concerns the city
class/category. The city class/category obtained the highest decision value, as shown in the last column of Table 2,
where the readings for red (0.5787), green (0.3955), and blue (0.5997) sum to a total
of 1.5739, higher than that of any other class/category. The water and land structure class/category, for which the values for red
(0.0379), green (0.6697), and blue (0.4251) sum to a total of 1.1326, emerged with
the second highest decision value, because the test case image also shows the presence of water bodies. It is
evident from Figure 7 that the test case image contains a combination of vegetation, water bodies, buildings, and some
bare land. After processing the entire test case image, the determined decision values for
the various colours (R, G, and B) and category/class contents are depicted in Figure 8, and the numerical results
for each class/category are provided in Table 2.
Table 2. The decision value outcome over the test case image
Colour matrix Snow cover Bare land Water and land structure Vegetation City
Red 0.0244 0.0469 0.0379 0.1316 0.5787
Green 0.0223 0.0605 0.6697 0.0183 0.3955
Blue 0.0081 0.1029 0.4251 0.0092 0.5997
Total 0.0548 0.2102 1.1326 0.1591 1.5739
Figure 8. The decision value over individual colour matrix and final class/categories outcome
6. CONCLUSION
RSI categorization is widely used in real-world situations. By transforming the image
classification problem into a pattern recognition challenge, the requirement for an effective and
straightforward classification solution design has been achieved. This transformation has made it possible to
create a solution that works for images of any size and type. The dimensionality has been drastically decreased, with the
transformed array containing only 256 variables for images of any size. The adaptive slope has made learning more
efficient, and storing only the weight and slope values has lessened the load on memory. Being able to define the
different class characteristics quantitatively can be extremely helpful in practice for understanding how a particular
region has changed over time. For example, it may be used to describe the many changes that have taken
place in a city, or how agricultural land has changed from being used for farming to being bare land or
occupied by buildings. The proposed approach thus delivers a neural architecture that combines good performance
with a simple design.
REFERENCES
[1] R. Pazúr, B. Price, and P. M. Atkinson, “Fine temporal resolution satellite sensors with global coverage: an opportunity for landscape
ecologists,” Landscape Ecology, vol. 36, no. 8, pp. 2199–2213, 2021, doi: 10.1007/s10980-021-01303-w.
[2] N. Sathyanarayana and V. J. Rehna, “Land cover classification schemes using remote sensing images: a recent survey,” British
Journal of Applied Science & Technology, vol. 13, no. 4, pp. 1–11, 2016, doi: 10.9734/bjast/2016/22037.
[3] S. A. Ahmed, H. Desa, and A. S. T. Hussain, “Classification of semantic segmentation using fully convolutional networks based
unmanned aerial vehicle application,” IAES International Journal of Artificial Intelligence, vol. 12, no. 2, pp. 641–647, 2023, doi:
10.11591/ijai.v12.i2.pp641-647.
[4] E. R. Kondal and S. S. Barpanda, “Hyperspectral image classification using Hyb-3D convolution neural network spectral
partitioning,” Indonesian Journal of Electrical Engineering and Computer Science, vol. 29, no. 1, pp. 295–303, 2023, doi:
10.11591/ijeecs.v29.i1.pp295-303.
[5] V. Y. Ignatiev, I. A. Matveev, A. B. Murynin, A. A. Usmanova, and V. I. Tsurkov, “Increasing the spatial resolution of panchromatic
satellite images based on generative neural networks,” Journal of Computer and Systems Sciences International, vol. 60, no. 2, pp.
239–247, 2021, doi: 10.1134/S1064230721020076.
[6] M. Karthick et al., “Real-Time MRI lungs images revealing using hybrid feedforward deep neural network and convolutional neural
network,” Intelligent Data Analysis, vol. 27, pp. 95–114, 2023, doi: 10.3233/IDA-237436.
[7] N. Verde, G. Mallinis, M. Tsakiri-Strati, C. Georgiadis, and P. Patias, “Assessment of radiometric resolution impact on remote
sensing data classification accuracy,” Remote Sensing, vol. 10, no. 8, pp. 1–17, 2018, doi: 10.3390/rs10081267.
[8] H. Salehi, A. Shamsoddini, S. M. Mirlatifi, B. Mirgol, and M. Nazari, “Spatial and temporal resolution improvement of actual
evapotranspiration maps using landsat and MODIS data fusion,” Frontiers in Environmental Science, vol. 9, pp. 1–14, 2021, doi:
10.3389/fenvs.2021.795287.
[9] N. Sathyanarayana, R. K, and S. Singh, “Insights on deep learning based segmentation schemes towards analyzing satellite
imageries,” International Journal of Advanced Computer Science and Applications, vol. 12, no. 11, pp. 119–129, 2021, doi:
10.14569/IJACSA.2021.0121114.
[10] M. Sobhana, S. C. Chaparala, D. N. V. S. L. S. Indira, and K. K. Kumar, “A disaster classification application using convolutional
neural network by performing data augmentation,” Indonesian Journal of Electrical Engineering and Computer Science, vol. 27,
no. 3, pp. 1712–1720, 2022, doi: 10.11591/ijeecs.v27.i3.pp1712-1720.
[11] K. W. V. Geollegue, E. R. Arboleda, and A. A. Dizon, “Seed of rice plant classification using coarse tree classifier,” IAES
International Journal of Artificial Intelligence, vol. 11, no. 2, pp. 727–735, 2022, doi: 10.11591/ijai.v11.i2.pp727-735.
[12] D. Zhang and L. Yu, “Conjugate gradient method neural network for medium resolution remote sensing image classification,” in
Applied Informatics and Communication, Berlin, Heidelberg: Springer, 2011, pp. 264–270, doi: 10.1007/978-3-642-23220-6_32.
[13] M. Han and B. Liu, “A remote sensing image classification method based on extreme learning machine ensemble,” in Advances in
Neural Networks – ISNN 2013, Berkeley, CA: Springer, 2013, pp. 447–454, doi: 10.1007/978-3-642-39065-4_54.
[14] D. Kaichang, L. Deren, and L. Deyi, “Remote sensing image classification with gis data based on spatial data mining techniques,”
Geo-Spatial Information Science, vol. 3, no. 4, pp. 30–35, 2000, doi: 10.1007/BF02829393.
[15] G. J. Dong, Y. S. Zhang, and C. J. Zhu, “Remote sensing image classification algorithm based on hopfield neural network,” in
Advances in Neural Networks - ISNN 2006, Berlin, Heidelberg: Springer, 2006, pp. 337–342, doi: 10.1007/11760023_49.
[16] T. He and H. Tong, “Remote sensing image classification based on adaptive ant colony algorithm,” Arabian Journal of Geosciences,
vol. 13, no. 14, pp. 1–7, 2020, doi: 10.1007/s12517-020-05717-9.
[17] J. Lu and Y. Cheng, “Application of Bs-Gep algorithm in water conservancy remote sensing image classification,” in Application
of Intelligent Systems in Multi-modal Information Analytics, Cham: Springer, 2022, pp. 1029–1034, doi: 10.1007/978-3-031-05484-
6_139.
[18] X. Tan, Y. Song, and W. Xiang, “Remote sensing image classification based on SVM and object semantic,” in Communications in
Computer and Information Science, Berlin, Heidelberg: Springer, 2013, pp. 748–755, doi: 10.1007/978-3-642-45025-9_73.
[19] G. J. Dong, Y. S. Zhang, and Y. H. Fan, “Remote sensing image classification algorithm based on rough set theory,” in Fuzzy
Information and Engineering, Berlin, Heidelberg: Springer, 2007, pp. 846–851, doi: 10.1007/978-3-540-71441-5_92.
[20] J. Yu, P. Guo, P. Chen, Z. Zhang, and W. Ruan, “Remote sensing image classification based on improved fuzzy c-means,” Geo-
Spatial Information Science, vol. 11, no. 2, pp. 90–94, 2008, doi: 10.1007/s11806-008-0017-8.
[21] S. Wan et al., “The self-adaptive adjustment method of clustering center in multi-spectral remote sensing image classification of
land use,” in IFIP Advances in Information and Communication Technology, Berlin, Heidelberg: Springer, 2012, pp. 559–568, doi:
10.1007/978-3-642-27278-3_57.
[22] W. Zhaocong, Y. Lina, and Q. Maoyun, “Granular approach to object-oriented remote sensing image classification,” in Rough Sets
and Knowledge Technology, Berlin, Heidelberg: Springer, 2009, pp. 563–570, doi: 10.1007/978-3-642-02962-2_71.
[23] T. Chen, Y. Zhao, and Y. Guo, “Sparsity-regularized feature selection for multi-class remote sensing image classification,” Neural
Computing and Applications, vol. 32, no. 11, pp. 6513–6521, 2020, doi: 10.1007/s00521-019-04046-7.
[24] A. Joshi, A. Dhumka, Y. Dhiman, C. Rawat, and Ritika, “A comparative study of supervised learning techniques for remote sensing
image classification,” in Soft Computing: Theories and Applications, Singapore: Springer, 2022, pp. 49–61, doi: 10.1007/978-981-
16-1740-9_6.
[25] V. K. Shrivastava and M. K. Pradhan, “Hyperspectral remote sensing image classification using active learning,” Studies in
Computational Intelligence, vol. 907, pp. 133–152, 2021, doi: 10.1007/978-3-030-50641-4_8.
[26] Y. Guo, J. Ji, D. Shi, Q. Ye, and H. Xie, “Multi-view feature learning for VHR remote sensing image classification,” Multimedia
Tools and Applications, vol. 80, no. 15, pp. 23009–23021, 2021, doi: 10.1007/s11042-020-08713-z.
[27] S. Pundir and J. A. Akshay, “EPM: meta-learning method for remote sensing image classification,” in Machine Intelligence and
Smart Systems, Singapore: Springer, 2022, pp. 329–339, doi: 10.1007/978-981-16-9650-3_25.
[28] W. Ma et al., “A novel adaptive hybrid fusion network for multiresolution remote sensing images classification,” IEEE Transactions
on Geoscience and Remote Sensing, vol. 60, pp. 1–17, 2022, doi: 10.1109/TGRS.2021.3062142.
BIOGRAPHIES OF AUTHORS