Remote Sensing: Overview of The Special Issue On Applications of Remote Sensing Imagery For Urban Areas
Editorial
Overview of the Special Issue on Applications of Remote
Sensing Imagery for Urban Areas
Xinghua Li 1, * , Yongtao Yu 2 , Xiaobin Guan 3 and Ruitao Feng 4
1 School of Remote Sensing and Information Engineering, Wuhan University, No. 129 Luoyu Road,
Wuhan 430079, China
2 Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, No. 1 Meicheng Road East,
Huaian 223003, China; [email protected]
3 School of Resource and Environmental Sciences, Wuhan University, No. 129 Luoyu Road,
Wuhan 430079, China; [email protected]
4 School of Geography and Tourism, Shaanxi Normal University, No. 620 West Chang’an Avenue,
Xi’an 710119, China; [email protected]
* Correspondence: [email protected]
Urban areas are the centers of human settlement, with intensive anthropic activity and dense built-up infrastructure, and they undergo significant evolution in population, land use, industrial production, and so on. Urbanization-induced environmental pollution, climate change, and ecosystem degradation are research hotspots that bear directly on a sustainable human future. Remote sensing (RS) imagery from different platforms (drone, airborne, and spaceborne) and different sensors (optical, thermal, SAR, and LiDAR) provides essential information for these applications in urban areas, with various characteristics and spatiotemporal resolutions. In particular, the continually improving spatial resolution can capture the complexity of the urban geographical system and is applicable to monitoring numerous natural and anthropogenic issues at different scales.
Citation: Li, X.; Yu, Y.; Guan, X.; Feng, R. Overview of the Special Issue on Applications of Remote Sensing Imagery for Urban Areas. Remote Sens. 2022, 14, 1204. https://doi.org/10.3390/rs14051204
Received: 24 January 2022; Accepted: 28 February 2022; Published: 1 March 2022
Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. Open access under the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

This Special Issue (SI) invited recent advances in the applications of RS imagery for urban areas, and 17 papers in total were selected and published. Among them, 12 papers emphasize novel urban application algorithms based on RS imagery, such as urban attribute mapping, building extraction, classification, and change detection [1–12], and 5 papers directly employ RS imagery to analyze environmental variations and urban expansion in typical cities, covering topics such as the urban heat island, air pollution, and lightning [13–17].

RS imagery provides new opportunities to extract urban building information and detect its changes, and four papers focus on this issue [1–4]. Cao et al. [1] proposed a stacking ensemble deep learning model (SENet) to obtain fine-scale spatial and spectral building information, based on a sparse autoencoder integrating the U-Net, SegNet, and FCN-8s models. The model was assessed on a building dataset in Hebei Province, China, and the results indicate that its accuracy is significantly improved compared with each of the three base models. Xue et al. [2] proposed a multi-branched network structure to fuse the semantic information of building changes at different levels. Experiments on the WHU Building Change Detection Dataset showed that the proposed method obtained IoU, recall, and F1 scores of 0.8526, 0.9418, and 0.9204, respectively, delineating building change areas with complete boundaries and accurate results. Luo et al. [3] utilized double-line camera and multispectral images from the GF-7 high-resolution stereo mapping satellite to segment building boundaries, based on a multilevel feature fusion network (MFFN). The results show that a high building extraction accuracy of 95.29% can be achieved. The 3D building model can then be efficiently built at Level of Detail 1 (LOD1) from the extracted building vectors and the elevation information of the digital surface model, and the urban scene was produced for realistic 3D visualization. Chen et al. [4] used a reconstruction-bias U-Net with self-attention for semantic segmentation of building rooftops. Concretely, a self-attention module is added in the encoder to learn the attention weights of the inputs. The proposed method achieves IoU scores of 89.39% and 73.49% on the WHU and Massachusetts datasets, respectively.
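The accuracy figures quoted throughout this SI (IoU, recall, F1) follow their standard definitions for binary masks; as a reference point, a minimal sketch (the function and variable names are illustrative, not taken from any of the papers):

```python
import numpy as np

def binary_change_metrics(pred, truth):
    """Compute IoU, recall, and F1 for two binary masks (1 = changed/building)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # true positives
    fp = np.logical_and(pred, ~truth).sum()   # false positives
    fn = np.logical_and(~pred, truth).sum()   # false negatives
    iou = tp / (tp + fp + fn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return iou, recall, f1
```

For the scores reported by Xue et al. [2], `pred` would be the predicted change map and `truth` the reference change map.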
Beyond building information extraction, classification, target detection, and change detection are also very important for urban applications of RS imagery, and five papers address these issues [5–9]. For classification, Ling et al. [5] proposed a research framework to quantify urban land cover (ULC) classification accuracy using optical and SAR data with various cloud levels, using three typical supervised classification methods. The experimental results indicate that ULC classification accuracy decreases with increasing cloud content, and that fusing SAR and optical data significantly reduces the confusion between land covers under clouds and improves the classification accuracy. Shi et al. [6] proposed an attention-guided classification method (AGCNet) for multispectral and panchromatic images, based on a lightweight multi-sensor classification network. AGCNet mainly consists of a share split network (SSNet) and a selective classification network (SCNet), which together better balance classification performance against time cost. The classification maps and accuracies show the superiority of the proposed AGCNet, and it can be easily extended to other multi-sensor and multi-scale classifications. For target detection, Chen et al. [7] proposed a Rotation-Invariant and Relation-Aware (RIRA) cross-domain adaptation object detection (CDAOD) network. It is trained at the image level and the prototype level based on a relation-aware graph to align the feature distributions, and adds a rotation-invariant regularizer to handle rotation diversity. The results show that the method effectively improves detection in the target domain and outperforms competing methods. Shen et al. [8] proposed an algorithm combining the constrained energy minimization (CEM) algorithm with an improved maximum between-class variance (OTSU) algorithm (t-OTSU) to obtain initial target detection results and adaptively segment the target region. The detection accuracy is above 99%, and the false alarm rate is below 0.2%. Yang et al. [9] focused on change detection in high-resolution RS imagery and proposed an MRA-SNet model based on the UNet network. A Siamese network extracts the features of the bi-temporal images separately in the encoder and performs a difference connection to better generate difference maps. Multi-Res blocks and residual connections are applied to extract detailed spatial and spectral features at different scales, and an Attention Gates module is added to focus on the changing features and suppress irrelevant ones.
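For context on the t-OTSU variant of Shen et al. [8]: the classic OTSU criterion picks the threshold that maximizes the between-class variance of the image histogram. A sketch of the plain (unimproved) criterion, assuming a single-band detection score map:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Classic OTSU: return the threshold maximizing between-class variance."""
    hist, edges = np.histogram(np.asarray(image).ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()        # probability mass per bin
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                          # weight of class 0 up to each bin
    w1 = 1.0 - w0                              # weight of class 1
    mu0 = np.cumsum(p * centers)               # unnormalized class-0 mean
    mu_t = mu0[-1]                             # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu_t * w0 - mu0) ** 2 / (w0 * w1)
    var_between = np.nan_to_num(var_between)   # edges where w0*w1 == 0
    return centers[np.argmax(var_between)]
```

The paper's t-OTSU improves on this baseline; the sketch only shows the criterion it starts from.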
Three other papers address different demands of urban RS applications [10–12]. Chao et al. [10] analyzed the ability of contextual features derived from very-high-spatial-resolution (<2 m) and medium-spatial-resolution (Sentinel-2, 10 m) imagery to model urban attributes and population density in human-modified landscapes. The results suggest that contextual features can model urban attributes well at very high spatial resolution, with out-of-sample R2 values up to 93%. Feng et al. [11] targeted image quality for urban analysis and proposed a region-by-region registration algorithm that combines feature-based and optical flow methods. Concretely, the initial displacement fields for a pair of images are calculated by a block-weighted projective model in flat-terrain regions and by Brox optical flow estimation in complex-terrain regions. Abnormal displacements, which result from the sensitivity of optical flow to land use or land cover changes, are adaptively detected and corrected by a weighted Taylor expansion. The experimental results demonstrate that the proposed method achieves sub-pixel alignment accuracy for different optical RS images. Zhang et al. [12] investigated the mechanisms of radar return changes induced by urban flooding under different polarizations, and proposed an urban flooding index (UFI) for unsupervised inundated urban area detection. Sentinel-1 PolSAR images are used as the basic data, and Jilin-1 high-resolution optical images acquired on the same day serve as ground truth for visual interpretation. The results indicate that the UFI-based method achieves higher overall accuracy than the conventional unsupervised method.
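Whatever combination of feature matching and optical flow produces the displacement field, applying it to an image is a resampling step. A generic sketch with SciPy, illustrative only and not the authors' block-weighted pipeline:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_by_displacement(image, dy, dx, order=1):
    """Resample `image` at (row + dy, col + dx).

    dy, dx are per-pixel displacement fields (same shape as `image`),
    e.g. the output of a flow estimator; order=1 is bilinear interpolation.
    """
    rows, cols = np.meshgrid(np.arange(image.shape[0]),
                             np.arange(image.shape[1]), indexing="ij")
    coords = np.stack([rows + dy, cols + dx])        # sampling coordinates
    return map_coordinates(image, coords, order=order, mode="nearest")
```

In a region-by-region scheme, `dy` and `dx` would simply be assembled from the per-region displacement estimates before this single warping call.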
The remaining five papers analyze urban expansion and urban environmental changes [13–17]. Liu et al. [13] used time-series Landsat imagery to map and quantify the temporal and spatial characteristics of urban expansion in Xiaonan District from 1990 to 2020. Shen et al. [14] revealed opposite spatiotemporal patterns of the surface urban heat island in two Chinese "stove cities", Wuhan and Nanchang. Wang et al. [15] examined the response of cloud-to-ground lightning to aerosol over air-polluted urban areas in China. Xue et al. [16] analyzed the expansion and evolution of Datong, a typical resource-based mining city in transition, using the Google Earth Engine. Wang et al. [17] estimated and analyzed nighttime PM2.5 concentrations based on LJ1-01 images in the Pearl River Delta urban agglomeration of China.
Author Contributions: Writing—original draft preparation, R.F. and X.G.; writing—review and
editing, X.L. and Y.Y. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Acknowledgments: We would like to thank all the authors and reviewers who contributed to this SI.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Cao, D.; Xing, H.; Wong, M.S.; Kwan, M.-P.; Xing, H.; Meng, Y. A Stacking Ensemble Deep Learning Model for Building Extraction
from Remote Sensing Images. Remote Sens. 2021, 13, 3898. [CrossRef]
2. Xue, J.; Xu, H.; Yang, H.; Wang, B.; Wu, P.; Choi, J.; Cai, L.; Wu, Y. Multi-Feature Enhanced Building Change Detection Based on
Semantic Information Guidance. Remote Sens. 2021, 13, 4171. [CrossRef]
3. Luo, H.; He, B.; Guo, R.; Wang, W.; Kuai, X.; Xia, B.; Wan, Y.; Ma, D.; Xie, L. Urban Building Extraction and Modeling Using GF-7
DLC and MUX Images. Remote Sens. 2021, 13, 3414. [CrossRef]
4. Chen, Z.; Li, D.; Fan, W.; Guan, H.; Wang, C.; Li, J. Self-Attention in Reconstruction Bias U-Net for Semantic Segmentation of Building Rooftops in Optical Remote Sensing Images. Remote Sens. 2021, 13, 2524. [CrossRef]
5. Ling, J.; Zhang, H.; Lin, Y. Improving Urban Land Cover Classification in Cloud-Prone Areas with Polarimetric SAR Images.
Remote Sens. 2021, 13, 4708. [CrossRef]
6. Shi, C.; Dang, Y.; Fang, L.; Lv, Z.; Shen, H. Attention-Guided Multispectral and Panchromatic Image Classification. Remote Sens.
2021, 13, 4823. [CrossRef]
7. Chen, Y.; Liu, Q.; Wang, T.; Wang, B.; Meng, X. Rotation-Invariant and Relation-Aware Cross-Domain Adaptation Object Detection
Network for Optical Remote Sensing Images. Remote Sens. 2021, 13, 4386. [CrossRef]
8. Shen, Y.; Li, J.; Lin, W.; Chen, L.; Huang, F.; Wang, S. Camouflaged Target Detection Based on Snapshot Multispectral Imaging.
Remote Sens. 2021, 13, 3949. [CrossRef]
9. Yang, X.; Hu, L.; Zhang, Y.; Li, Y. MRA-SNet: Siamese Networks of Multiscale Residual and Attention for Change Detection in
High-Resolution Remote Sensing Images. Remote Sens. 2021, 13, 4528. [CrossRef]
10. Chao, S.; Engstrom, R.; Mann, M.; Bedada, A. Evaluating the Ability to Use Contextual Features Derived from Multi-Scale Satellite
Imagery to Map Spatial Patterns of Urban Attributes and Population Distributions. Remote Sens. 2021, 13, 3962. [CrossRef]
11. Feng, R.; Du, Q.; Shen, H.; Li, X. Region-by-Region Registration Combining Feature-Based and Optical Flow Methods for Remote
Sensing Images. Remote Sens. 2021, 13, 1475. [CrossRef]
12. Zhang, H.; Qi, Z.; Li, X.; Chen, Y.; Wang, X.; He, Y. An Urban Flooding Index for Unsupervised Inundated Urban Area Detection
Using Sentinel-1 Polarimetric SAR Images. Remote Sens. 2021, 13, 4511. [CrossRef]
13. Liu, Y.; Zuo, R.; Dong, Y. Analysis of Temporal and Spatial Characteristics of Urban Expansion in Xiaonan District from 1990 to
2020 Using Time Series Landsat Imagery. Remote Sens. 2021, 13, 4299. [CrossRef]
14. Shen, Y.; Zeng, C.; Cheng, Q.; Shen, H. Opposite Spatiotemporal Patterns for Surface Urban Heat Island of Two “Stove Cities” in
China: Wuhan and Nanchang. Remote Sens. 2021, 13, 4447. [CrossRef]
15. Wang, H.; Shi, Z.; Wang, X.; Tan, Y.; Wang, H.; Li, L.; Lin, X. Cloud-to-Ground Lightning Response to Aerosol over Air-Polluted
Urban Areas in China. Remote Sens. 2021, 13, 2600. [CrossRef]
16. Xue, M.; Zhang, X.; Sun, X.; Sun, T.; Yang, Y. Expansion and Evolution of a Typical Resource-Based Mining City in Transition
Using the Google Earth Engine: A Case Study of Datong, China. Remote Sens. 2021, 13, 4045. [CrossRef]
17. Wang, Y.; Wang, M.; Huang, B.; Li, S.; Lin, Y. Estimation and Analysis of the Nighttime PM2.5 Concentration Based on LJ1-01 Images: A Case Study in the Pearl River Delta Urban Agglomeration of China. Remote Sens. 2021, 13, 3405. [CrossRef]