Search Results (17)

Search Parameters:
Keywords = bitemporal multispectral images

18 pages, 11901 KiB  
Article
LIRRN: Location-Independent Relative Radiometric Normalization of Bitemporal Remote-Sensing Images
by Armin Moghimi, Vahid Sadeghi, Amin Mohsenifar, Turgay Celik and Ali Mohammadzadeh
Sensors 2024, 24(7), 2272; https://doi.org/10.3390/s24072272 - 2 Apr 2024
Cited by 1 | Viewed by 1323
Abstract
Relative radiometric normalization (RRN) is a critical pre-processing step that enables accurate comparisons of multitemporal remote-sensing (RS) images through unsupervised change detection. Although existing RRN methods generally produce promising results, their effectiveness depends on specific conditions, especially in scenarios where land cover/land use (LULC) differs between image pairs at different locations. These methods often overlook these complexities, potentially introducing biases into RRN results, mainly because they rely on spatially aligned pseudo-invariant features (PIFs) for modeling. To address this, we introduce a location-independent RRN (LIRRN) method that can automatically identify non-spatially matched PIFs based on brightness characteristics. Additionally, as a fast and coregistration-free model, LIRRN complements keypoint-based RRN for more accurate results in applications where coregistration is crucial. The LIRRN process starts by segmenting the reference and subject images into dark, gray, and bright zones using the multi-Otsu threshold technique. PIFs are then efficiently extracted from each zone using nearest-distance-based image content matching without any spatial constraints. These PIFs construct a linear model during subject–image calibration on a band-by-band basis. The performance evaluation involved tests on five registered/unregistered bitemporal satellite images, comparing results from three conventional methods: histogram matching (HM), blockwise KAZE, and keypoint-based RRN algorithms. Experimental results consistently demonstrated LIRRN's superior performance, particularly in handling unregistered datasets. LIRRN also exhibited faster execution times than the blockwise KAZE and keypoint-based approaches while yielding results comparable to those of HM in estimating normalization coefficients.
Combining the LIRRN and keypoint-based RRN models produced even more accurate and reliable results, albeit with a slight increase in computational time. To support further investigation and development of LIRRN, its code and some sample datasets are available at the link in the Data Availability Statement. Full article
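The band-by-band calibration described in the abstract fits a linear model (gain and offset) to brightness-matched PIF pairs and applies it to the subject band. The sketch below illustrates only that step, assuming PIF pairs have already been extracted; the function names are hypothetical and this is not the authors' released code (see their Data Availability Statement for that).

```python
def fit_linear_gain_offset(subject_vals, reference_vals):
    """Least-squares fit of reference ~ a * subject + b over PIF pairs
    from one band (closed-form simple linear regression)."""
    n = len(subject_vals)
    sx, sy = sum(subject_vals), sum(reference_vals)
    sxx = sum(x * x for x in subject_vals)
    sxy = sum(x * y for x, y in zip(subject_vals, reference_vals))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def normalize_band(subject_band, a, b):
    """Apply the fitted gain/offset to every pixel of the subject band."""
    return [a * v + b for v in subject_band]
```

On exactly linear PIF data (reference = 2 x subject + 5) the fit recovers gain 2.0 and offset 5.0; in practice the pairs come from the dark/gray/bright zones without spatial alignment.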

19 pages, 7547 KiB  
Article
Semi-Supervised Urban Change Detection Using Multi-Modal Sentinel-1 SAR and Sentinel-2 MSI Data
by Sebastian Hafner, Yifang Ban and Andrea Nascetti
Remote Sens. 2023, 15(21), 5135; https://doi.org/10.3390/rs15215135 - 27 Oct 2023
Cited by 5 | Viewed by 2342
Abstract
Urbanization is progressing at an unprecedented rate in many places around the world. The Sentinel-1 synthetic aperture radar (SAR) and Sentinel-2 MultiSpectral Instrument (MSI) missions, combined with deep learning, offer new opportunities to accurately monitor urbanization at a global scale. Although the joint use of SAR and optical data has recently been investigated for urban change detection, existing data fusion methods rely heavily on the availability of sufficient training labels. Meanwhile, change detection methods addressing label scarcity are typically designed for single-sensor optical data. To overcome these limitations, we propose a semi-supervised urban change detection method that exploits unlabeled Sentinel-1 SAR and Sentinel-2 MSI data. Using bitemporal SAR and optical image pairs as inputs, the proposed multi-modal Siamese network predicts urban changes and performs built-up area segmentation for both timestamps. Additionally, we introduce a consistency loss, which penalizes inconsistent built-up area segmentation across sensor modalities on unlabeled data, leading to more robust features. To demonstrate the effectiveness of the proposed method, the SpaceNet 7 dataset, comprising multi-temporal building annotations from rapidly urbanizing areas across the globe, was enriched with Sentinel-1 SAR and Sentinel-2 MSI data. Subsequently, network performance was analyzed under label-scarce conditions by training the network on different fractions of the labeled training set. The proposed method achieved an F1 score of 0.555 when using all available training labels, and produced reasonable change detection results (F1 score of 0.491) even with as little as 10% of the labeled training data. In contrast, multi-modal supervised methods and semi-supervised methods using optical data failed to exceed an F1 score of 0.402 under this condition. Code and data are made publicly available. Full article
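The consistency loss described above penalizes disagreement between the SAR-branch and optical-branch built-up segmentations on unlabeled data. A minimal sketch of one plausible form (mean squared disagreement between per-pixel built-up probabilities); the paper's exact formulation may differ:

```python
def consistency_loss(p_sar, p_opt):
    """Mean squared disagreement between per-pixel built-up probabilities
    predicted from the SAR branch and from the optical branch.
    Computed on unlabeled pixels, so no ground-truth labels are needed."""
    assert len(p_sar) == len(p_opt)
    return sum((a - b) ** 2 for a, b in zip(p_sar, p_opt)) / len(p_sar)
```

During semi-supervised training this term is added to the supervised change/segmentation losses, pushing the two modality branches toward consistent predictions and hence more robust features.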

24 pages, 7021 KiB  
Article
A Change Detection Method Based on Multi-Scale Adaptive Convolution Kernel Network and Multimodal Conditional Random Field for Multi-Temporal Multispectral Images
by Shou Feng, Yuanze Fan, Yingjie Tang, Hao Cheng, Chunhui Zhao, Yaoxuan Zhu and Chunhua Cheng
Remote Sens. 2022, 14(21), 5368; https://doi.org/10.3390/rs14215368 - 26 Oct 2022
Cited by 16 | Viewed by 2088
Abstract
Multispectral image change detection is an important application in the field of remote sensing. Multispectral images usually contain many complex scenes, such as ground objects with diverse scales and proportions, so the change detection task requires a feature extractor that excels at adaptive multi-scale feature learning. To address the above-mentioned problems, a multispectral image change detection method based on a multi-scale adaptive kernel network and a multimodal conditional random field (MSAK-Net-MCRF) is proposed. The multi-scale adaptive kernel network (MSAK-Net) extends the encoding path of the U-Net and designs a weight-sharing bilateral encoding path, which simultaneously extracts independent features of bi-temporal multispectral images without introducing additional parameters. A selective convolution kernel block (SCKB) that can adaptively assign weights is designed and embedded in the encoding path of MSAK-Net to extract multi-scale features in images. MSAK-Net retains the skip connections in the U-Net and embeds an upsampling module (UM) based on the attention mechanism in the decoding path, which allows the feature map to better express change information in both the channel and spatial dimensions. Finally, the multimodal conditional random field (MCRF) is used to smooth the detection results of the MSAK-Net. Experimental results on two public multispectral datasets indicate the effectiveness and robustness of the proposed method when compared with other state-of-the-art methods. Full article

25 pages, 14334 KiB  
Article
Multi-Temporal Satellite Image Composites in Google Earth Engine for Improved Landslide Visibility: A Case Study of a Glacial Landscape
by Erin Lindsay, Regula Frauenfelder, Denise Rüther, Lorenzo Nava, Lena Rubensdotter, James Strout and Steinar Nordal
Remote Sens. 2022, 14(10), 2301; https://doi.org/10.3390/rs14102301 - 10 May 2022
Cited by 30 | Viewed by 5986
Abstract
Regional early warning systems for landslides rely on historic data to forecast future events and to verify and improve alarms. However, databases of landslide events are often spatially biased towards roads or other infrastructure, with few reported in remote areas. In this study, we demonstrate how Google Earth Engine can be used to create multi-temporal change detection image composites with freely available Sentinel-1 and -2 satellite images, in order to improve landslide visibility and facilitate landslide detection. First, multispectral Sentinel-2 images were used to map landslides triggered by a summer rainstorm in Jølster (Norway), based on changes in the normalised difference vegetation index (NDVI) between pre- and post-event images. Pre- and post-event multi-temporal images were then created by reducing across all available images within one month before and after the landslide events, from which final change detection image composites were produced. We used the mean of backscatter intensity in co- (VV) and cross-polarisations (VH) for Sentinel-1 synthetic aperture radar (SAR) data and maximum NDVI for Sentinel-2. The NDVI-based mapping increased the number of registered events from 14 to 120, while spatial bias was decreased, from 100% of events located within 500 m of a road to 30% close to roads in the new inventory. Of the 120 landslides, 43% were also detectable in the multi-temporal SAR image composite in VV polarisation, while only the east-facing landslides were clearly visible in VH. Noise, from clouds and agriculture in Sentinel-2, and speckle in Sentinel-1, was reduced using the multi-temporal composite approaches, improving landslide visibility without compromising spatial resolution. 
Our results indicate that manual or automated landslide detection could be significantly improved with multi-temporal image composites using freely available earth observation images and Google Earth Engine, with valuable potential for improving spatial bias in landslide inventories. Using the multi-temporal satellite image composites, we observed significant improvements in landslide visibility in Jølster, compared with conventional bi-temporal change detection methods, and applied this for the first time using VV-polarised SAR data. The GEE scripts allow this procedure to be quickly repeated in new areas, which can be helpful for reducing spatial bias in landslide databases. Full article
(This article belongs to the Special Issue Remote Sensing Analysis of Geologic Hazards)
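The Sentinel-2 mapping step above rests on the drop in NDVI between pre- and post-event imagery. A minimal per-pixel sketch, with a hypothetical 0.2 drop threshold (the study's actual threshold is not given in the abstract):

```python
def ndvi(nir, red):
    """Normalised difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def vegetation_loss_mask(pre, post, threshold=0.2):
    """pre/post: lists of (nir, red) reflectance pairs per pixel.
    True where NDVI dropped by more than `threshold` between dates,
    i.e. a candidate landslide pixel."""
    return [ndvi(n0, r0) - ndvi(n1, r1) > threshold
            for (n0, r0), (n1, r1) in zip(pre, post)]
```

In the study the pre- and post-event values come from multi-temporal composites (maximum NDVI over a month of images), which suppresses cloud and agricultural noise before this differencing step.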

19 pages, 4345 KiB  
Article
Normalized Burn Ratio Plus (NBR+): A New Index for Sentinel-2 Imagery
by Emanuele Alcaras, Domenica Costantino, Francesca Guastaferro, Claudio Parente and Massimiliano Pepe
Remote Sens. 2022, 14(7), 1727; https://doi.org/10.3390/rs14071727 - 3 Apr 2022
Cited by 54 | Viewed by 22293
Abstract
The monitoring of burned areas can easily be performed using satellite multispectral images: several indices are available in the literature for highlighting the differences between healthy vegetation areas and burned areas, in consideration of their different signatures. However, these indices may have limitations determined, for example, by the presence of clouds or water bodies that produce false alarms. To avoid these inaccuracies and optimize the results, this work proposes a new index for detecting burned areas named Normalized Burn Ratio Plus (NBR+), based on the involvement of Sentinel-2 bands. The efficiency of this index is verified by comparing it with five other existing indices, all applied on an area with a surface of about 500 km2 and covering the north-eastern part of Sicily (Italy). To achieve this aim, both a uni-temporal approach (single date image) and a bi-temporal approach (two date images) are adopted. The maximum likelihood classifier (MLC) is applied to each resulting index map to define the threshold separating burned pixels from non-burned ones. To evaluate the efficiency of the indices, confusion matrices are constructed and compared with each other. The NBR+ shows excellent results, especially because it excludes a large part of the areas incorrectly classified as burned by other indices, despite being clouds or water bodies. Full article
(This article belongs to the Special Issue Recent Advances in GIS Techniques for Remote Sensing)
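For context, the classic Normalized Burn Ratio that NBR+ extends contrasts NIR and SWIR reflectance, and the bi-temporal approach differences it across dates. The NBR+ band combination itself is defined in the full paper, so this sketch shows only the standard index as a baseline:

```python
def nbr(nir, swir):
    """Classic Normalized Burn Ratio: burned areas show low NIR and
    high SWIR reflectance, so NBR drops after a fire."""
    return (nir - swir) / (nir + swir) if (nir + swir) else 0.0

def dnbr(pre, post):
    """Bi-temporal approach: difference of pre- and post-fire NBR,
    each given as a (nir, swir) reflectance pair; positive values
    indicate a fire-induced spectral change."""
    return nbr(*pre) - nbr(*post)
```

NBR+ modifies this formulation with additional Sentinel-2 bands specifically to reject clouds and water bodies that the classic index misclassifies as burned.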

21 pages, 9626 KiB  
Article
Deep Learning of High-Resolution Aerial Imagery for Coastal Marsh Change Detection: A Comparative Study
by Grayson R. Morgan, Cuizhen Wang, Zhenlong Li, Steven R. Schill and Daniel R. Morgan
ISPRS Int. J. Geo-Inf. 2022, 11(2), 100; https://doi.org/10.3390/ijgi11020100 - 1 Feb 2022
Cited by 16 | Viewed by 4786
Abstract
Deep learning techniques are increasingly being recognized as effective image classifiers. Despite their successful performance in past studies, their accuracies have varied in complex environments when compared with popularly applied machine learning classifiers. This study seeks to explore the feasibility of using a U-Net deep learning architecture to classify bi-temporal, high-resolution, county-scale aerial images to determine the spatial extent and changes of land cover classes that directly or indirectly impact tidal marsh. The image set used in the analysis is a collection of 1-m resolution National Agriculture Imagery Program (NAIP) tiles from 2009 and 2019, covering Beaufort County, South Carolina. The U-Net CNN classification results were compared with two machine learning classifiers, random trees (RT) and support vector machine (SVM). The results revealed a significant overall accuracy advantage in using the U-Net classifier (92.4%), as opposed to the SVM (81.6%) and RT (75.7%) classifiers. From the perspective of a GIS analyst or coastal manager, the U-Net classifier is now an easily accessible and powerful tool for mapping large areas. Change detection analysis indicated little areal change in marsh extent, though increased land development throughout the county has the potential to negatively impact the health of the marshes. Future work should explore applying the constructed U-Net classifier to coastal environments in large geographic areas, while also implementing other data sources (e.g., LIDAR and multispectral data) to enhance classification accuracy. Full article

19 pages, 7578 KiB  
Article
Alteration Detection of Multispectral/Hyperspectral Images Using Dual-Path Partial Recurrent Networks
by Jinlong Li, Xiaochen Yuan and Li Feng
Remote Sens. 2021, 13(23), 4802; https://doi.org/10.3390/rs13234802 - 26 Nov 2021
Cited by 3 | Viewed by 1632
Abstract
Numerous alteration detection methods are designed based on image transformation algorithms and the divergence of bi-temporal images. In the process of feature transformation, pseudo-variant information caused by complex external factors will be highlighted. As a result, the error of divergence between the two images will be further enhanced. In this paper, we propose to fuse the variability of Deep Neural Networks' (DNNs) structure flexibly with various detection algorithms for bi-temporal multispectral/hyperspectral imagery alteration detection. Specifically, we propose the novel Dual-path Partial Recurrent Networks (D-PRNs) to project more accurate and effective deep features. The Unsupervised Slow Feature Analysis (USFA), Iteratively Reweighted Multivariate Alteration Detection (IRMAD), and Principal Component Analysis (PCA) were then utilized, respectively, with the proposed D-PRNs to generate two groups of transformed features corresponding to the bi-temporal remote sensing images. We next employed the Chi-square distance to compute the divergence between the two groups of transformed features and, thus, obtain the Alteration Intensity Map. Finally, the threshold algorithms K-means and Otsu were, respectively, applied to transform the Alteration Intensity Map into the Binary Alteration Map. Experiments were conducted on two bi-temporal remote sensing image datasets, and the testing results proved that the proposed alteration detection model using D-PRNs outperformed state-of-the-art alteration detection models. Full article
(This article belongs to the Section Remote Sensing Image Processing)
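The divergence step can be sketched as a per-pixel chi-square distance between the two groups of transformed features (assuming non-negative feature values, as is conventional for this distance); thresholding the resulting Alteration Intensity Map with K-means or Otsu then yields the Binary Alteration Map:

```python
def chi_square_distance(f1, f2, eps=1e-12):
    """Chi-square distance between two transformed feature vectors for
    one pixel: sum_i (a_i - b_i)^2 / (a_i + b_i). The small eps guards
    against division by zero; features are assumed non-negative."""
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(f1, f2))
```

Applying this at every pixel over the two bi-temporal feature stacks produces the Alteration Intensity Map; identical features give distance 0, and larger values indicate stronger alteration.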

16 pages, 5470 KiB  
Article
Evaluating the Hyperspectral Sensitivity of the Differenced Normalized Burn Ratio for Assessing Fire Severity
by Max J. van Gerrevink and Sander Veraverbeke
Remote Sens. 2021, 13(22), 4611; https://doi.org/10.3390/rs13224611 - 16 Nov 2021
Cited by 13 | Viewed by 3299
Abstract
Fire severity represents fire-induced environmental changes and is an important variable for modeling fire emissions and planning post-fire rehabilitation. Remotely sensed fire severity is traditionally evaluated using the differenced normalized burn ratio (dNBR) derived from multispectral imagery. This spectral index is based on bi-temporal differenced reflectance changes caused by fires in the near-infrared (NIR) and short-wave infrared (SWIR) spectral regions. Our study aims to evaluate the spectral sensitivity of the dNBR using hyperspectral imagery by identifying the optimal bi-spectral NIR SWIR combination. This assessment made use of a rare opportunity arising from the pre- and post-fire airborne image acquisitions over the 2013 Rim and 2014 King fires in California with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor. The 224 contiguous bands of this sensor allow for 5760 unique combinations of the dNBR at a high spatial resolution of approximately 15 m. The performance of the hyperspectral dNBR was assessed by comparison against field data and the spectral optimality statistic. The field data is composed of 83 in situ measurements of fire severity using the Geometrically structured Composite Burn Index (GeoCBI) protocol. The optimality statistic ranges between zero and one, with one denoting an optimal measurement of the fire-induced spectral change. We also combined the field and optimality assessments into a combined score. The hyperspectral dNBR combinations demonstrated strong relationships with GeoCBI field data. The best performance of the dNBR combination was derived from bands 63, centered at 0.962 µm, and 218, centered at 2.382 µm. This bi-spectral combination yielded a strong relationship with GeoCBI field data of R2 = 0.70 based on a saturated growth model and a median spectral index optimality statistic of 0.31. 
Our hyperspectral sensitivity analysis revealed optimal NIR and SWIR bands for the composition of the dNBR that are outside the ranges of the NIR and SWIR bands of the Landsat 8 and Sentinel-2 sensors. With the launch of the Precursore Iperspettrale Della Missione Applicativa (PRISMA) in 2019 and several planned spaceborne hyperspectral missions, such as the Environmental Mapping and Analysis Program (EnMAP) and Surface Biology and Geology (SBG), our study provides a timely assessment of the potential and sensitivity of hyperspectral data for assessing fire severity. Full article
(This article belongs to the Special Issue Remote Sensing of Burnt Area)
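The sensitivity analysis above amounts to scoring every NIR/SWIR band pair by how well its dNBR tracks field-measured severity. A simplified sketch using squared Pearson correlation as the score (the study additionally uses a saturated growth model and an optimality statistic, omitted here); all names are illustrative:

```python
def pearson_r(xs, ys):
    """Pearson correlation; returns 0.0 when either series has zero
    variance (e.g., a band pair whose dNBR is constant across plots)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5 if vx > 0 and vy > 0 else 0.0

def nbr(nir, swir):
    return (nir - swir) / (nir + swir)

def best_dnbr_pair(pre, post, severity, nir_bands, swir_bands):
    """pre/post: per-plot lists of band reflectances; severity: field
    scores (e.g., GeoCBI) per plot. Scores every (NIR, SWIR) band index
    pair by r^2 between per-plot dNBR and severity; returns the best."""
    best, best_r2 = None, -1.0
    for nb in nir_bands:
        for sb in swir_bands:
            d = [nbr(p0[nb], p0[sb]) - nbr(p1[nb], p1[sb])
                 for p0, p1 in zip(pre, post)]
            r2 = pearson_r(d, severity) ** 2
            if r2 > best_r2:
                best, best_r2 = (nb, sb), r2
    return best, best_r2
```

With AVIRIS data the loop runs over the full NIR and SWIR ranges (5760 combinations); here a three-plot toy example with one informative SWIR band illustrates the selection.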

19 pages, 3742 KiB  
Article
Multispectral Image Change Detection Based on Single-Band Slow Feature Analysis
by Youxi He, Zhenhong Jia, Jie Yang and Nikola K. Kasabov
Remote Sens. 2021, 13(15), 2969; https://doi.org/10.3390/rs13152969 - 28 Jul 2021
Cited by 8 | Viewed by 3064
Abstract
Due to differences in external imaging conditions, multispectral images taken at different periods are subject to radiation differences, which severely affect the detection accuracy. To solve this problem, a modified algorithm based on slow feature analysis is proposed for multispectral image change detection. First, single-band slow feature analysis is performed to process bitemporal multispectral images band by band. In this way, the differences between unchanged pixels in each pair of single-band images can be sufficiently suppressed to obtain multiple feature-difference images containing real change information. Then, the feature-difference images of each band are fused into a grayscale distance image using the Euclidean distance. After Gaussian filtering of the grayscale distance image, false detection points can be further reduced. Finally, the k-means clustering method is performed on the filtered grayscale distance image to obtain the binary change map. Experiments reveal that our proposed algorithm is less affected by radiation differences and has obvious advantages in time complexity and detection accuracy. Full article
(This article belongs to the Special Issue Advances in Optical Remote Sensing Image Processing and Applications)
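The fusion and binarization steps of the pipeline above can be sketched directly: per-band feature-difference images are fused into a grayscale distance image with the Euclidean norm, then a two-class k-means split produces the binary change map (the Gaussian filtering stage between the two steps is omitted here for brevity):

```python
import math

def fuse_differences(diff_bands):
    """Fuse per-band feature-difference images (equal-length pixel lists)
    into one grayscale distance image via the Euclidean norm."""
    return [math.sqrt(sum(d[i] ** 2 for d in diff_bands))
            for i in range(len(diff_bands[0]))]

def two_means_threshold(values, iters=20):
    """1-D k-means with k=2 on the distance image: alternately assign
    pixels to the nearer of two cluster means, then update the means.
    Returns True for pixels in the high-distance ('changed') cluster."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        mid = (lo + hi) / 2
        a = [v for v in values if v <= mid]
        b = [v for v in values if v > mid]
        if not a or not b:
            break
        lo, hi = sum(a) / len(a), sum(b) / len(b)
    mid = (lo + hi) / 2
    return [v > mid for v in values]
```

On a toy image where two bands each flag the same pixel, the fused distance cleanly separates changed from unchanged pixels.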

19 pages, 2453 KiB  
Article
A Classified Adversarial Network for Multi-Spectral Remote Sensing Image Change Detection
by Yue Wu, Zhuangfei Bai, Qiguang Miao, Wenping Ma, Yuelei Yang and Maoguo Gong
Remote Sens. 2020, 12(13), 2098; https://doi.org/10.3390/rs12132098 - 30 Jun 2020
Cited by 23 | Viewed by 3314
Abstract
Adversarial training has demonstrated advanced capabilities for generating image models. In this paper, we propose a deep neural network, named a classified adversarial network (CAN), for multi-spectral image change detection. This network is based on generative adversarial networks (GANs). The generator captures the distribution of the bitemporal multi-spectral image data and transforms it into change detection results, and these change detection results (as the fake data) are input into the discriminator to train the discriminator. The results obtained by pre-classification are also input into the discriminator as the real data. The adversarial training can facilitate the generator learning the transformation from a bitemporal image to a change map. When the generator is trained well, the generator has the ability to generate the final result. The bitemporal multi-spectral images are input into the generator, and then the final change detection results are obtained from the generator. The proposed method is completely unsupervised, and we only need to input the preprocessed data that were obtained from the pre-classification and training sample selection. Through adversarial training, the generator can better learn the relationship between the bitemporal multi-spectral image data and the corresponding labels. Finally, the well-trained generator can be applied to process the raw bitemporal multi-spectral images to obtain the final change map (CM). The effectiveness and robustness of the proposed method were verified by the experimental results on the real high-resolution multi-spectral image data sets. Full article
(This article belongs to the Special Issue Satellite Image Processing and Applications)

36 pages, 1528 KiB  
Review
Change Detection Techniques Based on Multispectral Images for Investigating Land Cover Dynamics
by Dyah R. Panuju, David J. Paull and Amy L. Griffin
Remote Sens. 2020, 12(11), 1781; https://doi.org/10.3390/rs12111781 - 1 Jun 2020
Cited by 61 | Viewed by 17160
Abstract
Satellite images provide an accurate, continuous, and synoptic view of seamless global extent. Within the fields of remote sensing and image processing, land surface change detection (CD) has been amongst the most discussed topics. This article reviews advances in bitemporal and multitemporal two-dimensional CD with a focus on multispectral images. In addition, it reviews some CD techniques used for synthetic aperture radar (SAR). The importance of data selection and preprocessing for CD provides a starting point for the discussion. CD techniques are, then, grouped based on the change analysis products they can generate to assist users in identifying suitable procedures for their applications. The discussion allows users to estimate the resources needed for analysis and interpretation, while selecting the most suitable technique for generating the desired information such as binary changes, direction or magnitude of changes, “from-to” information of changes, probability of changes, temporal pattern, and prediction of changes. The review shows that essential and innovative improvements are being made in analytical processes for multispectral images. Advantages, limitations, challenges, and opportunities are identified for understanding the context of improvements, and this will guide the future development of bitemporal and multitemporal CD methods and techniques for understanding land cover dynamics. Full article
(This article belongs to the Section Environmental Remote Sensing)

18 pages, 26932 KiB  
Article
Patch Similarity Convolutional Neural Network for Urban Flood Extent Mapping Using Bi-Temporal Satellite Multispectral Imagery
by Bo Peng, Zonglin Meng, Qunying Huang and Caixia Wang
Remote Sens. 2019, 11(21), 2492; https://doi.org/10.3390/rs11212492 - 24 Oct 2019
Cited by 50 | Viewed by 5403
Abstract
Urban flooding is a major natural disaster that poses a serious threat to the urban environment. Mapping the flood extent in near real-time is in high demand for disaster rescue and relief missions, reconstruction efforts, and financial loss evaluation. Many efforts have been made to identify flooding zones with remote sensing data and image processing techniques. Unfortunately, the near real-time production of accurate flood maps over impacted urban areas has not been well investigated, due to three major issues. (1) Satellite imagery with high spatial resolution over urban areas usually has a nonhomogeneous background due to different types of objects, such as buildings, moving vehicles, and road networks. As such, classical machine learning approaches can hardly model the spatial relationship between sample pixels in the flooding area. (2) Handcrafted features associated with the data are usually required as input for conventional flood mapping models, which may not be able to fully utilize the underlying patterns of a large number of available data. (3) High-resolution optical imagery often has varied pixel digital numbers (DNs) for the same ground objects as a result of highly inconsistent illumination conditions during a flood. Accordingly, traditional methods of flood mapping have major limitations in generalization based on testing data. To address the aforementioned issues in urban flood mapping, we developed a patch similarity convolutional neural network (PSNet) using satellite multispectral surface reflectance imagery before and after flooding with a spatial resolution of 3 meters. We used spectral reflectance instead of raw pixel DNs so that the influence of inconsistent illumination caused by varied weather conditions at the time of data collection can be greatly reduced. Such consistent spectral reflectance data also enhance the generalization capability of the proposed model.
Experiments on the high resolution imagery before and after the urban flooding events (i.e., the 2017 Hurricane Harvey and the 2018 Hurricane Florence) showed that the developed PSNet can produce urban flood maps with consistently high precision, recall, F1 score, and overall accuracy compared with baseline classification models including support vector machine, decision tree, random forest, and AdaBoost, which were often poor in either precision or recall. The study paves the way to fuse bi-temporal remote sensing images for near real-time precision damage mapping associated with other types of natural hazards (e.g., wildfires and earthquakes). Full article

27 pages, 6576 KiB  
Article
Classification of Land Cover, Forest, and Tree Species Classes with ZiYuan-3 Multispectral and Stereo Data
by Zhuli Xie, Yaoliang Chen, Dengsheng Lu, Guiying Li and Erxue Chen
Remote Sens. 2019, 11(2), 164; https://doi.org/10.3390/rs11020164 - 16 Jan 2019
Cited by 126 | Viewed by 10835
Abstract
The global availability of high spatial resolution images makes mapping tree species distribution possible for better management of forest resources. Previous research mainly focused on mapping single tree species, but information about the spatial distribution of all kinds of trees, especially plantations, is [...] Read more.
The global availability of high spatial resolution images makes mapping tree species distribution possible for better management of forest resources. Previous research mainly focused on mapping single tree species, but information about the spatial distribution of all kinds of trees, especially plantations, is often required. This research aims to identify suitable variables and algorithms for classifying land cover, forest, and tree species. Bi-temporal ZiYuan-3 multispectral and stereo images were used. Spectral responses and textures from multispectral imagery, canopy height features from bi-temporal stereo imagery, and slope and elevation from the stereo-derived digital surface model data were examined through comparative analysis of six classification algorithms: maximum likelihood classifier (MLC), k-nearest neighbor (kNN), decision tree (DT), random forest (RF), artificial neural network (ANN), and support vector machine (SVM). The results showed that the use of multiple source data (spectral bands, vegetation indices, textures, and topographic factors) considerably improved land-cover and forest classification accuracies compared to spectral bands alone; the highest overall accuracy was 84.5% for land-cover classes, from the SVM, and 89.2% for forest classes, from the MLC. The combination of leaf-on and leaf-off seasonal images further improved classification accuracies by 7.8% to 15.0% for land-cover classes and by 6.0% to 11.8% for forest classes compared to single-season spectral images. The combination of multiple source data also improved land-cover classification by 3.7% to 15.5% and forest classification by 1.0% to 12.7% compared to the spectral image alone. MLC provided better land-cover and forest classification accuracies than the machine learning algorithms when spectral data alone were used. However, some machine learning approaches, such as RF and SVM, outperformed MLC when multiple data sources were used. 
Further addition of canopy height features to the multiple source data had no or limited effect on land-cover or forest classification, but it improved the classification accuracies of some tree species such as birch and Mongolia scotch pine. For tree species classification, Chinese pine, Mongolia scotch pine, red pine, aspen and elm, and other broadleaf trees had classification accuracies of over 92%, while larch and birch had relatively low accuracies of 87.3% and 84.5%, respectively. However, these high classification accuracies were obtained from different data sources and classification algorithms, and no single classification algorithm provided the best accuracy for all tree species classes. This research implies that the same data source and classification algorithm cannot provide the best classification results for all land cover classes. It is necessary to develop a comprehensive classification procedure, using an expert-based or hierarchical classification approach, that can employ specific data variables and algorithms for each tree species class. Full article
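The feature-stacking comparison reported above (spectral bands alone versus spectral plus auxiliary variables, fed to RF and SVM) can be sketched with scikit-learn; all samples, class definitions, and parameters below are invented for illustration and are not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic per-pixel samples: 4 weakly informative spectral bands plus
# 4 strongly informative auxiliary variables (e.g., textures, slope).
rng = np.random.default_rng(42)
n = 600
signal = rng.normal(0, 1, n)
y = (signal > 0).astype(int)                        # two toy forest classes
X_spec = signal[:, None] * 0.6 + rng.normal(0, 1, (n, 4))
X_aux = y[:, None] * 1.5 + rng.normal(0, 1, (n, 4))
X_all = np.hstack([X_spec, X_aux])

results = {}
for name, X in [("spectral", X_spec), ("multi-source", X_all)]:
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    for clf in (RandomForestClassifier(random_state=0), SVC(random_state=0)):
        results[(name, type(clf).__name__)] = clf.fit(Xtr, ytr).score(Xte, yte)
```

With a strongly informative auxiliary stack, both learners improve, mirroring the finding that RF and SVM overtake MLC only once multiple data sources are combined.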
(This article belongs to the Special Issue Remote Sensing Techniques for Precision Forestry)

3742 KiB  
Article
Urban Change Analysis with Multi-Sensor Multispectral Imagery
by Yuqi Tang and Liangpei Zhang
Remote Sens. 2017, 9(3), 252; https://doi.org/10.3390/rs9030252 - 9 Mar 2017
Cited by 34 | Viewed by 6952
Abstract
An object-based method is proposed in this paper for change detection in urban areas with multi-sensor multispectral (MS) images. The co-registered bi-temporal images are resampled to match each other. By mapping the segmentation of one image to the other, a change map is generated by characterizing the change probability of image objects based on the proposed change feature analysis. The map is then used to separate changed from unchanged areas by two threshold selection methods and k-means clustering (k = 2). To account for the multi-scale characteristics of ground objects, multi-scale fusion is implemented. The experimental results obtained with QuickBird and IKONOS images show the superiority of the proposed method in detecting urban changes in multi-sensor MS images. Full article
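The k-means (k = 2) step for splitting a change map into changed and unchanged pixels reduces to one-dimensional clustering on change magnitude; below is a self-contained numpy sketch (the magnitude image and the min/max initialization are invented, not taken from the paper):

```python
import numpy as np

def kmeans_change(diff, iters=20):
    """Split a change-magnitude image into changed / unchanged pixels
    with 1-D k-means (k = 2), initialized at the value extremes."""
    x = diff.ravel().astype(float)
    c = np.array([x.min(), x.max()])
    for _ in range(iters):
        # Assign each value to the nearer centroid, then recompute means.
        labels = (np.abs(x - c[0]) > np.abs(x - c[1])).astype(int)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = x[labels == k].mean()
    # Cluster 1 (high magnitude) is interpreted as "changed".
    return labels.reshape(diff.shape).astype(bool)

# Toy change-magnitude map: mostly small values, one high-change block.
rng = np.random.default_rng(1)
mag = rng.uniform(0.0, 0.1, (10, 10))
mag[3:6, 3:6] = rng.uniform(0.8, 1.0, (3, 3))
mask = kmeans_change(mag)
```

Unlike a fixed threshold, the cluster boundary adapts to each scene's magnitude distribution, which is why k-means is a common companion to threshold-selection methods here.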
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)

2548 KiB  
Article
Spatiotemporal Changes of Farming-Pastoral Ecotone in Northern China, 1954–2005: A Case Study in Zhenlai County, Jilin Province
by Yuanyuan Yang, Shuwen Zhang, Dongyan Wang, Jiuchun Yang and Xiaoshi Xing
Sustainability 2015, 7(1), 1-22; https://doi.org/10.3390/su7010001 - 23 Dec 2014
Cited by 43 | Viewed by 6901
Abstract
Analyzing spatiotemporal changes in land use and land cover can provide basic information for appropriate decision-making and thereby plays an essential role in promoting the sustainable use of land resources, especially in ecologically fragile regions. In this paper, a case study was conducted in Zhenlai County, which is part of the farming-pastoral ecotone of Northern China. This study integrated bitemporal change detection and temporal trajectory analysis to trace the paths of land cover change for every location in the study area from 1954 to 2005, using published land cover data based on topographic and environmental background maps as well as remotely sensed images, including Landsat MSS (Multispectral Scanner) and TM (Thematic Mapper). Meanwhile, the Lorenz curve and Gini coefficient, derived from economic models, were also used to study land use structure changes to gain a better understanding of human impact on this fragile ecosystem. Results of bitemporal change detection showed that the most common land cover transition in the study area was an expansion of arable land at the expense of grassland and wetland. A large area of grassland was converted to unused land, indicating serious environmental degradation in Zhenlai County during the past decades. Trajectory analysis of land use and land cover change demonstrated that settlement, arable land, and water bodies were relatively stable in terms of coverage and spatial distribution, while grassland, wetland, and forest land were less stable. Natural forces still dominated the environmental processes of the study area, while human-induced changes also played an important role in environmental change. In addition, different types of land use displayed different concentration trends and changed considerably during the study period. Arable land was the most decentralized, whereas forest land was the most concentrated. 
The above results not only revealed notable spatiotemporal features of land use and land cover change in the time series, but also confirmed the applicability and effectiveness of the methodology in our research, which combined bitemporal change detection, temporal trajectory analysis, and a Lorenz curve/Gini coefficient in analyzing spatiotemporal changes in land use and land cover. Full article
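The Lorenz curve/Gini coefficient device borrowed from economics reduces to a few lines: sort one land-use type's shares across equal-sized spatial units, accumulate them into a Lorenz curve, and measure the gap to the line of perfect equality. A minimal sketch with invented numbers:

```python
import numpy as np

def gini(shares):
    """Gini coefficient of concentration from a Lorenz curve built over
    equal-sized spatial units: 0 = evenly spread, toward 1 = concentrated."""
    x = np.sort(np.asarray(shares, dtype=float))
    n = x.size
    cum = np.cumsum(x) / x.sum()              # Lorenz curve ordinates
    # Trapezoidal area under the Lorenz curve; Gini = 1 - 2 * area.
    area = (np.concatenate(([0.0], cum[:-1])) + cum).sum() / (2 * n)
    return 1 - 2 * area

even = gini([25, 25, 25, 25])        # evenly spread across four units
concentrated = gini([97, 1, 1, 1])   # one unit holds nearly everything
```

A decentralized class like arable land yields a Gini near 0, while a concentrated class like forest land yields a value near 1, which is the contrast the study uses to compare land-use structure over time.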
(This article belongs to the Section Environmental Sustainability and Applications)
