

Search Results (2,669)

Search Parameters:
Keywords = optical remote sensing

42 pages, 2221 KiB  
Article
A Novel Evolutionary Deep Learning Approach for PM2.5 Prediction Using Remote Sensing and Spatial–Temporal Data: A Case Study of Tehran
by Mehrdad Kaveh, Mohammad Saadi Mesgari and Masoud Kaveh
ISPRS Int. J. Geo-Inf. 2025, 14(2), 42; https://doi.org/10.3390/ijgi14020042 - 23 Jan 2025
Abstract
Forecasting particulate matter with a diameter of 2.5 μm (PM2.5) is critical due to its significant effects on both human health and the environment. While ground-based pollution measurement stations provide highly accurate PM2.5 data, their limited number and geographic coverage present significant challenges. Recently, the use of aerosol optical depth (AOD) has emerged as a viable alternative for estimating PM2.5 levels, offering a broader spatial coverage and higher resolution. Concurrently, long short-term memory (LSTM) models have shown considerable promise in enhancing air quality predictions, often outperforming other prediction techniques. To address these challenges, this study leverages geographic information systems (GIS), remote sensing (RS), and a hybrid LSTM architecture to predict PM2.5 concentrations. Training LSTM models, however, is an NP-hard problem, with gradient-based methods facing limitations such as getting trapped in local minima, high computational costs, and the need for continuous objective functions. To overcome these issues, we propose integrating the novel orchard algorithm (OA) with LSTM to optimize air pollution forecasting. This paper utilizes meteorological data, topographical features, PM2.5 pollution levels, and satellite imagery from the city of Tehran. Data preparation processes include noise reduction, spatial interpolation, and addressing missing data. The performance of the proposed OA-LSTM model is compared to five advanced machine learning (ML) algorithms. The proposed OA-LSTM model achieved the lowest root mean square error (RMSE) value of 3.01 µg/m3 and the highest coefficient of determination (R2) value of 0.88, underscoring its effectiveness compared to other models. This paper employs a binary OA method for sensitivity analysis, optimizing feature selection by minimizing prediction error while retaining critical predictors through a penalty-based objective function. 
The generated maps reveal higher PM2.5 concentrations in autumn and winter compared to spring and summer, with northern and central areas showing the highest pollution levels. Full article
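The skill scores reported for the OA-LSTM model (RMSE of 3.01 µg/m³, R² of 0.88) follow the standard definitions. A minimal NumPy sketch of both metrics; the toy values are illustrative, not the paper's data:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error, as reported for the PM2.5 models."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Toy check with made-up PM2.5 observations vs. predictions (µg/m³)
obs = [30.0, 45.0, 25.0, 60.0]
pred = [32.0, 43.0, 27.0, 58.0]
print(round(rmse(obs, pred), 2))  # 2.0
print(round(r2(obs, pred), 3))    # 0.979
```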
24 pages, 6656 KiB  
Article
Large-Scale Stitching of Hyperspectral Remote Sensing Images Obtained from Spectral Scanning Spectrometers Mounted on Unmanned Aerial Vehicles
by Hong Liu, Bingliang Hu, Xingsong Hou, Tao Yu, Zhoufeng Zhang, Xiao Liu, Xueji Wang and Zhengxuan Tan
Electronics 2025, 14(3), 454; https://doi.org/10.3390/electronics14030454 - 23 Jan 2025
Abstract
To achieve large-scale stitching of the hyperspectral remote sensing images obtained by unmanned aerial vehicles (UAVs) equipped with an acousto-optic tunable filter spectrometer, this study proposes a method based on a feature fusion strategy and a seam-finding strategy using hyperspectral image classification. In the feature extraction stage, SuperPoint deep features from images in different spectral segments of the data cube were extracted and fused. The feature depth matcher, LightGlue, was employed for feature matching. During the data cube fusion stage, unsupervised K-means spectral classification was performed separately on the two hyperspectral data cubes. Subsequently, grayscale transformations were applied to the classified images. A dynamic programming method, based on a grayscale loss function, was then used to identify seams in the transformed images. Finally, the identified splicing seam was applied across all bands to produce a unified hyperspectral data cube. The proposed method was applied to hyperspectral data cubes acquired at specific waypoints by UAVs using an acousto-optic tunable filter spectral imager. Experimental results demonstrated that the proposed method outperformed both single-spectral-segment feature extraction methods and stitching methods that rely on seam identification from a single spectral segment. The improvement was evident in both the spatial and spectral dimensions. Full article
(This article belongs to the Special Issue New Challenges in Remote Sensing Image Processing)
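The seam search described above — dynamic programming over a loss defined on the grayscale-transformed classification images — can be sketched as a standard vertical-seam recurrence. The per-pixel loss array and the three-neighbour transition rule are assumptions; the abstract does not give the exact grayscale loss function.

```python
import numpy as np

def find_seam(loss):
    """Minimum-cost vertical seam through a per-pixel loss image.
    Each row extends the cheapest of the three paths ending just above."""
    h, w = loss.shape
    cost = loss.astype(float).copy()
    back = np.zeros((h, w), dtype=int)
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            k = lo + int(np.argmin(cost[i - 1, lo:hi]))
            back[i, j] = k
            cost[i, j] += cost[i - 1, k]
    seam = [int(np.argmin(cost[-1]))]   # cheapest end point in last row
    for i in range(h - 1, 0, -1):       # backtrack to the top row
        seam.append(int(back[i, seam[-1]]))
    return seam[::-1]                   # seam column index per row

# Toy overlap loss: the cheap diagonal is the obvious seam
loss = np.array([[1, 9, 9],
                 [9, 1, 9],
                 [9, 9, 1]])
print(find_seam(loss))  # [0, 1, 2]
```

In the paper's pipeline the seam, once found, is applied identically across all bands so the stitched data cube stays spectrally consistent.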

25 pages, 6944 KiB  
Article
Representation Learning of Multi-Spectral Earth Observation Time Series and Evaluation for Crop Type Classification
by Andrea González-Ramírez, Clement Atzberger, Deni Torres-Roman and Josué López
Remote Sens. 2025, 17(3), 378; https://doi.org/10.3390/rs17030378 - 23 Jan 2025
Abstract
Remote sensing (RS) spectral time series provide a substantial source of information for the regular and cost-efficient monitoring of the Earth’s surface. Important monitoring tasks include land use and land cover classification, change detection, forest monitoring and crop type identification, among others. To develop accurate solutions for RS-based applications, often supervised shallow/deep learning algorithms are used. However, such approaches usually require fixed-length inputs and large labeled datasets. Unfortunately, RS images acquired by optical sensors are frequently degraded by aerosol contamination, clouds and cloud shadows, resulting in missing observations and irregular observation patterns. To address these issues, efforts have been made to implement frameworks that generate meaningful representations from the irregularly sampled data streams and alleviate the deficiencies of the data sources and supervised algorithms. Here, we propose a conceptually and computationally simple representation learning (RL) approach based on autoencoders (AEs) to generate discriminative features for crop type classification. The proposed methodology includes a set of single-layer AEs with a very limited number of neurons, each one trained with the mono-temporal spectral features of a small set of samples belonging to a class, resulting in a model capable of processing very large areas in a short computational time. Importantly, the developed approach remains flexible with respect to the availability of clear temporal observations. The signal derived from the ensemble of AEs is the reconstruction difference vector between input samples and their corresponding estimations, which are averaged over all cloud-/shadow-free temporal observations of a pixel location. This averaged reconstruction difference vector is the base for the representations and the subsequent classification. 
Experimental results show that the proposed extremely lightweight architecture indeed generates separable features for competitive performances in crop type classification, as distance-metric scores achieved with the derived representations significantly outperform those obtained with the initial data. Conventional classification models were trained and tested with representations generated from a widely used Sentinel-2 multi-spectral multi-temporal dataset, BreizhCrops. Our method achieved 77.06% overall accuracy, which is 6% higher than that achieved using original Sentinel-2 data within conventional classifiers and even 4% better than complex deep models such as OmniScaleCNN. Compared to extremely complex and time-consuming models such as Transformer and long short-term memory (LSTM), only a 3% reduction in overall accuracy was noted. Our method uses only 6.8k parameters, i.e., 400× fewer than OmniScaleCNN and 27× fewer than Transformer. The results prove that our method is competitive in terms of classification performance compared with state-of-the-art methods while substantially reducing the computational load. Full article
(This article belongs to the Collection Sentinel-2: Science and Applications)
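The representation step lends itself to a compact sketch: one tiny AE per class, with each AE's reconstruction-difference vectors averaged over the clear observations and the averages concatenated into the pixel's feature. Layer sizes, the ReLU activation, and the random stand-in weights are assumptions; in the paper each AE is trained on mono-temporal spectra of its class.

```python
import numpy as np

rng = np.random.default_rng(0)

def ae_residual(x, W_enc, W_dec):
    """Reconstruction-difference vector of one single-layer AE
    (ReLU hidden layer; weights assumed already trained)."""
    h = np.maximum(W_enc @ x, 0.0)
    return x - W_dec @ h

def representation(series, clear_mask, aes):
    """Average each class-AE's residual over the cloud-/shadow-free
    observations of a pixel, then concatenate across classes."""
    clear = series[clear_mask]                       # (T_clear, bands)
    feats = [np.mean([ae_residual(x, We, Wd) for x in clear], axis=0)
             for We, Wd in aes]
    return np.concatenate(feats)

# Toy setup: 10 spectral bands, 6 dates (2 cloudy), 3 class-specific AEs
bands, hidden = 10, 4
aes = [(rng.normal(size=(hidden, bands)), rng.normal(size=(bands, hidden)))
       for _ in range(3)]
series = rng.normal(size=(6, bands))        # one pixel's time series
mask = np.array([1, 0, 1, 1, 0, 1], bool)   # True = clear observation
print(representation(series, mask, aes).shape)  # (30,)
```

Because the averaging runs over however many clear dates a pixel has, the feature length is independent of the observation pattern, which is what makes the approach robust to irregular sampling.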

31 pages, 6526 KiB  
Review
Remote Sensing Technology for Observing Tree Mortality and Its Influences on Carbon–Water Dynamics
by Mengying Ni, Qingquan Wu, Guiying Li and Dengqiu Li
Forests 2025, 16(2), 194; https://doi.org/10.3390/f16020194 - 21 Jan 2025
Abstract
Trees are indispensable to ecosystems, yet mortality rates have been increasing due to the abnormal changes in forest growth environments caused by frequent extreme weather events associated with global climate warming. Consequently, the need to monitor, assess, and predict tree mortality has become increasingly urgent to better address climate change and protect forest ecosystems. Over the past few decades, remote sensing has been widely applied to vegetation mortality observation due to its significant advantages. Here, we reviewed and analyzed the major research advancements in the application of remote sensing for tree mortality monitoring, using the Web of Science Core Collection database, covering the period from 1998 to the first half of 2024. We comprehensively summarized the use of different platforms (satellite and UAV) for data acquisition, the application of various sensors (multispectral, hyperspectral, and radar) as image data sources, the primary indicators, the classification models used in monitoring tree mortality, and the influences of tree mortality. Our findings indicated that satellite-based optical remote sensing data were the primary data source for tree mortality monitoring, accounting for 80% of existing studies. Time-series optical remote sensing data have emerged as a crucial direction for enhancing the accuracy of vegetation mortality monitoring. In recent years, studies utilizing airborne LiDAR have shown an increasing trend, accounting for 48% of UAV-based research. NDVI was the most commonly used remote sensing indicator, and most studies incorporated meteorological and climatic factors as environmental variables. Machine learning was increasingly favored for remote sensing data analysis, with Random Forest being the most widely used classification model. Research attention is increasingly directed at the impacts of tree mortality on water and carbon dynamics.
Finally, we discussed the challenges in monitoring and evaluating tree mortality through remote sensing and offered perspectives for future developments. Full article
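NDVI, the indicator most often used in the surveyed studies, is the normalized difference of near-infrared and red reflectance. A one-function sketch; the reflectance values are illustrative:

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """NDVI = (NIR - Red) / (NIR + Red); eps guards against division by zero."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + eps)

# Vigorous canopy vs. declining canopy (illustrative reflectances)
print(round(float(ndvi(0.45, 0.05)), 3))  # 0.8
print(round(float(ndvi(0.30, 0.25)), 3))  # 0.091
```

Dying trees lose chlorophyll absorption in the red and leaf-structure scattering in the NIR, so mortality shows up as a drop in NDVI over a time series.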

16 pages, 4518 KiB  
Article
Inversion of Aerosol Chemical Composition in the Beijing–Tianjin–Hebei Region Using a Machine Learning Algorithm
by Baojiang Li, Gang Cheng, Chunlin Shang, Ruirui Si, Zhenping Shao, Pu Zhang, Wenyu Zhang and Lingbin Kong
Atmosphere 2025, 16(2), 114; https://doi.org/10.3390/atmos16020114 - 21 Jan 2025
Abstract
Aerosols and their chemical composition exert an influence on the atmospheric environment, global climate, and human health. However, obtaining the chemical composition of aerosols with high spatial and temporal resolution remains a challenging issue. In this study, using the NR-PM1 collected in the Beijing area from 2012 to 2013, we found that the annual average concentration was 41.32 μg·m−3, with the largest percentage of organics accounting for 49.3% of NR-PM1, followed by nitrates, sulfates, and ammonium. We then established models of aerosol chemical composition based on a machine learning algorithm. By comparing the inversion accuracies of single models—namely MLR (Multivariable Linear Regression) model, SVR (Support Vector Regression) model, RF (Random Forest) model, KNN (K-Nearest Neighbor) model, and LightGBM (Light Gradient Boosting Machine)—with that of the combined model (CM) after selecting the optimal model, we found that although the accuracy of the KNN model was the highest among the other single models, the accuracy of the CM model was higher. By employing the CM model to the spatially and temporally matched AOD (aerosol optical depth) data and meteorological data of the Beijing–Tianjin–Hebei region, the spatial distribution of the annual average concentrations of the four components was obtained. The areas with higher concentrations are mainly situated in the southwest of Beijing, and the annual average concentrations of the four components in Beijing’s southwest are 28 μg·m−3, 7 μg·m−3, 8 μg·m−3, and 15 μg·m−3 for organics, sulfates, ammonium, and nitrates, respectively. This study not only provides new methodological ideas for obtaining aerosol chemical composition concentrations based on satellite remote sensing data but also provides a data foundation and theoretical support for the formulation of atmospheric pollution prevention and control policies. Full article
(This article belongs to the Special Issue Atmospheric Pollution in Highly Polluted Areas)
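The abstract does not specify how the combined model (CM) merges the five single learners; one common construction, shown here purely as an assumption, weights each model's prediction by its inverse validation RMSE:

```python
import numpy as np

def combine(preds, val_rmse):
    """Inverse-RMSE weighted average of single-model predictions.
    (The paper's exact CM construction is not given in the abstract;
    this weighting scheme is an illustrative assumption.)"""
    w = 1.0 / np.asarray(val_rmse, float)
    w /= w.sum()                       # normalize weights to sum to 1
    return np.asarray(preds, float).T @ w

# Three hypothetical models predicting organics concentration (µg/m³)
# at two locations; the middle model has the best validation RMSE.
preds = [[40.0, 50.0], [44.0, 46.0], [42.0, 48.0]]
print(combine(preds, [2.0, 1.0, 2.0]))  # [42.5 47.5]
```

Any weighting of this kind lets the most accurate single model (here, standing in for KNN) dominate while still damping its idiosyncratic errors, which is consistent with the CM outperforming every individual learner.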

22 pages, 3956 KiB  
Article
Progressive Self-Prompting Segment Anything Model for Salient Object Detection in Optical Remote Sensing Images
by Xiaoning Zhang, Yi Yu, Daqun Li and Yuqing Wang
Remote Sens. 2025, 17(2), 342; https://doi.org/10.3390/rs17020342 - 20 Jan 2025
Abstract
With the continuous advancement of deep neural networks, salient object detection (SOD) in natural images has made significant progress. However, SOD in optical remote sensing images (ORSI-SOD) remains a challenging task due to the diversity of objects and the complexity of backgrounds. The primary challenge lies in generating robust features that can effectively integrate both global semantic information for salient object localization and local spatial details for boundary reconstruction. Most existing ORSI-SOD methods rely on pre-trained CNN- or Transformer-based backbones to extract features from ORSIs, followed by multi-level feature aggregation. Given the significant differences between ORSIs and the natural images used in pre-training, the generalization capability of these backbone networks is often limited, resulting in suboptimal performance. Recently, prompt engineering has been employed to enhance the generalization ability of networks in the Segment Anything Model (SAM), an emerging vision foundation model that has achieved remarkable success across various tasks. Despite its success, directly applying the SAM to ORSI-SOD without prompts from manual interaction remains unsatisfactory. In this paper, we propose a novel progressive self-prompting model based on the SAM, termed PSP-SAM, which generates both internal and external prompts to enhance the network and overcome the limitations of SAM in ORSI-SOD. Specifically, domain-specific prompting modules, consisting of both block-shared and block-specific adapters, are integrated into the network to learn domain-specific visual prompts within the backbone, facilitating its adaptation to ORSI-SOD. Furthermore, we introduce a progressive self-prompting decoder module that performs prompt-guided multi-level feature integration and generates stage-wise mask prompts progressively, enabling the prompt-based mask decoders outside the backbone to predict saliency maps in a coarse-to-fine manner. 
The entire network is trained end-to-end with parameter-efficient fine-tuning. Extensive experiments on three benchmark ORSI-SOD datasets demonstrate that our proposed network achieves state-of-the-art performance. Full article
(This article belongs to the Section Remote Sensing Image Processing)

19 pages, 137082 KiB  
Article
Classification and Monitoring of Salt Marsh Vegetation in the Yellow River Delta Based on Multi-Source Remote Sensing Data Fusion
by Ran Xu, Yanguo Fan, Bowen Fan, Guangyue Feng and Ruotong Li
Sensors 2025, 25(2), 529; https://doi.org/10.3390/s25020529 - 17 Jan 2025
Abstract
Salt marsh vegetation in the Yellow River Delta, including Phragmites australis (P. australis), Suaeda salsa (S. salsa), and Tamarix chinensis (T. chinensis), is essential for the stability of wetland ecosystems. In recent years, salt marsh vegetation has experienced severe degradation, which is primarily due to invasive species and human activities. Therefore, the accurate monitoring of the spatial distribution of these vegetation types is critical for the ecological protection and restoration of the Yellow River Delta. This study proposes a multi-source remote sensing data fusion method based on Sentinel-1 and Sentinel-2 imagery, integrating the temporal characteristics of optical and SAR (synthetic aperture radar) data for the classification mapping of salt marsh vegetation in the Yellow River Delta. Phenological and polarization features were extracted to capture vegetation characteristics. A random forest algorithm was then applied to evaluate the impact of different feature combinations on classification accuracy. Combining optical and SAR time-series data significantly enhanced classification accuracy, particularly in differentiating P. australis, S. salsa, and T. chinensis. The integration of phenological features, polarization ratio, and polarization difference achieved a classification accuracy of 93.51% with a Kappa coefficient of 0.917, outperforming the use of individual data sources. Full article
(This article belongs to the Section Remote Sensors)
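The polarization features named above can be computed directly from Sentinel-1 dual-pol backscatter. The ratio direction (VH/VV) and linear-power units are assumptions; the abstract does not fix either convention:

```python
import numpy as np

def sar_features(vv, vh, eps=1e-6):
    """Polarization ratio (VH/VV) and difference (VV - VH) from
    Sentinel-1 backscatter in linear power units; eps avoids
    division by zero over very low-backscatter surfaces (e.g. water)."""
    vv, vh = np.asarray(vv, float), np.asarray(vh, float)
    ratio = vh / (vv + eps)
    diff = vv - vh
    return np.stack([ratio, diff], axis=-1)  # (..., 2) feature array

# Two illustrative pixels (e.g. S. salsa marsh vs. P. australis reed bed)
print(sar_features([0.08, 0.12], [0.02, 0.03]))
```

Stacking these per-date SAR features with the optical phenological features gives the combined input on which the random forest reached 93.51% accuracy.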

18 pages, 52971 KiB  
Article
Frequent Glacial Hazard Deformation Detection Based on POT-SBAS InSAR in the Sedongpu Basin in the Himalayan Region
by Haoliang Li, Yinghui Yang, Xiujun Dong, Qiang Xu, Pengfei Li, Jingjing Zhao, Qiang Chen and Jyr-Ching Hu
Remote Sens. 2025, 17(2), 319; https://doi.org/10.3390/rs17020319 - 17 Jan 2025
Abstract
The Sedongpu Basin is characterized by frequent glacial debris movements and glacial hazards. To accurately monitor and research these glacier hazards, Sentinel-1 Synthetic Aperture Radar images observed between 2014 and 2022 were collected to extract surface motion using SBAS-POT technology. The acquired temporal surface deformation and multiple optical remote sensing images were then jointly used to analyze the characteristics of the long-term glacier movement in the Sedongpu Basin. Furthermore, historical meteorological and seismic data were collected to analyze the mechanisms of multiple ice avalanche chain hazards. It was found that abnormal deformation signals of glaciers SDP1 and SDP2 could be linked to the historical ice avalanche disaster that occurred around the Sedongpu Basin. The maximum deformation rate of SDP1 was 74 m/a and the slope cumulative deformation exceeded 500 m during the monitoring period from 2014 to 2022, which is still in active motion at present; for SDP2, a cumulative deformation of more than 300 m was also detected over the monitoring period. Glaciers SDP3, SDP4, and SDP5 have been relatively stable until now; however, ice cracks are well developed in SDP4 and SDP5, and ice avalanche events may occur if these ice cracks continue to expand under extreme natural conditions in the future. Therefore, this paper emphasizes the seriousness of the ice avalanche event in Sedongpu Basin and provides data support for local disaster management and disaster prevention and reduction. Full article

32 pages, 6342 KiB  
Article
Statewide Forest Canopy Cover Mapping of Florida Using Synergistic Integration of Spaceborne LiDAR, SAR, and Optical Imagery
by Monique Bohora Schlickmann, Inacio Thomaz Bueno, Denis Valle, William M. Hammond, Susan J. Prichard, Andrew T. Hudak, Carine Klauberg, Mauro Alessandro Karasinski, Kody Melissa Brock, Kleydson Diego Rocha, Jinyi Xia, Rodrigo Vieira Leite, Pedro Higuchi, Ana Carolina da Silva, Gabriel Maximo da Silva, Gina R. Cova and Carlos Alberto Silva
Remote Sens. 2025, 17(2), 320; https://doi.org/10.3390/rs17020320 - 17 Jan 2025
Abstract
Southern U.S. forests are essential for carbon storage and timber production but are increasingly impacted by natural disturbances, highlighting the need to understand their dynamics and recovery. Canopy cover is a key indicator of forest health and resilience. Advances in remote sensing, such as NASA’s GEDI spaceborne LiDAR, enable more precise mapping of canopy cover. Although GEDI provides accurate data, its limited spatial coverage restricts large-scale assessments. To address this, we combined GEDI with Synthetic Aperture Radar (SAR), and optical imagery (Sentinel-1 GRD and Landsat–Sentinel Harmonized (HLS)) data to create a comprehensive canopy cover map for Florida. Using a random forest algorithm, our model achieved an R2 of 0.69, RMSD of 0.17, and MD of 0.001, based on out-of-bag samples for internal validation. Geographic coordinates and the red spectral channel emerged as the most influential predictors. External validation with airborne laser scanning (ALS) data across three sites yielded an R2 of 0.70, RMSD of 0.29, and MD of −0.22, confirming the model’s accuracy and robustness in unseen areas. Statewide analysis showed lower canopy cover in southern versus northern Florida, with wetland forests exhibiting higher cover than upland sites. This study demonstrates the potential of integrating multiple remote sensing datasets to produce accurate vegetation maps, supporting forest management and sustainability efforts in Florida. Full article
(This article belongs to the Section Environmental Remote Sensing)

18 pages, 1410 KiB  
Article
Polarization Scattering Regions: A Useful Tool for Polarization Characteristic Description
by Jiankai Huang, Jiapeng Yin, Zhiming Xu and Yongzhen Li
Remote Sens. 2025, 17(2), 306; https://doi.org/10.3390/rs17020306 - 16 Jan 2025
Abstract
Polarimetric radar systems play a crucial role in enhancing microwave remote sensing and target identification by providing a refined understanding of electromagnetic scattering mechanisms. This study introduces the concept of polarization scattering regions as a novel tool for describing the polarization characteristics across three spectral regions: the polarization Rayleigh region, the polarization resonance region, and the polarization optical region. By using ellipsoidal models, we simulate and analyze scattering across varying electrical sizes, demonstrating how these sizes influence polarization characteristics. The research leverages Cameron decomposition to reveal the distinctive scattering behaviors within each region, illustrating that at higher-frequency bands, scattering approximates spherical symmetry, with minimal impact from the target shape. This classification provides a comprehensive view of polarization-based radar cross-section regions, expanding upon traditional single-polarization radar cross-section regions. The results show that polarization scattering regions are practical tools for interpreting polarimetric radar data across diverse frequency bands. The applications of this research in radar target recognition, weather radar calibration, and radar polarimetry are discussed, highlighting the importance of frequency selection for accurately capturing polarization scattering features. These findings have significant implications for advancing weather radar technology and target recognition techniques, particularly as radar systems move towards higher frequency bands. Full article

29 pages, 19709 KiB  
Article
Surveying Nearshore Bathymetry Using Multispectral and Hyperspectral Satellite Imagery and Machine Learning
by David Hartmann, Mathieu Gravey, Timothy David Price, Wiebe Nijland and Steven Michael de Jong
Remote Sens. 2025, 17(2), 291; https://doi.org/10.3390/rs17020291 - 15 Jan 2025
Abstract
Nearshore bathymetric data are essential for assessing coastal hazards, studying benthic habitats and for coastal engineering. Traditional bathymetry mapping techniques of ship-sounding and airborne LiDAR are laborious, expensive and not always efficient. Multispectral and hyperspectral remote sensing, in combination with machine learning techniques, are gaining interest. Here, the nearshore bathymetry of southwest Puerto Rico is estimated with multispectral Sentinel-2 and hyperspectral PRISMA imagery using conventional spectral band ratio models and more advanced XGBoost models and convolutional neural networks. The U-Net, trained on 49 Sentinel-2 images, and the 2D-3D CNN, trained on PRISMA imagery, had a Mean Absolute Error (MAE) of approximately 1 m for depths up to 20 m and were superior to band ratio models by ~40%. Problems with underprediction remain for turbid waters. Sentinel-2 showed higher performance than PRISMA up to 20 m (~18% lower MAE), attributed to training with a larger number of images and employing an ensemble prediction, while PRISMA outperformed Sentinel-2 for depths between 25 m and 30 m (~19% lower MAE). Sentinel-2 imagery is recommended over PRISMA imagery for estimating shallow bathymetry given its similar performance, much higher image availability and easier handling. Future studies are recommended to train neural networks with images from various regions to increase generalization and method portability. Models are preferably trained by area-segregated splits to ensure independence between the training and testing set. Using a random train test split for bathymetry is not recommended due to spatial autocorrelation of sea depth, resulting in data leakage. This study demonstrates the high potential of machine learning models for assessing the bathymetry of optically shallow waters using optical satellite imagery. Full article
(This article belongs to the Section Environmental Remote Sensing)
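The "conventional spectral band ratio models" used as the baseline are typically of the Stumpf log-ratio form, in which depth varies linearly with the ratio of log-transformed blue and green reflectances. The coefficients and reflectance values below are illustrative assumptions; in practice m1 and m0 are fitted against control depths from sonar or LiDAR:

```python
import numpy as np

def log_ratio_depth(blue, green, m1, m0, n=1000.0):
    """Stumpf-style band-ratio depth estimate:
    z = m1 * ln(n * R_blue) / ln(n * R_green) - m0.
    n is a fixed scaling constant keeping both logarithms positive."""
    blue, green = np.asarray(blue, float), np.asarray(green, float)
    return m1 * np.log(n * blue) / np.log(n * green) - m0

# Hypothetical water-leaving reflectances for a shallow and a deeper pixel
blue = np.array([0.012, 0.020])
green = np.array([0.015, 0.018])
print(log_ratio_depth(blue, green, m1=30.0, m0=25.0))
```

Because blue light attenuates more slowly than green, the log ratio grows with depth; the neural networks in the study replace this two-parameter linear mapping with a learned, spatially aware one, which is where the ~40% MAE improvement comes from.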

27 pages, 5381 KiB  
Article
Synthesizing Local Capacities, Multi-Source Remote Sensing and Meta-Learning to Optimize Forest Carbon Assessment in Data-Poor Regions
by Kamaldeen Mohammed, Daniel Kpienbaareh, Jinfei Wang, David Goldblum, Isaac Luginaah, Esther Lupafya and Laifolo Dakishoni
Remote Sens. 2025, 17(2), 289; https://doi.org/10.3390/rs17020289 - 15 Jan 2025
Abstract
As the climate emergency escalates, the role of forests in carbon sequestration is paramount. This paper proposes a framework that integrates local capacities, multi-source remote sensing data, and meta-learning to enhance forest carbon assessment methodologies in data-scarce regions. By integrating multi-source optical and radar remote sensing data alongside community forest inventories, we applied a meta-modelling approach using a stacked generalization ensemble to estimate forest above-ground carbon (AGC). We also conducted a Kruskal–Wallis test to determine significant differences in AGC among different tree species. The Kruskal–Wallis test (p = 1.37 × 10⁻¹³) and Dunn post-hoc analysis revealed significant differences in carbon stock potential among tree species, with Afzelia quanzensis (x̃ = 12 kg/ha, P-holm-adj. = 0.05) and the locally known species M’buta (x̃ = 6 kg/ha, P-holm-adj. = 5.45 × 10⁻⁹) exhibiting a significantly higher median AGC. Our results further showed that combining optical and radar remote sensing data substantially improved prediction accuracy compared to single-source remote sensing data. To improve forest carbon assessment, we employed stacked generalization, combining multiple machine learning algorithms to leverage their complementary strengths and address individual limitations. This ensemble approach yielded more robust estimates than conventional methods. Notably, a stacking ensemble of support vector machines and random forest achieved the highest accuracy (R² = 0.84, RMSE = 1.36), followed by an ensemble of all base learners (R² = 0.83, RMSE = 1.39). Additionally, our results demonstrate that factors such as the diversity of base learners and the sensitivity of meta-learners to optimization can influence stacking performance. Full article
(This article belongs to the Special Issue Remote Sensing in Environmental Modelling)
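The stacked generalization scheme named in the abstract (support vector machine and random forest base learners combined by a meta-learner) can be sketched with scikit-learn's `StackingRegressor`. This is a minimal illustration on synthetic data only; the predictors, sample size, and hyperparameters below are hypothetical and are not those used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Hypothetical predictors (e.g. optical indices and radar backscatter features)
X = rng.normal(size=(300, 4))
y = X @ np.array([2.0, -1.0, 0.5, 1.5]) + rng.normal(scale=0.5, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# SVM + random forest base learners; a simple linear meta-learner stacks
# their out-of-fold predictions.
stack = StackingRegressor(
    estimators=[("svr", SVR(C=10.0)),
                ("rf", RandomForestRegressor(random_state=0))],
    final_estimator=Ridge(),
)
stack.fit(X_tr, y_tr)
pred = stack.predict(X_te)
print("R2:", round(r2_score(y_te, pred), 3))
print("RMSE:", round(mean_squared_error(y_te, pred) ** 0.5, 3))
```

By default, `StackingRegressor` trains the meta-learner on cross-validated predictions of the base learners, which is what guards the ensemble against simply memorizing the strongest base model.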

26 pages, 393 KiB  
Review
Monitoring Yield and Quality of Forages and Grassland in the View of Precision Agriculture Applications—A Review
by Abid Ali and Hans-Peter Kaul
Remote Sens. 2025, 17(2), 279; https://doi.org/10.3390/rs17020279 - 15 Jan 2025
Viewed by 864
Abstract
The potential of precision agriculture (PA) in forage and grassland management should be more extensively exploited to meet the increasing global food demand on a sustainable basis. Monitoring biomass yield and quality traits directly impacts fertilization and irrigation practices and the frequency of utilization (cuts) in grasslands. Therefore, the main goal of this review is to examine techniques for using PA applications to monitor productivity and quality in forages and grasslands. To achieve this, the authors discuss several monitoring technologies for biomass and plant stand characteristics (including quality) that make it possible to adopt digital farming in forage and grassland management. The review provides an overview of mass flow and impact sensors, moisture sensors, remote sensing-based approaches, near-infrared (NIR) spectroscopy, and the mapping of field heterogeneity, and promotes decision support systems (DSSs) in this field. At a small scale, advanced sensors such as optical, thermal, and radar sensors mountable on drones, LiDAR (Light Detection and Ranging), and hyperspectral imaging techniques can be used for assessing plant and soil characteristics. At a larger scale, we discuss the coupling of remote sensing with weather data (synergistic grassland yield modelling), Sentinel-2 data with radiative transfer modelling (RTM), Sentinel-1 backscatter, and CatBoost machine learning methods for digital mapping in terms of precision harvesting and site-specific farming decisions. The delineation of sward heterogeneity is known to be more difficult in mixed grasslands due to spectral similarity among species; Diversity-Interactions models make it possible to assess various species interactions jointly in such mixed swards. Further, understanding such complex sward heterogeneity might become feasible by integrating spectral un-mixing techniques such as super-pixel segmentation, multi-level fusion procedures, and NIR spectroscopy combined with neural network models. This review offers a digital pathway for enhancing yield monitoring systems and implementing PA applications in forage and grassland management. The authors recommend that future research also address the costs and economic returns of digital technologies for precision grassland and fodder production.
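Many of the satellite- and drone-based biomass monitoring approaches surveyed above build on spectral vegetation indices. As a minimal illustration (an assumption for context, not a method taken from the review), the widely used NDVI can be computed from near-infrared and red reflectance, e.g. Sentinel-2 bands 8 and 4:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red reflectance.

    eps avoids division by zero over dark pixels (water, shadow).
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Hypothetical 2x2 reflectance patches (Sentinel-2: B8 = NIR, B4 = red)
nir = np.array([[0.45, 0.50], [0.40, 0.48]])
red = np.array([[0.08, 0.10], [0.12, 0.09]])
print(ndvi(nir, red))
```

Dense green canopies push NDVI toward 1, bare soil toward 0; time series of such index maps are a common input to the yield models discussed above.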

24 pages, 9871 KiB  
Article
AIR-POLSAR-CR1.0: A Benchmark Dataset for Cloud Removal in High-Resolution Optical Remote Sensing Images with Fully Polarized SAR
by Yuxi Wang, Wenjuan Zhang, Jie Pan, Wen Jiang, Fangyan Yuan, Bo Zhang, Xijuan Yue and Bing Zhang
Remote Sens. 2025, 17(2), 275; https://doi.org/10.3390/rs17020275 - 14 Jan 2025
Viewed by 400
Abstract
Due to the all-time, all-weather imaging capability of synthetic aperture radar (SAR), SAR data have become an important input for optical image restoration, and various SAR-optical cloud removal datasets have been proposed. Currently, multi-source cloud removal datasets are typically constructed from single-polarization or dual-polarization backscatter SAR feature images, which lack a comprehensive description of target scattering information and polarization characteristics. This paper constructs a high-resolution remote sensing dataset, AIR-POLSAR-CR1.0, comprising optical images, backscatter feature images, and polarization feature images derived from fully polarimetric synthetic aperture radar (PolSAR) data. The dataset has been manually annotated to provide a foundation for subsequent analyses and processing. Finally, this study analyzes the performance of typical deep learning cloud removal algorithms on the proposed dataset across different categories and cloud coverage levels, providing baseline results for the benchmark. The results of the ablation experiments also demonstrate the effectiveness of the PolSAR data. In summary, AIR-POLSAR-CR1.0 fills the gap in polarization feature images and is well suited to the development of deep learning algorithms.
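Baseline results on cloud removal benchmarks of this kind are commonly scored with image reconstruction metrics. The abstract does not name the metrics used, so the PSNR sketch below is an illustrative assumption rather than the paper's evaluation code:

```python
import numpy as np

def psnr(reference, restored, max_val=1.0):
    """Peak signal-to-noise ratio between a cloud-free reference image
    and a restored (de-clouded) image, in dB."""
    ref = np.asarray(reference, dtype=float)
    res = np.asarray(restored, dtype=float)
    mse = np.mean((ref - res) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Hypothetical normalized image patch and a noisy "restoration" of it
rng = np.random.default_rng(1)
clean = rng.random((64, 64))
restored = np.clip(clean + rng.normal(scale=0.05, size=clean.shape), 0, 1)
print(f"PSNR: {psnr(clean, restored):.1f} dB")
```

Higher PSNR means the restored optical image is closer to the cloud-free reference; structural metrics such as SSIM are often reported alongside it.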

22 pages, 21875 KiB  
Article
Inclined Aerial Image and Satellite Image Matching Based on Edge Curve Direction Angle Features
by Hao Wang, Chongyang Liu, Yalin Ding, Chao Sun, Guoqin Yuan and Hongwen Zhang
Remote Sens. 2025, 17(2), 268; https://doi.org/10.3390/rs17020268 - 13 Jan 2025
Viewed by 375
Abstract
Optical remote sensing images are easily affected by atmospheric absorption and scattering; the low contrast and low signal-to-noise ratio (SNR) of aerial images, as well as the different sensors used for aerial and satellite imaging, make image matching highly challenging. A matching algorithm for tilted aerial images and satellite images based on edge curve direction angle features (ECDAF) is proposed, which accomplishes matching by extracting the edge features of the images and constructing curve direction angle feature descriptors. First, tilt and resolution transforms are applied to the satellite image, and edge detection and contour extraction are performed on the aerial image and the transformed satellite image in preparation for matching. Then, corner points are detected and feature descriptors are constructed from the edge curve direction angles. Finally, an integrated matching similarity is computed to realize aerial–satellite image matching. Experiments on a variety of remote sensing datasets covering forest, hill, farmland, and lake scenes demonstrate that the proposed algorithm substantially outperforms existing state-of-the-art algorithms.
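The core ingredient named above, the direction angle along an edge curve, can be approximated by finite differences between consecutive contour points. A minimal sketch follows (the sampled contour is hypothetical, and the paper's full descriptor construction and similarity measure are not reproduced here):

```python
import numpy as np

def curve_direction_angles(points):
    """Direction angle (radians) of each segment along an edge curve,
    approximated from consecutive (x, y) contour points."""
    pts = np.asarray(points, dtype=float)
    d = np.diff(pts, axis=0)             # segment vectors along the curve
    return np.arctan2(d[:, 1], d[:, 0])  # angle of each segment vs. x-axis

# Hypothetical contour: a quarter circle sampled at 1-degree steps
t = np.linspace(0, np.pi / 2, 91)
contour = np.column_stack([np.cos(t), np.sin(t)])
angles = curve_direction_angles(contour)
# The tangent direction rotates smoothly from ~90 degrees toward ~180 degrees
print(np.degrees(angles[[0, -1]]).round(1))
```

Because the angle sequence describes the curve's shape rather than its absolute position, descriptors built from it can stay comparable across images with different viewpoints and sensors, which is the property the ECDAF approach exploits.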
