Search Results (8)

Search Parameters:
Keywords = DIRSIG

33 pages, 15168 KiB  
Article
Exploring the Limits of Species Identification via a Convolutional Neural Network in a Complex Forest Scene through Simulated Imaging Spectroscopy
by Manisha Das Chaity and Jan van Aardt
Remote Sens. 2024, 16(3), 498; https://doi.org/10.3390/rs16030498 - 28 Jan 2024
Cited by 1 | Viewed by 1736
Abstract
Imaging spectroscopy (hyperspectral sensing) is a proven tool for mapping and monitoring the spatial distribution of vegetation species composition. However, high-resolution spatial and spectral imagery for accurate tree species mapping remains scarce, particularly in complex forest environments, despite continuous advancements in operational remote sensing and field sensor technologies. Here, we aim to bridge this gap by enhancing our fundamental understanding of imaging spectrometers via complex simulated environments. We used DIRSIG, a physics-based, first-principles simulation approach, to model canopy-level reflectance for 3D plant models and species-level leaf reflectance in a synthetic forest scene. We simulated a realistic scene based on the species composition found at Harvard Forest, MA (USA). Our simulation approach allowed us to better understand the interplay between instrument parameters and landscape characteristics, and facilitated comprehensive traceability of error budgets. To enhance our understanding of the impact of sensor design on classification performance, we simulated image samples at different spatial, spectral, and scale resolutions (by modifying the pixel pitch and the total number of pixels in the sensor array, i.e., the focal plane dimension) of the imaging sensor and assessed the performance of a deep learning-based convolutional neural network (CNN) and a traditional machine learning classifier, support vector machines (SVMs), in classifying vegetation species. Overall, across all resolutions and species mixtures, the highest classification accuracy varied widely from 50 to 84%, and the number of genus-level species classes identified ranged from 2 to 17, among 24 classes. Harnessing this simulation approach has provided us with valuable insights into sensor configurations and the optimization of data collection methodologies to improve the interpretation of spectral signatures for accurate tree species mapping in forest scenes. Note that we used species classification as a proxy for a host of imaging spectroscopy applications; the approach can be extended to other ecological scenarios, such as evaluating changing ecosystem composition, detecting invasive species, or observing the effects of climate change on ecosystem diversity.
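
As a rough illustration of the classifier comparison described above, here is a minimal sketch of the SVM baseline applied to per-pixel spectra. The spectra, labels, and hyperparameters are placeholders (random arrays standing in for DIRSIG-simulated image samples), not the authors' actual pipeline:

```python
# Minimal sketch: per-pixel species classification with an SVM baseline.
# Spectra and labels are random placeholders, not DIRSIG output.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_pixels, n_bands, n_classes = 2400, 200, 24        # 24 genus-level classes, per the abstract
X = rng.normal(size=(n_pixels, n_bands))            # placeholder reflectance spectra
y = rng.integers(0, n_classes, size=n_pixels)       # placeholder species labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)              # fit scaling on training spectra only
clf = SVC(kernel="rbf", C=10.0, gamma="scale")      # illustrative hyperparameters
clf.fit(scaler.transform(X_train), y_train)
pred = clf.predict(scaler.transform(X_test))
print(f"placeholder accuracy: {accuracy_score(y_test, pred):.2f}")
```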

21 pages, 11843 KiB  
Article
Vehicle Detection and Attribution from a Multi-Sensor Dataset Using a Rule-Based Approach Combined with Data Fusion
by Lindsey A. Bowman, Ram M. Narayanan, Timothy J. Kane, Eliza S. Bradley and Matthew S. Baran
Sensors 2023, 23(21), 8811; https://doi.org/10.3390/s23218811 - 30 Oct 2023
Cited by 2 | Viewed by 1637
Abstract
Vehicle detection using data fusion techniques from overhead platforms (RGB/MSI imagery and LiDAR point clouds) with vector and shape data can be a powerful tool in a variety of fields, including, but not limited to, national security, disaster relief efforts, and traffic monitoring. Knowing the location and number of vehicles in a given area can provide insight into the surrounding activities and patterns of life, as well as support decision-making processes. While researchers have developed many approaches to tackling this problem, few have exploited the multi-data approach with a classical technique. In this paper, a primarily LiDAR-based method supported by RGB/MSI imagery and road network shapefiles has been developed to detect stationary vehicles. The addition of imagery and road networks, when available, offers an improved classification of points from LiDAR data and helps to reduce false positives. Furthermore, detected vehicles can be assigned various 3D, relational, and spectral attributes, as well as height profiles. This method was evaluated on the Houston, TX dataset provided by the IEEE 2018 GRSS Data Fusion Contest, which includes 1476 ground-truth vehicles from LiDAR data. On this dataset, the algorithm achieved 92% precision and 92% recall. It was also evaluated on the Vaihingen, Germany dataset provided by ISPRS, as well as on data simulated using an image generation model called DIRSIG. Known limitations of the algorithm include false positives caused by low vegetation and the inability to detect vehicles (1) in extremely close proximity with high precision and (2) from low-density point clouds.
(This article belongs to the Section Remote Sensors)
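
To make the rule-based idea concrete, here is a minimal sketch of the kind of filtering such a method might apply: keep LiDAR returns in a plausible vehicle height band, reject likely vegetation with an NDVI mask, and restrict candidates to a road buffer. The arrays and thresholds are illustrative assumptions, not the paper's actual rules or data:

```python
# Minimal rule-based vehicle filter sketch; all data and thresholds are placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
height_agl = rng.uniform(0.0, 20.0, n)        # height above ground (m) per LiDAR return
red = rng.uniform(0.0, 1.0, n)                # red reflectance sampled at each return
nir = rng.uniform(0.0, 1.0, n)                # near-infrared reflectance
near_road = rng.random(n) < 0.3               # True if the return falls in a buffered road mask

ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)

is_vehicle_like = (
    (height_agl > 0.5) & (height_agl < 3.0)   # typical car/truck height band
    & (ndvi < 0.3)                            # reject low vegetation (a known false-positive source)
    & near_road                               # restrict to the road network, when available
)
print(f"{is_vehicle_like.sum()} candidate vehicle returns out of {n}")
```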

13 pages, 2592 KiB  
Technical Note
Image Collection Simulation Using High-Resolution Atmospheric Modeling
by Andrew Kalukin, Satoshi Endo, Russell Crook, Manoj Mahajan, Robert Fennimore, Alice Cialella, Laurie Gregory, Shinjae Yoo, Wei Xu and Daniel Cisek
Remote Sens. 2020, 12(19), 3214; https://doi.org/10.3390/rs12193214 - 1 Oct 2020
Cited by 1 | Viewed by 4080
Abstract
A new method is described for simulating the passive remote sensing image collection of ground targets that includes effects from atmospheric physics and dynamics at fine spatial and temporal scales. The innovation in this research is the process of combining a high-resolution weather model with image collection simulation to attempt to account for heterogeneous, high-resolution atmospheric effects on image products. The atmosphere was modeled on a 3D voxel grid by a Large-Eddy Simulation (LES) driven by forcing data constrained by local ground-based and air-based observations. The spatial scale of the atmospheric model (10–100 m) came far closer than conventional weather forecast scales (10–100 km) to the scale of typical commercial multispectral imagery (2 m). This approach was demonstrated through a ground truth experiment conducted at the Department of Energy Atmospheric Radiation Measurement Southern Great Plains site. In this experiment, calibrated targets (colored spectral tarps) were placed on the ground, and the scene was imaged with WorldView-3 multispectral imagery at a resolution enabling the tarps to be visible in at least 9–12 image pixels. The image collection was simulated with Digital Imaging and Remote Sensing Image Generation (DIRSIG) software, using the 3D atmosphere from the LES model to generate a high-resolution cloud mask. Cloud coverage predicted by the high-resolution atmospheric model was usually within 23% of the measured cloud cover. The simulated image products were comparable to the WorldView-3 satellite imagery in terms of the variations of cloud distributions and spectral properties of the ground targets in clear-sky regions, suggesting the potential utility of the proposed modeling framework in improving simulation capabilities, as well as in testing and improving the operation of image collection processes.
(This article belongs to the Special Issue Feature Papers of Section Atmosphere Remote Sensing)
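
As a minimal sketch of the cloud-mask comparison described above, the snippet below thresholds a placeholder liquid-water-path field into a binary cloud mask and compares its cloud fraction against an observed value. The field, threshold, and observed fraction are assumptions for illustration only:

```python
# Derive a binary cloud mask from a placeholder LES-like liquid-water-path field
# and compare cloud fractions; values are illustrative, not from the experiment.
import numpy as np

rng = np.random.default_rng(2)
lwp = rng.gamma(shape=0.5, scale=20.0, size=(512, 512))  # liquid water path (g/m^2), placeholder field
cloud_mask = lwp > 5.0                                   # illustrative LWP threshold for "cloudy"

simulated_fraction = cloud_mask.mean()
observed_fraction = 0.40                                 # placeholder ground-based estimate
print(f"simulated cloud fraction: {simulated_fraction:.2f}")
print(f"absolute difference vs. observed: {abs(simulated_fraction - observed_fraction):.2f}")
```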

34 pages, 9181 KiB  
Article
Simulations of Leaf BSDF Effects on Lidar Waveforms
by Benjamin D. Roth, Adam A. Goodenough, Scott D. Brown, Jan A. van Aardt, M. Grady Saunders and Keith Krause
Remote Sens. 2020, 12(18), 2909; https://doi.org/10.3390/rs12182909 - 8 Sep 2020
Cited by 4 | Viewed by 3533
Abstract
Establishing linkages between light detection and ranging (lidar) data, produced by interrogating forest canopies, and the highly complex forest structures, composition, and traits that such forests contain remains an extremely difficult problem. Radiative transfer models have been developed to help solve this problem and to test new sensor platforms in a virtual environment. Many forest canopy studies make the major assumption of isotropically (Lambertian) reflecting and transmitting leaves, or of non-transmitting leaves. Here, we study when these assumptions may be valid and evaluate their effects on the lidar waveform, as well as the dependence of those effects on wavelength, lidar footprint, view angle, and leaf angle distribution (LAD), using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) remote sensing radiative transfer simulation model. The largest effects of Lambertian assumptions on the waveform are observed at visible wavelengths, small footprints, and oblique interrogation angles relative to the mean leaf angle. For example, a 77% increase in return signal was observed with a configuration of a 550 nm wavelength, 10 cm footprint, and 45° interrogation angle to planophile leaves. These effects are attributed to (i) the bidirectional scattering distribution function (BSDF) becoming almost purely specular in the visible, (ii) small footprints having fewer leaf angles to integrate over, and (iii) oblique angles causing diminished backscatter due to forward scattering. Non-transmitting leaf assumptions have the greatest error for large footprints at near-infrared (NIR) wavelengths. Regardless of leaf angle distribution, all simulations with non-transmitting leaves at a 5 m footprint and 1064 nm wavelength showed around a 15% reduction in return signal. We attribute this signal reduction to the increased multiscatter contribution for larger fields of view and increased transmission at NIR wavelengths. Armed with the knowledge from this study, researchers will be able to select appropriate sensor configurations to account for or limit BSDF effects in forest lidar data.
(This article belongs to the Special Issue Lidar Remote Sensing of Forest Structure, Biomass and Dynamics)
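
The toy model below contrasts the backscatter from a Lambertian leaf with a simple specular lobe to illustrate why the Lambertian assumption breaks down at oblique angles; it is not the leaf BSDF model used in DIRSIG, and all reflectance values and the lobe exponent are made up:

```python
# Toy comparison only: monostatic (backscatter) leaf response under a Lambertian
# BRDF versus a Phong-like specular lobe. Parameters are illustrative.
import numpy as np

def lambertian_backscatter(theta_i_deg, rho=0.10):
    # Lambertian leaf: reflected radiance factor is (rho / pi) * cos(theta_i).
    return (rho / np.pi) * np.cos(np.radians(theta_i_deg))

def specular_backscatter(theta_i_deg, rho_s=0.10, n=20):
    # Lobe peaked on the mirror direction; the backscatter direction lies
    # 2 * theta_i away from it, so the return collapses at oblique angles.
    lobe = np.clip(np.cos(np.radians(2.0 * theta_i_deg)), 0.0, None) ** n
    return rho_s * lobe

for theta in (0.0, 20.0, 45.0):
    lam = lambertian_backscatter(theta)
    spec = specular_backscatter(theta)
    print(f"theta_i = {theta:4.1f} deg   Lambertian = {lam:.4f}   specular lobe = {spec:.4f}")
```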

8032 KiB  
Article
Towards an Improved LAI Collection Protocol via Simulated and Field-Based PAR Sensing
by Wei Yao, David Kelbe, Martin Van Leeuwen, Paul Romanczyk and Jan Van Aardt
Sensors 2016, 16(7), 1092; https://doi.org/10.3390/s16071092 - 14 Jul 2016
Cited by 9 | Viewed by 7016
Abstract
In support of NASA’s next-generation spectrometer, the Hyperspectral Infrared Imager (HyspIRI), we are working towards assessing sub-pixel vegetation structure from imaging spectroscopy data. Of particular interest is Leaf Area Index (LAI), which is an informative, yet notoriously challenging, parameter to measure efficiently in situ. While photosynthetically active radiation (PAR) sensors have been validated for measuring crop LAI, there is limited literature on the efficacy of PAR-based LAI measurement in the forest environment. This study (i) validates PAR-based LAI measurement in forest environments, and (ii) proposes a suitable collection protocol, which balances efficiency with measurement variation, e.g., due to sun flecks and various-sized canopy gaps. A synthetic PAR sensor model was developed in the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model and used to validate LAI measurement based on first principles and explicitly known leaf geometry. Simulated collection parameters were adjusted to empirically identify optimal collection protocols. These collection protocols were then validated in the field by correlating PAR-based LAI measurements with the normalized difference vegetation index (NDVI) extracted from the “classic” Airborne Visible Infrared Imaging Spectrometer (AVIRIS-C) data (R² = 0.61). The results indicate that our proposed collection protocol is suitable for measuring the LAI of sparse forest (LAI < 3–5 m²/m²).
(This article belongs to the Section Remote Sensors)
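
PAR-based LAI retrieval of this kind typically rests on a Beer-Lambert inversion, LAI = -ln(tau) / k, where tau is the below- to above-canopy PAR ratio and k is an extinction coefficient. The sketch below applies that standard formula to hypothetical readings; neither the values nor k come from the study:

```python
# Standard Beer-Lambert inversion for PAR-based LAI; readings and k are placeholders.
import numpy as np

par_above = 1500.0                                   # above-canopy PAR (umol m^-2 s^-1), placeholder
par_below = np.array([180.0, 220.0, 95.0, 310.0])    # below-canopy readings along a transect
k = 0.5                                              # extinction coefficient (spherical leaf angle distribution)

tau = par_below / par_above                          # canopy transmittance per reading
lai = -np.log(tau) / k
print(f"per-reading LAI: {np.round(lai, 2)}")
print(f"transect mean LAI: {lai.mean():.2f} m^2/m^2")
```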

27114 KiB  
Article
An Analysis of the Side Slither On-Orbit Calibration Technique Using the DIRSIG Model
by Aaron Gerace, John Schott, Michael Gartley and Matthew Montanaro
Remote Sens. 2014, 6(11), 10523-10545; https://doi.org/10.3390/rs61110523 - 31 Oct 2014
Cited by 21 | Viewed by 8330
Abstract
Pushbroom-style imaging systems exhibit several advantages over line scanners when used on space-borne platforms, as they typically achieve higher signal-to-noise ratios and reduce the need for moving parts. Pushbroom sensors contain thousands of detectors, each having a unique radiometric response, which inevitably leads to streaking and banding in the raw data. To take full advantage of the potential exhibited by pushbroom sensors, a relative radiometric correction must be performed to eliminate pixel-to-pixel non-uniformities in the raw data. Side slither is an on-orbit calibration technique in which a 90-degree yaw maneuver is performed over an invariant site to flatten the data. While this technique has been utilized with moderate success for the QuickBird satellite [1] and the RapidEye constellation [2], further analysis is required to enable its implementation for the Landsat 8 sensors, which have a 15-degree field-of-view and a 0.5% pixel-to-pixel uniformity requirement. This work uses the DIRSIG model to analyze the side slither maneuver as applicable to the Landsat sensor. A description of favorable sites, how to adjust the maneuver to compensate for the curvature of “linear” arrays, how to efficiently process the data, and an analysis to assess the quality of the side slither data are presented.
(This article belongs to the Special Issue Landsat-8 Sensor Characterization and Calibration)
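
The flat-fielding idea behind side slither can be sketched as follows: over a uniform site imaged during the yaw maneuver, every detector sees nearly the same radiance, so per-detector column means estimate relative gains that are divided out. The synthetic array and gain pattern below are placeholders; the paper's processing (site selection, compensating for array curvature) is considerably more involved:

```python
# Flat-fielding sketch: estimate per-detector relative gains over a near-uniform
# scene and divide them out. Synthetic data; not the paper's processing chain.
import numpy as np

rng = np.random.default_rng(3)
n_lines, n_detectors = 2000, 512
true_gain = 1.0 + 0.02 * rng.standard_normal(n_detectors)          # ~2% per-detector non-uniformity
scene = 100.0 + rng.normal(0.0, 0.5, size=(n_lines, n_detectors))  # nearly uniform site
raw = scene * true_gain                                            # raw counts with streaking

relative_gain = raw.mean(axis=0)
relative_gain /= relative_gain.mean()                              # normalize to unit mean gain
flattened = raw / relative_gain

print(f"column-to-column std before: {raw.mean(axis=0).std():.3f}")
print(f"column-to-column std after:  {flattened.mean(axis=0).std():.3f}")
```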

350 KiB  
Article
Using Physically-Modeled Synthetic Data to Assess Hyperspectral Unmixing Approaches
by Matthew Stites, Jacob Gunther, Todd Moon and Gustavious Williams
Remote Sens. 2013, 5(4), 1974-1997; https://doi.org/10.3390/rs5041974 - 19 Apr 2013
Cited by 2 | Viewed by 6196
Abstract
This paper considers an experimental approach for assessing algorithms used to exploit remotely sensed data. The approach employs synthetic images that are generated using physical models to make them more realistic while still providing ground truth data for quantitative evaluation. This approach complements the common approach of using real data and/or simple model-generated data. To demonstrate the value of such an approach, the behavior of the FastICA algorithm as a hyperspectral unmixing technique is evaluated using such data. This exploration leads to a number of useful insights such as: (1) the need to retain more dimensions than indicated by eigenvalue analysis to obtain near-optimal results; (2) conditions in which orthogonalization of unmixing vectors is detrimental to the exploitation results; and (3) a means for improving FastICA unmixing results by recognizing and compensating for materials that have been split into multiple abundance maps.
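
A minimal sketch of FastICA used as an unmixing step, in the spirit of the evaluation above: pixels are modeled as linear mixtures of endmember spectra and ICA attempts to recover abundance-like sources. The endmembers, abundances, and noise level are random placeholders, not the physically modeled DIRSIG scenes used in the paper:

```python
# FastICA applied to a synthetic linear mixing model; all data are placeholders.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
n_pixels, n_bands, n_endmembers = 5000, 120, 4
endmembers = rng.uniform(0.0, 1.0, size=(n_endmembers, n_bands))   # placeholder endmember spectra
abundances = rng.dirichlet(np.ones(n_endmembers), size=n_pixels)   # sum-to-one abundance fractions
pixels = abundances @ endmembers + 0.01 * rng.standard_normal((n_pixels, n_bands))

ica = FastICA(n_components=n_endmembers, random_state=0)
est_sources = ica.fit_transform(pixels)   # estimated abundance-like maps, one per component
print(est_sources.shape)                  # (5000, 4); sign and scale are ambiguous, as with any ICA
```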

910 KiB  
Article
Simulation of Image Performance Characteristics of the Landsat Data Continuity Mission (LDCM) Thermal Infrared Sensor (TIRS)
by John Schott, Aaron Gerace, Scott Brown, Michael Gartley, Matthew Montanaro and Dennis C. Reuter
Remote Sens. 2012, 4(8), 2477-2491; https://doi.org/10.3390/rs4082477 - 22 Aug 2012
Cited by 19 | Viewed by 9191
Abstract
The next Landsat satellite, which is scheduled for launch in early 2013, will carry two instruments: the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS). Significant design changes over previous Landsat instruments have been made to these sensors to potentially enhance the quality of Landsat image data. TIRS, which is the focus of this study, is a dual-band instrument that uses a push-broom style architecture to collect data. To help understand the impact of design trades during instrument build, an effort was initiated to model TIRS imagery. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool was used to produce synthetic “on-orbit” TIRS data with detailed radiometric, geometric, and digital image characteristics. This work presents several studies that used DIRSIG simulated TIRS data to test the impact of engineering performance data on image quality in an effort to determine if the image data meet specifications or, in the event that they do not, to determine if the resulting image data are still acceptable.
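
As a loose illustration of the radiometric and digital image characteristics such a simulation layers onto ideal radiance, the sketch below applies per-detector gain variation, additive noise, and 12-bit quantization to a placeholder radiance field; none of the numbers are TIRS engineering values, and this is not the DIRSIG processing chain:

```python
# Illustrative radiometric/digital degradation of a placeholder radiance field.
import numpy as np

rng = np.random.default_rng(5)
radiance = rng.uniform(8.0, 10.5, size=(600, 400))            # ideal TOA radiance, placeholder units
gain = 1.0 + 0.01 * rng.standard_normal(radiance.shape[1])    # per-detector gain across the pushbroom array
noise = rng.normal(0.0, 0.05, size=radiance.shape)            # additive detector noise

counts = np.clip((radiance * gain + noise) / 12.0 * 4095.0, 0, 4095).astype(np.uint16)  # 12-bit quantization
print(counts.min(), counts.max(), counts.dtype)
```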