Quantitative Remote Sensing (Book Preview)
This book provides comprehensive and in-depth explanations of all topics related to quantitative
remote sensing and its applications in terrestrial, biospheric, hydrospheric, and atmospheric studies.
It elucidates how to retrieve quantitative information on a wide range of environmental parameters
from various remote sensing data at the highest accuracy possible and expounds how different
aspects of the target of remote sensing can be quantified using diverse analytical methods at various
levels of accuracy. Written in easy-to-follow language, logically organized, and illustrated with step-by-step examples, the book helps readers deepen their understanding of the theory and cutting-edge research on quantitative remote sensing.
Features
This is a suitable textbook for upper-level undergraduate and postgraduate students and serves as a handy, valuable reference for professionals working in environmental monitoring. By reading
this book, readers can gain a sound understanding of how to retrieve quantitative information on
the environment from diverse remote sensing data using the most appropriate cutting-edge methods
and software.
Jay Gao
Designed cover image: © Shi Y, J Gao, G Brierley, X Li, GLW Perry, and T Xu (2023), Improving the accuracy of models to
map alpine grassland above-ground biomass using Google Earth Engine. Grass and Forage Sci 78(2): 237-253. doi: 10.1111/
gfs.12607, CC BY 4.0 Deed, https://creativecommons.org/licenses/by/4.0/
First edition published 2025
by CRC Press
2385 NW Executive Center Drive, Suite 320, Boca Raton FL 33431
and by CRC Press
4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN
CRC Press is an imprint of Taylor & Francis Group, LLC
© 2025 Jay Gao
Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume
responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted
to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission
to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us
know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized
in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying,
microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the
Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are not
available on CCC please contact [email protected]
Trademark notice: Product or corporate names may be trademarks or registered trademarks and are used only for identification
and explanation without intent to infringe.
Library of Congress Cataloging-in-Publication Data
Names: Gao, Jay, author.
Title: Quantitative remote sensing : fundamentals and environmental applications / Jay Gao.
Description: First edition. | Boca Raton, FL : CRC Press, 2025. | Includes bibliographical references and index.
Identifiers: LCCN 2024022074 (print) | LCCN 2024022075 (ebook) | ISBN 9781032852874 (hardback) |
ISBN 9781032852898 (paperback) | ISBN 9781003517504 (ebook)
Subjects: LCSH: Environmental sciences–Remote sensing. | Remote sensing–Data processing.
Classification: LCC GE45.R44 G36 2025 (print) | LCC GE45.R44 (ebook) |
DDC 621.36/78–dc23/eng/20240808
LC record available at https://lccn.loc.gov/2024022074
LC ebook record available at https://lccn.loc.gov/2024022075
ISBN: 978-1-032-85287-4 (hbk)
ISBN: 978-1-032-85289-8 (pbk)
ISBN: 978-1-003-51750-4 (ebk)
DOI: 10.1201/9781003517504
Typeset in Times New Roman
by Newgen Publishing UK
Access the Support Material: www.routledge.com/9781032852874
Dedication
I would like to dedicate this book to my brothers and sister who have always
supported me wholeheartedly throughout my academic career and welcomed me
with open arms to their homes as a true family member. I will always cherish the
wonderful times I spent together with them in their homes, which felt like a second
home away from home for me.
Contents
Preface..............................................................................................................................................xiii
Acknowledgments...........................................................................................................................xvii
About the Author..............................................................................................................................xix
List of Acronyms..............................................................................................................................xxi
PART I Fundamentals
Chapter 1 Introduction...................................................................................................................3
1.1 Quantitative Remote Sensing..............................................................................4
1.1.1 Static Versus Dynamic Quantification................................................... 6
1.1.2 Target of Quantification –The Environment.........................................6
1.1.3 Nature of Quantification........................................................................ 7
1.1.4 Requirements of Quantification.......................................................... 11
1.2 Field Data Collection........................................................................................13
1.2.1 In situ Sampling..................................................................................13
1.2.2 Collection of Terrestrial Data.............................................................. 17
1.2.3 Collection of Biophysical Data...........................................................20
1.2.4 Water Data...........................................................................................25
1.2.5 Second-hand Ancillary Data................................................................ 30
1.3 Common Predictor Variables............................................................................ 31
1.3.1 From Drone Images............................................................................. 31
1.3.2 From Space-borne Bands.................................................................... 32
1.4 Accuracy of Quantified Results........................................................................36
1.4.1 Validation and Cross-validation.......................................................... 36
1.4.2 Accuracy Expressions......................................................................... 38
1.5 Challenges Facing Quantification.....................................................................40
1.5.1 Limitations of Imagery Data...............................................................40
1.5.2 Retrospective Quantification...............................................................42
1.5.3 Data Mismatch....................................................................................42
1.6 Organization of this Book................................................................................. 44
Index...............................................................................................................................................429
Preface
It is very difficult to trace when exactly quantitative remote sensing came into existence. When
remote sensing as a discipline was developed in the 1970s, it was applied mostly for retrieving
qualitative information, either using manual methods or digital data analysis. In my student years,
I was engaged with qualitative remote sensing research exclusively, using aerial photographs and
analogue satellite images. However, how to retrieve quantitative information from remotely sensed
data has always intrigued me. Soon after starting my career as an academic, I found myself handicapped by the lack of suitable books on the topic of quantitative remote sensing. Supervising my postgraduate students' thesis research gave me the opportunity to gain an understanding of
how to quantitatively retrieve physical parameter values from aerial photographs in the early 1990s.
Back then it was extremely cumbersome and challenging to undertake quantitative remote sensing
analysis due to the lack of standard software for atmospheric correction and the absence of powerful
computing packages and platforms. Indeed, the retrieval was a painstakingly slow and awkward
process involving working with several disparate computing packages to visualize the quantitative
outcome graphically.
This situation improved in subsequent years with the advent of advanced digital image processing software packages that allowed rudimentary radiometric calibration of satellite data and digital mosaicking of multiple images. Since then, scientists have attempted to retrieve quantitative information on the target of sensing in widely ranging fields. These attempts have proved fruitful owing
to the advances in computing power and the emergence of sophisticated computing packages. In the
evolutionary trajectory of quantitative remote sensing, two critical milestones stand out and merit
particular elaboration: the advent of LiDAR data and the exponentially improved computing power,
especially the widespread availability of machine learning packages. Unlike imagery data, which may be processed to derive qualitative information, LiDAR data are almost without exception processed to derive quantitative information on the target, information that supplements the mostly 2D quantitative information from remote sensing images. Consequently, LiDAR data have opened the floodgates of
quantitative remote sensing and extended remote sensing applications to widely ranging fields that
were unimaginable prior to the LiDAR era. Machine learning creates a computing environment in
which multiple co-variables can be considered and analyzed simultaneously in an effort to increase
the reliability of the retrieved quantitative information. It significantly facilitates the undertaking
of quantitative remote sensing by non-remote sensing specialists. Owing to this powerful analytical capability, quantification tasks that cannot be accomplished with rudimentary statistical analysis can now be achieved at a much higher and more respectable accuracy. Powerful computing algorithms have also widened the application areas of quantitative remote sensing to encompass the four spheres (terrestrial, biospheric, hydrospheric, and atmospheric) of the natural environment that are closely
intertwined with human life.
As a discipline, quantitative remote sensing has developed phenomenally and matured over the
last two decades. The large volume of research outcome generated warrants the necessity of writing
a book on the topic to systematically scrutinize the current state of the discipline. This book aims to
enlighten educators and professionals on the latest development in the field and inform them how to
make use of the latest technologies and methods in their own work. Manifesting the culmination of
decades of my teaching and research in quantitative remote sensing, this book is a timely addition
to the existing body of literature on remote sensing.
As a newly emerged, fledgling discipline, quantitative remote sensing lacks firm theoretical grounding. Nor do standard procedures of practice exist (e.g., there is a lack of universal methods of data
processing and results validation). How quantitative remote sensing is practised is dictated by the
field in which the quantitative information is derived. However, there are still some commonalities
to rudimentary data preparation, general components of data analysis, and quality assurance of the
retrieved outcome. For this reason, the book has been structured in two general parts, fundamentals
and practices, each comprising four chapters. After a general introduction to the topic of quantita-
tive remote sensing, this part provides an overview of remotely sensed data that have found wide
applications in quantitative remote sensing. It is hoped that this chapter will equip the reader with
the knowledge of how to select the most appropriate types of data for his or her own projects. Other topics covered in this part include atmospheric calibration, a processing step unique and essential to
quantitative remote sensing, and common analytical methods that have found applications in quan-
titative remote sensing and that are instrumental to the achievement of reasonable quantification
outcomes. In general, this part contains sufficient mathematical background to reveal how remotely
sensed data are converted to quantitative information while also providing quality assurance of the
retrieved results.
The practice part elucidates how the retrieval of diverse quantitative information is implemented
for different targets in four spheres from diverse remote sensing data, both graphic and non-graphic.
Since the retrieval of physical parameter values varies widely with the sphere or the target of quan-
tification in the same sphere, I have decided to organize the second part into four chapters, each
devoted to the quantification in a unique sphere of land, water, air, and biosphere. The content of
each chapter does not conform to a standard format. Nor does it follow a particular logic, except that it progresses from simpler topics at the beginning to more sophisticated and complex topics later on. Therefore, the reader is not expected to follow all the chapters in the order they appear, or the sequence of sections within each chapter, especially those chapters in the second part. Since the
retrieval in each sphere is so unique in the type of suitable data used, the methods of data processing,
and the models of quantification used, it is unrealistic to expect a professional to be conversant
with the quantification in all four spheres. Chances are that a few chapters may suffice. Knowledgeable readers may jump directly to the practice chapters most relevant to their field of interest after reading the first part. Acronyms are widely used in the quantitative remote sensing literature, and this book is no exception. Whenever an acronym is encountered for the first time, it is spelled out in full only once rather than in every chapter. Readers unfamiliar with the acronyms are encouraged to
consult the list of acronyms at the beginning of the book.
This book aims to inform the reader on how to retrieve information on a target of interest
quantitatively from widely ranging remote sensing data using different processing methods and
models. Its flavor leans heavily towards pragmatism, namely, how to produce the most reliable
quantitative information from the relevant remote sensing data using the best practice. As such,
it minimizes the theoretical exposition of the atmospheric radiative transfer process. Instead, it
emphasizes how to implement the quantitative retrieval in a given computing environment. When
multiple platforms are available, the pros and cons of each computing platform are comparatively
evaluated. Wherever relevant, step-by-step examples are supplied to illustrate how the retrieval is
accomplished conceptually. Ample examples are supplied to deepen the comprehension of the text
and to facilitate maximum reader engagement. Another unique feature of the book is the lavish
attention paid to retrieval accuracy, for the entire effort of quantification is lost if the retrieved out-
come cannot meet the practical accuracy requirements. In order to understand how the accuracy
may be further improved, all the factors responsible for degrading the quality of the retrieved infor-
mation are enumerated and elaborated, with their contribution to inaccuracy ranked and quantified
whenever possible.
This book is written with the assumption that the reader has already gained some fundamental
understanding of remote sensing, such as digital number and its relationship with the spectral
reflectance curves, and the accuracy (precision) of quantitative retrieval. It can be used as a text-
book for upper-level undergraduate and postgraduate courses, or a handy reference. Additional
teaching-related materials (e.g., lecturing PPTs and lab assignments) can be obtained by browsing
the publisher’s website (WWW.CRCPRESS.COM). Accompanying this book is a lab manual
with step-by-step instructions for a few lab exercises. They can be accessed online only by those instructors who adopt the book for their classroom teaching. Professionals working in a field
related to the natural environment will find the book useful in enlightening them on how to derive
the quantitative information of their interest from a vast array of remotely sensed data using the
latest computing methods and technologies available. Through reading this book, the reader can
hope to gain an in-depth understanding and appreciation of the subject matter and cutting-edge
research on quantitative remote sensing that no other books can offer.
Jay Gao
April 2024, Auckland
Acknowledgments
This book could not have been written without the assistance of numerous parties that I would
like to acknowledge here. The first and foremost party is my former employer, the University of Auckland, which bestowed on me an honorary lectureship position following my earlier-than-expected retirement, during which this book was written. Under this arrangement, I was privileged to access the university's vast digital resources, from which this book profited handsomely. The next parties I am deeply indebted to are my former doctoral students, whose PhD theses are widely cited and whose research outcomes are used as illustrations in the appropriate places: Vincent Wang, on the quantitative assessment of vegetation carbon using integrated LiDAR and imagery data; Yan Shi, on the quantification of grassland above-ground biomass using both drone images at the micro-scale and satellite imagery at the catchment scale; and Daniel de le Torre, on the estimation of rice yield using machine learning methods. In particular, Yan also helped to produce high-quality illustrations, including the front cover image of this book. I would like to thank numerous authors
who have made their publications freely accessible to me, and whose graphics and tables are re-used
throughout this book to enhance its quality and ease the comprehension of the text. Finally, I would
like to thank the staff at Taylor & Francis who guided me through the maze of getting published in
this highly complex process. In particular, I am grateful for the assistance and immediate attention
of Irma Britton and Chelsea Reeves who are always reliable in answering my queries promptly.
Acronyms
AAI absorbing aerosol index
ABI algal bloom index
AERONET AErosol RObotic NETwork
AFAI alternative floating algae index
AGB(C) above-ground biomass (carbon)
AHI Advanced Himawari Imager
AI aerosol index
AIRS Atmospheric Infrared Sounder
AISA Airborne Imaging Spectrometer for Application
ALOS Advanced Land Observing Satellite
ALS airborne laser scanning
AMF air mass factor
AMSR Advanced Microwave Scanning Radiometer
ANFIS adaptive neural fuzzy inference system
ANN artificial neural network
AOD aerosol optical depth
AOP apparent optical property
AOT atmospheric optical thickness
ARVI atmospherically resistant vegetation index
ASCAT Advanced Scatterometer
ASTER Advanced Spaceborne Thermal Emission and Reflection Radiometer
ATCOR atmospheric and topographic correction
AVIRIS-NG Airborne Visible InfraRed Imaging Spectrometer -Next Generation
BLH boundary layer height
BN Bayesian network
BRDF bidirectional reflectance distribution function
BT brightness temperature
CAAS complex atmospheric algorithm scheme
CALIOP Cloud-Aerosol LiDAR with Orthogonal Polarization
CALIPSO Cloud-Aerosol LiDAR and Infrared Pathfinder Satellite Observation
CASI Compact Airborne Spectrographic Imager
CBI composite burn index
CC canopy closure
CC chlorophyll content
CCC canopy chlorophyll content
CD canopy diameter
CDOM colored (chromophoric) dissolved organic matter
CFNN cascade-forward neural network
CH canopy height
CHM canopy height model
CHRIS Compact High Resolution Imaging Spectrometer
CI chlorophyll index
CIB column-integrated biomass
CMVS clustering for multi-view stereopsis
CNN convolutional neural network
CSI coherent scatterer InSAR
CSM crop surface model
Part I
Fundamentals
1 Introduction
Remote sensing is defined as the science and art of garnering information about features or phe-
nomena of interest in an area on or near the Earth’s surface via detecting and analyzing the electro-
magnetic radiation reflected, scattered, or emitted by them. The radiation received at the sensor is rendered either as an image or as a non-imagery point cloud of randomly distributed points over the area of study. In the former case, the image is, almost without exception, multispectral and captures the spectral behavior of
the target and its variation with wavelength. In the latter case, the captured data manifest the 3D pos-
ition of the target at the sensed spot. Conventionally, imagery data are analyzed digitally to derive
qualitative, categorical information on the target, such as different types of vegetation or different
classes of water (e.g., deep water, shallow water) after the pixel values of the target in the spectral
domain are categorized into a pre-determined number of groups using analyst-defined criteria under
certain assumptions. In this process of converting continuously varying pixel values to categorical
surface covers enumerated as nominal data, the mapped covers rarely have a one-to-one correspond-
ence to the input pixel values. Instead, a range of pixel values is likely lumped together and assigned
to a single cover as image classification is virtually a process of simplification and amalgamation of
the original input pixel values. With only occasional exceptions, images are classified based solely
on image-derived inputs without resorting to external or auxiliary data. The number of classified
pixels having a certain specific value may be expressed quantitatively (e.g., as an area or a percentage of the scene of study), but this kind of information extraction is not considered quantitative remote sensing, as the derived quantity is not associated directly with a unique pixel value, nor does it have a spatial component.
Instead, it is based on the count of the pixels having a given range of values. In essence, it is just a
type of qualitative remote sensing (e.g., land cover mapping) at a more detailed level.
Quantitative sensing refers to the process of converting the sensed imagery data to a param-
eter value on the ground in the spatial domain via some kind of mathematical manipulation or
sophisticated modeling. Since the pixel values are translated to the in situ values either directly or
indirectly, the quantified outcome is precise, able to preserve all the subtle variations in the attribute
value of the target parameter in the output. Non-imagery data themselves may be quantitative, but
they do not pertain to quantification as they simply indicate the 3D position of randomly distributed
points, of which the third dimension (height) is the frequent target of quantification, be it the bare
ground elevation or surface relief. For such data, the quantitative information covered in this book refers to the change in the target's position (e.g., rate of debris displacement) or height (e.g., ground subsidence).
The purpose of this chapter is to present an overarching overview of quantitative remote sensing
and lay the foundation of how to undertake it. It starts with narrating its definition, nature, and
requirements. The second part of this chapter elaborates how to collect field data, including in situ
sampling, spectral data measurement, and gathering of third-party auxiliary data that serve as an indispensable benchmark for remotely sensed data against the in situ observed parameter values.
This discussion is followed by a review of image-derived variables that have been commonly used
to quantify the target parameter. The fourth section of this chapter expounds how the quantified
results are evaluated and validated for their reliability and accuracy. The fifth section elucidates the
obstacles commonly facing the quantification process and the factors degrading the reliability of the
quantified results. Finally, this chapter outlines the organization of this book and briefly introduces
the content of the chapters to follow.
TABLE 1.1
Disparities between qualitative and quantitative remote sensing in various aspects

Primary input
  Qualitative: spectral bands, facilitated by image-derived spatial attributes; nearly all internal data.
  Quantitative: spectral data, field data, and possibly environmental data; combined use of both internal and external data.

Field data
  Qualitative: non-essential, only needed in accuracy assessment.
  Quantitative: essential, and must be collected concurrently with the imagery data.

Use of models
  Qualitative: not relevant.
  Quantitative: essential in certain cases.

Decision rules
  Qualitative: spectral domain partitioned into non-overlapping spheres, each corresponding to a unique category of ground cover; almost no assumptions involved.
  Quantitative: mathematical equations or models to translate pixel values to in situ observed parameter values; assumptions are rife to simplify quantification.

Ease of implementation
  Qualitative: relatively simple computation; very easy using standard software.
  Quantitative: sophisticated analysis and modeling; complex, lengthy, and challenging; niche packages or even scripting needed.

Atmospheric effects
  Qualitative: safe to ignore without making noticeable changes to the output.
  Quantitative: imperative to eliminate, together with topographic effects (if relevant), to generate authentic results.

Nature of output
  Qualitative: imprecise, categorical, limited in quantity, mostly one-off; static, mostly time-invariable; indicative of current state.
  Quantitative: precise, continuous values; dynamic, current at the time of sensing, time-series outputs possible; able to indicate what is likely going to happen.

Reliability
  Qualitative: high accuracy easier to achieve, affected by the classifier and the homogeneity of the target.
  Quantitative: accuracy tends to be lower, or even unknown; affected by diverse factors, including the sensing environment and the reliability of models.

Domain of applications
  Qualitative: limited, mostly in the terrestrial sphere and biosphere.
  Quantitative: widely ranging spheres from the ground all the way up to the air.
data, or make use of external data. Qualitative remote sensing can be regarded mostly as static as
the produced output is considered time-invariable or slightly variable with time. In contrast, quan-
titative remote sensing is dynamic, producing outputs that are current at the time of sensing or that
have a strong dependency on time. It is able to generate time-series outputs in near real time and
predict the status of the target parameter in the future based on what is observed on current images.
Nevertheless, the output of quantitative remote sensing is generally less reliable than its counterpart
of qualitative remote sensing as it is affected by more factors, including the sensing environment.
However, quantitative remote sensing can retrieve much more detailed and valuable information to
meet the needs of more applications in many fields than qualitative remote sensing. Its wider scope
of applications encompasses all the spheres from the ground level all the way up to the stratosphere.
In comparison, qualitative remote sensing is of little use in some applications. Although there is no
direct linkage between qualitative remote sensing and quantitative remote sensing as the quantitative
information can be derived from pixel values directly without the need to establish its identity first,
in certain cases the latter is built upon the former by going one step further to derive detailed numer-
ical information for each of the mapped covers. For instance, instead of classifying the vegetative
covers categorically as forest, shrubland, and grassland as is commonly implemented in qualitative
remote sensing, their aboveground biomass (AGB) or carbon stock can be derived from these covers
quantitatively with the assistance of field data and well-established estimation models if the input
data are imagery (but not so with non-imagery data). Needless to say, quantitative remote sensing
is much more complex and demanding to realize than qualitative remote sensing. Frequently, this
realization is possible only if facilitated by external data or highly sophisticated physical models that
may demand simplification to ensure model invertibility. Understandably, the accuracy of the quan-
titative information retrieved from remotely sensed data is markedly lower than that of qualitative
results, or impossible to validate because of the lack of ground truth data.
To a large degree, the development and sophistication of quantitative remote sensing have been
made possible and even spurred by the rapid advances in sensing technology and computing power,
especially the easy availability of purpose-built computing platforms and packages. They enable
remotely sensed data to be calibrated at an unprecedented accuracy level and to be analyzed either
solely or in conjunction with other non-remote sensing data to derive quantitative information about
the target in ever expanding fields of application.
the biosphere and hydrosphere, such as forest fire intensity and circulation of ocean currents, are
also the targets of study in quantitative remote sensing.
On the surface, human activities and their impacts on the environment are not considered parts
of the natural environment, and hence are beyond the scope of this book. In reality, certain human
activities may leave lasting legacies or imprints behind in the environment, such as plantation forests
and human-induced landslides. They are all considered the targets of quantification as natural forest
and plantation forest can be quantitatively assessed from identical data using the same method of
data analysis irrespective of their origin. Similarly, fires can be ignited naturally by lightning or deliberately lit by humans, either to clear the land in swidden agriculture or to burn off the excessive accumulation of flammable fuels to minimize the risk of bush fires. Thus, the quantification of fire properties such as burning intensity and fire spread rate is also covered in this book, as quantification focuses primarily on the phenomenon itself irrespective of its causes.
The quantitative information about a parameter that can be retrieved from remotely sensed data
may be temporally stable, ephemeral, or in a state of perpetual change. Stable parameters tend to be mostly static. Ephemeral parameters, such as forest fires and volcanic eruptions, last only minutes or hours and require images of a super-fine temporal resolution to quantify. Some elements of the
environment are not stationary but in a state of constant motion, such as winds and ocean currents.
Constantly changing parameters pertain mostly to the atmosphere (and to a lesser extent, the tropo-
sphere) and hydrosphere, such as air pollutants under the effects of winds and suspended sediments
in coastal waters. It is crucial to distinguish the temporal dimension of such features, as it affects
not only the ease of quantification but also the accuracy at which the target can be quantified. For
instance, it is much easier to collect a large number of in situ samples to construct the estimation
model (see Eq. 1.1) for static and stationary parameters. Conversely, it is highly demanding and
almost impossible to collect in situ samples to verify the quantification accuracy of fast changing
and quickly moving targets such as air pollutants in a volcano plume.
Certain aspects of the environment, such as the elevation of a topographic surface, may be
quantified from stereoscopic aerial photographs as in conventional photogrammetry, although this
method has been gradually replaced with LiDAR sensing. The derivation of precise bare ground
elevation from aerial photographs is considered as the task of photogrammetry, not quantitative
remote sensing in this book. Only its close cousin, the highly automated and computationally
intensive Structure from Motion (SfM) photogrammetry (see Section 4.5.1 for more details) will
be covered in this book. However, the derivation of local relief, such as tree height and changes
in elevation induced by volcano eruptions, landslides, and ground subsidence from LiDAR data
is deemed quantitative remote sensing, and hence extensively elaborated in this book. The widely
ranging parameters of the environment that have been quantified from remotely sensed data are
compared and contrasted in Table 1.2, together with the best data to use and the requirements of
quantification. How exactly the quantification is achieved will be expounded in detail in the rele-
vant chapters to follow.
1.1.3 Nature of Quantification
1.1.3.1 General Principle
How exactly the quantitative information is retrieved from remotely sensed data depends ultim-
ately upon the nature of the data used, namely, whether they are imagery (graphic) or non-imagery
(non-graphic). Prior to the advent and wide adoption of LiDAR technology, all remotely sensed
data are exclusively graphic, either in analogue prints or the digital format. These images may be
acquired over different portions of the spectrum and contain multispectral bands. With the use of
imagery data, quantitative remote sensing refers to the derivation of a quantitative measure for a
target parameter from its pixel value in a single band or the transformed pixel values in multiple
TABLE 1.2
Summary and comparison of commonly quantified environmental parameters covered in this book, their best sources of data to use, and requirements

Static parameters
  State (surface): albedo, temperature, heat flux, moisture, SST. Best data: VNIR and TIR bands. Requirements: in situ samples, regression analysis.
  State (surface): significant wave height. Best data: TerraSAR and GNSS-R. Requirements: complex inversion of the wind-sea wave spectrum.
  State (surface): leaf area index. Best data: multispectral bands. Requirements: use of special indices or complex physical models.
  State (height/depth): tree height, canopy height. Best data: LiDAR data. Requirements: construction of a CHM.
  State (height/depth): bathymetry. Best data: optical bands, bathymetric LiDAR. Requirements: correction for the impact of in-water substances.
  Content: salinity, soil salt, moisture, soil contaminants. Best data: hyperspectral and multispectral data. Requirements: use of numerous predictor variables and complex algorithms.
  Content: AGB and AGC. Best data: LiDAR and imagery. Requirements: allometric equations; in situ tree parameters.
  Content: in-water matter, Chl-a, CDOM. Best data: coarse-resolution ocean satellite data. Requirements: use of empirical or semi-analytical models.
  Content: solid particles, dust intensity, PM2.5, trace gases (CH4, O3, CO, NO2, SO2, CO2). Best data: coarse-resolution atmospheric satellite data. Requirements: complex modeling and physical models, assumptions (simplifications).
  Events: algal and phytoplankton blooms. Best data: ocean color satellite data, shortwave bands. Requirements: use of indices and normalized water-leaving radiance.
  Potential: crop yield. Best data: multispectral bands. Requirements: use of co-variables and crop yield models, harvest index.
  Quality: LUE, FPAR, NPP. Best data: Vexcel UltracamX, optical data. Requirements: use of indices and regression models.

Dynamic parameters
  Rate: flow, rain, wind, ocean circulation. Best data: two-time optical and radar images. Requirements: tracking of the same feature in both images or based on spatial auto-correlation.
  Thickness/depth: lava deposit, snow, debris. Best data: two-time LiDAR data, SAR images. Requirements: co-registered DEMs of the same grid size.
  Volume: wood, trees. Best data: airphotos, drone images, LiDAR, InSAR. Requirements: in situ samples, models, and DSM and CHM.
bands. The relationship between the quantitative information to be retrieved (Y) and the pixel value (X) is expressed conceptually as:

Y = f(X1, X2, ..., Xn)    (1.1)

where Xi = pixel value in spectral band i, or a transformed ratio of pixel values in multiple bands, or the ith predictor variable; Y = value of the target parameter to be quantified; n = total number of bands or predictor variables considered.
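As a minimal illustration of Eq. 1.1 (a sketch only, assuming Python with NumPy and scikit-learn; the predictor and sample values are hypothetical), the mapping f can be approximated empirically from a handful of in situ samples:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: one row per in situ sample, one column per predictor Xi
# (e.g., red reflectance, NIR reflectance, and a NIR/red band ratio).
X = np.array([[0.12, 0.43, 3.58],
              [0.10, 0.51, 5.10],
              [0.15, 0.38, 2.53],
              [0.09, 0.55, 6.11]])
# Field-measured value of the target parameter Y at each sample site.
Y = np.array([182.0, 241.0, 150.0, 268.0])

f = LinearRegression().fit(X, Y)          # empirical approximation of f in Eq. 1.1
print(f.coef_, f.intercept_)              # fitted contribution of each predictor
print(f.predict([[0.11, 0.47, 4.27]]))    # Y estimated for a new set of predictor values
```

In practice f may equally be a simple band ratio, a physical model, or a machine learning regressor; the linear form is used here purely for brevity.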
Since imagery data contain only spectral information on the target parameter to be retrieved, the estimation of Y from such data alone may not always be feasible or sufficiently accurate. It is likely that Y is also related to other environmental factors, such as topography, which also exerts an effect on vegetation biomass. Thus, the Xi have been expanded to include environmental variables in sophisticated modeling
to predict Y using advanced machine learning algorithms. With the advent of LiDAR data, non-
imagery-based quantification makes use of multi-temporal data about the same target. The two-time
data may be acquired at a temporal separation as short as a few months or as long as a decade. This
quantification can be mathematically implemented as
Y = Δ(X, Y, Z) = (Xt1, Yt1, Zt1) − (Xt2, Yt2, Zt2)    (1.2)

where Δ = the difference in the 3D coordinates between time 1 (t1) and time 2 (t2), i.e., the quantitative change.
This differencing may be undertaken either for ∆(X,Y) or for ∆Z separately. The former applies to
the quantification of non-stationary targets whose horizontal position has shifted in the interim of
two data acquisitions, and is normally carried out to quantify the pace or rate of mobility. The target
itself does not change its identity or state, only its location; nor does its height, which is assumed to remain constant in the interim. Therefore, the difference is meaningful only in the horizontal position (X,Y),
or the displacement from (Xt1,Yt1) to (Xt2, Yt2) over the interval of the two data acquisitions. It can
be turned to the speed of motion, such as the downslope creeping rate of landslide debris using the
following equation:
Velocity = Δd/Δt = √[(Xt2 − Xt1)² + (Yt2 − Yt1)²] / Δt    (1.3)
where Δt = t2 − t1, the time lapse between the two data acquisitions.
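Eq. 1.3 amounts to a planimetric distance divided by the time lapse. A minimal sketch in Python (the coordinates are hypothetical and assumed to be in metres in the same projected system):

```python
import math

def displacement_velocity(x_t1, y_t1, x_t2, y_t2, dt):
    """Eq. 1.3: horizontal displacement between two acquisitions divided by the time lapse.

    dt is the time lapse t2 - t1 (e.g., in years), so the result is in metres per year.
    """
    displacement = math.hypot(x_t2 - x_t1, y_t2 - y_t1)   # planimetric distance moved
    return displacement / dt

# Hypothetical example: landslide debris displaced 12.4 m east and 3.1 m north in 2 years.
print(displacement_velocity(0.0, 0.0, 12.4, 3.1, 2.0))    # about 6.4 m per year
```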
In some cases, the third (Z) coordinate is differenced for features that have a relief, such as
topographic surface and tree height. This differencing effectively yields quantitative information
on surface erosion (or sediment deposition) or tree growth in relation to the bare ground or refer-
ence height. Such quantitative information may pertain to the subsidence rate of the ground or the
surface erosion rate, which is essential in quantifying the volume of sediment yield in a catchment.
Conceptually, the differencing on Z (or h) can be simplified as
ΔZ (Δh) = Zt1 − Zt2    (1.4)
This is based on the understanding that the two layers used for differencing have been co-registered
with each other to a sufficiently high accuracy.
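Conceptually, Eq. 1.4 is a cell-by-cell subtraction of two co-registered elevation grids. A minimal NumPy sketch (the grids, cell size, and values are hypothetical):

```python
import numpy as np

# Hypothetical co-registered elevation grids (metres) of identical shape and cell size,
# e.g., LiDAR-derived DEMs acquired at time 1 and time 2.
z_t1 = np.array([[102.3, 101.8],
                 [100.5,  99.9]])
z_t2 = np.array([[101.9, 101.8],
                 [100.1,  99.2]])

dz = z_t1 - z_t2                      # Eq. 1.4: positive values here indicate surface lowering
cell_area = 1.0 * 1.0                 # assumed 1 m grid, so each cell covers 1 m^2
volume_change = dz.sum() * cell_area  # e.g., eroded volume in m^3 over this small grid
print(dz, volume_change)
```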
1.1.3.2 Components
Quantitative remote sensing comprises four main parts: field data collection, remote sensing data
acquisition, data processing and analysis, and result validation, of which data manipulation and
transformation form the core. At first glance, these four essential components may not all seem indispensable, but they are vital to the success of quantification, especially image-based quantification.
The first component is commonly known as ground remote sensing during which vital data are
collected in the field. It is a preparatory step for model development and result validation. The
collected data serve as the bridge to link remotely sensed data with the actual value of the target par-
ameter at the time of sensing. In a sense, they benchmark remotely sensed data (and their transform-
ations) against the quantitative measure of the target parameter on the ground. In addition, ground
remote sensing also supplies the data needed to verify the quantified results. This topic is so complex
that it will be discussed in Section 1.2 separately later. Remote sensing data, either imagery or non-
imagery, are the primary source from which the target parameter’s value is retrieved. A large variety
of space-borne data has accumulated for quantitative remote sensing (see Chapter 2 for details).
Their properties vary widely, and the best data to use depend on the target of quantification. If the
remotely sensed data are acquired or to be acquired by satellites routinely, they can be purchased or
downloaded from the data supplier’s websites. If not, plans need to be made to fly over the area of
study, usually under calm weather conditions. In case of drone data, prior flight authorization must
be secured first (refer to Section 2.1.1 for more details).
The ultimate objective of data manipulation and analysis is to develop the estimation model,
through which the input data are translated to the quantitative value of the target. Of the four
components of quantitative remote sensing, it is the most complex involving several steps, one of
which is data preparation. It aims to transform the remotely sensed data into a usable format. This may
involve cloud removal from optical images, and spatial interpolation of LiDAR data to the desired
spatial resolution. Data preparation also includes unification of coordinate reference systems and
vertical benchmark for LiDAR data. If multi-temporal data are involved in a quantification, they
have to be calibrated and standardized to the same radiometric scale, and geo-referenced to the
same ground coordinate system if necessary. Radiometric calibration is performed on imagery data
to remove the atmospheric effects prior to formal quantification (see Chapter 3). The type of data
analysis and its complexity vary widely with the target of quantification and its nature. It may mean
simple division of one spectral band by another as in producing a vegetation index or transforming
the derived index into a new format via regression models. Once the estimation model is constructed
and deemed acceptable (as with the training samples), it is applied to the remotely sensed data to
produce a quantitative distribution map of the target parameter.
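As a sketch of this final step (assuming NumPy; the reflectance values and the fitted model coefficients are hypothetical), a simple index-based estimation model can be applied to every pixel to produce the distribution map:

```python
import numpy as np

# Hypothetical atmospherically corrected reflectance bands (rows x columns).
rng = np.random.default_rng(0)
red = rng.uniform(0.02, 0.20, size=(500, 500))
nir = rng.uniform(0.20, 0.60, size=(500, 500))

# Band transformation used as the predictor variable (a vegetation index).
ndvi = (nir - red) / (nir + red + 1e-9)

# Hypothetical estimation model fitted beforehand from in situ samples: AGB = a * NDVI + b.
a, b = 12.8, 0.7
agb_map = a * ndvi + b                # quantitative distribution map of the target parameter
print(agb_map.shape, agb_map.mean())
```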
Result validation does not impact the quantification outcome. Instead, it is merely an attempt to
attach a quantitative measure to the retrieved value, usually via independent samples collected in
the field that have not been used in model construction. The derived validation measures not only
indicate the reliability and accuracy of the quantification, but also allow the comparison of different
retrieval models (and model parameters) and the assessment of the effectiveness of the considered
input variables and data. Through the generated accuracy indicators it is possible to pinpoint the
factors that have adversely impacted the quantification results and enlighten us about how the quan-
tification may be improved in future. The actual implementation of validation and the expression of the validation outcome are such a broad topic that they require a full section (Section 1.4) to explain.
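As an illustration of the most common validation measures (a NumPy sketch; the observed and predicted values are hypothetical independent samples):

```python
import numpy as np

# Hypothetical independent validation samples (not used in model construction).
observed  = np.array([180.0, 240.0, 155.0, 265.0, 210.0])   # in situ measured values
predicted = np.array([172.0, 251.0, 148.0, 259.0, 220.0])   # model estimates at the same sites

error = predicted - observed
rmse = np.sqrt(np.mean(error ** 2))                          # root mean square error
bias = error.mean()                                          # systematic over-/under-estimation
r2 = 1 - np.sum(error ** 2) / np.sum((observed - observed.mean()) ** 2)
print(f"RMSE = {rmse:.1f}, bias = {bias:.1f}, R2 = {r2:.2f}")
```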
The general procedure of quantification comprises several sequential steps in transforming the
input remotely sensed data to the desired quantitative values of the target parameter. As illustrated in
Figure 1.1, this process can be described as data collection → data analysis → model construction →
model application → results validation sequentially. In terms of timing, only ground data sampling
and remote sensing data acquisition need to be concurrent. The synchronization of in situ sampling
with air-and space-borne imaging is especially important and imperative in quantifying ephemeral
features that change their properties or values within a short time, such as concentrations of in-water
constituents. For terrestrial features whose property or height does not experience noticeable tem-
poral changes, a discrepancy of a few days between in situ sampling and image acquisition will not
tangibly degrade the quantification outcome. If exact synchronization is not feasible, then the two
Introduction 11
FIGURE 1.1 General procedure of quantification from remotely sensed data involving data transformation
and validation to yield the accuracy indication. (Verrelst et al., 2015a, used with permission (5751050246505)
from Elsevier.)
should take place as closely as possible to minimize temporal variation in the target parameter value
or the impact of the changed atmospheric conditions on the quantification outcome. Sufficient in situ
samples must be collected at representative sites to build a sound model to bridge the two types of
data together. They also offer plenty of leverage in splitting them into two sizable parts randomly,
one used to construct the model, and the other for model validation. Once the model is deemed
acceptable, it is then applied to the remote sensing data to generate the spatial distribution of the
parameter value for the target of interest, followed by validation.
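The random split of in situ samples described above can be implemented, for example, with scikit-learn (a sketch with synthetic sample values; the 70/30 split ratio is only illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(120, 4))    # hypothetical predictor values at 120 sample sites
Y = rng.uniform(50.0, 300.0, size=120)      # hypothetical field-measured parameter values

# 70% of the samples to construct the estimation model, 30% held back for validation.
X_train, X_valid, Y_train, Y_valid = train_test_split(X, Y, test_size=0.3, random_state=42)
print(len(X_train), len(X_valid))           # 84 training samples, 36 validation samples
```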
1.1.4 Requirements of Quantification
The aforementioned quantification procedure cannot be successfully accomplished unless the
following four requirements are fulfilled:
(i) Ground data. Ground data may not show up in the final quantitative results directly, but they
are fundamental to assessing the accuracy of the intermediate land cover maps produced
from satellite images. How reliably the quantification has been achieved and what factors
have contributed to its inaccuracy are ascertained by comparing the quantified outcome
with some sort of ground truth and analyzing the two sets of data statistically. Although
quantification is based on remotely sensed data, remote sensing alone cannot fulfill all the requirements of quantification. Ground data play three vital roles in the quantification process. First, they serve as the bridge to link remotely sensed data with real-world
parameter values. Samples are essential to establish the relationship between sampled
ground properties and those on satellite imagery at the corresponding locations so that
point-observed properties can be upscaled to the spatial extent of pixels and extrapolated
to the entire area of study covered by the image. Without ground data, remotely sensed
results are just abstract numbers devoid of meaningful values attached to them. Second,
they are also needed to parameterize physical models. The parameters in semi-physical
models need to be properly tuned using ground data. Without ground data, their values
cannot be determined, and the models will probably yield vastly inaccurate simulation
results. Third, ground truth data are essential in evaluating the quality of the retrieved result
or in delivering its quality assurance. Validation is commonly accomplished using the in
situ samples not used in constructing the estimation model to attain independence.
(ii) Powerful analytical algorithms and computing systems. In order for the quantification to
be successful, there must be powerful computing packages to mine the input data and to
determine the most effective predictors to be included in the prediction model from a large
pool of remotely sensed and other auxiliary data. This task becomes increasingly important
if more variables are considered in the retrieval and they are analyzed using sophisticated
machine learning algorithms. Although diverse metrics can be generated from remotely
sensed and auxiliary data with the assistance of machine learning algorithms, not all of
them are useful or equally effective in predicting the dependent variable. It is hence a prerequisite to identify the most reliable parameters or metrics, and the satellite images (pixel values) and their transformations, for certain features such as tree biomass, so as to minimize the computation cost. Even tree heights cannot be linked directly to a LiDAR point cloud, so various height metrics are derived from the LiDAR data and compared among themselves to see which one allows the dependent parameter to be modeled most reliably. Without powerful computation, it is impossible to identify the most relevant predictor variables and attach an importance value to each of them (a brief sketch of this predictor-screening step is given at the end of this section). Quantification is feasible only when a reliable relationship between the imaged or remotely sensed values and the target quantity to be quantified can be established, usually through powerful computing packages or the machine learning and deep learning algorithms to be covered in Section 4.3. These also enable the target parameter to be quantified at higher accuracy.
(iii) Target visibility. In order for a target to be quantified from remotely sensed data, first and
foremost, it must be visible on the images or have some traces lingering in the remote
sensing data either directly or indirectly. Thus, nearly all quantifiable features must lie
on or above the ground, or be suspended in a medium of a sufficiently high transparency.
For vegetation, quantification is limited to aboveground features visible from images (e.g.,
not roots). Quantification of below-ground biomass is still possible only indirectly via its
relationship with AGB. As for in-water constituents and soil contents, their quantification
is confined to the skin layer through which the solar radiation used for sensing is able to
penetrate. Therefore, the quantified surface concentration may not reflect the whole 3D dis-
tribution of the feature of interest, such as the vertical distribution of sediments suspended
in a lake. For the quantification of atmospheric parameters, their concentration is usually
integrated over the entire air column from the sensor all the way down to the ground surface
as it is difficult to separate them into layers at high accuracy from space.
(iv) Precise positioning. All quantifications from remotely sensed data are inherently spatial
in nature. In order to produce a field view of the quantified parameter, all field samples
collected on the ground must be precisely geo-referenced. Sample positional information
plays three important roles in the quantification. First, it enables the association of in situ
sampled attribute values with their corresponding image properties. Through their loca-
tion on the image, a relationship between image properties and in situ measured quality
is established. Second, they enable the geo-referencing of images and the mosaicking of
multiple UAV images to cover a large ground area. Finally, they enable multi-temporal
remote sensing data to be co-registered with each other. Image co-registration is vital to
dynamic quantification. Equally critically, the images must also be geo-referenced to the
same system using onboard Global Positioning System (GPS)-logged positional and orien-
tational data, and the two systems must match so that the two sets of data can be overlaid
with each other spatially.
The location of the ground samples collected is usually determined via a GPS unit. Needless to say,
the accuracy of the logged GPS coordinates also impacts the accuracy of quantification, especially
when the target of quantification is spatially heterogeneous and the image used has a fine spatial
resolution. A smaller pixel size means less room for geo-location inaccuracy as the in situ sample
could correspond geographically to the neighboring pixels instead of the correct one.
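Associating each geo-referenced field sample with its corresponding pixel values can be done, for example, with the rasterio library (a sketch; the file name and coordinates are hypothetical, and the GPS coordinates are assumed to have already been transformed into the image's coordinate reference system):

```python
import rasterio

# Hypothetical field sample coordinates (easting, northing), already in the image's CRS.
sample_xy = [(1748230.5, 5912474.2), (1748410.0, 5912350.8)]

with rasterio.open("calibrated_image.tif") as src:        # hypothetical calibrated image
    for (x, y), band_values in zip(sample_xy, src.sample(sample_xy)):
        print(x, y, band_values)                          # pixel values of every band at the site
```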
It must be noted that even if all the necessary requirements are met, successful quantification is not always guaranteed because of the limitations of the remote sensing data, be they imagery (e.g., poor spatial resolution) or non-imagery LiDAR (e.g., targets in LiDAR shadow), or because of the subtlety of the target parameter (e.g., indistinct from other features). Only when the target parameter's signal
exceeds the minimal radiometric resolution of the image used can it be successfully quantified to a
credible accuracy.
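The predictor-screening step referred to under requirement (ii) above can be sketched, for example, with a random forest's variable importance (assuming scikit-learn; the candidate predictors and their values are synthetic and purely illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
# Hypothetical candidate predictors: two bands, an index, two LiDAR height metrics, and slope.
names = ["red", "nir", "ndvi", "hmax", "h95", "slope"]
X = rng.uniform(0.0, 1.0, size=(200, len(names)))
Y = 3.0 * X[:, 4] + 1.5 * X[:, 2] + rng.normal(0.0, 0.1, 200)   # synthetic target parameter

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, Y)
for name, importance in sorted(zip(names, rf.feature_importances_), key=lambda p: -p[1]):
    print(f"{name:6s} importance = {importance:.2f}")   # rank the candidate predictors
```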
Fixed-location observations are limited by their spatial sparsity and their distance from the area of study, and are unable to provide adequate ground truth in most cases. Samples therefore have to be collected at ad hoc positions in the field. They may be distributed in the area of study randomly, systematic-
ally, clustered, or stratified (Figure 1.2). Of the four, random sampling is the most popular (Figure
1.2a). It can ensure a wide spatial distribution of samples all over the area of study. Systematic
sampling guarantees that all the collected samples are widely distributed all over the area of study
at a uniform spatial interval (Figure 1.2b). This strategy may prove impractical in the field due to
site accessibility. In comparison, clustered sampling has a narrow scope of application, and is not
used widely (Figure 1.2c). Stratified sampling is the best and the most efficient sampling strategy
in terms of balancing sample size and sample reliability to yield an unbiased sample (Figure 1.2d).
In this strategy, the study area is partitioned into a number of sub-areas of a uniform shape and size,
and a specific number of samples are then collected from each of them. The distribution of samples
within each sub-area can be random or clustered. Stratified random sampling guarantees that the
collected samples are widely distributed over the entire area of study, and hence are geographically
representative. Apart from geographic stratification, samples can also be collected via thematic
stratification, in which a specified number of samples having a pre-defined attribute value is selected, usually randomly. Such thematic stratification can guarantee the selection of samples with the pre-determined values. Which method is the best and should be used is governed by the nature of the target parameter and its spatial distribution.
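As a simple illustration of the difference between random and stratified random sampling (a NumPy sketch over a hypothetical 1 km by 1 km study area):

```python
import numpy as np

rng = np.random.default_rng(7)
xmin, ymin, xmax, ymax = 0.0, 0.0, 1000.0, 1000.0     # hypothetical study-area extent (m)

# Simple random sampling: 100 points anywhere within the study area.
random_pts = rng.uniform((xmin, ymin), (xmax, ymax), size=(100, 2))

# Stratified random sampling: a 5 x 5 grid of equal sub-areas, 4 random points in each.
stratified = []
for i in range(5):
    for j in range(5):
        x0, y0 = xmin + i * 200.0, ymin + j * 200.0
        stratified.append(rng.uniform((x0, y0), (x0 + 200.0, y0 + 200.0), size=(4, 2)))
stratified_pts = np.vstack(stratified)                # 100 points, at least 4 per sub-area
print(random_pts.shape, stratified_pts.shape)
```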
Spatially, field samples should be scattered as widely as possible across the study area to increase
their geographic representativeness. A well-balanced and widespread distribution all over the study
area increases the robustness of the estimation models between environmental parameters and image
properties. A high level of geographical representativeness is guaranteed if the collected samples are
widely scattered all over the area of study. A more representative sample distribution likely ensures
that all the possible values of the target parameter are captured by the collected samples. A larger
range of parameter value is conducive to a more reliable and applicable estimation model being
established. Its application to all the pixels in the input image is virtually a process of interpolating
the parameter value from pixel values. Conversely, if the parameter value at a given pixel falls out-
side the range of the sampled values, then the application of the model to this pixel represents a case
of extrapolation, which produces a far less reliable estimate than interpolation, or even an erroneous
outcome. For instance, the constructed biomass model of grassland can produce negative AGB
values if applied to bare ground pixels of no vegetation.
In certain applications, none of the sample distributions in Figure 1.2 can effectively capture the
spatial pattern of the target parameter. Even a sufficiently large sample size of randomly distributed
points may fail to encapsulate subtle elevational variations along certain directions, as in quanti-
fying erosion of sand dunes in foreshore coastal areas (Figure 1.3a). This may be remedied by
sampling elevations linearly along transects. A reliable picture of dune erosion can be established
FIGURE 1.2 Four strategies used in collecting spatially representative samples in quantitative remote
sensing. (Gao, 2022.)
FIGURE 1.3 Three types of sampling units and/or sample distribution. (a) (random) point sampling;
(b) transect sampling ideal for studying elevational profile at strategic positions. The sampling points are
distributed along a transect proportionally to surface elevation variability; and (c) plot sampling to collect the
target enclosed by a square or circle that can be conveniently subdivided into quadrants (e.g., half or a quarter
of a plot). (Gao, 2022.)
by recording elevations along a number of strategically located, parallel transects, all perpendicular
to the shoreline (Figure 1.3b). Along each transect, elevations are sampled at critical spots and at variable intervals proportional to the surface complexity, with more samples collected in highly variable
sections of a transect. Such distributed samples can reveal the change in beach morphology econom-
ically. Similarly, in quantifying coastal bathymetry and sediment concentration in a channel, all
sample points should be distributed linearly either perpendicular to the shoreline or parallel to the
channel median.
In considering sample representativeness, attention should also be paid to sample site acces-
sibility. It is challenging or even impossible to collect samples that are located at inconvenient
or inaccessible sites, such as in the middle of a swamp. If relevant, the vertical distribution of
samples also deserves consideration, for instance in studying the vertical distribution of vegetation
over a mountain range. Equally, the slope aspect at which vegetation is distributed may also need
to be taken into consideration in sampling design. Depth of samples is another consideration in
studying the vertical distribution of parameter values, such as soil moisture and suspended sediment
concentrations at different depths. Depth must be factored in if the target parameter has a vertically
non-uniform value and the sensing radiation is able to penetrate the target of study. Ideally, samples
should be collected down to the maximum depth at which the signal of the target parameter can still be recorded by the remotely sensed data. Thus, a larger sampling depth is appropriate with optical imagery
data, especially VIS (visible light) data, in quantifying bathymetry and suspended sediment concen-
tration. However, when it comes to temperature, the sampling depth should be confined to the skin
depth due to the absorption of TIR radiation by water, whereas ocean wave height has to be sampled
on the surface.
1.2.1.2 Sampling Unit
Once a suitable sampling position has been settled, the next consideration is the topological dimen-
sion of the sampling site or sampling unit. Basically, there are two fundamental types of sampling
units, point samples and plot samples. In point sampling, the parameter value is measured at a
particular location (Figure 1.3a). In plot sampling, the attribute value of the target parameter is
enumerated over an area (Figure 1.3c). Whether samples should be collected at a point or over a
plot in the field depends solely on the nature of the parameter to be sampled. Point sampling is the
default choice if the parameter value is observable at a site whose location can be expressed by
a pair of Cartesian coordinates (x, y). These parameters include, but are not limited to, soil
moisture, pH, salinity, organic matter content (SOC), elevation, temperature, and even in-water
sediment concentration determined from point-collected water samples. Fundamentally, all targets
of quantification that are observable and measurable at points in the field must be sampled as point
features.
Point sampling is relatively easy to accomplish as only one measurement is needed to yield
the quantitative result. However, this is much easier said than done for ephemeral targets, such as
atmospheric pollutants at different elevations. The collection of point samples is relatively easy in
the terrestrial sphere in comparison with its aquatic counterpart, even though it may still be subject
to site inaccessibility imposed by natural barriers such as river channels and wetlands. In oceano-
graphic sampling, the sampling outcome is severely compromised by the sea conditions that, in turn,
are controlled by the weather conditions, and the cruising ship’s position and orientation in the open
sea. The measuring ship itself may disturb the target through its wake, as in phytoplankton sampling, and may introduce an additional influence on the signal (e.g., secondary reflection off the ship onto the target).
For certain biophysical variables, such as vegetation biomass and plant carbon density, point sampling is wholly inadequate, as these quantities are available and meaningful only when enumerated over a spatially aggregated unit. Their sampling unit must therefore be expanded to encompass an area known as a sampling plot. Plot sampling is the norm for attributes that do not exist at points (Figure 1.3c).
Conceptually, plot sampling is fully compatible with remote sensing data because pixel values are derived from the radiative energy originating from an area on the ground; the spectral properties recorded on a satellite image are likewise enumerated over a
square-shaped ground area. Theoretically, sampling plot size should be commensurate with the pixel
size or spatial resolution of the image to be analyzed. However, the exact match of the two sizes
may not matter much if the attribute value of the target parameter under study is spatially uniform or
roughly scale-invariant (i.e., scale-independent), such as grassland AGB. Compared with grassland,
trees are much more heterogeneous spatially in their stature, species composition, and spatial distri-
bution. Naturally, a much larger plot size should be adopted in sampling parameters of a higher spa-
tial heterogeneity. In practice, the nominally square footprint of image pixels is more likely circular (or even elliptical) owing to off-nadir scanning. Correspondingly, the sampling plot should have a square or circular shape of a certain size (e.g., a radius of r = 1.5 m). Compared to circles, squares are
preferable as they can be conveniently and precisely partitioned into four equal quadrants with the
assistance of the two diagonal axes (Figure 1.3c). This partitioning is quite useful and particularly
important in assessing the change of dynamic parameters (e.g., biomass) in a longitudinal study. For
instance, biomass can be harvested in one of the four quarters of the sampling plot in the first year,
and any of the three remaining quarters in subsequent years. In the field, plot sampling location is
usually finalized by tossing a ring into the air randomly. Wherever it lands is then sampled. This way
of determining the sample location ensures both randomness and independence of spatial samples.
Compared with point sampling, plot sampling takes much longer to complete and is more sub-
jective as the results vary with the size of the sampling plot, especially when the target parameter to
be sampled has a high degree of spatial heterogeneity. A small plot means less fieldwork, as the grasses can be clipped very quickly, and individual measurements tend to be more consistent; however, a plot that is too small may not be representative. Conversely, a large plot can be more representative, but the amount of fieldwork grows roughly in proportion to plot area. Thus, a delicate balance must be struck between
sample plot size and sufficient representativeness.
Once sampling is completed, it must be followed by the recording of the sample location.
Depending on the measuring device used and the nature of sampling, this recording can be
synchronized with sample data collection. In the case of plot sampling, the plot’s centroid is logged
as the position of the collected sample. In order to be linked to pixels in a geo-referenced image,
their precise location must be expressed in the same coordinate system as that of the remotely sensed
data as accurately as possible, usually with the assistance of a GPS unit. GPS receivers vary vastly
in their functionality, number of channels, and hence accuracy. Irrespective of the receiver used, positioning at a fixed location is generally more accurate than single rover readings, because multiple positions can be logged at the same sampling site and averaged to cancel out the random error, especially if the coordinates have also been differentially corrected. Accurate positioning
is particularly critical as the coordinates are the only clues for linking the sampled parameter values
to their spectral properties in remote sensing imagery, and when the attribute value varies drastic-
ally within a short spatial range. Inaccurate positions of the collected samples may cause the in situ
sampled attribute values to be misaligned with the spectral properties of the neighboring pixels.
Inevitably, this misalignment between in situ samples and their corresponding spectral values on the
satellite image degrades the reliability of the estimation model and ultimately the quantified results.
Typical positional accuracy used to be around 3–5 m, but can now be reduced to the sub-meter level after post-processing, subject to the logging environment. More accurate positioning is possible with Kalman filtering of the logged data, but may not prove proportionally beneficial if the satellite image in use has a spatial resolution of a few meters (e.g., ~3 m). In practice, a positioning accuracy of in situ samples comparable to the spatial resolution of the remote sensing data is adequate, as it ensures that the sample position on the image coincides with the position actually logged on the ground. The misalignment of the logged and genuine positions of samples
by a few meters, however, is inconsequential if the image has a spatial resolution on the order of
tens of meters, as with Earth observation or resources satellite images, or if the sampled attribute is
spatially continuous with little local variation. A shift in sampling position by a few meters on the
ground will not destroy the correspondence of the sample to the correct pixel value on the image.
However, caution needs to be exercised to extrapolate this claim to drone images that have a spatial
resolution on the order of centimeters. In this case a discrepancy of 1 m on the ground will cause a
misalignment of the sample to its corresponding true location on the image by a few pixels. Thus, a
high positioning accuracy is a highly crucial prerequisite for minimizing subsequent quantification
inaccuracy with drone images.
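The averaging of repeated fixes mentioned above can be sketched as follows; the coordinates and the number of logs are illustrative, and the approach only reduces the random (not systematic) component of the positioning error.

```python
import numpy as np

def average_fix(eastings, northings):
    """Average repeated coordinate readings logged at one sampling site.

    eastings, northings : projected coordinates (e.g., UTM metres) from
    repeated logs at the same point; averaging cancels much of the random
    positioning error but not any systematic bias.
    """
    return float(np.mean(eastings)), float(np.mean(northings))

# e.g., average_fix([314502.1, 314503.0, 314501.6],
#                   [5924110.2, 5924109.5, 5924110.9])
```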
field measurements are more authentic, but the measured results are subject to the influence of sev-
eral environmental factors, such as solar elevation and azimuth, changing solar intensity during the
measurement (especially when repeated measurements need to be carried out for multiple targets
over a large area), and the ambient settings of the target.
In the field, spectral measurements are usually taken using a portable hand-held multispectral
or hyperspectral spectrometer. A field spectroradiometer has a typical spectral sensitivity range of
325–1075 nm and a spectral resolution of 3 nm. For instance, the ASD FieldSpec 4 Hi-Res NG spectroradiometer (Analytical Spectral Devices, Boulder, CO, USA) is able
to measure the hyperspectral reflectance and transmittance of surface sediments, soils, plants, water
bodies, and artificial targets at 1,875 wavelengths over the 350–2500 nm spectral range. The spec-
tral resolution ranges from 3 nm in the VNIR (350–1000 nm) region to 6 nm in the SWIR (1000–
2500 nm) region. The instrument has a 25° field of view (FOV), giving a viewing area of 58 cm in diameter at the canopy level.
In the field, the sensor head of the spectroradiometer should be mounted on a pole and oriented
squarely toward the target (e.g., the nadir viewing direction), centered over the area of measure-
ment to replicate the manner of airborne and space-borne sensing. If the target is a crop field, the
device should be stationed at 1.3 m above the ground (e.g., mounted on a commercial tripod). If
the measured canopy is high, then the device should be hoisted some height (e.g., 40 cm) above
the target (Figure 1.4). Spectral reflectance at the canopy level is much more difficult to measure
as the hyperspectral radiometers must be mounted on an all-terrain sensor platform (Rundquist
et al., 2014). Both the downwelling and upwelling radiance of the canopy should be measured to
calculate its reflectance, which is best accomplished using a dual-radiometer system connected
with an optical fiber. One of them equipped with a 25° FOV optical fiber is positioned downward
to measure the upwelling radiance of the target. The other is oriented skywards to simultaneously
measure the incident irradiance of the target. The down viewing spectrometer must be elevated
to a sufficient height a few meters above the canopy to cover a sizable sample area, such as 5.4
m above the canopy, resulting in a sample area with a diameter of 2.4 m. A reflective reference
panel is needed to calibrate the measured radiance, and each measurement is made in two steps: first, the radiance reflected off the panel is recorded, and then the radiance reflected off the target. The measured radiance and irradiance are finally converted to reflectance expressed as
a ratio or percentage.
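The conversion from the two-step measurement to reflectance can be sketched as follows, assuming a near-Lambertian reference panel of known calibrated reflectance; the names and the panel value are illustrative.

```python
import numpy as np

def target_reflectance(l_target, l_panel, panel_reflectance=0.99):
    """Convert radiance measured off the target to reflectance (0-1).

    l_target : radiance spectrum measured off the target
    l_panel  : radiance spectrum measured off the reference panel
    panel_reflectance : calibrated reflectance of the (near-Lambertian) panel
    """
    return panel_reflectance * np.asarray(l_target, float) / np.asarray(l_panel, float)

# e.g., two synchronous spectra at the same wavelengths
# rho = target_reflectance(canopy_radiance, panel_radiance)
```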
Regardless of the target, all spectroradiometric measurements should take place under clear sky
conditions close to solar noon between 11:00 and 13:30 local time, when there are minimal changes
in the solar zenith angle and the solar radiation is at its peak stability. The measurements may be
repeated several times (e.g., >10) at a given spot. All the spectra measured at the same site are
averaged to yield the final site reflectance spectrum. Field-measured spectral reflectance curves of
the target have been analyzed to demonstrate the feasibility of remote sensing quantification. In a
strict sense, it is not the quantification this book is about because it simply generates non-spatial
information on the best spectral ranges or bands to use and illustrates how they should be processed
to maximize the disparity of the quantitative value of the target. It is of limited utility as spectral
measurements are undertaken at specific spots and they fail to yield spatial distribution of the quan-
titative value of the target because no imaging data are involved. No matter how many spots are
measured, they are at most point-based, unable to yield a field view of the quantified parameter
value, the biggest strength of remote sensing. More importantly, the feasibility may not translate into
reality due to the impact of the atmosphere and the ambient sensing environment. Even if ground
spectral measurements enable the target to be quantified at a satisfactory accuracy, there is no evi-
dence to suggest that similar results can be replicated from actual space-borne data as the spectral
resolution of available satellite images may not match that of the hand-held spectrometer. Besides,
space-borne remote sensing data are prone to degradation in quality by the atmosphere. In reality,
the target also interacts with its surrounding environs that further complicate the quantification, or
FIGURE 1.4 Field spectral measurement of the vegetative canopy using a spectroradiometer that must be
hoisted some height above the target using a pole. (Ouyang et al., 2013, open access.)
even compromise the feasibility of quantification. Additionally, the actual concentration of the target parameter may produce a spectral signal whose strength falls below the radiometric resolution of satellite images and hence below the minimum quantifiable level.
Soil moisture may be measured using a variety of meters via a probe inserted into the soil at a
few spots in a sampling plot, and the average reading is used as the final measurement for this plot.
In the measurement, the probe should be inserted into the topsoil layer only, as only this layer's moisture can be remotely sensed. Soil salinity expressed as the resistivity of soil may be measured using an earth
resistivity meter in combination with an electrical conductivity (EC) probe, with readings dependent
on soil structure and texture, moisture content, and the salinity of the ground water. Soil salt content
is much harder to measure, and has been quantified using only spectral data.
SOC can only be determined by sampling the soil near the surface (e.g., 0–10 cm), usually at a
few spots in the same sampling plot. The separately collected samples are mixed to form one com-
posite sample representing that particular plot. The samples are then taken to the laboratory where
they are air- or oven-dried, gently crushed, ground if necessary, and sieved at 2 mm. Total SOC
may be analyzed using the VarioMax C-N Analyzer. It requires the soil samples to be dry combusted
at 950°C, and the measured result is equivalent to total carbon if the soil parent material is non-
calcareous. Soil oxidizable carbon may be determined using the Walkley–Black method by treating
the samples with 10% HCl. If they show reaction, their carbonate content is analyzed using the
pressure-calcimeter method. Then, SOC is calculated as the difference between total carbon and the
inorganic carbon content. Alternatively, the SOC content may be measured using a stable isotope
mass spectrometer, and the outcome is expressed in g·kg⁻¹ or as a percentage (%).
Soil heavy-metal content is much more difficult to analyze in the laboratory than SOC because of its minute concentrations. The collected soil samples may be processed similarly to those for SOC. In addition, they may be ground and homogenized, then pressed in a 32 mm mold into a tablet under 30-ton pressure. Commonly quantified heavy metals in polluted soils include Cu, Pb, As,
Hg, Cd, and Cr. Ca, Mg, and Pb have been measured using reflectance or atomic-absorption spec-
troscopy because of their differential absorption behavior. Other measurement methods include
inductively coupled plasma-atomic emission spectrometry, flame atomic-absorption spectrometry
(spectrophotometry), or SPECTRO xSORT X-ray fluorescence for determining the Cu and Pb
contents. As (arsenic) concentration can be analyzed using the silver diethyldithiocarbamate photometric method, while the Hg content can be measured using atomic fluorescence spectrometry. For these methods to work, the soil samples must first be acid-digested (HCl–HNO3–HClO4) on an electric heating board. They may also be pre-processed by adding aqua regia (a 3:1 ratio of HCl to HNO3) to decompose the greater part of any heavy metals present in the soil samples.
1.2.3.1 LAI
LAI is defined as the leaf area vertically projected from the canopy onto the horizontal ground per unit of fully vegetated area. It is indicative of the surface area available for plants to exchange light, water, and CO2 with the environment. This dimensionless index quantifies the amount of leaf area in the canopy per unit ground area, and is expressed as a ratio with a value typically between 0 and 10. A value of 1 means an amount of leaf area equal to the ground area. LAI can be measured using
either direct or indirect methods (Chason et al., 1991). Based on the leaf area per unit dry leaf mass (i.e., the specific leaf area), the direct method is expensive, laborious, inefficient, and destructive as the plants have to be
harvested within a sampling plot for later indoor measurement based on fresh and dry matter mass.
LAI is calculated from the total dry matter mass of the harvested sample and the specific leaf area.
This method is rather restrictive and time-consuming, and is suitable for grassland only. In contrast, the indirect or semi-indirect method is non-destructive, as the measurement is based on the amount of light transmitted through or reflected off the target, and can be implemented easily and accurately using several instruments, such as the LI-COR LI-3000C leaf area meter and the LI-COR LAI-2200 and CI-100 plant canopy analyzers. The LI-3000C measures leaf area on
intact plants via rectangular approximation. It comprises two parts, a scanning head and a readout
control unit (console) that are connected to each other via a fiber optic cable. Data concerning
leaf length, mean width, maximum width, area, and accumulated area are logged as the leaf is
scanned by the head. Their readings are summed in a secondary register and stored locally in the
console, or transferred to a computer via an interface software that permits real-time data collection
in the field. This device can measure LAI at the leaf level only. Canopy-level measurements can
be gathered using the LI-COR LAI-2200 analyzer. It scans leaves with its sensor head projecting a
nearly hemispheric view onto five concentric silicon ring detectors (Peper et al., 1998). The optical
sensor is connected to a data logger that records ring detector readings of above-and below-canopy
light conditions using built-in software. The LAI-2000 canopy analyzer measures transmittance at five zenith angles simultaneously; each elementary sampling unit is assigned one LAI value, obtained as the statistical mean of multiple measurements (including multiple data readings and a few replicates), from which LAI and average leaf angle are estimated together with their standard errors (Verrelst et al., 2015b).
Ideally, LAI should be measured under conditions of uniformly overcast skies to preclude under-
estimation. Alternatively, measurements may be taken at sunset or sunrise. If measurements are
obtained during daylight on sunny days, the lens must be shielded using a 90° view cap to restrict
direct sunlight from striking the optical sensor. In measuring LAI in the field, the instrument should
be deployed at approximately 0.9–1.3 m below the bottom of the tree crown at easily accessible
locations. All measurements are taken at the base of each tree bole by leveling the instrument’s
probe placed on top of a stake in each cardinal direction, 20 cm from the base of the bole and 15 cm
above the ground (Darvishzadeh et al., 2019). The standard protocol is to log one reference (above
the canopy) reading in the nearest open field, and five below-canopy readings in each cardinal dir-
ection for a full tree crown in a plot. The below-crown readings are recorded at 90° to each other.
The measurement is repeated after the crown is reduced by ten 0.016 m3 samples of leaves, and
finally after the removal of a total of twenty 0.016 m3 samples (Peper et al., 1998). Throughout the
measurement session, illumination conditions for the above- and below-canopy readings should be kept as constant as possible.
As its name implies, the CI-100 Plant Canopy Analyzer does not measure LAI directly. Instead, it
generates the LAI measurement by analyzing high-resolution images taken with a digital camera
equipped with a 150° fish-eye lens positioned at the end of a probe to scan plant canopies. Up to 32
measurements can be acquired in the field before downloading the images. They may be adjusted
based on the crown or canopy being measured. The instrument is operational under sunny, cloudy,
or partly cloudy sky conditions. The fraction of sky (solar beam transmission coefficient) visible in
each sector is analyzed from the images after they have been partitioned into a user-defined number
of zenith and azimuthal divisions by tallying the blue-colored pixels in a sector. After all sectors
have been analyzed, solar beam transmission coefficients are averaged by zenith division, from
which LAI is calculated for the selected zenith angles instantly, together with PAR and extinction
coefficient.
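The gap-fraction principle that underlies these canopy analyzers can be illustrated with the Beer–Lambert approximation below; the extinction coefficient of about 0.5 assumes a spherical leaf-angle distribution, and this sketch is not the instruments' internal algorithm.

```python
import numpy as np

def lai_from_gap_fraction(gap_fraction, k=0.5):
    """Approximate LAI from canopy gap fraction via the Beer-Lambert law.

    gap_fraction : fraction of sky visible through the canopy (0-1)
    k            : extinction coefficient (about 0.5 for a spherical
                   leaf-angle distribution)
    """
    gf = np.clip(np.asarray(gap_fraction, dtype=float), 1e-6, 1.0)
    return -np.log(gf) / k

# e.g., a gap fraction of 0.2 corresponds to an LAI of about 3.2
```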
Canopy and solar radiations may be analyzed using WinSCANOPY (Regent Instruments Inc.,
Quebec, Canada) and HemiView (Delta-T Devices Ltd, Cambridge, UK). They take colored
hemispherical images as the input, from which LAI, gap fraction, canopy openness, site factors,
Normalized Difference Vegetation Index (NDVI), and much more are derived without requiring
above-canopy measurements. The images are taken by a digital camera with a calibrated fish-eye
lens of a narrower view angle in the field, and processed externally using specific software to derive
leaf-angle distribution and mean leaf angle, angular distribution of gap frequencies, and site factors
(direct, diffuse, and global) (Bréda, 2003). These packages can also predict radiation values beneath the canopy.
Most of the outputs are available by sky sector or aggregated into a single overall whole sky or
annual value.
Above-canopy measurements are generated by positioning the device 0.5 m above the canopy, or
0.15 m above the ground to measure under-canopy PAR. In measuring planted crops, both ends of
the probe sensing part should aim at the middle position between rows and the probe midpoint at the
top of plant row (Tan et al., 2018). In either measurement, the sensor should be horizontally leveled
as judged by the midway position of the spirit level bubble.
1.2.3.3 Chlorophyll Measurement
Chlorophyll content (CC) is one of the critical leaf biochemical properties affecting photosyn-
thetic activities. Chlorophyll pigments are valuable for plants’ energy conversion. Leaf chlorophyll
effectively manifests plant growth and nutritional state. Changes in plant CC are indicative of plant
health and environmental stress. This biochemical variable is essential for quantifying carbon and
water fluxes, primary productivity, and light use efficiency (LUE). Chl-a can be measured using
several instruments at the leaf level in the field, including FieldScout CM 1000 Chlorophyll Meter,
CCM-300 Chlorophyll Content Meter, LI-600 (600N), and SPAD-502Plus. The CM 1000 can easily, quickly, and directly determine the chlorophyll content of plant leaves and turf grass using a “point and shoot” method. It functions by measuring ambient and reflected light (700–840 nm) and calculating a relative chlorophyll index over a conical viewing area at a distance of between 30 and 180 cm from the lens. Since the coordinates of the samples have to be logged, GPS coordinates can readily be stamped onto the readings by connecting a GPS device through the CM 1000 software (ST-2950S). The CCM-300 can
measure CC in plants and crops accurately, reliably, repeatably, readily, and non-destructively.
It is the most efficient meter owing to its ample onboard data storage, integral GPS module, and
portability (e.g., hand-held design). With its integral GPS, it can store 160,000 datasets and perform
data averaging.
The LI-600 and LI-600N are compact porometers with pulse-amplitude-modulated (PAM) fluorometers
that simultaneously measure stomatal conductance, Chl-a fluorescence, and leaf angle over the
same leaf or needle for various leaf sizes and morphologies, including many needles and narrow
leaf grasses. They are embedded with a GPS receiver to track sampled location and an accelerom-
eter/magnetometer to log the data essential to compute the leaf’s angle of incidence to the sun based on
measured heading, pitch, and roll, and record 3D coordinates in seconds. While LI-600 is ideal for
quickly measuring Chl-a fluorescence and stomatal conductance on plants in ambient conditions,
LI-600N does so only in light-controlled environments. LI-600 can also rapidly screen up to 200
samples per hour to identify candidates for detailed measurements. It is easy to use but has only the
basic configuration options. In contrast, LI-600N allows multiple independent controls, including
light, CO2, H2O, and temperature. When used jointly, LI-600 and LI-600N yield highly complemen-
tary data. For example, the LI-600 and LI-600N can be used to screen a large population and the
LI-6800 can be used to measure selected individuals from that population in more detail.
The SPAD-502Plus is a compact, portable, lightweight instrument for quickly and readily measuring the CC of leaves without damaging them, and has been widely used to optimize the timing and quantity of fertilization to improve crop yields.
Instead of yielding the desired actual amounts of chlorophyll per unit area of leaf tissue directly,
the SPAD-502 meter provides the data only in arbitrary units. Thus, the measured results have to
be standardized to determine the amount of chlorophyll in a leaf sample. The SPAD readings are transformed to absolute leaf CCs according to a published calibration equation (Markwell et al., 1995).
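As an illustration, the calibration commonly quoted from that study can be coded as below; the exponent applies to the soybean and maize leaves examined by Markwell et al. (1995), and other species require their own calibration.

```python
def spad_to_chlorophyll(spad_reading):
    """Convert a SPAD-502 reading to leaf chlorophyll content (umol m-2).

    Uses the exponential calibration commonly quoted from Markwell et al.
    (1995): Chl = 10 ** (SPAD ** 0.265). The coefficients are specific to
    the species examined in that study and are given here for illustration.
    """
    return 10.0 ** (spad_reading ** 0.265)

# e.g., a SPAD reading of 40 corresponds to roughly 450 umol m-2
```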
Apart from direct measurement, leaf CC can also be determined via spectral measurement
(R750–800/R710–730 − 1) using a portable spectroradiometer at 380–2500 nm normal to the canopy with
an FOV of 25° and a distance of about 1.3 m above the ground (Gitelson et al., 2005). This leaf-level result can be upscaled to the canopy level via LAI.
If measured at the leaf level, crop leaves must be collected in situ during the growing season.
In the case of trees, sample leaves (needles) are collected from two to three branches of representative trees in each sampling plot. In order to ensure spatial representativeness, a certain number of leaves (e.g., 20) must be randomly selected from each plot. The plots must total around 30 and be of a sufficiently large size (e.g., 90 m × 90 m) to ensure the existence of a pure pixel in each plot. Multiple
measurements are usually taken and averaged to iron out the random influence of the ambient envir-
onment. In each plot, a crossbow (e.g., Excalibur Matrix 310 crossbow) may be used to sample
leaves/shoots from the top-of-canopy mature sunlit part. An average of two to three sunlit leaves
are collected from a branch by shooting at it with an arrow attached to a fishing line (Ali et al.,
2016). The CC of the leaves/shoots is immediately measured using one of the aforementioned CC meters and averaged to determine the leaf CC per plot. The canopy-level CC of each plot is then calculated by multiplying the plot’s average leaf CC by its LAI.
1.2.3.4 Biomass Measurement
Plant biomass can be classified as aboveground and below-ground. AGB refers to the biomass stored
in live plants growing above the ground that is visible to the human eye. It includes the biomass
of trunks, branches, twigs, and leaves of plants, but excludes roots and dead wood, in contrast to
below-ground biomass. As indicated in Section 1.2.1.2, the collection of AGB is always based on
plot sampling. In addition to being arduous and time-consuming, plot sampling can also be destruc-
tive as the grasses must be clipped to determine their weight and fresh AGB. The aboveground live
plant material in each sampling plot is harvested, usually with the assistance of a pair of scissors. The clipped fresh grass is weighed on site. In the laboratory it may be necessary to sort the fractions of leaves and stalks and weigh them separately. The collected plant material may be desiccated by drying at 105°C in an oven for 24 h, and weighed again to determine the dry biomass. The
total water content of both fractions is calculated by subtracting the dry weights from the previously
measured fresh weights (Vohland et al., 2010).
The workload of sampling biomass and the destruction it causes to plants may be minimized
by adopting an appropriate plot size that can be subdivided into smaller quadrants (Figure 1.3c).
Another way of minimizing the destruction and expediting the sampling process is to measure the
biophysical parameters of the target, from which the targeted variable is calculated. For instance, it
is impossible to harvest trees to weigh their biomass and carbon that becomes known only after the
felled trees have been combusted. Such a destructive sampling is not permissible, hence is replaced
with measuring tree height and diameter at breast height (DBH). Thus, above-ground biomass
sampling becomes measurements of tree structural parameters in a sampling plot. The measured
tree biophysical parameters are then converted to wood volume and carbon content using well-
established equations (see Section 6.7.3.1 for details).
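The conversion from measured structural parameters to biomass can be sketched with a generic power-law allometry of the kind referenced above; the functional form, coefficient, and wood density below are illustrative only and must be replaced with the species- or biome-specific equations adopted for the study area (Section 6.7.3.1).

```python
def tree_agb_kg(dbh_cm, height_m, wood_density=0.60, a=0.0509):
    """Estimate single-tree above-ground biomass (kg) from DBH and height.

    Uses a generic allometric form AGB = a * rho * DBH**2 * H, with DBH in
    cm, height in m and wood density rho in g cm-3; the coefficient and the
    form are illustrative, not a prescribed equation.
    """
    return a * wood_density * dbh_cm ** 2 * height_m

def plot_agb_kg(trees):
    """Sum individual-tree AGB to the plot level.

    trees : iterable of (dbh_cm, height_m) pairs for all measured trees.
    """
    return sum(tree_agb_kg(dbh, h) for dbh, h in trees)

# e.g., plot_agb_kg([(32.5, 21.0), (18.2, 14.5), (45.0, 26.3)])
```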
Given the much larger sample plot size of trees, it is impossible to sample every tree in a plot.
Criteria must be set for minimal tree size. This size depends on the age of the tree. In the presence
of huge trees weighing tens of tons, small trees can be safely ignored as the inaccuracy in esti-
mating their weight from their biophysical parameters can easily overwhelm the total biomass of all
small trees combined in the same plot. After the AGB of all trees has been individually calculated
from the appropriate allometric equation, the calculated AGB of individual trees and shrubs is then
summed and aggregated to the plot level to be correlated with their corresponding pixel’s value on
the concurrently acquired satellite image.
Whatever field data are collected, they must be processed indoors to prepare for the subsequent
quantification and presented in the proper geospatial format such as a point shapefile in a GIS so that
they can be overlaid with the remotely sensed data (see Chapter 2) to identify their corresponding
properties on the images in preparation for constructing the quantification model. The type, diffi-
culty, and requirements of data processing vary with the nature of the collected samples.
1.2.4 Water Data
In comparison with in situ terrestrial data collection, sampling of water properties has both its
strengths and limitations. It is advantageous in that all samples can be collected at points or at several depths, a task that can be completed within minutes. It is more challenging because the sampling points are located offshore, accessible only by vessel, and the vessel itself may introduce an extra influence on the water through hydrographic perturbations, such as ship wake, hull- and propeller-induced mixing, and the bow wave. In measuring the reflectance of offshore waters, the influence of the ship must therefore be minimized. The measured radiance stems from several sources: surface reflection, reflection by surface waves, scattering and absorption by in-water substances, and potentially bottom reflection. Where the focus is on in-water constituents, the measured outcome is also subject to the influence of the nearby shore and the ambient environment (e.g., winds and waves in the open sea), so measurement requires more care and preparation.
Water parameters fall into two broad categories of general water parameters and water optical
properties. General water parameters that can be remotely quantified are numerous and wide ranging in nature. They can be further broken down into water quality and water content parameters, which are measured differently. Standard water quality parameters, such as temperature, pH, conductivity, salinity, dissolved oxygen, and turbidity or transparency (Secchi depth), can be measured at each anchored station using a submersible multi-parameter sonde (e.g., YSI 6600V2), together with in-vivo Chl-a and colored dissolved organic matter (CDOM) levels measured spectrophotometrically. The determination of in-water constituents is so complex that it is deferred to
Section 1.2.4.4. Optical properties can be further differentiated into inherent and apparent. Inherent
optical properties (IOPs) are properties of the medium not affected by the ambient light field, such
as absorbance, transmittance, and scattering coefficient. All of them can be measured in a controlled
environment, together with the concentration of in-water substances. Apparent optical properties
(AOPs) have a value that varies with the medium of light propagation and the viewing geometry of
the radiance distribution, in addition to IOPs. They behave with sufficient regularity and stability
that allow a water body to be studied, such as downwelling and upwelling radiance, diffuse attenu-
ation, and spectral irradiance reflectance. AOPs must be measured on site. In order to quantify in-
water substances, it is imperative to measure water spectral behavior. The most commonly measured
property is water-leaving radiance.
1.2.4.1 Water-leaving Radiance
Although the same measurement protocol as in the terrestrial sphere can be applied to the aquatic environment, caution needs to be exercised to avoid disturbing the water and to minimize reflection off the water
surface. Besides, water-leaving radiance is more complex to measure as both normalized upwelling
radiance and downwelling irradiance must be measured to calculate remote sensing reflectance
(Rrs). The same visible and near-infrared (VNIR) spectroradiometer as in terrestrial remote sensing
described in Section 1.2.2 can be used, such as the FieldSpec Spectroradiometer with a wavelength
range of 350–1075 nm. Hyperspectral reflectance of water may be measured using an ASD field
spectrometer at a spectral resolution of 3 nm, and a sampling interval of 1 nm over the spectral
region of 350–1050 nm.
Water-leaving radiance (Lw) cannot be measured directly because the upwelling radiance above the sea surface also encompasses solar and sky radiance reflected from the water surface (Lsr). Lsr contains no information on the seawater constituents, so it has to be eliminated from the measured upwelling radiance to obtain Lw. Lw may be measured indirectly by deploying a sensing device above water or
FIGURE 1.5 Two methods of measuring water-leaving radiance. (a) The onboard above-water method (Lee
et al., 2021, open access); (b) The skylight blocking method that measures only water upwelling radiance.
(Shang et al., 2017, reprinted with permission from © Optical Society of America.)
by submerging it into water under clear sky conditions. In the “above-water method” (Tang et al.,
2004), the water surface spectral reflectance is measured by a sensor connected with a fiber optic
cable. The sensor should be positioned at nadir, on a mount extending approximately 1 m off the
ship (Figure 1.5a). Spectral measurement should be confined to an azimuth angle of 90°–135° from
the sun and a nadir viewing angle <90°, preferably between 30° and 45° (Figure 1.6a), so as to effectively minimize the impact of the ship-cast shadow and direct solar radiance (Zhang et al., 2016). The
upwelling radiance of the water surface Lsr(λ, 0+) is measured slightly (e.g., 0.3 m) above the water
surface. After the water radiance measurement, the spectrometer is immediately rotated upward to
>90° up to 120° to measure the downwelling sky radiance Lsky(λ) at the same viewing angle as that
adopted in measuring upwelling radiance (Figure 1.6b). The measured upwelling radiance has to be
calibrated with the assistance of the downwelling radiance [Lp(λ, 0+)] measurements from the refer-
ence panel regarded as an optical standard. The total integration time of measurements should be
maintained sufficiently long (e.g., >10s) to enable fluctuations in the reflectance with surface waves
to cancel out. Measurements are repeated multiple times and averaged to derive the final outcome.
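The arithmetic of this above-water reduction can be sketched as follows; the sea-surface reflectance factor of about 0.028 assumes low wind speed and the recommended viewing geometry, and the whole sketch is an illustration of the general procedure rather than the exact formulation of the cited protocols.

```python
import numpy as np

def remote_sensing_reflectance(l_total, l_sky, l_panel,
                               panel_reflectance=0.99, rho_sky=0.028):
    """Reduce above-water measurements to remote sensing reflectance Rrs (sr-1).

    l_total : upwelling radiance measured just above the water surface
    l_sky   : sky radiance measured at the mirrored viewing angle
    l_panel : radiance measured off the reference panel
    rho_sky : air-water interface reflectance factor (about 0.028 for low
              wind and the recommended viewing geometry)
    """
    l_w = np.asarray(l_total, float) - rho_sky * np.asarray(l_sky, float)  # water-leaving radiance
    e_d = np.pi * np.asarray(l_panel, float) / panel_reflectance           # downwelling irradiance
    return l_w / e_d
```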
In hydrospheric measurements, it is preferable to use spectroradiometers equipped with two
sensors for simultaneously measuring upwelling and downwelling radiance of the target to increase
the measurement efficiency and to minimize the solar radiance variation, such as the Ocean Optics
FIGURE 1.6 The reference system of nadir and azimuth viewing angle centered on the water surface (black
dot). (a) Side view showing the viewing nadir angle (θ) referenced from downward vertical axis: θ < 90°
for measuring upwellling radiance, >90° for downwelling radiance (from sky and sun); (b) Top-down view
of azimuth viewing angle (φ) and relative azimuth viewing angle (Δφ) referenced from viewing direction
clockwise from North and the sun, respectively.(Ruddick et al., 2019, open access.)
USB2000 radiometer. This dual-fiber optic system has a spectral sensitivity range of 400–1100 nm
(spectral resolution: ⁓1.5 nm). The measured radiance and irradiance are then converted to percent
reflectance.
The on-water method measures the upwelling radiance using an optical fiber attached to an
extendable pole with the tip submerged just slightly underneath the water but oriented in the nadir
direction so as to minimize the difference between the upwelling radiance at depth z and the surface.
The device may be tethered to a ship or a fixed offshore platform or moored buoy, or untethered
and horizontally drifting. At a given sampling site, the spectral signal is measured several times to improve the signal-to-noise ratio. This method is subject to interference from impurities in
the water column, but does not require measuring skylight irradiance (Lsky) as it has been blocked
(Figure 1.5b). It is particularly suited to stratified water, shallow bottoms, or seagrass/kelp beds.
The underwater methods of measurement can be differentiated as fixed-depth or vertical pro-
filing. In the fixed-depth method, a radiometer is deployed underwater and attached to permanent
floating structures, to measure nadir upwelling radiance at a minimum of two fixed depths z1 and z2
(Figure 1.7a). z1 should be set as shallow as possible, subject to the sea state at the measurement spot, to reduce errors arising from extrapolating the measurement to the surface. z2 should differ from z1 maximally
to reduce the uncertainty of the derived diffuse attenuation coefficient for upwelling radiance and
heterogeneity of water column over the measurement depth. The nadir water-leaving radiance (Lwn)
is calculated by first estimating the nadir upwelling radiance just beneath the water surface, Lun(0−),
by extrapolating from, preferably, the two shallowest measurements at depths of z1 and z2 under the
assumption that the depth variation of Lun(z) between z1 and z2 is exponential with a constant diffuse
attenuation coefficient for upwelling radiance. For this method to work, the water downwelling
irradiance has to be measured to calculate Rrs.
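A minimal sketch of this two-depth extrapolation is given below; the exponential decay assumption follows the description above, while the water-to-air transmittance factor of about 0.54 is a typical value and is included only for illustration.

```python
import numpy as np

def lw_from_two_depths(lu_z1, lu_z2, z1, z2, transmittance_factor=0.543):
    """Estimate nadir water-leaving radiance from upwelling radiance at two depths.

    Assumes Lu(z) decays exponentially between z1 and z2 (metres, z2 > z1)
    with a constant diffuse attenuation coefficient K_Lu; the transmittance
    factor of about 0.54 propagates Lu(0-) across the air-water interface.
    """
    k_lu = np.log(lu_z1 / lu_z2) / (z2 - z1)   # diffuse attenuation for Lu
    lu_0minus = lu_z1 * np.exp(k_lu * z1)      # Lu just beneath the surface
    return transmittance_factor * lu_0minus

# e.g., lw_from_two_depths(0.80, 0.55, z1=0.5, z2=2.0)
```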
The vertical profiling method is suitable for free-fall radiometers deployed from ships and fixed
platforms (Figure 1.7b). A fixed platform enables the measurements to be automated and unsupervised. In principle, this method is identical to the fixed-depth method except that measurements are
taken at slightly different times at a range of depths between z1 and z2 for estimating the vertical
FIGURE 1.7 Schematic of two underwater water-leaving radiance measurement methods. (a) fixed-depth;
(b) profiling using a typical free-fall radiometer. (Ruddick et al., 2019, open access.)
variation of Lun(z). The radiance measurements have to be corrected for variations in above-water
downwelling irradiance. If vertical profile radiometry is measured from winches attached to ships,
it is important to avoid optical (shadow/reflection) and hydrographic perturbations from the ship,
and vertical perturbation of the device. Measurements are made from the ship’s stern with the sun’s
relative bearing aft of the beam at a minimum distance of 1.5/KLu from the ship or further with large
vessels (Ruddick et al., 2019).
but the results may need to be calibrated for temperature, salinity, and scattering effects to improve
the accuracy if deployed in seawater.
Scattering coefficient can be measured using the ECO BB9 Backscattering Sensor that has a
modular suite of sensors for measuring bio-optical and physical parameters of water in situ. The
WET Labs BB9 resolves the volume scattering coefficient (β) at nine wavelengths. The instrument
illuminates a volume of water using modulated LEDs and detects scattered light at an acceptance
angle of 124° from the source beam. A centroid angle of 117° is normally maintained to minimize
the error of the extrapolated total backscattering coefficient. Measurements can be made continu-
ously by deploying the instrument from a cruising ship, or solely at a fixed position for cross-
sectional measurement. The deployed sensor should not be oriented towards the sun or other strong
sources of light. If properly configured (e.g., with the necessary sensors included), it allows tem-
perature, salinity, depth, absorption/attenuation, and Chl, phycocyanin, and CDOM fluorescence to
be measured concurrently. The BB9 outputs the measured volume scattering coefficient as β(θ, λ) in (m·sr)⁻¹. This volume scattering also includes scattering by water molecules, which must be subtracted from the calibrated output β(117°, λ) to derive the scattering by suspended particles. The result can then be converted to the particulate backscattering coefficient bbp(λ) by multiplying it by a conversion factor.
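The conversion mentioned above can be sketched as follows; the pure-water term and the χ factor of about 1.1 are typical values used in such processing and are given here only for illustration.

```python
import numpy as np

def particulate_backscattering(beta_117, beta_water_117, chi=1.1):
    """Convert volume scattering at 117 deg to particulate backscattering b_bp (m-1).

    beta_117       : calibrated volume scattering beta(117 deg, lambda), (m sr)-1
    beta_water_117 : pure (sea)water contribution at the same angle and wavelength
    chi            : factor relating scattering at the centroid angle to the
                     integrated backscattering (about 1.1 in typical processing)
    """
    beta_p = np.asarray(beta_117, float) - np.asarray(beta_water_117, float)
    return 2.0 * np.pi * chi * beta_p
```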
Some biophysical parameters, such as crop yield, are impossible to gather in the field at a lead time of a few months. Yield becomes known only after the crop has been harvested, for financial gain rather than scientific endeavor, except in purposely designed experimental plots. At the regional scale, yield data have to be collected from third parties, such as relevant government departments, and are usually enumerated by sub-region. On the other hand, the prediction of crop yield is meaningful and valuable only at a lead time of a few months. The absence of ground truth data means that it is impos-
sible to validate the results for a single region or in real-time quantitative sensing. For some other
parameters, such as SO2 concentration in a volcano plume, no ground truth data can be gathered
at all.
Auxiliary data differ from second-hand data in that they do not pertain to the target of quan-
tification. Instead, they may exert an influence on the target. Although remotely sensed data
serve as the backbone of the input needed in most quantification cases, they are by no means
the exclusive source of data. The quantification of those features that have a geographic com-
ponent (e.g., vegetation biomass on different slopes) also benefits from the consideration of co-variables (e.g., climatic, topographic, and hydrologic) that might help to improve quantification
accuracy and reliability. For instance, crop yield may be affected by temperature, soil fertility,
topography, and irrigation. These factors are related to crop health and potentially grain yield,
even though their exact influence remains mostly unknown. Such ancillary data may be collected
from remote sensing data (product), but their spatial scale and resolution must match those of
the primary data.
TABLE 1.3
Vegetation indices commonly derived from drone RGB bands that have been used to
estimate grassland AGB
As shown in Table 1.3, of the three bands, the green and red ones are used more widely than the blue band
of a shorter wavelength. They are either differenced, ratioed, or ratioed after differencing, as in
deriving the Normalized Green Red Difference Index (GRVI). Simple to calculate, drone-derived
VIs are effective at revealing the vegetation fraction of crop fields (Torres-Sánchez et al., 2014). In
monitoring grassland forage yield, RGB-based GRVI is correlated closely to forage yield (Lussem
et al., 2018).
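As an illustration, the index referred to above can be computed from the green and red bands of a drone orthomosaic as follows; the band arrays and scaling are illustrative.

```python
import numpy as np

def grvi(green, red):
    """Normalized green-red difference, (G - R) / (G + R), from drone RGB bands."""
    g = np.asarray(green, dtype=float)
    r = np.asarray(red, dtype=float)
    return (g - r) / (g + r + 1e-10)   # small constant guards against division by zero
```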
TABLE 1.4
Major VIs widely used in quantitative remote sensing and their calculation formula
and WDVI cannot account for the atmospheric effects, a deficiency that can be rectified by simple
ratio indices. The simplest is called the Simple Ratio (SR) or just Ratio Vegetation Index (RVI):
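SR = \frac{\rho_{NIR}}{\rho_r}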
SR value is high for vegetation, low for other covers such as soil, ice, and water. It indicates the
amount of vegetation and reduces the atmospheric and topographic effects that are canceled out via
division if they remain constant in the two bands used. Image ratioing largely eliminates irradiance
from Eq. 1.8, and thus the topographic effects, transmittance, and the atmospheric effects.
SR has been modified and expanded to form the most common and useful index, NDVI (Rouse et al., 1974), computed from pixel values in the NIR and red multispectral bands as
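NDVI = \frac{\rho_{NIR} - \rho_r}{\rho_{NIR} + \rho_r}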
The use of the NIR and red bands is grounded on the fact that vegetation spectral reflectance peaks
at infrared wavelengths but is much subdued at visible light wavelengths. It is particularly low at
red wavelength (0.6–0.7 μm). In contrast, the reflectance of both soil and water has much less vari-
ation in these two wavebands. The differencing of the NIR and red bands effectively maximizes the
spectral disparity between vegetation and these two covers, thus accentuating the visibility of vege-
tation on the derived NDVI layer. NDVI is an effective index able to indicate quantitatively biomass
on the ground and has been widely used to estimate vegetation biomass and monitor its temporal
variation, even though it does not eliminate the atmospheric effects.
NDVI works well with dense vegetation covers as commonly found in the tropics, but is less
effective with sparse and patchy vegetation intermingled with prevalent bare ground, and in areas
of low surface biomass such as grassland. In such environs, NDVI and other methods based on biological indicators tend to overestimate the degree of bare or desertified ground due to seasonal fluctuation in vegetation cover and the strong effect of rainfall. Consequently, it has been replaced
with several alternative VIs, including soil-adjusted vegetation index (SAVI) (Huete, 1988) and its
variants of SAVI1 and SAVI2, transformed SAVI, and enhanced VI (EVI). SAVI is calculated as:
SAVI = \frac{(\rho_{NIR} - \rho_r)(1 + L)}{\rho_{NIR} + \rho_r + L} \quad (1.10)
where L = the soil adjustment (“fudge”) factor with a value between 0 and 1, depending on the soil background and vegetation cover; it is often set to 1 for very sparse vegetation and to 0.5 for intermediate cover.
SAVI minimizes soil brightness influences from vegetation indices involving red and NIR bands.
The origin of reflectance spectra in the NIR-red wavelength scatterplot is shifted to account for first-
order soil-vegetation interactions and differential red and NIR flux extinction through vegetated
canopies. For cotton and range grass canopies, the transformation nearly eliminates soil-induced
variations in vegetation indices. SAVI can depict dynamic soil-vegetation interactions from remotely
sensed data. The L-factor in Eq. 1.10 has been dynamically adjusted using the image-derived NDVI
and WDVI as (Qi et al., 1994):
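L = 1 - 2\gamma \cdot NDVI \cdot WDVI \quad (1.11)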
where γ = primary soil line parameter or slope of the soil line in the reflectance scatterplot of red
vs NIR bands. Its value is commonly taken as 1.06 (Qi et al., 1994), and the factor 2 increases the L
dynamic range. Furthermore, SAVI has also been modified to form a few new indices. Two of them
are called Modified Soil-Adjusted Vegetation Index (MSAVI) (Qi et al., 1994) and optimized soil-
adjusted vegetation index (OSAVI) (Rondeaux et al., 1996), calculated as:
MSAVI = \rho_{NIR} + 0.5 - 0.5\sqrt{(2\rho_{NIR} + 1)^2 - 8(\rho_{NIR} - \rho_r)} \quad (1.12)
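OSAVI = \frac{\rho_{NIR} - \rho_r}{\rho_{NIR} + \rho_r + 0.16} \quad (1.13)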
MSAVI is more sensitive to vegetation abundance than SAVI and other indices. It has a higher SNR
than other vegetation indices (including the original version of SAVI) because the L function not
only enlarges the vegetation dynamic responses, but also further suppresses the influences of the
soil background.
SAVI has been further transformed by Baret and Guyot (1991) to form a new index called TSAVI,
calculated as:
TSAVI = \frac{a(\rho_{NIR} - a\rho_r - b)}{a\rho_{NIR} + \rho_r - ab + 0.08(1 + a^2)} \quad (1.14)
where a and b denote, respectively, the slope and intercept of the soil line (NIRsoil =aRsoil +b), and
the coefficient value of 0.08 is adjusted to minimize soil effects.
The enhanced VI (EVI) measures the greenness and health of vegetation and vegetation product-
ivity using blue, red, and NIR bands, calculated as:
EVI1 = 2.5\,\frac{\rho_{NIR} - \rho_r}{\rho_{NIR} + C_1\rho_r - C_2\rho_b + L} \quad (1.15)
where L = the soil adjustment or canopy background brightness correction factor (Eq. 1.11) (default = 1); C1 = the atmospheric resistance red correction coefficient (default = 6); and C2 = the atmospheric resistance blue correction coefficient (default = 7.5). For hyperspectral data, ρNIR, ρr, and ρb are taken at 800, 670, and 445 nm, respectively. The blue band required by EVI is absent from some optical satellite images, such as SPOT, in which case the index is changed to EVI2:
EVI2 = 2.5\,\frac{\rho_{NIR} - \rho_r}{\rho_{NIR} + 2.4\rho_r + 1} \quad (1.16)
Another, less frequently used index is the perpendicular vegetation index (PVI), which is similar to Tasseled Cap greenness and the second principal component. It is measured as the orthogonal distance from the point corresponding to a feature’s reflectance to the soil line in the spectral domain of red vs NIR bands, calculated as:
PVI = \frac{\rho_{NIR} - a\rho_r - b}{\sqrt{a^2 + 1}} = \frac{WDVI - b}{\sqrt{a^2 + 1}} \quad (1.17)
where a and b = the slope and offset of the soil line, which vary slightly among soils. PVI is functionally equivalent to DVI, and is sensitive to the optical properties of the bare soil background, which alters the index value for a given amount of incomplete vegetation cover. All VIs (e.g., PVI, SAVI, TSAVI) devised
to minimize the soil background effect strongly reduce the noise for low leaf area indices (LAI < 2–3)
(Baret and Guyot, 1991). Since the red radiance subtraction in the numerator of NDVI is considered
irrelevant (Crippen, 1990), it has been simplified as the infrared percentage vegetation index (IPVI):
IPVI = \frac{\rho_{NIR}}{\rho_{NIR} + \rho_r} = \frac{NDVI + 1}{2} \quad (1.18)
IPVI is functionally equivalent to NDVI and RVI but with a narrower range of values (0.0–1.0), and it is mathematically less complex.
Three special indices have been developed to quantify specifically vegetation properties. They
are modified simple ratio (MSR) by Chen (1996), red edge position index (REP) by Horler et al.
(1983), and wide dynamic range vegetation index (WDRVI) by Gitelson (2004), calculated as:
MSR = \frac{\rho_{NIR}/\rho_r - 1}{\sqrt{\rho_{NIR}/\rho_r} + 1} \quad (1.19)
REP = 705 + 35 \times \frac{0.5(B4 + B7) - B5}{B6 - B5} \quad (1.20)
WDRVI\ (\alpha = 0.1\ \text{or}\ 0.5) = \frac{\alpha\rho_{800} - \rho_{670}}{\alpha\rho_{800} + \rho_{670}} \quad (1.21)
One member of the atmospherically corrected index family is the Atmospherically Resistant
Vegetation Index (ARVI) (Kaufman and Tanré, 1992) calculated as:
ARVI = \frac{\rho_{NIR} - RB}{\rho_{NIR} + RB} \quad (1.22)
where RB is determined from the reflectance in the blue (B) and red (R) bands as:
RB = \rho_r - \gamma(\rho_b - \rho_r) \quad (1.23)
where γ depends on the aerosol type (a good value is γ = 1 in the absence of an aerosol model). This
concept can be applied to other indices. For instance, SAVI can be changed to SARVI by replacing
R with RB.
The Global Environmental Monitoring Index (GEMI) proposed by Pinty and Verstraete (1992)
can also suppress the atmospheric effects, calculated as:
GEMI = \eta(1 - 0.25\eta) - \frac{\rho_r - 0.125}{1 - \rho_r} \quad (1.24)

\eta = \frac{2(\rho_{NIR}^2 - \rho_r^2) + 1.5\rho_{NIR} + 0.5\rho_r}{\rho_{NIR} + \rho_r + 0.5} \quad (1.25)
This non-linear index can account for soil and atmospheric effects simultaneously. GEMI, computed
from measurements at the top of the atmosphere, is therefore both (i) more useful to compare
observations under varying atmospheric and illumination conditions, and (ii) more representative
of actual surface conditions than SR or NDVI over the bulk of the range of vegetation conditions.
This index is seemingly transparent to the atmosphere and represents plant information at least as effectively as NDVI, but it is complex and difficult to use and interpret. Since most of the data to be
used for quantitative remote sensing will have to be corrected for the atmospheric effects, these
atmospherically corrected indices have not found wide applications in the retrieval of quantitative
information of environmental parameters.
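To make the preceding formulas concrete, the sketch below computes NDVI, SAVI, and EVI2 from reflectance arrays; the band arrays are illustrative and the constants follow Eqs. 1.10 and 1.16.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - red) / (NIR + red)."""
    return (nir - red) / (nir + red + 1e-10)

def savi(nir, red, L=0.5):
    """SAVI with soil adjustment factor L (Eq. 1.10)."""
    return (nir - red) * (1.0 + L) / (nir + red + L)

def evi2(nir, red):
    """Two-band EVI for sensors lacking a blue band (Eq. 1.16)."""
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

# Illustrative surface reflectance values scaled to 0-1
nir = np.array([0.45, 0.30, 0.10])
red = np.array([0.08, 0.12, 0.09])
print(ndvi(nir, red), savi(nir, red), evi2(nir, red))
```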
connotations, model validation and validation of the modeled outcome. The former is achieved
much more easily than the latter, especially with machine learning methods of analysis (see
Chapter 4 for more details). Validation can be implemented using several means, depending on the
number of samples collected during ground sensing. The most common method of validation is to
make use of in situ measurements that serve as the reference to judge the quality of the quantified
value. During model construction (or data analysis), the available field samples are divided into two
parts. One portion is reserved for model construction and another for model validation without com-
promising the model reliability. After the model has been properly trained using the first portion of
samples, it is run with the second group of samples to examine how accurately it can predict them.
The in situ observed values are compared with the model-predicted ones, and the disparity between
these two sets of quantitative values is analyzed statistically to indicate the reliability at which the
quantification has been achieved. This validation can produce accuracy indicators, but the valid-
ation results are not independent and objective because there could be a spatial component in the
estimated variable (e.g., the residuals or discrepancies are somehow spatially correlated). It does
not indicate where inaccurate quantification occurs, either. In contrast, validation against observed
values distributed in different parts of the study area produces much more authentic indications of
accuracy. The generated results show the accuracy of the quantified parameters, not just the estima-
tion model itself.
However, caution needs to be exercised in interpreting the validation results due to several
challenges imposed by the limited coverage of in situ sites, the large spatial scale mismatch between
field measurements and image pixels (see Section 1.5.3 for more details), the lack of understanding
of the intrinsic heterogeneity of the parameters being quantified, the defects and deficiencies in theories on the scale problem embedded in the validation, and the unavailability of trustworthy in
situ datasets with continuity, completeness, and consistency (Jin et al., 2016). Validation based
on in situ measurements is complicated by the spatial scale mismatch between satellite imagery-
derived estimates and ground measurements, a challenge in quantitative remote sensing (refer to
Section 1.5.3 for more details). This spatial scale mismatch between satellite- and ground-based
observations creates serious uncertainties when the target of quantification has a high degree of
spatial heterogeneity. The effect of this scale mismatch is particularly pronounced on the accuracy
indicators for spatially heterogeneous parameters.
In certain applications, validation based on in situ observations is not always feasible due to the small number of field samples available. Such samples are either expensive or impossible to collect because data collection has to be synchronized with the satellite imagery recording. In some applications no in situ measurements are collectable, or it is extremely difficult to collect a sufficient number of samples, such as aerosol optical thickness (AOT) from the AERONET observation stations in monitoring atmospheric quality. Their number is likely to be limited if the area under study is small. Thus, all the available samples must be used to construct the quantification model without any spare ones reserved for validating model accuracy. Sometimes no ground truth data are available for validation at all. For instance, the gaseous content of volcanic plumes cannot be measured in time, either on the ground or in the air, as flying through a plume is dangerous and prohibited. This leaves no independent dataset against which to verify the remotely quantified concentration levels.
Nevertheless, it is still possible to shed light on model stability or reliability via cross-validation or bootstrapping, in which in situ samples are randomly withheld, one subset at a time. Cross-validation has a number of variants, including Holdout Validation (HV), K-fold Cross-Validation (KCV), and Leave-One-Out Cross-Validation (LOOCV), each with its own requirements and fulfilling different needs. HV requires all available samples to be partitioned into two groups at a ratio of 70% vs 30% or 80% vs 20%. The second, smaller portion is used to validate the model developed using the remaining samples (Kim, 2009). KCV requires all the available samples to be divided equally into K groups. Each time only one of them is selected as the testing dataset while
the remaining K-1 groups are reserved for model development. Every group is used just once in turn
for validation (Kohavi, 1995).
LOOCV is almost identical to KCV except that each sample in the dataset is used, in turn, to test the model developed using all the remaining samples. A model is thus constructed from the remaining N-1 observations, and the same regression analysis is run repeatedly, potentially yielding tens or even hundreds of models. The discrepancy between the observed and predicted values from each model run is analyzed statistically. The developed model is thus validated N times (N = number of samples) by comparing the observed value with the predicted one (Hoek et al., 2008). The final validation outcome is then calculated by averaging all the individual evaluations. LOOCV overcomes the drawback associated with a small training dataset that cannot be sensibly partitioned into two parts, one for model construction and another for model validation; instead, all of the samples are used for model construction. Leave-one-out testing is the most commonly used, even though it is computationally intensive and subject to overfitting, as all the training samples except one are fed to the model. In a sense, this method indicates the model accuracy, not the accuracy at which a parameter’s value is estimated. Thus, cross-validation can never truly reveal the range of prediction discrepancy, nor the magnitude of incorrect estimations and their locations.
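The sketch below contrasts K-fold and leave-one-out cross-validation on the same hypothetical samples using scikit-learn's model_selection utilities; it only shows how each fold (or sample) takes its turn as the testing set and is not a prescribed workflow.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
x = rng.uniform(0.05, 0.5, size=(30, 1))       # hypothetical predictor
y = 120 * x[:, 0] + rng.normal(0, 3, size=30)  # hypothetical field measurements

model = LinearRegression()

# K-fold CV: each of the K groups is used exactly once for testing
kcv_rmse = -cross_val_score(model, x, y,
                            cv=KFold(n_splits=5, shuffle=True, random_state=1),
                            scoring="neg_root_mean_squared_error")

# LOOCV: every sample is left out once; the final outcome averages all runs
loo_err = -cross_val_score(model, x, y, cv=LeaveOneOut(),
                           scoring="neg_mean_absolute_error")

print("5-fold RMSE per fold:", kcv_rmse.round(2))
print("LOOCV mean absolute error:", loo_err.mean().round(2))
```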
1.4.2 Accuracy Expressions
The validation outcome is statistically analyzed and summarized using several indicators. The common ones are accuracy or mean bias, precision or repeatability, relative difference, and uncertainty or root mean square error (RMSE). Accuracy measures the quality of quantification and indicates the degree to which the estimated values of the target parameter agree with the observed ones. In the literature, it refers to model accuracy in some cases, but not the accuracy at which the dependent variables or target parameters are quantified. Accuracy is commonly treated as a proxy for consistency (e.g., how the accuracy indicator varies with the number of samples used) in case of insufficient samples. Since all the quantified results are quantitative, their accuracy can be expressed in a number of ways. The commonly used indicators are the coefficient of determination (R² value) and RMSE, calculated as:
R^2 = 1 - \frac{\sum_{i=1}^{N} (Y_i - \hat{Y}_i)^2}{\sum_{i=1}^{N} (Y_i - \bar{Y})^2}   (1.26)

RMSE = \sqrt{\frac{\sum_{i=1}^{N} (Y_i - \hat{Y}_i)^2}{N}}   (1.27)

where N = sample size, Y_i = the measured value of the target to be sensed, \hat{Y}_i = the predicted value of the sensed target, and \bar{Y} = the average value of all the collected samples.
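Under these definitions, both indicators reduce to a few lines of array arithmetic; the sketch below transcribes Eqs. 1.26 and 1.27 for a pair of illustrative observed and predicted arrays.

```python
import numpy as np

y_obs = np.array([12.1, 15.4, 9.8, 20.3, 17.6])    # in situ measured values (Y_i)
y_pred = np.array([11.5, 16.0, 10.9, 19.1, 18.2])  # model-predicted values (Y_i hat)

ss_res = np.sum((y_obs - y_pred) ** 2)         # residual sum of squares
ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)   # total sum of squares about the mean
r2 = 1 - ss_res / ss_tot                       # Eq. 1.26
rmse = np.sqrt(ss_res / y_obs.size)            # Eq. 1.27

print(f"R2 = {r2:.3f}, RMSE = {rmse:.3f}")
```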
The coefficient of determination (R²) is calculated statistically between the observed and
predicted values for all the validation samples. It refers to the portion of variation in the dependent
variable (i.e., the target parameter to be quantified) that can be explained by the predictor variables
(see Section 1.3). Usually, the two are plotted as a scatterplot to reveal not only the overall R² (Eq. 1.26) but also how the prediction behaves (e.g., whether there exists any trend between the two or where the fit is loose) (Figure 1.8a). Such a plot illustrates the degree of fit between the two sets of data. Inaccurate quantification can be easily appreciated from the 1:1 trend line (Figure 1.8b). If all the estimated and observed values converge closely along the 1:1 trend line, then the accuracy of quantification is high.
FIGURE 1.8 Scatterplots to illustrate the quality of quantifying suspended sediment concentration (SSC) from
digital number (DN) of a spectral band. (a) Regression model (solid red line) of in situ measured SSC against
DN and its coefficient of determination (R²); (b) Residuals of the estimated SSC vs the observed SSC along the
1:1 trend line with wider deviation of more observations from this line suggesting less reliable quantification.
Apart from the R² value, the quality of estimation (or model reliability) is also judged by the p value, which indicates the probability that the observed R² could be obtained by chance, or whether it is statistically significant. The commonly adopted threshold is p < 0.05 or p < 0.01. A small p value means the relationship between the two sets of variables is highly likely to be genuine and repeatable. If the p value is too high, then the relationship between the two may stem from a random match, and the predictor variables are unable to estimate the dependent variable competently. The constructed estimation model is then not sufficiently reliable or valid for quantification and should be abandoned.
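In practice, R² and its p value are often obtained from the same regression of predicted on observed values; a minimal sketch using scipy.stats.linregress on hypothetical paired arrays is shown below.

```python
import numpy as np
from scipy.stats import linregress

y_obs = np.array([12.1, 15.4, 9.8, 20.3, 17.6, 14.2, 22.8, 8.9])
y_pred = np.array([11.5, 16.0, 10.9, 19.1, 18.2, 13.7, 21.5, 10.1])

res = linregress(y_obs, y_pred)
print(f"R2 = {res.rvalue**2:.3f}")   # coefficient of determination of the fit
print(f"p  = {res.pvalue:.4f}")      # probability the relationship arises by chance
# A p value above the adopted threshold (e.g., 0.05) suggests the model is unreliable.
```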
Unlike the more general R², RMSE is more restrictive, being generated from the independent samples only. It yields more information on the range of inaccuracy and can reflect the spatial distribution or location of the independent samples. In order to make RMSEs from multiple studies, or results quantified at multiple times, directly comparable with each other, they are often expressed in relative terms or normalized to form rRMSE (%) (Eq. 1.28) and nRMSE (Eq. 1.29). Both are expressed as a percentage between 0 and 100, with a lower value suggesting a higher accuracy of quantification.
rRMSE = 100 \times \frac{RMSE}{\sum_{i=1}^{N} Y_i / N}   (1.28)

nRMSE = \frac{1}{\bar{y}} \sqrt{\frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{n}}   (1.29)
Other accuracy indicators include the ratio of performance to deviation (RPD) and the mean absolute percentage error (MAPE), calculated as:
RPD = \frac{SD_s}{RMSE}   (1.30)

MAPE = \frac{1}{N} \sum_{i=1}^{N} \frac{|Y_i - \hat{Y}_i|}{Y_i}   (1.31)

where SD_s = the standard deviation of the measured attribute value of the target parameter. A larger R² and
RPD value and a smaller RMSE and MAPE are indicative of a superior model performance. Finally,
mean prediction error (MPE) has been used to calculate the variance of prediction errors (VAR) as:
VAR = \frac{1}{N-1} \sum_{i=1}^{N} (PE_i - MPE)^2   (1.32)
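The relative and normalized indicators above can be derived from the same pair of observed and predicted arrays; the sketch below follows Eqs. 1.28-1.31 with hypothetical values and is illustrative only.

```python
import numpy as np

y_obs = np.array([12.1, 15.4, 9.8, 20.3, 17.6, 14.2])
y_pred = np.array([11.5, 16.0, 10.9, 19.1, 18.2, 13.7])

rmse = np.sqrt(np.mean((y_obs - y_pred) ** 2))

rrmse = 100 * rmse / y_obs.mean()               # Eq. 1.28, as a percentage
nrmse = rmse / y_obs.mean()                     # Eq. 1.29 (multiply by 100 for %)
rpd = y_obs.std(ddof=1) / rmse                  # Eq. 1.30, SD of measurements over RMSE
mape = np.mean(np.abs(y_obs - y_pred) / y_obs)  # Eq. 1.31

print(rrmse, nrmse, rpd, mape)
```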
TABLE 1.5
Commonly used validation metrics and their calculation*

Precision (Prec) or repeatability: \sqrt{\frac{1}{N-1} \sum_{i=1}^{N} (X_i - Y_i - Acc)^2}

Relative difference [Δ, %]: \frac{X_i - Y_i}{Y_i} \times 100

Uncertainty (Unc) or Root Mean Squared Difference (RMSD): \sqrt{MSD}, with MSD = \frac{1}{N} \sum_{i=1}^{N} (X_i - Y_i)^2

Root of the unsystematic mean product difference (RMPDu): \sqrt{MPD_u}, with MPD_u = \frac{1}{N} \sum_{i=1}^{N} |X_i - \hat{X}_i| \times |Y_i - \hat{Y}_i|

Root of the systematic mean product difference (RMPDs): \sqrt{MSD - MPD_u}

Coefficient of determination (R²): \left( \frac{\sigma(X,Y)}{\sigma(X)\,\sigma(Y)} \right)^2

Temporal smoothness (δ): \sigma(d_i) = P(d_{i+1}) - P(d_i) - \frac{P(d_i) - P(d_{i+2})}{d_i - d_{i+2}} (d_i - d_{i+1})

Time-series smoothness index (TSI): \sqrt{\frac{1}{N-2} \sum_{i=1}^{N-2} \sigma(d_i)^2}

Note: * N = number of valid samples used for comparison; σ(X) and σ(Y) = standard deviations of X and Y; σ(X,Y) = the covariance of X and Y; X_i and \hat{X}_i (Y_i and \hat{Y}_i) = observed and model-estimated values of the attribute; Acc = the accuracy (mean bias) of the differences X_i - Y_i. P(d_i), P(d_{i+1}), and P(d_{i+2}) are three consecutive observations on dates d_i, d_{i+1}, and d_{i+2}.
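A few of the paired-comparison metrics in Table 1.5 can likewise be computed directly; the sketch below follows the table as reconstructed here (accuracy as mean bias, precision, RMSD, and relative difference), with hypothetical observed and estimated arrays.

```python
import numpy as np

x = np.array([0.52, 0.61, 0.47, 0.70, 0.58])  # observed values (X_i)
y = np.array([0.50, 0.64, 0.45, 0.66, 0.60])  # model-estimated values (Y_i)

n = x.size
acc = np.mean(x - y)                                   # accuracy (mean bias)
prec = np.sqrt(np.sum((x - y - acc) ** 2) / (n - 1))   # precision / repeatability
rmsd = np.sqrt(np.mean((x - y) ** 2))                  # uncertainty (RMSD)
rel_diff = 100 * (x - y) / y                           # relative difference, %

print(acc, prec, rmsd, rel_diff.round(1))
```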
While LiDAR data are generally immune from cloud contamination, radar imagery may be noisy in rainy conditions. Besides, flying in stormy weather degrades image geometry. Although airborne LiDAR data are not affected by clouds, they are prone to blind spots where no LiDAR pulses can reach, usually in areas hidden from the line of sight (LOS). This issue of blind spots worsens with terrestrial laser scanning. Related to the use of drone images is the absence of infrared bands that can capture the most subtle spectral variations in the target parameter. While this issue does not determine the success or failure of quantification, it does adversely impact the reliability of the quantified results.
Apart from remote sensing data quality, data availability can also be problematic if the period of study spans a decade or longer, since most satellites have a life expectancy shorter than this duration. Data that used to be acquired from one sensor can cease to exist, creating headaches in the long-term monitoring of the quantified parameter. The only exception is Landsat imagery, which has maintained a consistent standard over multiple decades. Whenever new data from a recently launched satellite become available, the existing estimation models (mostly empirical and semi-empirical) and the best predictor variables may no longer work or be optimal, so new estimation models have to be developed from scratch with the newly emerged data. Long-term quantitative remote sensing may be accomplished using satellite data acquired from multiple sensors. Even so, it is largely impossible to undertake historic quantification retrospectively in some cases (e.g., where historic in situ samples cannot be collected), so quantification is mostly restricted to the present time.
1.5.2 Retrospective Quantification
Quantitative remote sensing can be carried out only when both remotely sensed and ground data are collected concurrently. Since in situ data cannot be collected retrospectively, it is impossible to quantify the past state in spite of the existence of historic imagery data. One way of circumventing this problem to accomplish retrospective quantification is to make use of invariant targets on the image, such as snow. Such covers have a constant spectral reflectance irrespective of the time of sensing, except for the influence of varying atmospheric conditions. Through a mutual radiometric calibration equation between the current and historic images of the same area, the pixel values on the historic image can be converted to the ground parameter value without the need for in situ samples. This method also works for multi-temporal images and can save the processing of individual images and expedite the quantification process. It is underpinned by the assumption that the atmospheric conditions remain unchanged during multi-temporal sensing. Admittedly, this method of relative calibration does not work if spectrally invariant ground objects are absent from the scene of study.
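One common way to implement such relative calibration is to regress the historic image's pixel values against those of the current, already calibrated image over spectrally invariant targets and then apply the fitted gain and offset to the whole historic scene. The sketch below assumes the invariant-pixel samples have already been identified; all array names and values are placeholders.

```python
import numpy as np

# DN values sampled over spectrally invariant targets (e.g., snow) in both scenes
dn_historic = np.array([82, 95, 110, 74, 101, 88], dtype=float)
dn_current = np.array([60, 71, 85, 52, 77, 66], dtype=float)  # current, calibrated scene

# Least-squares gain and offset mapping historic DN onto the current radiometric scale
gain, offset = np.polyfit(dn_historic, dn_current, deg=1)

# Apply to the full historic image so the existing estimation model can be reused
historic_image = np.array([[80, 120], [95, 70]], dtype=float)  # placeholder image
normalized = gain * historic_image + offset
print(gain, offset)
print(normalized)
```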
If the target of quantification has a rather low value or concentration below the radiometric resolution of the images used (or falling within the uncertainty range of LiDAR data), then its signature could be rather indistinct, raising the possibility of unsuccessful quantification. The accuracy of quantification can be further lowered by the co-existence of multiple targets in the same sensing environment. Their spectral signatures may interfere with each other, making the isolation of the specific signature of individual targets almost impossible. Similarly, if the target of quantification is mobile, then its displacement must exceed the pixel size of the imagery being used or the positioning uncertainty of LiDAR data. Naturally, both the minimum size of movement and the lowest magnitude of vertical shift (or pace of movement) must be an order of magnitude higher than the image spatial resolution or the data inaccuracy to generate a viable quantitative value.
1.5.3 Data Mismatch
As stated previously, the accuracy of the results quantified from remotely sensed data is validated
via comparison with in situ sampled data. However, the two do not always match perfectly. In fact,
there are potentially three mismatches between them that degrade the accuracy indicators generated.
(i) Temporal mismatch. Temporal mismatch between imagery-derived and field-collected data may arise from poor synchronization of in situ sampling with image acquisition: satellite images can be acquired for a huge ground area within seconds, or at most within hours for airborne data. In comparison, field sampling is a strenuous and time-consuming process, taking up to days and even weeks to complete over several campaigns. The entire study area has to be traversed to collect spatially representative samples, as in cruising across tens of thousands of square kilometers of open ocean to measure bathymetry. Although bathymetry may not change over a short time, the sensing conditions do (e.g., changed wind speed and wave height). The sampling itself may be lengthy under unfavorable conditions, such as inland river waters shrouded in a thickly wooded area. Synchronization becomes even more difficult if both the remotely sensed data and the field data have to be obtained in the field, as a shortage of field personnel makes it increasingly challenging to collect the two simultaneously. In some applications, a delay of a few days will not introduce noticeable changes to the target parameter of quantification on the ground (e.g., forest biomass), but it could mean that the weather conditions at the time of sensing differ drastically from the ground-truthing conditions. Other target parameters of quantification may experience marked changes in the interim, such as suspended sediments or algal blooms, for which a delay of a few hours in sampling can mean a huge variation in the target state. This temporal mismatch potentially reduces the reliability of the quantification outcome, or even thwarts the quantification efforts completely.
(ii) Scale mismatch. The spatial scale mismatch between in situ samples and image pixel size can be as large as tens of square kilometers. The minimum area that can be resolved on a remote sensing image is dictated by its spatial resolution, which ranges from kilometers to meters for space-borne data and down to centimeters for drone images. As discussed in Section 1.2.1, field sampling takes place mostly at the point scale. Even at the areal scale, plot size has to be limited to tens of meters at most to be manageable. The ground truth data enumerated at the sampling points or over the small sample plots can never match the image pixel size; the pixels, in contrast, represent the target parameter value integrated over the ground area corresponding to the pixel size of the imagery in use. When the point-sampled values are upscaled to the pixel size (a minimal sketch is given at the end of this section), it is assumed that the quantified target parameter is spatially uniform within the pixel. Many global data products routinely produced from coarse-resolution satellite data have a pixel size measured in kilometers; they are not applicable at the surface level or on the local scale when validating quantification outcomes generated from medium-resolution images. Besides, ancillary environmental variables, such as rainfall and temperature, that are commonly used in certain quantitative remote sensing applications are enumerated at points, not over an area.
(iii) Dimension mismatch. Remotely sensed data, especially imagery data, capture the surface reflectance of the target and render it as 2D images, even though the target may be inherently 3D in nature, such as tree biomass. When ground data are compared with imagery-derived results to assess their accuracy, a dimensional mismatch is induced between them. Certain parameters, such as air pollutants, may be 3D, yet they are estimated in the in situ measurements at a dimension different from that of the remotely quantified results. Namely, the estimates from space-borne imagery data refer to the atmosphere-column concentration integrated from the surface to the top of the atmosphere where the sensor is located, whereas the air pollutant concentrations used for accuracy assessment are recorded at points near the Earth’s surface. This mismatch in the sensible depth is also common with in-water constituents. The depth to which the radiation used in sensing can penetrate the target is a function of the target itself and the wavelength of the captured radiation. For visible sunlight and infrared radiation, the depth of ground penetration is rather shallow, especially when the surface is moist. As for water, the depth to which the visible light used in sensing can penetrate may not match exactly the depth at which in situ water samples are collected. On the other hand, in situ data that are used to validate the remotely quantified results are obtained at a certain depth below the surface (e.g., 20 cm in sampling soil properties). Unless the in-water constituents have a uniform vertical distribution, this mismatch in depth will introduce errors into the assessed accuracy. This means that it may be necessary to calibrate the remotely sensed values by the vertical distribution of the target, such as suspended sediments, in order to generate volume information. With TIR sensing, as in the quantification of sea surface temperature, the radiation is mostly absorbed by water, leading to little downward penetration, so the satellite imagery-produced temperature of the sea surface is confined to the top few microns or slightly deeper. In comparison, it is impossible to measure sea surface temperature at this depth in the field, as the thermometer must be immersed in water to measure the water temperature properly rather than the air temperature. This depth mismatch will introduce uncertainty into the validation accuracy and make it less reflective of the genuine accuracy of the remotely quantified outcome.
All of these mismatches inevitably create discrepancies between ground data and remotely quantified results and undermine the assessed accuracy of quantification.
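To illustrate the point-to-pixel upscaling noted under the scale mismatch above, the sketch below simply averages the point samples falling within each pixel footprint, which implicitly assumes the parameter is spatially uniform within a pixel; the coordinates, values, and pixel size are hypothetical.

```python
import numpy as np

pixel_size = 30.0  # metres, e.g., a medium-resolution image

# Hypothetical point samples: x, y coordinates (m) and measured parameter values
pts_xy = np.array([[12.0, 5.0], [25.0, 18.0], [44.0, 8.0], [51.0, 27.0], [16.0, 40.0]])
values = np.array([3.2, 2.9, 4.1, 4.4, 3.6])

# Index of the pixel (column, row) that each point falls into
cols_rows = np.floor(pts_xy / pixel_size).astype(int)

# Average all point values within the same pixel footprint
upscaled = {}
for (c, r), v in zip(map(tuple, cols_rows), values):
    upscaled.setdefault((c, r), []).append(v)
upscaled = {pix: float(np.mean(v)) for pix, v in upscaled.items()}
print(upscaled)   # pixel-level values to compare against image-derived estimates
```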
REFERENCES
Introduction
Adamsen FG , PJ Pinter , EM Barnes , RL LaMorte , GW Wall , SW Leavitt , and BA Kimball (1999) Measuring
wheat senescence with a digital camera. Crop Sci 39: 719–724.
Ali AM , R Darvishzadeh , AK Skidmore , I van Duren , U Heiden , and M Heurich (2016). Estimating leaf
functional traits by inversion of PROSPECT: assessing leaf dry matter content and specific leaf area in mixed
mountainous forest. Int. J. Appl. Earth Obs. Geoinf. 45: 66–76.
Baret F and G Guyot (1991) Potentials and limits of vegetation indices for LAI and APAR assessment. Rem
Sens Environ 35(2–3): 161–173. doi: 10.1016/0034-4257(91)90009-U
Bendig J , K Yu , H Aasen , A Bolten , S Bennertz , J Broscheit , M Gnyp , and G Bareth (2015). Combining
UAV-based plant height from crop surface models, visible, and near infrared vegetation indices for biomass
monitoring in barley. Int J Appl Earth Obs Geoinfo 39: 79–87. doi: 10.1016/j.jag.2015.02.012
Bréda NJJ (2003) Ground-based measurements of leaf area index: A review of methods, instruments and
current controversies. J Exp Bot 54(392): 2403–2417. doi: 10.1093/jxb/erg263
Chason JW , DD Baldocchi , and MA Huston (1991) A comparison of direct and indirect methods for estimating
forest canopy leaf area. Agri Forest Meteo 57: 107–128.
Chen JM (1996) Evaluation of vegetation indices and a modified simple ratio for boreal applications. Can J Rem
Sens 22(3): 229–242. doi: 10.1080/07038992.1996.10855178
Clevers JGPW (1989) The application of a weighted infrared-red vegetation index for estimating leaf area index
by correcting for soil moisture. Rem Sens Environ 29: 25–37.
Crippen RE (1990) Calculating the vegetation index faster. Rem Sens Environ 34(1): 71–73. doi: 10.1016/0034-
4257(90)90085-Z
Darvishzadeh R , A Skidmore , H Abdullaha , E Cherenet , A Ali , T Wang , W Nieuwenhuis , M Heurich , A
Vrieling , B O'Connor , and M Paganini (2019) Mapping leaf chlorophyll content from Sentinel-2 and RapidEye
data in spruce stands using the invertible forest reflectance model. Int J Appl Earth Obs Geoinfo . 79: 58–70.
doi: 10.1016/j.jag.2019.03.003
Gao J (2022) Fundamentals of Spatial Analysis and Modelling. Boca Raton: CRC Press, 348 p.
Gitelson AA (2004) Wide dynamic range vegetation index for remote quantification of biophysical
characteristics of vegetation. J Plant Physiol 161: 165–173.
Gitelson AA , YJ Kaufman , R Stark , and D Rundquist (2002) Novel algorithms for remote estimation of
vegetation fraction. Rem Sens Environ 80: 76–87
Gitelson AA , A Viña , V Ciganda , D Rundquist , and TJ Arkebauer (2005) Remote estimation of canopy
chlorophyll content in crops. Geophy Res Lett. 32, L08403. doi:10.1029/2005GL022688
Hoek G, R Beelen, K De Hoogh, D Vienneau, J Gulliver, P Fischer, and D Briggs (2008) A review of land-use
regression models to assess spatial variation of outdoor air pollution. Atmos Environ 42(33): 7561–7578.
Horler DNH, M Dockray, and J Barber (1983) The red edge of plant leaf reflectance. Int J Rem Sens 4(2):
273–288. doi: 10.1080/01431168308948546
Huete AR (1988) A soil adjusted vegetation index (SAVI). Rem Sens Environ 25: 295–309.
Huete A et al. (2002) Overview of the radiometric and biophysical performance of the MODIS vegetation
indices. Rem Sens Environ 83: 195–213.
Jin R , X Li , M Ma , Y Ge , T Che , ... and Q Xiao (2016) Remote sensing products validation activity and
observation network in China. IEEE Int Geosci Rem Sens Sympo (IGARSS), Beijing, China, pp. 7623–7626.
doi: 10.1109/IGARSS.2016.7730988
Jordan CF (1969) Derivation of leaf area index quality of light on the forest floor. Ecol 50(4): 663–666.
Kaufman YJ and D Tanré (1992) Atmospherically resistant vegetation index (ARVI) for EOS-MODIS. IEEE
Trans Geosci Rem Sens 30: 261–270.
Kawashima S and M Nakatani (1998) An algorithm for estimating chlorophyll content in leaves using a video
camera. Ann Bot 81: 49–54.
Kim JH (2009) Estimating classification error rate: Repeated cross-validation, repeated hold-out and bootstrap.
Comput Stat Data Analysis 53(11): 3735–3745.
Kohavi R (1995) A study of cross-validation and bootstrap for accuracy estimation and model selection. Int Joint
Conf on Art Int 14(2): 1137–1145.
Lee M-S , K-A Park , and F Micheli (2021) Derivation of red tide index and density using Geostationary Ocean
Color Imager (GOCI) data. Rem Sens 13: 298. doi: 10.3390/rs13020298
Li L , RE Sengpiel , DL Pascual , LP Tedesco , JS Wilson , and E Soyeux (2010) Using hyperspectral remote
sensing to estimate chlorophyll-A and phycocyanin in a mesotrophic reservoir. Int J Rem Sens 31(15):
4147–4162. doi: 10.1080/01431161003789549
Liu HQ , and AA Huete (1995) Feedback based modification of the NDVI to minimize canopy background and
atmospheric noise. IEEE Trans Geosci Rem Sens 33: 457–465. doi: 10.1109/36.377946
Louhaichi M , MM Borman , and DE Johnson (2001) Spatially located platform and aerial photography for
documentation of grazing impacts on wheat. Geocarto Int 16: 65–70.
Lussem U , A Bolten , M Gnyp , J Jasper , and G Bareth (2018). Evaluation of RGB-based vegetation indices
from UAV imagery to estimate forage yield in grassland. ISPRS - Intern Archives of the Photogram, Rem Sens
and Spat Info Sci. XLII-3. 1215–1219. 10.5194/isprs-archives-XLII-3-1215-2018
Markwell J , JC Ostermann , and JJ Mitchell (1995) Calibration of the Minolta SPAD-502 leaf chlorophyll meter.
Photosynth Res 46: 467–472.
Mitchell BG , M Kahru , J Wieland , and M Stramska (2002) Determination of spectral absorption coefficients of
particles, dissolved material and phytoplankton for discrete water samples. Ocean Opt Protocols Satell Ocean
Color Sensor Validation 3(2): 231–257.
Ouyang Z , Y Gao , X Xie , H Guo , T-T Zhang , and B Zhao (2013) Spectral discrimination of the invasive plant
Spartina alterniflora at multiple phenological stages in a saltmarsh wetland. PloS One 8 : e67315. doi:
10.1371/journal.pone.0067315
Peper PJ and EG McPherson (1998) Comparison of five methods for estimating leaf area index of open grown
deciduous trees. J Arboricult 24(2): 98–111.
Pinty B and MM Verstraete (1992) GEMI: A non-linear index to monitor global vegetation from satellites.
Vegetatio 101: 15–20.
Qi J , A Chehbouni , AR Huete , YH Kerr , and S Sorooshian (1994) A modified soil adjusted vegetation index.
Rem Sens Environ 48(2): 119–126 . doi: 10.1016/0034-4257(94)90134-1
Rondeaux G , M Steven , and F Baret (1996) Optimization of soil-adjusted vegetation indices. Rem Sens
Environ 55: 95–107.
Rouse JW et al. (1974) Monitoring vegetation systems in the great plains with ERTS. NASA Spec Publ 351:
309.
Rouse JW , RH Haas , JA Schell , DW Deering , and JC Harlan (1974) Monitoring the vernal advancement of
retrogradation of natural vegetation, NASA/GSFC, Type III, Final Report, Greenbelt, MD, 371 pp.
Ruddick KG , K Voss , E Boss , A Castagna , R Frouin , A Gilerson , M Hieronymi , BC Johnson , J Kuusk , Z
Lee , M Ondrusek , V Vabson , and R Vendt (2019) A review of protocols for fiducial reference measurements
of water-leaving radiance for validation of satellite remote-sensing data over water. Rem Sens 11(19): 2198.
doi: 10.3390/rs11192198
Rundquist D , A Gitelson , B Leavitt , A Zygielbaum , R Perk , and G Keydan (2014) Elements of an integrated
phenotyping system for monitoring crop status at canopy level. Agronomy 4(1): 108.
Shang Z , Z Lee , Q Dong , and J Wei (2017) Self-shading associated with a skylight-blocked approach system
for the measurement of water-leaving radiance and its correction. Appl Optics 56(25): 7033–7040.
doi:10.1364/ao.56.007033
Tan C , D Wang , J Zhou , Y Du , M Luo , Y Zhang , and W Guo (2018) Remotely assessing Fraction of
Photosynthetically Active Radiation (FPAR) for wheat canopies based on hyperspectral vegetation indexes.
Front Plant Sci 9: 776. doi: 10.3389/fpls.2018.00776
Tang JW , GL Tian , XY Wang , XM Wang , and QJ Song (2004) The methods of water spectra measurement
and analysis I: Above-water method. J Rem Sens 8: 37–44.
Torres-Sánchez J , JM Peña-Barragán , A De Castro , and F López-Granados (2014). Multi-temporal mapping
of the vegetation fraction in early-season wheat fields using images from UAV. Compu Electr Agric. 103:
104–113. 10.1016/j.compag.2014.02.009
Tucker CJ (1979) Red and photographic infrared linear combinations for monitoring vegetation. Rem Sens
Environ 8: 127–150.
Verrelst J , G Camps-Valls , J Muñoz-Marí , JP Rivera , F Veroustraete , JGPW Clevers , and J Moreno
(2015a) Optical remote sensing and the retrieval of terrestrial vegetation bio-geophysical properties – A review.
ISPRS J Photogram Rem Sens 108: 273–290. doi: 10.1016/j.isprsjprs.2015.05.005
Verrelst J , JP Rivera , F Veroustraete , J Muñoz-Marí , JGPW Clevers , G Camps-Valls , and J Moreno
(2015b) Experimental Sentinel-2 LAI estimation using parametric, non-parametric and physical retrieval
methods – A comparison. ISPRS J Photogram Rem Sens 108: 260–272. doi: 10.1016/j.isprsjprs.2015.04.013
Vohland M , S Mader , and W Dorigo (2010) Applying different inversion techniques to retrieve stand variables
of summer barley with PROSPECT+SAIL. Int J Appl Earth Obs Geoinfo 12(2): 71–80. doi:
10.1016/j.jag.2009.10.005
Weng Y , P Gong , and Z Zhu (2008) Soil salt content estimation in the Yellow River delta with satellite
hyperspectral data. Canadian J Rem Sens 34(3): 259–270. doi.org/10.5589/m08-017
Woebbecke D , G Meyer , K VonBargen , and D Mortensen (1995) Color indices for weed identification under
various soil, residue, and lighting conditions. Trans ASAE 38 (1): 271–281.
Wolters E , C Toté , S Sterckx , S Adriaensen , C Henocq , J Bruniquel , S Scifoni , and S Dransfeld (2021)
iCOR atmospheric correction on Sentinel-3/OLCI over land: Intercomparison with AERONET, RadCalNet, and
SYN level-2. Rem Sens 13(4): 654. doi: 10.3390/rs13040654
Zhang Y , Y Zhang , K Shi , Y Zha , Y Zhou , and M Liu (2016) A Landsat 8 OLI-based, semi-analytical model
for estimating the total suspended matter concentration in the slightly turbid Xin’anjiang Reservoir (China). IEEE
J Sel Top Appl Earth Obs Rem Sens 9(1): 398–413.
Sensing Platforms and Data
Chapman JW , DR Thompson , MC Helmlinger , BD Bue , RO Green , ML Eastwood , S Geier , W Olson-Duvall
, and SR Lundeen (2019) Spectral and radiometric calibration of the next generation airborne visible infrared
spectrometer (AVIRIS-NG). Rem Sens 11: 2129. doi: 10.3390/rs11182129
Cheng KH , SN Chan , and JHW Lee (2020) Remote sensing of coastal algal blooms using unmanned aerial
vehicles (UAVs). Mar Pollut Bull 152: 110889.
Cocks T , R Jenssen , A Stewart , I Wilson , and T Shields (1998) The HyMaptm airborne hyperspectral sensor:
The system, calibration and performance. Paper Presented at 1st EARSEL Workshop on Imaging
Spectroscopy, Zurich, October 1998.
Gao J (2023) Remote Sensing of Natural Hazards. Boca Raton: CRC Press, 437 p.
Geipel J , J Link , and W Claupein (2014) Combined spectral and spatial modeling of corn yield based on aerial
images and crop surface models acquired with an unmanned aircraft system. Rem Sens 6(11): 10335–10355.
doi: 10.3390/rs61110335
Ihab J (2017) Hyperspectral Imaging for Landmine Detection. PhD thesis, Lebanese University and Politecnico
di Torino, 119 p.
Lechner AM , GM Foody , and DS Boyd (2020) Applications in remote sensing to forest ecology and
management. One Earth 2: 405–412. doi: 10.1016/j.oneear.2020.05.001
Lee CM , ML Cable , SJ Hook , RO Green , S. Ustin , DJ Mandl , and EM Middleton (2015) An introduction to
the NASA Hyperspectral InfraRed Imager (HyspIRI) mission and preparatory activities. Rem Sens Environ 167:
6–19 . doi: 10.1016/j.rse.2015.06.012
Lefsky MA , WB Cohen , GG Parker , and DJ Harding (2002) Lidar remote sensing for ecosystem studies.
Biosci 52: 19–30.
Lu H , D Qiao , Y Li , S Wu , and L Deng (2021) Fusion of China ZY-1 02D hyperspectral data and multispectral
data: Which methods should be used? Rem Sens 13(12): 2354. doi: 10.3390/rs13122354
Luetzenburg G , A Kroon , and AA Bjørk (2021). Evaluation of the Apple iPhone 12 Pro LiDAR for an
application in geosciences. Sci Rep 11: 22221. doi: 10.1038/s41598-021-01763-9
Planet Team (2017) Planet Application Program Interface: In Space for Life on Earth. San Francisco, CA, 2017,
40 p.
Pyo JC , SM Hong , JP Jang , P Sanghun , N Jongkwan , Noh, H Jae , and KH Cho (2022) Drone-borne
sensing of major and accessory pigments in algae using deep learning modeling. GISci Rem Sens 59(1):
310–332. doi:10.1080/15481603.2022.2027120
Ren K , W Sun , X Meng , G Yang , and Q Du (2020) Fusing China GF-5 hyperspectral data with GF-1, GF-2
and Sentinel-2A multispectral data: Which methods should be used? Rem Sens 12(5): 882. doi:
10.3390/rs12050882
Swayze NC , WT Tinkham , JC Vogeler , and AT Hudak (2021) Influence of flight parameters on UAS-based
monitoring of tree height, diameter, and density. Rem Sens Environ 263, 112540, doi:
10.1016/j.rse.2021.112540
Thorpe AK , C Frankenberg , A Aubrey , D Roberts , A Nottrott , T Rahn , J Sauer , M Dubey , K Costigan , C
Arata , A Steffke , S Hills , C Haselwimmer , D Charlesworth , C Funk , R Green , S Lundeen , J Boardman , M
Eastwood , C Sarture , S Nolte , I Mccubbin , D Thompson , and J McFadden (2016) Mapping methane
concentrations from a controlled release experiment using the next generation airborne visible/infrared imaging
spectrometer (AVIRIS-NG). Rem Sens Environ 179: 104–115. doi: 10.1016/j.rse.2016.03.032
Yi Y and W Zhang (2020) A new deep-learning-based approach for earthquake-triggered landslide detection
from single-temporal RapidEye satellite imagery. IEEE J Sel Top Appl Earth Obs Rem Sens 13: 6166–6176.
doi: 10.1109/JSTARS.2020.3028855
Yu Z , J Wang , Y Li , CK Shum , B Wang , X He , H Xu , Y Xu , and B Zhou (2022) Remote sensing of
suspended sediment in high turbid estuary from Sentinel-3A/OLCI: A case study of Hangzhou Bay. Front Mar
Sci 9: 1–17. doi: 10.3389/fmars.2022.1008070
Zhang D , L Yuan , S Wang , H Yu , C Zhang , D He , G Han , J Wang , and Y Wang (2019) Wide swath and
high resolution airborne hyperspectral imaging system and flight validation. Sensors 19(7): 1667. doi:
10.3390/s19071667
Radiometric Correction
Berk A , GP Anderson , PK Acharya , ML Hoke , JH Chetwynd , LS Bernstein , EP Shettle , MW Matthew , and
SM Adler-Golden (2003) MODTRAN4 Version 3, Revision 1, User’s Manual. Bedford: Air Force Research
Laboratory, Hanscom AFB.
Bilal M , M Nazeer , JE Nichol , MP Bleiweiss , Z Qiu , E Jäkel , JR Campbell , L Atique , X Huang , and S Lolli
(2019) A simplified and robust surface reflectance estimation method (SREM) for use over diverse land
surfaces using multi-sensor data. Rem Sens 11(11): 1344 . doi: 10.3390/rs11111344
Brockmann C , R Doerffer , M Peters , S Kerstin , S Embacher , and A Ruescas (2016) Evolution of the C2RCC
neural network for Sentinel 2 and 3 for the retrieval of ocean colour products in normal and extreme optically
complex waters. In Ouwehand L (ed.) Living Planet Symposium, Proceedings of the conference held 9-13 May
2016 in Prague, Czech Republic. ESA-SP 740, 54.
Casal G , X Monteys , J Hedley , P Harris , C Cahalane , and T McCarthy (2019) Assessment of empirical
algorithms for bathymetry extraction using Sentinel-2 data. Int J Rem Sens 40: 2855–2879.
Chavez PS (1988) An improved dark-object subtraction technique for atmospheric scattering correction of
multispectral data. Rem Sens Environ 24(3): 459–479. doi: 10.1016/0034-4257(88)90019-3
Doxani G , E Vermote , J-C Roger , F Gascon , S Adriaensen , D Frantz , O Hagolle , A Hollstein , G Kirches , F
Li , J Louis , A Mangin , N Pahlevan , B Pflug , and Q Vanhellemont (2018) Atmospheric correction inter-
comparison exercise. Rem Sens 10(2): 352. doi: 10.3390/rs10020352
Frantz D (2019) FORCE—Landsat + Sentinel-2 analysis ready data and beyond. Rem Sens 11: 1124.
doi:10.3390/rs11091124
Frantz D , A Röder , M Stellmes , and J Hill (2016) An operational radiometric Landsat preprocessing
framework for large-area time series applications. IEEE Trans Geosci Rem Sens PP(99): 1–16.
Gafoor FA , MR Al-Shehhi , C-S Cho , and H Ghedira (2022) Gradient boosting and linear regression for
estimating coastal bathymetry based on Sentinel-2 images. Rem Sens 14(19): 5037. doi: 10.3390/rs14195037
Gascon F , C Bouzinac , O Thépaut , M Jung , B Francesconi , J Louis , V Lonjou , B Lafrance , S Massera , A
Gaudel-Vacaresse , F Languille , B Alhammoud , F Viallefont , B Pflug , J Bieniarz , S Clerc , L Pessiot , T
Trémas , E Cadau , R De Bonis , C Isola , P Martimort , and V Fernandez (2017) Copernicus Sentinel-2A
calibration and products validation status. Rem Sens 9(6): 584 . doi: 10.3390/rs9060584
Kaufman YJ , AE Wald , LA Remer , B-C Gao , R-R Li , and L Flynn (1997) The MODIS 2.1-μm
channel—correlation with visible reflectance for use in remote sensing of aerosol. IEEE Trans Geosci Rem
Sens 35: 1286–1298.
Kaufman YJ and C Sendra (1988) Algorithms for automatic atmospheric corrections to visible and near-infrared
satellite imagery. Int J Rem Sens 9:1357–1381.
Kobayashi S and K Sanga-Ngoie (2008) The integrated radiometric correction of optical remote sensing
imageries. Int J Rem Sens 29(20): 5957–5985. doi: 10.1080/01431160701881889
Lantzanakis G , Z Mitraka , and N Chrysoulakis (2016) Comparison of physically and image based atmospheric
correction methods for Sentinel-2 satellite imagery. Proc SPIE 9688 . doi: 10.1117/12.2242889
Lavender SJ , MH Pinkerton , GF Moore , J Aiken , and D Blondear-Patissier (2005) Modification to the
atmospheric correction of SeaWiFS ocean color images over turbid waters. Cont Shelf Res 25(4): 539–555.
Li L , Z Chen , Q Wang , Q Li , and F Zheng (2009) Research on dark dense vegetation algorithm based on
environmental satellite CCD data. IEEE Int Geosci Remote Sens Symposium, Cape Town, South Africa, II-515-
II-518, doi: 10.1109/IGARSS.2009.5418132
Liu Y , Y Qian , N Wang , L Ma , C Gao , S Qiu , C Li , and L Tang (2019) An improved dense dark vegetation
based algorithm for aerosol optical thickness retrieval from hyperspectral data. Proc. SPIE 11028, Optical
Sensors, 1102812, doi: 10.1117/12.2524488
Main-Knorn M , B Pflug , J Louis , V Debaecker , U Müller-Wilm , and F Gascon (2017) Sen2Cor for Sentinel-2.
Conference Paper October 2017. doi: 10.1117/12.2278218
Perkins T , SM Adler-Golden , MW Matthew , A Berk , LS Bernstein , J Lee , and M Fox (2012) Speed and
accuracy improvements in FLAASH atmospheric correction of hyperspectral imagery. Opt Eng 51: 1707. doi:
10.1117/1.OE.51.11.111707
Rahman R and G Dedieu (1994) SMAC: A simplified method for the atmospheric correction of satellite
measurements in the solar spectrum. Int J Rem Sens 15(1): 123–143.
Richter R and D Schläpfer (2023) Atmospheric/Topographic Correction for Satellite Imagery (ATCOR-2/3 User
Guide, Version 9.4.1). ReSe Applications Schläpfer, Switzerland.
Richter R , D Schläpfer , and A Muller (2006) An automatic atmospheric correction algorithm for visible/NIR
imagery. Int J Rem Sens 27(9–10): 2077–2085.
Richter R , J Louis , and U Müller-Wilm (2012) Sentinel-2 MSI—Level 2A Products Algorithm Theoretical Basis
Document; S2PAD-ATBD-0001, Issue 2.0. Telespazio VEGA Deutschland GmbH, Darmstadt, Germany.
Schiller H and R Doerffer (1999) Neural network for emulation of an inverse model operational derivation of
Case II water properties from MERIS data. Int J Rem Sens 20(9): 1735–1746. doi: 10.1080/014311699212443
Shanmugam P (2012) CAAS: An atmospheric correction algorithm for the remote sensing of complex waters.
Ann Geophys 30(1): 203–220. doi: 10.5194/angeo-30-203-2012
Sterckx S , E Knaeps , S Adriaensen , I Reusen , L De Keukelaere , P Hunter , C Giardino , and D Odermatt
(2015) OPERA: An atmospheric correction for land and water. In Ouwehand L (ed.) Sentinel-3 for Science
Workshop Proceedings, 2–5 June 2015. Venice, Italy.
Vanhellemont Q and K Ruddick (2016) ACOLITE for Sentinel-2: Aquatic applications of MSI imagery. Proc. of
the 2016 ESA Living Planet Symposium, Prague, Czech Republic, 9-13 May 2016, ESA Special Publication
SP-740.
Vanhellemont Q and K Ruddick (2018) Atmospheric correction of metre-scale optical satellite data for inland
and coastal water applications . Rem Sens Environ 216: 586–597 . doi: 10.1016/j.rse.2018.07.015
Vermote EF , D Tanré , JL Deuze , M Herman , and J-J Morcette (1997) Second simulation of the satellite
signal in the solar spectrum, 6S: An overview. IEEE Trans Geosci Rem Sens 35: 675–686, doi:
10.1109/36.581987
Vermote EF , D Tanré , JL Deuze , M Herman , and J-J Morcette (2006) Second simulation of a satellite signal
in the solar spectrum-vector (6SV). 6S User Guide Version 3, University of Maryland, 56 p.
Wang J , Y Wang , Z Lee , D Wang , S Chen , and W Lai (2022) A revision of NASA SeaDAS atmospheric
correction algorithm over turbid waters with artificial Neural Networks estimated remote-sensing reflectance in
the near-infrared. ISPRS J Photogram Rem Sens 194: 235–249. doi: 10.1016/j.isprsjprs.2022.10.014
Wang Y , X Wang , H He , and G Tian (2019) An improved dark object subtraction method for atmospheric
correction of remote sensing images. In Wang Y , Q Huang , and Y Peng (eds.) Image and graphics
technologies and applications. IGTA 2019. Communications in computer and information science, vol 1043.
Springer, 425–435. doi: 10.1007/978-981-13-9917-6_41
Wolters E , C Toté , S Sterckx , S Adriaensen , C Henocq , J Bruniquel , S Scifoni , and S Dransfeld (2021)
iCOR atmospheric correction on Sentinel-3/OLCI over land: Intercomparison with AERONET, RadCalNet, and
SYN Level-2. Rem Sens 13(4): 654. doi: 10.3390/rs13040654
Zhang M , R Ma , J Li , B Zhang , and H Duan (2014) A validation study of an improved SWIR iterative
atmospheric correction algorithm for MODIS-Aqua measurements in Lake Taihu, China. IEEE Trans Geosci
Rem. Sens 52(8): 4686– 4695. doi: 10.1109/TGRS.2013.2283523
Analytical Methods
Ahmed AAM , E Sharma , SJJ Jui , RC Deo , T Nguyen-Huy , and M Ali (2022) Kernel ridge regression hybrid
method for wheat yield prediction with satellite-derived predictors. Rem Sens 14: 1136. doi:
10.3390/rs14051136
Arya S , MD Mount , NS Netanyahu , R Silverman , and AY Wu (1998) An optimal algorithm for approximate
nearest neighbour searching fixed dimensions. J Assoc Computing Machinery 45: 891–923.
Awad M and R Khanna (2015) Chapter 4 – Support vector regression. In M Awad and R Khanna (eds.) Efficient
learning machines—theories, concepts, and applications for engineers and system designers, pp 67–80. New
York: Springer Science+Business Media.
Ayoub F , LePrince S , and Keene L (2009) User’s guide to COSI-Corr: Co-registration of optically sensed
images and correlation. www.tectonics.caltech.edu/slip_history/spot_coseis/pdf_files/cosi-corr_guide.pdf
Breiman L (1996) Bagging predictors. Mach Learn 24: 123–140. doi: 10.1007/BF00058655
Brownlee J (2019) How to choose a feature selection method for machine learning. In Data preparation ,
retrieved from https://machinelearningmastery.com/feature-selection-with-real-and-categorical-data/
Cheng E , B Zhang , D Peng , L Zhong , L Yu , Y Liu , C Xiao , C Li , X Li , Y Chen , H Ye , H Wang , R Yu , J
Hu , and S Yang (2022) Wheat yield estimation using remote sensing data based on machine learning
approaches. Front Plant Sci 13: 1–16. doi: 10.3389/fpls.2022.1090970
Cooper GF and E Herskovits (1992) A Bayesian method for the induction of probabilistic networks from data.
Mach Learn 9: 309–347.
DiPietro R and GD Hager (2020) Chapter 21 - Deep learning: RNNs and LSTM. In SK Zhou , D Rueckert , and
G Fichtinger (eds.) Handbook of medical image computing and computer assisted intervention. Academic
Press, p. 503–519. doi: 10.1016/B978-0-12-816176-0.00026-0
Esposito G , R Salvini , F Matano , M Sacchi , M Danzi , R Somma , and C Troise (2017) Multitemporal
monitoring of a coastal landslide through SfM-derived point cloud comparison. Photogram Rec 32: 459–479.
doi: 10.1111/phor.12218
Gafoor FA , MR Al-Shehhi , C-S Cho , and H Ghedira (2022) Gradient boosting and linear regression for
estimating coastal bathymetry based on Sentinel-2 images. Rem Sens 14(19): 5037. doi: 10.3390/rs14195037
Gao J (2022) Fundamentals of Spatial Analysis and Modelling. Boca Raton: CRC Press, 346 p.
Gao J (2023) Remote Sensing of Natural Hazards. Boca Raton: CRC Press, 437 p.
Guerin A , GM Stock , MJ Radue , M Jaboyedoff , BD Collins , B Matasci , N Avdievitch , and M-H Derron
(2020) Quantifying 40 years of rockfall activity in Yosemite Valley with historical structure-from-motion
photogrammetry and terrestrial laser scanning. Geomor 356 . doi: 10.1016/j.geomorph.2020.107069
Harwin S , and A Lucieer (2012) Assessing the accuracy of georeferenced point clouds produced via multi-view
stereopsis from unmanned aerial vehicle (UAV) imagery. Rem Sens 4: 1573–1599.
Hiestand R (2019) Regard3D. www.regard3d.org/index.php.
Horn BKP (1987) Closed-form solution of absolute orientation using unit quaternions. J Opt Soc Am 4:
629–642.
Huang GB , QY Zhu , and CK Siew (2006) Extreme learning machine: Theory and applications.
Neurocomputing 70: 489–501.
James G , D Witten , T Hastie , and R Tibshirani (2023) An Introduction to Statistical Learning: With
Applications in R (2nd ed.). New York: Springer, 604 p.
Jia K , S Liang , S Liu , Y Li , Z Xiao , Y Yao , B Jiang , X Zhao , X Wang , S Xu , and J Cui (2015) Global land
surface fractional vegetation cover estimation using general regression neural networks from MODIS surface
reflectance. IEEE Trans Geosci Rem Sens 53: 4787–4796.
Lague D , N Brodu , and J Leroux (2013) Accurate 3D comparison of complex topography with terrestrial laser
scanner: Application to the Rangitikei canyon (N-Z). ISPRS J Photogram Rem Sens 82: 10–26. doi:
10.1016/j.isprsjprs.2013.04.009
Lee C , K Lee , S Kim , J Yu , S Jeong , and J Yeom (2021) Hourly ground-level PM2.5 estimation using
geostationary satellite and reanalysis data via deep learning. Rem Sens 13(11): 2121. doi: 10.3390/rs13112121
Liu Y , Y Yin , Z Chu , and S An (2020) CDL: A cloud detection algorithm over land for MWHS-2 based on the
gradient boosting decision tree. IEEE J Sel Topics Appl Earth Obs Rem Sens 13: 4542–4549. doi:
10.1109/JSTARS.2020.3014136
López OAM , A Montesinos López , and J Crossa (2022) Chapter 9 – Support vector machines and support
vector regression. In Multivariate statistical machine learning methods for genomic prediction, pp. 337–378.
Switzerland: Springer. doi: 10.1007/978-3-030-89010-0_9
Lucieer A , SM de Jong , and D Turner (2014) Mapping landslide displacements using Structure from Motion
(SfM) and image correlation of multi-temporal UAV photography. Prog Phys Geog: Earth and Environ 38(1):
97–116. doi: 10.1177/0309133313515293
Mapillary K (2021) OpenSfM https://github.com/mapillary/OpenSfM, Accessed: 2024-06-18.
Mateo-García G , V Laparra , and L Gómez-Chova (2018) Optimizing kernel ridge regression for remote
sensing problems. IGARSS 2018 - IEEE Int Geosci Rem Sens Symp, Valencia, Spain, 4007–4010. doi:
10.1109/IGARSS.2018.8518016
Mello MP , J Risso , C Atzberger , P Aplin , E Pebesma , CAO Vieira , and BFT Rudorff (2013) Bayesian
Networks for Raster Data (BayNeRD): Plausible reasoning from observations. Rem Sens 5(11): 5999–6025.
doi: 10.3390/rs5115999
Moulon P , P Monasse , R Perrot , and R Marlet (2017) OpenMVG: Open multiple view geometry. In Kerautret
B , M Colom , and P Monasse (eds.) Reproducible Research in Pattern Recognition. RRPR 2016. Lecture
Notes in Comp Sci, 10214: 60–74. Springer International. doi: 10.1007/978-3-319-56414-2_5
Mustafa YT , A Stein , V Tolpekin , and PE van Laake (2012) Improving forest growth estimates using a
Bayesian network approach. Photogram Eng Rem Sens 78 (1): 45–51.
Peterson KT , V Sagan , P Sidike , AL Cox , and M Martinez (2018) Suspended sediment concentration
estimation from Landsat imagery along the lower Missouri and middle Mississippi Rivers using an extreme
learning machine. Rem Sens 10(10): 1503. doi: 10.3390/rs10101503
Pourghasemi HR , N Kariminejad , M Amiri , M Edalat , M Zarafshar , T Blaschke , and A Cerda (2020)
Assessing and mapping multi-hazard risk susceptibility using a machine learning technique. Sci Rep 10: 3203.
doi: 10.1038/s41598-020-60191-3
R Core Team (2018) R: A Language and Environment for Statistical Computing. Vienna: R Foundation for
Statistical Computing. www.R-project.org/
Rossel RAV (2007) Robust modelling of soil diffuse reflectance spectra by “bagging-partial least squares
regression”. J Near Infra Spectro 15: 39–47. doi: 10.1255/jnirs.694
Schönberger JL , and JM Frahm (2016) Structure-from-motion revisited. IEEE Conference on Computer Vision
and Pattern Recognition (CVPR), Las Vegas, NV, USA, 4104–4113. doi: 10.1109/CVPR.2016.445
Schulz E , M Speekenbrink , and A Krause (2018) A tutorial on Gaussian process regression: Modelling,
exploring, and exploiting functions. J Math Psych 85: 1–16. doi: 10.1016/j.jmp.2018.03.001
Silva A , M Mello , and LMG Fonseca (2014) Enhancements to the Bayesian Network for Raster Data
(BayNeRD). Proc Brazilian Symp GeoInfo: 73–82.
Snavely N , SM Seitz , and R Szeliski (2008) Modeling the world from internet photo collections. Int J Comput
Vis 80: 189–210. doi: 10.1007/s11263-007-0107-3
Song KS , L Li , S Li , L Tedesco , B Hall , and LH Li (2012) Hyperspectral remote sensing of total phosphorus
(TP) in three central Indiana water supply reservoirs . Water Air Soil Poll 223: 1481–1502. doi: 10.1007/s11270-
011-0959-6
Specht DF (1991) A general regression neural network. IEEE Trans Neural Net 2(6): 568–576. doi:
10.1109/72.97934
Swayze NC , WT Tinkham , JC Vogeler , and AT Hudak (2021) Influence of flight parameters on UAS-based
monitoring of tree height, diameter, and density. Rem Sens Environ 263, 112540. doi:
10.1016/j.rse.2021.112540
Sweeney C (2016) Theia Multiview Geometry Library: Tutorial and Reference. http://theia-sfm.org/#
Verrelst J , G Camps-Valls , J Muñoz-Marí , JP Rivera , F Veroustraete , JGPW Clevers , and J Moreno (2015)
Optical remote sensing and the retrieval of terrestrial vegetation bio-geophysical properties – A review. ISPRS J
Photogram Rem Sens 108: 273–290. doi: 10.1016/j.isprsjprs.2015.05.005
Verrelst J , J Muñoz , L Alonso , J Delegido , J Rivera , G Camps-Valls , and J Moreno (2012a) Machine
learning regression algorithms for biophysical parameter retrieval: Opportunities for Sentinel-2 and -3. Rem
Sens Environ 118: 127–139.
Verrelst J , L Alonso , G Camps-Valls , J Delegido , and J Moreno (2012b) Retrieval of vegetation biophysical
parameters using Gaussian process techniques. IEEE Trans Geosci Rem Sens 50 (5 PART 2): 1832–1843.
Verrelst J , L Alonso , J Rivera Caicedo , J Moreno , and G Camps-Valls (2013) Gaussian process retrieval of
chlorophyll content from imaging spectroscopy data. IEEE J Selected Topics Appl Earth Obs Rem Sens 6(2):
867–874.
Van der Meer F (2004) Analysis of spectral absorption features in hyperspectral imagery. Int J Appl Earth Obs
Geoinfo 5: 55–68.
Warsito B , R Santoso , Suparti, and H Yasin (2018) Cascade forward neural network for time series prediction.
J Phys: Conf Ser 1025: 012097. doi: 10.1088/1742-6596/1025/1/012097
Westoby MJ , J Brasington , NF Glasser , MJ Hambrey , and JM Reynolds (2012) ‘Structure-from-Motion’
photogrammetry: A low-cost, effective tool for geoscience applications. Geomor 179: 300–314. doi:
10.1016/j.geomorph.2012.08.021
Wikle CK , LM Berliner (2007) A Bayesian tutorial for data assimilation. Phys D: Nonlinear Phenom 230(1–2):
1–16. doi: 10.1016/j.physd.2006.09.017
Woodget AS , PE Carbonneau , F Visser , and IP Maddock (2015) Quantifying submerged fluvial topography
using hyperspatial resolution UAS imagery and structure from motion photogrammetry. Earth Surf Process
Landf 40(1): 47–64. doi: 10.1002/esp.3613
Wu C (2011) VisualSFM: A Visual Structure from Motion System. http://ccwu.me/vsfm/
Wu C (2013) Towards linear-time incremental structure from motion. 2013 Int Conf. on 3D Vision - 3DV 2013,
Seattle, WA, USA, 127–134. doi: 10.1109/3DV.2013.25
Zhang Y , Y Qu , J Wang , S Liang , and Y Liu (2012) Estimating leaf area index from MODIS and surface
meteorological data using a dynamic Bayesian network. Rem Sens Environ 127: 30–43. doi:
10.1016/j.rse.2012.08.015
Atmospheric Quantification
Abdalla S (2012) Ku-band radar altimeter surface wind speed algorithm. Marine Geodesy 35(sup1): 276–298.
doi: 10.1080/01490419.2012.718676
Adirosi E , M Montopoli , A Bracci , F Porcù , V Capozzi , C Annella , G Budillon , E Bucchignani , AL Zollo , O
Cazzuli , G Camisani , R Bechini , R Cremonini , A Antonini , A Ortolani , and L Baldini (2021) Validation of
GPM rainfall and drop size distribution products through disdrometers in Italy. Rem Sens 13(11): 2018. doi:
10.3390/rs13112081
Apituley A , M Pedergnana , M Sneep , JP Veefkind , D Loyola , O Hasekamp , AL Delgado , and T Borsdorff
(2022) Sentinel-5 precursor/TROPOMI Level 2 Product User Manual – Methane . Royal Netherlands
Meteorological Institute, Amsterdam. https://sentinels.copernicus.eu/documents/247904/2474726/Sentinel-5P-
Level-2-Product-User-Manual-Methane.pdf
Bilal M , A Mhawish , Md A Ali , JE Nichol , G de Leeuw , KM Khedher , U Mazhar , Z Qiu , MP Bleiweiss , and
M Nazeer (2022) Integration of surface reflectance and aerosol retrieval algorithms for multi-resolution aerosol
optical depth retrievals over urban areas. Rem Sens 14(2): 373. doi: 10.3390/rs14020373
Boersma KF , HJ Eskes , A Richter , I De Smedt , A Lorente , S Beirle , JHGM van Geffen , M Zara , E Peters ,
M Van Roozendael , T Wagner , JD Maasakkers , RJ van der A , J Nightingale , A De Rudder , H Irie , G Pinardi ,
JC Lambert , and SC Compernolle (2018) Improving algorithms and uncertainty estimates for satellite NO2
retrievals: Results from the quality assurance for the essential climate variables (QA4ECV) project. Atmos
Meas Tech 11: 6651–6678. doi: 10.5194/amt-11-6651-2018
Boersma KF , HJ Eskes , RJ Dirksen , RJ van der A , JP Veefkind , P Stammes , V Huijnen , QL Kleipool , M
Sneep , J Claas , J Leitão , A Richter , Y Zhou , and D Brunner (2011) An improved tropospheric NO2 column
retrieval algorithm for the Ozone Monitoring Instrument. Atmos Meas Tech 4: 1905–1928. doi: 10.5194/amt-4-
1905-2011
Borchardt J , K Gerilowski , S Krautwurst , H Bovensmann , AK Thorpe , DR Thompson , C Frankenberg , CE
Miller , RM Duren , and JP Burrows (2021) Detection and quantification of CH4 plumes using the WFM-DOAS
retrieval on AVIRIS-NG hyperspectral data. Atmos Meas Tech 14: 1267–1291. doi: 10.5194/amt-14-1267-2021
Bourassa MA , T Meissner , I Cerovecki , PS Chang, et al. (2019) Remotely sensed winds and wind stresses
for marine forecasting and ocean modeling. Front Mar Sci 6: 1–28. doi: 10.3389/fmars.2019.00443
Bovensmann H , JP Burrows , M Buchwitz , J Frerick , S Noël , VV Rozanov , KV Chance , and APH Goede
(1999) SCIAMACHY: Mission objectives and measurement modes. J Atmos Sci 56(2): 127–150. doi:
10.1175/1520-0469(1999)056<0127:SMOAMM>2.0.CO;2
Burrows JP , M Weber , M Buchwitz , V Rozanov , A Ladstätter-Weißenmayer , A Richter , R DeBeek , R
Hoogen , K Bramstedt , K Eichmann , M Eisinger , and D Perner (1999) The Global Ozone Monitoring
Experiment (GOME): Mission concept and first scientific results. J Atmos Sci 56(2): 151–175. doi:
10.1175/1520-0469(1999)056<0151:TGOMEG>2.0.CO;2
de Graaf M , P Stammes , O Torres , and RBA Koelemeijer (2005) Absorbing aerosol index: Sensitivity
analysis, application to GOME and comparison with TOMS. J Geophys Res 110: 1–19.
doi:10.1029/2004JD005178
Di A , Y Xue , X Yang , J Leys , J Guang , L Mei , and Y Che (2016) Dust aerosol optical depth retrieval and
dust storm detection for Xinjiang region using Indian National Satellite Observations. Rem Sens 8: 702. doi:
10.3390/rs8090702
ESA (2024) DOAS Method - Level-2 Processing - Sentinel-5P Technical Guide - Sentinel Online - Sentinel
Online (copernicus.eu). https://sentinels.copernicus.eu/web/sentinel/technical-guides/sentinel-5p/level-2/doas-
method accessed on 9 February 2024
Foote MD , PE Dennison , AK Thorpe , DR Thompson , S Jongaramrungruang , C Frankenberg , and SC Joshi
(2020) Fast and accurate retrieval of point-source methane emissions from imaging spectrometer data using
sparsity prior. IEEE Trans Geosci Rem Sens 58(9): 6480–6492. doi: 10.1109/TGRS.2020.2976888
Gao R (2022) Research progress of atmospheric CO2 monitoring by satellite remote sensing. J Phys: Conf Ser
2386: 012028.
Garay MJ , ML Witek , RA Kahn , FC Seidel , JA Limbacher , MA Bull , DJ Diner , EG Hansen , OL
Kalashnikova , H Lee , AM Nastan , and Y Yu (2020) Introducing the 4.4 km spatial resolution Multi-Angle
Imaging SpectroRadiometer (MISR) aerosol product. Atmos Meas Tech 13: 593–628. doi: 10.5194/amt-13-593-
2020
Holben BN , TF Eck , I Slutsker , D Tanre , JP Buis , A Setzer , E Vermote , JA Reagan , Y Kaufman , T
Nakajima , F Lavenu , I Jankowiak , and A Smirnov (1998) AERONET - A federated instrument network and
data archive for aerosol characterization. Rem Sens Environ 66: 1–16.
Hsu NC , JR Herman , O Torres , BN Holben , D Tanre , TF Eck , A Smirnov , B Chatenet , and F Lavenu
(1999) Comparisons of the TOMS aerosol index with Sun-photometer aerosol optical thickness: Results and
applications. J Geophys Res - Atmos 104(D6): 6269–6279. doi: 10.1029/1998JD200086
Huang X and K Yang (2022) Algorithm theoretical basis for ozone and sulfur dioxide retrievals from DSCOVR
EPIC. Atmos Meas Tech 15: 5877–5915. doi: 10.5194/amt-15-5877-2022
Javed Z , A Tanvir , Y Wang , A Waqas , M Xie , A Abbas , O Sandhu , and C Liu (2021) Quantifying the
impacts of COVID-19 lockdown and spring festival on air quality over Yangtze River Delta region. Atmosphere
12(6): 735. doi: 10.3390/atmos12060735
Jung C-R , W-T Chen , and SF Nakayama (2021) A national scale 1-km resolution PM2.5 estimation model
over Japan using MAIAC AOD and a two-stage random forest model. Rem Sens 13(18): 3657. doi:
10.3390/rs13183657
Khokhar MF , C Frankenberg , J Hollwedel , S Beirle , S Kühl , M Grzegorski , W Wilms-Grabe , U Platt , and T
Wagner (2005) Satellite remote sensing of atmospheric SO2: Volcanic eruptions and anthropogenic
emissions. Proc 2004 Envisat & ERS Symposium (ESA SP-572). 6–10 September 2004, Salzburg, Austria. ESA
Special Publication.
Krautwurst S , K Gerilowski , H Jonsson , D Thompson , R Kolyer , A Thorpe , M Horstjann , M Eastwood , I
Leifer , S Vigil , T Krings , J Borchardt , M Buchwitz , M Fladeland , J Burrows , and H Bovensmann (2016)
Methane emissions from a Californian landfill, determined from airborne remote sensing and in-situ
measurements. Atmos Meas Tech 2016: 1–33. doi: 10.5194/amt-2016-391
Landgraf J , J aan de Brugh , R Scheepmaker , T Borsdorff , H Hu , S Houweling , A Butz , I Aben , and O
Hasekamp (2016) Carbon monoxide total column retrievals from TROPOMI shortwave infrared measurements.
Atmos Meas Tech 9: 4955–4975. doi: 10.5194/amt-9-4955-2016
Lee C , K Lee , S Kim , J Yu , S Jeong , and J Yeom (2021) Hourly ground-level PM2.5 estimation using
geostationary satellite and reanalysis data via deep learning. Rem Sens 13(11): 2121. doi: 10.3390/rs13112121
Levelt PF , GHJ van den Oord , MR Dobber , and A Mälkki (2006) The ozone monitoring instrument. IEEE
Trans Geosci Rem Sens 44(5): 1093.
Li Y , S Yuan , S Fan , Y Song , Z Wang , Z Yu , Q Yu , and Y Liu (2021) Satellite remote sensing for estimating
PM2.5 and its components. Curr Pollution Rep 7: 72–87. doi: 10.1007/s40726-020-00170-4
Liu Y , JA Sarnat , V Kilaru , DJ Jacob , and P Koutrakis (2005) Estimating ground-level PM2.5 in the eastern
United States using satellite remote sensing. Environ Sci Tech 39(9): 3269–3278. doi: 10.1021/es049352m
Liu Y , RJ Park , DJ Jacob , QB Li , V Kilaru , and JA Sarnat (2004) Mapping annual mean ground-level PM2.5
concentrations using Multiangle Imaging Spectroradiometer aerosol optical thickness over the contiguous
United States. J Geophys Res - Atmos 109: 1–10. doi: 10.1029/2004JD005025
Main-Knorn M , B Pflug , J Louis , V Debaecker , U Müller-Wilm , and F Gascon (2017) Sen2Cor for Sentinel-2.
Proc SPIE 3: 1–12. doi: 10.1117/12.2278218
Meissner T and FJ Wentz (2009) Wind-vector retrievals under rain with passive satellite microwave
radiometers. IEEE Trans Geosci Rem Sens 47(9): 3065–3083. doi: 10.1109/TGRS.2009.2027012
Michaelides S , V Levizzani , E Anagnostou , P Bauer , T Kasparis , and JE Lane (2009) Precipitation:
Measurement, remote sensing, climatology and modeling. Atmos Res 94(4): 512–533. doi:
10.1016/j.atmosres.2009.08.017
Nguyen NH and VA Tran (2014) Estimation of PM10 from AOT of satellite Landsat image over Hanoi city. Int
Symp Geoinfo for Spatial Infrastr Dev in Earth and Allied Sci (GIS IDEAS) 2014.
gisws.media.osaka-cu.ac.jp/gisideas14/viewpaper.php?id=518
O'Dell C , B Connor , H Boesch , D O'Brien , C Frankenberg , R Castaño , M Christi , D Crisp , A Eldering , B
Fisher , M Gunson , J McDuffie , C Miller , V Natraj , F Oyafuso , I Polonsky , M Smyth , T Taylor , G Toon , and
D Wunch (2012) The ACOS CO2 retrieval algorithm - Part 1: Description and validation against synthetic
observations. Atmos Meas Tech 4: 6097–6158. doi: 10.5194/amtd-4-6097-2011
Panfilova M and V Karaev (2021) Wind speed retrieval algorithm using Ku-band radar onboard GPM satellite.
Rem Sens 13(22): 4565. doi: 10.3390/rs13224565
Plane J and A Saiz-Lopez (2006) UV-Visible Differential Optical Absorption Spectroscopy (DOAS). (ISAC-
Bologna PPT). www.researchgate.net/publication/227555901
Prata AJ and C Bernardo (2007) Retrieval of volcanic SO2 column abundance from atmospheric infrared
sounder data. J Geophys Res - Atmos 112: D20204.
Prata AJ , WI Rose , S Self , and DM O'Brien (2003) Global, long-term sulphur dioxide measurements from the
TOVS data: A new tool for studying explosive volcanism and climate. Geophys Monograph 139: 75–92.
Rawat P and M Naja (2022) Remote sensing study of ozone, NO2, and CO: Some contrary effects of SARS-
CoV-2 lockdown over India. Environ Sci Pollut Res Int 29(15): 22515–22530. doi: 10.1007/s11356-021-17441-
2
Realmuto VJ , MJ Abrams , MF Buongiorno , and DC Pieri (1994) The use of multispectral thermal infrared
image data to estimate the sulfur dioxide flux from volcanoes: A case study from Mount Etna, Sicily, July 29,
1986. J Geophys Res 99: 481–488.
Refaat TF , M Petros , CW Antill , UN Singh , Y Choi , JV Plant , JP Digangi , and A Noe (2021) Airborne
testing of 2-μm pulsed IPDA Lidar for active remote sensing of atmospheric carbon dioxide. Atmosphere 12(3):
412. doi: 10.3390/atmos12030412
Schneising O , M Buchwitz , M Reuter , H Bovensmann , JP Burrows , T Borsdorff , NM Deutscher , DG Feist ,
DWT Griffith , F Hase , C Hermans , LT Iraci , R Kivi , J Landgraf , I Morino , J Notholt , C Petri , DF Pollard , S
Roche , K Shiomi , K Strong , R Sussmann , VA Velazco , T Warneke , and D Wunch (2019) A scientific
algorithm to simultaneously retrieve carbon monoxide and methane from TROPOMI onboard Sentinel-5
Precursor. Atmos Meas Tech 12: 6771–6802. doi: 10.5194/amt-12-6771-2019
She L , Y Xue , X Yang , J Guang , Y Li , Y Che , C Fan , and Y Xie (2018) Dust detection and intensity
estimation using Himawari-8/AHI observation. Rem Sens 10(4): 490. doi: 10.3390/rs10040490
Stolarski RS and RD McPeters (2003) Satellite remote sensing | TOMS ozone. In JR Holton (ed.) Encyclopedia
of atmospheric sciences. Academic Press, pp. 1999–2005. doi: 10.1016/B0-12-227090-8/00351-1
Sun L , J Wei , M Bilal , X Tian , C Jia , Y Guo , and X Mi (2016) Aerosol optical depth retrieval over bright
areas using Landsat 8 OLI images. Rem Sens 8: 23. doi: 10.3390/rs8010023
Sun Y , Y Xue , X Jiang , C Jin , S Wu , and X Zhou (2021) Estimation of the PM2.5 and PM10 mass
concentration over land from FY-4A aerosol optical depth data. Rem Sens 13(21): 4276. doi:
10.3390/rs13214276
Thi Van T , NH Hai , VQ Bao , and HDX Bao (2018) Remote sensing-based aerosol optical thickness for
monitoring particulate matter over the city. In Proceedings of the 2nd Intern Electronic Conf Rem Sens no. 7:
362. doi: 10.3390/ecrs-2-05175
Thomas HE , IM Watson , SA Carn , AJ Prata , and VJ Realmuto (2011) A comparison of AIRS, MODIS and
OMI sulphur dioxide retrievals in volcanic clouds. Geomat Nat Haz Risk 2(3): 217–232. doi:
10.1080/19475705.2011.564212
Torres O , A Tanskanen , B Veihelmann , C Ahn , R Braak , PK Bhartia , P Veefkind , and P Levelt (2007)
Aerosols and surface UV products from Ozone Monitoring Instrument observations: An overview. J Geophys
Res-Atmos 112: D24S47. doi: 10.1029/2007JD008809
Torres O , PK Bhartia , JR Herman , Z Ahmad , and J Gleason (1998) Derivation of aerosol properties from
satellite measurements of backscattered ultraviolet radiation: Theoretical basis. J Geophys Res - Atmos 103:
17099–17110. doi: 10.1029/98JD00900
van Geffen J , KF Boersma , H Eskes , M Sneep , M ter Linden , M Zara , and JP Veefkind (2020) S5P
TROPOMI NO2 slant column retrieval: Method, stability, uncertainties and comparisons with OMI. Atmos Meas
Tech 13: 1315–1335. doi: 10.5194/amt-13-1315-2020
Villena CR , JS Anand , RJ Leigh , PS Monks , CE Parfitt , and JD Vande Hey (2020) Discrete-wavelength
DOAS NO2 slant column retrievals from OMI and TROPOMI. Atmos Meas Tech 13: 1735–1756. doi:
10.5194/amt-13-1735-2020
Wang Y , Md A Ali , M Bilal , Z Qiu , A Mhawish , M Almazroui , S Shahid , M N Islam , Y Zhang , and Md N
Haque (2021) Identification of NO2 and SO2 pollution hotspots and sources in Jiangsu Province of China. Rem
Sens 13(18): 3742. doi: 10.3390/rs13183742
Wang Z , L Chen , J Tao , Y Zhang , and L Su (2010) Satellite-based estimation of regional particulate matter
(PM) in Beijing using vertical-and-RH correcting method. Rem Sens Environ 114(1): 50–63. doi:
10.1016/j.rse.2009.08.009
Wei X , N-B Chang , K Bai , and W Gao (2020) Satellite remote sensing of aerosol optical depth: Advances,
challenges, and perspectives. Crit Rev Environ Sci Tech 50(16): 1640–1725. doi:
10.1080/10643389.2019.1665944
Wu J , F Yao , W Li , and M Si (2016) VIIRS-based remote sensing estimation of ground-level PM2.5
concentrations in Beijing–Tianjin–Hebei: A spatiotemporal statistical model. Rem Sens Environ 184: 316–328.
Xue W , J Wei , J Zhang , L Sun , Y Che , M Yuan , and X Hu (2021) Inferring near-surface PM2.5
concentrations from the VIIRS deep blue aerosol product in China: A spatiotemporally weighted random forest
model. Rem Sens 13(3): 505. doi: 10.3390/rs13030505
Yang K , NA Krotkov , AJ Krueger , SA Carn , PK Bhartia , and PF Levelt (2007) Retrieval of large volcanic
SO2 columns from the Aura Ozone Monitoring Instrument: Comparison and limitations. J Geophys Res - Atmos
112(D24S43): 1–14. doi: 10.1029/2007JD008825
Yao F , S Si , W Li , and J Wu (2018) A multidimensional comparison between MODIS and VIIRS AOD in
estimating ground-level PM2.5 concentrations over a heavily polluted region in China. Sci Total Environ 618:
819–828.
Yao F , J Wu , W Li , and J Peng (2019) A spatially structured adaptive two-stage model for retrieving ground-
level PM2.5 concentrations from VIIRS AOD in China. ISPRS J Photogramm Rem Sens 151: 263–276.
Yueh SH , WJ Wilson , SJ Dinardo , and FK Li (1999) Polarimetric microwave brightness signatures of ocean
wind directions. IEEE Trans Geosci Rem Sens 37: 949–959.
Zeeshan J , A Tanvir , M Bilal , W Su , C Xia , A Rehman , Y Zhang , O Sandhu , C Xing , X Ji , M Xie , C Liu ,
and Y Wang (2021) Recommendations for HCHO and SO2 retrieval settings from MAX-DOAS observations
under different meteorological conditions. Rem Sens 13(12): 2244. doi: 10.3390/rs13122244
Zhang K , G De Leeuw , Z Yang , X Chen , X Su , and J Jiao (2019) Estimating spatio-temporal variations of
PM2.5 concentrations using VIIRS-derived AOD in the Guanzhong Basin, China. Rem Sens 11: 2679.
Zhang Y and Z Li (2015) Remote sensing of atmospheric fine particulate matter (PM2.5) mass concentration
near the ground from satellite observation. Rem Sens Environ 160: 252–262.