Article

Construction of Nighttime Cloud Layer Height and Classification of Cloud Types

1
State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou 310027, China
2
National Satellite Meteorological Center, China Meteorological Administration, Beijing 100081, China
3
Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
4
Shanghai Institute of Satellite Engineering, Shanghai 201109, China
*
Authors to whom correspondence should be addressed.
Submission received: 7 January 2020 / Revised: 11 February 2020 / Accepted: 13 February 2020 / Published: 18 February 2020
(This article belongs to the Special Issue Active and Passive Remote Sensing of Aerosols and Clouds)

Abstract

A cloud structure construction algorithm adapted for the nighttime condition is proposed and evaluated. The algorithm expands the vertical information inferred from spaceborne radar and lidar via matching of infrared (IR) radiances and other properties at off-nadir locations with their counterparts that are collocated with active footprints. This nighttime spectral radiance matching (NSRM) method is tested using measurements from CloudSat/Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) and Moderate Resolution Imaging Spectroradiometer (MODIS). Cloud layer heights are estimated up to 400 km on both sides of the ground track and reconstructed with the dead zone setting for an approximate evaluation of the reliability. By mimicking off-nadir pixels with a dead zone around pixels along the ground track, reconstruction of nadir profiles shows that, at 200 km from the ground track, the cloud top height (CTH) and the cloud base height (CBH) reconstructed by the NSRM method are within 1.49 km and 1.81 km of the original measurements, respectively. The constructed cloud structure is utilized for cloud classification in the nighttime. The same method is applied to the daytime measurements for comparison with collocated MODIS classification based on the International Satellite Cloud Climatology Project (ISCCP) standard. The comparison of eight cloud types over the expanded distance shows good agreement in general.


1. Introduction

Clouds play important roles in the energy balance of the Earth system and contribute the largest uncertainty to the estimates and the interpretations of climate change [1]. Clouds cover roughly two thirds of the globe, with large variation in the horizontal and the vertical extent as well as other physical properties that alter their interaction with solar and terrestrial radiation [2,3,4]. Therefore, it is important to understand the distribution of cloud layers in three-dimensional (3D) space in addition to their properties.
Over the past decades, satellite-based remote sensing has become a key source of data for cloud studies. Satellite sensors have the unique ability to provide continuous observations of the atmosphere over the globe. Passive instruments detect clouds based on radiance contrast, since clouds generally appear brighter and colder than the Earth’s surface, and retrieve cloud properties accordingly using forward radiative transfer models supplemented by ancillary data. However, the computation can be difficult when the difference between the cloud and the underlying surface is small, as the clear sky scene variability is larger than usual [2]. It is difficult to distinguish clouds from highly reflective surfaces such as snow/ice and sun glint, and also from very cold surfaces at high latitudes [5]. Therefore, the analyses in this work were restricted to latitudes between 60° N and 60° S. These problems can be even worse during nighttime, when visible channels are not available and the algorithms depend solely on infrared (IR) measurements. Consequently, most cloud studies have focused on calibration and applications that use, at least in part, visible and near-infrared (NIR) measurements (e.g., [6,7,8]), whereas fewer studies have contributed analyses based solely on IR retrievals at night (e.g., [9,10]).
The principle of passive sensors limits their ability to separate overlapping cloud layers, which can lead to errors in modeling cloud processes or calculating cloud radiative effects [11,12]. The development of satellite active sensors, represented by the Cloud Profiling Radar (CPR) on the CloudSat satellite [13] and the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) on board the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite [14], provides a possible solution to the limitation of passive sensors. Flying as a coordinated pair, the synergistic retrieval from the two active sensors combines CALIPSO’s strength in resolving nadir profiles of optically thin clouds and CloudSat’s ability to penetrate deeply into thick cloud layers before being attenuated. By generating their own light sources, CloudSat and CALIPSO provide continuous measurements without reducing accuracy during nighttime. In fact, CALIPSO is known to have better agreement with ground-based lidar and be able to detect more weakly scattering features during nighttime due to higher signal to noise ratio (SNR) [15,16]. As both satellites were members of A-Train satellite constellation, they also achieved measurement synergy with passive sensors such as Moderate Resolution Imaging Spectroradiometer (MODIS) on-board Aqua [17], which enabled more comprehensive analysis.
Since using passive or active instruments independently would result in lacking information pertaining to vertical structure or lacking spatial coverage (limited by active sensors’ nadir-viewing geometry), respectively, scientists have proposed and tested various methods combining the advantages of passive and active sensors [18,19,20,21,22]. The general approach is to gain understanding of nearby passive-only retrieved areas with information from the narrow ground track measured by both active and passive sensors. Noh et al. [23] briefly summarized the current methods that utilize collocated active and passive measurements to estimate nearby pixels into two categories, direct measurement extrapolation and semi-empirical estimation. The first category is more or less a “match-and-substitute” algorithm—a “donor” pixel with a vertical profile from radar or lidar is matched with a nearby “recipient” pixel based on the information retrieved by the wide-swath passive sensor [18,19,22,24]. Algorithms in the second category generally use a combination of retrieved cloud products, look-up tables, or ancillary data to infer the desired properties of certain pixels from nearby pixels that share other retrieved properties [7,21,23,25,26]. However, few of these methods have been adapted to and tested under nighttime conditions.
In this work, an algorithm is proposed to construct the cloud structure in the region near the ground track of active sensors and classify cloud type using solely IR measurements. The 3D cloud structure of the nighttime atmosphere is constructed following the similar radiance matching (SRM) hypothesis [18,22]: if two pixels have sufficiently similar multi-spectral radiances [or brightness temperatures (BT)], their cloud vertical structures and column properties can be assumed to be similar. The construction method can provide reliable estimates of nearby cloud vertical structure simultaneously with the satellite overpass at nighttime. It could also provide assessments for cloud-related studies, such as cloud–aerosol interaction, over a broader range than the lidar ground track [24,27].
The method and the adaptation to nighttime conditions are described in detail in the method section. The reliability of the construction is evaluated based on the reconstruction process. The results of the construction are used to infer the cloud type at off-nadir locations, which are compared against the cloud classification of MODIS images according to the standards from the International Satellite Cloud Climatology Project (ISCCP). Note that the comparison used daytime measurements but followed the same algorithm as the construction during nighttime.
The paper is outlined as follows: Section 2 lists the satellite sensors and specific datasets used in the study. Section 3 describes the algorithm and the constraints used in the construction. Section 4 discusses the reconstruction results in terms of profiles and classifications. Cloud classification results based on scene construction are also presented and discussed in detail. Section 5 provides a summary and scope for future applications.

2. Sensors and Data

In this study, we utilized data from CALIOP, CPR, and MODIS. Their host satellites flew in this order within seconds to minutes of each other before the two active sensors exited the A-Train and lowered to the C-Train orbit in 2018. Collocation of the active and the passive sensors in the A-Train constellation provides opportunities to gain synergistic insights and to improve retrieval algorithms [28].
CPR, the 94 GHz nadir-looking radar on board CloudSat, and CALIOP, the two-wavelength polarization-sensitive lidar on board CALIPSO, worked together to provide complete vertical profiles of atmospheric features such as clouds and aerosols. Unfortunately, CloudSat had a battery anomaly on 17 April 2011, and only daytime data collection was resumed later in the year. In this work, we used the 2B-CLDCLASS-lidar product released by the CloudSat Data Processing Center before the battery anomaly, which combines radar and lidar measurements to provide a more complete cloud vertical structure and classify clouds into eight classes. These classes are abbreviated as stratus (St), stratocumulus (Sc), cumulus (Cu, including cumulus congestus), nimbostratus (Ns), altocumulus (Ac), altostratus (As), deep convective (cumulonimbus, DC), and high (cirrus and cirrostratus, Ci) clouds. Data are available for download from the Data Processing Center website: http://www.cloudsat.cira.colostate.edu.
The MODIS instrument, with its 36 channels spanning visible to thermal wavelengths, makes continuous observations of the Earth at near-daily frequency. It retrieves both physical and radiative cloud properties, using combined infrared and visible techniques during the day and infrared-only techniques at night. The current Collection 6 (C6) refinements of the operational cloud top properties algorithms improve on the previous version, based on improved spectral response functions for the bands used in the CO2 slicing algorithm and on long-term comparison with CALIPSO measurements [10]. In this study, the geolocation product MYD03, the calibrated radiances product MYD021KM, and the cloud properties product MYD06 were used. Specifically, IR measurements in bands 27, 29, 31, 32, and 35 were used, which have bandwidths of 6.535–6.895, 8.400–8.700, 10.780–11.280, 11.770–12.270, and 13.785–14.085 μm, respectively. Data are available for download at the MODIS website: https://ladsweb.modaps.eosdis.nasa.gov/.
CPR profiles have a footprint size of approximately 1.3 km × 1.7 km, while all MODIS products used in this work have a resolution of 1 km × 1 km. Each CPR profile along the ground track was matched with the MODIS pixel having the smallest sum of squared absolute errors and squared relative errors between the geodetic latitudes and longitudes of the two pixels. This orbit registration process was adopted from Wang and Xu [29] and the references therein.
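The registration step can be sketched as follows, assuming 1-D arrays of CPR footprint coordinates and 2-D MODIS latitude/longitude grids. The function name and the equal weighting of the absolute and relative error terms are illustrative assumptions, not details taken from Wang and Xu [29]:

```python
import numpy as np

def collocate_nadir(cpr_lat, cpr_lon, modis_lat, modis_lon, eps=1e-9):
    """Match each CPR footprint to the MODIS pixel minimising the sum of
    squared absolute and squared relative lat/lon errors (sketch)."""
    mlat = modis_lat.ravel()
    mlon = modis_lon.ravel()
    rows = np.empty(len(cpr_lat), dtype=int)
    cols = np.empty(len(cpr_lat), dtype=int)
    for p, (la, lo) in enumerate(zip(cpr_lat, cpr_lon)):
        abs_err = (mlat - la) ** 2 + (mlon - lo) ** 2
        # eps guards against division by zero at the equator / prime meridian
        rel_err = ((mlat - la) / (la + eps)) ** 2 + ((mlon - lo) / (lo + eps)) ** 2
        flat = np.argmin(abs_err + rel_err)
        rows[p], cols[p] = np.unravel_index(flat, modis_lat.shape)
    return rows, cols
```

In practice the search would be limited to a small window of the MODIS granule around each footprint rather than the full grid.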

3. Method

The algorithm proposed in this work constructs the nearby cloud vertical structure along the CloudSat ground track based solely on IR measurements and passive-retrieved properties. The algorithm was adapted to nighttime scene construction following the same principle as the SRM hypothesis and its major steps [18,22]. To reiterate briefly, the assumption is that if two pixels have sufficiently similar multi-spectral radiances, their vertical structures and column properties can be assumed to be similar. Therefore, the profiles and other properties, including the cloud classification, of a pixel retrieved by active sensors along the track can be attributed to a pixel off-track. For detailed explanation and verification, please see the referenced works.
In this work, different IR bands from MODIS and brightness temperature difference (BTD) among these bands were used in the selection of potential donor pixels. Cloud top properties, including cloud top pressure (CTP), temperature (CTT), and height (CTH), were used as additional constraints. In the following section, the proposed algorithm is referred to as the nighttime similar radiance matching (NSRM) method. Figure 1 shows the concept diagram of the method and comparison in this work.
The NSRM method includes four major steps. The first step is orbit registration, as introduced in Wang and Xu [29]. The measurements from the active sensors are collocated with MODIS measurements along the track, creating a narrow cross section of pixels with both active and passive measurements. This area is referred to as the active–passive retrieved cross section (RXS). In other words, the construction algorithm expands the cloud profiles in the RXS (the constructed cloud structure is referred to as RXS-expand in the following context), thereby providing a possible way to classify off-track cloud types during the nighttime.
After the orbit registration process, the pixels with both active and passive measurements in the RXS are noted as potential donors. In contrast, pixels off-track, which only have passive measurements and need to be filled with vertical information, are noted as recipients. The essence of the scene construction method is to match each recipient (i, j) within the passive swath (for all i, with j ∈ [−J, −1] ∪ [1, J]) with the most appropriate donor (m*, 0) and attribute the donor’s profile to the corresponding recipient.
To find the best matching pixels, the potential donors of a certain recipient are first filtered for the background conditions. Potential donors need to satisfy the following criteria to be considered a possible match for the specific recipient:
(1)
The potential donors must have the same surface type as the recipient. The surface type of each pixel is obtained from the MYD03 land/sea mask product.
(2)
The potential donors must be similar enough in their solar positions to the recipient; the differences in both solar zenith angle and solar azimuth angle need to be negligible.
(3)
The potential donors must have the same cloud scenario as the recipient, which means they are either both cloudy or both clear in the MYD06 cloud mask flags.
(4)
Where such estimates are available, potential donors should have sufficiently small uncertainties in their retrieved properties.
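The background screening in criteria (1) to (4) can be sketched as below. The angle and uncertainty thresholds are illustrative placeholders, since the paper only requires the differences to be negligible, and the dictionary keys are assumed names for the relevant MYD03/MYD06 fields:

```python
def is_potential_donor(donor, recipient, max_angle_diff=1.0, max_uncertainty=0.1):
    """Screen a track pixel against criteria (1)-(4) of the NSRM method."""
    # (1) same surface type from the MYD03 land/sea mask
    same_surface = donor["land_sea_mask"] == recipient["land_sea_mask"]
    # (2) negligible difference in solar zenith and azimuth angles
    similar_sun = (abs(donor["sza"] - recipient["sza"]) <= max_angle_diff and
                   abs(donor["saz"] - recipient["saz"]) <= max_angle_diff)
    # (3) same cloud scenario (both cloudy or both clear) in the MYD06 cloud mask
    same_scenario = donor["cloud_mask"] == recipient["cloud_mask"]
    # (4) sufficiently small retrieval uncertainty, when available
    low_uncertainty = donor.get("uncertainty", 0.0) <= max_uncertainty
    return same_surface and similar_sun and same_scenario and low_uncertainty
```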
For potential donors with the right background conditions, a cost function F(i,j;m) is computed as:
$$F(i,j;m) = \sum_{k=1}^{K}\left(\frac{r_k(i,j) - r_k(m,0)}{r_k(i,j)}\right)^2,\quad m \in [i - m_1,\ i + m_2],\tag{1}$$
where rk is the MODIS radiance in the kth band for each pixel. The NSRM method uses radiances from five bands (K = 5). The bands are chosen for their widely accepted usage in retrieving cloud cover, cloud top properties (CTP/CTT/CTH), and cloud phase [10,30,31].
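A minimal sketch of the cost function in Equation (1), taking the recipient's and a candidate donor's radiances across the K bands:

```python
import numpy as np

def radiance_cost(r_recipient, r_donor):
    """Eq. (1): sum over K bands of the squared relative radiance
    differences between recipient (i, j) and candidate donor (m, 0)."""
    rr = np.asarray(r_recipient, dtype=float)
    rd = np.asarray(r_donor, dtype=float)
    return float(np.sum(((rr - rd) / rr) ** 2))
```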
The search range of potential donors along the RXS is denoted as m ∈ [i − m1, i + m2], as shown in Equation (1), where i − m1 indicates the extent in the backward direction, and i + m2 indicates the extent in the forward direction of the track. This search range was optimized by Sun et al. [22] using the following extension condition:
$$m_1 = m_2 = \begin{cases} 200 + D_m, & D_m > 30 \\ 200, & D_m \le 30, \end{cases}\tag{2}$$
where Dm is calculated as the shortest distance between the recipient and the RXS.
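Equation (2) translates directly into code, assuming Dm is expressed in the same along-track units as m1 and m2:

```python
def search_half_range(d_m):
    """Eq. (2): along-track half-width (m1 = m2) of the donor search
    window, extended when the recipient lies more than 30 units of
    distance from the RXS."""
    return 200 + d_m if d_m > 30 else 200
```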
The third step introduces additional constraints with BTD and passive retrieved cloud characteristics. Although limited in accuracy, measurements from passive sensors such as MODIS can convey cloud vertical geometric information to a certain extent. Previous studies have used retrieved data as constraints in the similar match-and-substitute process [22,25]. For the NSRM method, the difference of CTP, CTT, and CTH between the selected donor and the recipient are constrained using the following formula:
$$\frac{\left|\hat{C}(i,j) - \hat{C}(m,0)\right|}{\hat{C}(i,j)} \le \alpha,\tag{3}$$
where $\hat{C}$ is the retrieved characteristic at each pixel, and α is a tolerance ratio factor. Similarly, the differences in BTD[29−31] and BTD[31−32] between the donor–recipient pair are constrained by β:
$$\left|\Delta\mathrm{BTD}_{[29-31]}\right| + \left|\Delta\mathrm{BTD}_{[31-32]}\right| \le \beta.\tag{4}$$
The BTDs are calculated using radiance measurements from the MYD021KM product. For example, BTD[31−32] is calculated using Equations (5) to (7):
$$\mathrm{BTD}_{[31-32]} = T_{31} - T_{32},\tag{5}$$
$$T = \frac{hc}{k\lambda}\left[\ln\!\left(\frac{2hc^2}{\lambda^5 L} + 1\right)\right]^{-1},\tag{6}$$
$$L = \frac{2hc^2}{\lambda^5}\left[e^{hc/(k\lambda T)} - 1\right]^{-1},\tag{7}$$
where L is the blackbody radiance (W m−2 sr−1 µm−1), T is the brightness temperature at the band’s central wavelength, c is the speed of light (2.998 × 108 m s−1), λ is the sensor’s central wavelength (µm), h is the Planck constant (6.626 × 10−34 J s), and k is the Boltzmann constant (1.380 × 10−23 J K−1). The value of α is set to 0.3 in this work, while the value of β is set to 1.5. The influence of changing α or β to different values is discussed in the next section.
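The brightness temperature inversion of Equations (6) and (7) can be sketched as follows; the conversions between µm-based radiance units and SI units are added here for clarity and are an implementation detail, not part of the paper's equations:

```python
import math

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m s^-1)
K = 1.380e-23   # Boltzmann constant (J K^-1)

def brightness_temperature(radiance_um, wavelength_um):
    """Invert the Planck function (Eq. 6): radiance in W m^-2 sr^-1 um^-1,
    central wavelength in micrometres, returns brightness temperature (K)."""
    lam = wavelength_um * 1e-6          # um -> m
    L = radiance_um * 1e6               # per-um -> per-m spectral radiance
    return (H * C / (K * lam)) / math.log(2 * H * C**2 / (lam**5 * L) + 1)

def btd(rad_a, rad_b, lam_a_um, lam_b_um):
    """Eq. (5): brightness temperature difference between two IR bands."""
    return (brightness_temperature(rad_a, lam_a_um)
            - brightness_temperature(rad_b, lam_b_um))
```

A forward Planck evaluation followed by this inversion recovers the input temperature, which is a convenient sanity check.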
In the last step, the potential donors that meet the additional constraints are ordered from smallest to largest according to their F(i,j;m). The goal is to select the donor pixel (m*, 0) that is closest to the targeted recipient and has sufficiently similar radiances as well as passive-retrieved characteristics. This is expressed as:
$$\underset{m^* \in [1,\,(m_1+m_2+1)f]}{\arg\min}\ \{D(i,j;m^*)\};\quad f \in (0,1),\tag{8}$$
where D(i,j;m) is the Euclidean distance between a potential donor at (m,0) and the recipient at (i,j), calculated for potential donors with the smallest 100f% of F(i,j;m). In this study, f is set to be 0.03.
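The final selection of Equation (8), keeping the candidates in the smallest 100f% of the cost function and then choosing the nearest one, can be sketched as:

```python
import numpy as np

def select_donor(costs, distances, f=0.03):
    """Eq. (8) sketch: among candidates with the smallest 100f% of the
    cost function F, return the index of the one with the smallest
    Euclidean distance to the recipient."""
    costs = np.asarray(costs, dtype=float)
    distances = np.asarray(distances, dtype=float)
    n_keep = max(1, int(np.ceil(len(costs) * f)))   # shortlist size
    shortlist = np.argsort(costs)[:n_keep]          # lowest-cost candidates
    return int(shortlist[np.argmin(distances[shortlist])])
```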

4. Results and Discussion

4.1. Performance of the NSRM Method

To evaluate the performance of the NSRM method, we applied it to a global A-Train dataset composed of the nighttime tracks on the 16th of each month in 2009, for a total of 143 orbits. Analyses were restricted to latitudes between 60° N and 60° S. Due to the scarcity of other sources of vertical profiles, we adopted the reconstruction algorithm, or the so-called dead zone test, to evaluate the performance [18,22,32]. The reconstruction reconstitutes profiles along the track, making them directly comparable to the actual measurements made by the active sensors. A dead zone was created by defining the selection range for potential donors as [i − m1, i − n] ∪ [i + n, i + m2], which barred the selection of potential donors from the nearest ±n pixels. This dead zone made the reconstruction mimic the process of matching an off-track recipient n pixels away. Therefore, it gives an approximate indication of how well the NSRM method can be expected to perform.
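The dead-zone donor window can be sketched as a small helper over pixel indices along the RXS (the function name and the clipping to the track bounds are illustrative):

```python
def dead_zone_candidates(i, n, m1=200, m2=200, track_len=1000):
    """Indices along the RXS eligible as donors in a dead-zone test: the
    window [i - m1, i + m2], clipped to the track, with the nearest
    +/- n pixels around position i removed, mimicking a recipient
    n pixels off-track."""
    lo, hi = max(0, i - m1), min(track_len - 1, i + m2)
    return [m for m in range(lo, hi + 1) if abs(m - i) >= n]
```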
Figure 2 gives an example of the reconstruction of the CloudSat profiles along the first nighttime track on 16 January 2009 from 60° N to 60° S. Various cloud structures were observed along the track, providing a good illustration of the method. The blue area in the top panel represents the cloud structure observed by CloudSat; the darker blue represents the top layer, and the lighter blue indicates other layers. The red dots represent the CTH retrieved by MODIS. In the majority of pixels (~70%), the CTHs inferred from CloudSat and from MODIS were within 2 km of each other. In the following analysis, we focused on these measurements, as the NSRM method is based on the assumption that the inputs from both sensors are reasonably accurate. In addition, as the dead zone range increased, the possibility that a suitable donor could not be found within the searching range also increased (shown as colorless pixels in the figure). The rates of failing to find a matching donor were 3.4% at 10 km, 7.1% at 50 km, 16.1% at 200 km, and 24.4% at 400 km.
Diagnostic variables, including root mean square error (RMSE) and mean deviation (MD), were calculated for further analysis. These variables were defined as:
$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n}\left(\hat{H}_i - H_i\right)^2}{n}},\tag{9}$$
and
$$\mathrm{MD} = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{H}_i - H_i\right|,\tag{10}$$
where $\hat{H}_i$ and $H_i$ denote the estimated and the observed cloud heights, respectively.
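Equations (9) and (10) correspond directly to:

```python
import numpy as np

def rmse(estimated, observed):
    """Eq. (9): root mean square error between estimated and observed heights."""
    est = np.asarray(estimated, dtype=float)
    obs = np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean((est - obs) ** 2)))

def mean_deviation(estimated, observed):
    """Eq. (10): mean absolute deviation between estimated and observed heights."""
    est = np.asarray(estimated, dtype=float)
    obs = np.asarray(observed, dtype=float)
    return float(np.mean(np.abs(est - obs)))
```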
To analyze the impact of choosing different values of the tolerance factors α and β and determine the most suitable values for the algorithm, we took the following steps to test each constraint separately. First, diagnostic variables were calculated for different values of β, while α was set to 0.3, a value recommended by Sun et al. [22], a study that performed a similar analysis but without constraints on the BTD. Data from 16 January and 16 May 2009 were reconstructed for dead zone ranges from 50 to 400 km (Figure 3). The MD and the RMSE of CTH and CBH were computed between the original CloudSat–CALIPSO (C-C) profiles and the reconstructed profiles.
The results show that MD and RMSE increased as β or the dead zone increased. The response to the β value was expected, since narrower constraints lead to more aggressive screening of the potential donors. However, a very small β (<1.5) could result in a sharp decrease in the number of successfully matched pairs. To strike a balance, β = 1.5 was selected for the NSRM method. The same dataset was then used to test the performance of different values of α (Figure 4). The results show that, with the additional constraint β, α = 0.3 was still an appropriate choice for the algorithm. Therefore, α = 0.3 and β = 1.5 were applied to the construction of the entire dataset.
The general trend over the expanded distance can be summarized as follows: MD and RMSE increased as the dead zone increased (Figure 5). The trend is logical because, as the distance between the donor and the recipient increases, the probability of finding a good match decreases. The average MD between the original and the reconstructed CTH at 50 km was 0.97 km, which increased to 1.49 km at 200 km and 1.83 km at 400 km; the error bars show that the standard deviation of the 143 profiles increased from 0.25 km at 50 km to 0.42 km at 200 km and 0.49 km at 400 km. The average RMSE between the original and the reconstructed CTH at 50 km was 2.49 km, which increased to 3.26 km at 200 km and 3.76 km at 400 km. The trend for CBH was similar. The average MD between the original and the reconstructed CBH at 50 km was 1.32 km, which increased to 1.81 km and 2.02 km at 200 km and 400 km, respectively; the error bars increased from 0.50 km to 0.73 km and 0.76 km, respectively. The average RMSE of CBH at 50 km was 2.92 km, which increased to 3.6 km at 200 km and 3.95 km at 400 km. Note that the constrained dataset focused on pixels whose MODIS-retrieved CTH was within 2 km of the CloudSat-retrieved CTH; if all pixels were considered, these diagnostic variables would have been 10–35% larger, as shown in Figure 5.
Figure 5 also presents a comparison of the diagnostic variables among several methods. The green line shows the results from the profiles reconstructed with the SRM method, which utilized four MODIS bands (0.62–0.67, 2.105–2.155, 8.4–8.7, and 11.77–12.27 μm) in the same way as Equation (1). Note, however, that the SRM method is not adapted to nighttime conditions; only two of its bands, those in the IR range, could be used for nighttime construction. The gray line shows the results of simply selecting the nearest cloud pixel outside the dead zone as the donor, which is not expected to be a reasonable match due to the discontinuity of clouds. The figure clearly shows that the NSRM method performed better than the SRM method at night. However, the SRM method had a higher rate of finding a matching donor, partly due to its lack of constraints. The nearest donor method, on the other hand, had high accuracy at 10 km, but its errors increased quickly as the dead zone increased.
In general, it is not recommended to use the algorithm to construct scenes more than 400 km away from the ground track, since the possibility of successful construction is low and errors are large. However, some works have found good construction results at a very large range during special events, such as typhoons [22].
The performance of the NSRM method was also evaluated with the MODIS measurements at the original pixel and the reconstructed one (Figure 6 and Figure 7). The MD and RMSE for both α and β were much smaller than in the evaluation against the C-C measurements, and the trend was similar overall but more gradual. The fact that MODIS is less sensitive to high thin clouds may have had a major influence here.
In reality, there is always a certain difference between the MODIS and the C-C measurements of CTH along the track. Although the difference is generally within several hundred meters as the MODIS algorithm improves, it remains large for pixels with high cirrus clouds and multiple cloud layers [33,34]. Since the algorithm uses MODIS-retrieved cloud top properties as constraints, this additional test provided an evaluation of the influence of the difference between the separate sensors.
Figure 8 shows the comparison between CTH inferred from the constructed profiles (from the donors) and the MODIS measurements (from the recipients) within 400 km of the track. The difference was generally within 2 km. The gray box indicates a portion of data that are suspected to have attributed a donor with high thin cloud layer on top of the MODIS detected low cloud layers. Therefore, a large difference between the CTHs is shown.

4.2. Nighttime Cloud Classification

The scene construction algorithm provides a possible method to identify cloud type in the swath near the track. Figure 9 shows the fractional distribution of the eight types of clouds identified by the NSRM method for the same tested dataset as in Section 4.1. The figure presents the distribution in terms of all cloud layers (all), the uppermost cloud layer (top), and the single layers (single). The most frequently occurring cloud type was Ci near the equatorial region and low level Sc from middle to higher latitudes. Cu clouds also contributed a significant proportion.
To test the reliability of the cloud distribution calculated from the RXS-expand results, we again used the reconstruction method. Figure 10 compares, on a one-to-one basis, the distribution of each cloud type summarized from the successfully reconstructed profiles (with the dead zone set to 100 km) with that from the original profiles. In this case, the pixels that failed to find matching donors were not taken into account. The figure illustrates that most clouds, both high and low, optically thin and thick, were well matched in numbers by the reconstructed profiles. There was a major difference between the original and the reconstructed DC clouds, which could partly be explained by the scarcity of this cloud type, making it more difficult to find an accurate match.

4.3. Daytime Verification of the Cloud Classification Results

To compare the NSRM-based classification with another classification method, we applied the NSRM method to the daytime dataset of the same period as in the earlier sections. In the daytime, the combined measurements in the visible and the infrared bands from MODIS allow retrievals of cloud optical thickness (COT) and the multilayer flag (MLF). The ISCCP provides a simple and efficient method that uses CTP and COT to separate clouds into nine types, which largely match the cloud types from CloudSat (except that cirrus and cirrostratus are combined as high clouds in the CloudSat product). Note that MODIS only retrieves column COT, which means that if there is more than one cloud layer, the ISCCP method cannot work properly. Therefore, in the daytime study, we only compared results from pixels with MLF = 1, which indicates a single cloud layer. This daytime comparison was used as an approximation of the reliability of cloud classification in the nighttime. However, it is noted that results from single-layer clouds could differ from those for all cloud layers, as shown in Figure 9. Other methods, such as the constrained spectral radiance matching (CSRM) algorithm [22] and the cloud-type matching algorithm [19], have been designed for daytime observations and might give better classification; however, these methods cannot be applied to the nighttime condition.
Figure 11 shows the distribution of each cloud type identified from the MODIS measurements using the ISCCP method and from the RXS-expand results using the NSRM method. Here, −400 km indicates 400 km to the west of the ground track, and 400 km indicates 400 km to the east. The figure illustrates that generally good agreement was found between the two methods for most cloud types, showing a promising trend over the entire expanded range. The fact that the numbers of clouds classified by the ISCCP method (by MODIS) were almost consistently lower than those identified by the NSRM method (by construction based on active sensors) can be explained by the focus on pixels with MODIS MLF = 1. MODIS only detects one cloud layer at these pixels, but since the active sensors are more sensitive to thin and high clouds, the constructed profiles might have had more than one cloud layer, causing the numbers to be generally higher. The disagreement was relatively large for Cu, which might be due to thin, low-level clouds going undetected by CPR and CALIOP [35]. The gradual decrease in the total number of cloud layers as the expanded range increased on both sides partly resulted from the removal of multi-layer cloud pixels and of pixels with CTH retrievals that significantly contradicted each other.

5. Conclusions

This work proposed and evaluated a cloud structure construction algorithm adapted for nighttime expansion of vertical information inferred from nadir-pointing cloud radar and lidar to cross-track locations next to the ground track. Based on matching and attributing nadir pixels (donors) into off-nadir pixels (recipients) with similar infrared radiances and passive retrieved cloud properties, the cloud vertical structure was expanded up to 400 km on both sides of the ground track. The constructed cloud structure was utilized for nighttime cloud classification and compared to the daytime ISCCP cloud classification based on the collocated MODIS measurements.
Reconstruction of nadir profiles during the tested days verified the overall performance of the NSRM method, which is related to the minimum distance between the donor and the recipient. By mimicking the off-nadir distance with a dead zone along the ground track, the reconstruction of nadir profiles shows that, at 200 km from the ground track, the CTH and the CBH reconstructed by the NSRM method were within 1.49 km and 1.81 km of the original measurements, respectively. The RMSEs of CTH and CBH were 3.26 km and 3.6 km, respectively. At 400 km, the MD of CTH increased to 1.83 km and that of CBH to 2.02 km, while the RMSE of CTH increased to 3.76 km and that of CBH to 3.95 km. These values were calculated with the tolerance factor α set to 0.3 and β set to 1.5, which could be changed for either narrower or looser constraints.
The reliability of using the constructed regional structure for cloud classification was tested through both reconstruction and daytime comparison with passive wide-swath observations. The comparison between distributions summarized from reconstructed profiles at 100 km and from the original profiles shows that the eight identified cloud types matched well when averaged by latitude. The comparison with classification results inferred from the ISCCP standard and the MODIS measurements during daytime shows generally good agreement, except for thin, low-level clouds (Cu).
The construction of a cloud structure based on the NSRM method provides reliable estimates of regional cloud layer heights during nighttime. It can efficiently construct and provide vertical information almost simultaneously with the radar and lidar overpass. It also has the potential to provide vertical information and classification for other cloud-related studies, such as cloud–aerosol interaction, over a broader range than the lidar ground track. In the future, reducing the disagreement between actively and passively retrieved cloud top properties will benefit the application of constraints for selecting better donors. In addition, launching more satellites carrying active and passive sensors would increase both the chance and the quality of selecting matching pixels.

Author Contributions

The contribution of each author to this research article is specified as follows: conceptualization, D.L. and X.Z.; methodology, S.C. and D.L.; software, C.C.; validation, S.C. and C.C.; formal analysis, S.C.; investigation, C.D.; resources, F.W.; data curation, B.T.; writing—original draft preparation, S.C.; writing—review and editing, D.L.; supervision, W.C.; project administration, B.C.; funding acquisition, L.S. and D.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (2016YFC1400900, 2016YFC0200700); Natural Science Foundation of China (NSFC) (41775023); Excellent Young Scientist Program of Zhejiang Provincial Natural Science Foundation of China (LR19D050001); Public Welfare Project of Zhejiang Province (2016C33004); Fundamental Research Funds for the Central Universities; State Key Laboratory of Modern Optical Instrumentation Innovation Program.

Acknowledgments

The authors would like to thank the science teams of MODIS and CALIOP for providing excellent and accessible data products used in this investigation. We would also like to thank Howard W. Barker at Environment and Climate Change Canada for the technical support on the SRM method.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. IPCC. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. In Climate Change 2013: The Physical Science Basis; Stocker, T.F., Qin, D., Plattner, G.-K., Tignor, M., Allen, S.K., Boschung, J., Nauels, A., Xia, Y., Bex, V., Midgley, P.M., Eds.; Cambridge University Press: Cambridge, UK; New York, NY, USA, 2013; p. 1535.
2. Stubenrauch, C.J.; Rossow, W.B.; Kinne, S.; Ackerman, S.; Cesana, G.; Chepfer, H.; Di Girolamo, L.; Getzewich, B.; Guignard, A.; Heidinger, A.; et al. Assessment of Global Cloud Datasets from Satellites: Project and Database Initiated by the GEWEX Radiation Panel. Bull. Am. Meteorol. Soc. 2013, 94, 1031–1049.
3. Ramanathan, V.; Cess, R.D.; Harrison, E.F.; Minnis, P.; Barkstrom, B.R.; Ahmad, E.; Hartmann, D. Cloud-radiative forcing and climate: Results from the Earth Radiation Budget Experiment. Science 1989, 243, 57–63.
4. Rossow, W.B.; Garder, L.C.; Lacis, A.A. Global, Seasonal Cloud Variations from Satellite Radiance Measurements. Part I: Sensitivity of Analysis. J. Clim. 1989, 2, 419–458.
5. Rossow, W.B.; Schiffer, R.A. Advances in understanding clouds from ISCCP. Bull. Am. Meteorol. Soc. 1999, 80, 2261–2287.
6. Wind, G.; Platnick, S.; King, M.D.; Hubanks, P.A.; Pavolonis, M.J.; Heidinger, A.K.; Yang, P.; Baum, B.A. Multilayer Cloud Detection with the MODIS Near-Infrared Water Vapor Absorption Band. J. Appl. Meteorol. Climatol. 2010, 49, 2315–2333.
7. Minnis, P.; Sun-Mack, S.; Young, D.F.; Heck, P.W.; Garber, D.P.; Chen, Y.; Spangenberg, D.A.; Arduini, R.F.; Trepte, Q.Z.; Smith, W.L., Jr.; et al. CERES Edition-2 Cloud Property Retrievals Using TRMM VIRS and Terra and Aqua MODIS Data. Part I: Algorithms. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4374–4400.
8. Foga, S.; Scaramuzza, P.L.; Guo, S.; Zhu, Z.; Dilley, R.D., Jr.; Beckmann, T.; Schmidt, G.L.; Dwyer, J.L.; Hughes, M.J.; Laue, B. Cloud detection algorithm comparison and validation for operational Landsat data products. Remote Sens. Environ. 2017, 194, 379–390.
9. Pavolonis, M.J. Advances in Extracting Cloud Composition Information from Spaceborne Infrared Radiances: A Robust Alternative to Brightness Temperatures. Part I: Theory. J. Appl. Meteorol. Climatol. 2010, 49, 1992–2012.
10. Baum, B.A.; Menzel, W.P.; Frey, R.A.; Tobin, D.C.; Holz, R.E.; Ackerman, S.A.; Heidinger, A.K.; Yang, P. MODIS Cloud-Top Property Refinements for Collection 6. J. Appl. Meteorol. Climatol. 2012, 51, 1145–1163.
11. Shonk, J.K.P.; Hogan, R.J.; Manners, J. Impact of improved representation of horizontal and vertical cloud structure in a climate model. Clim. Dyn. 2012, 38, 2365–2376.
12. Pincus, R.; Hemler, R.; Klein, S.A. Using stochastically generated subcolumns to represent cloud structure in a large-scale model. Mon. Weather Rev. 2006, 134, 3644–3656.
13. Stephens, G.L.; Vane, D.G.; Boain, R.J.; Mace, G.G.; Sassen, K.; Wang, Z.E.; Illingworth, A.J.; O'Connor, E.J.; Rossow, W.B.; Durden, S.L.; et al. The CloudSat mission and the A-Train: A new dimension of space-based observations of clouds and precipitation. Bull. Am. Meteorol. Soc. 2002, 83, 1771–1790.
14. Winker, D.M.; Vaughan, M.A.; Omar, A.; Hu, Y.X.; Powell, K.A.; Liu, Z.Y.; Hunt, W.H.; Young, S.A. Overview of the CALIPSO Mission and CALIOP Data Processing Algorithms. J. Atmos. Ocean. Technol. 2009, 26, 2310–2323.
15. Kim, S.W.; Berthier, S.; Raut, J.C.; Chazette, P.; Dulac, F.; Yoon, S.C. Validation of aerosol and cloud layer structures from the space-borne lidar CALIOP using a ground-based lidar in Seoul, Korea. Atmos. Chem. Phys. 2008, 8, 3705–3720.
16. Vaughan, M.A.; Powell, K.A.; Kuehn, R.E.; Young, S.A.; Winker, D.M.; Hostetler, C.A.; Hunt, W.H.; Liu, Z.; McGill, M.J.; Getzewich, B.J. Fully Automated Detection of Cloud and Aerosol Layers in the CALIPSO Lidar Measurements. J. Atmos. Ocean. Technol. 2009, 26, 2034–2050.
17. Platnick, S.; King, M.D.; Ackerman, S.A.; Menzel, W.P.; Baum, B.A.; Riedi, J.C.; Frey, R.A. The MODIS cloud products: Algorithms and examples from Terra. IEEE Trans. Geosci. Remote Sens. 2003, 41, 459–473.
18. Barker, H.W.; Jerg, M.P.; Wehr, T.; Kato, S.; Donovan, D.P.; Hogan, R.J. A 3D cloud-construction algorithm for the EarthCARE satellite mission. Q. J. R. Meteorol. Soc. 2011, 137, 1042–1058.
19. Miller, S.D.; Forsythe, J.M.; Partain, P.T.; Haynes, J.M.; Bankert, R.L.; Sengupta, M.; Mitrescu, C.; Hawkins, J.D.; Vonder Haar, T.H. Estimating Three-Dimensional Cloud Structure via Statistically Blended Satellite Observations. J. Appl. Meteorol. Climatol. 2014, 53, 437–455.
20. Forsythe, J.M.; Vonder Haar, T.H.; Reinke, D.L. Cloud-base height estimates using a combination of meteorological satellite imagery and surface reports. J. Appl. Meteorol. 2000, 39, 2336–2347.
21. Hutchison, K.; Wong, E.; Ou, S.C. Cloud base heights retrieved during night-time conditions with MODIS data. Int. J. Remote Sens. 2006, 27, 2847–2862.
22. Sun, X.J.; Li, H.R.; Barker, H.W.; Zhang, R.W.; Zhou, Y.B.; Liu, L. Satellite-based estimation of cloud-base heights using constrained spectral radiance matching. Q. J. R. Meteorol. Soc. 2016, 142, 224–232.
23. Noh, Y.-J.; Forsythe, J.M.; Miller, S.D.; Seaman, C.J.; Li, Y.; Heidinger, A.K.; Lindsey, D.T.; Rogers, M.A.; Partain, P.T. Cloud-Base Height Estimation from VIIRS. Part II: A Statistical Algorithm Based on A-Train Satellite Data. J. Atmos. Ocean. Technol. 2017, 34, 585–598.
24. Liu, D.; Chen, S.; Cheng, C.; Barker, H.W.; Dong, C.; Ke, J.; Wang, S.; Zheng, Z. Analysis of global three-dimensional aerosol structure with spectral radiance matching. Atmos. Meas. Tech. 2019, 12, 6541–6556.
25. Li, H.R.; Sun, X.J. Retrieving cloud base heights via the combination of CloudSat and MODIS observations. In Proceedings of the Conference on Remote Sensing of the Atmosphere, Clouds, and Precipitation V, Beijing, China, 13–15 October 2014.
26. Hutchison, K.D. The retrieval of cloud base heights from MODIS and three-dimensional cloud fields from NASA's EOS Aqua mission. Int. J. Remote Sens. 2002, 23, 5249–5265.
27. Chand, D.; Anderson, T.L.; Wood, R.; Charlson, R.J.; Hu, Y.; Liu, Z.; Vaughan, M. Quantifying above-cloud aerosol using spaceborne lidar for improved understanding of cloudy-sky direct climate forcing. J. Geophys. Res. Atmos. 2008, 113.
28. Savtchenko, A.; Kummerer, R.; Smith, P.; Kempler, S.; Leptoukh, G. A-Train Data Depot: Bringing Atmospheric Measurements Together. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2788–2795.
29. Wang, H.; Xu, X. Cloud Classification in Wide-Swath Passive Sensor Images Aided by Narrow-Swath Active Sensor Data. Remote Sens. 2018, 10, 812.
30. Ackerman, S.A.; Strabala, K.I.; Menzel, W.P.; Frey, R.A.; Moeller, C.C.; Gumley, L.E. Discriminating clear sky from clouds with MODIS. J. Geophys. Res. Atmos. 1998, 103, 32141–32157.
31. Baum, B.A.; Soulen, P.F.; Strabala, K.I.; King, M.D.; Ackerman, S.A.; Menzel, W.P.; Yang, P. Remote sensing of cloud properties using MODIS airborne simulator imagery during SUCCESS. 2. Cloud thermodynamic phase. J. Geophys. Res. Atmos. 2000, 105, 11781–11792.
32. Barker, H.W.; Cole, J.N.S.; Shephard, M.W. Estimation of errors associated with the EarthCARE 3D scene construction algorithm. Q. J. R. Meteorol. Soc. 2014, 140, 2260–2271.
33. Marchant, B.; Platnick, S.; Meyer, K.; Arnold, G.T.; Riedi, J. MODIS Collection 6 shortwave-derived cloud phase classification algorithm and comparisons with CALIOP. Atmos. Meas. Tech. 2016, 9, 1587–1599.
34. Platnick, S.; Meyer, K.G.; King, M.D.; Wind, G.; Amarasinghe, N.; Marchant, B.; Arnold, G.T.; Zhang, Z.; Hubanks, P.A.; Holz, R.E.; et al. The MODIS Cloud Optical and Microphysical Products: Collection 6 Updates and Examples From Terra and Aqua. IEEE Trans. Geosci. Remote Sens. 2017, 55, 502–525.
35. Chan, M.A.; Comiso, J.C. Cloud features detected by MODIS but not by CloudSat and CALIOP. Geophys. Res. Lett. 2011, 38.
Figure 1. Flowchart of the nighttime spectral radiance matching (NSRM) method and the verification process in this work. In the demonstration of orbit registration, the yellow line indicates the track of CloudSat, while the grids are pixels of the Moderate Resolution Imaging Spectroradiometer (MODIS).
Figure 2. Reconstruction of the cloud profile measured by CloudSat on 16 January 2009 from 60° N to 60° S. The original profile is shown in panel (a). The dead zones were set to 10 km (b), 50 km (c), 200 km (d), and 400 km (e). The red dots indicate the original cloud top height (CTH) from MODIS, while the green dots indicate the original cloud base height (CBH) from the active sensor.
Figure 3. Mean deviation (MD) (a,b) and root mean square error (RMSE) (c,d) of CTH and CBH between the original profiles and the reconstructed profiles, changing with β and dead zone.
Figure 4. MD (a,b) and RMSE (c,d) of CTH and CBH between the original profiles and the reconstructed profiles, changing with α and dead zone.
Figure 5. Comparison of MD of CTH and CBH between the original profiles and the reconstructed profiles. The profiles were reconstructed with the NSRM method (α = 0.3 and β = 1.5), the similar radiance matching (SRM) method, and the direct selection of the nearest donor.
Figure 6. MD (a) and RMSE (b) of MODIS CTH between the original pixels and the reconstructed pixels, changing with α and dead zone.
Figure 7. MD (a) and RMSE (b) of MODIS CTH between the original pixels and the reconstructed pixels, changing with β and dead zone.
Figure 8. Comparison of CTH inferred from the retrieved cross section (RXS)-expand results up to 400 km on both sides of the ground track and collocated MODIS retrievals. The gray box indicates a mismatch caused by attributing a donor with a high, thin cloud layer on top of the low cloud layers detected by MODIS.
Figure 9. Fractional distribution of the eight types of clouds identified by the NSRM algorithm during the nighttime over the tested period. Each panel presents the distribution of one of the eight cloud types, in terms of all cloud layers (all), the uppermost cloud layer (top), and the single layers (single). The panels are arranged from high, thin clouds to low, thick clouds.
Figure 10. Summary of comparison between the clouds identified from the original profiles (ori) and the reconstruction profiles (rec) with a dead zone of 100 km. Each panel presents one of the eight cloud types, arranged from high, thin clouds to low, thick clouds.
Figure 11. Comparison between cloud identification from MODIS and from the RXS-expand results up to 400 km on both sides of the daytime ground track. Each panel presents one of the eight cloud types, arranged from high, thin clouds to low, thick clouds.

Share and Cite

Chen, S.; Cheng, C.; Zhang, X.; Su, L.; Tong, B.; Dong, C.; Wang, F.; Chen, B.; Chen, W.; Liu, D. Construction of Nighttime Cloud Layer Height and Classification of Cloud Types. Remote Sens. 2020, 12, 668. https://doi.org/10.3390/rs12040668
