Survey On LiDAR Perception in Adverse Weather Conditions
TABLE I: Overview of real-world perception datasets for autonomous driving under adverse weather conditions featuring LiDAR data, separated into datasets for object detection, semantic segmentation, and Simultaneous Localization and Mapping (SLAM). *Weather information explanation: ambient data = exact weather measurements from a weather station; weather type = kind/categorical intensity of weather (e.g. fog, (heavy/light) rain/snow, ...); point-wise = point-wise weather label.
as a linear system by convolving the emitted pulse with the scene response. The scene response models the reflection on solid objects as well as the back-scatter and attenuation due to adverse weather.

A more practical augmentation for fog that can be applied to point clouds directly is introduced in [9]. It is based on the maximum viewing distance, which is a function of the measured intensity, the LiDAR parameters and the optical visibility in fog. If the distance of a clear-weather point exceeds the maximum viewing distance, a random scatter point occurs or the point is lost with a certain probability. This model is adapted to rain by translating visibility parameters and scatter probabilities to rainfall rates [10]. Another rain augmentation model is described in [33]. Raindrops cause either scatter points or lost points, depending on whether the attenuated intensity falls below a noise threshold estimated from the sensor's maximum range.
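To make the viewing-distance mechanism of [9], [10] concrete, the following minimal sketch assumes a toy monotone relation between intensity, visibility and maximum viewing distance; the exact formulas and constants in the original works differ:

```python
import numpy as np

def fog_augment(points, intensity, visibility=50.0, p_scatter=0.5, seed=0):
    """Toy viewing-distance fog augmentation in the spirit of [9], [10].

    points:     (N, 3) clear-weather point cloud, sensor at the origin
    intensity:  (N,) measured intensity per point
    visibility: optical visibility in fog [m]
    """
    rng = np.random.default_rng(seed)
    dist = np.linalg.norm(points, axis=1)
    alpha = 3.0 / visibility  # attenuation from visibility (Koschmieder-type relation)
    # Hypothetical monotone stand-in for the maximum viewing distance of [9]:
    # brighter targets stay visible longer, denser fog shortens the range.
    d_max = np.log1p(np.maximum(intensity, 0.0)) / (2.0 * alpha)
    affected = dist > d_max                  # returns the fog can no longer resolve
    scatter = affected & (rng.random(len(points)) < p_scatter)
    lost = affected & ~scatter
    out = points.copy()
    # Scatter returns appear at a random range below the viewing limit,
    # moved along the original ray direction.
    r_new = rng.uniform(0.5, np.maximum(d_max[scatter], 1.0))
    out[scatter] *= (r_new / dist[scatter])[:, None]
    return out[~lost]
```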
Yet, these models ignore the beam divergence of the emitted LiDAR pulse for rain augmentation, which is considered by [34]. Here, the number of intersections of supersampled beams, which model the beam divergence, with the spherical rain drops is computed. If the number of intersections exceeds a certain threshold, a scatter point is added. The augmentation method in [35] extends this approach such that lost points can occur. Furthermore, it is adapted for snow and fog.
Another augmentation for fog, snow and rain is presented in [36]. This model operates in the power domain and does not rely, e.g., on counting intersections as the previously discussed methods do. Additionally, beam divergence is simulated with a computationally more efficient sampling strategy for scatter point distances. In general, the model first compares the attenuated power reflected from solid objects and from randomly sampled scatterers against a distance-dependent noise threshold. A scatter point is added if the power from scatter points exceeds the one from the solid object. A point is lost if its power falls below the distance-dependent noise threshold.
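A minimal sketch of this power-domain mechanism follows, assuming a simplified LiDAR equation and a hypothetical distance-dependent noise threshold; the sampling strategy and calibration in [36] are more involved:

```python
import numpy as np

def power_domain_augment(ranges, reflectivity, alpha=0.02, beta=0.01, seed=0):
    """Toy power-domain weather augmentation in the spirit of [36].

    ranges:       (N,) clear-weather range per beam [m] (assumed > 1 m)
    reflectivity: (N,) reflectivity of the solid target hit by each beam
    alpha, beta:  assumed attenuation / backscatter coefficients of the medium
    """
    rng = np.random.default_rng(seed)
    # Received power from the solid target (simplified LiDAR equation):
    # two-way attenuation along the path plus 1/R^2 spreading loss.
    p_solid = reflectivity * np.exp(-2.0 * alpha * ranges) / ranges**2
    # One randomly sampled scatterer per beam somewhere along the ray.
    r_scat = rng.uniform(1.0, ranges)
    p_scat = beta * np.exp(-2.0 * alpha * r_scat) / r_scat**2
    # Hypothetical distance-dependent noise threshold.
    p_noise = 1e-4 * (1.0 + (ranges / 100.0) ** 2)
    new_ranges = ranges.copy()
    scatter_wins = p_scat > p_solid             # scatter return replaces the target
    new_ranges[scatter_wins] = r_scat[scatter_wins]
    lost = ~scatter_wins & (p_solid < p_noise)  # target echo drowned in the noise
    return new_ranges[~lost]
```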
Other power domain-driven augmentation methods can be found in [3] and [4] for fog and snow, respectively. In contrast to [36], they explicitly compute the intensity profile relying on the theoretical formulation from [32]. Thereby, the different scatterers as well as the solid object contribute to the peak profile. This allows for modelling occlusions and yields a more physically accurate augmentation model. Furthermore, [4] introduces a wet ground augmentation model that models lost ground points due to the water film on the road. This also allows for estimating the noise floor in a more data-driven way compared to the heuristic one used in [36]. The authors of [37] suggest a physically sound method to estimate both the attenuation and the backscattering coefficient to further improve the model proposed in [3].

Aside from physics-based models, empirical models can also be used for augmentation. An empirical augmentation method for spray whirled up by other vehicles can be found in [38]. This model is centered around the observation from dedicated experiments that spray is organized into clusters. Another data-driven approach is presented in [39], which relies on spray scenes from the Waymo dataset. In [40], a more computationally expensive spray augmentation method is presented that relies on a renderer with a physics engine.

Finally, DL-based methods can be applied to adverse weather augmentation. In [41], a Generative Adversarial Network (GAN)-based approach inspired by image-to-image translation is presented that is able to transform point clouds from sunny to foggy or rainy conditions. The authors compare their results qualitatively with real foggy and rainy point clouds from a weather chamber.

However, assessing the quality and degree of realism of an augmentation method is challenging. Some authors use weather chambers or other controlled environments that allow for a comparison with real-world weather effects [10], [27]. Furthermore, an augmentation method is often considered realistic if it aids the perception performance under real-world adverse weather conditions [42].
III. POINT CLOUD PROCESSING & DENOISING

In this section, we present approaches on how to deal with adverse weather conditions which are sensor technique- or point cloud-based, i.e. are independent of the actual perception task. Thereby we analyze the general sensor-dependent weather robustness and the possibility to estimate the degree of performance degradation depending on the weather conditions. Furthermore, there are streams of research on removing the weather-induced noise from the LiDAR point clouds with both classical denoising methods and DL.
A. Sensor-related Weather Robustness

Depending on the technology, the characteristics and the configuration, different LiDAR models are more or less influenced by the weather conditions [7], [8], [15], [43]. Due to eye safety restrictions and the suppression of ambient light, two operation wavelengths for LiDAR sensors prevailed: 905nm and 1550nm, with 905nm sensors making up the majority of the available sensors. Yet, the 1550nm models appear to have an improved visibility under heavy fog conditions due to the higher emitted power [44]. For a thorough discussion on LiDAR technologies under adverse weather conditions, we refer to [17].

Furthermore, the performance of Full Waveform LiDAR (FWL) has been investigated under adverse weather conditions [46]. FWL measures not only one or two returns but all weaker returns, effectively measuring more noise but also gathering more information about the surroundings. Although it requires high computational resources, FWL has proven useful to analyse the surrounding medium, which can lay the groundwork for understanding even changing conditions and adjusting to them dynamically.
B. Sensor Degradation Estimation and Weather Classification

As LiDAR sensors degrade differently under varying weather conditions, estimating the degree of sensor degradation is a first step towards dealing with corrupted LiDAR point clouds. Efforts have been made in developing methods to better identify the sensing limits in order to prevent the propagation of false detections into downstream tasks.

Firstly, some studies on characterizing sensor degradation under various weather conditions [14], [43], [44] represent a solid basis for sensor calibration under adverse weather conditions and further development, although they are not yet evaluated with regard to their weather classification abilities. The first work to actually model the influence of rain on the LiDAR sensor is presented in [33]. The authors present a mathematical model derived from the LiDAR equation and allow for a performance degradation estimation based on the rain rate and the maximum sensing range.

In subsequent research works, the estimation of the sensor degradation under adverse weather conditions was formulated as an anomaly detection task [47] and as a validation task [48]. The former employs a DL-based model which aims to learn a latent representation that separates clear from rainy LiDAR scans and is thus able to quantify the degree of the performance decrease. The latter method suggests a reinforcement learning (RL) model to determine failures in an object detection and tracking model.

While the above-mentioned methods aim to quantify the decrease in the sensor performance itself, another stream of research focuses on the classification of the surrounding weather conditions (i.e. clear, rain, fog and snow). Satisfactory results were achieved with the help of classical machine learning methods (k-Nearest Neighbors and Support Vector Machines) based on hand-crafted features³ from LiDAR point clouds: [10] proposed a feature set to conduct point-wise weather classification; a similar frame-wise approach can be found in [49].
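As an illustration of how compact such a frame-wise classifier can be, the sketch below uses a hypothetical four-feature set and scikit-learn's k-NN; the actual feature sets of [10], [49] are richer and partly region-specific (see footnote 3 below):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def frame_features(points, intensity):
    """Four hypothetical frame-wise features; the sets in [10], [49] are richer."""
    dist = np.linalg.norm(points, axis=1)
    return np.array([
        len(points),          # echo count drops in heavy precipitation
        dist.mean(),          # effective sensing range shrinks in fog
        intensity.mean(),     # airborne particles return low intensities
        (dist < 5.0).mean(),  # fraction of close-range clutter
    ])

def train_weather_classifier(frames, labels):
    """frames: list of (points, intensity) scans; labels: 0=clear, 1=rain, 2=fog, 3=snow."""
    X = np.stack([frame_features(p, i) for p, i in frames])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
    clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)   # held-out frame-wise accuracy
```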
[51] developed a probabilistic model for frame-wise regression of the rain rate. With a mixture of experts, they accurately infer the rain rate from LiDAR point clouds.

It should be noted that most of these methods were trained and evaluated on data collected in a weather chamber. While the ability to carefully control the weather conditions allows for high reproducibility, the data usually do not exactly reflect real-world conditions. In order to assess each method's classification abilities, thorough studies on real-world data are necessary [50].

³The optimal feature set appears to depend on the sensing surface, i.e. the feature set most suitable for classifications based on atmospheric regions might not be the best choice for classifications based on street regions, and vice versa [49], [50].
C. Point Cloud Denoising

Weather effects reflect in LiDAR point clouds in terms of specific noise patterns. As described in Section I, they might affect factors like the number of measurements in a point cloud and the maximum sensing range. Instead of augmenting point clouds with weather-specific noise, the point clouds can be denoised by various means in order to reconstruct clear measurements. In addition to classical filter algorithms, some works on DL-based denoising emerged recently.

Besides applying perception tasks like object detection on the denoised point clouds, metrics like precision (preserving environmental features) and recall (filtering out weather-induced noise) are crucial to evaluate the performance of classical filtering methods. To calculate these metrics, point-wise labels are required which account for weather classes like snow particles [26].
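Computed point-wise, these metrics reduce to a few lines. A minimal sketch, assuming binary labels where the positive class is weather noise and the filter outputs a removal mask:

```python
import numpy as np

def denoising_precision_recall(is_noise, is_removed):
    """Point-wise filter evaluation against weather labels such as those of [26].

    is_noise:   (N,) bool, ground-truth weather points (e.g. the snow class)
    is_removed: (N,) bool, points the filter discarded
    """
    tp = np.sum(is_removed & is_noise)       # weather points correctly removed
    fp = np.sum(is_removed & ~is_noise)      # environment wrongly removed
    fn = np.sum(~is_removed & is_noise)      # weather points kept
    precision = tp / max(tp + fp, 1)         # high = environment preserved
    recall = tp / max(tp + fn, 1)            # high = weather noise filtered out
    return precision, recall
```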
Radius Outlier Removal (ROR) filters out noise based on each point's neighborhood. This becomes problematic for LiDAR measurements of distant objects, as the point cloud naturally becomes sparse. Advanced methods solve this by dynamically adjusting the threshold as a function of the sensing distance (Dynamic Radius Outlier Removal (DROR) [52], [53]) or by taking into account the average distance to each point's neighbors within the point cloud (Statistical Outlier Removal (SOR)). Both methods exhibit high runtimes, making them hardly applicable in autonomous driving. The Fast Cluster Statistical Outlier Removal (FCSOR) [54] and the Dynamic Statistical Outlier Removal (DSOR) [26] both suggest methods to lower the computational load while still removing weather artifacts from point clouds.
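The core of the dynamic-radius idea fits into a short sketch; the scaling constant below is a placeholder for the term that [52] derives from the sensor's angular resolution:

```python
import numpy as np
from scipy.spatial import cKDTree

def dror_filter(points, gamma=0.01, r_min=0.05, k_min=3):
    """Toy Dynamic Radius Outlier Removal in the spirit of [52].

    gamma: radius growth per meter of range (placeholder; [52] derives the
           search radius from the sensor's horizontal angular resolution)
    """
    horiz = np.linalg.norm(points[:, :2], axis=1)   # horizontal range per point
    radius = np.maximum(r_min, gamma * horiz)       # search radius grows with range
    tree = cKDTree(points)
    keep = np.empty(len(points), dtype=bool)
    for i, (p, r) in enumerate(zip(points, radius)):
        # query_ball_point counts the point itself, hence the +1.
        keep[i] = len(tree.query_ball_point(p, r)) >= k_min + 1
    return points[keep]
```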
A thorough analysis revealed that weather-induced measurement errors are associated with high density, low intensity, close range and fast decay of points [55]. In addition to weather-characteristic neighborhood features, the Low-Intensity Outlier Removal (LIOR) [56] and the Dynamic Distance-Intensity Outlier Removal (DDIOR) [55] algorithms take the measurement intensity into account to remove weather-induced artifacts. The former utilizes assumptions about the particle size and a manually tuned "snow intensity" threshold, while the latter aims to unite multiple of the existing filtering ideas into a more sophisticated version. It keeps the computational costs low with the help of a pre-filtering step and achieves compelling results on snowy LiDAR scans.
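A sketch of the intensity-aware variant, assuming a fixed global intensity threshold for brevity ([56] derives a range-dependent threshold from assumptions about the particle size):

```python
import numpy as np
from scipy.spatial import cKDTree

def lior_filter(points, intensity, i_thresh=10.0, radius=0.4, k_min=3):
    """Toy intensity-aware outlier removal in the spirit of LIOR [56].

    Only dim returns are removal candidates (snow particles reflect weakly);
    a candidate is dropped only if it is also locally sparse.
    """
    tree = cKDTree(points)
    keep = np.ones(len(points), dtype=bool)
    for i in np.flatnonzero(intensity < i_thresh):  # cheap intensity pre-filter
        # Dim AND isolated -> most likely an airborne particle, not structure.
        keep[i] = len(tree.query_ball_point(points[i], radius)) >= k_min + 1
    return points[keep]
```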
Denoising methods for roadside LiDARs rely on background models from historical data (which is available for stationary roadside sensors) to identify dynamic points, in combination with basic principles used in classical denoising [57], [58]. While [57] filters the weather noise from the actual objects with the help of intensity thresholds (compare [56]), [58] filters outliers based on the characteristic local density (compare [52]). Unfortunately, this is not easily applicable to LiDAR sensors mounted on moving vehicles.
Contrary to classical denoising methods, DL-based denoising of LiDAR point clouds became popular due to the models' ability to directly understand the underlying structure of weather-induced noise: firstly, Convolutional Neural Network (CNN)-based models have been used for efficient weather denoising [59], [60], [61]. The use of temporal data to distinguish noise from objects further improves the weather-specific noise removal [62], because the weather noise naturally changes at a higher frequency than the scene background and even the objects within that scene. CNN-based approaches (especially voxel-based ones) outperform classical denoising methods in terms of noise filtering. Additionally, they have a lower inference time due to faster GPU computations [60].
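As a minimal illustration of this CNN family, the sketch below classifies every pixel of a projected range image as valid or weather noise; real architectures such as the one in [59] are deeper and use specialized blocks, and the tensors here are random stand-ins:

```python
import torch
import torch.nn as nn

class RangeImageDenoiser(nn.Module):
    """Tiny per-pixel noise classifier over a LiDAR range image (range +
    intensity channels), loosely in the spirit of CNN denoisers like [59]."""

    def __init__(self, n_classes=2):          # 0 = valid point, 1 = weather noise
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(32, n_classes, kernel_size=1),   # per-pixel logits
        )

    def forward(self, x):                      # x: (B, 2, H, W)
        return self.net(x)

# One toy training step on a 64-ring x 2048-column projection.
model = RangeImageDenoiser()
scan = torch.randn(1, 2, 64, 2048)             # stand-in range/intensity image
labels = torch.randint(0, 2, (1, 64, 2048))    # stand-in point-wise noise labels
loss = nn.CrossEntropyLoss()(model(scan), labels)
loss.backward()                                # pixels flagged as noise get dropped
```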
In addition to the supervised CNN methods, unsupervised methods like CycleGANs are able to turn noisy point cloud inputs into clear LiDAR scans [60]. Yet, they remain noisy in their nature, and the resulting point clouds can hardly be validated with respect to their realism [63].
IV. ROBUST LIDAR PERCEPTION

While there are promising efforts in reducing the domain shift introduced through adverse weather, there are multiple possible approaches to making LiDAR perception models more robust towards adverse weather conditions, independently of the quality and the noise level of the data. There are three streams of work here: utilizing sensor fusion, enhancing training by data augmentation with weather-specific noise, or general approaches on model robustness against domain shifts to compensate for the performance decrease.

It should be noted that sensor fusion approaches are the only ones tackling multiple perception tasks besides object detection. To the best of our knowledge, there is no literature on other perception tasks like semantic segmentation.

A. Combating Adverse Weather with Sensor Fusion

Generally, it can be said that every sensor in an autonomous driving sensor set has its strengths and weaknesses. The most common sensors within such sensor sets are RGB cameras, radars and LiDARs. As discussed in Section I, LiDAR perception suffers when encountering visible airborne particles like dust, rain, snow or fog. Cameras are more sensitive to strong light incidence and blooming effects. The radar in turn is affected by neither, but lacks the capability to detect static objects and finer structures. Thus, it suggests itself to fuse different sensors in order to alleviate their respective shortcomings under different surrounding conditions and facilitate a robust perception.

Early works on sensor fusion for combating the adverse effect of weather on sensor perception concentrate on the development of robust data association frameworks [64], [65]. More recent research streams utilize DL-based approaches for robust multi-modal perception and mainly address the question of early vs. late fusion to achieve robustness under adverse weather conditions.

The answer to the question of whether to prefer early or late fusion seems to be governed by the choice of the sensors, the data representation, and the expected failure rates. Provided that not all fused sensors are degraded equally and at least one of them is fully functional, late fusion appears to outperform early fusion [66], [67], [68] (a minimal late-fusion sketch follows below). In that case, the model has the ability to treat the sensor streams independently: it can rely on the working sensor and ignore the failing one. In contrast, an early fusion of e.g. radar and LiDAR depth maps helps to filter out false detections in order to achieve clean scans [69]⁴.

⁴Although this work does not explicitly take adverse weather into account, it evaluates the proposed approaches on haze and mist.

The data representation is another factor that partially contributes to answering the question of early vs. late fusion. The Bird's Eye View (BEV) of the LiDAR sensor greatly facilitates object detection by improved object distinguishability. Thus, any model that has learned to rely on the respective LiDAR features will suffer from a performance loss when the LiDAR data is corrupted [70]. Complete sensor failure has successfully been combated by utilizing teacher-student networks [71].

Ultimately, some sensor fusion approaches rely on combining early and late fusion into one model and exploit concepts like temporal data and region-based fusion [72] or attention maps [73]. Another possibility is the adaptive, entropy-steered fusion proposed in [21].

Besides the predictive performance, the model runtime should also be taken into consideration when developing novel perception approaches [72]. [68] introduced a new metric which combines the predictive performance for drivable space segmentation with the inference runtime. Interestingly, the LiDAR-only model scored best on that metric.
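The following is a minimal late-fusion sketch that merges per-sensor detections with standard non-maximum suppression; the degradation-aware score weights are an assumption (they could, e.g., be fed by a degradation estimate from Section III-B) rather than a mechanism from the cited works:

```python
import numpy as np

def late_fusion(dets_lidar, dets_camera, w_lidar=1.0, w_camera=1.0, iou_thresh=0.5):
    """Merge per-sensor detections after independent inference (late fusion).

    dets_*: (N, 5) arrays of [x1, y1, x2, y2, score]. Lowering a weight lets
    the fusion lean on the healthier modality, mirroring the behaviour
    reported for late fusion in [66], [67], [68].
    """
    dets = np.vstack([
        dets_lidar * [1, 1, 1, 1, w_lidar],     # down-weight a degraded sensor
        dets_camera * [1, 1, 1, 1, w_camera],
    ])
    keep = []
    order = np.argsort(-dets[:, 4])             # highest weighted score first
    while order.size:
        i = order[0]
        keep.append(i)
        rest = dets[order[1:]]
        # Intersection-over-union against the currently best box.
        x1 = np.maximum(dets[i, 0], rest[:, 0]); y1 = np.maximum(dets[i, 1], rest[:, 1])
        x2 = np.minimum(dets[i, 2], rest[:, 2]); y2 = np.minimum(dets[i, 3], rest[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (dets[i, 2] - dets[i, 0]) * (dets[i, 3] - dets[i, 1])
        area_r = (rest[:, 2] - rest[:, 0]) * (rest[:, 3] - rest[:, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = order[1:][iou < iou_thresh]      # standard NMS merge step
    return dets[keep]
```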
Undoubtedly, it is convenient to compensate sensor failure under adverse weather conditions with unaffected sensors⁵. Yet, by striving to improve the LiDAR-only perception under adverse weather conditions, safety-critical applications like autonomous driving can become even more reliable.

⁵[74] proposes optimized sensor setups.

B. Enhancing Training with Data Augmentation

While data augmentation is widely used in DL training strategies, it is the creation of specific weather noise which is particularly challenging. Section II-B presented a variety of methods to generate weather-specific noise in LiDAR point clouds. Utilizing data augmentation during the training of a perception model is the diametrical method to point cloud denoising, which was discussed in Section III-C. Instead of removing the weather-induced noise, the aim is to make the model accustomed to that exact noise. It has been demonstrated that weather augmentation is more effective than denoising in terms of robustness, which gives valuable hints on which research direction should be emphasized in the future [4].

Generally, several works demonstrate the benefits of such data augmentation at training time by evaluating it on the task of 3D object detection [3], [4], [36].
Many works address the subject of choosing the best feature extractor for robust LiDAR perception under adverse weather conditions. Point-based and voxelizing methods appear to be less affected by the augmented weather effects [3], [4], [36], at least for object detection, hinting that some robustness can be achieved by carefully choosing the perception model. Also, there seems to be an interaction between the model architecture and the kind of point cloud corruption due to adverse weather. The wet ground extension presented in [4] only aids some models, indicating that the detection problems caused by ray scattering are more or less grave depending on the model architecture.

Furthermore, the size and shape of objects seem to play a role in the degree of any detection model's performance degradation [3], [4], [75]. That means that smaller and underrepresented classes (like cyclist in the STF dataset) suffer more from the weather augmentation than well-represented classes like car and pedestrian. Thus, the number of annotated objects in the (clear) training set is a good indicator of the object detection performance even under adverse weather conditions. This indicates that not only does training with weather augmentation aid the detection performance under clear weather conditions [4]; interestingly, it also appears to work inversely [75].

C. Robust Perception Algorithms

While fusion methods with complementary sensors alleviate the weather-induced performance degradation of each single sensor, they only act as a workaround for the actual problem at hand. Changes in the weather conditions can be seen as a special case of domain shift [76], thus approaches developed to bridge domain gaps might be applied to the weather-to-weather (e.g. clear-to-rain/fog/snow) domain shift⁶. Since there are no extensive datasets addressing the weather-to-weather domain shift only, it can be evaluated as part of the dataset-to-dataset domain shift. Thus, two works on developing robust LiDAR perception algorithms indirectly evaluate the performance under adverse weather conditions. While these works provide interesting insights into the problem at hand, it should be noted that, since the domain gap was not limited to the shift between weather conditions, other factors like sensor resolution and label strategy might overshadow the weather-induced gap. Thus, it is unclear in the evaluation which part of the performance change is attributable to the shift in the weather condition itself, since the dataset-to-dataset shift is very strong.

The authors of [78] employ a teacher-student setup for object detection: the teacher is trained on Waymo Open (sunny) to generate labels for a mix of Waymo Open and Kirkland (rainy) data, and the student is trained on all labels and applied to Kirkland. Interestingly, the students appeared to generalize better to the target domain, indicating that they were able to cope with the adverse weather. The authors of [79] proposed a robust object detection pipeline including attention mechanisms and global context-aware feature extraction, which allows the model to ignore weather-induced noise and, at the same time, understand a whole scene. While their methods fail to perform well on two domains simultaneously (KITTI, sunny & CADC, rainy), a joint training based on a maximum discrepancy loss yields promising results and shows high performances on both source and target domain.

[80] focuses on alleviating weather-induced sensor degradation for both RGB camera and LiDAR. Although they utilize sensor fusion (derived from the entropy fusion presented in [21]) as well as data augmentation for both sensors, their work strongly contributes towards exploiting a set of methods to bridge the gap to multiple unknown target domains for object detection. They achieve this by introducing domain discriminators and domain alignment by self-supervised learning through a pre-training strategy. Their results show that their multi-modal, multi-target domain adaptation method is able to generalize well to e.g. fog scenarios.

⁶[77] gives a comprehensive overview of current state-of-the-art domain adaptation methods, but these mainly tackle problems related to different sensor resolutions or the available data and their labels.
V. DISCUSSION AND CONCLUSION

In this survey paper we outlined current research directions in LiDAR-based environment perception for autonomous driving in adverse weather conditions. We thoroughly analyzed and discussed the availability of training data for deep learning algorithms, perception-independent point cloud processing techniques for detecting weather conditions and denoising the LiDAR scans, and finally, current state-of-the-art approaches on robust LiDAR perception. In the following, we summarize the most promising research directions and identify remaining gaps.

Adverse Weather Data (Section II): There are several autonomous driving datasets which include LiDAR sensors and simultaneously cover adverse weather conditions. Most of them provide object labels, but only one has point-wise class labels. There clearly is a need for appropriate real-world datasets to train and validate the growing number of deep learning based LiDAR perception algorithms. Some works resort to weather-specific data augmentation to simulate adverse weather effects; yet, a method to evaluate the realism of the generated augmentations is missing.

Point Cloud Processing & Denoising (Section III): Distinct LiDAR technologies react differently to adverse weather conditions. While thorough studies on sensor degradation under adverse weather conditions exist, a systematic analysis of the impact on perception algorithms is missing. Here, approaches on sensor degradation estimation will be useful. Furthermore, there is ongoing research on point cloud denoising, but the existing statistical methods have been proven less efficient than utilizing weather augmentation during training. Modern methods like CNN- or GAN-based approaches might bridge that gap.

Robust LiDAR Perception (Section IV): A large body of research focuses on alleviating sensor degradation with the help of sensor fusion. While this yields compelling results, improving the LiDAR-only perception under adverse weather conditions should not be neglected. Sophisticated domain adaptation approaches (like anomaly detection or uncertainty modeling) might be useful to address that matter. Viewing the presence of weather-induced noise in LiDAR point clouds from different perspectives might unlock novel streams of research on bridging the domain gap introduced by adverse weather conditions. Investigating the quality of that domain gap would give hints on the potential of general domain adaptation approaches.

REFERENCES

[1] R. Roriz, J. Cabral, and T. Gomes, "Automotive lidar technology: A survey," IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2022.
[2] A. Geiger, P. Lenz, and R. Urtasun, "Are we ready for autonomous driving? the kitti vision benchmark suite," in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
[3] M. Hahner, et al., "Fog simulation on real lidar point clouds for 3d object detection in adverse weather," in IEEE/CVF International Conference on Computer Vision (ICCV), 2021.
[4] ——, "Lidar snowfall simulation for robust 3d object detection," in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
[5] M. Kutila, et al., "Benchmarking automotive lidar performance in arctic conditions," in IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), 2020.
[6] M. Jokela, et al., "Lidar performance review in arctic conditions," in IEEE 15th International Conference on Intelligent Computer Communication and Processing (ICCP), 2019.
[7] K. Montalban, et al., "A quantitative analysis of point clouds from automotive lidars exposed to artificial rain and fog," Atmosphere, 2021.
[8] R. K. Heinzler, "Lidar-based weather detection: Automotive lidar sensors in adverse weather conditions," Ph.D. dissertation, Karlsruher Institut für Technologie (KIT), 2022.
[9] M. Bijelic, T. Gruber, and W. Ritter, "A benchmark for lidar sensors in fog: Is detection breaking down?" in IEEE Intelligent Vehicles Symposium (IV), 2018.
[10] R. Heinzler, et al., "Weather influence and classification with automotive lidar sensors," in IEEE Intelligent Vehicles Symposium (IV), 2019.
[11] F. Piewak, et al., "Boosting lidar-based semantic labeling by cross-modal training data generation," in IEEE European Conference on Computer Vision (ECCV) Workshops, 2019.
[12] K. Yoneda, et al., "Automated driving recognition technologies for adverse weather conditions," IATSS Research, 2019.
[13] J. Abdo, S. Hamblin, and G. Chen, "Effective Range Assessment of Lidar Imaging Systems for Autonomous Vehicles Under Adverse Weather Conditions With Stationary Vehicles," ASCE-ASME J Risk and Uncert in Engrg Sys Part B Mech Engrg, 2021.
[14] C. Linnhoff, et al., "Measuring the influence of environmental conditions on automotive lidar sensors," Sensors, 2022.
[15] D. M. Neumeister and D. B. Pape, "Automated vehicles and adverse weather," U.S. Department of Transportation, Tech. Rep., 2019.
[16] A. S. Mohammed, et al., "The perception system of intelligent ground vehicles in all weather conditions: A systematic literature review," Sensors, 2020.
[17] Y. Zhang, et al., "Autonomous driving in adverse weather conditions: A survey," arXiv, 2021.
[18] M. Cordts, et al., "The cityscapes dataset for semantic urban scene understanding," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[19] P. Sun, et al., "Scalability in perception for autonomous driving: Waymo open dataset," in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[20] H. Caesar, et al., "nuscenes: A multimodal dataset for autonomous driving," in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[21] M. Bijelic, et al., "Seeing through fog without seeing fog: Deep multimodal sensor fusion in unseen adverse weather," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[22] W. Ritter, et al., "Dense: Environment perception in bad weather—first results," Electronic Components and Systems for Automotive Applications, 2019.
[23] M. Pitropov, et al., "Canadian adverse driving conditions dataset," The International Journal of Robotics Research, 2020.
[24] A. Pfeuffer, et al., "The aduulm-dataset - a semantic segmentation dataset for sensor fusion," in British Machine Vision Conference (BMVC), 2020.
[25] J. Bos, et al., "Autonomy at the end of the earth: Inclement weather autonomous driving data set," in Autonomous Systems: Sensors, Processing and Security for Vehicles & Infrastructure 2020, 2020.
[26] A. Kurup and J. P. Bos, "Dsor: A scalable statistical filter for removing falling snow from lidar point clouds in severe winter weather," arXiv, 2021.
[27] A. Carballo, et al., "Libre: The multiple 3d lidar dataset," arXiv, 2020.
[28] W. Maddern, et al., "1 Year, 1000km: The Oxford RobotCar Dataset," The International Journal of Robotics Research (IJRR), 2017.
[29] D. Barnes, et al., "The oxford radar robotcar dataset: A radar extension to the oxford robotcar dataset," in IEEE International Conference on Robotics and Automation (ICRA), 2020.
[30] M. Sheeny, et al., "Radiate: A radar dataset for automotive perception," arXiv, 2020.
[31] A. Dosovitskiy, et al., "CARLA: An open urban driving simulator," in Conference on Robot Learning (CoRL), 2017.
[32] R. H. Rasshofer, M. Spies, and H. Spies, "Influences of weather phenomena on automotive laser radar systems," Advances in Radio Science, 2011.
[33] C. Goodin, et al., "Predicting the influence of rain on lidar in adas," Electronics, 2019.
[34] S. Hasirlioglu and A. Riener, "A model-based approach to simulate rain effects on automotive surround sensor data," in International Conference on Intelligent Transportation Systems (ITSC), 2018.
[35] S. Teufel, et al., "Simulating realistic rain, snow, and fog variations for comprehensive performance characterization of lidar perception," in IEEE Vehicular Technology Conference (VTC), 2022.
[36] V. Kilic, et al., "Lidar light scattering augmentation (lisa): Physics-based simulation of adverse weather conditions for 3d object detection," arXiv, 2021.
[37] Y. Liu, et al., "Parallel lidars meet the foggy weather," IEEE Journal of Radio Frequency Identification, 2022.
[38] C. Linnhoff, et al., "Simulating road spray effects in automotive lidar sensor models," 2022.
[39] Y.-C. Shih, et al., "Reconstruction and synthesis of lidar point clouds of spray," IEEE Robotics and Automation Letters, 2022.
[40] J. R. Vargas Rivero, et al., "Data augmentation of automotive lidar point clouds under adverse weather situations," Sensors, 2021.
[41] J. Lee, et al., "Gan-based lidar translation between sunny and adverse weather for autonomous driving and driving simulation," Sensors, 2022.
[42] I. Fursa, et al., "Worsening perception: Real-time degradation of autonomous vehicle perception performance for simulation of adverse weather conditions," arXiv, 2021.
[43] A. Filgueira, et al., "Quantifying the influence of rain in lidar performance," Measurement, 2017.
[44] M. Kutila, et al., "Automotive lidar performance verification in fog and rain," in International Conference on Intelligent Transportation Systems (ITSC), 2018.
[45] M. E. Warren, "Automotive lidar technology," in Symposium on VLSI Circuits, 2019.
[46] A. M. Wallace, A. Halimi, and G. S. Buller, "Full waveform lidar for adverse weather conditions," IEEE Transactions on Vehicular Technology, 2020.
[47] C. Zhang, et al., "Lidar degradation quantification for autonomous driving in rain," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021.
[48] H. Delecki, et al., "How do we fail? stress testing perception in autonomous vehicles," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022.
[49] J. R. Vargas Rivero, et al., "Weather classification using an automotive lidar sensor based on detections on asphalt and atmosphere," Sensors, 2020.
[50] G. Sebastian, et al., "Rangeweathernet for lidar-only weather and road condition classification," in IEEE Intelligent Vehicles Symposium (IV), 2021.
[51] R. Karlsson, et al., "Probabilistic rainfall estimation from automotive lidar," in IEEE Intelligent Vehicles Symposium (IV), 2022.
[52] N. Charron, S. Phillips, and S. L. Waslander, "De-noising of lidar point clouds corrupted by snowfall," in Conference on Computer and Robot Vision (CRV), 2018.
[53] M. H. Prio, S. Patel, and G. Koley, "Implementation of dynamic radius outlier removal (dror) algorithm on lidar point cloud data with arbitrary white noise addition," in IEEE Vehicular Technology Conference (VTC2022), 2022.
[54] H. Balta, et al., "Fast statistical outlier removal based method for large 3d point clouds of outdoor environments," in IFAC Symposium on Robot Control (SYROCO), 2018.
[55] W. Wang, et al., "A scalable and accurate de-snowing algorithm for lidar point clouds in winter," Remote Sensing, 2022.
[56] J.-I. Park, J. Park, and K.-S. Kim, "Fast and accurate desnowing algorithm for lidar point clouds," IEEE Access, 2020.
[57] J. Wu, et al., "Vehicle detection under adverse weather from roadside lidar data," Sensors, 2020.
[58] P. Sun, et al., "Objects detection with 3-d roadside lidar under snowy weather," IEEE Sensors Journal, 2022.
[59] R. Heinzler, et al., "Cnn-based lidar point cloud de-noising in adverse weather," IEEE Robotics and Automation Letters (RA-L), 2020.
[60] J. Bergius and J. Holmblad, "Lidar point cloud de-noising for adverse weather," Master's thesis, Halmstad University, School of Information Technology, 2022.
[61] M.-Y. Yu, R. Vasudevan, and M. Johnson-Roberson, "Lisnownet: Real-time snow removal for lidar point cloud," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022.
[62] A. Seppänen, R. Ojala, and K. Tammi, "4denoisenet: Adverse weather denoising from adjacent point clouds," IEEE Robotics and Automation Letters (RA-L), vol. 8, 2023.
[63] L. T. Triess, et al., "Quantifying point cloud realism through adversarially learned latent representations," in Proc. of the German Conference on Pattern Recognition (GCPR), 2021.
[64] P. Radecki, M. E. Campbell, and K. Matzen, "All weather perception: Joint data association, tracking, and classification for autonomous ground vehicles," arXiv, 2016.
[65] P. Fritsche, et al., "Fusing lidar and radar data to perform slam in harsh environments," in International Conference on Informatics in Control, Automation and Robotics (ICINCO), K. Madani, D. Peaucelle, and O. Gusikhin, Eds., 2018.
[66] A. Pfeuffer and K. Dietmayer, "Optimal sensor data fusion architecture for object detection in adverse weather conditions," in International Conference on Information Fusion (FUSION), 2018.
[67] A. Pfeuffer and K. C. J. Dietmayer, "Robust semantic segmentation in adverse weather conditions by means of sensor data fusion," in International Conference on Information Fusion (FUSION), 2019.
[68] N. A. Rawashdeh, J. P. Bos, and N. J. Abu-Alrub, "Camera–Lidar sensor fusion for drivable area detection in winter weather using convolutional neural networks," Optical Engineering, SPIE, 2022.
[69] G. Xie, et al., "Obstacle detection based on depth fusion of lidar and radar in challenging conditions," Industrial Robot: the international journal of robotics research and application, 2021.
[70] M. J. Mirza, et al., "Robustness of object detectors in degrading weather conditions," in IEEE International Intelligent Transportation Systems Conference (ITSC), 2021.
[71] Y.-J. Li, et al., "Modality-agnostic learning for radar-lidar fusion in vehicle detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
[72] K. Qian, et al., "Robust multimodal vehicle detection in foggy weather using complementary lidar and radar signals," in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
[73] S. S. Chaturvedi, L. Zhang, and X. Yuan, "Pay "attention" to adverse weather: Weather-aware attention-based object detection," in International Conference on Pattern Recognition (ICPR), 2022.
[74] Z. Elmassik, M. Sabry, and A. El Mougy, "Understanding the scene: Identifying the proper sensor mix in different weather conditions," in International Conference on Agents and Artificial Intelligence (ICAART), 2022.
[75] T. Vattem, G. Sebastian, and L. Lukic, "Rethinking lidar object detection in adverse weather conditions," in International Conference on Robotics and Automation (ICRA), 2022.
[76] T. Sun, et al., "SHIFT: a synthetic driving dataset for continuous multi-task domain adaptation," in Computer Vision and Pattern Recognition, 2022.
[77] L. T. Triess, et al., "A Survey on Deep Domain Adaptation for LiDAR Perception," in IEEE Intelligent Vehicles Symposium (IV) Workshops, 2021.
[78] B. Caine, et al., "Pseudo-labeling for scalable 3d object detection," arXiv, 2021.
[79] J. Lin, et al., "Improved 3d object detector under snowfall weather condition based on lidar point cloud," IEEE Sensors Journal, 2022.
[80] G. Eskandar, et al., "An unsupervised domain adaptive approach for multimodal 2d object detection in adverse weather conditions," arXiv, 2022.