
Research Article

Transportation Research Record
1–20
© National Academy of Sciences: Transportation Research Board 2023
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/03611981231172949
journals.sagepub.com/home/trr

Roadside LiDAR Sensor Configuration Assessment and Optimization Methods for Vehicle Detection and Tracking in Connected and Automated Vehicle Applications

Yi Ge1, Peter J. Jin2, Tianya T. Zhang1, and Anjiang Chen1

Abstract
This paper develops an assessment and optimization model for configuring roadside LiDAR (Light Detection and Ranging) installations. More specifically, an analytic model and a simulation model have been developed to analyze detection blind zones and their impact on vehicle detection and tracking capabilities in Connected and Automated Vehicle (CAV) applications. The proposed model can derive the area and height of the detection blind zones from a given roadside LiDAR location and road geometry. Evaluation metrics are also proposed to assess the severity of the blind zones, including laser beam density, blind zone height and duration, and vehicle trajectory missing rate and duration. The simulation model can be used to evaluate and identify optimal configurations for different installation scenarios. The proposed model was validated with the 15-min US101 NGSIM (Next Generation SIMulation) dataset, and different configuration settings were simulated and compared. The evaluation results demonstrate the capabilities of the proposed models in planning for optimal roadside LiDAR sensor installation for vehicle detection and tracking.

Keywords
roadside LiDAR, connected and automated vehicles, LiDAR coverage simulation, LiDAR blind zone analysis

Sensor systems in prevailing automated driving system (ADS) and advanced driver assistance system (ADAS) platforms can be classified into systems based on camera and radar only, such as in Tesla and the ADAS packages offered by most consumer auto brands, and light detection and ranging (LiDAR)-based systems used by companies such as Waymo, Uber, and nuTonomy that are testing and operating Level 4 automated vehicle fleets. In most cases, the latter sensor system also includes other sensors such as cameras and radars. However, in many LiDAR-based Level 4+ ADS systems, LiDAR sensors are used to create a high-fidelity, live 3D map of the surrounding environment, including the shapes and locations of surrounding vehicles, pedestrians, road geography, and roadside infrastructure.

A LiDAR sensor determines the distance and three-dimensional (3D) location of a target object by emitting laser beams and calculating the return time of the reflected light photons from the object. When multiple beams are used to scan an area simultaneously, LiDAR sensors can detect many reflecting points from the surface of an object, allowing processing algorithms to determine the location and shape of the object in real time. In recent years, with the increasing demand from the self-driving industry, LiDAR sensing technologies have significantly advanced as regards their functionalities, range, resolution, form factors, and affordability.

1 Department of Civil and Environmental Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ
2 Department of Mathematical Sciences, Rutgers, The State University of New Jersey, Piscataway, NJ

Corresponding Author:
Yi Ge, [email protected]

Compared with their increasing roles in on-board ADAS and ADS platforms, LiDAR sensors have not yet been widely used at roadside infrastructure to support prevailing connected and automated vehicle (CAV) applications. Recently, the challenges facing CAV technologies have led to increased interest in research and testing of roadside LiDAR sensing (1). The current penetration rate of CAV on-board units (OBUs) in general traffic is still extremely low in the existing transportation network, and the elimination of almost half of the 5.9 GHz bandwidth reserved for CAV applications in the recent Federal Communications Commission (FCC) ruling creates even more uncertainty (2). The conventional system design of CAV applications that rely on in-vehicle OBUs to broadcast their locations will not be feasible in the near or medium term. For CAV applications to be functional on Day 1 and ready for users, the integration of high-resolution sensors such as LiDAR, camera, and radar, currently used in ADS and ADAS technologies, becomes extremely important. Those high-resolution sensors can detect the trajectories of vehicles, pedestrians, motorcycles, bicycles, and other moving or static objects on the road. Roadside units (RSUs) can then give every road user a "voice" by announcing their trajectories through basic safety messages, pedestrian safety messages, and traveler information messages sent on their behalf, or simply integrate those sensor data inputs into the decision and control logic of the CAV applications. Although computer vision sensors traditionally have been widely deployed at the roadside, their limitations are also well known. Cameras can be strongly affected by illumination, weather, and other environmental conditions, and many existing roadside camera systems lack the resolution, range, reliability, and edge intelligence to obtain high-resolution trajectory data 24 h a day, seven days a week. LiDAR sensors, on the other hand, can potentially create rich 3D high-resolution situational awareness data to enable the full functionalities of CAV applications without 100% OBU data inputs. Recent exploration of the integration of roadside LiDAR sensors with RSUs includes the CAV test sites at the University of Nevada, Reno (3), Virginia (4), Florida (5), and New Jersey (6).

However, directly adapting LiDAR sensors designed for vehicle-based sensing is not without challenges. LiDAR sensors, similar to cameras or radars, still have "line-of-sight" issues where the occlusions of vehicles can lead to detection blind zones. The factory configuration of the prevailing LiDAR sensors assumes a mounting position on top of a vehicle, much lower than the safe and secure height required of roadside sensors against knockoffs, theft, and vandalization. The beams of some LiDAR sensors can penetrate long distances (e.g., 500 ft). However, the beam density and laser intensity can vary significantly from close-by to distant locations because of the single-origin design of LiDAR sensors. Furthermore, for object detection and tracking, a sufficient number of laser points or rings (circular traces of points) need to be reflected from the surface of an object. Prior exploration of LiDAR sensor quality has focused on general LiDAR sensing patterns against static objects (7) or noise caused by sensor quality and cleanness (8). This paper presents a comprehensive analytic and assessment model for the sensing blind zones of roadside LiDAR sensors in vehicle detection and tracking applications. A sensing blind zone simulation model is developed to assess the dynamic patterns of blind zone occurrence and heights in different areas of a road, and how those patterns change with different vehicle conditions, based on vehicle trajectory and shape inputs from traffic simulation models or next-generation simulation (NGSIM)-like vehicle trajectory datasets. The model outputs create several new visualizations that shed light on the sensing capability and limitations of roadside LiDAR sensors and on optimal sensor configuration strategies.

Literature Review

The literature review consists of three sections: vehicle-based LiDAR data analytics, roadside LiDAR data analytics, and recent studies on factors affecting LiDAR sensor data quality.

Vehicle-based LiDAR Sensing

The use of LiDAR sensors, with their unique capabilities of high-resolution 3D depth sensing, has significantly advanced the self-driving industry. Many existing studies have explored LiDAR-based object/semantic segmentation, among which most used convolutional neural network (CNN)-based methods for segmentation and used the KITTI dataset (9) for evaluation. To address the computational challenge of large LiDAR data volume, Lyu et al. developed a CNN to perform semantic segmentation and evaluated it with the Ford and KITTI datasets (10). The proposed CNN model achieved fast and highly accurate road segmentation using an efficient hardware design. Wu et al. further improved the accuracy and efficiency of object detection with 3D LiDAR data by proposing a SqueezeSeg CNN for the semantic segmentation of road objects (11, 12). The neural network takes projected LiDAR data and generates a point-wise label map refined by a conditional random field. Milioto et al. developed RangeNet++ for fast and accurate real-time LiDAR semantic segmentation with range images transformed from LiDAR data (13).
The SuMa++ model was developed for the LiDAR-based semantic simultaneous localization and mapping (SLAM) application using the surfel-mapping approach, which utilizes the semantic information from LiDAR (14). The raw LiDAR point cloud was first projected to the spherical coordinate system to efficiently process the point cloud, and then back-projected to the three-dimensional Cartesian coordinate system. The semantic labels of the point clouds were used to filter out moving objects. Yin et al. also developed a mobile LiDAR object detection algorithm that converts LiDAR data into spherical coordinates (15). Their method separates ground LiDAR points from moving objects by identifying breaking points of the radial distance curves. VoxelNet developed a generic 3D object detection network to unify feature extraction and bounding box prediction (16). The three-dimensional space was first divided equally into 3D voxels. Then the voxel feature encoding layer was applied to encode each voxel into a descriptive volumetric representation. The learned features were then fed into convolutional middle layers and a region proposal network for object detection.

Roadside LiDAR Sensing

Recently, several research teams have started to explore roadside LiDAR and its use in CAV applications. Unlike vehicle-based LiDAR, roadside LiDAR sensors deal with a relatively stable background, which can help reduce the difficulty of background filtering and further improve the accuracy of vehicle and pedestrian detection. For background filtering, Wu et al. proposed 3D density-statistic-filtering (3D-DSF) based on the spatial distribution of laser points, which can filter up to 99.62% of the background points in their testing scenarios (17). They also mentioned that the performance could still be affected by the traffic volume. Zhao et al. proposed an azimuth-height background filtering method that filtered over 98% of the background points in similar testing scenarios while losing only 1%–2% of the target LiDAR points (18). Lv et al. improved the 3D-DSF algorithm and achieved 99.9% filtering of background points (19). The developed algorithm improves the performance of the earlier algorithm under high traffic volume and is insensitive to stopped vehicles at intersections thanks to its 15-min background model initiation period. However, they pointed out that the proposed algorithm could not exclude haze and snowflakes under severe weather.

For vehicle tracking, Cui et al. proposed a roadside LiDAR-enhanced connected infrastructure for which 98% of the estimated speeds had errors lower than 2 mph (20). It utilized many LiDAR processing procedures, including 3D-DSF-based background filtering (17), density-based spatial clustering of applications with noise (DBSCAN)-based object clustering (21), artificial neural network (ANN)-based vehicle recognition (22), revised grid-based clustering (RGBC)-based lane identification, and global nearest neighbor (GNN)-based vehicle tracking (23). Wu compared speed estimation using the average point with speed estimation using the nearest point, and concluded that the nearest point can better represent the vehicle for speed estimation, with a maximum error of 1.11876 mph (24). Zhang et al. proposed a Euclidean cluster extraction-based clustering method; instead of directly referencing the nearest point, the authors used an unscented Kalman filter (UKF) tracker with the centroid as a reference and achieved a root-mean-square error of 0.22 m/s, which equals 0.492 mph (25). However, their tracking method only estimated the speed of vehicles 5–15 m away from the LiDAR, which significantly limits its applications. Zhao et al. applied a discrete Kalman filter tracking method to track not only vehicles but also pedestrians, using classification results from a backpropagation artificial neural network (BP-ANN)-based classification model (26). Their tracking method achieved an accuracy of 95% within approximately 30 m, with an average absolute speed difference of 1.43 mph. Most roadside LiDAR processing methods still rely on experience to estimate the coverage, but lack a comprehensive assessment of the effective coverage that a LiDAR is capable of providing.

Unlike vehicle-based LiDAR, the installation scenarios of roadside LiDAR are more complicated, and it takes significant effort to find the optimal positions for all the different sites given different road geometry, grades, infrastructure layouts, and other environmental factors. As Table 1 shows, existing research has designed some generic metrics for evaluating LiDAR sensor performance, for example, accuracy (24, 27), range (28), and data density (29). But most of them focus on calibrating the LiDAR itself for high accuracy and long range, instead of optimizing the sensor position and tilting parameters of roadside LiDAR sensors for better coverage and object detection performance. There are commercial simulation models with LiDAR simulation based on the manufacturer configuration and a preset effective range, for example, Blensor (30) and Ondulus LiDAR (31) for ideal optical simulation. However, in field deployment, LiDAR sensor performance can be affected by many factors, such as weather (32–36), cleanness of the lenses (37), surface reflectivity, and other road geometry and infrastructure limitations. Real-world traffic can also cause occlusions that temporarily block vehicle detection, especially toward the farther end of the detection range. LiDAR devices have a limited effective detection range, which depends not only on the distance the laser beam can travel but also on the number of beams. As the distance increases, the laser beams become less dense, and there may not be enough laser beams on an object for detection.

Table 1. Literature on Roadside LiDAR Sensor Configuration and Factors Affecting Data Quality

Type | Quality concerns | Papers
Device performance | Accuracy | Toth et al. (38) (2007); ASPRS (27) (2015)
Device performance | Range | Cooper et al. (28) (2018); this paper
Device performance | Data density | Heidemann (29) (2012); this paper
Physical factors | Weather | Wu et al. (32, 36) (2020a, 2020b); Rasshofer et al. (33) (2011); Hasirlioglu et al. (34, 35) (2016, 2017)
Physical factors | Cleanness of the lenses | Toshniwal (37) (2021)
Effective coverage | Blind zone and blind spot; dynamic lane coverage; vehicle detection capability | This paper

Note: The shaded cells indicate the quality issues considered in this paper.

Prevailing LiDAR sensors have a built-in field-of-view (FOV) design, both horizontally and vertically, to ensure all laser beams go around vehicles, thus creating a small blind zone underneath or behind the sensor while maximizing the density of the beams within meaningful detection sections (e.g., −45 to +30 degrees) (39). For roadside deployment, such vehicle-based FOV configurations are not ideal and can leave significant blind spots underneath the LiDAR sensors; therefore, tilting is needed to reduce the blind spot and take advantage of the FOVs configured for vehicle-based mounting. Based on the specific layout of roads, intersections, and infrastructure, the LiDAR devices are tilted at an appropriate angle to focus the dense close-to-horizon laser beams, designed for vehicle-mount positions, onto the region of interest (ROI) for roadside sensing.

This paper studies the optical geometries related to LiDAR sensing on roadways from a roadside position. A grid-based blind zone analytic model is derived to assess the dynamic characteristics of detection blind zones with respect to different infrastructure configurations and the occlusion dynamics of on-road traffic. The model can be used to assess the spatial-temporal distribution of the depths and sizes of blind zones with vehicle trajectories from field data or simulation models. The proposed model takes into account the impact of 3D position, tilting, vehicle sizes, and the vehicle tracking and reconstruction needs of CAV applications. Several evaluation metrics are introduced to assess the overall and dynamic detection performance for given traffic flow scenarios; they can be used to optimize the LiDAR sensor configuration for roadside infrastructure planning, design, and operations with the LiDAR sensor system.

Methodology

Notations and Definitions

The proposed methodology includes the analysis of the optical characteristics of roadside LiDAR blind zones and numerical methods that can simulate the grid-by-grid blind zone dynamics with respect to live traffic flow. To facilitate further discussion, a notation list is created:

O: the origin of all LiDAR beams, at (x_O = 0, y_O = 0, z_O = h_L).
h_L: the height of the LiDAR.
γ: the tilting angle of the LiDAR sensor.
φ_E: the altitude angle of an edge beam.
φ_i: the altitude angle of laser beam i, where i = 0, ..., I, and the total number of beams emitted from a LiDAR is I + 1. We order the beams from the lower bound of the FOV to the upper bound of the FOV. The altitude angles of the upper and lower bounds of the vertical FOV are φ_I and φ_0, respectively. φ is negative below the horizon, positive above the horizon, and 0 at the horizon.
θ_i: the azimuth angle of laser beam i.
R_spe: the range of a laser beam of a LiDAR as given in its model specifications.
R_eff: the actual range of a laser beam as a result of various environmental variables.
R_BS: the radius of the blind spot beneath the LiDAR.
x_V, y_V: the center coordinates of vehicle V.
z_V: the height of vehicle V.
w_V, l_V: the width and length of vehicle V.
x_B, y_B, z_B: the coordinates of point B, where B is a point on a laser beam.
φ_B: the altitude angle of OB.
V1, V2, V3: the three far-side corner points of vehicle V; V2 is the farthest corner point, and V1 and V3 are the two corners neighboring V2.
E1, E2, E3: the three edge points extended from OV1, OV2, OV3, projected onto the plane at the height of interest (HOI), where z_E = E.
E: the HOI.
r_E: the horizontal distance between O (x_O = 0, y_O = 0) and E (x_E, y_E).
A_BZ(E): the area of the blind zone at HOI E.
S_BZ: the volume of the blind zone.
x_g, y_g: the coordinates of the center of grid g.
U_G: the side length of the grid.
r_g: the distance between (x_O, y_O) and (x_g, y_g).
h_{g,i}: the z coordinate of the intersection between the i-th LiDAR beam and the center of the cuboid above grid g.
x_n, y_n: the center coordinates of vehicle n.
l_n, w_n, h_n: the length, width, and height of vehicle n.
H_v: the average height of the vehicle type.
h_adj: an adjusted value used to filter vehicles of interest.
R: the transformation matrix that transforms the laser beams and grids based on the tilting angle of the LiDAR.
ρ_f: the laser density map at frame f.
m_f: the lowest laser height map at frame f.
TA: the time aggregation of the lowest height map.
F_Total: the total number of frames.
P_{v,l,f}: the percentage of vehicle v in lane l at frame f.
MT_l: the missing trajectory rate in lane l.
P_vm: the percentage threshold for vehicle missing.
f_v: the count of frames in which vehicle v appears in the coverage area.
v_l: the count of vehicles in lane l.
VMTD_l: the vehicle-based missing time distribution.
f_vB: the count of frames in which vehicle v stays in the blind zone.

The optical analysis of the blind zones of LiDAR sensing includes two types of blind areas. The first type is the area directly underneath the LiDAR, caused by the lower limit of the FOV of the LiDAR. The second type is the area that LiDAR laser beams can theoretically reach but that becomes unreachable because of occlusions from live traffic and roadside infrastructure. The following formulations first show the calculation of the blind spot area, then locate the intersections of the vehicle silhouette and the laser beams to obtain the projected boundary and the approximated area of the blind zone. To further simulate the blind zone, the intersection height of each laser beam and grid is calculated. By checking the intersections of the laser beams and the vehicles, the blocked beams are determined. To handle the tilting configuration, a transformation matrix is applied before the calculation of the intersection heights.

LiDAR Blind Spot Underneath the LiDAR and LiDAR Detection Blind Zone Resulting from Occlusions

Figure 1. Dimensions of the blind spot underneath the LiDAR (Vehicle Icons: 40).

LiDAR Blind Spot Underneath the LiDAR. The size of the blind spot on the roadway surface is determined by the height of the LiDAR origin h_L and the tilting or pitch angle γ; Figure 1 shows the blind spot of a LiDAR without tilting. As the roadside LiDAR in this paper is always higher than the vehicles, the beams of interest that may detect vehicles only exist in the green-dashed effective detection zone between the "Blind Spot" and the "Above Horizon."

We define the lowest laser beam as beam 0; as it is below the horizon of the LiDAR sensor, its altitude angle φ_0 has a negative value. We define the highest laser beam as beam I; its altitude angle φ_I is positive as it is above the horizon of the LiDAR origin.

Without tilting, the radius of the blind spot on a level road surface can be calculated as follows,

R_{BS} = \frac{h_L}{\tan(|\phi_0|)}    (1)
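As a quick numerical check of Equation 1, the short sketch below (our own illustration, not code from the paper) computes the blind spot radius of an untilted sensor from its mounting height and the altitude angle of its lowest beam; the 25 ft and −25 degree inputs are example values only.

```python
import math

def blind_spot_radius(h_lidar_ft: float, phi_0_deg: float) -> float:
    """Radius of the ground blind spot for an untilted LiDAR (Equation 1).

    phi_0_deg is the altitude angle of the lowest beam; it is negative
    because the beam points below the horizon, so its magnitude is used.
    """
    return h_lidar_ft / math.tan(math.radians(abs(phi_0_deg)))

# Example: a 25 ft mounting height and a -25 deg lowest beam
# (the lower vertical FOV bound of the sensor cited later in the paper).
print(round(blind_spot_radius(25.0, -25.0), 2))  # about 53.61 ft
```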

Figure 2. Illustrations of Roadside LiDAR Blind Zones Caused by Vehicle Occlusions: (a) Roadside LiDAR Blind Zone Calculation, (b) Four Different Blind Zone Types, and (c) Examples of Four Different Blind Zone Types.

LiDAR Detection Blind Zone Resulting from Occlusions. In this paper, we define blind zones as areas that are theoretically reachable by LiDAR laser beams but are undetectable because of occlusions from vehicles and infrastructure objects, as the shaded polygon V1 E1 E2 E3 V3 V2 shows in Figure 2a. Without vehicle V at (x_V, y_V), the LiDAR would be able to cover as far as R_eff, but the presence of the vehicle blocks some of the laser beams and makes any objects lower than HOI E within the polygon V1 E1 E2 E3 V3 V2 invisible to the LiDAR. Therefore, by definition, vehicle V causes a blind zone that can be represented by polygon V1 E1 E2 E3 V3 V2.

As the research objective is to assess and optimize the sensing capabilities of LiDAR, we define a parameter E as the HOI for the impact analysis of blind zones on object detection. Note that when E = 0, the HOI is the ground. However, most vehicle detection algorithms do not need full detection from the ground up to recognize a vehicle or a pedestrian. In the subsequent evaluation, we adjust the HOI to different levels to assess the actual impact on vehicle detection.

The 2D area of the blind zone on the level plane at the HOI can then be determined by the HOI, the position of the LiDAR, and the dimensions of a vehicle. Take Figure 1 as an example; if the HOI is set to the height of the truck, the blind zone area is 0 because no vehicles in Figure 1 can cause a blind zone at that height. However, if we set the HOI to the height of the coupe, all the vehicles other than the coupes will cause blind zones, and the truck will cause a huge blind zone within which we cannot tell whether there is any lower vehicle behind it. Meanwhile, if the LiDAR is set higher, the blind zone area caused by vehicle occlusion will decrease, at the cost of increasing the blind spot beneath the LiDAR device. Similarly, if the LiDAR is set closer to the vehicles/road, the blind zone area will decrease, while more of the blind spot area will cover the road. If the vehicle at the position of the truck is a lower vehicle instead of a truck, the blind zone area will also be smaller, and there will be a chance that the coupe can be detected and recognized from the LiDAR data.

To calculate the area of the blind zone, given any point B (x_B, y_B, z_B) within R_eff, the horizontal distance from point B to the LiDAR sensor origin can be calculated both as \sqrt{x_B^2 + y_B^2} (Figure 2) and as \frac{h_L - z_B}{-\tan\phi_B} (Figure 1); therefore,

x_B^2 + y_B^2 = \frac{(h_L - z_B)^2}{\tan^2\phi_B}    (2)

where φ is the altitude angle shown in Figure 1; −tan φ is used because φ is negative below the horizon.

Next, we define the vehicle dimensions with respect to the 3D coordinates of the top center of a vehicle V, (x_V, y_V, z_V); then the three far-side corners of the vehicle, V1, V2, and V3, as depicted in Figure 2, can be represented as follows,

\left[ \left(x_V + \tfrac{w_V}{2},\, y_V + \tfrac{l_V}{2},\, z_V\right),\ \left(x_V - \tfrac{w_V}{2},\, y_V + \tfrac{l_V}{2},\, z_V\right),\ \left(x_V + \tfrac{w_V}{2},\, y_V - \tfrac{l_V}{2},\, z_V\right) \right]    (3)

where w_V and l_V are the width and length of vehicle V, respectively.
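To make Equation 3 concrete, the following sketch (our own, with the example coordinates chosen arbitrarily) builds the three far-side corners of a vehicle box from its top-center position and footprint; it assumes x_V and y_V are non-negative so that the absolute values used in Equation 5 below can be dropped.

```python
def far_side_corners(x_v, y_v, z_v, w_v, l_v):
    """Three far-side corners V1, V2, V3 of vehicle V (Equation 3).

    V2 is the farthest corner from the sensor; V1 and V3 are its two
    neighboring corners, all at the vehicle's top height z_v.
    Assumes x_v >= 0 and y_v >= 0 (Equation 5 uses absolute values).
    """
    v1 = (x_v - w_v / 2, y_v + l_v / 2, z_v)
    v2 = (x_v + w_v / 2, y_v + l_v / 2, z_v)
    v3 = (x_v + w_v / 2, y_v - l_v / 2, z_v)
    return v1, v2, v3

# Example: a 6 ft tall, 14 ft x 6 ft vehicle centered at (36, 100) ft
# in the sensor's coordinate frame (values chosen for illustration only).
print(far_side_corners(36.0, 100.0, 6.0, 6.0, 14.0))
```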

We then calculate the locations of the projected points E1, E2, and E3 at the HOI for the laser beams that pass through the three far-side corner points V1, V2, and V3 at the height z_V. To simplify further presentation, these beams are called "edge beams" hereafter.

Combining Equations 2 and 3, the altitude angle of the three edge beams φ_E can be calculated as follows,

\phi_E = -\arctan\!\left( \sqrt{\frac{(h_L - z_B)^2}{x_B^2 + y_B^2}} \right)    (4)

where the three sets of (x_B, y_B, z_B) are

\begin{cases}
x_{V1} = |x_V| - \frac{w_V}{2}, & y_{V1} = |y_V| + \frac{l_V}{2}, & z_{V1} = z_V \\
x_{V2} = |x_V| + \frac{w_V}{2}, & y_{V2} = |y_V| + \frac{l_V}{2}, & z_{V2} = z_V \\
x_{V3} = |x_V| + \frac{w_V}{2}, & y_{V3} = |y_V| - \frac{l_V}{2}, & z_{V3} = z_V
\end{cases}    (5)

Match the above φ_E with the altitude angle φ of the closest actual LiDAR beam; then the distance r_E on the level plane at the HOI, z = E, can be calculated as follows,

r_E = \frac{h_L - E}{-\tan\phi}    (6)

Given the azimuth angle θ of a laser beam, the x-y coordinates of a projected edge point E on the HOI plane z = E can be calculated as follows,

x_E = r_E \sin\theta, \quad y_E = r_E \cos\theta    (7)

With the coordinates of all the corner points determined, the area of the blind zone A_BZ(E) at each HOI level E can be calculated by subtracting the areas of four triangles, the vehicle, and one or two rectangles from the big rectangle illustrated by the blue-dashed boxes, depending on the blind zone type in Figure 2b. Four different types are summarized for the blind zone area calculation. For each type, the calculation differs slightly in the deducted rectangles. The distribution of E1, E2, E3 may be counterintuitive because of the discreteness of the φ_i, as Figure 2c shows; classifying them into four types covers all the scenarios. To simplify the calculation, we define Δx_MN(E) = |x_M(E) − x_N(E)| and, similarly, Δy_MN(E) = |y_M(E) − y_N(E)|, where M and N are any of the projected points on the plane z = E.

\begin{aligned}
A_{BZ}(E) ={} & \left| \big(\max(x_{E2}(E), x_{E3}(E)) - x_{V1}(E)\big)\big(\max(y_{E1}(E), y_{E2}(E)) - \min(y_{V3}(E), y_{E3}(E))\big) \right| \\
& - \left|\tfrac{1}{2}\,\Delta y_{E2E3}(E)\,\Delta x_{E2E3}(E)\right| - \left|\tfrac{1}{2}\,\Delta y_{E3V3}(E)\,\Delta x_{E3V3}(E)\right| - \left|\tfrac{1}{2}\,\Delta y_{E1E2}(E)\,\Delta x_{E1E2}(E)\right| \\
& - \left|\Delta y_{V1V3}(E)\,\Delta x_{V1V3}(E)\right| - \left|\tfrac{1}{2}\,\Delta y_{E1V1}(E)\,\Delta x_{E1V1}(E)\right| \\
& - \begin{cases}
|\Delta y_{E2E1}(E)\,\Delta x_{E2E3}(E)|, & |x_{E3}(E)| > |x_{E2}(E)| \text{ and } |y_{E1}(E)| \ge |y_{E2}(E)| & \text{(i)} \\
|\Delta y_{E2E1}(E)\,\Delta x_{E1V1}(E)|, & |x_{E3}| > |x_{E2}| \text{ and } |y_{E1}| < |y_{E2}| & \text{(ii)} \\
|\Delta y_{E3V3}(E)\,\Delta x_{E3E2}(E)|, & |x_{E3}| \le |x_{E2}| \text{ and } |y_{E1}| \ge |y_{E2}| & \text{(iii)} \\
\text{(ii)} + \text{(iii)}, & |x_{E3}| \le |x_{E2}| \text{ and } |y_{E1}| < |y_{E2}| & \text{(iv)}
\end{cases}
\end{aligned}    (8)

Furthermore, if there are other laser beams between E1 and E2 or between E2 and E3, more corners can be calculated to create a more detailed blind zone pattern and adjust the area calculation. The blind zone volume can then be represented as the integral of A_BZ over the HOI E,

S_{BZ} = \int_0^{h_L} A_{BZ}(E)\, dE    (9)

If multiple vehicles cause overlapping blind zones, the final blind zone can be taken as the union of the blind zones caused by all the vehicles. The invisible vehicle parts will already be behind the object surface visible to the LiDAR within the blind zone and will have no major impact on the shape and depth of the blind zone. In this paper, the overlap area is kept to reflect the blind zone impact caused by the LiDAR installation/configuration.
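Putting Equations 4, 6, and 7 together, a minimal sketch of the edge-point projection might look as follows; the beam altitude angles are hypothetical placeholders rather than a real sensor specification, and the nearest-beam matching is our simplification of the matching step described above.

```python
import math

def project_corner_to_hoi(corner, h_lidar, hoi, beam_altitudes_deg):
    """Project a far-side vehicle corner onto the HOI plane (Eqs. 4, 6, 7).

    corner: (x, y, z) of a corner point V1/V2/V3.
    beam_altitudes_deg: altitude angles of the sensor's beams in degrees
    (negative below the horizon); the edge beam is matched to the closest one.
    """
    x, y, z = corner
    # Equation 4: altitude angle of the edge beam through the corner.
    phi_e = -math.degrees(math.atan(math.sqrt((h_lidar - z) ** 2 / (x**2 + y**2))))
    # Match to the closest actual beam.
    phi = min(beam_altitudes_deg, key=lambda a: abs(a - phi_e))
    # Equation 6: horizontal distance to the projected point on plane z = HOI.
    r_e = (h_lidar - hoi) / (-math.tan(math.radians(phi)))
    # Equation 7: planar coordinates, with the azimuth measured from the +y axis.
    theta = math.atan2(x, y)
    return r_e * math.sin(theta), r_e * math.cos(theta)

# Example with a hypothetical 8-beam sensor and one corner of a 6 ft tall vehicle.
beams = [-25.0, -19.0, -14.0, -10.0, -6.5, -4.0, -2.0, -0.5]
print(project_corner_to_hoi((39.0, 107.0, 6.0), 25.0, 4.0, beams))
```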

Numerical Methods for LiDAR Blind Zone Pattern Approximation

The analytic formula for vehicle-based blind zone area determination is accurate and continuous, but it is not efficient when calculated against live traffic flow with many vehicles present at the same time. To analyze the blind zone distribution under different traffic conditions, a grid-based numerical estimation method is developed to simulate and determine the blind zone distributions given a pre-determined HOI, for example, 4–6 ft for vehicle and pedestrian detection. The proposed numerical method takes several key steps, including initial gridding, 3D laser beam and vehicle model construction, occlusion detection, and blind zone height determination.

Initial Gridding and Grid Laser Beam Scanning Height Modeling. First, the entire surface of a roadway is divided into G_X × G_Y square grids with a side length of U_G. Given the altitude angles of all the lasers φ_i (i = 1, ..., I) and the height of the LiDAR origin h_L, the exact scanning point of each laser beam over a grid g can then be calculated. For any grid g at the g_x-th row and g_y-th column, the coordinates of the grid center can be calculated as follows,

x_g = (g_x + 0.5) \times U_G, \quad y_g = (g_y + 0.5) \times U_G    (10)

The distance between the grid center and the LiDAR point on the HOI plane, r_g, is given as follows,

r_g = \sqrt{x_g^2 + y_g^2}    (11)

Then the z coordinate of the intersection between the i-th LiDAR beam and the center of the cuboid above grid g can be calculated as

h_{g,i} = h_L + r_g \times \tan\phi_i    (12)

where h_{g,i} is the z coordinate of the intersection of laser i and grid g; φ_i has a negative value for beams below the horizon and a positive value for those above the horizon.

By calculating h_{g,i} for every grid g with respect to every laser beam i, a matrix H_{G_x \times G_y \times I} can be created to store the intersecting heights on each grid that can be scanned by all laser beams from the LiDAR.
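A compact NumPy sketch of this gridding step (Equations 10 to 12) is given below; the grid dimensions and beam list are placeholders, and tilting is ignored here since it is handled separately by Equations 36 and 37 later in the paper.

```python
import numpy as np

def scanning_height_matrix(gx_count, gy_count, grid_size_ft, h_lidar_ft, beam_altitudes_deg):
    """Matrix H[gx, gy, i] of beam heights above each grid center (Eqs. 10-12)."""
    gx = np.arange(gx_count)
    gy = np.arange(gy_count)
    # Equation 10: grid-center coordinates.
    x_g = (gx + 0.5) * grid_size_ft
    y_g = (gy + 0.5) * grid_size_ft
    # Equation 11: horizontal distance from the LiDAR origin to each grid center.
    r_g = np.sqrt(x_g[:, None] ** 2 + y_g[None, :] ** 2)
    # Equation 12: height of beam i above grid g (phi_i < 0 below the horizon).
    tan_phi = np.tan(np.radians(np.asarray(beam_altitudes_deg)))
    return h_lidar_ft + r_g[:, :, None] * tan_phi[None, None, :]

# Example: a 192 ft x 1,000 ft area on a 1 ft grid, a 25 ft mounting height,
# and a hypothetical handful of below-horizon beams.
H = scanning_height_matrix(192, 1000, 1.0, 25.0, [-25.0, -15.0, -8.0, -3.0, -1.0])
print(H.shape)  # (192, 1000, 5)
```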

Vehicle Dimensions and Boundaries. The dimensions of a vehicle can be described by a reference point, the front center point of a vehicle n (x_n, y_n), together with the length, width, and height of the vehicle l_n, w_n, and h_n. To simplify the implementation of the vehicles, we only move vehicles along the center of the lane, without heading changes for lane-changing maneuvers. This simplifies the blind zone calculation; given that the duration of a lane change is often short, its impact on the blind zone distribution in the freeway scenario used in this paper is limited. The ground is also assumed to be level, and grades will be considered in future work. To simplify the derivation, we assume the traveling direction is along the y-axis. The following boundary lines can be obtained for each vehicle at any HOI plane:

\begin{cases}
\text{Left boundary line: } & x = x_n - \frac{w_n}{2} \\
\text{Right boundary line: } & x = x_n + \frac{w_n}{2} \\
\text{Front boundary line: } & y = y_n \\
\text{Rear boundary line: } & y = y_n - l_n
\end{cases}    (13)

Figure 3. VOI Determination with Tilted LiDAR: (a) Top View, and (b) Side View (Truck Icon: 41).

The Vehicles-of-Interest (VOI) Filtering for Occlusion Detection Preparation. A tilted LiDAR has a slightly different blind spot, as Figure 3b shows. This step detects vehicles that may occlude any of the scanning laser beams for a particular grid. To save computing resources, a distance-based filter is first applied to remove the vehicles far away from the line segment between the grid g and the LiDAR laser origin. To obtain the vehicles of interest VOI_g for grid g, all the vehicles close to the horizontal line between the LiDAR and grid g are detected. Given the coordinates of the center of grid g, (x_g, y_g), from Equation 10, the line formula can be calculated easily because the LiDAR is positioned at the origin of the x-y coordinates:

y = \frac{y_g}{x_g} x    (14)

where x is between 0 and x_g. We can then calculate the distance d between the front center point of the vehicle and the line; only the vehicles that can block the line segment in the top view of Figure 3a are retained for further filtering, using Equation 15,

d = \frac{\left| \frac{y_g}{x_g} x_n - y_n \right|}{\sqrt{\left(\frac{y_g}{x_g}\right)^2 + 1}} < \sqrt{\left(\frac{w_n}{2}\right)^2 + l_n^2}    (15)

After filtering by the distance d from the line segment, only vehicles with enough height to block laser φ_i based on the distance r from the LiDAR are retained. Using the average vehicle height H_v of the corresponding vehicle type in place of h_n, because individual vehicle heights h_n are not available, it follows that

r = \sqrt{x_n^2 + y_n^2}    (16)

H = h_L + r \times \tan\phi_i    (17)

If H_v + h_adj > H, which means that the vehicle's top is definitely higher than the beam, then the vehicle is kept in VOI_g. The adjustment h_adj is needed because r is calculated from the front center rather than from the real intersection of the laser and the object; it ensures appropriate handling of the scenario in which the vehicle's front center is lower than the laser while the rear or side may still block the laser, as Figure 3b shows. Obviously, the distance r_int between the intersection and the LiDAR is smaller than r + \sqrt{(w_n/2)^2 + l_n^2}; combined with Equation 17, as long as h_adj is equal to or larger than \left| \sqrt{(w_n/2)^2 + l_n^2} \times \tan\phi_i \right|, the VOI set will catch all the necessary vehicles. h_adj can either be set for each vehicle in real time or be set to the maximum value based on historical data. The former takes longer for VOI filtering, whereas the latter tends to admit more vehicles than necessary for occlusion detection.
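The vehicles-of-interest filter (Equations 14 to 17) can be sketched as follows; this reflects our reading of the filter, with the type-average height H_v and the adjustment h_adj passed in directly rather than looked up from vehicle-type tables.

```python
import math

def in_voi(x_n, y_n, w_n, l_n, H_v, h_adj, x_g, y_g, h_lidar, phi_i_deg):
    """Keep vehicle n as a candidate occluder for grid g (Eqs. 15-17)."""
    slope = y_g / x_g  # Equation 14: line from the LiDAR origin to grid g
    # Equation 15: perpendicular distance from the vehicle's front center to that line.
    d = abs(slope * x_n - y_n) / math.sqrt(slope ** 2 + 1)
    if d >= math.sqrt((w_n / 2) ** 2 + l_n ** 2):
        return False
    # Equations 16-17: beam height at the vehicle's front-center distance.
    r = math.sqrt(x_n ** 2 + y_n ** 2)
    beam_height = h_lidar + r * math.tan(math.radians(phi_i_deg))
    # Keep the vehicle if its type-average height plus the adjustment clears
    # the beam height, so some part of its body may block the beam.
    return H_v + h_adj > beam_height

# Example: a 5.5 ft tall sedan near the LiDAR-to-grid segment and a -14 deg beam.
print(in_voi(10.0, 80.0, 6.0, 15.0, 5.5, 1.0, 35.0, 300.0, 23.0, -14.0))  # True
```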

Grid-Based Vehicle Occlusion Detection Algorithm. To further detect vehicle-laser-beam occlusion, the line formula for the laser must also be obtained. The laser beam crosses both (0, 0, h_L) and (x_g, y_g, h_{g,i}). Using the 3D line equation based on two points (x_1, y_1, z_1) and (x_2, y_2, z_2), that is, \frac{x - x_1}{x_2 - x_1} = \frac{y - y_1}{y_2 - y_1} = \frac{z - z_1}{z_2 - z_1}, the formula of laser beam i scanning grid g can be written as follows,

\frac{x}{x_g} = \frac{y}{y_g} = \frac{z - h_L}{h_{g,i} - h_L}    (18)

A two-step logic is introduced to detect occlusions based on the relationship between the position of the projected end point of a laser beam on the ground and the bottom and side walls of a vehicle.

Figure 4. Occlusion Detection Scenarios: (a) Grid between LiDAR and Vehicle, and (b) Grid behind Vehicle, Laser ends outside.

Figure 4 illustrates two different scenarios for occlusion detection. In scenario (a), the current grid is between the LiDAR and the vehicle. There is no vehicle between the LiDAR and the grid, so the laser is not blocked. In scenario (b), the grid is behind the vehicle, and the laser is blocked by the vehicle because the laser height in the current grid is in the blind zone. Based on these two scenarios, it can be summarized that the beam is blocked only if the laser goes through any of the vertical sides of the vehicle while not ending at the bottom of the vehicle.

For each grid centered at (x_g, y_g), look up the scanning point (x_g, y_g, h_{g,i}) at the grid for each laser beam i = 1, ..., I, from low to high altitude angles. For each scanning point, check every vehicle n identified in the VOI set of the current grid.

The following two-step logic is then applied to each VOI vehicle n, based on the relative position of the projected ground point of laser beam i with respect to the bottom and side walls of the rectangular prism of vehicle n.

Step 0 Vehicle and Grid Overlap Check: If the grid is part of the bottom area of a vehicle, report no occlusion. In this case, laser beams will be able to reach the vehicle and therefore experience no occlusion. The conditions can be written as follows,

\begin{cases}
x_n - \frac{w_n}{2} < x_g < x_n + \frac{w_n}{2} \\
y_n - \frac{l_n}{2} < y_g < y_n + \frac{l_n}{2}
\end{cases}    (19)

If not, go to Step 1 for further checking.

Step 1 Vehicle Bottom Landing Point Check: If the projected ground point of the current laser beam i lands inside the bottom rectangle of vehicle n on the ground, then the current beam has a direct line of sight toward vehicle n and no blind zone is generated. More specifically, given the current beam scanning point above grid g at (x_g, y_g, h_{g,i}), the ground point of the current beam i can be located as follows based on Equation 18,

x_{g,i} = x_g \left( \frac{h_L}{h_L - h_{g,i}} \right), \quad y_{g,i} = y_g \left( \frac{h_L}{h_L - h_{g,i}} \right)    (20)

If the projected ground point (x_{g,i}, y_{g,i}) is within the four boundaries of the bottom of the target vehicle, then the laser beam can detect a surface point on the vehicle and does not cause a blind zone for the current grid. This rule can be written as follows: if the coordinates of the projected ground point, (x_{g,i}, y_{g,i}), satisfy the following inequalities,

\begin{cases}
x_n - \frac{w_n}{2} < x_{g,i} < x_n + \frac{w_n}{2} \\
y_n - \frac{l_n}{2} < y_{g,i} < y_n + \frac{l_n}{2}
\end{cases}    (21)

then the current grid has no occlusion. If not, go to Step 2.

Step 2 Vehicle Sidewall Intersection Check: If the projected end point of the current laser beam i does not land inside the bottom of vehicle n, then any intersections with the side walls of vehicle n will cause occlusions and blind zones at the current grid g. Given the formulas of the four boundaries, the intersection points with the side walls can also be calculated from Equation 18. The detailed formulas are listed in Table 2.

Table 2. The Crossing Points of a Laser Beam i Scanning (x_g, y_g, h_{g,i}) through the Sidewalls of Vehicle n

Wall | x | y | z
Left | x_n - w_n/2 | y_g (x_n - w_n/2)/x_g | ((x_n - w_n/2)/x_g)(h_{g,i} - h_L) + h_L
Right | x_n + w_n/2 | y_g (x_n + w_n/2)/x_g | ((x_n + w_n/2)/x_g)(h_{g,i} - h_L) + h_L
Front | x_g y_n/y_g | y_n | (y_n/y_g)(h_{g,i} - h_L) + h_L
Rear | x_g (y_n - l_n)/y_g | y_n - l_n | ((y_n - l_n)/y_g)(h_{g,i} - h_L) + h_L

After obtaining the coordinates of the intersections from Table 2, the occlusion detection logic checks whether the z coordinate has a value below the targeted height h_n, as follows. For the left and right boundaries, if the intersection satisfies y_n - l_n/2 < y < y_n + l_n/2 and 0 < z < h_n, then an occlusion occurs at grid g attributable to vehicle n with laser beam i. For the front and rear boundaries, if the intersection satisfies x_n - w_n/2 < x < x_n + w_n/2 and 0 < z < h_n, then an occlusion occurs at grid g attributable to vehicle n with laser beam i. Otherwise, no blind zone is detected.

Once we iterate over all the grids, all the laser beam positions scanned at each grid, and the corresponding VOI vehicle set, the scanning height of the lowest unblocked laser beam is recorded for each grid. In extreme cases, when no unblocked laser beams are found, the unblocked scanning height is set to the maximum value, for example, 14 ft.
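A simplified sketch of the two-step occlusion test (Equations 19 to 21 together with the sidewall intersections of Table 2) is shown below for one beam and one candidate vehicle; it is our interpretation of the logic above, not the authors' implementation.

```python
def beam_blocked_by_vehicle(x_g, y_g, h_gi, h_lidar, x_n, y_n, w_n, l_n, h_n):
    """Occlusion test for laser beam i over grid g against vehicle n.

    The beam runs from (0, 0, h_lidar) to (x_g, y_g, h_gi); it is blocked if
    it crosses a vertical side wall of the vehicle box below the vehicle
    height h_n without landing on the vehicle's footprint.
    """
    in_x = lambda x: x_n - w_n / 2 < x < x_n + w_n / 2
    in_y = lambda y: y_n - l_n / 2 < y < y_n + l_n / 2
    # Step 0: the grid itself lies under the vehicle, so no occlusion is reported.
    if in_x(x_g) and in_y(y_g):
        return False
    # Step 1: ground landing point of the beam (Equation 20) inside the footprint.
    scale = h_lidar / (h_lidar - h_gi)
    if in_x(x_g * scale) and in_y(y_g * scale):
        return False
    # Step 2: intersections with the four side walls (Table 2); a hit below h_n blocks.
    for xw in (x_n - w_n / 2, x_n + w_n / 2):        # left / right walls
        t = xw / x_g
        y, z = y_g * t, (h_gi - h_lidar) * t + h_lidar
        if in_y(y) and 0 < z < h_n:
            return True
    for yw in (y_n, y_n - l_n):                      # front / rear walls
        t = yw / y_g
        x, z = x_g * t, (h_gi - h_lidar) * t + h_lidar
        if in_x(x) and 0 < z < h_n:
            return True
    return False

# Example: a steep beam whose nominal scanning height at a grid 300 ft away is
# below ground, with a 6 ft tall vehicle between the LiDAR and the grid.
print(beam_blocked_by_vehicle(35.0, 300.0, -52.3, 23.0, 10.0, 80.0, 6.0, 15.0, 6.0))  # True
```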
Treatment for Tilted Installation of LiDAR Sensors

Tilting LiDAR sensors leads to changes in the spatial distribution of the laser beams. The main change is that the surface swept by a laser beam scanning 360 degrees turns from a circular cone into an oblique circular cone. This requires some adjustments to Equation 12 for calculating the scanning height above each grid for each laser beam. The 3D equation for the shape of the scanning cone of a laser beam without tilting is as follows,

x^2 + y^2 = \left( \frac{z}{\tan\phi_i} \right)^2    (22)

Transform Equation 22 into Equation 23 with the 3D polar coordinate system as follows,

\begin{cases}
x = r\cos\theta \\
y = r\sin\theta \\
z = r\tan\phi_i
\end{cases}, \quad (r > 0,\ -\pi < \theta \le \pi,\ -25^\circ < \phi_i < 15^\circ)    (23)

where the vertical FOV range −25° < φ_i < 15° is the specification of the Velodyne Alpha Prime sensor.

After tilting by γ, a transformation matrix can be generated as follows,

R = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & \sin\gamma \\ 0 & \sin\gamma & \cos\gamma \end{bmatrix}    (24)

Applying the transformation matrix to the coordinates,

\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = R \begin{bmatrix} x \\ y \\ z \end{bmatrix}    (25)

the scanning cone of laser beam i becomes

\begin{cases}
x' = r\cos\theta \\
y' = y\cos\gamma + z\sin\gamma = r\sin\theta\cos\gamma + r\tan\phi_i\sin\gamma \\
z' = y\sin\gamma + z\cos\gamma = r\sin\theta\sin\gamma + r\tan\phi_i\cos\gamma
\end{cases}    (26)

Combining with the coordinates of the center of grid g, (x_g, y_g), from Equation 10, it follows that

\begin{cases}
r\cos\theta = x_g \\
r\sin\theta\cos\gamma + r\tan\phi_i\sin\gamma = y_g
\end{cases}    (27)

r = \frac{x_g}{\cos\theta} = \frac{y_g}{\sin\theta\cos\gamma + \tan\phi_i\sin\gamma}    (28)

With the two equations in Equation 27 consisting of known constants and the two variables r and θ, the system can be solved as follows. Let A = \frac{x_g^2}{y_g^2} and B = \tan\phi_i\sin\gamma, both of which consist entirely of known values, where x_g, y_g are the grid-center coordinates, φ_i is the altitude angle of laser i, and γ is the tilting angle.

A = \frac{1 - \sin^2\theta}{(\sin\theta\cos\gamma + B)^2} = \frac{1 - \sin^2\theta}{\cos^2\gamma\sin^2\theta + 2B\cos\gamma\sin\theta + B^2}    (29)

A\cos^2\gamma\sin^2\theta + 2AB\cos\gamma\sin\theta + AB^2 = 1 - \sin^2\theta    (30)

(A\cos^2\gamma + 1)\sin^2\theta + 2AB\cos\gamma\sin\theta + AB^2 - 1 = 0    (31)

Let a = A\cos^2\gamma + 1, b = 2AB\cos\gamma, and c = AB^2 - 1; then Equation 32 gives the discriminant used to obtain the solution,

D = b^2 - 4ac    (32)

If D < 0, no laser beam reaches the current (x_g, y_g); skip to the next grid. Under the assumption that the tilting angle γ is smaller than 65°, D will always be greater than 0. If D ≥ 0, continue to calculate the following:

\sin\theta = \frac{-b \pm \sqrt{D}}{2a}    (33)

r = \sqrt{\frac{x_g^2}{1 - \sin^2\theta}}, \quad (r > 0 \text{ by definition})    (34)

z' = r\sin\theta\sin\gamma + r\tan\phi_i\cos\gamma    (35)

If y_g < 0,

h_{g,i} = \min(z') + h_L    (36)

else,

h_{g,i} = \max(z') + h_L    (37)

After replacing Equation 12 with Equations 36 and 37, the rest of the blind zone detection algorithm remains the same for tilted LiDAR sensors.
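Under our reading of the reconstructed signs in Equations 31 to 37, the tilted scanning height could be computed as sketched below; because the extraction garbles several operators, the sign conventions here should be treated as an assumption rather than the paper's exact formulation.

```python
import math

def tilted_scanning_height(x_g, y_g, phi_i_deg, gamma_deg, h_lidar):
    """Scanning height of beam i over grid g for a tilted sensor (Eqs. 31-37).

    Solves the quadratic in sin(theta), recovers r from x_g, evaluates z', and
    keeps the minimum or maximum root depending on the sign of y_g, following
    our reconstruction. Returns None when the beam cannot reach the grid.
    """
    tan_phi = math.tan(math.radians(phi_i_deg))
    cos_g, sin_g = math.cos(math.radians(gamma_deg)), math.sin(math.radians(gamma_deg))
    A = x_g ** 2 / y_g ** 2
    B = tan_phi * sin_g
    a = A * cos_g ** 2 + 1
    b = 2 * A * B * cos_g
    c = A * B ** 2 - 1
    D = b ** 2 - 4 * a * c                              # Equation 32
    if D < 0:
        return None
    heights = []
    for sign in (+1, -1):
        sin_t = (-b + sign * math.sqrt(D)) / (2 * a)    # Equation 33
        if abs(sin_t) >= 1:
            continue
        r = math.sqrt(x_g ** 2 / (1 - sin_t ** 2))      # Equation 34
        z = r * sin_t * sin_g + r * tan_phi * cos_g     # Equation 35 (assumed signs)
        heights.append(z + h_lidar)
    if not heights:
        return None
    return min(heights) if y_g < 0 else max(heights)    # Equations 36-37

# Example: a -10 deg beam, 5 deg tilt, 23 ft mounting height, grid at (35, 300) ft.
print(tilted_scanning_height(35.0, 300.0, -10.0, 5.0, 23.0))  # about -4.4 under these assumptions
```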
Experiment Design

In the experimental design, the proposed blind zone detection model is implemented with vehicle trajectory data from the NGSIM US101 dataset, consisting of 2,052 unique vehicles and a total of 10,013 frames. The data fields used as the simulation inputs include location information (local_x, local_y) and shape information (length, width, and type). Vehicle type information from the NGSIM dataset was used to infer vehicle heights based on the typical height for the corresponding vehicle type. The first vehicle entered the coverage area at the 330th frame, and the last vehicle left at the 9,920th frame. A warm-up period of 1,000 frames is used to allow the vehicles to fully distribute over the entire simulated roadway. The 1,001st to 8,200th frames are chosen to represent the real-world traffic conditions within 12 min. The proposed numerical methods are deployed over grids of 1 ft (0.305 m) × 1 ft (0.305 m). We assume the LiDAR sensor is located at the mid-block of the roadway, to the right side of the traveling lanes. Table 3 lists the factors and their sensitivity effects on sensing coverage and accuracy. This paper adopted different tilting angles, installation heights, and distances from traffic, while keeping the trajectories constant, to optimize the installation of the LiDAR devices.

Table 3. Factors and Sensitivity Effects

Categories | Factors | Sensitivity effects
LiDAR | Tilting angle | By tilting the sensor, the blind spot can be shifted to a non-region-of-interest (ROI) area, and the densest beams can be focused onto the ROI area.
LiDAR | Installation height | Increasing the installation height increases the blind spot area directly beneath the LiDAR if there is no tilting angle. It can reduce the impact of occlusion compared with a low installation height.
LiDAR | Distance from traffic | When the distance from traffic increases, the line-of-sight angle can decrease and may cause more severe occlusion and blind zones. The increase also reduces the detection range.
Vehicle | Length/width | Given the same height and relative position to the LiDAR, the longer/wider the vehicle is, the bigger and more lingering the blind zone.
Vehicle | Height | Taller vehicles cause bigger and higher blind zones at the same spot than lower vehicles.
Vehicle | Speed | For the same vehicle traveling at the same location, the faster the vehicle moves, the shorter the blind zone lasts.
Vehicle | Density | Denser traffic usually has more severe occlusions that can lead to larger and longer-lasting blind zones.

Blind Zone Impact Evaluation Metrics

To further assess the impact of the LiDAR configuration on vehicle detection, the design of the metrics takes into account the laser beam density, the ceiling of the blind zones, and the frequency and duration of blind zones.

Numbers of Effective Lasers in Each Grid. For each frame f (0.1-s snapshots in the NGSIM dataset), the laser beam density map ρ_f is constructed by calculating the beam density in each grid g as follows:

\rho_{f,g_x,g_y} = \frac{\sum_i \left[\, h_{i,g,h_L} < E \text{ and beam } i \text{ is not occluded} \,\right]}{E}    (38)

where the laser beam density map ρ_f is a 2D array that stores the density data in frame f; g_x and g_y are the grid indices; [C] is a counter function with [C] = 1 if the clause C is satisfied and [C] = 0 otherwise; and E is the HOI. In this study, we focus on pedestrians and sedans/SUVs; the HOIs were set to be between 1 ft and 7 ft, so the HOI range is E = 7 − 1 = 6 ft. Effective laser beams are beams that are not occluded.

Lowest Effective Laser Height in Each Grid. For each configuration, a lowest laser height map can be generated. Similar to the laser beam density map, the lowest laser height map m_f can be defined as:

m_{f,g_x,g_y} = \min\left( H_{G_x \times G_y \times I} \right)    (39)

where \min(H_{G_x \times G_y \times I}) is the minimum laser height in each grid (g_x, g_y).

Temporal Average of Grid-based Lowest Effective Laser Height. This is the time aggregation of the lowest height map, which shows the grid-based average blind zone height over time. It reflects the overall performance of the LiDAR covering the area and can be calculated as

TA_{g_x,g_y} = \frac{\sum_f m_f}{F_{Total}}    (40)

where F_Total is the total number of frames.
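A minimal sketch of the lowest-laser-height map and its temporal average (Equations 39 and 40) is given below, using placeholder arrays in place of the real scanning-height matrix and per-frame occlusion results.

```python
import numpy as np

def lowest_height_map(H, occluded):
    """Lowest effective laser height per grid for one frame (Equation 39).

    H: (Gx, Gy, I) scanning heights; occluded: boolean mask of the same shape
    marking beams blocked by vehicles in this frame. Grids with no effective
    beam fall back to 14 ft, the cap used in the paper.
    """
    effective = np.where(occluded, np.inf, H)
    m = effective.min(axis=2)
    return np.where(np.isfinite(m), m, 14.0)

def temporal_average(lowest_maps):
    """Time aggregation of the lowest height maps (Equation 40)."""
    return np.mean(np.stack(lowest_maps, axis=0), axis=0)

# Example with random placeholder data: 3 frames over a 10 x 20 grid, 5 beams.
rng = np.random.default_rng(0)
H = rng.uniform(0.0, 14.0, size=(10, 20, 5))
frames = [lowest_height_map(H, rng.random((10, 20, 5)) < 0.3) for _ in range(3)]
print(temporal_average(frames).shape)  # (10, 20)
```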

Lane-by-Lane Missing Trajectory Rate Histogram. This metric assesses the missing percentage of vehicle trajectories where they fall onto blind zone grids. The percentage is then aggregated by frames and lanes. For each individual lane l, the aggregated missing trajectory rate is the following,

MT_l = \frac{\sum_v \frac{\sum_f P_{v,l,f}}{f_v}}{v_l}    (41)

where f_v is the count of frames in which vehicle v appears in the coverage area, P_{v,l,f} is the percentage of vehicle v in lane l at frame f, v_l is the count of vehicles in lane l, and MT_l is the missing trajectory rate in lane l.

Lane-by-Lane Vehicle Missing Time Distribution. This metric assesses the percentage of time during which a vehicle trajectory is traveling over blind zone grids. The percentage is then aggregated by frames and by lanes. The missing time distribution is calculated as follows,

VMTD_l = \frac{\sum_v \frac{\mathrm{sum}(f_{vB})}{\mathrm{sum}(f_v)}}{v_l}    (42)

where f_vB is the count of frames during which vehicle v stays in the blind zone, f_v is the count of frames in which vehicle v appears in the coverage area, and v_l is the count of vehicles in lane l. The percentage threshold P_vm for vehicle missing is set to 15, 50, and 85, separately.

Grid-based Blind Zone Duration Map. Different from the average lowest laser height map, the time-in-blind-zone map focuses only on the time when the height map exceeds a preset threshold for the height of the blind zone, h_B. It is calculated as the percentage of the total time during which a grid is a blind zone. The grid is considered a blind zone when h_{g,f} < h_B, where h_{g,f} is the lowest effective laser height in grid g at frame f, and h_B is the threshold value for the blind zone. The threshold h_B needs to be adjusted based on the height of pedestrians and moving objects that may collide with vehicles on the target roadway or intersection. In the current experimental design, given that this is a freeway environment and there are no pedestrians in the NGSIM US101 dataset, we set 4 ft, 6 ft, and 14 ft as the threshold h_B separately; 4 ft is the lowest height of most everyday cars. If the lowest laser height is within 4 ft, almost all the vehicles appearing in the coverage will be found in the LiDAR data. If the lowest laser height is over 6 ft, the majority of the vehicles are missing, except for some of the higher-than-average SUVs, pickups, and trucks (41). If the lowest laser height is over 14 ft, the LiDAR can barely find any vehicles.

TIBZ_{G_x,G_y} = \frac{\sum f_{B_{G_x,G_y}}}{F_{Total}}    (43)

where TIBZ_{G_x,G_y} is the time in the blind zone at grid (G_x, G_y), and f_{B_{G_x,G_y}} is a frame at which grid (G_x, G_y) is in the blind zone.

Maximum Consecutive Blind Zone Duration Map. Similar to the grid-based time-in-blind-zone map, the maximum consecutive blind zone duration map focuses on the consecutive time instead of the total time. Consecutive loss of vehicle detection can have a significant impact on the detection and tracking capabilities of LiDAR sensing algorithms. This consecutive blind zone duration map can reveal whether an acceptable consecutive vehicle missing time can be achieved to allow effective tracking and inference of vehicle positions based on their prior driving behavior.
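The lane-by-lane missing trajectory rate of Equation 41 can be sketched as below; the record format is our own convenience, with P_{v,l,f} supplied per vehicle-frame as a precomputed fraction.

```python
from collections import defaultdict

def missing_trajectory_rate(records):
    """Lane-by-lane missing trajectory rate (Equation 41).

    records: iterable of (vehicle_id, lane, missing_fraction) tuples, one per
    frame, where missing_fraction is the share of the vehicle's footprint that
    falls on blind zone grids in that frame (P_{v,l,f}).
    """
    per_vehicle = defaultdict(list)            # (lane, vehicle) -> per-frame fractions
    for vehicle, lane, p in records:
        per_vehicle[(lane, vehicle)].append(p)
    per_lane = defaultdict(list)               # lane -> per-vehicle averages
    for (lane, _vehicle), fractions in per_vehicle.items():
        per_lane[lane].append(sum(fractions) / len(fractions))
    return {lane: sum(v) / len(v) for lane, v in per_lane.items()}

# Example with three hypothetical vehicle-frames.
print(missing_trajectory_rate([("a", 1, 0.2), ("a", 1, 0.4), ("b", 1, 0.0)]))  # {1: 0.15}
```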

Result Analysis

The simulation studies take the NGSIM trajectory data as inputs and simulate the blind zone dynamics under different LiDAR sensor installation and configuration scenarios. The detailed performance evaluation results are as follows.

Blind Zone Areas at Different HOI Levels

Figure 5 shows the total blind zone area calculated by Equation 8 at different HOI levels. A test vehicle is set on Lane 3, whose centerline is 36 ft (10.97 m) from the LiDAR, to illustrate the geometric characteristics of blind zones. The height of the vehicle is set to 6 ft (1.83 m), with a length of 14 ft (4.27 m) and a width of 6 ft (1.83 m). The LiDAR is set 25 ft above the ground. Figure 5, a and b, show the roadside LiDAR blind zone areas on the ground and at an HOI of 4 ft, respectively. Though oscillation exists, macroscopically the blind zone area increases with distance. Figure 5c is the visualization of the blind zone at an HOI of 4 ft. As the vehicle moves farther away, the blind zone becomes narrower but longer, which results in an increase in its area. Because of the discrete distribution of the laser beams, the area oscillates as the vehicle corners B1, B2, B3 jump from one beam to another.

Figure 5. Roadside LiDAR Blind Zone Area versus Distance from the LiDAR Sensor: (a) Roadside LiDAR Blind Zone Area, Lane 3, HOI = 0 ft, (b) Roadside LiDAR Blind Zone Area, Lane 3, HOI = 4 ft, and (c) Roadside LiDAR Blind Zone, Lane 3, HOI = 4 ft.

Birdseye View Snapshots of Simulations with Different Configurations

The grid-based detection performance metrics introduced in the previous section are evaluated using pseudo-color maps that display those metrics in color-coded diagrams. In this study, we assess a 1,000 ft segment, as 500 ft is the detection range observed in our field tests with the Velodyne Alpha Prime LiDAR. The widths of the following maps are 192 ft. The lane markings are presented with white solid lines, as lane changing is beyond the scope of the proposed simulation model.

As Figure 6 shows, the simulation covers five lanes of traffic plus a frontage road and ramps, with the vehicles marked by black bounding boxes.

Figure 6. Blind zone height map, 5° at 23 ft, right.

The LiDAR sensor is located at the bottom center, which creates the semicircular or half-elliptical blind spot underneath. The blind zone height measured in each grid is used to create the pseudo-color map, with blue for 0 ft and dark red for 14 ft. The generated blind zone height map is based on the absolute coordinate system around the sensor, in which both lateral occlusion and longitudinal occlusion are reflected.

Figure 7 provides the simulation results for six different configurations. Each configuration consists of five blind zone height maps, and the time interval between two consecutive blind zone maps is 10 s. The colors in the map, as the color bar shows, denote the blind zone height, from 0 (all clear) to 14 ft (4.27 m, totally blocked). To be noted, the first and second lanes from the right in the dataset are a frontage road and ramps rather than regular travel lanes, with light traffic, which has an impact on the blind zone pattern.

Figure 7a shows the scenario where the LiDAR sensor is installed on the left side of the road, at a height of 15 ft (4.57 m) without tilting. Compared with Figure 7b, the right-side installation, the maximum blind zone height from the left-side installation is lower. This is primarily because of the frequent occurrence of trucks in the second and third lanes from the left. Trucks blocked fewer beams in the left-side installation because of their proximity to the LiDAR sensor. It should be noted that the left-side installation would actually require mounting the LiDAR sensor on the median or on gantries, which makes the configuration unrealistic.

Figure 7. 10-Sec-Interval Simulation Blind Zone Map: (a) 0° at 15 ft. Left, (b) 0° at 15 ft. Right, (c) 5° at 15 ft. Right, (d) 0° at 23 ft. Left,
(e) 0° at 23 ft. Right, and (f) 5° at 23 ft. Right.

Comparing Figure 7a with Figure 7d, the higher the installation, the smaller the impact that vehicles have on the LiDAR coverage, but the bigger the blind spot beneath the sensor. Comparing Figure 7e with Figure 7f, the 5° tilting operation slightly reduces the blind spot area underneath the LiDAR. Comparing Figure 7, b and c, with Figure 7, e and f, tilting reduces the blind spot beneath the LiDAR sensor, and the higher installation reduces the blind zone caused by the vehicles.

Configuration Evaluation

Figure 8 shows the maps of effective laser numbers, which indicate the vehicle detection and reconstruction capabilities at each grid position. The jet colormap is used, in which the highest value is represented by dark red and the lowest value, 1, is represented by dark blue; grids reached by no beams are white. We use 4 ft, 6 ft, and 14 ft as the targeted HOIs separately. As can easily be seen, there is always a blind spot under the LiDAR device (bottom center) without any laser beams. With more laser beams in the height range of interest, the shape and model of the vehicles can be better recognized and reconstructed. Comparing Figure 8 horizontally, the tilting operation can significantly improve the laser density but will also shorten the vertical detection range. Comparing Figure 8 vertically, increasing the height of the LiDAR slightly reduces the laser density and gradually increases the area of the blind spot.

The following results are for a LiDAR to be installed at a height of 23 ft (7.01 m) with a 5° tilt on the right side of the road.

Figure 9 shows the histogram of the blind zone durations, with the white lines representing the lane markings. The jet colormap is applied to visualize the blind zone height, in which dark red is the highest value, 14 ft, and dark blue is the lowest value, 0 ft. Figure 9a shows that light traffic does not have much influence on any lane as long as the LiDAR sensors are installed higher than the vehicles. Figure 9b shows that, for half of the cases, the LiDAR may have trouble recognizing and reconstructing small vehicles beyond 200 ft, especially for the first two lanes from the left. Figure 9c shows that, for 15% of the cases, the LiDAR will have trouble finding and tracking small and medium-size vehicles hidden behind trucks at or beyond 100 ft.
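The effective laser number per grid can be approximated from the sensor's vertical beam angles and the mounting geometry. The sketch below is a simplified illustration (flat pavement, tilt applied as a uniform offset to the elevation angles, no occluding vehicles); the beam-angle input and function name are assumptions.

```python
import numpy as np

def laser_density_map(beam_elev_deg, mount_h_ft, tilt_deg, hoi_ft, x_ft, y_ft):
    """Count the beams that pass through each grid cell at or below the
    targeted height of interest (HOI).

    beam_elev_deg : 1-D array of vertical beam angles of the sensor (deg).
    mount_h_ft    : installation height above the pavement (ft).
    tilt_deg      : downward tilt (deg), applied here as a uniform offset
                    to all elevation angles (a simplification).
    hoi_ft        : targeted height of interest, e.g. 4, 6, or 14 ft.
    x_ft, y_ft    : 1-D arrays of grid-cell centers (ft), sensor at origin.
    """
    xx, yy = np.meshgrid(x_ft, y_ft, indexing="ij")
    horiz_range = np.hypot(xx, yy)
    density = np.zeros_like(horiz_range, dtype=int)
    for elev in np.radians(np.asarray(beam_elev_deg) - tilt_deg):
        # Beam height above the pavement when it reaches this horizontal range;
        # beams that have already struck the ground (z < 0) or that pass above
        # the HOI (z > hoi_ft) are not counted for this cell.
        z = mount_h_ft + horiz_range * np.tan(elev)
        density += ((z >= 0.0) & (z <= hoi_ft)).astype(int)
    return density
```

Counting only the beams that stay between the pavement and the targeted HOI reproduces the qualitative trends described above: tilting pushes beams downward and raises the count near the sensor, while a higher mount spreads the same beams over a longer range.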
Figure 8. Laser density maps at different installation height levels and tilting angles.

Figure 9. Duration Distribution of Blind Zone Height Map of 5° at 23 ft: (a) 15th Percentile Blind Zone Height, (b) 50th Percentile Blind Zone Height, and (c) 85th Percentile Blind Zone Height.
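The percentile summaries in Figure 9 can be derived by stacking the per-frame blind zone height maps and taking percentiles along the time axis, as in the brief sketch below (the array layout is assumed for illustration, not the authors' data structure).

```python
import numpy as np

def percentile_height_maps(height_stack, percentiles=(15, 50, 85)):
    """Per-grid percentile blind zone heights over the analysis period.

    height_stack : array of shape (n_frames, nx, ny) holding the per-frame
                   blind zone heights in feet.
    Returns an array of shape (len(percentiles), nx, ny).
    """
    return np.percentile(height_stack, percentiles, axis=0)
```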
Figure 10. Lane-by-lane missing trajectory and lane-by-lane vehicle-based missing time of 5° at 23 ft: (a) Lane-by-Lane Missing Trajectory Rate Histogram, (b) Lane-by-Lane Vehicle-Based Missing Time Distribution - 15%, (c) Lane-by-Lane Vehicle-Based Missing Time Distribution - 50%, and (d) Lane-by-Lane Vehicle-Based Missing Time Distribution - 85%.

The effective coverage may therefore be significantly influenced: although the laser beams may reach as far as 500 ft, the blind zone caused by heavy traffic can reduce the effective range to a very short distance. It may thus be worthwhile to install the LiDAR at an appropriate height with tilting to gain better blind zone performance.

Figure 10a shows the histograms of the lane-by-lane missing trajectory rate, and Figure 10, b to d, show the lane-by-lane vehicle-based missing time distribution with the missing-area threshold at 15%, 50%, and 85%. All four figures consist of eight histograms, and each histogram corresponds to one lane, from the left lanes to the right lanes, including the frontage road and the on- and off-ramps. Figure 10a shows that the first two lanes from the left have a higher missing trajectory rate than the other lanes. Figure 10b illustrates that most vehicles cannot meet the requirement of less than a 15% missing rate for most of the frames. Figure 10c shows that most vehicles between the third lane and the sixth lane from the left can meet the requirement of having 50% of their trajectories detected for most of the frames. Comparing Figure 10c with Figure 10d, not much improvement can be found for the first two lanes from the left.

The two metrics presented in Figure 10 can be used not only to find the optimal configuration of LiDAR installation through simulation, but also as performance measurements for evaluating real-world LiDAR detection through the lane-based missing trajectory rate and the vehicle-based missing time.
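The two Figure 10 metrics can be computed from per-vehicle occlusion records produced by the simulation. The sketch below is one possible formulation; the record layout, field names, and the rule that a frame counts as missing when the occluded share of the vehicle footprint exceeds a threshold are assumptions for illustration, not the exact definitions used in this study.

```python
import numpy as np
from collections import defaultdict

def lane_missing_metrics(vehicle_records, occluded_threshold=0.5):
    """Lane-by-lane missing-trajectory rate and per-vehicle missing time.

    vehicle_records : iterable of dicts with keys 'lane' and
                      'occluded_fraction' (1-D array, one value per frame
                      giving the occluded share of the vehicle footprint).
    occluded_threshold : a frame is treated as missing when the occluded
                         share exceeds this value (e.g. 0.15, 0.50, 0.85).
    """
    missing_frames = defaultdict(int)
    total_frames = defaultdict(int)
    missing_time_share = defaultdict(list)   # per-vehicle missing ratio

    for veh in vehicle_records:
        missing = np.asarray(veh["occluded_fraction"]) > occluded_threshold
        missing_frames[veh["lane"]] += int(missing.sum())
        total_frames[veh["lane"]] += missing.size
        missing_time_share[veh["lane"]].append(float(missing.mean()))

    missing_trajectory_rate = {
        lane: missing_frames[lane] / total_frames[lane]
        for lane in total_frames
    }
    return missing_trajectory_rate, missing_time_share
```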
Figure 11. Time in blind zone and max consecutive time in blind zone of 5° at 23 ft: (a) Time in Blind Zone of 4 Ft., (b) Max
Consecutive Time in Blind Zone of 4 Ft., (c) Time in Blind Zone of 6 Ft., (d) Max Consecutive Time in Blind Zone of 6 Ft., (e) Time in
Blind Zone of 14 Ft., and (f) Max Consecutive Time in Blind Zone of 14 Ft.

Figure 11 shows the grid-based blind zone duration distribution and the maximum consecutive missing duration in the blind zone at HOI levels of 4 ft, 6 ft, and 14 ft, representing the heights of sedans/coupes, SUVs/pickups, and trucks. Figure 11, a, c, and e, show the percentage of blind zone duration over the entire data period. Figure 11a shows that, under the 5° at 23 ft configuration and the given traffic flow, a vehicle lower than 4 ft can only be potentially detected within 200 ft, because blind zone durations exceed 50% starting from around 200 ft. Figure 11c shows that, when the HOI is relaxed to 6 ft, the algorithm can most likely detect and track vehicles within the coverage area, as most of the coverage area has less than 50% blind zone duration. Figure 11e shows that a vehicle higher than 14 ft can be detected and tracked easily, as the blind zone durations are less than 10% overall.

Figure 11, b, d, and f, show the maximum consecutive blind zone durations, which are proposed to evaluate the performance of object detection and tracking in LiDAR data analytic algorithms. Figure 11b shows that, even if the vehicle height is within 4 ft, the vehicle will be detectable but hard to track, because spatially discrete yet temporally consecutive misses occur even near the LiDAR. Figure 11d shows that a vehicle of around 6 ft can be tracked for around 300 ft; however, between 300 and 500 ft the tracking capability may be influenced by the blind zone created by trucks and buses, because temporally consecutive misses become denser beyond three-fifths of the covered 500 ft. Figure 11f shows the same result as Figure 11e: vehicles higher than 14 ft will always be detected and tracked within the coverage area.

Conclusions

The paper proposed a method to quantify the detection blind zones of LiDAR sensors caused by occlusions, and a simulation model was developed to assess and find the optimal LiDAR installation configuration. The proposed method is intended for designing and optimizing roadside sensor installation (e.g., installation height, tilting angle, and location selection) to support CAV applications. The simulation model takes vehicle trajectory and dimension data and can be used to assess the detection efficiency through the proposed metrics for different traffic conditions, as long as the provided trajectory data contain vehicle coordinates and dimensions. In light traffic conditions, the blind zones will be less frequent and less severe; for design purposes, the sensor configuration should therefore consider the more severe blind zone situations during peak hours rather than during free flow.

The proposed model is applied to the NGSIM US101 vehicle trajectory data to assess the detection performance if a Velodyne Alpha Prime LiDAR is used for roadside LiDAR sensing. The output of the proposed method is a 2D map of the blind zone distribution based on the absolute coordinate system around a sensor; the results do not distinguish between lateral and longitudinal occlusion. The height of the occlusion caused by vehicles on the far side of the sensor was modeled through optical geometric equations. The experimental results indicate that an appropriate installation height for LiDAR sensors can help reduce occlusions and allow laser beams to penetrate gaps between vehicles. Tilting the LiDAR sensor is also found to be effective in reducing the blind spot directly underneath the sensor and increasing the beam density close to it. With regard to the blind zone impact, higher and longer vehicles tend to cause spatially bigger and temporally longer blind zones.

Future work of the proposed study includes the development of simulation models that accommodate more complicated roadway geometry and network settings, especially arterial intersections. More detailed road user models with individual heights and shapes, for example for pedestrians and bicyclists, will also be incorporated.
Modeling of heading changes will be added to represent lane changes, turning movements, and road sections with significant horizontal and vertical curves. Meanwhile, the efficiency of the blind zone simulation should be further improved. Multi-LiDAR-sensor deployment assessment will also be explored to allow corridor-wide planning. Similar models may also be developed to assess other sensing technologies that rely on line of sight for detection, such as video and radar sensors.

Author Contributions

The authors confirm contribution to the paper as follows: study conception and design: Peter J. Jin, Yi Ge; data collection: Yi Ge, Tianya T. Zhang; analysis and interpretation of results: Yi Ge, Tianya T. Zhang, Anjiang Chen; draft manuscript preparation: Yi Ge, Peter J. Jin, Tianya T. Zhang. All authors reviewed the results and approved the final version of the manuscript.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This paper is based on work partially supported by New Jersey Department of Transportation and Federal Highway Administration Research Project 21-60168, Middlesex County Resolution 21-821-R, and the National Science Foundation under Grant No. 1952096 and 2133516.

ORCID iDs

Yi Ge https://orcid.org/0000-0002-7186-9024
Peter J. Jin https://orcid.org/0000-0002-7688-3730
Tianya T. Zhang https://orcid.org/0000-0002-7606-9886
Anjiang Chen https://orcid.org/0000-0002-2645-9851