
Multiple Objects Tracking using Radar for Autonomous Driving

Muhammad Ishfaq Hussain, Shoaib Azam, Farzeen Munir, Zafran Khan, and Moongu Jeon
School of Electrical Engineering and Computer Science
Gwangju Institute of Science and Technology
Gwangju, Republic of Korea
email: (ishfaqhussain, shoaibazam, farzeen.munir, mgjeon)@gist.ac.kr, [email protected]

Abstract—Object detection and tracking are integral elements of perceiving the spatio-temporal environment. The availability and affordability of cameras and lidar have made them the leading sensor modalities for object detection and tracking research. Deep learning algorithms for object detection and tracking with camera and lidar have shown promising results, but these modalities are susceptible to adverse weather and suffer from sparse data and limited spatial resolution. In this work, we explore the problem of detecting and tracking distant objects using radar. To demonstrate the efficacy of the proposed work, extensive experiments in different traffic scenarios are performed using our self-driving car test-bed.
Index Terms—Self-Driving Car, Radar, Obstacle Detection and Tracking, Robotics, Extended Kalman Filter

I. INTRODUCTION

For Advanced Driving Assistance Systems (ADAS) and autonomous driving, perception of the environment is essential for robotic and autonomous vehicles. According to the latest technical findings by the US Department of Transportation's National Highway Traffic Safety Administration (NHTSA), more than 90% of road accidents are due to human error [4]. Therefore, object detection is considered paramount for safe navigation on highways and on roads within cities. In particular, object detection in autonomous driving pertains to detecting obstacles (i.e., two-wheel and four-wheel vehicles, pedestrians, and any random obstacles on the road) so that the automated system can operate safely. Overall, to avoid any mishap, the perception system should be robust enough to accurately detect small, distant objects on the road.

Fig. 1. Qualitative results of object detection and tracking in a real environment. The left side shows the results of object detection using the camera image, while the right side displays the radar's markers after removing the noise and applying the proposed methodology for object detection and tracking.

The reliability and safety of autonomous systems depend on vital components (fig. 2) which work hierarchically. These components act as the backbone of autonomous systems and improve their performance. In the first phase, different types of sensors are used by the autonomous vehicle to obtain information (raw data) about the environment in which it operates. Subsequently, the received raw data is processed in the perception step to detect and track obstacles. The planning and actuation phase comprises path planning, obstacle avoidance, and performing a suitable action. From start to end, the most intricate and challenging job is to perceive all possible information from the environment. Scenes with a complex dynamic environment, such as intersections where different maneuvers happen at different speeds, become tedious for safe and robust planning algorithms.

The dawn of autonomous vehicles is emerging, and they will hit public roads in the coming years. The lack of class-specific detectors for all possible obstacles on the road leads to complexity in training deep neural networks. In autonomous vehicles, different sensors (i.e., lidar, camera, radar) are mostly used for detecting and tracking obstacles. Nowadays the most frequently used sensor is lidar, owing to its dense and accurate range measurements; however, lidar is an expensive instrument with range constraints. The continuous development of deep-learning, vision-based detection and tracking yields state-of-the-art results, but training for and detecting all unknown classes is an expensive and time-consuming undertaking. Despite these advancements, we are still unable to detect distant obstacles with lidar and cameras, due to long range, bad weather, and various other factors which hamper their performance.

TABLE I
COMPARATIVE ANALYSIS OF SENSORS FOR OBJECT TRACKING

Property                     Lidar                       Camera   Radar
All-weather operation        -                           -        X
Cost                         High                        Low      Medium
Range                        Medium (Velodyne HDL-32E)   Medium   Long (approx. 200 m)
Working in darkness          -                           -        X
Direct velocity estimation   -                           -        X
Affected by illumination     X                           X        -

The only remaining sensor which can accurately detect distant obstacles is radar. Radar offers long range and provides measurements in diverse weather conditions, despite its noisy and sparse data. Radars are widely used for object detection and tracking because they are economical, perform better in deteriorated weather conditions, and can measure the radial velocity of detected objects directly.

In this study, we focus on the importance of detecting and tracking distant obstacles using radar scan data. The use of radar scans provides a promising outcome in terms of detecting distant obstacles, and also reduces the computational cost compared to camera and lidar [1]. The rest of the paper is organized as follows: Section II explains the related work; multi-object detection and tracking are discussed in Section III; Section IV focuses on the experimentation and results; and finally, Section V concludes the paper.

II. RELATED WORK

Multi-object tracking plays an important role in advanced driving assistance systems (ADAS) and in autonomous driving. In recent years many fully convolutional detection and tracking algorithms have been proposed and implemented in real-time environments. Single-shot detectors (SSD) [7] and You Only Look Once (YOLO) [6] use one-stage algorithms which are computationally less expensive; however, they yield lower detection quality [1]. In particular, R-CNN and Faster R-CNN [11] give state-of-the-art solutions, but only for a limited set of classes and at higher computational cost [1]. RVNet [8] uses radar data for obstacle detection by fusing it with a monocular camera. In [18], high-resolution radar is used, making it possible to extract the size of an object as well as its motion precisely; that work addresses ambiguities and the extended-object detection problem, a worth-mentioning effort in radar-based tracking. Multi-modal networks have also been discussed which help detect small objects as early as possible. Early fusion and segmentation are hot research areas for tracking distant obstacles in real time for unmanned aerial vehicles. What, when, and how to fuse are open-ended questions in this research area which create new horizons to be explored in the world of autonomous driving vehicles [1]. The network proposed in [9] generates object proposals solely from radar and achieves high-quality detection results in real time; it is closely related to our work as it incorporates an attention mechanism in its implementation. Exploring resource-efficient solutions is the need of the hour for real-time object detection; the efficiency of such solutions may be increased by allocating system resources to real objects only and ignoring distractions such as pictures of vehicles on billboards.

Our work is greatly inspired by the techniques used in [12], [9], and [1], which led to improved accuracy, speed, and optimization using multi-modal sensors in a novel attention-guided multi-modality fusion mechanism [12]. This work follows a very loosely coupled technique for research in multi-modal fusion approaches.

Fig. 2. The components shown in the figure play an important role in the autonomy of autonomous vehicles and robotics.

III. MULTI-OBJECT TRACKING


Radar can directly measure the velocity of an object using the Doppler effect [5]. For multiple object tracking, and to update object states dynamically, an Extended Kalman Filter (EKF) is implemented on the radar measurements for different types of road entities, including vehicles and pedestrians. The Kalman filter (KF) handles only linear models [14], while the Extended Kalman Filter (EKF) [15] can handle non-linear models as well. The measurements coming directly from the radar sensor are in raw form and carry too much noise and clutter. In order to remove this noise, density-based clustering techniques are applied to the raw radar data, as in the sketch below.
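The paper specifies only that density-based clustering is used to reject clutter, without naming the algorithm or its parameters. The following is a minimal sketch of this step, assuming DBSCAN from scikit-learn; the eps and min_samples values are illustrative, not the values used on the test-bed.

import numpy as np
from sklearn.cluster import DBSCAN

# Minimal sketch of density-based noise removal on raw radar detections.
# DBSCAN and the eps/min_samples values below are assumptions; the paper
# states only that "density-based clustering techniques" are applied.
def filter_radar_noise(detections, eps=1.5, min_samples=3):
    # detections: (N, 2) array of (px, py) positions from one radar scan.
    # DBSCAN labels isolated returns as -1; those are dropped as clutter.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(detections)
    keep = labels != -1
    return detections[keep], labels[keep]

# Example: three nearby returns form one object; the lone return is clutter.
scan = np.array([[10.0, 0.2], [10.3, 0.1], [10.1, -0.1], [55.0, 9.0]])
clustered, cluster_ids = filter_radar_noise(scan)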
The measurements (px, py, vx, vy) obtained from the radar scan are shown in fig. 4 in detail. The measurement function h used in the EKF is given in Eq. 1, where x′ is the mean of the state vector and n is the measurement noise. Here ρ is the radial distance from the origin, θ is the angle between ρ and the x-axis, and the radial velocity is the range rate ρ̇. The EKF linearizes this non-linear function using the Jacobian matrix. The filtering process covers the complete hierarchy from initialization to prediction, followed by updating the state.

H = h(x′) + n    (1)

Fig. 4. The radar gives the range and angle of the next vehicle in the image above. On the basis of these values, the additional measurement components (px, py, vx, vy) are calculated, which are used in the Extended Kalman Filter (EKF) for object-level detection and then for the tracking of objects.
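To make Eq. 1 and the initialization-prediction-update hierarchy concrete, the sketch below implements the measurement function h for a state (px, py, vx, vy), its Jacobian, and one EKF cycle. The constant-velocity motion model, the matrices Q and R, and the small-ρ guards are our assumptions; the paper does not publish its filter parameters.

import numpy as np

def init_state(rho, theta, rho_dot):
    # Approximate track initialization from a polar radar return: the radar
    # only observes the radial velocity component, so vx, vy are a rough guess.
    return np.array([rho * np.cos(theta), rho * np.sin(theta),
                     rho_dot * np.cos(theta), rho_dot * np.sin(theta)])

def h(x):
    # Non-linear measurement function of Eq. 1: maps the state (px, py, vx, vy)
    # to the radar measurement (rho, theta, rho_dot).
    px, py, vx, vy = x
    rho = np.hypot(px, py)
    theta = np.arctan2(py, px)
    rho_dot = (px * vx + py * vy) / max(rho, 1e-6)
    return np.array([rho, theta, rho_dot])

def jacobian_h(x):
    # Jacobian of h, used by the EKF to linearize the measurement model.
    px, py, vx, vy = x
    rho2 = max(px * px + py * py, 1e-6)
    rho = np.sqrt(rho2)
    rho3 = rho2 * rho
    return np.array([
        [px / rho,                      py / rho,                      0.0,      0.0],
        [-py / rho2,                    px / rho2,                     0.0,      0.0],
        [py * (vx*py - vy*px) / rho3,   px * (vy*px - vx*py) / rho3,   px / rho, py / rho]])

def ekf_step(x, P, z, Q, R, dt=0.05):   # 20 Hz scan interval -> dt = 0.05 s
    # Predict with a linear constant-velocity motion model (our assumption).
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the radar measurement z = (rho, theta, rho_dot).
    Hj = jacobian_h(x)
    y = z - h(x)
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi   # wrap the angle residual
    S = Hj @ P @ Hj.T + R
    K = P @ Hj.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ Hj) @ P
    return x, P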

Fig. 3. The vehicle (Kia Soul EV) used for performing the experiments. The radar is mounted on the front bumper of the car, while the lidar and camera sensors are mounted on the roof.

IV. EXPERIMENTATION AND RESULTS

A series of experiments were performed to ascertain the performance and quality of distant object tracking using radar data. The radar is mounted on the front bumper of our autonomous vehicle (Kia Soul EV), as shown in fig. 3, at a height of 50 cm above the ground. The 76 GHz pulse-Doppler radar has two separate beams, long and short (±45 deg and ±20 deg), and its scan interval is 20 Hz. We collected two-dimensional (2D) radar scan (detection) data. The vehicle also carried a lidar (Velodyne HDL-32E) and a monocular camera (FLIR BlackFly S) for estimating the ground truth. We performed multiple experiments to measure the accuracy and reliability of radar tracking, on the main road and under different environmental conditions. In order to evaluate the proposed method, we performed three different experiments. In fig. 1, the left image shows the camera detections using YOLOv3 [16], while the right image depicts the radar tracking data. It is clear that the camera is unable to detect the pedestrian crossing the road, while the radar detects the pedestrian perfectly. In experiment II, with a different scenario, the camera is again unable to detect a vehicle while the radar detects and tracks the same vehicle. In experiment III, the same problem was observed: a pedestrian was not detected by the camera while the radar detected that person perfectly.

We created the ground truth manually and calculated the Intersection over Union (IoU) score to undertake a quantitative evaluation. The ground truth is based on the image data from the camera mounted on the vehicle. The qualitative results of the three experiments are shown in fig. 1, 5, and 6, respectively. We focus only on the obstacles which cannot be detected by the camera and lidar.

Fig. 5. Part (a) shows the camera image, while part (b) shows the radar detection and tracking data in a real environment.

Fig. 6. Part (a) shows the camera image, while part (b) shows the radar detection and tracking data in a real environment.

TABLE II
INTERSECTION OVER UNION (IoU)

Experiment    IoU (Score)
I             80.44 %
II            79.11 %
III           80.29 %
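For reference, the metric behind Table II can be computed as in the minimal sketch below. The (x1, y1, x2, y2) box convention is an assumption of this sketch; the paper does not state how the radar tracks were projected into the image frame for comparison with the manually created ground truth.

# Minimal axis-aligned bounding-box IoU, the metric reported in Table II.
# The (x1, y1, x2, y2) box convention is an assumption of this sketch.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Width/height of the intersection rectangle (zero if no overlap).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0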

V. CONCLUSION

In this work, we have evaluated distant obstacles which cannot be detected by the camera and lidar sensors. The same obstacles can be detected by the radar sensor by employing the EKF. Tracking distant on-road vehicles and pedestrians improves the safety of the autonomous vehicle manifold, even though object classification cannot be achieved with radar data alone. Detecting and then tracking such obstacles can further be utilized in the image and lidar frames to classify objects and to pay more attention only to those particular areas which are not occupied. It is also pertinent to mention that using radar in an autonomous stack increases the safety and reliability of autonomous driving. In future work, we will use radar data to increase the attention paid to specific regions which are neglected in the image or lidar frame due to bad weather or distance, in order to enhance the quality of detection.
ACKNOWLEDGMENT
This work was partly supported by the ICT R&D program of MSIP/IITP (2014-0-00077, Development of global multi-target tracking and event prediction techniques based on real-time large-scale video analysis) and by the GIST Autonomous Vehicle project.
REFERENCES

[1] S. Chadwick, W. Maddern, and P. Newman, "Distant vehicle detection using radar and vision," in 2019 International Conference on Robotics and Automation (ICRA), pp. 8311-8317.
[2] A. Geiger, P. Lenz, and R. Urtasun, "Are we ready for autonomous driving? The KITTI vision benchmark suite," in Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
[3] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, "The Cityscapes dataset for semantic urban scene understanding," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[4] S. Singh, "Critical reasons for crashes investigated in the National Motor Vehicle Crash Causation Survey," Traffic Safety Facts Crash Stats, Report No. DOT HS 812 506, National Highway Traffic Safety Administration, Washington, DC, March 2018.
[5] L. Stanislas and T. Peynot, "Characterisation of the Delphi electronically scanning radar for robotics applications," 2015.
[6] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 779-788.
[7] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, "SSD: Single shot multibox detector," in European Conference on Computer Vision, Springer, 2016, pp. 21-37.
[8] V. John and S. Mita, "RVNet: Deep sensor fusion of monocular camera and radar for image-based obstacle detection in challenging environments," in Pacific-Rim Symposium on Image and Video Technology, Springer, Cham, 2019, pp. 351-364.
[9] R. Nabati and H. Qi, "RRPN: Radar region proposal network for object detection in autonomous vehicles," in 2019 IEEE International Conference on Image Processing (ICIP), 2019, pp. 3093-3097.
[10] F. Munir, S. Azam, I. Hussain, A. M. Sheri, and M. Jeon, "Autonomous vehicle: The architectural aspect of self driving car," in Sensors, Signal and Image Processing (SSIP), Prague, Czech Republic, 2018.
[11] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," in Advances in Neural Information Processing Systems, 2015, pp. 91-99.
[12] W. Zhang, H. Zhou, S. Sun, Z. Wang, J. Shi, and C. C. Loy, "Robust multi-modality multi-object tracking," in Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 2365-2374.
[13] D. Y. Kim and M. Jeon, "Data fusion of radar and image measurements for multi-object tracking via Kalman filtering," Information Sciences, vol. 278, pp. 641-652, 2014.
[14] G. Welch and G. Bishop, "An introduction to the Kalman filter," 1995.
[15] S. Bolognani, L. Tubiana, and M. Zigliotto, "Extended Kalman filter tuning in sensorless PMSM drives," IEEE Transactions on Industry Applications, vol. 39, no. 6, pp. 1741-1747, 2003.
[16] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018.
[17] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, "Focal loss for dense object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2980-2988.
[18] A. Scheel and K. Dietmayer, "Tracking multiple vehicles using a variational radar model," IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 10, pp. 3721-3736, 2018.
