


A Survey of Deep Learning Techniques for Autonomous Driving

Sorin Grigorescu∗, Bogdan Trasnea, Tiberiu Cocias, Gigel Macesanu

Artificial Intelligence, Elektrobit Automotive
Robotics, Vision and Control Lab, Transilvania University of Brasov
Brasov, Romania

Abstract

The last decade witnessed increasingly rapid progress in self-driving vehicle technology, mainly backed by advances in the area of deep learning and artificial intelligence. The objective of this paper is to survey the current state-of-the-art on deep learning technologies used in autonomous driving. We start by presenting AI-based self-driving architectures, convolutional and recurrent neural networks, as well as the deep reinforcement learning paradigm. These methodologies form a base for the surveyed driving scene perception, path planning, behavior arbitration and motion control algorithms. We investigate both the modular perception-planning-action pipeline, where each module is built using deep learning methods, as well as End2End systems, which directly map sensory information to steering commands. Additionally, we tackle current challenges encountered in designing AI architectures for autonomous driving, such as their safety, training data sources and computational hardware. The comparison presented in this survey helps to gain insight into the strengths and limitations of deep learning and AI approaches for autonomous driving, and assists with design choices.¹

∗ The authors are with Elektrobit Automotive and the Robotics, Vision and Control Laboratory (ROVIS Lab) at the Department of Automation and Information Technology, Transilvania University of Brasov, 500036 Romania. E-mail: see http://rovislab.com/sorin_grigorescu.html.
1 The articles referenced in this survey can be accessed at the web-page accompanying this paper, available at http://rovislab.com/survey_DL_AD.html

Contents

1 Introduction
2 Deep Learning based Decision-Making Architectures used in Self-Driving Cars
3 Overview of Deep Learning Technologies
  3.1 Deep Convolutional Neural Networks
  3.2 Recurrent Neural Networks
  3.3 Deep Reinforcement Learning
4 Deep Learning for Driving Scene Perception and Localization
  4.1 Sensing Hardware: Camera vs. LiDAR Debate
  4.2 Driving Scene Understanding
    4.2.1 Bounding-Box-Like Object Detectors
    4.2.2 Semantic and Instance Segmentation
    4.2.3 Localization
  4.3 Perception using Occupancy Maps
5 Deep Learning for Path Planning and Behavior Arbitration
6 Motion Controllers for AI-based Self-Driving Cars
  6.1 Learning Controllers
  6.2 End2End Learning Control
7 Safety of Deep Learning in Autonomous Driving
8 Data Sources for Training Autonomous Driving Systems
9 Computational Hardware and Deployment
10 Discussion and Conclusions
  10.1 Final Notes


1 Introduction

Over the course of the last decade, Deep Learning and Artificial Intelligence (AI) became the main technologies behind
many breakthroughs in computer vision (Krizhevsky et al., 2012), robotics (Andrychowicz et al., 2018) and Natural
Language Processing (NLP) (Goldberg, 2017). They also have a major impact on the autonomous driving revolution seen today both in academia and industry. Autonomous Vehicles (AVs) and self-driving cars began to migrate from laboratory development and testing conditions to driving on public roads. Their deployment promises to reduce road accidents and traffic congestion, as well as to improve mobility in overcrowded cities. The term "self-driving" may seem self-evident, but there are actually six SAE levels used to define autonomous driving. The SAE J3016 standard (SAE Committee, 2014) introduces a scale from 0 to 5 for grading vehicle automation. Lower SAE levels feature basic driver assistance, whilst higher SAE levels move towards vehicles requiring no human interaction whatsoever. Cars at SAE Level 5 require no human input and typically will not even feature steering wheels or foot pedals.

Although most driving scenarios can be relatively simply solved with classical perception, path planning and motion
control methods, the remaining unsolved scenarios are corner cases in which traditional methods fail.

One of the first autonomous cars was developed by Ernst Dickmanns (Dickmanns and Graefe, 1988) in the 1980s.
This paved the way for new research projects, such as PROMETHEUS, which aimed to develop a fully functional
autonomous car. In 1994, the VaMP driverless car managed to drive 1,600 km, out of which 95% were driven autonomously. Similarly, in 1995, CMU NAVLAB demonstrated autonomous driving over 6,000 km, with 98% driven autonomously. Other important milestones in autonomous driving were the DARPA Grand Challenges in 2004 and 2005, as well as the DARPA Urban Challenge in 2007. The goal was for a driverless car to navigate an off-road course as fast as possible, without human intervention. In 2004, none of the 15 vehicles completed the race. Stanley, the winner of the 2005 race, leveraged Machine Learning techniques for navigating the unstructured environment. This was a turning point in self-driving car development, acknowledging Machine Learning and AI as central components of autonomous driving. The turning point is also notable in this survey paper, since the majority of the surveyed work is dated after 2005.

In this survey, we review the different artificial intelligence and deep learning technologies used in autonomous driving, covering the state-of-the-art methods applied to self-driving cars. We also dedicate complete sections to safety aspects, the challenge of training data sources and the required computational hardware.

2 Deep Learning based Decision-Making Architectures used in Self-Driving Cars

Self-driving cars are autonomous decision-making systems that process streams of observations coming from different
on-board sources, such as cameras, radars, LiDARs, ultrasonic sensors, GPS units and/or inertial sensors. These
observations are used by the car’s computer to make driving decisions. The basic block diagrams of an AI powered
autonomous car are shown in Fig. 1. The driving decisions are computed either in a modular perception-planning-
action pipeline (Fig. 1(a)), or in an End2End learning fashion (Fig. 1(b)), where sensory information is directly mapped
to control outputs. The components of the modular pipeline can be designed either based on AI and deep learning
methodologies, or using classical non-learning approaches. Various permutations of learning and non-learning based
components are possible (e.g. a deep learning based object detector provides input to a classical A-star path planning
algorithm). A safety monitor is designed to assure the safety of each module.

The modular pipeline in Fig. 1(a) is hierarchically decomposed into four components which can be designed using either deep learning and AI approaches, or classical methods. These components are:

• Perception and Localization,
• High-Level Path Planning,
• Behavior Arbitration, or low-level path planning,
• Motion Controllers.

Figure 1: Deep Learning based self-driving car. The architecture can be implemented either as a sequential perception-planning-action pipeline (a), or as an End2End system (b). In the sequential pipeline case, the components can be designed either using AI and deep learning methodologies, or based on classical non-learning approaches. End2End learning systems are mainly based on deep learning methods. A safety monitor is usually designed to ensure the safety of each module.

Based on these four high-level components, we have grouped together relevant deep learning papers describing meth-
ods developed for autonomous driving systems. In addition to the reviewed algorithms, we have also grouped relevant
articles covering the safety, data sources and hardware aspects encountered when designing deep learning modules
for self-driving cars.

Given a route planned through the road network, the first task of an autonomous car is to understand and localize itself
in the surrounding environment. Based on this representation, a continuous path is planned and the future actions of
the car are determined by the behavior arbitration system. Finally, a motion control system reactively corrects errors
generated in the execution of the planned motion. A review of classical non-AI design methodologies for these four
components can be found in (Paden et al., 2016).

In the following, we give an introduction to the deep learning and AI technologies used in autonomous driving and survey the different methodologies used to design the hierarchical decision-making process described above. Additionally, we provide an overview of End2End learning systems, which encode the hierarchical process into a single deep learning architecture that directly maps sensory observations to control outputs.

3 Overview of Deep Learning Technologies

In this section, we describe the basis of deep learning technologies used in autonomous vehicles and comment on
the capabilities of each paradigm. We focus on Convolutional Neural Networks (CNN), Recurrent Neural Networks
(RNN) and Deep Reinforcement Learning (DRL), which are the most common deep learning methodologies applied
to autonomous driving.

Throughout the survey, we use the following notations to describe time dependent sequences. The value of a variable
is defined either for a single discrete time step t, written as superscript < t >, or as a discrete sequence defined in the
< t,t + k > time interval, where k denotes the length of the sequence. For example, the value of a state variable z is
defined either at discrete time t, as z<t> , or within a sequence interval z<t,t+k> . Vectors and matrices are indicated by
bold symbols.

3.1 Deep Convolutional Neural Networks

Convolutional Neural Networks (CNN) are mainly used for processing spatial information, such as images, and can
be viewed as image feature extractors and universal non-linear function approximators (Lecun et al., 1998), (Bengio
et al., 2013). Before the rise of deep learning, computer vision systems used to be implemented based on handcrafted
features, such as HAAR (Viola and Jones, 2001), Local Binary Patterns (LBP) (Ojala et al., 1996), or Histograms of
Oriented Gradients (HoG) (Dalal and Triggs, 2005). In comparison to these traditional handcrafted features, convo-
lutional neural networks are able to automatically learn a representation of the feature space encoded in the training
set.

CNNs can be loosely understood as very approximate analogies to different parts of the mammalian visual cor-
tex (Hubel and N.Wiesel, 1963). An image formed on the retina is sent to the visual cortex through the thalamus.
Each brain hemisphere has its own visual cortex. The visual information is received by the visual cortex in a crossed
manner: the left visual cortex receives information from the right eye, while the right visual cortex is fed with visual
data from the left eye. The information is processed according to the dual flux theory (Goodale and Milner, 1992),
which states that the visual flow follows two main fluxes: a ventral flux, responsible for visual identification and object
recognition, and a dorsal flux used for establishing spatial relations between objects. A CNN mimics the functioning
of the ventral flux, in which different areas of the brain are sensible to specific features in the visual field. The earlier
brain cells in the visual cortex are activated by sharp transitions in the visual field of view, in the same way in which
an edge detector highlights sharp transitions between the neighboring pixels in an image. These edges are further used
in the brain to approximate object parts and finally to estimate abstract representations of objects.

A CNN is parametrized by its weights vector θ = [W, b], where W is the set of weights governing the inter-neural
connections and b is the set of neuron bias values. The set of weights W is organized as image filters, with coefficients
learned during training. Convolutional layers within a CNN exploit local spatial correlations of image pixels to learn
translation-invariant convolution filters, which capture discriminant image features.

Consider a multichannel signal representation Mk in layer k, which is a channel-wise integration of signal representations Mk,c, where c ∈ N. A signal representation can be generated in layer k + 1 as:

$$\mathbf{M}_{k+1,l} = \varphi(\mathbf{M}_k \ast \mathbf{w}_{k,l} + b_{k,l}), \tag{1}$$

where wk,l ∈ W is a convolutional filter with the same number of channels as Mk , bk,l ∈ b represents the bias, l is a
channel index and ∗ denotes the convolution operation. ϕ(·) is an activation function applied to each pixel in the input
signal. Typically, the Rectified Linear Unit (ReLU) is the most commonly used activation function in computer vision
applications (Krizhevsky et al., 2012). The final layer of a CNN is usually a fully-connected layer which acts as an
object discriminator on a high-level abstract representation of objects.
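
To make the convolution-plus-activation operation of Eq. (1) concrete, the following minimal sketch implements one such layer in PyTorch; the channel counts, kernel size and input resolution are illustrative assumptions rather than values taken from any surveyed network.

```python
import torch
import torch.nn as nn

# Minimal sketch of Eq. (1): one convolutional layer followed by a ReLU activation.
# Channel counts and kernel size are illustrative assumptions, not values from the paper.
conv_k = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)  # filters w_{k,l}, biases b_{k,l}
phi = nn.ReLU()                                                               # activation function phi(.)

M_k = torch.randn(1, 3, 64, 64)   # multichannel signal representation M_k (batch, channels, height, width)
M_k1 = phi(conv_k(M_k))           # M_{k+1} = phi(M_k * w_k + b_k), computed for all channels l at once
print(M_k1.shape)                 # torch.Size([1, 16, 64, 64])
```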

In a supervised manner, the response R(·; θ) of a CNN can be trained using a training database D = [(x1, y1), ..., (xm, ym)], where xi is a data sample, yi is the corresponding label and m is the number of training examples. The optimal network parameters can be calculated using Maximum Likelihood Estimation (MLE). For the clarity of explanation, we take as example the simple least-squares error function, which can be used to drive the MLE process when training regression estimators:

$$\hat{\theta} = \arg\max_{\theta} \mathcal{L}(\theta; D) = \arg\min_{\theta} \sum_{i=1}^{m} \big(R(x_i; \theta) - y_i\big)^2. \tag{2}$$

Figure 2: A folded (a) and unfolded (b) over time, many-to-many Recurrent Neural Network. Over time t, both the input s<t−τi,t> and output z<t+1,t+τo> sequences share the same weights h<·>. The architecture is also referred to as a sequence-to-sequence model.

For classification purposes, the least-squares error is usually replaced by the cross-entropy, or the negative log-
likelihood loss functions. The optimization problem in Eq. 2 is typically solved with Stochastic Gradient Descent
(SGD) and the backpropagation algorithm for gradient estimation (Rumelhart et al., 1986). In practice, different
variants of SGD are used, such as Adam (Kingma and Ba, 2015) or AdaGrad (J. Duchi and Singer, 2011).
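
As a hedged illustration of the MLE training procedure in Eq. (2), the snippet below fits a small regression network with the least-squares loss and SGD; the architecture, learning rate and dummy data are assumptions made purely for demonstration.

```python
import torch
import torch.nn as nn

# Illustrative regression estimator R(x; theta); the architecture is an assumption, not the paper's.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1),
)
criterion = nn.MSELoss()                                   # least-squares error of Eq. (2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)   # could be replaced by Adam or AdaGrad

# Dummy training database D = [(x_i, y_i)] with random data, standing in for real samples.
x = torch.randn(32, 3, 64, 64)
y = torch.randn(32, 1)

for epoch in range(10):
    optimizer.zero_grad()
    loss = criterion(model(x), y)   # sum of squared residuals (up to a constant factor)
    loss.backward()                 # backpropagation for gradient estimation
    optimizer.step()                # stochastic gradient descent update
```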

3.2 Recurrent Neural Networks

Among deep learning techniques, Recurrent Neural Networks (RNN) are especially good at processing temporal sequence data, such as text or video streams. Different from conventional neural networks, an RNN contains a time dependent feedback loop in its memory cell. Given a time dependent input sequence [s<t−τi>, ..., s<t>] and an output sequence [z<t+1>, ..., z<t+τo>], an RNN can be "unfolded" τi + τo times to generate a loop-less network architecture matching the input length, as illustrated in Fig. 2. t represents a temporal index, while τi and τo are the lengths of the input and output sequences, respectively. Such neural networks are also encountered under the name of sequence-to-sequence models. An unfolded network has τi + τo + 1 identical layers, that is, each layer shares the same learned weights. Once unfolded, an RNN can be trained using the backpropagation through time algorithm. When compared to a conventional neural network, the only difference is that the learned weights in each unfolded copy of the network are averaged, thus enabling the network to share the same weights over time.

The main challenge in using basic RNNs is the vanishing gradient encountered during training. The gradient signal
can end up being multiplied a large number of times, as many as the number of time steps. Hence, a traditional RNN
is not suitable for capturing long-term dependencies in sequence data. If a network is very deep, or processes long
sequences, the gradient of the network’s output would have a hard time in propagating back to affect the weights of
the earlier layers. Under gradient vanishing, the weights of the network will not be effectively updated, ending up with
very small weight values.

Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) networks are non-linear function approxi-
mators for estimating temporal dependencies in sequence data. As opposed to traditional recurrent neural networks,
LSTMs solve the vanishing gradient problem by incorporating three gates, which control the input, output and memory
state.

Recurrent layers exploit temporal correlations of sequence data to learn time dependent neural structures. Consider
the memory state c<t−1> and the output state h<t−1> in an LSTM network, sampled at time step t − 1, as well as the
input data s<t> at time t. The opening or closing of a gate is controlled by a sigmoid function σ (·) of the current input
signal s<t> and the output signal of the last time point h<t−1> , as follows:
$$\Gamma_u^{<t>} = \sigma(W_u \mathbf{s}^{<t>} + U_u \mathbf{h}^{<t-1>} + b_u), \tag{3}$$

$$\Gamma_f^{<t>} = \sigma(W_f \mathbf{s}^{<t>} + U_f \mathbf{h}^{<t-1>} + b_f), \tag{4}$$

$$\Gamma_o^{<t>} = \sigma(W_o \mathbf{s}^{<t>} + U_o \mathbf{h}^{<t-1>} + b_o), \tag{5}$$

where $\Gamma_u^{<t>}$, $\Gamma_f^{<t>}$ and $\Gamma_o^{<t>}$ are gate functions of the input gate, forget gate and output gate, respectively. Given the current observation, the memory state c<t> will be updated as:

$$\mathbf{c}^{<t>} = \Gamma_u^{<t>} \ast \tanh(W_c \mathbf{s}^{<t>} + U_c \mathbf{h}^{<t-1>} + b_c) + \Gamma_f^{<t>} \ast \mathbf{c}^{<t-1>}. \tag{6}$$

The new network output h<t> is computed as:

$$\mathbf{h}^{<t>} = \Gamma_o^{<t>} \ast \tanh(\mathbf{c}^{<t>}). \tag{7}$$

An LSTM network Q is parametrized by θ = [Wi , Ui , bi ], where Wi represents the weights of the network’s gates and
memory cell multiplied with the input state, Ui are the weights governing the activations and bi denotes the set of
neuron bias values. ∗ symbolizes element-wise multiplication.
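
For clarity, Eqs. (3)-(7) can be transcribed directly into code. The sketch below is a plain NumPy implementation of a single LSTM time step; the state dimensions and randomly initialized parameters are placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(s_t, h_prev, c_prev, W, U, b):
    """One LSTM time step following Eqs. (3)-(7). W, U, b are dicts holding the
    parameters of the update (u), forget (f), output (o) gates and the memory candidate (c)."""
    gamma_u = sigmoid(W["u"] @ s_t + U["u"] @ h_prev + b["u"])   # Eq. (3), input/update gate
    gamma_f = sigmoid(W["f"] @ s_t + U["f"] @ h_prev + b["f"])   # Eq. (4), forget gate
    gamma_o = sigmoid(W["o"] @ s_t + U["o"] @ h_prev + b["o"])   # Eq. (5), output gate
    c_t = gamma_u * np.tanh(W["c"] @ s_t + U["c"] @ h_prev + b["c"]) + gamma_f * c_prev  # Eq. (6)
    h_t = gamma_o * np.tanh(c_t)                                 # Eq. (7)
    return h_t, c_t

# Placeholder dimensions: 4-dimensional input, 8-dimensional hidden/memory state.
n_in, n_hid = 4, 8
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(n_hid, n_in)) for k in "ufoc"}
U = {k: rng.normal(size=(n_hid, n_hid)) for k in "ufoc"}
b = {k: np.zeros(n_hid) for k in "ufoc"}
h, c = np.zeros(n_hid), np.zeros(n_hid)
for s in rng.normal(size=(5, n_in)):       # a short input sequence s^{<t-tau_i, t>}
    h, c = lstm_step(s, h, c, W, U, b)
```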

In a supervised learning setup, given a set of training sequences $D = [(\mathbf{s}_1^{<t-\tau_i,t>}, \mathbf{z}_1^{<t+1,t+\tau_o>}), ..., (\mathbf{s}_q^{<t-\tau_i,t>}, \mathbf{z}_q^{<t+1,t+\tau_o>})]$, that is, q independent pairs of observed sequences with assignments $\mathbf{z}^{<t+1,t+\tau_o>}$, one can train the response of an LSTM network Q(·; θ) using Maximum Likelihood Estimation:

$$\hat{\theta} = \arg\max_{\theta} \mathcal{L}(\theta; D) = \arg\min_{\theta} \sum_{i=1}^{q} l_i\big(Q(\mathbf{s}_i^{<t-\tau_i,t>}; \theta), \mathbf{z}_i^{<t+1,t+\tau_o>}\big) = \arg\min_{\theta} \sum_{i=1}^{q} \sum_{t=1}^{\tau_o} l_i^{<t>}\big(Q^{<t>}(\mathbf{s}_i^{<t-\tau_i,t>}; \theta), \mathbf{z}_i^{<t>}\big), \tag{8}$$

where an input sequence of observations $\mathbf{s}^{<t-\tau_i,t>} = [\mathbf{s}^{<t-\tau_i>}, ..., \mathbf{s}^{<t-1>}, \mathbf{s}^{<t>}]$ is composed of τi consecutive data samples, l(·, ·) is the logistic regression loss function and t represents a temporal index.

In recurrent neural networks terminology, the optimization procedure in Eq. 8 is typically used for training ”many-to-
many” RNN architectures, such as the one in Fig. 2, where the input and output states are represented by temporal
sequences of τi and τo data instances, respectively. This optimization problem is commonly solved using gradient
based methods, like Stochastic Gradient Descent (SGD), together with the backpropagation through time algorithm
for calculating the network’s gradients.
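
A minimal sketch of training such a many-to-many model in the sense of Eq. (8) is given below, using PyTorch's built-in LSTM and a mean squared error standing in for the per-step loss l(·, ·); the sequence lengths, dimensions and the simplified decoder are assumptions for illustration only.

```python
import torch
import torch.nn as nn

tau_i, tau_o, n_in, n_hid, n_out = 10, 5, 6, 32, 2   # assumed sequence lengths and sizes

class Seq2Seq(nn.Module):
    """Many-to-many LSTM Q(.; theta): encodes tau_i observations, predicts tau_o outputs."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_in, hidden_size=n_hid, batch_first=True)
        self.head = nn.Linear(n_hid, n_out)
    def forward(self, s_seq):
        _, (h, _) = self.lstm(s_seq)                  # encode the input sequence
        h = h[-1]                                     # last hidden state
        # decode by repeating the encoding for each of the tau_o output steps (simplification)
        return self.head(h.unsqueeze(1).repeat(1, tau_o, 1))

model = Seq2Seq()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()                                 # stands in for the per-step loss l(., .)

s = torch.randn(16, tau_i, n_in)                       # q = 16 observed input sequences
z = torch.randn(16, tau_o, n_out)                      # corresponding target output sequences
for _ in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(s), z)                        # averaged over sequences and output steps
    loss.backward()                                    # backpropagation through time
    optimizer.step()
```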

3.3 Deep Reinforcement Learning

In the following, we review the Deep Reinforcement Learning (DRL) concept as an autonomous driving task, using
the Partially Observable Markov Decision Process (POMDP) formalism.
In a POMDP, an agent, which in our case is the self-driving car, senses the environment with observation I<t>, performs an action a<t> in state s<t>, interacts with its environment through a received reward R<t+1>, and transits to the next state s<t+1> following a transition function $T^{s^{<t+1>}}_{s^{<t>},a^{<t>}}$.

In RL based autonomous driving, the task is to learn an optimal driving policy for navigating from state $s^{<t>}_{start}$ to a destination state $s^{<t+k>}_{dest}$, given an observation I<t> at time t and the system's state s<t>. I<t> represents the observed environment, while k is the number of time steps required for reaching the destination state $s^{<t+k>}_{dest}$.

In reinforcement learning terminology, the above problem can be modeled as a POMDP M := (I, S, A, T, R, γ), where:

• I is the set of observations, with I<t> ∈ I defined as an observation of the environment at time t.

• S represents a finite set of states, s<t> ∈ S being the state of the agent at time t, commonly defined as the vehicle's position, heading and velocity.

• A represents a finite set of actions allowing the agent to navigate through the environment defined by I<t>, where a<t> ∈ A is the action performed by the agent at time t.

• T : S × A × S → [0, 1] is a stochastic transition function, where $T^{s^{<t+1>}}_{s^{<t>},a^{<t>}}$ describes the probability of arriving in state s<t+1> after performing action a<t> in state s<t>.

• R : S × A × S → ℝ is a scalar reward function, where $R^{s^{<t+1>}}_{s^{<t>},a^{<t>}} \in \mathbb{R}$. For a state transition s<t> → s<t+1> at time t, the scalar reward quantifies how well the agent performed in reaching the next state.

• γ is the discount factor controlling the importance of future versus immediate rewards.

Considering the proposed reward function and an arbitrary state trajectory [s<0>, s<1>, ..., s<k>] in observation space, at any time t̂ ∈ [0, 1, ..., k], the associated cumulative future discounted reward is defined as:

$$R^{<\hat{t}>} = \sum_{t=\hat{t}}^{k} \gamma^{<t-\hat{t}>} r^{<t>}, \tag{9}$$

where the immediate reward at time t is given by r<t> . In RL theory, the statement in Eq. 9 is known as a finite horizon
learning episode of sequence length k (Sutton and Barto, 1998).
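
The cumulative discounted reward of Eq. (9) can be computed directly from a recorded reward sequence, as in the short sketch below (the reward values are placeholders).

```python
def discounted_return(rewards, t_hat, gamma=0.99):
    """Eq. (9): R^{<t_hat>} = sum_{t=t_hat}^{k} gamma^{t - t_hat} * r^{<t>}."""
    return sum(gamma ** (t - t_hat) * r for t, r in enumerate(rewards) if t >= t_hat)

# Example: a 5-step finite horizon episode with placeholder immediate rewards r^{<t>}.
rewards = [0.0, 0.0, 1.0, 0.0, -1.0]
print(discounted_return(rewards, t_hat=0, gamma=0.9))
```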

The objective in RL is to find the desired trajectory policy that maximizes the associated cumulative future reward. We define the optimal action-value function Q∗(·, ·), which estimates the maximal future discounted reward when starting in state s<t> and performing actions [a<t>, ..., a<t+k>]:

$$Q^{*}(s, a) = \max_{\pi} \mathbb{E}\,\big[R^{<\hat{t}>} \,\big|\, s^{<\hat{t}>} = s,\; a^{<\hat{t}>} = a,\; \pi\big], \tag{10}$$

where π is an action policy, viewed as a probability density function over a set of possible actions that can take place in a given state. The optimal action-value function Q∗(·, ·) maps a given state to the optimal action policy of the agent in any state:

$$\forall s \in S: \;\; \pi^{*}(s) = \arg\max_{a \in A} Q^{*}(s, a). \tag{11}$$
The optimal action-value function Q∗ satisfies the Bellman optimality equation (Bellman, 1957), which is a recursive formulation of Eq. 10:

$$Q^{*}(s, a) = \sum_{s'} T_{s,a}^{s'} \Big( R_{s,a}^{s'} + \gamma \cdot \max_{a'} Q^{*}(s', a') \Big) = \mathbb{E}_{s'} \Big[ R_{s,a}^{s'} + \gamma \cdot \max_{a'} Q^{*}(s', a') \Big], \tag{12}$$

where s′ represents a possible state visited after s = s<t> and a′ is the corresponding action policy. The model-based policy iteration algorithm was introduced in (Sutton and Barto, 1998), based on the proof that the Bellman equation is a contraction mapping (Watkins and Dayan, 1992) when written as an operator ν:

$$\forall Q, \;\; \lim_{n \rightarrow \infty} \nu^{(n)}(Q) = Q^{*}. \tag{13}$$

However, the standard reinforcement learning method described above is not feasible in high dimensional state spaces. In autonomous driving applications, the observation space is mainly composed of sensory information made up of images, radar, LiDAR, etc. Instead of the traditional approach, a non-linear parametrization of Q∗ can be encoded in the layers of a deep neural network. In the literature, such a non-linear approximator is called a Deep Q-Network (DQN) (Mnih et al., 2015) and is used for estimating the approximate action-value function:

$$Q(s^{<t>}, a^{<t>}; \Theta) \approx Q^{*}(s^{<t>}, a^{<t>}), \tag{14}$$

where Θ represents the parameters of the Deep Q-Network.

By taking into account the Bellman optimality equation 12, it is possible to train a deep Q-network in a reinforcement learning manner through the minimization of the mean squared error. The optimal expected Q value can be estimated within a training iteration i based on a set of reference parameters $\bar{\Theta}_i$ calculated in a previous iteration i′:

$$y = R_{s,a}^{s'} + \gamma \cdot \max_{a'} Q(s', a'; \bar{\Theta}_i), \tag{15}$$

where $\bar{\Theta}_i := \Theta_{i'}$. The new estimated network parameters at training step i are evaluated using the following squared error function:

$$\nabla J_{\hat{\Theta}_i} = \min_{\Theta_i} \mathbb{E}_{s,y,r,s'} \Big[ \big(y - Q(s, a; \Theta_i)\big)^2 \Big], \tag{16}$$

where $r = R_{s,a}^{s'}$. Based on Eq. 16, the maximum likelihood estimation function from Eq. 8 can be applied for calculating the weights of the deep Q-network. The gradient is approximated with random samples and the backpropagation algorithm, which uses stochastic gradient descent for training:

$$\nabla_{\Theta_i} = \mathbb{E}_{s,a,r,s'} \Big[ \big(y - Q(s, a; \Theta_i)\big)\, \nabla_{\Theta_i} Q(s, a; \Theta_i) \Big]. \tag{17}$$
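
To illustrate Eqs. (15)-(17), a minimal DQN update step is sketched below in PyTorch, with a separate target network holding the reference parameters Θ̄. The state and action dimensions, network layout and hyper-parameters are assumptions, and standard components such as the replay buffer, exploration strategy and periodic target synchronization are omitted.

```python
import copy
import torch
import torch.nn as nn

n_state, n_action, gamma = 8, 4, 0.99                      # assumed dimensions and discount factor

q_net = nn.Sequential(nn.Linear(n_state, 64), nn.ReLU(), nn.Linear(64, n_action))  # Q(s, a; Theta)
target_net = copy.deepcopy(q_net)                          # reference parameters Theta_bar (Eq. 15)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_update(s, a, r, s_next, done):
    """One squared-error update (Eq. 16) on a mini-batch of transitions (s, a, r, s')."""
    with torch.no_grad():
        # y = r + gamma * max_a' Q(s', a'; Theta_bar); no bootstrapping for terminal states
        y = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)    # Q(s, a; Theta) for the taken actions
    loss = nn.functional.mse_loss(q_sa, y)
    optimizer.zero_grad()
    loss.backward()                                         # gradient of Eq. (17) via backpropagation
    optimizer.step()

# Dummy mini-batch of transitions standing in for samples drawn from a replay buffer.
s = torch.randn(32, n_state); a = torch.randint(0, n_action, (32,))
r = torch.randn(32); s_next = torch.randn(32, n_state); done = torch.zeros(32)
dqn_update(s, a, r, s_next, done)
```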

The deep reinforcement learning community has made several independent improvements to the original DQN algo-
rithm (Mnih et al., 2015). A study on how to combine these improvements on deep reinforcement learning has been
provided by DeepMind in (Hessel et al., 2017), where the combined algorithm, entitled Rainbow, was able to outper-
form the independently competing methods. DeepMind (Hessel et al., 2017) proposes six extensions to the base DQN,
each addressing a distinct concern:
• Double Q Learning addresses the overestimation bias and decouples the selection of an action and its evalu-
ation;
• Prioritized replay samples more frequently from the data in which there is information to learn;
• Dueling Networks aim at enhancing value based RL;
• Multi-step learning is used for training speed improvement;
• Distributional RL improves the target distribution in the Bellman equation;
• Noisy Nets improve the ability of the network to ignore noisy inputs and allows state-conditional exploration.

All of the above complementary improvements have been tested on the Atari 2600 challenge. A DQN implementation for autonomous vehicles should therefore start by combining the DQN extensions stated above with respect to the desired performance. Despite the advancements in deep reinforcement learning, the direct application of the algorithm still requires a training pipeline in which the desired self-driving car behavior is simulated and modeled.

The simulated environment state is not directly accessible to the agent. Instead, sensor readings provide clues about
the true state of the environment. In order to decode the true environment state, it is not sufficient to map a single snapshot of sensor readings. The temporal information should also be included in the network's input, since the
environment’s state is modified over time. An example of DQN applied to autonomous vehicles in a simulator can be
found in (Sallab et al., 2017).

DQN has been developed to operate in discrete action spaces. In the case of an autonomous car, the discrete actions
would translate to discrete commands, such as turn left, turn right, accelerate, or brake. The DQN approach described
above has been extended to continuous action spaces based on policy gradient estimation (Lillicrap et al., 2016).
The method in (Lillicrap et al., 2016) describes a model-free actor-critic algorithm able to learn different continuous
control tasks directly from raw pixel inputs. A model-based solution for continuous Q-learning is proposed in (Gu
et al., 2016a).

Although continuous control with DRL is possible, the most common strategy for DRL in autonomous driving is
based on discrete control (Jaritz et al., 2018). The main challenge here is the training, since the agent has to explore its
environment, usually through learning from collisions. Such systems, trained solely on simulated data, tend to learn
a biased version of the driving environment. A solution here is to use Imitation Learning methods, such as Inverse
Reinforcement Learning (IRL) (Wulfmeier et al., 2016), to learn from human driving demonstrations without needing
to explore unsafe actions.

4 Deep Learning for Driving Scene Perception and Localization

Self-driving technology enables a vehicle to operate autonomously by perceiving the environment and acting on it accordingly. In the following, we give an overview of the top methods used in driving scene understanding,
considering camera based vs. LiDAR environment perception. We survey object detection and recognition, semantic
segmentation and localization in autonomous driving, as well as scene understanding using occupancy maps. Surveys
dedicated to autonomous vision and environment perception can be found in (Zhu et al., 2017) and (Janai et al., 2017).

4.1 Sensing Hardware: Camera vs. LiDAR Debate

Deep learning methods are particularly well suited for detecting and recognizing objects in 2D images and 3D point
clouds acquired from video cameras and LiDAR (Light Detection and Ranging) devices, respectively.

In the autonomous driving community, 3D perception is mainly based on LiDAR sensors, which provide a direct
3D representation of the surrounding environment in the form of 3D point clouds. The performance of a LiDAR is
measured in terms of field of view, range, resolution and rotation/frame rate. 3D sensors, such as Velodyne® , usually
have a 360◦ horizontal field of view. In order to operate at high speeds, an autonomous vehicle requires a minimum of
200m range, allowing the vehicle to react to changes in road conditions in time. The 3D object detection precision is
dictated by the resolution of the sensor, with most advanced LiDARs being able to provide a 3cm accuracy.

Recent debate sparked around camera vs. LiDAR (Light Detection and Ranging) sensing technologies. Tesla® and
Waymo® , two of the companies leading the development of self-driving technology (O’Kane, 2018), have different
philosophies with respect to their main perception sensor, as well as regarding the targeted SAE level (SAE Committee,
2014). Waymo® is building their vehicles directly as Level 5 systems, with currently more than 10 million miles driven
autonomously2 . On the other hand, Tesla® deploys its AutoPilot as an ADAS (Advanced Driver Assistance System)
component, which customers can turn on or off at their convenience. The advantage of Tesla® resides in its large
training database, consisting of more than 1 billion driven miles3 . The database has been acquired by collecting data
from customer-owned cars.

The main sensing technologies differ between the two companies. Tesla® tries to leverage its camera systems, whereas Waymo's driving technology relies more on LiDAR sensors4. Both sensing approaches have advantages and disadvantages. LiDARs offer high resolution and precise perception even in the dark, but are vulnerable to bad weather conditions (e.g. heavy rain) (Hasirlioglu et al., 2016) and involve moving parts. In contrast, cameras are cost efficient, but lack depth perception and cannot work in the dark. Cameras are also sensitive to bad weather, if the weather conditions obstruct the field of view.

Researchers at Cornell University tried to replicate LiDAR-like point clouds from visual depth estimation (Wang et al., 2019). An estimated depth map is reprojected into 3D space, with respect to the coordinate system of the left sensor of a stereo camera. The resulting point cloud is referred to as pseudo-LiDAR. The pseudo-LiDAR data can be further fed to 3D deep learning processing methods, such as PointNet (Qi et al., 2017) or AVOD (Ku et al., 2018). The success of image based 3D estimation is of high importance to the large scale deployment of autonomous cars, since the LiDAR is arguably one of the most expensive hardware components in a self-driving vehicle.
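
As a hedged sketch of the pseudo-LiDAR idea, the function below back-projects an estimated depth map into a 3D point cloud using pinhole camera intrinsics; the intrinsic values and image size are placeholders and do not reproduce the exact pipeline of (Wang et al., 2019).

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, in meters) into an N x 3 point cloud
    expressed in the camera coordinate frame (the 'pseudo-LiDAR' representation)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx        # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy        # pinhole model: Y = (v - cy) * Z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # keep only points with valid (positive) depth

# Placeholder intrinsics and a synthetic depth map for illustration.
depth = np.full((240, 320), 10.0)
cloud = depth_to_point_cloud(depth, fx=350.0, fy=350.0, cx=160.0, cy=120.0)
print(cloud.shape)   # (76800, 3)
```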

Apart from these sensing technologies, radar and ultrasonic sensors are used to enhance perception capabilities. For
example, alongside three LiDAR sensors, Waymo also makes use of five radars and eight cameras, while Tesla® cars are equipped with eight cameras, 12 ultrasonic sensors and one forward-facing radar.

4.2 Driving Scene Understanding

An autonomous car should be able to detect traffic participants and drivable areas, particularly in urban areas where
a wide variety of object appearances and occlusions may appear. Deep learning based perception, in particular Con-
volutional Neural Networks (CNNs), became the de-facto standard in object detection and recognition, obtaining
remarkable results in competitions such as the ImageNet Large Scale Visual Recognition Challenge (Russakovsky
et al., 2015).

Different neural networks architectures are used to detect objects as 2D regions of interest (Redmon et al., 2016) (Law
and Deng, 2018) (Zhang et al., 2017) (Girshick, 2015) (Iandola et al., 2016) (Dai et al., 2016) or pixel-wise segmented
areas in images (Badrinarayanan et al., 2017) (Zhao et al., 2018a) (Treml et al., 2016) (He et al., 2017), 3D bounding
boxes in LiDAR point clouds (Qi et al., 2017) (Zhou and Tuzel, 2018) (Luo et al., 2018), as well as 3D representations
of objects in combined camera-LiDAR data (Qi et al., 2018) (Chen et al., 2017) (Ku et al., 2018). Examples of
scene perception results are illustrated in Fig. 3. Being richer in information, image data is more suited for the object
recognition task. However, the real-world 3D positions of the detected objects have to be estimated, since depth
information is lost in the projection of the imaged scene onto the imaging sensor.
2 https://arstechnica.com/cars/2018/10/waymo-has-driven-10-million-miles-on-public-roads-thats-a-big-deal/
3 https://electrek.co/2018/11/28/tesla-autopilot-1-billion-miles/
4 https://www.theverge.com/transportation/2018/4/19/17204044/tesla-waymo-self-driving-car-data-simulation
Figure 3: Examples of scene perception results. (a) 2D object detection in images. (b) 3D bounding box detector
applied on LiDAR data. (c) Semantic segmentation results on images.

4.2.1 Bounding-Box-Like Object Detectors

The most popular architectures for 2D object detection in images are single stage and double stage detectors. Popular
single stage detectors are ”You Only Look Once” (Yolo) (Redmon et al., 2016) (Redmon and Farhadi, 2017) (Redmon
and Farhadi, 2018), the Single Shot multibox Detector (SSD) (Liu et al., 2016), CornerNet (Law and Deng, 2018) and
RefineNet (Zhang et al., 2017). Double stage detectors, such as RCNN (Girshick et al., 2014), Faster-RCNN (Ren
et al., 2017), or R-FCN (Dai et al., 2016), split the object detection process into two parts: region of interest candidates
proposals and bounding boxes classification. In general, single stage detectors do not provide the same performances
as double stage detectors, but are significantly faster.

If in-vehicle computation resources are scarce, one can use detectors such as SqueezeNet (Iandola et al., 2016) or (Li
et al., 2018), which are optimized to run on embedded hardware. These detectors usually have a smaller neural
network architecture, making it possible to detect objects using a reduced number of operations, at the cost of detection
accuracy.

A comparison between the object detectors described above is given in Figure 4, based on the Pascal VOC 2012 dataset and their measured mean Average Precision (mAP) at Intersection over Union (IoU) thresholds of 50% and 75%, respectively.
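
For reference, the Intersection over Union measure used in this comparison can be computed for two axis-aligned boxes as in the short example below.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection counts as correct at IoU@0.5 (or IoU@0.75) if iou(prediction, ground_truth)
# exceeds the corresponding threshold.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # ~0.143
```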

A number of publications showcased object detection on raw 3D sensory data, as well as for combined video and
LiDAR information. PointNet (Qi et al., 2017) and VoxelNet (Zhou and Tuzel, 2018) are designed to detect objects
solely from 3D data, providing also the 3D positions of the objects. However, point clouds alone do not contain the rich
visual information available in images. In order to overcome this, combined camera-LiDAR architectures are used,
such as Frustum PointNet (Qi et al., 2018), Multi-View 3D networks (MV3D) (Chen et al., 2017), or RoarNet (Shin
et al., 2018).

The main disadvantage in using a LiDAR in the sensory suite of a self-driving car is primarily its cost5 . A solution
here would be to use neural network architectures such as AVOD (Aggregate View Object Detection) (Ku et al.,
2018), which leverage LiDAR data only for training, while images are used during training and deployment. At
deployment stage, AVOD is able to predict 3D bounding boxes of objects solely from image data. In such a system,
a LiDAR sensor is necessary only for training data acquisition, much like the cars used today to gather road data for
navigation maps.

4.2.2 Semantic and Instance Segmentation

Driving scene understanding can also be achieved using semantic segmentation, representing the categorical labeling
of each pixel in an image. In the autonomous driving context, pixels can be marked with categorical labels representing
5 https://techcrunch.com/2019/03/06/waymo-to-start-selling-standalone-lidar-sensors/
Figure 4: Object detection and recognition performance comparison. The evaluation has been performed on the
Pascal VOC 2012 benchmarking database. The first four methods on the right represent single stage detectors, while
the remaining six are double stage detectors. Due to their increased complexity, the runtime performance in Frames-
per-Second (FPS) is lower for the case of double stage detectors.

drivable area, pedestrians, traffic participants, buildings, etc. It is one of the high-level tasks that paves the way towards
complete scene understanding, being used in applications such as autonomous driving, indoor navigation, or virtual
and augmented reality.

Semantic segmentation networks like SegNet (Badrinarayanan et al., 2017), ICNet (Zhao et al., 2018a), ENet (Paszke
et al., 2016), AdapNet (Valada et al., 2017), or Mask R-CNN (He et al., 2017) are mainly encoder-decoder architectures
with a pixel-wise classification layer. These are based on building blocks from some common network topologies, such
as AlexNet (Krizhevsky et al., 2012), VGG-16 (Simonyan and Zisserman, 2014), GoogLeNet (Szegedy et al., 2015),
or ResNet (He et al., 2016).

As in the case of bounding-box detectors, efforts have been made to improve the computation time of these systems
on embedded targets. In (Treml et al., 2016) and (Paszke et al., 2016), the authors proposed approaches to speed up
data processing and inference on embedded devices for autonomous driving. Both architectures are light networks
providing similar results as SegNet, with a reduced computation cost.

The robustness of semantic segmentation was specifically targeted in AdapNet (Valada et al., 2017). The model is capable of robust segmentation in various environments by adaptively learning features of expert networks based on scene conditions.

A combined bounding-box object detector and semantic segmentation result can be obtained using architectures such
as Mask R-CNN (He et al., 2017). The method extends the effectiveness of Faster-RCNN to instance segmentation by
adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition.

Figure 5 shows test results for four key semantic segmentation networks, based on the CityScapes dataset. The per-class mean Intersection over Union (mIoU) refers to multi-class segmentation, where each pixel is labeled as belonging to a specific object class, while per-category mIoU refers to foreground (object) vs. background (non-object) segmentation. The input samples have a size of 480px × 320px.

Figure 5: Semantic segmentation performance comparison on the CityScapes dataset (Cityscapes, 2018). The input samples are 480px × 320px images of driving scenes.
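
A minimal sketch of the per-class mIoU computation from a confusion matrix is given below; the number of classes and the example counts are placeholders.

```python
import numpy as np

def mean_iou(conf_matrix):
    """Per-class mean Intersection over Union from a confusion matrix C,
    where C[i, j] counts pixels of true class i predicted as class j."""
    intersection = np.diag(conf_matrix)                          # correctly labeled pixels per class
    union = conf_matrix.sum(axis=0) + conf_matrix.sum(axis=1) - intersection
    iou_per_class = intersection / np.maximum(union, 1)          # avoid division by zero
    return iou_per_class.mean()

# Toy 3-class confusion matrix (e.g. road, pedestrian, building) for illustration.
conf = np.array([[50, 2, 3],
                 [4, 30, 1],
                 [2, 1, 40]])
print(mean_iou(conf))
```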

4.2.3 Localization

Localization algorithms aim at calculating the pose (position and orientation) of the autonomous vehicle as it navigates.
Although this can be achieved with systems such as GPS, in the following we focus on deep learning techniques
for visual based localization.

Visual Localization, also known as Visual Odometry (VO), is typically determined by matching keypoint landmarks in
consecutive video frames. Given the current frame, these keypoints are used as input to a perspective-n-point mapping
algorithm for computing the pose of the vehicle with respect to the previous frame. Deep learning can be used to
improve the accuracy of VO by directly influencing the precision of the keypoints detector. In (Barnes et al., 2018), a deep neural network has been trained for learning keypoint distractors in monocular VO. The so-called learned ephemerality mask acts as a rejection scheme for keypoint outliers which might decrease the accuracy of the vehicle's localization. The structure of the environment can be mapped incrementally with the computation of the camera pose.
These methods belong to the area of Simultaneous Localization and Mapping (SLAM). For a survey on classical
SLAM techniques, we refer the reader to (Bresson et al., 2017).
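
As a hedged illustration of the classical VO step that such learned keypoint filters plug into, the sketch below recovers a camera pose from 2D-3D correspondences with OpenCV's RANSAC-based perspective-n-point solver; the intrinsics and correspondences are synthetic placeholders.

```python
import cv2
import numpy as np

def estimate_pose(points_3d, points_2d, K):
    """Classical VO step: given keypoint landmarks with known 3D positions (from the
    previous frame) and their 2D matches in the current frame, recover the relative
    camera pose with a perspective-n-point solver inside a RANSAC loop."""
    dist_coeffs = np.zeros(4)                        # assume an undistorted camera
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float64),
        points_2d.astype(np.float64),
        K, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)                       # rotation vector -> rotation matrix
    return ok, R, tvec, inliers

# Placeholder camera intrinsics and correspondences; in a learned-VO pipeline, a network
# such as the ephemerality mask of (Barnes et al., 2018) would first filter unstable keypoints.
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
pts3d = np.random.rand(50, 3) * 10.0 + np.array([0.0, 0.0, 5.0])
pts2d = (K @ (pts3d.T / pts3d[:, 2])).T[:, :2]       # ideal projections of the 3D points
ok, R, t, inliers = estimate_pose(pts3d, pts2d, K)
```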

Neural networks such as PoseNet (Kendall et al., 2015), VLocNet++ (Radwan et al., 2018), or the approaches intro-
duced in (Walch et al., 2017), (Melekhov et al., 2017), (Laskar et al., 2017), (Brachmann and Rother, 2018), or (Sarlin
et al., 2018) are using image data to estimate the 3D pose of a camera in an End2End fashion. Scene semantics can be
derived together with the estimated pose (Radwan et al., 2018).

LiDAR intensity maps are also suited for learning a real-time, calibration-agnostic localization for autonomous
cars (Barsan et al., 2018). The method uses a deep neural network to build a learned representation of the driving
scene from LiDAR sweeps and intensity maps. The localization of the vehicle is obtained through convolutional
matching. In (Tinchev et al., 2019), laser scans and a deep neural network are used to learn descriptors for localization in urban and natural environments.

Figure 6: Examples of Occupancy Grids (OG). The images show a snapshot of the driving environment together with its respective occupancy grid (Marina et al., 2019).

In order to safely navigate the driving scene, an autonomous car should be able to estimate the motion of the surround-
ing environment, also known as scene flow. Previous LiDAR based scene flow estimation techniques mainly relied
on manually designed features. In recent articles, we have noticed a tendency to replace these classical methods with
deep learning architectures able to automatically learn the scene flow. In (Ushani and Eustice, 2018), an encoding
deep network is trained on occupancy grids with the purpose of finding matching or non-matching locations between
successive timesteps.

Although much progress has been reported in the area of deep learning based localization, VO techniques are still
dominated by classical keypoints matching algorithms, combined with acceleration data provided by inertial sensors.
This is mainly due to the fact that keypoint detectors are computationally efficient and can be easily deployed on
embedded devices.

4.3 Perception using Occupancy Maps

An occupancy map, also known as Occupancy Grid (OG), is a representation of the environment which divides the
driving space into a set of cells and calculates the occupancy probability for each cell. Popular in robotics (Garcia-
Favrot and Parent, 2009), (Thrun et al., 2005), the OG representation became a suitable solution for self-driving
vehicles. A couple of OG data samples are shown in Fig. 6.
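
A minimal sketch of a probabilistic occupancy grid maintained in log-odds form is given below; the grid size, cell resolution and inverse sensor model values are illustrative assumptions.

```python
import numpy as np

class OccupancyGrid:
    """2D occupancy grid storing, for each cell, the log-odds of being occupied."""
    def __init__(self, size=100, cell_m=0.5):
        self.cell_m = cell_m
        self.log_odds = np.zeros((size, size))       # 0 corresponds to p(occupied) = 0.5

    def update_cell(self, ix, iy, hit, l_occ=0.85, l_free=-0.4):
        # Inverse sensor model: increase log-odds for cells containing a range return,
        # decrease it for cells observed as free space along the sensor ray.
        self.log_odds[ix, iy] += l_occ if hit else l_free

    def probability(self):
        # Convert log-odds back to occupancy probabilities for planning/visualization.
        return 1.0 - 1.0 / (1.0 + np.exp(self.log_odds))

grid = OccupancyGrid()
grid.update_cell(10, 12, hit=True)    # e.g. a LiDAR return falls into cell (10, 12)
grid.update_cell(10, 11, hit=False)   # the cell just in front of it is observed free
print(grid.probability()[10, 12])
```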

Deep learning is used in the context of occupancy maps either for dynamic objects detection and tracking (Ondruska
et al., 2016), probabilistic estimation of the occupancy map surrounding the vehicle (Hoermann et al., 2017),(Ramos
et al., 2016), or for deriving the driving scene context (Seeger et al., 2016), (Marina et al., 2019). In the latter case,
the OG is constructed by accumulating data over time, while a deep neural net is used to label the environment into
driving context classes, such as highway driving, parking area, or inner-city driving.

Occupancy maps represent an in-vehicle virtual environment, integrating perceptual information in a form better suited
for path planning and motion control. Deep learning plays an important role in the estimation of OG, since the
information used to populate the grid cells is inferred from processing image and LiDAR data using scene perception
methods, such as the ones described in this section of the survey.
5 Deep Learning for Path Planning and Behavior Arbitration

The ability of an autonomous car to find a route between two points, that is, a start position and a desired location,
represents path planning. During the path planning process, a self-driving car should consider all possible obstacles that are present in the surrounding environment and calculate a trajectory along a collision-free route. As stated in (Shalev-Shwartz et al., 2016), autonomous driving is a multi-agent setting where the host vehicle must apply sophisticated negotiation skills with other road users when overtaking, giving way, merging, and taking left and right turns, all while navigating unstructured urban roadways. The literature findings point to a non-trivial driving policy that must handle safety. Considering a reward function R(s̄) = −r for an accident event that should be avoided and R(s̄) ∈ [−1, 1] for the rest of the trajectories, the goal is to learn to perform difficult maneuvers smoothly and safely.

Optimal path planning for autonomous cars has to operate at high computation speeds, in order to obtain short reaction times, while satisfying specific optimization criteria. The survey in (Pendleton et al., 2017)
provides a general overview of path planning in the automotive context. It addresses the taxonomy aspects of path
planning, namely the mission planner, behavior planner and motion planner. However, (Pendleton et al., 2017) does
not include a review on deep learning technologies, although the state of the art literature has revealed an increased
interest in using deep learning technologies for path planning and behavior arbitration. Following, we discuss two
of the most representative deep learning paradigms for path planning, namely Imitation Learning (IL) (Rehder et al.,
2017), (Sun et al., 2018), (Grigorescu et al., 2019) and Deep Reinforcement Learning (DRL) based planning (Yu et al.,
2018b) (Paxton et al., 2017).

The goal in Imitation Learning (Rehder et al., 2017), (Sun et al., 2018), (Grigorescu et al., 2019) is to learn the behavior
of a human driver from recorded driving experiences (Schwarting et al., 2018). The strategy implies a vehicle teaching
process from human demonstrations. Thus, these works employ CNNs to learn planning from imitation. For example,
NeuroTrajectory (Grigorescu et al., 2019) is a perception-planning deep neural network that learns the desired state
trajectory of the ego-vehicle over a finite prediction horizon. Imitation learning can also be framed as an Inverse
Reinforcement Learning (IRL) problem, where the goal is to learn the reward function from a human driver (Gu et al.,
2016b), (Wulfmeier et al., 2016). Such methods use real drivers' behavior to learn reward functions and to generate
human-like driving trajectories.

DRL for path planning deals mainly with learning driving trajectories in a simulator (Shalev-Shwartz et al.,
2016), (Panov et al., 2018), (Yu et al., 2018b) (Paxton et al., 2017). The real environmental model is abstracted
and transformed into a virtual environment, based on a transfer model. In (Shalev-Shwartz et al., 2016), it is stated
that the objective function cannot ensure functional safety without causing a serious variance problem. The proposed
solution for this issue is to construct a policy function composed of learnable and non-learnable parts. The learnable
policy tries to maximize a reward function (which includes comfort, safety, overtake opportunity, etc.). At the same
time, the non-learnable policy follows the hard constraints of functional safety, while maintaining an acceptable level
of comfort.

Both IL and DRL for path planning have advantages and disadvantages. IL has the advantage that it can be trained
with data collected from the real-world. Nevertheless, this data is scarce on corner cases (e.g. driving off-lanes, vehicle
crashes, etc.), making the trained network’s response uncertain when confronted with unseen data. On the other hand,
although DRL systems are able to explore different driving situations within a simulated world, these models tend to
have a biased behavior when ported to the real-world.

6 Motion Controllers for AI-based Self-Driving Cars

The motion controller is responsible for computing the longitudinal and lateral steering commands of the vehicle.
Learning algorithms are used either as part of Learning Controllers, within the motion control module from Fig. 1(a),
or as complete End2End Control Systems which directly map sensory data to steering commands, as shown in Fig. 1(b).
6.1 Learning Controllers

Traditional controllers make use of an a priori model composed of fixed parameters. When robots or other autonomous
systems are used in complex environments, such as driving, traditional controllers cannot foresee every possible sit-
uation that the system has to cope with. Unlike controllers with fixed parameters, learning controllers make use of
training information to learn their models over time. With every gathered batch of training data, the approximation
of the true system model becomes more accurate, thus enabling model flexibility, consistent uncertainty estimates and
anticipation of repeatable effects and disturbances that cannot be modeled prior to deployment (Ostafew et al., 2014).
Consider the following nonlinear, state-space system:

$$\mathbf{z}^{<t+1>} = \mathbf{f}_{true}(\mathbf{z}^{<t>}, \mathbf{u}^{<t>}), \tag{18}$$

with observable state $\mathbf{z}^{<t>} \in \mathbb{R}^n$ and control input $\mathbf{u}^{<t>} \in \mathbb{R}^m$, at discrete time t. The true system $\mathbf{f}_{true}$ is not known exactly and is approximated by the sum of an a-priori model and a learned dynamics model:

$$\mathbf{z}^{<t+1>} = \underbrace{\mathbf{f}(\mathbf{z}^{<t>}, \mathbf{u}^{<t>})}_{\text{a-priori model}} + \underbrace{\mathbf{h}(\mathbf{z}^{<t>})}_{\text{learned model}}. \tag{19}$$
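
The decomposition in Eq. (19) can be sketched as follows, with a simple kinematic model as the a-priori part f and a small neural network as the learned disturbance term h; the unicycle model, state layout and network size are assumptions chosen only for illustration.

```python
import torch
import torch.nn as nn

dt = 0.05   # assumed sampling time [s]

def f_apriori(z, u):
    """A-priori model f(z, u): simple kinematic unicycle, state z = [x, y, heading, speed],
    input u = [acceleration, steering rate]."""
    x, y, th, v = z.unbind(dim=-1)
    a, w = u.unbind(dim=-1)
    return torch.stack([x + v * torch.cos(th) * dt,
                        y + v * torch.sin(th) * dt,
                        th + w * dt,
                        v + a * dt], dim=-1)

# Learned dynamics h(z): a small network trained to predict the residual between the
# a-priori prediction and the observed next state (e.g. unmodeled disturbances).
h_learned = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 4))

def f_combined(z, u):
    """Eq. (19): z^{<t+1>} = f(z, u) + h(z)."""
    return f_apriori(z, u) + h_learned(z)

z = torch.tensor([[0.0, 0.0, 0.0, 5.0]])   # placeholder state
u = torch.tensor([[0.5, 0.01]])            # placeholder control input
z_next = f_combined(z, u)
```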

In previous works, learning controllers have been introduced based on simple function approximators, such as Gaus-
sian Process (GP) modeling (Nguyen-Tuong D and M, 2008), (Meier F and S, 2014), (Ostafew et al., 2015), (Ostafew,
2016), or Support Vector Regression (Sigaud et al., 2011).

Learning techniques are commonly used to learn a dynamics model which in turn improves an a priori system model in
Iterative Learning Control (ILC) (Ostafew et al., 2013), (Panomruttanarug, 2017), (Kapania and Gerdes, 2015), (Yang
et al., 2017b) and Model Predictive Control (MPC) (Lefvre et al., 2016) (Lefevre et al., 2015), (Ostafew et al.,
2015), (Ostafew, 2016), (Drews et al., 2017a), (Drews et al., 2017b), (Rosolia et al., 2017), (Pan et al., 2017), (Pan
et al., 2018).

Iterative Learning Control (ILC) is a method for controlling systems which work in a repetitive mode, such as path
tracking in self-driving cars. It has been successfully applied to navigation in off-road terrain (Ostafew et al., 2013),
autonomous car parking (Panomruttanarug, 2017) and modeling of steering dynamics in an autonomous race car (Ka-
pania and Gerdes, 2015). Multiple benefits are highlighted, such as the usage of a simple and computationally light
feedback controller, as well as a decreased controller design effort (achieved by predicting path disturbances and
platform dynamics).

Model Predictive Control (MPC) (Rawlings and Mayne, 2009) is a control strategy that computes control actions by
solving an optimization problem. It received lots of attention in the last two decades due to its ability to handle complex
nonlinear systems with state and input constraints. The central idea behind MPC is to calculate control actions at each
sampling time by minimizing a cost function over a short time horizon, while considering observations, input-output
constraints and the system’s dynamics given by a process model. A general review of MPC techniques for autonomous
robots is given in (Kamel et al., 2018).

Learning has been used in conjunction with MPC to learn driving models (Lefvre et al., 2016), (Lefevre et al., 2015),
driving dynamics for race cars operating at their handling limits (Drews et al., 2017a), (Drews et al., 2017b), (Rosolia
et al., 2017), as well as to improve path tracking accuracy (Brunner et al., 2017), (Ostafew et al., 2015), (Ostafew,
2016). These methods use learning mechanisms to identify nonlinear dynamics that are used in the MPC’s trajectory
cost function optimization. This enables one to better predict disturbances and the behavior of the vehicle, leading to
optimal comfort and safety constraints applied to the control inputs. Training data is usually in the form of past vehicle
states and observations. For example, CNNs can be used to compute a dense occupancy grid map in a local robot-
centric coordinate system. The grid map is further passed to the MPC’s cost function for optimizing the trajectory of
the vehicle over a finite prediction horizon.
A major advantage of learning controllers is that they optimally combine traditional model-based control theory with
learning algorithms. This makes it possible to still use established methodologies for controller design and stability
analysis, together with a robust learning component applied at system identification and prediction levels.

6.2 End2End Learning Control

In the context of autonomous driving, End2End Learning Control is defined as a direct mapping from sensory data to
control commands. The inputs are usually from a high-dimensional features space (e.g. images or point clouds). As
illustrated in Fig 1(b), this is opposed to traditional processing pipelines, where at first objects are detected in the input
image, after which a path is planned and finally the computed control values are executed. A summary of some of the
most popular End2End learning systems is given in Table 1.

End2End learning can also be formulated as a back-propagation algorithm scaled up to complex models. The paradigm
was first introduced in the 1990s, when the Autonomous Land Vehicle in a Neural Network (ALVINN) system was
built (Pomerleau, 1989). ALVINN was designed to follow a pre-defined road, steering according to the observed
road’s curvature. The next milestone in End2End driving is considered to be in the mid 2000s, when DAVE (Darpa
Autonomous VEhicle) managed to drive through an obstacle-filled road, after it had been trained on hours of human
driving acquired in similar, but not identical, driving scenarios (Muller et al., 2006). Over the last couple of years,
the technological advances in computing hardware have facilitated the usage of End2End learning models. The back-
propagation algorithm for gradient estimation in deep networks is now efficiently implemented on parallel Graphics
Processing Units (GPUs). This kind of processing allows the training of large and complex network architectures,
which in turn require huge amounts of training samples (see Section 8).

End2End control papers mainly employ either deep neural networks trained offline on real-world and/or synthetic
data (Bojarski et al., 2016), (Xu et al., 2017), (Eraqi et al., 2017), (Hecker et al., 2018), (Fridman et al., 2017), (Rausch
et al., 2017), (Bechtel et al., 2018), (Chen et al., 2015), (Yang et al., 2017a), or Deep Reinforcement Learning (DRL)
systems trained and evaluated in simulation (Sallab et al., 2017) (Perot et al., 2017), (Jaritz et al., 2018). Methods for
porting simulation trained DRL models to real-world driving have also been reported (Wayve, 2018), as well as DRL
systems trained directly on real-world image data (Pan et al., 2017), (Pan et al., 2018).

End2End methods have been popularized in the last couple of years by NVIDIA®, as part of the PilotNet architecture. The approach is to train a CNN which maps raw pixels from a single front-facing camera directly to steering commands (Bojarski et al., 2016). The training data is composed of images and steering commands collected in driving
scenarios performed in a diverse set of lighting and weather conditions, as well as on different road types. Prior to
training, the data is enriched using augmentation, adding artificial shifts and rotations to the original data.
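As a rough illustration only, and not NVIDIA's actual implementation, a PilotNet-style steering regressor can be sketched in PyTorch as follows; the layer layout loosely follows the description in (Bojarski et al., 2016), while the training step uses dummy tensors in place of the recorded images and steering angles.

import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    # PilotNet-style regressor: normalized camera frame in, steering command out.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),                       # steering command
        )

    def forward(self, x):                           # x: (batch, 3, 66, 200)
        return self.regressor(self.features(x))

model = SteeringCNN()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# one (dummy) training step on a batch of images and recorded steering angles
images = torch.randn(8, 3, 66, 200)
angles = torch.randn(8, 1)
optimizer.zero_grad()
loss = loss_fn(model(images), angles)
loss.backward()
optimizer.step()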

PilotNet has about 250,000 parameters and approximately 27 million connections. The evaluation is performed in two stages: first in
simulation and secondly in a test car. An autonomy performance metric represents the percentage of time when the
neural network drives the car:

autonomy = \left( 1 - \frac{(\text{no. of interventions}) \cdot 6\ \text{sec}}{\text{elapsed time [sec]}} \right) \cdot 100.   (20)

An intervention is considered to take place when the simulated vehicle departs from the center line by more than one
meter, assuming that 6 seconds is the time needed by a human to retake control of the vehicle and bring it back to
the desired state. An autonomy of 98% was reached on a 20km drive from Holmdel to Atlantic Highlands in NJ,
USA. Through training, PilotNet learns how the steering commands are computed by a human driver (Bojarski et al.,
2017). The focus is on determining which elements in the input traffic image have the most influence on the network’s
steering decision. A method for finding the salient object regions in the input image is described, while reaching the
conclusion that the low-level features learned by PilotNet are similar to the ones that are relevant to a human driver.
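A minimal Python sketch of the autonomy metric from Eq. (20) is given below; the example numbers are hypothetical.

def autonomy(num_interventions, elapsed_time_sec, penalty_sec=6.0):
    # Eq. (20): share of time the network is driving, counting each intervention
    # as penalty_sec seconds of human control.
    return (1.0 - (num_interventions * penalty_sec) / elapsed_time_sec) * 100.0

# e.g. 10 interventions over a 1-hour (3600 s) drive -> 98.33% autonomy
print(autonomy(10, 3600))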

ALVINN (Pomerleau, 1989). Problem space: road following. Architecture: 3-layer back-propagation network. Sensor input: camera, laser range finder. ALVINN stands for Autonomous Land Vehicle In a Neural Network. Training has been conducted using simulated road images. Successful tests on the Carnegie Mellon autonomous navigation test vehicle indicate that the network can effectively follow real roads.

DAVE (Muller et al., 2006). Problem space: DARPA challenge. Architecture: 6-layer CNN. Sensor input: raw camera images. A vision-based obstacle avoidance system for off-road mobile robots. The robot is a 50 cm off-road truck, with two front color cameras. A remote computer processes the video and controls the robot via radio.

NVIDIA PilotNet (Bojarski et al., 2017). Problem space: autonomous driving in real traffic situations. Architecture: CNN. Sensor input: raw camera images. The system automatically learns internal representations of the necessary processing steps, such as detecting useful road features, with the human steering angle as the training signal.

Novel FCN-LSTM (Xu et al., 2017). Problem space: ego-motion prediction. Architecture: FCN-LSTM. Sensor input: large scale video data. A generic vehicle motion model is obtained from large scale crowd-sourced video data, while developing an end-to-end trainable architecture (FCN-LSTM) for predicting a distribution of future vehicle ego-motion data.

Novel C-LSTM (Eraqi et al., 2017). Problem space: steering angle control. Architecture: C-LSTM. Sensor input: camera frames, steering wheel angle. C-LSTM is end-to-end trainable, learning both visual and dynamic temporal dependencies of driving. Additionally, the steering angle regression problem is considered as classification, while imposing a spatial relationship between the output layer neurons.

Drive360 (Hecker et al., 2018). Problem space: steering angle and velocity control. Architecture: CNN + Fully Connected + LSTM. Sensor input: surround-view cameras, CAN bus reader. The sensor setup provides data for a 360-degree view of the area surrounding the vehicle. A new driving dataset is collected, covering diverse scenarios. A novel driving model is developed by integrating the surround-view cameras with the route planner.

DNN policy (Rausch et al., 2017). Problem space: steering angle control. Architecture: CNN + FC. Sensor input: camera images. The trained neural net directly maps pixel data from a front-facing camera to steering commands and does not require any other sensors. The controller performance is compared with the steering behavior of a human driver.

DeepPicar (Bechtel et al., 2018). Problem space: steering angle control. Architecture: CNN. Sensor input: camera images. DeepPicar is a small scale replica of a real self-driving car called DAVE-2 by NVIDIA. It uses the same network architecture and can drive itself in real-time using a web camera and a Raspberry Pi 3.

TORCS DRL (Sallab et al., 2017). Problem space: lane keeping and obstacle avoidance. Architecture: DQN + RNN + CNN. Sensor input: TORCS simulator images. It incorporates Recurrent Neural Networks for information integration, enabling the car to handle partially observable scenarios. It also reduces the computational complexity for deployment on embedded hardware.

TORCS E2E (Yang et al., 2017a). Problem space: steering angle control in a simulated environment (TORCS). Architecture: CNN. Sensor input: TORCS simulator images. The image features are split into three categories (sky-related, roadside-related and road-related features). Two experimental frameworks are used to investigate the importance of each single feature for training a CNN controller.

Agile Autonomous Driving (Pan et al., 2018). Problem space: steering angle and velocity control for aggressive driving. Architecture: CNN. Sensor input: raw camera images. A CNN, referred to as the learner, is trained with optimal trajectory examples provided at training time by an MPC controller. The MPC acts as an expert, encoding the scene dynamics into the layers of the neural network.

WRC6 AD (Jaritz et al., 2018). Problem space: driving in a racing game. Architecture: CNN + LSTM encoder. Sensor input: WRC6 racing game. An Asynchronous Actor-Critic (A3C) framework is used to learn the car control in a physically and graphically realistic rally game, with the agents evolving simultaneously on different tracks.

Table 1: Summary of End2End learning methods.

End2End architectures similar to PilotNet, which map visual data to steering commands, have been reported in (Rausch et al., 2017), (Bechtel et al., 2018), (Chen et al., 2015). In (Xu et al., 2017), autonomous driving is formulated as a
future ego-motion prediction problem. The introduced FCN-LSTM (Fully Convolutional Network - Long-Short Term
Memory) method is designed to jointly train pixel-level supervised tasks using a fully convolutional encoder, together
with motion prediction through a temporal encoder. The combination of visual and temporal dependencies of the input data has also been considered in (Eraqi et al., 2017), where the C-LSTM (Convolutional Long Short-Term
Memory) network has been proposed for steering control. In (Hecker et al., 2018), surround-view cameras were used
for End2End learning. The claim is that human drivers also use rear and side-view mirrors for driving, thus all the
information from around the vehicle needs to be gathered and integrated into the network model in order to output a
suitable control command.
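As an illustrative sketch only (not the exact FCN-LSTM or C-LSTM architectures of the cited works), a CNN encoder followed by an LSTM over a frame sequence can be written in PyTorch as follows; input sizes and layer widths are arbitrary choices for the example.

import torch
import torch.nn as nn

class TemporalSteeringNet(nn.Module):
    # A shared visual encoder per frame, followed by an LSTM over the frame sequence,
    # predicting one steering angle per time step.
    def __init__(self, hidden_size=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> (batch*time, 64)
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, frames):                          # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1))      # encode every frame
        feats = feats.view(b, t, -1)
        hidden, _ = self.lstm(feats)
        return self.head(hidden)                        # (batch, time, 1) steering angles

steering = TemporalSteeringNet()(torch.randn(2, 8, 3, 120, 160))
print(steering.shape)                                   # torch.Size([2, 8, 1])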
To carry out an evaluation of the Tesla® Autopilot system, (Fridman et al., 2017) proposed an End2End Convolutional
Neural Network framework. It is designed to determine differences between Autopilot and its own output, taking into
consideration edge cases. The network was trained using real data, collected from over 420 hours of real road driving.
The comparison between Tesla® ’s Autopilot and the proposed framework was done in real-time on a Tesla® car. The
evaluation revealed an accuracy of 90.4% in detecting differences between both systems and the control transfer of the
car to a human driver.

Another approach to design End2End driving systems is DRL. This is mainly performed in simulation, where an
autonomous agent can safely explore different driving strategies. In (Sallab et al., 2017), a DRL End2End system
is used to compute steering commands in the TORCS game simulation engine. Considering a more complex virtual environment, (Perot et al., 2017) proposed an asynchronous advantage Actor-Critic (A3C) method for training a CNN on images and vehicle velocity information. The same idea was further enhanced in (Jaritz et al., 2018), achieving faster convergence and better generalization. Both articles rely on the following procedure: receiving the
current state of the game, deciding on the next control commands and then getting a reward on the next iteration. The
experimental setup benefited from a realistic car game, namely World Rally Championship 6, and also from other
simulated environments, like TORCS.
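The interaction loop shared by these simulator-based DRL approaches can be sketched as follows; the env object is assumed to expose a gym-like reset/step interface and RandomPolicy is a placeholder for the trained network, so the snippet is illustrative rather than tied to TORCS or WRC6.

import numpy as np

class RandomPolicy:
    # Stand-in for a neural network policy: maps an image observation to
    # [steering, throttle]; a DRL agent would update it from the collected rewards.
    def act(self, observation):
        return np.random.uniform(-1.0, 1.0, size=2)

def run_episode(env, policy, max_steps=1000):
    # Generic loop: observe the current state, apply a control command, receive a reward.
    observation = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = policy.act(observation)                 # e.g. steering and throttle
        observation, reward, done, _info = env.step(action)
        total_reward += reward                           # e.g. progress along the track
        if done:                                         # crash or episode end
            break
    return total_reward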

The next trend in DRL based control seems to be the inclusion of classical model-based control techniques, as the
ones detailed in Section 6.1. The classical controller provides a stable and deterministic model on top of which the
policy of the neural network is estimated. In this way, the hard constraints of the modeled system are transferred into the neural network policy (Zhang et al., 2016). A DRL policy trained on real-world image data has been proposed in (Pan et al., 2017) and (Pan et al., 2018) for the task of aggressive driving. In this case, a CNN, referred to as the
learner, is trained with optimal trajectory examples provided at training time by a model predictive controller.

7 Safety of Deep Learning in Autonomous Driving

Safety implies the absence of the conditions that cause a system to be dangerous (Ferrel, 2010). Demonstrating
the safety of a system which is running deep learning techniques depends heavily on the type of technique and the
application context. Thus, reasoning about the safety of deep learning techniques requires:

• understanding the impact of the possible failures;

• understanding the context within the wider system;

• defining the assumptions regarding the system context and the environment in which it will likely be used;

• defining what a safe behavior means, including non-functional constraints.

In (Burton S., 2017), an example is mapped on the above requirements with respect to a deep learning component.
The problem space for the component is pedestrian detection with convolutional neural networks. The top level task
of the system is to locate an object of class person from a distance of 100 meters, with a lateral accuracy of +/- 20
cm, a false negative rate of 1% and a false positive rate of 5%. The assumption is that the braking distance and speed are sufficient to react when detecting persons which are 100 meters ahead of the planned trajectory of the vehicle.
Alternative sensing methods can be used in order to reduce the overall false negative and false positive rates of the
system to an acceptable level. The context information is that the distance and the accuracy shall be mapped to the
dimensions of the image frames presented to the CNN.
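A minimal sketch of how such a top-level requirement could be checked against evaluation counts is given below; the counts are hypothetical and the thresholds are the ones from the example above.

def detection_rates(true_positives, false_positives, false_negatives, true_negatives):
    # False negative / false positive rates from evaluation counts.
    fn_rate = false_negatives / (false_negatives + true_positives)
    fp_rate = false_positives / (false_positives + true_negatives)
    return fn_rate, fp_rate

def meets_safety_goal(fn_rate, fp_rate, max_fn=0.01, max_fp=0.05):
    # Check against the example requirement (false negatives <= 1%, false positives <= 5%).
    return fn_rate <= max_fn and fp_rate <= max_fp

fn, fp = detection_rates(true_positives=990, false_positives=40,
                         false_negatives=10, true_negatives=960)
print(fn, fp, meets_safety_goal(fn, fp))   # 0.01 0.04 True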

There is no commonly agreed definition for the term safety in the context of machine learning or deep learning. In
(Varshney, 2016), Varshney defines safety in terms of risk, epistemic uncertainty and the harm incurred by unwanted
outcomes. He then analyses the choice of cost function and the appropriateness of minimizing the empirical average
training cost.
(Amodei et al., 2016) takes into consideration the problem of accidents in machine learning systems. Such accidents
are defined as unintended and harmful behaviors that may emerge from a poor AI system design. The authors present a
list of five practical research problems related to accident risk, categorized according to whether the problem originates
from having the wrong objective function (avoiding side effects and avoiding reward hacking), an objective function
that is too expensive to evaluate frequently (scalable supervision), or undesirable behavior during the learning process
(safe exploration and distributional shift).

Enlarging the scope of safety, (Möller, 2012) proposes a decision-theoretic definition of safety that applies to a broad set of domains and systems, defining safety as the reduction or minimization of risk and epistemic uncertainty
associated with unwanted outcomes that are severe enough to be seen as harmful. The key points in this definition
are: i) the cost of unwanted outcomes has to be sufficiently high in some human sense for events to be harmful, and ii)
safety involves reducing both the probability of expected harms, as well as the possibility of unexpected harms.

Regardless of the above empirical definitions and possible interpretations of safety, the use of deep learning compo-
nents in safety critical systems is still an open question. The ISO 26262 standard for functional safety of road vehicles
provides a comprehensive set of requirements for assuring safety, but does not address the unique characteristics of
deep learning-based software.

(Salay et al., 2017) addresses this gap by analyzing the places where machine learning can impact the standard and
provides recommendations on how to accommodate this impact. These recommendations are focused on identifying the hazards, implementing tools and mechanisms for fault and failure situations, but also on ensuring complete training datasets and designing a multi-level architecture. The usage of specific techniques for various stages
within the software development life-cycle is desired.

The standard ISO 26262 recommends the use of a Hazard Analysis and Risk Assessment (HARA) method to identify
hazardous events in the system and to specify safety goals that mitigate the hazards. The standard has 10 parts. Our
focus is on Part 6: product development at the software level, the standard following the well-known V model for
engineering. Automotive Safety Integrity Level (ASIL) refers to a risk classification scheme defined in ISO 26262 for
an item (e.g. subsystem) in an automotive system.

ASIL represents the degree of rigor required (e.g., testing techniques, types of documentation required, etc.) to reduce
risk, where ASIL D represents the highest and ASIL A the lowest risk. If an element is assigned to QM (Quality
Management), it does not require safety management. The ASIL assessed for a given hazard is at first assigned to the
safety goal set to address the hazard and is then inherited by the safety requirements derived from that goal (Salay
et al., 2017).

According to ISO 26262, a hazard is defined as a "potential source of harm caused by a malfunctioning behavior, where harm is a physical injury or damage to the health of a person" (Bernd et al., 2012). Nevertheless, a deep learning component can create new types of hazards. Such a hazard typically occurs because humans assume that the automated driver assistance (often developed using learning techniques) is more reliable than it actually
is (Parasuraman and Riley, 1997).

Due to its complexity, a deep learning component can fail in unique ways. For example, in Deep Reinforcement
Learning systems, faults in the reward function can negatively affect the trained model (Amodei et al., 2016). In such
a case, the automated vehicle figures out that it can avoid getting penalized for driving too close to other vehicles by
exploiting certain sensor vulnerabilities so that it can’t see how close it is getting. Although hazards such as these
may be unique to deep reinforcement learning components, they can be traced to faults, thus fitting within the existing
guidelines of ISO 26262.

A key requirement for analyzing the safety of deep learning components is to examine whether immediate human costs
of outcomes exceed some harm severity thresholds. Undesired outcomes are truly harmful in a human sense and their
effect is felt in near real-time. These outcomes can be classified as safety issues. The cost of deep learning decisions is
related to optimization formulations which explicitly include a loss function L. The loss function L : X ×Y ×Y → R is
defined as the measure of the error incurred by predicting the label of an observation x as f (x), instead of y. Statistical
learning defines the risk of f as the expected value of the loss of f under P:

R(f) = \int L(x, f(x), y) \, dP(x, y),   (21)

where X × Y is a random example space of observations x and labels y, distributed according to a probability distribution P(X,Y). The statistical learning problem consists of finding the function f that optimizes (i.e. minimizes) the
risk R (Jose, 2018). For an algorithm’s hypothesis h and loss function L, the expected loss on the training set is called
the empirical risk of h:

R_{emp}(h) = \frac{1}{m} \sum_{i=1}^{m} L\left(x^{(i)}, h(x^{(i)}), y^{(i)}\right).   (22)
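For concreteness, a minimal Python sketch of the empirical risk in Eq. (22) is given below, using a zero-one loss and a toy threshold classifier as placeholders.

def zero_one_loss(x, prediction, label):
    # L(x, h(x), y): 1 for a misclassification, 0 otherwise.
    return float(prediction != label)

def empirical_risk(hypothesis, samples, loss=zero_one_loss):
    # Eq. (22): average loss of hypothesis h over the training set.
    return sum(loss(x, hypothesis(x), y) for x, y in samples) / len(samples)

# toy example: threshold classifier on scalar observations
samples = [(-2.0, 0), (-0.5, 0), (0.3, 1), (1.7, 1), (0.1, 0)]
h = lambda x: int(x > 0.0)
print(empirical_risk(h, samples))   # 0.2 (one of five samples misclassified)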

A machine learning algorithm then optimizes the empirical risk on the expectation that the risk decreases significantly.
However, this standard formulation does not consider the issues related to the uncertainty that is relevant for safety.
The distribution of the training samples (x_1, y_1), ..., (x_m, y_m) is assumed to be drawn from the true underlying probability distribution of (X,Y), which may not always be the case. Usually the probability distribution is unknown, precluding the use of domain adaptation techniques (Daumé and Marcu, 2006), (Caruana et al., 2015). This is one of the epistemic uncertainties relevant for safety, because training on a dataset with a different distribution can cause much harm through bias.

In reality, a machine learning system only encounters a finite number of test samples and an actual operational risk is
an empirical quantity on the test set. The operational risk may be much larger than the actual risk for small cardinality
test sets, even if h is risk-optimal. This uncertainty caused by the instantiation of the test set can have large safety
implications on individual test samples (R. Varshney and Alemzadeh, 2016).

Faults and failures of a programmed component (e.g. one using a formal algorithm to solve a problem) are totally
different from the ones of a deep learning component. Specific faults of a deep learning component can be caused
by unreliable or noisy sensor signals (video signal due to bad weather, radar signal due to absorbing construction
materials, GPS data, etc.), neural network topology, learning algorithm, training set or unexpected changes in the
environment (e.g. unknown driving scenes or accidents on the road). We must mention the first autonomous driving
accident, produced by a Tesla® car, where, due to object misclassification errors, the AutoPilot function caused the vehicle to collide with a truck (Levin, 2018). Despite the 130 million miles of testing and evaluation, the accident occurred under extremely rare circumstances, also known as Black Swans, given the height of the truck, its white color under
bright sky, combined with the positioning of the vehicle across the road.

Self-driving vehicles must have fail-safe mechanisms, usually encountered under the name of Safety Monitors. These
must stop the autonomous control software once a failure is detected (Koopman, 2017). Specific fault types and
failures have been cataloged for neural networks in (Kurd et al., 2007), (Harris, 2016) and (McPherson, 2018). This
led to the development of specific and focused tools and techniques to help find faults. (Chakarov et al., 2018)
describes a technique for debugging misclassifications due to bad training data, while an approach for troubleshooting
faults due to complex interactions between linked machine learning components is proposed in (Nushi et al., 2017).
In (Takanami et al., 2000), a white box technique is used to inject faults onto a neural network by breaking the links
or randomly changing the weights.
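As an illustration of this idea (and not the specific white-box technique of (Takanami et al., 2000)), the PyTorch sketch below injects faults into a trained model by zeroing a random fraction of its weights and optionally perturbing the rest.

import torch

@torch.no_grad()
def inject_weight_faults(model, drop_fraction=0.01, noise_std=0.0, seed=0):
    # Randomly break a fraction of the links (set weights to zero) and optionally
    # perturb the remaining weights with Gaussian noise, then return the faulty model.
    gen = torch.Generator().manual_seed(seed)
    for param in model.parameters():
        mask = torch.rand(param.shape, generator=gen) < drop_fraction
        param[mask] = 0.0                                  # broken links
        if noise_std > 0.0:
            param.add_(torch.randn(param.shape, generator=gen) * noise_std)
    return model

# example: compare a clean and a faulty version of the same network on one input
net = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
x = torch.randn(1, 10)
clean_out = net(x).clone()
faulty_out = inject_weight_faults(net, drop_fraction=0.05, noise_std=0.01)(x)
print((clean_out - faulty_out).abs().max())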

The training set plays a key role in the safety of the deep learning component. ISO 26262 standard states that the
component behavior shall be fully specified and each refinement shall be verified with respect to its specification. This
assumption is violated in the case of a deep learning system, where a training set is used instead of a specification. It is
not clear how to ensure that the corresponding hazards are always mitigated. The training process is not a verification
process since the trained model will be correct by construction with respect to the training set, up to the limits of the
model and the learning algorithm (Salay et al., 2017). Effects of these considerations are visible in the commercial autonomous vehicle market, where Black Swan events caused by data not present in the training set may lead to fatalities (McPherson, 2018).

Figure 7: Sensor suite of the nuTonomy® self-driving car (Caesar et al., 2019).

Detailed requirements shall be formulated and traced to hazards. Such a requirement can specify how the training,
validation and testing sets are obtained. Subsequently, the data gathered can be verified with respect to this specifica-
tion. Furthermore, some specifications, for example the fact that a vehicle cannot be wider than 3 meters, can be used
to reject false positive detections. Such properties are used even directly during the training process to improve the
accuracy of the model (Katz et al., 2017).

Machine learning and deep learning techniques are starting to become effective and reliable even for safety-critical systems, although complete safety assurance for this type of system is still an open question. Current standards and regulations from the automotive industry cannot be fully mapped to such systems, requiring the development of new safety standards targeted at deep learning.

8 Data Sources for Training Autonomous Driving Systems

Undeniably, the usage of real world data is a key requirement for training and testing an autonomous driving compo-
nent. The high amount of data needed in the development stage of such components made data collection on public
roads a valuable activity. In order to obtain a comprehensive description of the driving scene, the vehicle used for
data collection is equipped with a variety of sensors such as radar, LIDAR, GPS, cameras, Inertial Measurement Units
(IMU) and ultrasonic sensors. The sensor setup differs from vehicle to vehicle, depending on how the data is planned
to be used. A common sensor setup for an autonomous vehicle is presented in Fig. 7.

In recent years, mainly due to the large and increasing research interest in autonomous vehicles, many driving datasets were made public and documented. They vary in size, sensor setup and data format. Researchers need only to identify the dataset which best fits their problem space. (Janai et al., 2017) published a survey on a broad spectrum of datasets. These datasets address the computer vision field in general, but only a few of them fit the autonomous driving topic.

A comprehensive survey on publicly available datasets for self-driving vehicle algorithms can be found in (Yin and Berger, 2017). The paper presents 27 available datasets containing data recorded on public roads. The datasets are compared from different perspectives, such that the reader can select the one best suited for their task.

Despite our extensive search, we are yet to find a master dataset that combines at least parts of the ones available.
The reason may be that there are no standard requirements for the data format and sensor setup. Each dataset heavily
depends on the objective of the algorithm for which the data was collected. Recently, the companies Scale® and
nuTonomy® started to create one of the largest and most detailed self-driving datasets on the market to date6. This includes Berkeley DeepDrive (Yu et al., 2018a), a dataset developed by researchers at Berkeley University. More relevant datasets from the literature are pending to be merged7.

In (Fridman et al., 2017), the authors present a study that seeks to collect and analyze large scale naturalistic data
of semi-autonomous driving in order to better characterize the state of the art of the current technology. The study
involved 99 participants, 29 vehicles, 405,807 miles and approximately 5.5 billion video frames. Unfortunately, the
data collected in this study is not available for the public.

In the remainder of this section, we highlight the distinctive characteristics of the most relevant publicly available datasets.
NuScenes (Caesar et al., 2019). Problem space: 3D tracking, 3D object detection. Sensor setup: radar, Lidar, EgoData, GPS, IMU, camera. Size: 345 GB (1000 scenes, clips of 20 s). Location: Boston, Singapore. Traffic condition: urban. License: CC BY-NC-SA 3.0.

AMUSE (Koschorrek et al., 2013). Problem space: SLAM. Sensor setup: omnidirectional camera, IMU, EgoData, GPS. Size: 1 TB (7 clips). Location: Los Angeles. Traffic condition: urban. License: CC BY-NC-ND 3.0.

Ford (Pandey et al., 2011). Problem space: 3D tracking, 3D object detection. Sensor setup: omnidirectional camera, IMU, Lidar, GPS. Size: 100 GB. Location: Michigan. Traffic condition: urban. License: not specified.

KITTI (Geiger et al., 2013). Problem space: 3D tracking, 3D object detection, SLAM. Sensor setup: monocular cameras, IMU, Lidar, GPS. Size: 180 GB. Location: Karlsruhe. Traffic condition: urban, rural. License: CC BY-NC-SA 3.0.

Udacity (Udacity, 2018). Problem space: 3D tracking, 3D object detection. Sensor setup: monocular cameras, IMU, Lidar, GPS, EgoData. Size: 220 GB. Location: Mountain View. Traffic condition: rural. License: MIT.

Cityscapes (Cityscapes, 2018). Problem space: semantic understanding. Sensor setup: color stereo cameras. Size: 63 GB (5 clips). Location: Darmstadt, Zurich, Strasbourg. Traffic condition: urban. License: CC BY-NC-SA 3.0.

Oxford (Maddern et al., 2017). Problem space: 3D tracking, 3D object detection, SLAM. Sensor setup: stereo and monocular cameras, GPS, Lidar, IMU. Size: 23 TB (133 clips). Location: Oxford. Traffic condition: urban, highway. License: CC BY-NC-SA 3.0.

CamVid (Brostow et al., 2009). Problem space: object detection, segmentation. Sensor setup: monocular color camera. Size: 8 GB (4 clips). Location: Cambridge. Traffic condition: urban. License: N/A.

Daimler pedestrian (Flohr and Gavrila, 2013). Problem space: pedestrian detection, classification, segmentation, path prediction. Sensor setup: stereo and monocular cameras. Size: 91 GB (8 clips). Location: Amsterdam, Beijing. Traffic condition: urban. License: N/A.

Caltech (Dollar et al., 2009). Problem space: tracking, segmentation, object detection. Sensor setup: monocular camera. Size: 11 GB. Location: Los Angeles (USA). Traffic condition: urban. License: N/A.

Table 2: Summary of datasets for training autonomous driving systems.

KITTI Vision Benchmark dataset (KITTI) (Geiger et al., 2013). Provided by the Karlsruhe Institute of Technology
(KIT) from Germany, this dataset fits the challenges of benchmarking stereo-vision, optical flow, 3D tracking, 3D
object detection or SLAM algorithms. It is known as the most prestigious dataset in the self-driving vehicles domain.
To date, it counts more than 2000 citations in the literature. The data collection vehicle is equipped with multiple
high-resolution color and gray-scale stereo cameras, a Velodyne 3D LiDAR and high-precision GPS/IMU sensors. In
total, it provides 6 hours of driving data collected in both rural and highway traffic scenarios around Karlsruhe. The
dataset is provided under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License.

NuScenes dataset (Caesar et al., 2019). Constructed by nuTonomy, this dataset contains 1000 driving scenes collected
from Boston and Singapore, two cities known for their dense traffic and highly challenging driving situations. In order to facilitate common computer vision tasks, such as object detection and tracking, the providers annotated 25 object classes with accurate 3D bounding boxes at 2 Hz over the entire dataset. Collection of vehicle data is still in progress. The final dataset will include approximately 1.4 million camera images, 400,000 Lidar sweeps, 1.3 million radar sweeps and 1.1 million object bounding boxes in 40,000 keyframes. The dataset is provided under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License.

6 https://venturebeat.com/2018/09/14/scale-and-nutonomy-release-nuscenes-a-self-driving-dataset-with-over-1-4-million-images/
7 https://scale.com/open-datasets

Automotive multi-sensor dataset (AMUSE) (Koschorrek et al., 2013). Provided by Linköping University of Sweden,
it consists of sequences recorded in various environments from a car equipped with an omnidirectional multi-camera,
height sensors, an IMU, a velocity sensor and a GPS. The API for reading these data sets is provided to the public,
together with a collection of long multi-sensor and multi-camera data streams stored in the given format. The dataset
is provided under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.

Ford campus vision and lidar dataset (Ford) (Pandey et al., 2011). Provided by University of Michigan, this dataset
was collected using a Ford F250 pickup truck equipped with a professional (Applanix POS-LV) and a consumer (Xsens MTi-G) inertial measurement unit (IMU), a Velodyne Lidar scanner, two push-broom forward-looking Riegl Lidars
and a Point Grey Ladybug3 omnidirectional camera system. The approx. 100 GB of data was recorded around the Ford
Research campus and downtown Dearborn, Michigan in 2009. The dataset is well suited to test various autonomous
driving and simultaneous localization and mapping (SLAM) algorithms.

Udacity dataset (Udacity, 2018). The vehicle sensor setup contains monocular color cameras, GPS and IMU sensors,
as well as a Velodyne 3D Lidar. The size of the dataset is 223GB. The data is labeled and the user is provided with the
corresponding steering angle that was recorded during the test runs by the human driver.

Cityscapes dataset (Cityscapes, 2018). Provided by Daimler AG R&D, Germany; the Max Planck Institute for Informatics (MPI-IS), Germany; and the TU Darmstadt Visual Inference Group, Germany, the Cityscapes dataset focuses on semantic understanding of urban street scenes, which is the reason it contains only stereo-vision color images. The
diversity of the images is very large: 50 cities, different seasons (spring, summer, fall), various weather conditions
and different scene dynamics. There are 5000 images with fine annotations and 20000 images with coarse annota-
tions. Two important challenges have used this dataset for benchmarking the development of algorithms for semantic
segmentation (Zhao et al., 2017) and instance segmentation (Liu et al., 2017).

The Oxford dataset (Maddern et al., 2017). Provided by Oxford University, UK, the dataset collection spanned over 1
year, resulting in over 1000 km of recorded driving with almost 20 million images collected from 6 cameras mounted
to the vehicle, along with LIDAR, GPS and INS ground truth. Data was collected in all weather conditions, including
heavy rain, night, direct sunlight and snow. One of the particularities of this dataset is that the vehicle frequently drove
the same route over the period of a year to enable researchers to investigate long-term localization and mapping for
autonomous vehicles in real-world, dynamic urban environments.

The Cambridge-driving Labeled Video Dataset (CamVid) (Brostow et al., 2009). Provided by the University of Cam-
bridge, UK, it is one of the most cited datasets in the literature and the first to be released publicly, containing a collection
of videos with object class semantic labels, along with metadata annotations. The database provides ground truth
labels that associate each pixel with one of 32 semantic classes. The sensor setup is based on only one monocular
camera mounted on the dashboard of the vehicle. The complexity of the scenes is quite low, the vehicle being driven
only in urban areas with relatively low traffic and good weather conditions.

The Daimler pedestrian benchmark dataset (Flohr and Gavrila, 2013). Provided by Daimler AG R&D and University
of Amsterdam, this dataset fits the topics of pedestrian detection, classification, segmentation and path prediction.
Pedestrian data is observed from a traffic vehicle using only on-board mono and stereo cameras. It is the first dataset which contains pedestrians. Recently, the dataset was extended with cyclist video samples captured with the
same setup (Li et al., 2016).

Caltech pedestrian detection dataset (Caltech) (Dollar et al., 2009). Provided by California Institute of Technology,
US, the dataset contains richly annotated videos, recorded from a moving vehicle, with challenging images of low
resolution and frequently occluded people. There are approximately 10 hours of driving scenarios comprising about 250,000 frames, with a total of 350,000 bounding boxes and 2,300 unique pedestrian annotations. The annotations include
both temporal correspondences between bounding boxes and detailed occlusion labels.

Given the variety and complexity of the available databases, choosing one or more to develop and test an autonomous
driving component may be difficult. As can be observed, the sensor setup varies among all the available databases.
For localization and vehicle motion, the Lidar and GPS/IMU sensors are necessary, with the most popular Lidar
sensors used being Velodyne (Velodyne, 2018) and Sick (Sick, 2018). Data recorded from a radar sensor is present
only in the NuScenes dataset. The radar manufacturers adopt proprietary data formats which are not public. Almost
all available datasets include images captured from a video camera, while there is a balanced use of monocular and
stereo cameras mainly configured to capture gray-scale images. AMUSE and Ford databases are the only ones that
use omnidirectional cameras.

Besides raw recorded data, the datasets usually contain miscellaneous files such as annotations, calibration files, labels,
etc. In order to cope with these files, the dataset provider must offer tools and software that enable the user to read and
post-process the data. Splitting of the datasets is also an important factor to consider, because some of the datasets
(e.g. Caltech, Daimler, Cityscapes) already provide pre-processed data that is classified in different sets: training,
testing and validation. This enables consistent benchmarking of the desired algorithms against similar approaches.

Another aspect to consider is the license type. The most commonly used license is Creative Commons Attribution-
NonCommercial-ShareAlike 3.0. It allows the user to copy and redistribute in any medium or format and also to remix,
transform, and build upon the material. KITTI and NuScenes databases are examples of such distribution license. The
Oxford database uses a Creative Commons Attribution-NonCommercial 4.0 license which, compared with the first license type, does not force the user to distribute their contributions under the same license as the database. In contrast, the AMUSE database is licensed under Creative Commons Attribution-NonCommercial-NoDerivs 3.0, which prohibits distributing the database if modifications of the material are made.

With very few exceptions, the datasets are collected from a single city, which is usually around university campuses
or company locations in Europe, the US, or Asia. Germany is the most active country in recording driving data.
Unfortunately, all available datasets together cover a very small portion of the world map. One reason for this is
the memory size of the data which is in direct relation with the sensor setup and the quality. For example, the Ford
dataset takes around 30 GB for each driven kilometer, which means that covering an entire city will take hundreds of
TeraBytes of driving data. The majority of the available datasets consider sunny, daylight and urban conditions, these
being ideal operating conditions for autonomous driving systems.

9 Computational Hardware and Deployment

Deploying deep learning algorithms on target edge devices is not a trivial task. The main limitations when it comes
to vehicles are the price, performance issues and power consumption. Therefore, embedded platforms are becoming
essential for integration of AI algorithms inside vehicles due to their portability, versatility, and energy efficiency.

The market leader in providing hardware solutions for deploying deep learning algorithms inside autonomous cars is
NVIDIA® . DRIVE PX (NVIDIA, b) is an AI car computer which was designed to enable the auto-makers to focus
directly on the software for autonomous vehicles.

The newest version of the DRIVE PX architecture is based on two Tegra X2 (NVIDIA, f) systems-on-chip (SoCs). Each SoC contains two Denver (NVIDIA, a) cores, 4 ARM A57 cores and a graphics processing unit (GPU) from the Pascal (NVIDIA, e) generation. NVIDIA® DRIVE PX is capable of performing real-time environment perception, path planning
and localization. It combines deep learning, sensor fusion and surround vision to improve the driving experience.

Introduced in September 2018, NVIDIA® DRIVE AGX developer kit platform was presented as the world’s most
advanced self-driving car platform (NVIDIA, c), being based on the Volta technology (NVIDIA, d). It is available in
two different configurations, namely DRIVE AGX Xavier and DRIVE AGX Pegasus.
DRIVE AGX Xavier is a scalable open platform that can serve as an AI brain for self-driving vehicles; it is an energy-efficient computing platform delivering 30 trillion operations per second, while meeting automotive standards like the ISO 26262 functional safety specification. NVIDIA® DRIVE AGX Pegasus improves the performance with an architecture built on two NVIDIA® Xavier processors and two state-of-the-art Tensor Core GPUs.

A hardware platform used by the car makers for Advanced Driver Assistance Systems (ADAS) is the R-Car V3H
system-on-chip (SoC) platform from Renesas Autonomy (Renesas, b). This SoC provides the possibility to implement
high performance computer vision with low power consumption. R-Car V3H is optimized for applications that involve
the usage of stereo cameras, containing dedicated hardware for convolutional neural networks, dense optical flow,
stereo-vision, and object classification. The hardware features four 1.0 GHz Arm Cortex-A53 MPCore cores, which
makes R-Car V3H a suitable hardware platform which can be used to deploy trained inference engines for solving
specific deep learning tasks inside the automotive domain.

Renesas also provides a similar SoC, called R-Car H3 (Renesas, a) which delivers improved computing capabilities
and compliance with functional safety standards. Equipped with new CPU cores (Arm Cortex-A57), it can be used
as an embedded platform for deploying various deep learning algorithms, compared with R-Car V3H, which is only
optimized for CNNs.

A Field-Programmable Gate Array (FPGA) is another viable solution, showing great improvements in both perfor-
mance and power consumption in deep learning applications. The suitability of the FPGAs for running deep learning
algorithms can be analyzed from four major perspectives: efficiency and power, raw computing power, flexibility and
functional safety. Our study is based on the research published by Intel (Nurvitadhi et al., 2017), Microsoft (Ovtcharov
et al., 2015) and UCLA (Cong et al., 2018).

By reducing the latency in deep learning applications, FPGAs provide additional raw computing power. The memory
bottlenecks, associated with external memory accesses, are reduced or even eliminated by the high amount of on-chip
cache memory. In addition, FPGAs have the advantages of supporting a full range of data types, together with custom
user-defined types.

FPGAs are optimized when it comes to efficiency and power consumption. The studies presented by manufacturers
like Microsoft and Xilinx show that GPUs can consume up to ten times more power than FPGAs when processing
algorithms with the same computation complexity, demonstrating that FPGAs can be a much more suitable solution
for deep learning applications in the automotive field.

In terms of flexibility, FPGAs are built with multiple architectures, which are a mix of hardware programmable re-
sources, digital signal processors and Processor Block RAM (BRAM) components. This architecture flexibility is
suitable for deep and sparse neural networks, which are the state of the art for the current machine learning applica-
tions. Another advantage is the possibility of connecting to various input and output peripheral devices like sensors,
network elements and storage devices.

In the automotive field, functional safety is one of the most important challenges. FPGAs have been designed to
meet the safety requirements for a wide range of applications, including ADAS. When compared to GPUs, which
were originally built for graphics and high-performance computing systems, where functional safety is not necessary,
FPGAs provide a significant advantage in developing driver assistance systems.

10 Discussion and Conclusions

We have identified seven major areas that form open challenges in the field of autonomous driving. We believe that
Deep Learning and Artificial Intelligence will play a key role in overcoming these challenges:

Perception: In order for an autonomous car to safely navigate the driving scene, it must be able to understand its
surroundings. Deep learning is the main technology behind a large number of perception systems. Although great
progress has been reported with respect to accuracy in object detection and recognition (Zhao et al., 2018b), current
systems are mainly designed to calculate 2D or 3D bounding boxes for a couple of trained object classes, or to provide
a segmented image of the driving environment. Future methods for perception should focus on increasing the levels of
recognized details, making it possible to perceive and track more objects in real-time. Furthermore, additional work
is required for bridging the gap between image- and LiDAR-based 3D perception (Wang et al., 2019), enabling the
computer vision community to close the current debate on camera vs. LiDAR as main perception sensors.

Short- to middle-term reasoning: In addition to a robust and accurate perception system, an autonomous vehicle should be able to reason about its driving behavior over a short (milliseconds) to middle (seconds to minutes) time horizon (Pendleton et al., 2017). AI and deep learning are promising tools that can be used for the high- and low-level path planning required for navigating the myriad of driving scenarios. Currently, the largest portion of papers in deep learn-
ing for self-driving cars are focused mainly on perception and End2End learning (Shalev-Shwartz et al., 2016; Zhang
et al., 2016). Over the next period, we expect deep learning to play a significant role in the area of local trajectory
estimation and planning. We consider long-term reasoning as solved, as provided by navigation systems. These are
standard methods for selecting a route through the road network, from the car's current position to its destination (Pendleton et al., 2017).

Availability of training data: "Data is the new oil" has lately become one of the most popular quotes in the automotive industry. The effectiveness of deep learning systems is directly tied to the availability of training data. As a rule of
thumb, current deep learning methods are also evaluated based on the quality of training data (Janai et al., 2017). The
better the quality of the data is, the higher the accuracy of the algorithm. The daily data recorded by an autonomous
vehicle is on the order of petabytes. This poses challenges on the parallelization of the training procedure, as well as
on the storage infrastructure. Simulation environments have been used in the last couple of years for bridging the gap
between scarce data and the deep learning’s hunger for training examples. There is still a gap to be filled between the
accuracy of a simulated world and real-world driving.

Learning corner cases: Most driving scenarios are considered solvable with classical methodologies. However, the
remaining unsolved scenarios are corner cases which, until now, required the reasoning and intelligence of a human
driver. In order to overcome corner cases, the generalization power of deep learning algorithms should be increased.
Generalization in deep learning is of special importance in learning hazardous situations that can lead to accidents,
especially due to the fact that training data for such corner cases is scarce. This also implies the design of one-shot and low-shot learning methods, which can be trained using a reduced number of training examples.

Learning-based control methods: Classical controllers make use of an a-priori model composed of fixed parameters.
In a complex case, such as autonomous driving, these controllers cannot anticipate all driving situations. The effec-
tiveness of deep learning components to adapt based on past experiences can also be used to learn the parameters of
the car's control system, thus better approximating the underlying true system model (Ostafew, 2016; Ostafew et al.,
2016).

Functional safety: The usage of deep learning in safety-critical systems is still an open debate, efforts being made to
bring the computational intelligence and functional safety communities closer to each other. Current safety standards,
such as the ISO 26262, do not accommodate machine learning software (Salay et al., 2017). Although new data-driven
design methodologies have been proposed, there are still open issues regarding the explainability, stability, or classification
robustness of deep neural networks.

Real-time computing and communication: Finally, real-time requirements have to be fulfilled for processing the large
amounts of data gathered from the car’s sensors suite, as well as for updating the parameters of deep learning systems
over high-speed communication lines (Nurvitadhi et al., 2017). These real-time constraints can be backed up by
advances in semiconductor chips dedicated for self-driving cars, as well as by the rise of 5G communication networks.
10.1 Final Notes

Autonomous vehicle technology has seen a rapid progress in the past decade, especially due to advances in the area of
artificial intelligence and deep learning. Current AI methodologies are nowadays either used or taken into considera-
tion when designing different components for a self-driving car. Deep learning approaches have influenced not only
the design of traditional perception-planning-action pipelines, but have also enabled End2End learning systems, able to directly map sensory information to steering commands.

Driverless cars are complex systems which have to safely drive passengers or cargo from a starting location to desti-
nation. Several challenges are encountered with the advent of AI based autonomous vehicles deployment on public
roads. A major challenge is the difficulty in proving the functional safety of these vehicles, given the current formal-
ism and explainability of neural networks. On top of this, deep learning systems rely on large training databases and
require extensive computational hardware.

This paper has provided a survey on deep learning technologies used in autonomous driving. The survey of perfor-
mance and computational requirements serves as a reference for system level design of AI based self-driving vehicles.

Acknowledgment

The authors would like to thank Elektrobit Automotive for the infrastructure and research support.

References

Amodei, D., Olah, C., Steinhardt, J., Christiano, P. F., Schulman, J., and Mané, D. (2016). Concrete Problems in AI
Safety. CoRR, abs/1606.06565.

Andrychowicz, M., Baker, B., Chociej, M., Jozefowicz, R., McGrew, B., Pachocki, J., Petron, A., Plappert, M.,
Powell, G., Ray, A., Schneider, J., Sidor, S., Tobin, J., Welinder, P., Weng, L., and Zaremba, W. (2018). Learning
Dexterous In-Hand Manipulation. CoRR, abs/1808.00177.

Badrinarayanan, V., Kendall, A., and Cipolla, R. (2017). SegNet: A Deep Convolutional Encoder-Decoder Architec-
ture for Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39.

Barnes, D., Maddern, W., Pascoe, G., and Posner, I. (2018). Driven to Distraction: Self-Supervised Distractor Learning
for Robust Monocular Visual Odometry in Urban Environments. In 2018 IEEE Int. Conf. on Robotics and
Automation (ICRA). IEEE.

Barsan, I. A., Wang, S., Pokrovsky, A., and Urtasun, R. (2018). Learning to Localize Using a LiDAR Intensity Map.
In Proc. of the 2nd Conf. on Robot Learning (CoRL).

Bechtel, M. G., McEllhiney, E., and Yun, H. (2018). DeepPicar: A Low-cost Deep Neural Network-based Autonomous
Car. In The 24th IEEE Inter. Conf. on Embedded and Real-Time Computing Systems and Applications (RTCSA),
pages 1–12.

Bellman, R. (1957). Dynamic Programming. Princeton University Press.

Bengio, Y., Courville, A., and Vincent, P. (2013). Representation Learning: A Review and New Perspectives. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828.

Bernd, S., Detlev, R., Susanne, E., Ulf, W., Wolfgang, B., Patz, and Carsten (2012). Challenges in Applying the ISO
26262 for Driver Assistance Systems. In Schwerpunkt Vernetzung, 5. Tagung Fahrerassistenz.

Bojarski, M., Testa, D. D., Dworakowski, D., Firner, B., Flepp, B., Goyal, P., Jackel, L. D., Monfort, M., Muller,
U., Zhang, J., Zhang, X., Zhao, J., and Zieba, K. (2016). End to End Learning for Self-Driving Cars. CoRR,
abs/1604.07316.
Bojarski, M., Yeres, P., Choromanska, A., Choromanski, K., Firner, B., Jackel, L., and Muller, U. (2017). Explaining
How a Deep Neural Network Trained with End-to-End Learning Steers a Car. arXiv preprint arXiv:1704.07911.
Brachmann, E. and Rother, C. (2018). Learning Less is More 6D Camera Localization via 3D Surface Regression. In
IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) 2018.
Bresson, G., Alsayed, Z., Yu, L., and Glaser, S. (2017). Simultaneous Localization and Mapping: A Survey of Current
Trends in Autonomous Driving. IEEE Transactions on Intelligent Vehicles, 2(3):194–220.
Brostow, G. J., Fauqueur, J., and Cipolla, R. (2009). Semantic Object Classes in Video: A High-definition Ground
Truth Database. Pattern Recognition Letters, 30:88–97.
Brunner, M., Rosolia, U., Gonzales, J., and Borrelli, F. (2017). Repetitive Learning Model Predictive Control: An
Autonomous Racing Example. In 2017 IEEE 56th Annual Conference on Decision and Control (CDC), pages
2545–2550.
Burton, S., Gauerhof, L., and Heinzemann, C. (2017). Making the Case for Safety of Machine Learning in Highly Automated Driving. Lecture Notes in Computer Science, 10489.
Caesar, H., Bankiti, V., Lang, A. H., Vora, S., Liong, V. E., Xu, Q., Krishnan, A., Pan, Y., Baldan, G., and Beijbom,
O. (2019). nuScenes: A multimodal Dataset for Autonomous Driving. arXiv preprint arXiv:1903.11027.
Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., and Elhadad, N. (2015). Intelligible Models for HealthCare:
Predicting Pneumonia Risk and Hospital 30-day Readmission. In Proceedings of the 21th ACM SIGKDD Int.
Conf. on Knowledge Discovery and Data Mining, pages 1721–1730.
Chakarov, A., Nori, A., Rajamani, S., Sen, S., and Vijaykeerthy, D. (2018). Debugging Machine Learning Tasks.
arXiv preprint arXiv:1603.07292.
Chen, C., Seff, A., Kornhauser, A. L., and Xiao, J. (2015). DeepDriving: Learning Affordance for Direct Perception
in Autonomous Driving. 2015 IEEE Int. Conf. on Computer Vision (ICCV), pages 2722–2730.
Chen, X., Ma, H., Wan, J., Li, B., and Xia, T. (2017). Multi-View 3D Object Detection Network for Autonomous
Driving. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) 2017.
Cityscapes (2018). Cityscapes Data Collection. https://www.cityscapes-dataset.com/.
Cong, J., Fang, Z., Lo, M., Wang, H., Xu, J., and Zhang, S. (2018). Understanding Performance Differences of FPGAs
and GPUs: (Abtract Only). In Proceedings of the 2018 ACM/SIGDA Int. Symposium on Field-Programmable
Gate Arrays, FPGA ’18, pages 288–288, New York, NY, USA. ACM.
Dai, J., Li, Y., He, K., and Sun, J. (2016). R-fcn: Object Detection via Region-based Fully Convolutional Networks.
In Advances in neural information processing systems, pages 379–387.
Dalal, N. and Triggs, B. (2005). Histograms of Oriented Gradients for Human Detection. In In CVPR, pages 886–893.
Daumé, III, H. and Marcu, D. (2006). Domain Adaptation for Statistical Classifiers. J. Artif. Int. Res., 26(1):101–126.
Dickmanns, E. and Graefe, V. (1988). Dynamic Monocular Machine Vision. Machine vision and applications, 1:223–
240.
Dollar, P., Wojek, C., Schiele, B., and Perona, P. (2009). Pedestrian Detection: A Benchmark. In 2009 IEEE Conf. on
Computer Vision and Pattern Recognition, pages 304–311.
Drews, P., Williams, G., Goldfain, B., Theodorou, E. A., and Rehg, J. M. (2017a). Aggressive Deep Driving: Combining Convolutional Neural Networks and Model Predictive Control. pages 133–142.
Drews, P., Williams, G., Goldfain, B., Theodorou, E. A., and Rehg, J. M. (2017b). Aggressive Deep Driving: Model
Predictive Control with a CNN Cost Model. CoRR, abs/1707.05303.
Eraqi, H. M., Moustafa, M. N., and Honer, J. (2017). End-to-end Deep Learning for Steering Autonomous Vehicles
Considering Temporal Dependencies. arXiv preprint arXiv:1710.03804.
Ferrel, T. (2010). Engineering Safety-critical Systems in the 21st Century.
Flohr, F. and Gavrila, D. M. (2013). Daimler Pedestrian Segmentation Benchmark Dataset. In Proc. of the British
Machine Vision Conference.
Fridman, L., Brown, D. E., Glazer, M., Angell, W., Dodd, S., Jenik, B., Terwilliger, J., Kindelsberger, J., Ding, L.,
Seaman, S., Abraham, H., Mehler, A., Sipperley, A., Pettinato, A., Angell, L., Mehler, B., and Reimer, B. (2017).
MIT Autonomous Vehicle Technology Study: Large-Scale Deep Learning Based Analysis of Driver Behavior
and Interaction with Automation. IEEE Access 2017.
Garcia-Favrot, O. and Parent, M. (2009). Laser Scanner Based SLAM in Real Road and Traffic Environment. In IEEE
Int. Conf. Robotics and Automation (ICRA09). Workshop on Safe navigation in open and dynamic environments
Application to autonomous vehicles.
Geiger, A., Lenz, P., Stiller, C., and Urtasun, R. (2013). Vision Meets Robotics: The KITTI Dataset. The Int. Journal
of Robotics Research, 32(11):1231–1237.
Girshick, R. (2015). Fast R-CNN. In Proceedings of the IEEE Int. Conf. on computer vision, pages 1440–1448.
Girshick, R., Donahue, J., Darrell, T., and Malik, J. (2014). Rich Feature Hierarchies for Accurate Object Detection
and Semantic Segmentation. In Proceedings of the 2014 IEEE Conf. on Computer Vision and Pattern Recognition,
CVPR ’14, pages 580–587, Washington, DC, USA. IEEE Computer Society.
Goldberg, Y. (2017). Neural Network Methods for Natural Language Processing, volume 37 of Synthesis Lectures on
Human Language Technologies. Morgan & Claypool.
Goodale, M. A. and Milner, A. (1992). Separate Visual Pathways for Perception and Action. Trends in Neurosciences,
15(1):20 – 25.
Grigorescu, S., Trasnea, B., Marina, L., Vasilcoi, A., and Cocias, T. (2019). NeuroTrajectory: A Neuroevolutionary
Approach to Local State Trajectory Learning for Autonomous Vehicles. IEEE Robotics and Automation Letters,
4(4):3441–3448.
Gu, S., Lillicrap, T., Sutskever, I., and Levine, S. (2016a). Continuous Deep Q-Learning with Model-based Accelera-
tion. In Int. Conf. on Machine Learning ICML 2016, volume 48, pages 2829–2838.
Gu, T., Dolan, J. M., and Lee, J. (2016b). Human-like Planning of Swerve Maneuvers for Autonomous Vehicles. In
2016 IEEE Intelligent Vehicles Symposium (IV), pages 716–721.
Harris, M. (2016). Google Reports Self-driving Car Mistakes: 272 Failures and 13 Near Misses. The Guardian.
Hasirlioglu, S., Kamann, A., Doric, I., and Brandmeier, T. (2016). Test Methodology for Rain Influence on Automotive
Surround Sensors. In 2016 IEEE 19th Int. Conf. on Intelligent Transportation Systems (ITSC), pages 2242–2247.
He, K., Gkioxari, G., Dollar, P., and Girshick, R. B. (2017). Mask R-CNN. 2017 IEEE Int. Conf. on Computer Vision
(ICCV), pages 2980–2988.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep Residual Learning for Image Recognition. In Proceedings of the
IEEE Conf. on computer vision and pattern recognition, pages 770–778.
Hecker, S., Dai, D., and Van Gool, L. (2018). End-to-End Learning of Driving Models with Surround-view Cameras
and Route Planners. In European Conference on Computer Vision (ECCV).
Hessel, M., Modayil, J., van Hasselt, H., Schaul, T., Ostrovski, G., Dabney, W., Horgan, D., Piot, B., Azar, M., and
Silver, D. (2017). Rainbow: Combining Improvements in Deep Reinforcement Learning.
Hochreiter, S. and Schmidhuber, J. (1997). Long Short-term Memory. Neural computation, 9(8):1735–1780.
Hoermann, S., Bach, M., and Dietmayer, K. (2017). Dynamic Occupancy Grid Prediction for Urban Autonomous
Driving: Deep Learning Approach with Fully Automatic Labeling. IEEE Int. Conf. on Robotics and Automation
(ICRA).
Hubel, D. H. and Wiesel, T. N. (1963). Shape and Arrangement of Columns in Cat's Striate Cortex. The Journal of Physiology, 165(3):559–568.

Iandola, F. N., Han, S., Moskewicz, M. W., Ashraf, K., Dally, W. J., and Keutzer, K. (2016). SqueezeNet: AlexNet-level Accuracy with 50x Fewer Parameters and <0.5 MB Model Size. arXiv preprint arXiv:1602.07360.

Duchi, J., Hazan, E., and Singer, Y. (2011). Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research, 12:2121–2159.

Janai, J., Guney, F., Behl, A., and Geiger, A. (2017). Computer Vision for Autonomous Vehicles: Problems, Datasets
and State-of-the-Art.

Jaritz, M., de Charette, R., Toromanoff, M., Perot, E., and Nashashibi, F. (2018). End-to-End Race Driving with Deep
Reinforcement Learning. 2018 IEEE Int. Conf. on Robotics and Automation (ICRA), pages 2070–2075.

Jose, F. (2018). Safety-Critical Systems.

Kamel, M., Hafez, A., and Yu, X. (2018). A Review on Motion Control of Unmanned Ground and Aerial Vehicles
Based on Model Predictive Control Techniques. Engineering Science and Military Technologies, 2:10–23.

Kapania, N. R. and Gerdes, J. C. (2015). Path Tracking of Highly Dynamic Autonomous Vehicle Trajectories via
Iterative Learning Control. In 2015 American Control Conference (ACC), pages 2753–2758.

Katz, G., Barrett, C. W., Dill, D. L., Julian, K., and Kochenderfer, M. J. (2017). Reluplex: An Efficient SMT Solver
for Verifying Deep Neural Networks. In CAV.

Kendall, A., Grimes, M., and Cipolla, R. (2015). PoseNet: A Convolutional Network for Real-Time 6-DOF Camera
Relocalization. In Proceedings of the 2015 IEEE Int. Conf. on Computer Vision (ICCV), pages 2938–2946,
Washington, DC, USA. IEEE Computer Society.

Kingma, D. P. and Ba, J. (2015). Adam: A Method for Stochastic Optimization. In 3rd Int. Conf. on Learning
Representations, ICLR 2015, San Diego, CA, USA.

Koopman, P. (2017). Challenges in Autonomous Vehicle Validation: Keynote Presentation Abstract. In Proceedings
of the 1st Int. Workshop on Safe Control of Connected and Autonomous Vehicles.

Koschorrek, P., Piccini, T., Öberg, P., Felsberg, M., Nielsen, L., and Mester, R. (2013). A Multi-sensor Traffic Scene Dataset with Omnidirectional Video. In Ground Truth - What is a good dataset? CVPR Workshop 2013.

Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). ImageNet Classification with Deep Convolutional Neural
Networks. In Pereira, F., Burges, C. J. C., Bottou, L., and Weinberger, K. Q., editors, Advances in Neural
Information Processing Systems 25, pages 1097–1105. Curran Associates, Inc.

Ku, J., Mozifian, M., Lee, J., Harakeh, A., and Waslander, S. L. (2018). Joint 3D Proposal Generation and Object
Detection from View Aggregation. In IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) 2018. IEEE.

Kurd, Z., Kelly, T., and Austin, J. (2007). Developing Artificial Neural Networks for Safety Critical Systems. Neural
Computing and Applications, 16(1):11–19.

Laskar, Z., Melekhov, I., Kalia, S., and Kannala, J. (2017). Camera Relocalization by Computing Pairwise Relative
Poses Using Convolutional Neural Network. In The IEEE Int. Conf. on Computer Vision (ICCV).

Law, H. and Deng, J. (2018). Cornernet: Detecting Objects as Paired Keypoints. In Proceedings of the European
Conference on Computer Vision (ECCV), pages 734–750.

Lecun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based Learning Applied to Document Recognition.
Proceedings of the IEEE, 86(11):2278–2324.

Lefèvre, S., Carvalho, A., and Borrelli, F. (2015). Autonomous Car Following: A Learning-based Approach. In 2015 IEEE Intelligent Vehicles Symposium (IV), pages 920–926.
Lefèvre, S., Carvalho, A., and Borrelli, F. (2016). A Learning-Based Framework for Velocity Control in Autonomous Driving. IEEE Transactions on Automation Science and Engineering, 13(1):32–42.
Levin, S. (2018). Tesla Fatal Crash: ’Autopilot’ Mode Sped up Car Before Driver Killed, Report Finds. The Guardian.
Li, J., Peng, K., and Chang, C.-C. (2018). An Efficient Object Detection Algorithm Based on Compressed Networks.
Symmetry, 10(7):235.
Li, X., Flohr, F., Yang, Y., Xiong, H., Braun, M., Pan, S., Li, K., and Gavrila, D. M. (2016). A New Benchmark for
Vision-based Cyclist Detection. In 2016 IEEE Intelligent Vehicles Symposium (IV), pages 1028–1033.
Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. (2016). Continuous
Control with Deep Reinforcement Learning.
Liu, S., Jia, J., Fidler, S., and Urtasun, R. (2017). SGN: Sequential Grouping Networks for Instance Segmentation. In 2017 IEEE Int. Conf. on Computer Vision (ICCV), pages 3516–3524.
Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.-Y., and Berg, A. C. (2016). SSD: Single Shot MultiBox Detector. In European Conference on Computer Vision, pages 21–37. Springer.
Luo, W., Yang, B., and Urtasun, R. (2018). Fast and Furious: Real Time End-to-End 3D Detection, Tracking and
Motion Forecasting With a Single Convolutional Net. In IEEE Conf. on Computer Vision and Pattern Recognition
(CVPR) 2018.
Maddern, W., Pascoe, G., Linegar, C., and Newman, P. (2017). 1 Year, 1000km: The Oxford RobotCar Dataset. The
Int. Journal of Robotics Research (IJRR), 36(1):3–15.
Marina, L., Trasnea, B., Cocias, T., Vasilcoi, A., Moldoveanu, F., and Grigorescu, S. (2019). Deep Grid Net (DGN): A
Deep Learning System for Real-Time Driving Context Understanding. In Int. Conf. on Robotic Computing IRC
2019, Naples, Italy.
McPherson, J. (2018). How Uber’s Self-Driving Technology Could Have Failed In The Fatal Tempe Crash. Forbes.
Meier, F., Hennig, P., and Schaal, S. (2014). Efficient Bayesian Local Model Learning for Control. In IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) 2014, pages 2244–2249. IEEE.
Melekhov, I., Ylioinas, J., Kannala, J., and Rahtu, E. (2017). Image-Based Localization Using Hourglass Networks.
2017 IEEE Int. Conf. on Computer Vision Workshops (ICCVW), pages 870–877.
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidje-
land, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra,
D., Legg, S., and Hassabis, D. (2015). Human-level Control Through Deep Reinforcement Learning. Nature,
518(7540):529–533.
Möller, N. (2012). The Concepts of Risk and Safety. Springer Netherlands.
Muller, U., Ben, J., Cosatto, E., Flepp, B., and Cun, Y. L. (2006). Off-road Obstacle Avoidance through End-to-End
Learning. In Advances in neural information processing systems, pages 739–746.
Nguyen-Tuong, D., Peters, J., and Seeger, M. (2008). Local Gaussian Process Regression for Real Time Online Model Learning. In Proceedings of the Neural Information Processing Systems Conference, pages 1193–1200.
Nurvitadhi, E., Venkatesh, G., Sim, J., Marr, D., Huang, R., Ong Gee Hock, J., Liew, Y. T., Srivatsan, K., Moss, D.,
Subhaschandra, S., and Boudoukh, G. (2017). Can FPGAs Beat GPUs in Accelerating Next-Generation Deep
Neural Networks? In Proceedings of the 2017 ACM/SIGDA Int. Symposium on Field-Programmable Gate Arrays,
FPGA ’17, pages 5–14, New York, NY, USA. ACM.
Nushi, B., Kamar, E., Horvitz, E., and Kossmann, D. (2017). On Human Intellect and Machine Failures: Troubleshoot-
ing Integrative Machine Learning Systems. In AAAI.
NVIDIA. Denver Core. https://en.wikichip.org/wiki/nvidia/microarchitectures/denver.
NVIDIA. NVIDIA AI Car Computer Drive PX. https://www.nvidia.com/en-au/self-driving-cars/drive-px/.

NVIDIA. NVIDIA Drive AGX. https://www.nvidia.com/en-us/self-driving-cars/drive-platform/hardware/.

NVIDIA. NVIDIA Volta. https://www.nvidia.com/en-us/data-center/volta-gpu-architecture/.

NVIDIA. Pascal Microarchitecture. https://www.nvidia.com/en-us/data-center/pascal-gpu-architecture/.

NVIDIA. Tegra X2. https://devblogs.nvidia.com/jetson-tx2-delivers-twice-intelligence-edge/.

Ojala, T., Pietikäinen, M., and Harwood, D. (1996). A Comparative Study of Texture Measures with Classification
Based on Featured Distributions. Pattern Recognition, 29(1):51–59.

O’Kane, S. (2018). How Tesla and Waymo are Tackling a Major Problem for Self-Driving Cars: Data. Transportation.

Ondruska, P., Dequaire, J., Wang, D. Z., and Posner, I. (2016). End-to-End Tracking and Semantic Segmentation
Using Recurrent Neural Networks. CoRR, abs/1604.05091.

Ostafew, C., Schoellig, A., and D. Barfoot, T. (2013). Visual Teach and Repeat, Repeat, Repeat: Iterative Learning
Control to Improve Mobile Robot Path Tracking in Challenging Outdoor Environments. pages 176–181.

Ostafew, C., Schoellig, A., and D. Barfoot, T. (2016). Robust Constrained Learning-based NMPC Enabling Reliable
Mobile Robot Path Tracking. The Int. Journal of Robotics Research, 35.

Ostafew, C. J. (2016). Learning-based Control for Autonomous Mobile Robots. PhD thesis, University of Toronto.

Ostafew, C. J., Schoellig, A. P., and Barfoot, T. D. (2014). Learning-based Nonlinear Model Predictive Control to
Improve Vision-based Mobile Robot Path-tracking in Challenging Outdoor Environments. In 2014 IEEE Int.
Conf. on Robotics and Automation (ICRA), pages 4029–4036.

Ostafew, C. J., Schoellig, A. P., and Barfoot, T. D. (2015). Conservative to Confident: Treating Uncertainty Robustly
within Learning-Based Control. In 2015 IEEE Int. Conf. on Robotics and Automation (ICRA), pages 421–427.

Ovtcharov, K., Ruwase, O., Kim, J.-Y., Fowers, J., Strauss, K., and Chung, E. (2015). Accelerating Deep Convolutional
Neural Networks Using Specialized Hardware.

Paden, B., Cáp, M., Yong, S. Z., Yershov, D. S., and Frazzoli, E. (2016). A Survey of Motion Planning and Control
Techniques for Self-Driving Urban Vehicles. IEEE Trans. Intelligent Vehicles, 1(1):33–55.

Pan, Y., Cheng, C., Saigol, K., Lee, K., Yan, X., Theodorou, E., and Boots, B. (2018). Agile Off-Road Autonomous
Driving Using End-to-End Deep Imitation Learning. Robotics: Science and Systems 2018.

Pan, Y., Cheng, C.-A., Saigol, K., Lee, K., Yan, X., Theodorou, E. A., and Boots, B. (2017). Learning Deep Neural
Network Control Policies for Agile Off-Road Autonomous Driving.

Pandey, G., McBride, J. R., and Eustice, R. M. (2011). Ford Campus Vision and Lidar Data Set. Int. Journal of Robotics Research, 30(13):1543–1552.

Panomruttanarug, B. (2017). Application of Iterative Learning Control in Tracking a Dubin’s Path in Parallel Parking.
Int. Journal of Automotive Technology, 18(6):1099–1107.

Panov, A. I., Yakovlev, K. S., and Suvorov, R. (2018). Grid Path Planning with Deep Reinforcement Learning:
Preliminary Results. Procedia Computer Science, 123:347 – 353. 8th Annual Int. Conf. on Biologically Inspired
Cognitive Architectures, BICA 2017.

Parasuraman, R. and Riley, V. (1997). Humans and Automation: Use, Misuse, Disuse, Abuse. Human Factors,
39(2):230–253.

Paszke, A., Chaurasia, A., Kim, S., and Culurciello, E. (2016). ENet: A Deep Neural Network Architecture for Real-time Semantic Segmentation. arXiv preprint arXiv:1606.02147.
Paxton, C., Raman, V., Hager, G. D., and Kobilarov, M. (2017). Combining Neural Networks and Tree Search for
Task and Motion Planning in Challenging Environments. 2017 IEEE/RSJ Int. Conf. on Intelligent Robots and
Systems (IROS), abs/1703.07887.
Pendleton, S. D., Andersen, H., Du, X., Shen, X., Meghjani, M., Eng, Y. H., Rus, D., and Ang, M. H. (2017).
Perception, Planning, Control, and Coordination for Autonomous Vehicles. Machines, 5(1):6.
Perot, E., Jaritz, M., Toromanoff, M., and Charette, R. D. (2017). End-to-End Driving in a Realistic Racing Game
with Deep Reinforcement Learning. In 2017 IEEE Conf. on Computer Vision and Pattern Recognition Workshops
(CVPRW), pages 474–475.
Pomerleau, D. A. (1989). ALVINN: An Autonomous Land Vehicle in a Neural Network. In Advances in Neural Information Processing Systems, pages 305–313.
Qi, C. R., Liu, W., Wu, C., Su, H., and Guibas, L. J. (2018). Frustum PointNets for 3D Object Detection from RGB-D
Data. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) 2018.
Qi, C. R., Su, H., Mo, K., and Guibas, L. J. (2017). PointNet: Deep Learning on Point Sets for 3D Classification and
Segmentation. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) 2017.
Varshney, K. R. and Alemzadeh, H. (2016). On the Safety of Machine Learning: Cyber-Physical Systems, Decision Sciences, and Data Products. Big Data, 5.
Radwan, N., Valada, A., and Burgard, W. (2018). VLocNet++: Deep Multitask Learning for Semantic Visual Local-
ization and Odometry. IEEE Robotics and Automation Letters.
Ramos, S., Gehrig, S. K., Pinggera, P., Franke, U., and Rother, C. (2016). Detecting Unexpected Obstacles for Self-
Driving Cars: Fusing Deep Learning and Geometric Modeling. IEEE Intelligent Vehicles Symposium, 4.
Rausch, V., Hansen, A., Solowjow, E., Liu, C., Kreuzer, E., and Hedrick, J. K. (2017). Learning a Deep Neural Net
Policy for End-to-End Control of Autonomous Vehicles. In 2017 American Control Conference (ACC), pages
4914–4919.
Rawlings, J. and Mayne, D. (2009). Model Predictive Control: Theory and Design. Nob Hill Pub.
Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016). You Only Look Once: Unified, Real-time Object
Detection. In Proceedings of the IEEE Conf. on computer vision and pattern recognition, pages 779–788.
Redmon, J. and Farhadi, A. (2017). YOLO9000: Better, Faster, Stronger. IEEE Conf. on Computer Vision and Pattern
Recognition (CVPR).
Redmon, J. and Farhadi, A. (2018). YOLOv3: An Incremental Improvement. arXiv preprint arXiv:1804.02767.
Rehder, E., Quehl, J., and Stiller, C. (2017). Driving Like a Human: Imitation Learning for Path Planning using
Convolutional Neural Networks. In Int. Conf. on Robotics and Automation Workshops.
Ren, S., He, K., Girshick, R., and Sun, J. (2017). Faster R-CNN: Towards Real-time Object Detection with Region
Proposal Networks. IEEE Transactions on Pattern Analysis & Machine Intelligence, (6):1137–1149.
Renesas. R-Car H3. https://www.renesas.com/sg/en/solutions/automotive/soc/r-car-h3.html/.
Renesas. R-Car V3H. https://www.renesas.com/eu/en/solutions/automotive/soc/r-car-v3h.html/.
Rosolia, U., Carvalho, A., and Borrelli, F. (2017). Autonomous Racing using Learning Model Predictive Control. In
2017 American Control Conference (ACC), pages 5115–5120.
Rumelhart, D. E., McClelland, J. L., and the PDP Research Group, editors (1986). Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations. MIT Press, Cambridge, MA, USA.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein,
M., Berg, A. C., and Fei-Fei, L. (2015). ImageNet Large Scale Visual Recognition Challenge. Int. Journal of
Computer Vision (IJCV), 115(3):211–252.
SAE Committee (2014). Taxonomy and Definitions for Terms Related to On-road Motor Vehicle Automated Driving
Systems.
Salay, R., Queiroz, R., and Czarnecki, K. (2017). An Analysis of ISO 26262: Using Machine Learning Safely in
Automotive Software. CoRR, abs/1709.02435.
Sallab, A. E., Abdou, M., Perot, E., and Yogamani, S. (2017). Deep Reinforcement Learning framework for Au-
tonomous Driving. CoRR, abs/1704.02532.
Sarlin, P., Debraine, F., Dymczyk, M., Siegwart, R., and Cadena, C. (2018). Leveraging Deep Visual Descriptors for
Hierarchical Efficient Localization. In Proc. of the 2nd Conf. on Robot Learning (CoRL).
Schwarting, W., Alonso-Mora, J., and Rus, D. (2018). Planning and Decision-Making for Autonomous Vehicles.
Annual Review of Control, Robotics, and Autonomous Systems, 1.
Seeger, C., Müller, A., and Schwarz, L. (2016). Towards Road Type Classification with Occupancy Grids. In Intelligent Vehicles Symposium - Workshop: DeepDriving - Learning Representations for Intelligent Vehicles, IEEE, Gothenburg, Sweden.
Shalev-Shwartz, S., Shammah, S., and Shashua, A. (2016). Safe, Multi-Agent, Reinforcement Learning for Au-
tonomous Driving.
Shin, K., Kwon, Y. P., and Tomizuka, M. (2018). RoarNet: A Robust 3D Object Detection based on RegiOn Approx-
imation Refinement. CoRR, abs/1811.03818.
Sick (2018). Sick LiDAR for Data Collection. https://www.sick.com/.
Sigaud, O., Salaün, C., and Padois, V. (2011). On-line Regression Algorithms for Learning Mechanical Models of
Robots: A Survey. Robotics and Autonomous Systems, 59(12):1115–1129.
Simonyan, K. and Zisserman, A. (2014). Very Deep Convolutional Networks for Large-scale Image Recognition.
arXiv preprint arXiv:1409.1556.
Sun, L., Peng, C., Zhan, W., and Tomizuka, M. (2018). A Fast Integrated Planning and Control Framework for
Autonomous Driving via Imitation Learning. ASME 2018 Dynamic Systems and Control Conference, 3.
Sutton, R. and Barto, A. (1998). Introduction to Reinforcement Learning. MIT Press.
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich,
A. (2015). Going Deeper with Convolutions. IEEE Conference on Computer Vision and Pattern Recognition
(CVPR).
Takanami, I., Sato, M., and Yang, Y. P. (2000). A Fault-value Injection Approach for Multiple-weight-fault Tolerance of MNNs. In Proceedings of the IEEE-INNS-ENNS, volume 3, pages 515–520.
Thrun, S., Burgard, W., and Fox, D. (2005). Probabilistic Robotics (Intelligent Robotics and Autonomous Agents). Cambridge, MA: The MIT Press.
Tinchev, G., Penate-Sanchez, A., and Fallon, M. (2019). Learning to See the Wood for the Trees: Deep Laser
Localization in Urban and Natural Environments on a CPU. IEEE Robotics and Automation Letters, 4(2):1327–
1334.
Treml, M., Arjona-Medina, J. A., Unterthiner, T., Durgesh, R., Friedmann, F., Schuberth, P., Mayr, A., Heusel, M.,
Hofmarcher, M., Widrich, M., Nessler, B., and Hochreiter, S. (2016). Speeding up Semantic Segmentation for
Autonomous Driving.
Udacity (2018). Udacity Data Collection. http://academictorrents.com/collection/self-driving-cars.
Ushani, A. K. and Eustice, R. M. (2018). Feature Learning for Scene Flow Estimation from LIDAR. In Proc. of the
2nd Conf. on Robot Learning (CoRL), volume 87, pages 283–292.
Valada, A., Vertens, J., Dhall, A., and Burgard, W. (2017). AdapNet: Adaptive Semantic Segmentation in Adverse
Environmental Conditions. 2017 IEEE Int. Conf. on Robotics and Automation (ICRA), pages 4644–4651.
Varshney, K. R. (2016). Engineering Safety in Machine Learning. In 2016 Information Theory and Applications
Workshop (ITA), pages 1–5.
Velodyne (2018). Velodyne LiDAR for Data Collection. https://velodynelidar.com/.
Viola, P. A. and Jones, M. J. (2001). Rapid Object Detection using a Boosted Cascade of Simple Features. In 2001
IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), with CD-ROM,
8-14 December 2001, Kauai, HI, USA, pages 511–518.
Walch, F., Hazirbas, C., Leal-Taixé, L., Sattler, T., Hilsenbeck, S., and Cremers, D. (2017). Image-Based Localization
Using LSTMs for Structured Feature Correlation. 2017 IEEE Int. Conf. on Computer Vision (ICCV), pages
627–637.
Wang, Y., Chao, W.-L., Garg, D., Hariharan, B., Campbell, M., and Weinberger, K. (2019). Pseudo-LiDAR from
Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving. In IEEE Conf. on
Computer Vision and Pattern Recognition (CVPR) 2019.
Watkins, C. and Dayan, P. (1992). Q-Learning. Machine Learning, 8(3):279–292.
Wayve (2018). Learning to Drive in a Day.
Wulfmeier, M., Wang, D. Z., and Posner, I. (2016). Watch This: Scalable Cost-Function Learning for Path Planning
in Urban Environments. 2016 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), abs/1607.02329.
Xu, H., Gao, Y., Yu, F., and Darrell, T. (2017). End-to-End Learning of Driving Models from Large-scale Video
Datasets. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR).
Yang, S., Wang, W., Liu, C., Deng, K., and Hedrick, J. K. (2017a). Feature Analysis and Selection for Training an
End-to-End Autonomous Vehicle Controller Using the Deep Learning Approach. 2017 IEEE Intelligent Vehicles
Symposium, 1.
Yang, Z., Zhou, F., Li, Y., and Wang, Y. (2017b). A Novel Iterative Learning Path-tracking Control for Nonholonomic
Mobile Robots Against Initial Shifts. Int. Journal of Advanced Robotic Systems, 14:172988141771063.
Yin, H. and Berger, C. (2017). When to Use what Data Set for Your Self-driving Car Algorithm: An Overview of
Publicly Available Driving Datasets. In 2017 IEEE 20th Int. Conf. on Intelligent Transportation Systems (ITSC),
pages 1–8.
Yu, F., Xian, W., Chen, Y., Liu, F., Liao, M., Madhavan, V., and Darrell, T. (2018a). BDD100K: A Diverse Driving
Video Database with Scalable Annotation Tooling. CoRR, abs/1805.04687.
Yu, L., Shao, X., Wei, Y., and Zhou, K. (2018b). Intelligent Land-Vehicle Model Transfer Trajectory Planning Method
Based on Deep Reinforcement Learning. Sensors (Basel, Switzerland), 18.
Zhang, S., Wen, L., Bian, X., Lei, Z., and Li, S. Z. (2017). Single-shot Refinement Neural Network for Object
Detection. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Zhang, T., Kahn, G., Levine, S., and Abbeel, P. (2016). Learning Deep Control Policies for Autonomous Aerial
Vehicles with MPC-guided Policy Search. 2016 IEEE Int. Conf. on Robotics and Automation (ICRA).
Zhao, H., Qi, X., Shen, X., Shi, J., and Jia, J. (2018a). ICNet for Real-time Semantic Segmentation on High-resolution Images. European Conference on Computer Vision, pages 418–434.
Zhao, H., Shi, J., Qi, X., Wang, X., and Jia, J. (2017). Pyramid Scene Parsing Network. In 2017 IEEE Conf. on
Computer Vision and Pattern Recognition (CVPR), pages 6230–6239.
Zhao, Z.-Q., Zheng, P., Xu, S.-t., and Wu, X. (2018b). Object Detection with Deep Learning: A Review. IEEE
transactions on neural networks and learning systems.
Zhou, Y. and Tuzel, O. (2018). VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection. IEEE
Conf. on Computer Vision and Pattern Recognition 2018, pages 4490–4499.
Zhu, H., Yuen, K.-V., Mihaylova, L. S., and Leung, H. (2017). Overview of Environment Perception for Intelligent
Vehicles. IEEE Transactions on Intelligent Transportation Systems, 18:2584–2601.
