
A Brief Survey of Deep Reinforcement Learning


Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, Anil Anthony Bharath

arXiv:1708.05866v1 [cs.LG] 19 Aug 2017

Abstract: Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.

I. INTRODUCTION

One of the primary goals of the field of artificial intelligence (AI) is to produce fully autonomous agents that interact with their environments to learn optimal behaviours, improving over time through trial and error. Crafting AI systems that are responsive and can effectively learn has been a long-standing challenge, ranging from robots, which can sense and react to the world around them, to purely software-based agents, which can interact with natural language and multimedia. A principled mathematical framework for experience-driven autonomous learning is reinforcement learning (RL) [115]. Although RL had some successes in the past [119, 109, 51, 79], previous approaches lacked scalability and were inherently limited to fairly low-dimensional problems. These limitations exist because RL algorithms share the same complexity issues as other algorithms: memory complexity, computational complexity, and, in the case of machine learning algorithms, sample complexity [113]. What we have witnessed in recent years, the rise of deep learning, relying on the powerful function approximation and representation learning properties of deep neural networks, has provided us with new tools to overcome these problems.

The advent of deep learning has had a significant impact on many areas in machine learning, dramatically improving the state-of-the-art in tasks such as object detection, speech recognition, and language translation [59]. The most important property of deep learning is that deep neural networks can automatically find compact low-dimensional representations (features) of high-dimensional data (e.g., images, text and audio). Through crafting inductive biases into neural network architectures, particularly that of hierarchical representations, machine learning practitioners have made effective progress in addressing the curse of dimensionality [11]. Deep learning has similarly accelerated progress in RL, with the use of deep learning algorithms within RL defining the field of deep reinforcement learning (DRL). The aim of this survey is to cover both seminal and recent developments in DRL, conveying the innovative ways in which neural networks can be used to bring us closer towards developing autonomous agents.

Deep learning enables RL to scale to decision-making problems that were previously intractable, i.e., settings with high-dimensional state and action spaces. Amongst recent work in the field of DRL, there have been two outstanding success stories. The first, kickstarting the revolution in DRL, was the development of an algorithm that could learn to play a range of Atari 2600 video games at a superhuman level, directly from image pixels [71]. Providing solutions for the instability of function approximation techniques in RL, this work was the first to convincingly demonstrate that RL agents could be trained on raw, high-dimensional observations, solely based on a reward signal. The second standout success was the development of a hybrid DRL system, AlphaGo, that defeated a human world champion in Go [108], paralleling the historic achievement of IBM's Deep Blue in chess two decades earlier [15] and IBM's Watson DeepQA system that beat the best human Jeopardy! players [26]. Unlike the handcrafted rules that have dominated chess-playing systems, AlphaGo was comprised of neural networks that were trained using supervised and reinforcement learning, in combination with a traditional heuristic search algorithm.

DRL algorithms have already been applied to a wide range of problems, such as robotics, where control policies for robots can now be learned directly from camera inputs in the real world [63, 64], succeeding controllers that used to be hand-engineered or learned from low-dimensional features of the robot's state. In a step towards even more capable agents, DRL has been used to create agents that can meta-learn ("learn to learn") [25, 133], allowing them to generalise to complex visual environments they have never seen before [25]. In Figure 1, we showcase some of the domains that DRL has been applied to, ranging from playing video games [71] to indoor navigation [142].

Video games may be an interesting challenge, but learning how to play them is not the end goal of DRL. One of the driving forces behind DRL is the vision of creating systems that are capable of learning how to adapt in the real world. From managing power consumption [120] to picking and stowing objects [64], DRL stands to increase the number of physical tasks that can be automated by learning. However, DRL does not stop there, as RL is a general way of approaching optimisation problems by trial and error. From designing state-of-the-art machine translation models [143] to constructing new optimisation functions [65], DRL has already been used to approach all manner of machine learning tasks.

Fig. 1. A range of visual RL domains. (a) Two classic Atari 2600 video games, Freeway and Seaquest, from the Arcade Learning Environment
(ALE) [8]. Due to the range of supported games that vary in genre, visuals and difficulty, the ALE has become a standard testbed for DRL algorithms
[71, 81, 35, 103, 112, 134, 72]. As we discuss later, the ALE is one of several benchmarks that are now being used to standardise evaluation in RL. (b) The
TORCS car racing simulator, which has been used to test DRL algorithms that can output continuous actions [53, 67, 72] (as the games from the ALE only
support discrete actions). (c) Utilising the potentially unlimited amount of training data that can be amassed in robotic simulators, several methods aim to
transfer knowledge from the simulator to the real world [18, 97, 124]. (d) Two of the four robotic tasks designed by Levine et al. [63]: screwing on a bottle
cap and placing a shaped block in the correct hole. Levine et al. [63] were able to train visuomotor policies in an end-to-end fashion, showing that visual
servoing could be learned directly from raw camera inputs by using deep neural networks. (e) A real room, in which a wheeled robot trained to navigate
the building is given a visual cue as input, and must find the corresponding location [142]. (f) A natural image being captioned by a neural network that
uses reinforcement learning to choose where to look [141]. By processing a small portion of the image for every word generated, the network can focus its
attention on the most salient points. Figures reproduced from [8, 67, 124, 63, 142, 141], respectively.

And, in the same way that deep learning has been utilised across many branches of machine learning, it seems likely that in the future, DRL will be an important component in constructing general AI systems [57].

II. REWARD-DRIVEN BEHAVIOUR

Before examining the contributions of deep neural networks to RL, we will introduce the field of RL in general. The essence of RL is learning through interaction. An RL agent interacts with its environment and, upon observing the consequences of its actions, can learn to alter its own behaviour in response to rewards received. This paradigm of trial-and-error learning has its roots in behaviourist psychology, and is one of the main foundations of RL [115]. The other key influence on RL is optimal control, which has lent the mathematical formalisms (most notably dynamic programming [9]) that underpin the field.

In the RL set-up, an autonomous agent, controlled by a machine learning algorithm, observes a state st from its environment at timestep t. The agent interacts with the environment by taking an action at in state st. When the agent takes an action, the environment and the agent transition to a new state st+1 based on the current state and the chosen action. The state is a sufficient statistic of the environment and thereby comprises all the necessary information for the agent to take the best action, which can include parts of the agent, such as the position of its actuators and sensors. In the optimal control literature, states and actions are often denoted by xt and ut, respectively.

The best sequence of actions is determined by the rewards provided by the environment. Every time the environment transitions to a new state, it also provides a scalar reward rt+1 to the agent as feedback. The goal of the agent is to learn a policy (control strategy) π that maximises the expected return (cumulative, discounted reward). Given a state, a policy returns an action to perform; an optimal policy is any policy that maximises the expected return in an environment. In this respect, RL aims to solve the same problem as optimal control. However, the challenge in RL is that the agent needs to learn about the consequences of actions in the environment by trial and error, as, unlike in optimal control, a model of the state transition dynamics is not available to the agent. Every interaction with the environment yields information, which the agent uses to update its knowledge. This perception-action-learning loop is illustrated in Figure 2.

Fig. 2. The perception-action-learning loop. At time t, the agent receives state st from the environment. The agent uses its policy to choose an action at .
Once the action is executed, the environment transitions a step, providing the next state st+1 as well as feedback in the form of a reward rt+1 . The agent
uses knowledge of state transitions, of the form (st , at , st+1 , rt+1 ), in order to learn and improve its policy.
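To make this loop concrete, the following minimal Python sketch shows how the interaction is typically structured in code; the Environment and Agent classes here are hypothetical stand-ins for a real simulator and a real learning algorithm, not an implementation from the literature.

```python
import random

class Environment:
    """Hypothetical stand-in for a real simulator (e.g., a video game)."""
    def reset(self):
        self.t = 0
        return 0.0                               # initial state s_0
    def step(self, action):
        self.t += 1
        next_state = random.gauss(0.0, 1.0)      # s_{t+1} drawn from unknown dynamics
        reward = 1.0 if action == 1 else 0.0     # scalar feedback r_{t+1}
        done = self.t >= 10                      # episode ends after 10 steps
        return next_state, reward, done

class Agent:
    """Hypothetical agent; a real policy would map states to actions."""
    def act(self, state):
        return random.choice([0, 1])
    def learn(self, state, action, next_state, reward):
        pass  # update the policy from the transition (s_t, a_t, s_{t+1}, r_{t+1})

env, agent = Environment(), Agent()
state, done = env.reset(), False
while not done:                                   # the perception-action-learning loop
    action = agent.act(state)                     # perception -> action
    next_state, reward, done = env.step(action)   # environment transitions one step
    agent.learn(state, action, next_state, reward)  # learning from the feedback
    state = next_state
```

Every concrete algorithm discussed later in this survey fills in the act and learn steps differently; the outer loop stays the same.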

A. Markov Decision Processes

Formally, RL can be described as a Markov decision process (MDP), which consists of:
• A set of states S, plus a distribution of starting states p(s0).
• A set of actions A.
• Transition dynamics T(st+1|st, at) that map a state-action pair at time t onto a distribution of states at time t + 1.
• An immediate/instantaneous reward function R(st, at, st+1).
• A discount factor γ ∈ [0, 1], where lower values place more emphasis on immediate rewards.

In general, the policy π is a mapping from states to a probability distribution over actions: π : S → p(A = a|S). If the MDP is episodic, i.e., the state is reset after each episode of length T, then the sequence of states, actions and rewards in an episode constitutes a trajectory or rollout of the policy. Every rollout of a policy accumulates rewards from the environment, resulting in the return R = Σ_{t=0}^{T−1} γ^t rt+1. The goal of RL is to find an optimal policy, π*, which achieves the maximum expected return from all states:

π* = argmax_π E[R|π]   (1)

It is also possible to consider non-episodic MDPs, where T = ∞. In this situation, γ < 1 prevents an infinite sum of rewards from being accumulated. Furthermore, methods that rely on complete trajectories are no longer applicable, but those that use a finite set of transitions still are.

A key concept underlying RL is the Markov property, i.e., only the current state affects the next state, or, in other words, the future is conditionally independent of the past given the present state. This means that any decisions made at st can be based solely on st−1, rather than {s0, s1, . . . , st−1}.

Although this assumption is held by the majority of RL algorithms, it is somewhat unrealistic, as it requires the states to be fully observable. A generalisation of MDPs are partially observable MDPs (POMDPs), in which the agent receives an observation ot, where the distribution of the observation p(ot+1|st+1, at) is dependent on the current state and the previous action [45]. In a control and signal processing context, the observation would be described by a measurement/observation mapping in a state-space model that depends on the current state and the previously applied action.

POMDP algorithms typically maintain a belief over the current state given the previous belief state, the action taken and the current observation. A more common approach in deep learning is to utilise recurrent neural networks (RNNs) [138, 35, 36, 72, 82], which, unlike feedforward neural networks, are dynamical systems. This approach to solving POMDPs is related to other problems using dynamical systems and state space models, where the true state can only be estimated [12].

B. Challenges in RL

It is instructive to emphasise some challenges faced in RL:
• The optimal policy must be inferred by trial-and-error interaction with the environment. The only learning signal the agent receives is the reward.
• The observations of the agent depend on its actions and can contain strong temporal correlations.
• Agents must deal with long-range time dependencies: often the consequences of an action only materialise after many transitions of the environment. This is known as the (temporal) credit assignment problem [115].

We will illustrate these challenges in the context of an indoor robotic visual navigation task: if the goal location is specified, we may be able to estimate the distance remaining (and use it as a reward signal), but it is unlikely that we will know exactly what series of actions the robot needs to take to reach the goal. As the robot must choose where to go as it navigates the building, its decisions influence which rooms it sees and, hence, the statistics of the visual sequence captured. Finally, after navigating several junctions, the robot may find itself in a dead end. There are a range of problems, from learning the consequences of actions, to balancing exploration versus exploitation, but ultimately these can all be addressed formally within the framework of RL.
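As a minimal illustration of the episodic return defined above, the sketch below computes R = Σ_{t=0}^{T−1} γ^t rt+1 for a recorded list of rewards; the reward values are arbitrary toy data.

```python
def discounted_return(rewards, gamma=0.99):
    """Return R = sum_t gamma^t * r_{t+1} for one rollout of a policy."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# A toy rollout of length T = 5: the rewards r_1, ..., r_5 received after each action.
rewards = [0.0, 0.0, 1.0, 0.0, 1.0]
print(discounted_return(rewards))   # 0.99^2 * 1 + 0.99^4 * 1, approximately 1.941
```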

III. REINFORCEMENT LEARNING ALGORITHMS

So far, we have introduced the key formalism used in RL, the MDP, and briefly noted some challenges in RL. In the following, we will distinguish between different classes of RL algorithms. There are two main approaches to solving RL problems: methods based on value functions and methods based on policy search. There is also a hybrid, actor-critic approach, which employs both value functions and policy search. We will now explain these approaches and other useful concepts for solving RL problems.

A. Value Functions

Value function methods are based on estimating the value (expected return) of being in a given state. The state-value function V^π(s) is the expected return when starting in state s and following π henceforth:

V^π(s) = E[R|s, π]   (2)

The optimal policy, π*, has a corresponding state-value function V*(s), and, vice versa, the optimal state-value function can be defined as

V*(s) = max_π V^π(s)   ∀s ∈ S.   (3)

If we had V*(s) available, the optimal policy could be retrieved by choosing among all actions available at st and picking the action a that maximises E_{st+1 ∼ T(st+1|st, a)}[V*(st+1)].

In the RL setting, the transition dynamics T are unavailable. Therefore, we construct another function, the state-action-value or quality function Q^π(s, a), which is similar to V^π, except that the initial action a is provided, and π is only followed from the succeeding state onwards:

Q^π(s, a) = E[R|s, a, π].   (4)

The best policy, given Q^π(s, a), can be found by choosing a greedily at every state: argmax_a Q^π(s, a). Under this policy, we can also define V^π(s) by maximising Q^π(s, a): V^π(s) = max_a Q^π(s, a).

Dynamic Programming: To actually learn Q^π, we exploit the Markov property and define the function as a Bellman equation [9], which has the following recursive form:

Q^π(st, at) = E_{st+1}[rt+1 + γ Q^π(st+1, π(st+1))]   (5)

This means that Q^π can be improved by bootstrapping, i.e., we can use the current values of our estimate of Q^π to improve our estimate. This is the foundation of Q-learning [136] and the state-action-reward-state-action (SARSA) algorithm [94]:

Q^π(st, at) ← Q^π(st, at) + αδ,   (6)

where α is the learning rate and δ = Y − Q^π(st, at) is the temporal difference (TD) error; here, Y is a target as in a standard regression problem. SARSA, an on-policy learning algorithm, is used to improve the estimate of Q^π by using transitions generated by the behavioural policy (the policy derived from Q^π), which results in setting Y = rt + γQ^π(st+1, at+1). Q-learning is off-policy, as Q^π is instead updated by transitions that were not necessarily generated by the derived policy. Instead, Q-learning uses Y = rt + γ max_a Q^π(st+1, a), which directly approximates Q*.

To find Q* from an arbitrary Q^π, we use generalised policy iteration, where policy iteration consists of policy evaluation and policy improvement. Policy evaluation improves the estimate of the value function, which can be achieved by minimising TD errors from trajectories experienced by following the policy. As the estimate improves, the policy can naturally be improved by choosing actions greedily based on the updated value function. Instead of performing these steps separately to convergence (as in policy iteration), generalised policy iteration allows for interleaving the steps, such that progress can be made more rapidly.

B. Sampling

Instead of bootstrapping value functions using dynamic programming methods, Monte Carlo methods estimate the expected return (2) from a state by averaging the return from multiple rollouts of a policy. Because of this, pure Monte Carlo methods can also be applied in non-Markovian environments. On the other hand, they can only be used in episodic MDPs, as a rollout has to terminate for the return to be calculated. It is possible to get the best of both methods by combining TD learning and Monte Carlo policy evaluation, as is done in the TD(λ) algorithm [115]. Similarly to the discount factor, the λ in TD(λ) is used to interpolate between Monte Carlo evaluation and bootstrapping. As demonstrated in Figure 3, this results in an entire spectrum of RL methods based around the amount of sampling utilised.

Another major value-function-based method relies on learning the advantage function A^π(s, a) [3]. Unlike producing absolute state-action values, as with Q^π, A^π instead represents relative state-action values. Learning relative values is akin to removing a baseline or average level of a signal; more intuitively, it is easier to learn that one action has better consequences than another than it is to learn the actual return from taking the action. A^π represents the relative advantage of actions through the simple relationship A^π = Q^π − V^π, and is also closely related to the baseline method of variance reduction within gradient-based policy search methods [139]. The idea of advantage updates has been utilised in many recent DRL algorithms [134, 34, 72, 104].
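A minimal tabular sketch of update (6) is given below, showing how the choice of target Y distinguishes SARSA from Q-learning; the states, actions and hyperparameter values are illustrative only.

```python
from collections import defaultdict

# Tabular estimates Q[s][a]; states and actions are assumed discrete here.
Q = defaultdict(lambda: defaultdict(float))
alpha, gamma = 0.1, 0.99            # learning rate and discount factor

def td_update(s, a, r, s_next, a_next=None, actions=(0, 1)):
    """One application of update (6): Q(s,a) <- Q(s,a) + alpha * (Y - Q(s,a))."""
    if a_next is not None:
        # SARSA (on-policy): the target uses the action actually taken next.
        Y = r + gamma * Q[s_next][a_next]
    else:
        # Q-learning (off-policy): the target maximises over next actions.
        Y = r + gamma * max(Q[s_next][b] for b in actions)
    delta = Y - Q[s][a]             # temporal difference (TD) error
    Q[s][a] += alpha * delta

# Example transition (s_t, a_t, r_{t+1}, s_{t+1}), with the next action a_{t+1} for SARSA.
td_update(s=0, a=1, r=1.0, s_next=2, a_next=0)   # SARSA target
td_update(s=0, a=1, r=1.0, s_next=2)             # Q-learning target
```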

Fig. 3. Two dimensions of RL algorithms, based on the backups used to learn or construct a policy. Along the bottom is 1-step TD learning, n-step TD learning [115], and pure Monte Carlo approaches. Along the side is sampling actions versus taking the expectation over all choices. Recreated from [115].

C. Policy Search

Policy search methods do not need to maintain a value function model, but directly search for an optimal policy π*. Typically, a parameterised policy πθ is chosen, whose parameters are updated to maximise the expected return E[R|θ] using either gradient-based or gradient-free optimisation [21]. Neural networks that encode policies have been successfully trained using both gradient-free [32, 19, 53] and gradient-based [139, 138, 37, 67, 103, 104, 63] methods. Gradient-free optimisation can effectively cover low-dimensional parameter spaces, but despite some successes in applying it to large networks [53], gradient-based training remains the method of choice for most DRL algorithms, being more sample-efficient when policies possess a large number of parameters.

When constructing the policy directly, it is common to output parameters for a probability distribution; for continuous actions, this could be the mean and standard deviations of Gaussian distributions, whilst for discrete actions this could be the individual probabilities of a multinomial distribution. The result is a stochastic policy from which we can directly sample actions. With gradient-free methods, finding better policies requires a heuristic search across a predefined class of models. Methods such as evolution strategies essentially perform hill-climbing in a subspace of policies [98], whilst more complex methods, such as compressed network search, impose additional inductive biases [53]. Perhaps the greatest advantage of gradient-free policy search is that such methods can also optimise non-differentiable policies.

Policy Gradients: Gradients can provide a strong learning signal as to how to improve a parameterised policy. However, to compute the expected return (1) we need to average over plausible trajectories induced by the current policy parameterisation. This averaging requires either deterministic approximations (e.g., linearisation) or stochastic approximations via sampling [21]. Deterministic approximations can only be applied in a model-based setting where a model of the underlying transition dynamics is available. In the more common model-free RL setting, a Monte Carlo estimate of the expected return is determined. For gradient-based learning, this Monte Carlo approximation poses a challenge since gradients cannot pass through these samples of a stochastic function. Therefore, we turn to an estimator of the gradient, known in RL as the REINFORCE rule [139], elsewhere known as the score function [29] or likelihood-ratio estimator [31]. The latter name is telling, as using the estimator is similar to the practice of optimising the log-likelihood in supervised learning. Intuitively, gradient ascent using the estimator increases the log probability of the sampled action, weighted by the return. More formally, the REINFORCE rule can be used to compute the gradient of an expectation over a function f of a random variable X with respect to parameters θ:

∇θ E_X[f(X; θ)] = E_X[f(X; θ) ∇θ log p(X)].   (7)

As this computation relies on the empirical return of a trajectory, the resulting gradients possess a high variance. By introducing unbiased estimates that are less noisy, it is possible to reduce the variance. The general methodology for performing this is to subtract a baseline, which means weighting updates by an advantage rather than the pure return. The simplest baseline is the average return taken over several episodes [139], but there are many more options available [104].

Actor-critic Methods: It is possible to combine value functions with an explicit representation of the policy, resulting in actor-critic methods, as shown in Figure 6. The actor (policy) learns by using feedback from the critic (value function). In doing so, these methods trade off variance reduction of policy gradients with bias introduction from value function methods [52, 104].

Actor-critic methods use the value function as a baseline for policy gradients, such that the only fundamental difference between actor-critic methods and other baseline methods is that actor-critic methods utilise a learned value function. For this reason, we will later discuss actor-critic methods as a subset of policy gradient methods.

D. Planning and Learning

Given a model of the environment, it is possible to use dynamic programming over all possible actions (Figure 3, top left), sample trajectories for heuristic search (as was done by AlphaGo [108]), or even perform an exhaustive search (Figure 3, top right). Sutton and Barto [115] define planning as any method which utilises a model to produce or improve a policy. This includes distribution models, which include T and R, and sample models, from which only samples of transitions can be drawn.

In RL, we focus on learning without access to the underlying model of the environment. However, interactions with the environment could be used to learn value functions, policies, and also a model. Model-free RL methods learn directly from interactions with the environment, but model-based RL methods can simulate transitions using the learned model, resulting in increased sample efficiency.
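Returning to the REINFORCE rule (7), the following NumPy sketch applies the score-function estimator, together with an average-return baseline, to a toy problem with three discrete actions; the "environment" here is a hypothetical stand-in that simply returns noisy rewards, and the hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions = 3
theta = np.zeros(n_actions)            # policy parameters: logits of a softmax policy

def policy(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()                 # pi(a) for each discrete action

def fake_return(action):
    # Hypothetical environment: action 2 yields the highest expected return.
    return rng.normal(loc=[0.0, 0.5, 1.0][action], scale=0.1)

learning_rate, n_batches = 0.1, 500
for _ in range(n_batches):
    probs = policy(theta)
    actions = rng.choice(n_actions, size=16, p=probs)     # sample a batch of actions
    returns = np.array([fake_return(a) for a in actions])
    baseline = returns.mean()                             # simplest variance-reduction baseline
    grad = np.zeros(n_actions)
    for a, R in zip(actions, returns):
        grad_log_pi = -probs.copy()
        grad_log_pi[a] += 1.0                             # d/dtheta log pi(a) for a softmax policy
        grad += (R - baseline) * grad_log_pi              # REINFORCE estimator, cf. eq. (7)
    theta += learning_rate * grad / len(actions)          # gradient ascent on E[R]

print(policy(theta))    # probability mass should concentrate on the best action
```

Subtracting the batch-average baseline leaves the estimator unbiased but noticeably reduces the variance of the updates, which is exactly the role the advantage plays in the methods discussed above.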

This is particularly important in domains where each interaction with the environment is expensive. However, learning a model introduces extra complexities, and there is always the danger of suffering from model errors, which in turn affects the learned policy. Although deep neural networks can potentially produce very complex and rich models [81, 112, 27], sometimes simpler, more data-efficient methods are preferable [34]. These considerations also play a role in actor-critic methods with learned value functions [52, 104].

E. The Rise of DRL

Many of the successes in DRL have been based on scaling up prior work in RL to high-dimensional problems. This is due to the learning of low-dimensional feature representations and the powerful function approximation properties of neural networks. By means of representation learning, DRL can deal efficiently with the curse of dimensionality, unlike tabular and traditional non-parametric methods [11]. For instance, convolutional neural networks (CNNs) can be used as components of RL agents, allowing them to learn directly from raw, high-dimensional visual inputs. In general, DRL is based on training deep neural networks to approximate the optimal policy π*, and/or the optimal value functions V*, Q* and A*.

Following our review of RL, the next part of the survey is similarly partitioned into value function and policy search methods in DRL. In these sections, we will focus on state-of-the-art techniques, as well as the historical works they are built upon. The focus of the state-of-the-art techniques will be on those for which the state space is conveyed through visual inputs, e.g., images and video. To conclude, we will examine ongoing research areas and open challenges.

IV. VALUE FUNCTIONS

The well-known function approximation properties of neural networks led naturally to the use of deep learning to regress functions for use in RL agents. Indeed, one of the earliest success stories in RL is TD-Gammon, a neural network that reached expert-level performance in Backgammon in the early 90s [119]. Using TD methods, the network took in the state of the board to predict the probability of black or white winning. Although this simple idea has been echoed in later work [108], progress in RL research has favoured the explicit use of value functions, which can capture the structure underlying the environment. From early value function methods in DRL, which took simple states as input [93], current methods are now able to tackle visually and conceptually complex environments [71, 103, 72, 82, 142].

A. Function Approximation and the DQN

We begin our survey of value-function-based DRL algorithms with the deep Q-network (DQN) [71], pictured in Figure 4, which achieved scores across a large range of classic Atari 2600 video games [8] that were comparable to that of a professional video games tester. The inputs to the DQN are four greyscale frames of the game, concatenated over time, which are initially processed by several convolutional layers in order to extract spatiotemporal features, such as the movement of the ball in Pong or Breakout. The final feature map from the convolutional layers is processed by several fully connected layers, which more implicitly encode the effects of actions. This contrasts with more traditional controllers that use fixed preprocessing steps, which are therefore unable to adapt their processing of the state in response to the learning signal.

A forerunner of the DQN, neural fitted Q iteration (NFQ), involved training a neural network to return the Q-value given a state-action pair [93]. NFQ was later extended to train a network to drive a slot car using raw visual inputs from a camera over the race track, by combining a deep autoencoder to reduce the dimensionality of the inputs with a separate branch to predict Q-values [58]. Although the previous network could have been trained for both reconstruction and RL tasks simultaneously, it was both more reliable and computationally efficient to train the two parts of the network sequentially.

The DQN [71] is closely related to the model proposed by Lange et al. [58], but was the first RL algorithm that was demonstrated to work directly from raw visual inputs and on a wide variety of environments. It was designed such that the final fully connected layer outputs Q^π(s, ·) for all action values in a discrete set of actions; in this case, the various directions of the joystick and the fire button. This not only enables the best action, argmax_a Q^π(s, a), to be chosen after a single forward pass of the network, but also allows the network to more easily encode action-independent knowledge in the lower, convolutional layers. With merely the goal of maximising its score on a video game, the DQN learns to extract salient visual features, jointly encoding objects, their movements, and, most importantly, their interactions. Using techniques originally developed for explaining the behaviour of CNNs in object recognition tasks, we can also inspect what parts of its view the agent considers important (see Figure 5).

The true underlying state of the game is contained within 128 bytes of Atari 2600 RAM. However, the DQN was designed to directly learn from visual inputs (210 × 160 px 8-bit RGB images), which it takes as the state s. It is impractical to represent Q^π(s, a) exactly as a lookup table: when combined with 18 possible actions, we obtain a Q-table of size |S| × |A| = 18 × 256^(3×210×160). Even if it were feasible to create such a table, it would be sparsely populated, and information gained from one state-action pair cannot be propagated to other state-action pairs. The strength of the DQN lies in its ability to compactly represent both high-dimensional observations and the Q-function using deep neural networks. Without this ability, tackling the discrete Atari domain from raw visual inputs would be impractical.

The DQN addressed the fundamental instability problem of using function approximation in RL [123] by the use of two techniques: experience replay [68] and target networks. Experience replay memory stores transitions of the form (st, at, st+1, rt+1) in a cyclic buffer, enabling the RL agent to sample from and train on previously observed data offline. Not only does this massively reduce the amount of interactions needed with the environment, but batches of experience can be sampled, reducing the variance of learning updates.

Fig. 4. The DQN [71]. The network takes the state (a stack of greyscale frames from the video game) and processes it with convolutional and fully connected layers, with ReLU nonlinearities in between each layer. At the final layer, the network outputs a discrete action, which corresponds to one of the possible control inputs for the game. Given the current state and chosen action, the game returns a new score. The DQN uses the reward (the difference between the new score and the previous one) to learn from its decision. More precisely, the reward is used to update its estimate of Q, and the error between its previous estimate and its new estimate is backpropagated through the network.
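The sketch below gives one possible PyTorch rendering of a DQN-style convolutional Q-network; the layer sizes follow common reimplementations and are indicative only, with the exact architecture described in [71].

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Convolutional Q-network in the style of the DQN: the final layer outputs one
    Q-value per discrete action, so a single forward pass scores all actions."""
    def __init__(self, n_actions, in_frames=4):
        super().__init__()
        self.features = nn.Sequential(            # convolutional layers extract spatiotemporal features
            nn.Conv2d(in_frames, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(                # fully connected layers encode the effects of actions
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),            # Q(s, a) for every action a
        )

    def forward(self, state):
        return self.head(self.features(state))

# A stack of 4 greyscale 84x84 frames (batch of 1); the greedy action is the argmax.
q_net = QNetwork(n_actions=18)
state = torch.zeros(1, 4, 84, 84)
greedy_action = q_net(state).argmax(dim=1)
```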

Furthermore, by sampling uniformly from a large memory, the temporal correlations that can adversely affect RL algorithms are broken. Finally, from a practical perspective, batches of data can be efficiently processed in parallel by modern hardware, increasing throughput. Whilst the original DQN algorithm used uniform sampling [71], later work showed that prioritising samples based on TD errors is more effective for learning [100]. We note that although experience replay is typically thought of as a model-free technique, it could actually be considered a simple model [128].

The second stabilising method, introduced by Mnih et al. [71], is the use of a target network that initially contains the weights of the network enacting the policy, but is kept frozen for a large period of time. Rather than having to calculate the TD error based on its own rapidly fluctuating estimates of the Q-values, the policy network uses the fixed target network. During training the weights of the target network are updated to match the policy network after a fixed number of steps. Both experience replay and target networks have gone on to be used in subsequent DRL works [34, 67, 135, 76].

B. Q-Function Modifications

Considering that one of the key components of the DQN is a function approximator for the Q-function, it can benefit from fundamental advances in RL. van Hasselt [126] showed that the single estimator used in the Q-learning update rule overestimates the expected return due to the use of the maximum action value as an approximation of the maximum expected action value. Double-Q learning provides a better estimate through the use of a double estimator [126]. Whilst double-Q learning requires an additional function to be learned, later work proposed using the already available target network from the DQN algorithm, resulting in significantly better results with only a small change in the update step [127].

Yet another way to adjust the DQN architecture is to decompose the Q-function into meaningful functions, such as constructing Q^π by adding together separate layers that compute the state-value function V^π and advantage function A^π [134]. Rather than having to come up with accurate Q-values for all actions, the duelling DQN [134] benefits from a single baseline for the state in the form of V^π, and easier-to-learn relative values in the form of A^π. The combination of the duelling DQN with prioritised experience replay [100] is one of the state-of-the-art techniques in discrete action settings.

Further insight into the properties of A^π by Gu et al. [34] led them to modify the DQN with a convex advantage layer that extended the algorithm to work over sets of continuous actions, creating the normalised advantage function (NAF) algorithm. Benefiting from experience replay, target networks and advantage updates, NAF is one of several state-of-the-art techniques in continuous control problems [34].

V. POLICY SEARCH

Policy search methods aim to directly find policies by means of gradient-free or gradient-based methods. Prior to the current surge of interest in DRL, several successful methods in DRL eschewed the commonly used backpropagation algorithm in favour of evolutionary algorithms [32, 19, 53], which are gradient-free policy search algorithms. Evolutionary methods rely on evaluating the performance of a population of agents. Hence, they are expensive for large populations or agents with many parameters. However, as black-box optimisation methods they can be used to optimise arbitrary, non-differentiable models and naturally allow for more exploration in parameter space. In combination with a compressed representation of neural network weights, evolutionary algorithms can even be used to train large networks; such a technique resulted in the first deep neural network to learn an RL task, straight from high-dimensional visual inputs [53]. Recent work has reignited interest in evolutionary methods for RL as they can potentially be distributed at larger scales than techniques that rely on gradients [98].

A. Backpropagation through Stochastic Functions

The workhorse of DRL, however, remains backpropagation. The previously discussed REINFORCE rule [139] allows neural networks to learn stochastic policies in a task-dependent manner, such as deciding where to look in an image to track [102], classify [70] or caption objects [141]. In these cases, the stochastic variable would determine the coordinates of a small crop of the image, and hence reduce the amount of computation needed. This usage of RL to make discrete, stochastic decisions over inputs is known in the deep learning literature as hard attention, and is one of the more compelling uses of basic policy search methods in recent years, having many applications outside of traditional RL domains.
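The two stabilising techniques described in Section IV-A, experience replay and target networks, can be sketched as follows. This is an illustrative PyTorch fragment with a small fully connected Q-network standing in for the convolutional network of the DQN; the buffer size, network sizes and hyperparameters are assumptions, not the original settings.

```python
import random
from collections import deque

import torch
import torch.nn as nn

# Cyclic experience replay buffer storing transitions (s_t, a_t, s_{t+1}, r_{t+1}, done).
replay = deque(maxlen=100_000)

def make_q_net(state_dim=4, n_actions=2):
    return nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

policy_net = make_q_net()
target_net = make_q_net()
target_net.load_state_dict(policy_net.state_dict())   # target network starts as a frozen copy
optimiser = torch.optim.Adam(policy_net.parameters(), lr=1e-3)
gamma = 0.99

def train_step(batch_size=32):
    """One DQN-style update: sample uniformly from replay and regress towards the target
    Y = r + gamma * max_a Q_target(s', a), holding the target network fixed."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, s_next, r, done = (
        torch.stack([torch.as_tensor(t[i], dtype=torch.float32) for t in batch])
        for i in range(5)
    )
    q_sa = policy_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)      # Q(s_t, a_t)
    with torch.no_grad():
        y = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values
    loss = nn.functional.smooth_l1_loss(q_sa, y)                          # TD-error regression
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

# Transitions are appended as they are experienced, e.g.:
replay.append(([0.0, 0.0, 0.0, 0.0], 1, [0.1, 0.0, 0.0, 0.0], 1.0, 0.0))
train_step()
# Periodically (every C steps) the target network is synchronised with the policy network:
# target_net.load_state_dict(policy_net.state_dict())
```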

Fig. 5. Saliency map of a trained DQN [71] playing Space Invaders [8]. By backpropagating the training signal to the image space, it is possible to see what a neural-network-based agent is attending to. In this frame, the most salient points (shown with the red overlay) are the laser that the agent recently fired, and also the enemy that it anticipates hitting in a few time steps.

Fig. 6. Actor-critic set-up. The actor (policy) receives a state from the environment and chooses an action to perform. At the same time, the critic (value function) receives the state and reward resulting from the previous interaction. The critic uses the TD error calculated from this information to update itself and the actor. Recreated from [115].

One of the notable new methods in policy search is trust region policy optimisation (TRPO), which guarantees monotonic improvement in the policy by preventing it from deviating too wildly from previous policies [103]. On top of standard policy gradient methods, TRPO uses the notion of a trust region, which restricts optimisation steps to within a region where the approximation of the true cost function still holds. The idea of constraining policy gradient updates was explored earlier through the use of natural gradient updates [46] and also by means of the Kullback-Leibler (KL) divergence [48, 88]. In contrast with previous works, TRPO constrains each policy update to a fixed KL divergence from the current policy inducing the action conditional p(a|s), which is more feasible for use with current networks. Later work by Schulman et al. [104] introduced generalised advantage estimation (GAE), proposing more advanced variance reduction baselines for policy gradient methods. The combination of TRPO and GAE remains one of the state-of-the-art RL techniques in continuous control.

Searching directly for a policy represented by a neural network with very many parameters can be difficult and can suffer from severe local minima. One way around this is to use guided policy search (GPS), which takes a few sequences of actions from another controller (which could be constructed using a separate method, such as optimal control). GPS learns from them by using supervised learning in combination with importance sampling, which corrects for off-policy samples [62]. This approach effectively biases the search towards a good (local) optimum. GPS works in a loop, by optimising policies to match sampled trajectories, and optimising trajectory distributions to match the policy and minimise costs. Initially, GPS was used to train neural networks on simulated continuous RL problems [61], but was later utilised to train a policy for a real robot based on visual inputs [63]. This research by Levine et al. [63] showed that it was possible to train visuomotor policies for a robot end-to-end, straight from the RGB pixels of the camera to motor torques, and, hence, is one of the seminal works in DRL.

B. Actor-Critic Methods

Instead of utilising Monte Carlo returns as baselines for policy gradient methods, actor-critic approaches have grown in popularity as an effective means of combining the benefits of policy search methods with learned value functions, which are able to learn from TD errors. They can benefit from improvements in both policy gradient methods, such as GAE [104], and value function methods, such as target networks [71]. In the last few years, DRL actor-critic methods have been scaled up from learning simulated physics tasks [37, 67] to real robotic visual navigation tasks [142], directly from image pixels.

One recent development in the context of actor-critic algorithms is deterministic policy gradients (DPGs) [107], which extend the standard policy gradient theorems for stochastic policies [139] to deterministic policies. One of the major advantages of DPGs is that, whilst stochastic policy gradients integrate over both state and action spaces, DPGs only integrate over the state space, requiring fewer samples in problems with large action spaces. In the initial work on DPGs, Silver et al. [107] introduced and demonstrated an off-policy actor-critic algorithm that vastly improved upon a stochastic policy gradient equivalent in high-dimensional continuous control problems. Later work introduced deep DPG (DDPG), which utilised neural networks to operate on high-dimensional, visual state spaces [67]. In the same vein as DPGs, Heess et al. [37] devised a method for calculating gradients to optimise stochastic policies, by reparameterising [50, 92] the stochasticity away from the network, thereby allowing standard gradients to be used (instead of the high-variance REINFORCE estimator [139]). The resulting stochastic value gradient (SVG) methods are flexible, and can be used both with (SVG(0) and SVG(1)) and without (SVG(∞)) value function critics, and with (SVG(∞) and SVG(1)) and without (SVG(0)) learned models.
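A one-step actor-critic update of the kind described above (and depicted in Figure 6) can be sketched as follows; the network sizes and the use of a single shared optimiser are illustrative assumptions, with the critic's TD error acting as the advantage estimate that weights the policy gradient.

```python
import torch
import torch.nn as nn

state_dim, n_actions, gamma = 4, 2, 0.99
actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))  # policy pi(a|s)
critic = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 1))         # value V(s)
optimiser = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

def actor_critic_update(s, a, r, s_next, done):
    """One-step actor-critic update: the critic's TD error serves as an estimate of the
    advantage, which weights the policy-gradient (log-probability) term for the actor."""
    s, s_next = torch.as_tensor(s), torch.as_tensor(s_next)
    v, v_next = critic(s), critic(s_next).detach()
    td_target = r + gamma * (1.0 - done) * v_next            # bootstrapped target
    td_error = td_target - v                                  # advantage estimate
    log_prob = torch.log_softmax(actor(s), dim=-1)[a]         # log pi(a|s)
    actor_loss = -td_error.detach() * log_prob                # policy gradient with learned baseline
    critic_loss = td_error.pow(2)                             # value regression towards the target
    loss = (actor_loss + critic_loss).sum()
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

# Example transition: state vectors, a discrete action, a reward and a terminal flag.
actor_critic_update([0.1, 0.0, 0.2, 0.0], a=1, r=1.0, s_next=[0.0, 0.1, 0.2, 0.1], done=0.0)
```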

Later work proceeded to integrate DPGs and SVGs with RNNs, allowing them to solve continuous control problems in POMDPs, learning directly from pixels [36]. Together, DPGs and SVGs can be considered algorithmic approaches for improving learning efficiency in DRL.

An orthogonal approach to speeding up learning is to exploit parallel computation. In particular, methods for training networks through asynchronous gradient updates have been developed for use on both single machines [91] and distributed systems [20]. By keeping a canonical set of parameters that are read by and updated in an asynchronous fashion by multiple copies of a single network, computation can efficiently be distributed over both processing cores in a single CPU, and across CPUs in a cluster of machines. Using a distributed system, Nair et al. [77] developed a framework for training multiple DQNs in parallel, achieving both better performance and a reduction in training time. However, the simpler asynchronous advantage actor-critic (A3C) algorithm [72], developed for both single and distributed machine settings, has become one of the most popular DRL techniques in recent times. A3C combines advantage updates with the actor-critic formulation, and relies on asynchronously updated policy and value function networks trained in parallel over several processing threads. The use of multiple agents, situated in their own, independent environments, not only stabilises improvements in the parameters, but conveys an additional benefit in allowing for more exploration to occur. A3C has been used as a standard starting point in many subsequent works, including the work of Zhu et al. [142], who applied it to robotic navigation in the real world through visual inputs.

There have been several major advancements on the original A3C algorithm that reflect various motivations in the field of DRL. The first is actor-critic with experience replay [135], which adds Retrace(λ) off-policy bias correction [75] to A3C, allowing it to use experience replay in order to improve sample complexity. Others have attempted to bridge the gap between value- and policy-based RL, utilising theoretical advancements to improve upon the original A3C [76, 80, 105]. Finally, there is a growing trend towards exploiting auxiliary tasks to improve the representations learned by DRL agents, and, hence, improve both the learning speed and final performance of these agents [66, 43, 69].

VI. CURRENT RESEARCH AND CHALLENGES

To conclude, we will highlight some current areas of research in DRL, and the challenges that still remain. Previously, we have focused mainly on model-free methods, but we will now examine a few model-based DRL algorithms in more detail. Model-based RL algorithms play an important role for data-efficient RL, but also in trading off exploration with exploitation. After tackling exploration strategies, we shall then address hierarchical RL (HRL), which imposes an inductive bias on the final policy by explicitly factorising it into several levels. When available, trajectories from other controllers can be used to bootstrap the learning process, leading us to imitation learning and inverse RL (IRL). For the final topic specific to RL, we will look at multi-agent systems, which have their own special considerations. We then bring to attention two broader areas, the use of RNNs and transfer learning, in the context of DRL. Finally, we examine the issue of evaluating RL, and current benchmarks for DRL.

A. Model-based RL

The key idea behind model-based RL is to learn a transition model that allows for simulation of the environment without interacting with the environment directly. Model-based RL does not assume specific prior knowledge. However, in practice, we can incorporate prior knowledge (e.g., physics-based models [47]) to speed up learning. Model learning plays an important role in reducing the amount of required interactions with the (real) environment, which may be limited in practice. For example, it is unrealistic to perform millions of experiments with a robot in a reasonable amount of time and without significant hardware wear and tear. There are various approaches to learning predictive models of dynamical systems using pixel information. Based on the deep dynamical model [131], where high-dimensional observations are embedded into a lower-dimensional space using autoencoders, several model-based DRL algorithms have been proposed for learning models and policies from pixel information [81, 137, 132]. If a sufficiently accurate model of the environment can be learned, then even simple controllers can be used to control a robot directly from camera images [27]. Learned models can also be used to guide exploration purely based on simulation of the environment, with deep models allowing these techniques to be scaled up to high-dimensional visual domains [112].

Although deep neural networks can make reasonable predictions in simulated environments over hundreds of timesteps [17], they typically require many samples to tune the large amount of parameters they contain. Training these models often requires more samples (interaction with the environment) than simpler models. For this reason, Gu et al. [34] train locally linear models for use with the NAF algorithm (the continuous equivalent of the DQN [71]) to improve the algorithm's sample complexity in the robotic domain, where samples are expensive. It seems likely that the usage of deep models in model-based DRL could be massively spurred by general advances in improving the data efficiency of neural networks.

B. Exploration vs. Exploitation

One of the greatest difficulties in RL is the fundamental dilemma of exploration versus exploitation: when should the agent try out (perceived) non-optimal actions in order to explore the environment (and potentially improve the model), and when should it exploit the optimal action in order to make useful progress? Off-policy algorithms, such as the DQN [71], typically use the simple ε-greedy exploration policy, which chooses a random action with probability ε ∈ [0, 1], and the optimal action otherwise. By decreasing ε over time, the agent progresses towards exploitation.
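A minimal sketch of the ε-greedy rule is given below; the linear decay schedule is an assumption we add for illustration, not part of the original description.

```python
import random

def epsilon_greedy(q_values, epsilon):
    """Pick a random action with probability epsilon, otherwise the greedy action."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                       # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])      # exploit

def epsilon_at(step, start=1.0, end=0.1, decay_steps=10_000):
    """Linearly decay epsilon from start to end over decay_steps (an assumed schedule)."""
    fraction = min(step / decay_steps, 1.0)
    return start + fraction * (end - start)

q_values = [0.2, 0.5, 0.1]
action = epsilon_greedy(q_values, epsilon_at(step=2_500))
```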

Although adding independent noise for exploration is usable in continuous control problems, more sophisticated strategies inject noise that is correlated over time (e.g., from stochastic processes) in order to better preserve momentum [67].

The observation that temporal correlation is important led Osband et al. [83] to propose the bootstrapped DQN, which maintains several Q-value heads that learn different values through a combination of different weight initialisations and bootstrapped sampling from experience replay memory. At the beginning of each training episode, a different head is chosen, leading to temporally-extended exploration. Usunier et al. [125] later proposed a similar method that performed exploration in policy space by adding noise to a single output head, using zero-order gradient estimates to allow backpropagation through the policy.

One of the main principled exploration strategies is the upper confidence bound (UCB) algorithm, based on the principle of optimism in the face of uncertainty [56]. The idea behind UCB is to pick actions that maximise E[R] + κσ[R], where σ[R] is the standard deviation of the return and κ > 0. UCB therefore encourages exploration in regions with high uncertainty and moderate expected return. Whilst easily achievable in small tabular cases, the use of powerful density models has allowed this algorithm to scale to high-dimensional visual domains with DRL [7]. UCB is only one technique for trading off exploration and exploitation in the context of Bayesian optimisation [106]; future work in DRL may benefit from investigating other successful techniques that are used in Bayesian optimisation.

UCB can also be considered one way of implementing intrinsic motivation, which is a general concept that advocates decreasing uncertainty/making progress in learning about the environment [101]. There have been several DRL algorithms that try to implement intrinsic motivation via minimising model prediction error [112, 86] or maximising information gain [73, 42].

C. Hierarchical RL

In the same way that deep learning relies on hierarchies of features, HRL relies on hierarchies of policies. Early work in this area introduced options, in which, apart from primitive actions (single-timestep actions), policies could also run other policies (multi-timestep actions) [116]. This approach allows top-level policies to focus on higher-level goals, whilst sub-policies are responsible for fine control. Several works in DRL have attempted HRL by the use of one top-level policy that chooses between subpolicies, where the division of states or goals into subpolicies is achieved either manually [1, 121, 54] or automatically [2, 129, 130]. One of the ways to help with constructing subpolicies is to focus on discovering and reaching goals, which are specific states in the environment; they may often be locations, which an agent should navigate to. Whether utilised with HRL or not, the discovery and generalisation of goals is also an important area of ongoing research [99, 55, 130].

D. Imitation Learning and Inverse RL

One may ask why, if given a sequence of optimal actions from expert demonstrations, it is not possible to use supervised learning in a straightforward manner, a case of learning from demonstration. This is indeed possible, and is known as behavioural cloning in traditional RL literature. Taking advantage of the stronger signals available in supervised learning problems, behavioural cloning enjoyed success in earlier neural network research, with the most notable success being ALVINN, one of the earliest autonomous cars [89]. However, behavioural cloning cannot adapt to new situations, and small deviations from the demonstration during the execution of the learned policy can compound and lead to scenarios where the policy is unable to recover. A more generalisable solution is to use provided trajectories to guide the learning of suitable state-action pairs, but fine-tune the agent using RL [39].

The goal of IRL is to estimate an unknown reward function from observed trajectories that characterise a desired solution [78]; IRL can be used in combination with RL to improve upon demonstrated behaviour. Using the power of deep neural networks, it is now possible to learn complex, nonlinear reward functions for IRL [140]. Ho and Ermon [41] showed that policies are uniquely characterised by their occupancies (visited state and action distributions), allowing IRL to be reduced to the problem of measure matching. With this insight, they were able to use generative adversarial training [33] to facilitate reward function learning in a more flexible manner, resulting in the generative adversarial imitation learning (GAIL) algorithm. GAIL was later extended to allow IRL to be applied even when receiving expert trajectories from a different visual viewpoint to that of the RL agent [111]. In complementary work, Baram et al. [4] exploit gradient information that was not used in GAIL to learn models within the IRL process.

E. Multi-agent RL

Usually, RL considers a single learning agent in a stationary environment. In contrast, multi-agent RL (MARL) considers multiple agents learning through RL, and often the non-stationarity introduced by other agents changing their behaviours as they learn [14]. In DRL, the focus has been on enabling (differentiable) communication between agents, which allows them to co-operate. Several approaches have been proposed for this purpose, including passing messages to agents sequentially [28], using a bidirectional channel (providing ordering with less signal loss) [87], and an all-to-all channel [114]. The addition of communication channels is a natural strategy to apply to MARL in complex scenarios and does not preclude the usual practice of modelling co-operative or competing agents as applied elsewhere in the MARL literature [14]. Other DRL works of note in MARL investigate the effects of learning and sequential decision making in game theory [38, 60].
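As a concrete illustration of the UCB rule E[R] + κσ[R] from the exploration discussion above (Section VI-B), the sketch below applies it to a toy multi-armed bandit, using empirical means and standard deviations of observed returns; this is a deliberate simplification of the density-model-based approaches used in DRL [7].

```python
import statistics

def ucb_action(return_history, kappa=1.0):
    """Pick the action maximising (mean return) + kappa * (std of return), i.e. optimism
    in the face of uncertainty; actions with too little data are tried first."""
    best_action, best_score = 0, float("-inf")
    for action, returns in enumerate(return_history):
        if len(returns) < 2:
            return action                          # under-explored action: try it first
        score = statistics.mean(returns) + kappa * statistics.stdev(returns)
        if score > best_score:
            best_action, best_score = action, score
    return best_action

# Per-action lists of observed returns (a toy bandit with three actions).
history = [[0.9, 1.1, 1.0], [0.2, 1.8, 1.0], [0.5, 0.6, 0.4]]
print(ucb_action(history, kappa=1.0))
```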

prior observations. By using recurrent connections between its hidden units, the deep recurrent Q-network (DRQN) introduced by Hausknecht and Stone [35] was able to successfully infer the velocity of the ball in the game Pong, even when frames of the game were randomly blanked out. Further improvements were gained by adding attention to the DRQN, a technique where additional connections are added from the recurrent units to lower layers, resulting in the deep attention recurrent Q-network (DARQN) [110]. Attention gives a network the ability to choose which part of its next input to focus on, and allowed the DARQN to beat both the DQN and DRQN on games which require longer-term planning. However, the DQN outperformed the DRQN and DARQN on games requiring quick reactions, where Q-values can fluctuate more rapidly.
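The architectural change behind the DRQN can be sketched in a few lines, assuming a PyTorch-style model: the fully connected layer that follows the DQN's convolutional stack is replaced by a recurrent (LSTM) layer, so that Q-values depend on a hidden state integrated over time; the layer sizes below are illustrative rather than those of the published networks.

import torch
import torch.nn as nn

class DRQN(nn.Module):
    """Recurrent Q-network sketch: convolutional features -> LSTM -> Q-values."""
    def __init__(self, num_actions, hidden_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=64 * 9 * 9, hidden_size=hidden_dim, batch_first=True)
        self.q_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, frames, hidden=None):
        # frames: (batch, time, 1, 84, 84) -- a single frame per step, unlike the frame-stacked DQN
        b, t = frames.shape[:2]
        feats = self.conv(frames.view(b * t, *frames.shape[2:])).view(b, t, -1)
        out, hidden = self.lstm(feats, hidden)  # the hidden state integrates information over time
        return self.q_head(out), hidden         # Q-values for every time step

q_values, state = DRQN(num_actions=6)(torch.zeros(1, 4, 1, 84, 84))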
Taking recurrent processing further, it is possible to add a differentiable memory to the DQN, which allows it to more flexibly process information in its working memory [82]. In traditional RNNs, recurrent units are responsible for both performing calculations and storing information. Differentiable memories add large matrices that are purely used for storing information, and can be accessed using differentiable read and write operations, analogously to computer memory. With their key-value-based memory Q-network (MQN), Oh et al. [82] constructed an agent that could solve a simple maze built in Minecraft, where the correct goal in each episode was indicated by a coloured block shown near the start of the maze. The MQN, and especially its more sophisticated variants, significantly outperformed both DQN and DRQN baselines, highlighting the importance of using decoupled memory storage. More recent work, where the memory was given a 2D structure in order to resemble a spatial map, hints at future research where more specialised memory structures will be developed to address specific problems, such as 2D or 3D navigation [84]. Alternatively, differentiable memories can be used as approximate hash tables, allowing DRL algorithms to store and retrieve successful experiences to facilitate rapid learning [90].
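The differentiable read operation underlying such key-value memories can be summarised as soft attention over stored slots; the sketch below (PyTorch, with illustrative dimensions and no claim to match the MQN exactly) returns a weighted sum of stored values, with gradients flowing into the query, keys and values alike.

import torch

def memory_read(query, keys, values):
    """Differentiable key-value read: soft attention over memory slots."""
    # query: (d_key,)   keys: (num_slots, d_key)   values: (num_slots, d_val)
    scores = keys @ query                   # similarity of the query to every stored key
    weights = torch.softmax(scores, dim=0)  # differentiable 'addressing' of the memory
    return weights @ values                 # blended read-out; gradients reach keys and values

keys = torch.randn(16, 32, requires_grad=True)     # 16 memory slots with 32-d keys
values = torch.randn(16, 64, requires_grad=True)   # and 64-d stored values
read = memory_read(torch.randn(32), keys, values)  # (64,) read vector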
Note that RNNs are not restricted to value-function-based methods but have also been successfully applied to policy search [138] and actor-critic methods [36, 72].
G. Transfer Learning

Even though DRL algorithms can process high-dimensional inputs, it is rarely feasible to train RL agents directly on visual inputs in the real world, due to the large number of samples required. To speed up learning in DRL, it is possible to exploit previously acquired knowledge from related tasks, which comes in several guises: transfer learning, multitask learning [16] and curriculum learning [10] to name a few. There is much interest in transferring learning from one task to another, particularly from training in physics simulators with visual renderers and fine-tuning the models in the real world. This can be achieved in a naive fashion, directly using the same network in both the simulated and real phases [142], or with more sophisticated training procedures that directly try to mitigate the problem of neural networks catastrophically forgetting old knowledge by adding extra layers when transferring domains [96, 97]. Other approaches involve directly learning an alignment between simulated and real visuals [124], or even between two different camera viewpoints [111].
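As a simplified picture of the naive simulation-to-real route mentioned above, the sketch below (PyTorch; the SmallPolicy class, input resolution and choice of frozen layers are assumptions for illustration, not any cited procedure) copies weights learned in simulation into a new policy and fine-tunes only its action head on real-world data.

import torch
import torch.nn as nn

class SmallPolicy(nn.Module):
    """Toy visuomotor policy: a convolutional encoder followed by an action head."""
    def __init__(self, num_actions):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(), nn.Flatten())
        self.head = nn.Linear(16 * 40 * 40, num_actions)  # assumes 84x84 RGB inputs
    def forward(self, x):
        return self.head(self.encoder(x))

sim_policy = SmallPolicy(num_actions=4)     # assume this was trained on rendered simulator frames
real_policy = SmallPolicy(num_actions=4)
real_policy.load_state_dict(sim_policy.state_dict())  # naive transfer: reuse the simulator weights

for p in real_policy.encoder.parameters():
    p.requires_grad = False                 # freeze the visual features; adapt only the action head
optimiser = torch.optim.Adam(real_policy.head.parameters(), lr=1e-4)  # gentle fine-tuning on real data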
A different form of transfer can be utilised to help RL in the form of multitask training [66, 43, 69]. Especially with neural networks, supervised and unsupervised learning tasks can help train features that can be used by RL agents, making optimising the RL objective easier to achieve. For example, the unsupervised reinforcement and auxiliary learning (UNREAL) A3C-based agent is additionally trained with pixel control (maximally changing pixel inputs), plus reward prediction and value function learning from experience replay [43]. Meanwhile, the A3C-based agent of Mirowski et al. [69] was additionally trained to construct a depth map given RGB inputs, which helps it in its task of learning to navigate a 3D environment. In an ablation study, Mirowski et al. [69] showed that predicting depth was more useful than receiving depth as an extra input, lending further support to the idea that gradients induced by auxiliary tasks can be extremely effective at boosting DRL.
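The common mechanism in these works is to add weighted auxiliary prediction losses to the RL objective so that the shared encoder receives additional gradient signal; the sketch below (PyTorch, with made-up heads, targets and coefficients, not the actual UNREAL or Mirowski et al. architectures) shows how such a combined loss might be assembled.

import torch
import torch.nn.functional as F

def combined_loss(rl_loss, features, aux_heads, aux_targets, weights):
    """Add weighted auxiliary prediction losses to the main RL loss."""
    loss = rl_loss
    for name, head in aux_heads.items():    # e.g. 'reward_prediction', 'depth'
        prediction = head(features)         # auxiliary heads share the RL encoder's features
        loss = loss + weights[name] * F.mse_loss(prediction, aux_targets[name])
    return loss                             # one backward pass trains the encoder and all heads

# Illustrative usage with a shared 256-d feature vector and two auxiliary tasks.
features = torch.randn(8, 256)
aux_heads = {"reward_prediction": torch.nn.Linear(256, 1), "depth": torch.nn.Linear(256, 64)}
aux_targets = {"reward_prediction": torch.randn(8, 1), "depth": torch.randn(8, 64)}
loss = combined_loss(torch.tensor(0.5), features, aux_heads, aux_targets,
                     {"reward_prediction": 0.1, "depth": 0.3})
loss.backward()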
Transfer learning can also be used to construct more parameter-efficient policies. In the student-teacher paradigm in machine learning, one can first train a more powerful teacher model, and then use it to guide the training of a less powerful student model. Whilst originally applied to supervised learning, the neural network knowledge transfer technique known as distillation [40] has been utilised to both transfer policies learned by large DQNs to smaller DQNs, and transfer policies learned by several DQNs trained on separate games to one single DQN [85, 95]. This is an important step if we wish to construct agents that can accomplish a wide range of tasks since training directly on multiple RL objectives at once may be infeasible.
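A minimal sketch of the distillation step, assuming PyTorch and a temperature-scaled softmax over Q-values (the temperature, network sizes and data source are illustrative; see [40, 85, 95] for the methods themselves): the student is trained to match the teacher's softened action distribution on states drawn from the teacher's experience.

import torch
import torch.nn.functional as F

def distillation_loss(student_q, teacher_q, temperature=0.01):
    """KL divergence between softened teacher and student action distributions."""
    teacher_probs = F.softmax(teacher_q / temperature, dim=-1)        # sharp targets from the teacher
    student_log_probs = F.log_softmax(student_q / temperature, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")

# Illustrative usage: a large teacher DQN guides a much smaller student on the same states.
teacher = torch.nn.Sequential(torch.nn.Linear(128, 512), torch.nn.ReLU(), torch.nn.Linear(512, 6))
student = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 6))
states = torch.randn(32, 128)               # stand-in for states sampled from the teacher's replay memory
with torch.no_grad():
    teacher_q = teacher(states)
loss = distillation_loss(student(states), teacher_q)
loss.backward()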
H. Benchmarks

One of the challenges in any field of machine learning is a standardised way of evaluating new techniques. Although much early work focused on simple, custom MDPs, there shortly emerged control problems that could be used as standard benchmarks for testing new algorithms, such as the Cartpole [5], Acrobot [22] and Mountain Car [74] domains. However, these problems were limited to relatively small state spaces, and therefore failed to capture the complexities that would be encountered in most realistic scenarios. Arguably the initial driver of DRL, the ALE provided an interface to Atari 2600 video games, with code to access over 50 games provided with the initial release [8]. As video games can vary greatly, but still present interesting and challenging objectives for humans, they provide an excellent testbed for RL agents. As the first algorithm to successfully play a range of these games directly from their visuals, the DQN [71] has secured its place as a milestone in the development of RL algorithms. This success story has started a trend of using video games as standardised RL testbeds, with several interesting options now available. ViZDoom provides an interface to the Doom first-person shooter [49], and echoing the popularity of e-sports competitions, ViZDoom competitions are now held at
the yearly IEEE Conference on Computational Intelligence in Games. Facebook's TorchCraft provides an interface to the StarCraft real-time strategy game, presenting challenges in both micromanagement and long-term planning [117]. Aiming to provide more flexible environments, DeepMind Lab was developed on top of the Quake III Arena first-person shooter engine [6], and Microsoft's Project Malmo exposed an interface to the Minecraft sandbox game [44]. Both environments provide customisable platforms for RL agents in 3D environments.
Most DRL approaches focus on discrete actions, but some solutions have also been developed for continuous control problems. Many DRL papers in continuous control [103, 37, 67, 72, 4, 111] have used the MuJoCo physics engine to obtain relatively realistic dynamics for multi-joint continuous control problems [122], and there has now been some effort to standardise these problems [24].
To help with standardisation and reproducibility, most of the aforementioned RL domains and more have been made available in the OpenAI Gym, a library and online service that allows people to easily interface with and publicly share the results of RL algorithms on these domains [13].
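For readers unfamiliar with the interface, the snippet below shows the canonical interaction loop exposed by the OpenAI Gym (using the classic API in which step returns a four-tuple; the environment name and the random action choice are placeholders for an actual task and agent).

import gym

env = gym.make("CartPole-v1")   # any registered environment, e.g. Atari or MuJoCo tasks
observation = env.reset()
episode_return = 0.0
done = False
while not done:
    action = env.action_space.sample()                  # stand-in for a learned policy
    observation, reward, done, info = env.step(action)  # classic (obs, reward, done, info) API
    episode_return += reward
print("episode return:", episode_return)
env.close()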
VII. Conclusion: Beyond Pattern Recognition

Despite the successes of DRL, many problems need to be addressed before these techniques can be applied to a wide range of complex real-world problems [57]. Recent work with (non-deep) generative causal models demonstrated superior generalisation over standard DRL algorithms [72, 96] in some benchmarks [8], achieved by reasoning about causes and effects in the environment [47]. For example, the schema networks of Kansky et al. [47] trained on the game Breakout immediately adapted to a variant where a small wall was placed in front of the target blocks, whilst progressive (A3C) networks [96] failed to match the performance of the schema networks even after training on the new domain. Although DRL has already been combined with AI techniques, such as search [108] and planning [118], a deeper integration with other traditional AI approaches promises benefits such as better sample complexity, generalisation and interpretability [30]. In time, we also hope that our theoretical understanding of the properties of neural networks (particularly within DRL) will improve, as it currently lags far behind practice.

To conclude, it is worth revisiting the overarching goal of all of this research: the creation of general-purpose AI systems that can interact with and learn from the world around them. Interaction with the environment is simultaneously the advantage and disadvantage of RL. Whilst there are many challenges in seeking to understand our complex and ever-changing world, RL allows us to choose how we explore it. In effect, RL endows agents with the ability to perform experiments to better understand their surroundings, enabling them to learn even high-level causal relationships. The availability of high-quality visual renderers and physics engines now enables us to take steps in this direction, with works that try to learn intuitive models of physics in visual environments [23]. Challenges remain before this will be possible in the real world, but steady progress is being made in agents that learn the fundamental principles of the world through observation and action. Perhaps, then, we are not too far away from AI systems that learn and act in more human-like ways in increasingly complex environments.

References

[1] Kai Arulkumaran, Nat Dilokthanakul, Murray Shanahan, and Anil Anthony Bharath. Classifying Options for Deep Reinforcement Learning. In IJCAI Workshop on Deep Reinforcement Learning: Frontiers and Challenges, 2016.
[2] Pierre-Luc Bacon, Jean Harb, and Doina Precup. The Option-critic Architecture. In AAAI, 2017.
[3] Leemon C Baird III. Advantage Updating. Technical report, DTIC Document, 1993.
[4] Nir Baram, Oron Anschel, and Shie Mannor. Model-based Adversarial Imitation Learning. In NIPS Workshop on Deep Reinforcement Learning, 2016.
[5] Andrew G Barto, Richard S Sutton, and Charles W Anderson. Neuronlike Adaptive Elements That Can Solve Difficult Learning Control Problems. IEEE Trans. on Systems, Man, and Cybernetics, (5):834-846, 1983.
[6] Charles Beattie, Joel Z Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Kuttler, Andrew Lefrancq, Simon Green, Victor Valdes, Amir Sadik, et al. DeepMind Lab. arXiv:1612.03801, 2016.
[7] Marc Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying Count-based Exploration and Intrinsic Motivation. In NIPS, 2016.
[8] Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The Arcade Learning Environment: An Evaluation Platform for General Agents. In IJCAI, 2015.
[9] Richard Bellman. On the Theory of Dynamic Programming. PNAS, 38(8):716-719, 1952.
[10] Yoshua Bengio, Jerome Louradour, Ronan Collobert, and Jason Weston. Curriculum Learning. In ICML, 2009.
[11] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation Learning: A Review and New Perspectives. IEEE Trans. on Pattern Analysis and Machine Intelligence, 35(8):1798-1828, 2013.
[12] Dimitri P Bertsekas. Dynamic Programming and Suboptimal Control: A Survey from ADP to MPC. European Journal of Control, 11(4-5):310-334, 2005.
[13] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv:1606.01540, 2016.
[14] Lucian Busoniu, Robert Babuska, and Bart De Schutter. A Comprehensive Survey of Multiagent Reinforcement Learning. IEEE Trans. on Systems, Man, and Cybernetics, 2008.
[15] Murray Campbell, A Joseph Hoane, and Feng-hsiung Hsu. Deep Blue. Artificial Intelligence, 134(1-2):57-83, 2002.
[16] Rich Caruana. Multitask Learning. Machine Learning, 28(1):41-75, 1997.
[17] Silvia Chiappa, Sebastien Racaniere, Daan Wierstra, and Shakir Mohamed. Recurrent Environment Simulators. In ICLR, 2017.
[18] Paul Christiano, Zain Shah, Igor Mordatch, Jonas Schneider, Trevor Blackwell, Joshua Tobin, Pieter Abbeel, and Wojciech Zaremba. Transfer from Simulation to Real World through Learning Deep Inverse Dynamics Model. arXiv:1610.03518, 2016.
[19] Giuseppe Cuccu, Matthew Luciw, Jurgen Schmidhuber, and Faustino Gomez. Intrinsically Motivated Neuroevolution for Vision-based Reinforcement Learning. In ICDL, volume 2, 2011.
[20] Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior, Paul Tucker, Ke Yang, Quoc V Le, et al. Large Scale Distributed Deep Networks. In NIPS, 2012.
[21] Marc P Deisenroth, Gerhard Neumann, and Jan Peters. A Survey on Policy Search for Robotics. Foundations and Trends in Robotics, 2(1-2), 2013.
[22] Gerald DeJong and Mark W Spong. Swinging Up the Acrobot: An Example of Intelligent Control. In ACC, 1994.
[23] Misha Denil, Pulkit Agrawal, Tejas D Kulkarni, Tom Erez, Peter Battaglia, and Nando de Freitas. Learning to Perform Physics Experiments via Deep Reinforcement Learning. In ICLR, 2017.
[24] Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking Deep Reinforcement Learning for Continuous Control. In ICML, 2016.
[25] Yan Duan, John Schulman, Xi Chen, Peter L Bartlett, Ilya Sutskever, and Pieter Abbeel. RL2: Fast Reinforcement Learning via Slow Reinforcement Learning. arXiv:1611.02779, 2016.
[26] David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A Kalyanpur, Adam Lally, J William Murdock, Eric Nyberg, John Prager, et al. Building Watson: An Overview of the DeepQA Project. AI Magazine, 31(3):59-79, 2010.
[27] Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, and Pieter Abbeel. Deep Spatial Autoencoders for Visuomotor Learning. In ICRA, 2016.
[28] Jakob N Foerster, Yannis M Assael, Nando de Freitas, and Shimon Whiteson. Learning to Communicate to Solve Riddles with Deep Distributed Recurrent Q-Networks. arXiv:1602.02672, 2016.
[29] Michael C Fu. Gradient Estimation. Handbooks in Operations Research and Management Science, 13:575-616, 2006.
[30] Marta Garnelo, Kai Arulkumaran, and Murray Shanahan. Towards Deep Symbolic Reinforcement Learning. In NIPS Workshop on Deep Reinforcement Learning, 2016.
[31] Peter W Glynn. Likelihood Ratio Gradient Estimation for Stochastic Systems. 2015.
Communications of the ACM, 33(10):7584, 1990. [67] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom
[32] Faustino Gomez and Jurgen Schmidhuber. Evolving Modular Fast-weight Net- Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous Control with
works for Control. In ICANN, 2005. Deep Reinforcement Learning. In ICLR, 2016.
[33] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, [68] Long-Ji Lin. Self-improving Reactive Agents Based on Reinforcement Learning,
Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Nets. Planning and Teaching. Machine Learning, 8(34):293321, 1992.
In NIPS, 2014. [69] Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andy Ballard,
[34] Shixiang Gu, Timothy Lillicrap, Ilya Sutskever, and Sergey Levine. Continuous Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu,
Deep Q-learning with Model-based Acceleration. In ICLR, 2016. et al. Learning to Navigate in Complex Environments. In ICLR, 2017.
[35] Matthew Hausknecht and Peter Stone. Deep Recurrent Q-learning for Partially [70] Volodymyr Mnih, Nicolas Heess, Alex Graves, and Koray Kavukcuoglu. Recur-
Observable MDPs. In Association for the Advancement of Artificial Intelligence rent Models of Visual Attention. In NIPS, 2014.
Fall Symposium Series, 2015. [71] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness,
[36] Nicolas Heess, Jonathan J Hunt, Timothy P Lillicrap, and David Silver. Memory- Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg
based Control with Recurrent Neural Networks. arXiv:1512.04455, 2015. Ostrovski, et al. Human-level Control through Deep Reinforcement Learning.
[37] Nicolas Heess, Gregory Wayne, David Silver, Tim Lillicrap, Tom Erez, and Yuval Nature, 518(7540):529533, 2015.
Tassa. Learning Continuous Control Policies by Stochastic Value Gradients. In [72] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timo-
NIPS, 2015. thy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous
[38] Johannes Heinrich and David Silver. Deep Reinforcement Learning from Self-play Methods for Deep Reinforcement Learning. In ICLR, 2016.
in Imperfect-information Games. arXiv:1603.01121, 2016. [73] Shakir Mohamed and Danilo Jimenez Rezende. Variational Information Maximi-
[39] Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal sation for Intrinsically Motivated Reinforcement Learning. In NIPS, 2015.
Piot, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, [74] Andrew William Moore. Efficient Memory-based Learning for Robot Control.
et al. Learning from Demonstrations for Real World Reinforcement Learning. Technical report, University of Cambridge, Computer Laboratory, 1990.
arXiv:1704.03732, 2017. [75] Remi Munos, Tom Stepleton, Anna Harutyunyan, and Marc G Bellemare. Safe
[40] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the Knowledge in a and Efficient Off-policy Reinforcement Learning. In NIPS, 2016.
Neural Network. arXiv:1503.02531, 2015. [76] Ofir Nachum, Mohammad Norouzi, Kelvin Xu, and Dale Schuurmans. Bridg-
[41] Jonathan Ho and Stefano Ermon. Generative Adversarial Imitation Learning. In ing the Gap Between Value and Policy Based Reinforcement Learning.
NIPS, 2016. arXiv:1702.08892, 2017.
[42] Rein Houthooft, Xi Chen, Yan Duan, John Schulman, Filip De Turck, and Pieter [77] Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon,
Abbeel. VIME: Variational Information Maximizing Exploration. In NIPS, 2016. Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles
[43] Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Beattie, Stig Petersen, et al. Massively Parallel Methods for Deep Reinforcement
Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement Learning with Learning. arXiv:1507.04296, 2015.
Unsupervised Auxiliary Tasks. In ICLR, 2017. [78] Andrew Y Ng and Stuart J Russell. Algorithms for Inverse Reinforcement
[44] Matthew Johnson, Katja Hofmann, Tim Hutton, and David Bignell. The Malmo Learning. In ICML, 2000.
Platform for Artificial Intelligence Experimentation. In IJCAI, 2016. [79] Andrew Y Ng, Adam Coates, Mark Diel, Varun Ganapathi, Jamie Schulte, Ben
[45] Leslie P Kaelbling, Michael L Littman, and Anthony R Cassandra. Planning and Tse, Eric Berger, and Eric Liang. Autonomous Inverted Helicopter Flight via
Acting in Partially Observable Stochastic Domains. Artificial Intelligence, 101 Reinforcement Learning. Experimental Robotics IX, pages 363372, 2006.
(1):99134, 1998. [80] Brendan ODonoghue, Remi Munos, Koray Kavukcuoglu, and Volodymyr Mnih.
[46] Sham M Kakade. A Natural Policy Gradient. In NIPS, 2002. PGQ: Combining Policy Gradient and Q-learning. In ICLR, 2017.
[47] Ken Kansky, Tom Silver, David A Mely, Mohamed Eldawy, Miguel Lazaro- [81] Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh.
Gredilla, Xinghua Lou, Nimrod Dorfman, Szymon Sidor, Scott Phoenix, and Action-conditional Video Prediction using Deep Networks in Atari Games. In
Dileep George. Schema Networks: Zero-shot Transfer with a Generative Causal NIPS, 2015.
Model of Intuitive Physics. In ICML, 2017. [82] Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, and Honglak Lee. Control
[48] Hilbert J Kappen. Path Integrals and Symmetry Breaking for Optimal Control of Memory, Active Perception, and Action in Minecraft. In ICLR, 2016.
Theory. Journal of Statistical Mechanics: Theory and Experiment, 2005(11): [83] Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep
P11011, 2005. Exploration via Bootstrapped DQN. In NIPS, 2016.
[49] Micha Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech [84] Emilio Parisotto and Ruslan Salakhutdinov. Neural Map: Structured Memory for
Jaskowski. ViZDoom: A Doom-based AI Research Platform for Visual Rein- Deep Reinforcement Learning. arXiv:1702.08360, 2017.
forcement Learning. arXiv:1605.02097, 2016. [85] Emilio Parisotto, Jimmy L Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep
[50] Diederik P Kingma and Max Welling. Auto-encoding Variational Bayes. Multitask and Transfer Reinforcement Learning. arXiv:1511.06342, 2015.
arXiv:1312.6114, 2013. [86] Deepak Pathak, Pulkit Agrawal, Alexei A Efros, and Trevor Darrell. Curiosity-
[51] Nate Kohl and Peter Stone. Policy Gradient Reinforcement Learning for Fast driven Exploration by Self-supervised Prediction. In ICML, 2017.
Quadrupedal Locomotion. In ICRA, volume 3, 2004. [87] Peng Peng, Quan Yuan, Ying Wen, Yaodong Yang, Zhenkun Tang, Haitao Long,
[52] Vijay R Konda and John N Tsitsiklis. On Actor-critic Algorithms. SIAM Journal and Jun Wang. Multiagent Bidirectionally-coordinated Nets for Learning to Play
on Control and Optimization, 42(4):11431166, 2003. StarCraft Combat Games. arXiv:1703.10069, 2017.
[53] Jan Koutnk, Giuseppe Cuccu, Jurgen Schmidhuber, and Faustino Gomez. Evolv- [88] Jan Peters, Katharina Mulling, and Yasemin Altun. Relative Entropy Policy
ing Large-scale Neural Networks for Vision-based Reinforcement Learning. In Search. In AAAI, 2010.
GECCO, 2013. [89] Dean A Pomerleau. ALVINN, an Autonomous Land Vehicle in a Neural Network.
[54] Tejas D Kulkarni, Karthik Narasimhan, Ardavan Saeedi, and Josh Tenenbaum. Technical report, Carnegie Mellon University, Computer Science Department,
Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and 1989.
Intrinsic Motivation. In NIPS, 2016. [90] Alexander Pritzel, Benigno Uria, Sriram Srinivasan, Adria Puigdomenech, Oriol
[55] Tejas D Kulkarni, Ardavan Saeedi, Simanta Gautam, and Samuel J Gershman. Vinyals, Demis Hassabis, Daan Wierstra, and Charles Blundell. Neural Episodic
Deep Successor Reinforcement Learning. arXiv:1606.02396, 2016. Control. arXiv:1703.01988, 2017.
[56] Tze Leung Lai and Herbert Robbins. Asymptotically Efficient Adaptive Allocation [91] Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. Hogwild: A
Rules. Advances in Applied Mathematics, 6(1):422, 1985. Lock-free Approach to Parallelizing Stochastic Gradient Descent. In NIPS, 2011.
[57] Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J [92] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic
Gershman. Building Machines That Learn and Think Like People. The Behavioral Backpropagation and Approximate Inference in Deep Generative Models. In
and Brain Sciences, page 1, 2016. ICML, 2014.
[58] Sascha Lange, Martin Riedmiller, and Arne Voigtlander. Autonomous Reinforce- [93] Martin Riedmiller. Neural Fitted Q IterationFirst Experiences with a Data
ment Learning on Raw Visual Input Data in a Real World Application. In IJCNN, Efficient Neural Reinforcement Learning Method. In ECML, 2005.
2012. [94] Gavin A Rummery and Mahesan Niranjan. On-line Q-learning using Connec-
[59] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep Learning. Nature, 521 tionist Systems. University of Cambridge, Department of Engineering, 1994.
(7553):436444, 2015. [95] Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume
[60] Joel Z Leibo, Vinicius Zambaldi, Marc Lanctot, Janusz Marecki, and Thore Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray
Graepel. Multi-agent Reinforcement Learning in Sequential Social Dilemmas. Kavukcuoglu, and Raia Hadsell. Policy Distillation. arXiv:1511.06295, 2015.
In AAMAS, 2017. [96] Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James
[61] Sergey Levine and Pieter Abbeel. Learning Neural Network Policies with Guided Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive
Policy Search under Unknown Dynamics. In NIPS, 2014. Neural Networks. arXiv:1606.04671, 2016.
[62] Sergey Levine and Vladlen Koltun. Guided Policy Search. In ICLR, 2013. [97] Andrei A Rusu, Matej Vecerik, Thomas Rothorl, Nicolas Heess, Razvan Pascanu,
[63] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end and Raia Hadsell. Sim-to-real Robot Learning from Pixels with Progressive Nets.
Training of Deep Visuomotor Policies. JMLR, 17(39):140, 2016. arXiv:1610.04286, 2016.
[64] Sergey Levine, Peter Pastor, Alex Krizhevsky, and Deirdre Quillen. Learning [98] Tim Salimans, Jonathan Ho, Xi Chen, and Ilya Sutskever. Evolution Strategies
Hand-eye Coordination for Robotic Grasping with Deep Learning and Large-scale as a Scalable Alternative to Reinforcement Learning. arXiv:1703.03864, 2017.
Data Collection. In ISER, 2016. [99] Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal Value
[65] Ke Li and Jitendra Malik. Learning to Optimize. arXiv:1606.01885, 2016. Function Approximators. In ICML, 2015.
[66] Xiujun Li, Lihong Li, Jianfeng Gao, Xiaodong He, Jianshu Chen, Li Deng, and [100] Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized
Ji He. Recurrent Reinforcement Learning: A Hybrid Approach. arXiv:1509.03044, Experience Replay. arXiv:1511.05952, 2015.
[101] Jurgen Schmidhuber. A Possibility for Implementing Curiosity and Boredom in ence Replay. In ICLR, 2017.
Model-building Neural Controllers. In SAB, 1991. [136] Christopher JCH Watkins and Peter Dayan. Q-learning. Machine Learning, 8
[102] Jurgen Schmidhuber and Rudolf Huber. Learning to Generate Artificial Fovea (3-4):279292, 1992.
Trajectories for Target Detection. Intl. Journal of Neural Systems, 2(01n02):125 [137] Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller.
134, 1991. Embed to Control: A Locally Linear Latent Dynamics Model for Control from
[103] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Raw Images. In NIPS, 2015.
Moritz. Trust Region Policy Optimization. In ICML, 2015. [138] Daan Wierstra, Alexander Forster, Jan Peters, and Jurgen Schmidhuber. Recurrent
[104] John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Policy Gradients. Logic Journal of the IGPL, 18(5):620634, 2010.
Abbeel. High-dimensional Continuous Control using Generalized Advantage [139] Ronald J Williams. Simple Statistical Gradient-following Algorithms for Connec-
Estimation. In ICLR, 2016. tionist Reinforcement Learning. Machine Learning, 8(3-4):229256, 1992.
[105] John Schulman, Pieter Abbeel, and Xi Chen. Equivalence Between Policy [140] Markus Wulfmeier, Peter Ondruska, and Ingmar Posner. Deep Inverse Reinforce-
Gradients and Soft Q-Learning. arXiv:1704.06440, 2017. ment Learning. CoRR, 2015.
[106] Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P Adams, and Nando de Fre- [141] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C Courville, Ruslan
itas. Taking the Human out of the Loop: A Review of Bayesian Optimization. Salakhutdinov, Richard S Zemel, and Yoshua Bengio. Show, Attend and Tell:
Proc. of the IEEE, 104(1):148175, 2016. Neural Image Caption Generation with Visual Attention. In ICML, volume 14,
[107] David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and 2015.
Martin Riedmiller. Deterministic Policy Gradient Algorithms. In ICML, 2014. [142] Yuke Zhu, Roozbeh Mottaghi, Eric Kolve, Joseph J Lim, Abhinav Gupta, Li Fei-
[108] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Fei, and Ali Farhadi. Target-driven Visual Navigation in Indoor Scenes using
van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershel- Deep Reinforcement Learning. In ICRA, 2017.
vam, Marc Lanctot, et al. Mastering the Game of Go with Deep Neural Networks [143] Barret Zoph and Quoc V Le. Neural Architecture Search with Reinforcement
and Tree Search. Nature, 529(7587):484489, 2016. Learning. In ICLR, 2017.
[109] Satinder Singh, Diane Litman, Michael Kearns, and Marilyn Walker. Optimizing
Dialogue Management with Reinforcement Learning: Experiments with the NJFun
System. JAIR, 16:105133, 2002.
[110] Ivan Sorokin, Alexey Seleznev, Mikhail Pavlov, Aleksandr Fedorov, and Anas-
tasiia Ignateva. Deep Attention Recurrent Q-network. In NIPS Workshop on Deep
Reinforcement Learning, 2015.
[111] Bradley C Stadie, Pieter Abbeel, and Ilya Sutskever. Third Person Imitation
Learning. In ICLR, 2017.
[112] Bradly C Stadie, Sergey Levine, and Pieter Abbeel. Incentivizing Exploration Kai Arulkumaran ([email protected]) is a Ph.D. candidate in the
in Reinforcement Learning with Deep Predictive Models. In NIPS Workshop on Department of Bioengineering at Imperial College London. He received a
Deep Reinforcement Learning, 2015. B.A. in Computer Science at the University of Cambridge in 2012, and an
[113] Alexander L Strehl, Lihong Li, Eric Wiewiora, John Langford, and Michael L M.Sc. in Biomedical Engineering at Imperial College London in 2014. He
Littman. PAC Model-free Reinforcement Learning. In ICML, 2006.
[114] Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. Learning Multiagent
was a Research Intern in both Twitter Magic Pony and Microsoft Research in
Communication with Backpropagation. In NIPS, 2016. 2017. His research focus is deep reinforcement learning and transfer learning
[115] Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction. for visuomotor control.
MIT Press, 1998.
[116] Richard S Sutton, Doina Precup, and Satinder Singh. Between MDPs and
Semi-MDPs: A Framework for Temporal Abstraction in Reinforcement Learning.
Artificial Intelligence, 112(12):181211, 1999.
[117] Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothee
Lacroix, Zeming Lin, Florian Richoux, and Nicolas Usunier. TorchCraft:
A Library for Machine Learning Research on Real-Time Strategy Games.
arXiv:1611.00625, 2016.
[118] Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and Pieter Abbeel. Value Marc Peter Deisenroth ([email protected]) is a Lecturer in
Iteration Networks. In NIPS, 2016. Statistical Machine Learning in the Department of Computing at Imperial
[119] Gerald Tesauro. Temporal Difference Learning and TD-Gammon. Communica- College London and with PROWLER.io. He has been awarded an Imperial
tions of the ACM, 38(3):5868, 1995. College Research Fellowship in 2014 and received Best Paper Awards at
[120] Gerald Tesauro, Rajarshi Das, Hoi Chan, Jeffrey Kephart, David Levine, Freeman ICRA 2014 and ICCAS 2016. He is a recipient of a Google Faculty Research
Rawson, and Charles Lefurgy. Managing Power Consumption and Performance
Award and a Microsoft Ph.D. Scholarship. His research is centred around
of Computing Systems using Reinforcement Learning. In NIPS, 2008.
[121] Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J Mankowitz, and Shie Mannor. data-efficient machine learning for autonomous decision making.
A Deep Hierarchical Approach to Lifelong Learning in Minecraft. In AAAI, 2017.
[122] Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A Physics Engine for
Model-based Control. In IROS, 2012.
[123] John N Tsitsiklis and Benjamin Van Roy. Analysis of Temporal-difference
Learning with Function Approximation. In NIPS, 1997.
[124] Eric Tzeng, Coline Devin, Judy Hoffman, Chelsea Finn, Xingchao Peng, Sergey
Levine, Kate Saenko, and Trevor Darrell. Towards Adapting Deep Visuomotor
Representations from Simulated to Real Environments. In WAFR, 2016.
[125] Nicolas Usunier, Gabriel Synnaeve, Zeming Lin, and Soumith Chintala. Episodic Miles Brundage ([email protected]) is a Ph.D. candidate
Exploration for Deep Deterministic Policies: An Application to StarCraft Micro- in Human and Social Dimensions of Science and Technology at Arizona
management Tasks. In ICLR, 2017. State University, and a Research Fellow at the University of Oxfords Future
[126] Hado van Hasselt. Double Q-learning. In NIPS, 2010. of Humanity Institute. He received a B.A. in Political Science at George
[127] Hado van Hasselt, Arthur Guez, and David Silver. Deep Reinforcement Learning Washington University in 2010. His research focuses on governance issues
with Double Q-Learning. In AAAI, 2016. related to artificial intelligence.
[128] Harm Vanseijen and Rich Sutton. A Deeper Look at Planning as Learning from
Replay. In ICML, 2015.
[129] Alexander Vezhnevets, Volodymyr Mnih, Simon Osindero, Alex Graves, Oriol
Vinyals, John Agapiou, and Koray Kavukcuoglu. Strategic Attentive Writer for
Learning Macro-actions. In NIPS, 2016.
[130] Alexander Sasha Vezhnevets, Simon Osindero, Tom Schaul, Nicolas Heess,
Max Jaderberg, David Silver, and Koray Kavukcuoglu. Feudal Networks for
Hierarchical Reinforcement Learning. arXiv:1703.01161, 2017.
[131] Niklas Wahlstrom, Thomas B Schon, and Marc P Deisenroth. Learning Deep Anil Anthony Bharath ([email protected]) is a Reader in the De-
Dynamical Models from Image Pixels. IFAC SYSID, 48(28), 2015.
partment of Bioengineering at Imperial College London. He was an academic
[132] Niklas Wahlstrom, Thomas B Schon, and Marc P Deisenroth. From Pixels to
Torques: Policy Learning with Deep Dynamical Models. arXiv:1502.02251, 2015. visitor in the Signal Processing Group at the University of Cambridge in
[133] Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, 2006. He holds a B.Eng. from University College London (EEE) in 1988,
Remi Munos, Charles Blundell, Dharshan Kumaran, and Matt Botvinick. Learning and a Ph.D. from Imperial College London (Signal Processing) in 1993. He
to Reinforcement Learn. arXiv:1611.05763, 2016. is a co-founder of Cortexica Vision Systems. His research interests are in deep
[134] Ziyu Wang, Nando de Freitas, and Marc Lanctot. Dueling Network Architectures architectures for visual inference.
for Deep Reinforcement Learning. In ICLR, 2016.
[135] Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray
Kavukcuoglu, and Nando de Freitas. Sample Efficient Actor-Critic with Experi-