Deep Reinforcement Learning
January 7, 2022
Springer Nature
Acknowledgments
This book benefited from the help of many friends. First of all, I thank everyone at
the Leiden Institute of Advanced Computer Science, for creating such a fun and
vibrant environment to work in.
Many people contributed to this book. Some material is based on the book
that we used in our previous reinforcement learning course and on lecture notes
on policy-based methods written by Thomas Moerland. Thomas also provided
invaluable critique on an earlier draft of the book. Furthermore, as this book was
being prepared, we worked on survey articles on deep model-based reinforcement
learning, deep meta-learning, and deep multi-agent reinforcement learning. I thank
Mike Preuss, Walter Kosters, Mike Huisman, Jan van Rijn, Annie Wong, Anna
Kononova, and Thomas Bäck, the co-authors on these articles.
I thank all members of the Leiden reinforcement learning community for their
input and enthusiasm. I thank especially Thomas Moerland, Mike Preuss, Matthias
Müller-Brockhausen, Mike Huisman, Hui Wang, and Zhao Yang, for their help
with the course for which this book is written. I thank Wojtek Kowalczyk for
insightful discussions on deep supervised learning, and Walter Kosters for his views
on combinatorial search, as well as for his never-ending sense of humor.
A very special thank you goes to Thomas Bäck, for our many discussions on
science, the universe, and everything (including, especially, evolution). Without
you, this effort would not have been possible.
This book is a result of the graduate course on reinforcement learning that we
teach in Leiden. I thank all students of this course, past, present, and future, for
their wonderful enthusiasm, sharp questions, and many suggestions. This book was
written for you and by you!
Finally, I thank Saskia, Isabel, Rosalin, Lily, and Dahlia, for being who they are,
for giving feedback and letting me learn, and for their boundless love.
Leiden,
December 2021 Aske Plaat
Contents
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 What is Deep Reinforcement Learning? . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.2 Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.3 Deep Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.5 Four Related Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.1.5.1 Psychology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.1.5.2 Mathematics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.1.5.3 Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.1.5.4 Biology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2 Three Machine Learning Paradigms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2.1 Supervised Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.2.2 Unsupervised Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.2.3 Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.3 Overview of the Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.3.1 Prerequisite Knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.3.2 Structure of the Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2.3.1 Trace 𝜏 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.2.3.2 State Value 𝑉 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.2.3.3 State-Action Value 𝑄 . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.2.3.4 Reinforcement Learning Objective . . . . . . . . . . . . . . 38
2.2.3.5 Bellman Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.2.4 MDP Solution Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.2.4.1 Hands On: Value Iteration in Gym . . . . . . . . . . . . . . . 41
2.2.4.2 Model-Free Learning . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.2.4.3 Exploration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.2.4.4 Off-Policy Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.2.4.5 Hands On: Q-learning on Taxi . . . . . . . . . . . . . . . . . . 52
2.3 Classic Gym Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
2.3.1 Mountain Car and Cartpole . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
2.3.2 Path Planning and Board Games . . . . . . . . . . . . . . . . . . . . . . . . 56
Summary and Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
9 Meta-Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
9.1 Learning to Learn Related Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
9.2 Transfer Learning and Meta-Learning Agents . . . . . . . . . . . . . . . . . . . 247
9.2.1 Transfer Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
9.2.1.1 Task Similarity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
9.2.1.2 Pretraining and Finetuning . . . . . . . . . . . . . . . . . . . . 249
9.2.1.3 Hands-on: Pretraining Example . . . . . . . . . . . . . . . . . 249
9.2.1.4 Multi-task Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
9.2.1.5 Domain Adaptation . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
9.2.2 Meta-Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
9.2.2.1 Evaluating Few-Shot Learning Problems . . . . . . . . 253
9.2.2.2 Deep Meta-Learning Algorithms . . . . . . . . . . . . . . . 254
9.2.2.3 Recurrent Meta-Learning . . . . . . . . . . . . . . . . . . . . . . 256
9.2.2.4 Model-Agnostic Meta-Learning . . . . . . . . . . . . . . . . 257
9.2.2.5 Hyperparameter Optimization . . . . . . . . . . . . . . . . . 259
9.2.2.6 Meta-Learning and Curriculum Learning . . . . . . . . 260
9.2.2.7 From Few-Shot to Zero-Shot Learning . . . . . . . . . . 260
9.3 Meta-Learning Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
Chapter 1
Introduction
to bake bread). The field of reinforcement learning is all about learning from success
as well as from mistakes.
In recent years the two fields of deep and reinforcement learning have come
together, and have yielded new algorithms that are able to approximate high-
dimensional problems by learning from feedback on their actions. Deep learning has brought new
methods and new successes, with advances in policy-based methods, in model-
based approaches, in transfer learning, in hierarchical reinforcement learning, and
in multi-agent learning.
The fields also exist separately, as deep supervised learning and as tabular re-
inforcement learning (see Table 1.1). The aim of deep supervised learning is to
generalize and approximate complex, high-dimensional, functions from pre-existing
datasets, without interaction; Appendix B discusses deep supervised learning. The
aim of tabular reinforcement learning is to learn by interaction in simpler, low-
dimensional, environments such as Grid worlds; Chap. 2 discusses tabular reinforce-
ment learning.
Let us have a closer look at the two fields.
1.1.1 Deep Learning
Classic machine learning algorithms learn a predictive model from data, using methods
such as linear regression, decision trees, random forests, support vector machines,
and artificial neural networks. The models aim to generalize, to make predictions.
Mathematically speaking, machine learning aims to approximate a function from
data.
Traditionally, when computers were slow, the neural networks that were used
consisted of a few layers of fully connected neurons, and did not perform excep-
tionally well. This changed with the advent of deep learning and faster computers.
Deep neural networks now consist of many layers of neurons and use different
types of connections.1 Deep networks and deep learning have taken the accuracy of
certain important machine learning tasks to a new level, and have allowed machine
learning to be applied to complex, high-dimensional, problems, such as recognizing
cats and dogs in high-resolution (mega-pixel) images.
Deep learning has allowed machine learning to be applied to day-to-day tasks
such as the face-recognition and speech-recognition that we use in our smartphones.
Deep learning allows high-dimensional problems to be solved in real-time.
1 Where many means an input layer, an output layer, and more than one hidden layer in between.
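To make the notion of "many layers" concrete, here is a minimal sketch (not from the original text) of a small fully connected network in PyTorch; the layer widths are arbitrary example values:

import torch
import torch.nn as nn

# A small fully connected network: an input layer, two hidden layers, an output layer.
# The widths (784, 128, 64, 10) are arbitrary example values, e.g. for small images.
model = nn.Sequential(
    nn.Linear(784, 128),  # input -> first hidden layer
    nn.ReLU(),
    nn.Linear(128, 64),   # second hidden layer
    nn.ReLU(),
    nn.Linear(64, 10),    # output layer, e.g. 10 classes
)

x = torch.randn(1, 784)   # one random example input
print(model(x).shape)     # torch.Size([1, 10])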
1.1.2 Reinforcement Learning
Let us look more deeply at reinforcement learning, to see what it means to learn
from our own actions.
Reinforcement learning is a field in which an agent learns by interacting with
an environment. In supervised learning we need pre-existing datasets of labeled
examples to approximate a function; reinforcement learning only needs an environ-
ment that provides feedback signals for actions that the agent is trying out. This
requirement is easier to fulfill, allowing reinforcement learning to be applicable to
more situations than supervised learning.
Reinforcement learning agents generate, by their actions, their own on-the-fly
data, through the environment's rewards. Agents can choose which actions to
learn from; reinforcement learning is a form of active learning. In this sense, our
agents are like children who, through playing and exploring, teach themselves a
certain task. This level of autonomy is one of the aspects that attracts researchers
to the field. The reinforcement learning agent chooses which action to perform—
which hypothesis to test, and adjusts its knowledge of what works, building up a
policy of actions that are to be performed in the different states of the world that it
has encountered. (This freedom is also what makes reinforcement learning hard,
because when you are allowed to choose your own examples, it is all too easy to
stay in your comfort zone, stuck in a positive reinforcement bubble, believing you
are doing great, but learning very little of the world in reality.)
1.1.4 Applications
In its most basic form, reinforcement learning is a way to teach an agent to operate in
the world. As a child learns to walk from actions and feedback, so do reinforcement
learning agents learn from actions and feedback. Deep reinforcement learning can
Learning to operate in the world is a high level goal; we can be more specific.
Reinforcement learning is about the agent’s behavior. Reinforcement learning can
find solutions for sequential decision problems, or optimal control problems, as
they are known in engineering. There are many situations in the real world where,
in order to reach a goal, a sequence of decisions must be made. Whether it is baking
a cake, building a house, or playing a card game; a sequence of decisions has to
be made. Reinforcement learning is an efficient way to learn to solve sequential
decision problems.
Many real world problems can be modeled as a sequence of decisions [543]. For
example, in autonomous driving, an agent is faced with questions of speed control,
finding drivable areas, and, most importantly, avoiding collisions. In healthcare,
treatment plans contain many sequential decisions, and the effects of delayed
treatment can be studied and factored in. In customer centers, natural language
processing can help improve chatbot dialogue, question answering, and even machine
translation. In marketing and communication, recommender systems recommend
news, personalize suggestions, deliver notifications to users, or otherwise optimize
the product experience. In trading and finance, systems decide to hold, buy, or
sell financial assets, in order to optimize future reward. In politics and governance,
the effects of policies can be simulated as a sequence of decisions before they are
implemented. In mathematics and entertainment, playing board games, card games,
and strategy games consists of a sequence of decisions. In computational creativity,
making a painting requires a sequence of esthetic decisions. In industrial robotics
and engineering, the grasping of items and the manipulation of materials consists of
a sequence of decisions. In chemical manufacturing, the optimization of production
processes consists of many decision steps, that influence the yield and quality of
the product. Finally, in energy grids, the efficient and safe distribution of energy
can be modeled as a sequential decision problem.
In all these situations, we must make a sequence of decisions. In all these situa-
tions, taking the wrong decision can be very costly.
The algorithmic research on sequential decision making has focused on two
types of applications: (1) robotic problems and (2) games. Let us have a closer look
at these two domains, starting with robotics.
Robotics
In principle, all actions that a robot should take can be pre-programmed step-by-step
by a programmer in meticulous detail. In highly controlled environments, such as
a welding robot in a car factory, this can conceivably work, although any small
change or any new task requires reprogramming the robot.
It is surprisingly hard to manually program a robot to perform a complex task.
Humans are not aware of their own operational knowledge, such as what “voltages”
we put on which muscles when we pick up a cup. It is much easier to define a desired
goal state, and let the system find the complicated solution by itself. Furthermore,
in environments that are even slightly more challenging, where the robot must be able to
respond more flexibly to different conditions, an adaptive program is needed.
It will be no surprise that the application area of robotics is an important driver
for machine learning research, and robotics researchers turned early on to finding
methods by which the robots could teach themselves certain behavior.
The literature on robotics experiments is varied and rich. A robot can teach itself
how to navigate a maze, how to perform manipulation tasks, and how to learn
locomotion tasks. Research into adaptive robotics has made quite some progress.
Fig. 1.4 Go
For example, recent achievements include flipping pancakes [422] and
flying an aerobatic model helicopter [2, 3]; see Figs. 1.1 and 1.2. Frequently, learning
tasks are combined with computer vision, where a robot has to learn by visually
interpreting the consequences of its own actions.
Games
Let us now turn to games. Puzzles and games have been used from the earliest days
to study aspects of intelligent behavior. Indeed, before computers were powerful
enough to execute chess programs, in the days of Shannon and Turing, paper
designs were made, in the hope that understanding chess would teach us something
about the nature of intelligence [693, 787].
Games allow researchers to limit the scope of their studies, to focus on intelli-
gent decision making in a limited environment, without having to master the full
complexity of the real world. In addition to board games such as chess and Go,
video games are being used extensively to test intelligent methods in computers.
Examples are Arcade-style games such as Pac-Man [522] and multi-player strategy
games such as StarCraft [812]. See Figs. 1.3–1.6.
1.1.5 Four Related Fields
Reinforcement learning is a rich field that existed long before the artificial
intelligence endeavour started, as a part of biology, psychology, and education [86, 388, 742]. In artificial intelligence it has become one of the three main
categories of machine learning, the other two being supervised and unsupervised
learning [93]. This book is a book of algorithms that are inspired by topics from
the social sciences. Although the rest of the book will be about these algorithms, it
is interesting to briefly discuss the links of deep reinforcement learning to human
and animal learning. We will introduce the four scientific disciplines that have a
profound influence on deep reinforcement learning.
1.1.5.1 Psychology
1.1.5.2 Mathematics
1.1.5.3 Engineering
Fig. 1.11 Turing-award winners Geoffrey Hinton, Yann LeCun, and Yoshua Bengio
1.1.5.4 Biology
1.2 Three Machine Learning Paradigms
Now that we have introduced the general context and origins of deep reinforcement
learning, let us switch gears, and talk about machine learning. Let us see how deep
reinforcement learning fits in the general picture of the field. At the same time, we
will take the opportunity to introduce some notation and basic concepts.
In the next section we will then provide an outline of the book. But first it is
time for machine learning. We start at the beginning, with function approximation.
Representing a Function
𝑓 : 𝑋 → 𝑌,
where the domain 𝑋 and range 𝑌 can be discrete or continuous, and the dimension-
ality (number of attributes in 𝑋) can be arbitrary.
Often, in the real world, the same input may yield a range of different outputs,
and we would like our function to provide a conditional probability distribution, a
function that maps
𝑓 : 𝑋 → 𝑝(𝑌 ).
Here the function maps the domain to a probability distribution 𝑝 over the range.
Representing a conditional probability allows us to model functions for which the
input does not always give the same output.
Sometimes the function that we are interested in is given, and we can represent
the function by a specific algorithm that computes an analytical expression that is
known exactly. This is, for example, the case for the laws of physics, or when we
make explicit assumptions for a particular system.
Example: Newton’s second Law of Motion states that for objects with
constant mass
𝐹 = 𝑚 · 𝑎,
where 𝐹 denotes the net force on the object, 𝑚 denotes its mass, and 𝑎
denotes its acceleration. In this case, the analytical expression defines the
entire function, for every possible combination of the inputs.
However, for many functions in the real world, we do not have an analytical
expression. Here, we enter the realm of machine learning, in particular of supervised
learning. When we do not know an analytical expression for a function, our best
approach is to collect data—examples of (𝑥, 𝑦) pairs—and reverse engineer or learn
the function from this data. See Fig. 1.12.
Fig. 1.12 Example of learning a function; data points are in blue, a possible learned linear function
is the red line, which allows us to make predictions 𝑦ˆ for any new input 𝑥
Example: A company wants to predict the chance that you buy a shampoo
to color your hair, based on your age. They collect many data points of
𝑥 ∈ N, your age (a natural number), that map to 𝑦 ∈ {0, 1}, a binary indicator
whether you bought their shampoo. They then want to learn the mapping
𝑦ˆ = 𝑓 (𝑥)
where 𝑓 is the desired function that tells the company who will buy the
product and 𝑦ˆ is the predicted 𝑦 (admittedly overly simplistic in this exam-
ple).
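As a small illustration (not from the original text), here is a sketch of fitting such a mapping from age to purchase with scikit-learn; all data values below are made up:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up (x, y) pairs: age -> bought the shampoo (1) or not (0)
ages = np.array([[18], [25], [33], [41], [52], [60], [67], [74]])
bought = np.array([0, 0, 0, 1, 1, 1, 1, 0])

model = LogisticRegression()
model.fit(ages, bought)                    # learn the estimate f-hat from the data
print(model.predict_proba([[45]])[0, 1])   # estimated chance that a 45-year-old buys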
Let us see which methods exist in machine learning to find function approxima-
tions.
Three Approaches
There are three main approaches for how the observations can be provided in
machine learning: (1) supervised learning, (2) reinforcement learning, and (3) unsu-
pervised learning.
1.2.1 Supervised Learning
The first and most basic method for machine learning is supervised learning. In
supervised learning, the data to learn the function 𝑓 (𝑥) is provided to the learning
algorithm in (𝑥, 𝑦) example-pairs. Here 𝑥 is the input, and 𝑦 the observed output
value to be learned for that particular input value 𝑥. The 𝑦 values can be thought
of as supervising the learning process; they teach the learning process the right
answers for each input value 𝑥, hence the name supervised learning.
The data pairs to be learned from are organized in a dataset, which must be
present in its entirety before the algorithm can start. During the learning process,
an estimate of the real function that generated the data is created, 𝑓ˆ. The 𝑥 values
of the pair are also called the input, and the 𝑦 values are the label to be learned.
Two well-known problems in supervised learning are regression and classification. Regression predicts a continuous number, classification a discrete category.
The best known regression relation is the linear relation: the familiar straight line
through a cloud of observation points that we all know from our introductory
statistics course. Figure 1.12 shows such a linear relationship 𝑦ˆ = 𝑎 · 𝑥 + 𝑏. The
linear function can be characterized with two parameters 𝑎 and 𝑏. Of course, more
complex functions are possible, such as quadratic regression, non-linear regression,
or regression with higher-order polynomials [210].
The supervisory signal is computed for each data item 𝑖 as the difference between
the current estimate and the given label, for example by $(\hat{f}(x_i) - y_i)^2$. Such an error
function $(\hat{f}(x) - y)^2$ is also known as a loss function; it measures the quality of our
prediction. The closer our prediction is to the true label, the lower the loss. There
are many ways to compute this closeness, such as the mean squared error loss
$\mathcal{L} = \frac{1}{N}\sum_{i=1}^{N} (\hat{f}(x_i) - y_i)^2$, which is used often for regression over 𝑁 observations.
This loss function can be used by any supervised learning algorithm to adjust model
parameters 𝑎 and 𝑏 to fit the function 𝑓ˆ to the data. Some of the many possible
learning algorithms are linear regression and support vector machines [93, 646].
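As an illustration, here is a minimal sketch (not from the original text) that fits the linear function 𝑦ˆ = 𝑎 · 𝑥 + 𝑏 to made-up data and computes the mean squared error loss:

import numpy as np

# Made-up data points (x, y), roughly on a line
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

# Least-squares fit of the two parameters a and b of y-hat = a*x + b
a, b = np.polyfit(x, y, deg=1)
y_hat = a * x + b

# Mean squared error loss over the N observations
mse = np.mean((y_hat - y) ** 2)
print(a, b, mse)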
In classification, a relation between an input value and a class label is fitted. A
well-studied classification problem is image recognition, where two-dimensional
images are to be categorized. Table 1.2 shows a tiny dataset of labeled images
of the proverbial cats and dogs. A popular loss function for classification is the
cross-entropy loss $\mathcal{L} = -\sum_{i=1}^{N} y_i \log(\hat{f}(x_i))$, see also Sect. A.2.5.3. Again, such a
loss function can be used to adjust the model parameters to fit the function to the
data. The model can be small and linear, or it can be large, such as a neural network,
which is often used for image classification.
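For completeness, here is a small sketch (not from the original text) of the cross-entropy loss on made-up class probabilities; dividing by 𝑁 would give the average per example:

import numpy as np

# Made-up one-hot labels y_i and predicted class probabilities f-hat(x_i)
y_true = np.array([[1, 0], [0, 1], [1, 0]])            # class 0 = cat, class 1 = dog
y_pred = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])

# Cross-entropy loss summed over the N examples
loss = -np.sum(y_true * np.log(y_pred))
print(loss)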
In supervised learning a large dataset exists where all input items have an
associated training label. Reinforcement learning is different: it does not assume
the pre-existence of a large labeled training set. Unsupervised learning does require
a large dataset, but no user-supplied output labels; all it needs are the inputs.
Deep learning function approximation was first developed in a supervised set-
ting. Although this book is about deep reinforcement learning, we will encounter
supervised learning concepts frequently, whenever we discuss the deep learning
aspect of deep reinforcement learning.
1.2.2 Unsupervised Learning
When there are no labels in the dataset, different learning algorithms must be
used. Learning without labels is called unsupervised learning. In unsupervised
learning an inherent metric of the data items is used, such as distance. A typical
problem in unsupervised learning is to find patterns in the data, such as clusters or
subgroups [819, 800].
Popular unsupervised learning algorithms are 𝑘-means and principal component
analysis [676, 379]. Other popular unsupervised methods are dimensionality-reduction
techniques from visualization, such as t-SNE [492], minimum description
length [293], and data compression [55]. A popular application of
unsupervised learning is the autoencoder, see Sect. B.2.6 [410, 411].
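As an illustration of learning without labels, here is a minimal 𝑘-means sketch (not from the original text) on made-up data:

import numpy as np
from sklearn.cluster import KMeans

# Made-up unlabeled data: two blobs of points in two dimensions
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 0.5, size=(50, 2)),
                  rng.normal(3.0, 0.5, size=(50, 2))])

# k-means finds cluster structure using only distances between inputs, no labels
kmeans = KMeans(n_clusters=2, n_init=10).fit(data)
print(kmeans.cluster_centers_)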
The relation between supervised and unsupervised learning is sometimes characterized
as follows: supervised learning aims to learn the conditional probability
distribution 𝑝(𝑦|𝑥) of a label 𝑦 conditioned on the input data 𝑥, whereas unsupervised
learning aims to learn the a priori probability distribution 𝑝(𝑥) [342].
We will encounter unsupervised methods in this book in a few places, specif-
ically, when autoencoders and dimension reduction are discussed, for example,
in Chap. 5. At the end of this book explainable artificial intelligence is discussed,
where interpretable models play an important role, in Chap. 10.
1.2.3 Reinforcement Learning
The third machine learning paradigm is, indeed, reinforcement learning. In contrast
to supervised and unsupervised learning, in reinforcement learning data items
come one by one. The dataset is produced dynamically, as it were. The objective in
reinforcement learning is to find the policy: a function that gives us the best action
in each state that the world can be in.
The approach of reinforcement learning is to learn the policy for the world by
interacting with it. In reinforcement learning we recognize an agent, that does the
learning of the policy, and an environment, that provides feedback to the agent’s
actions (and that performs state changes, see Fig. 1.13). In reinforcement learning,
the agent stands for the human, and the environment for the world. The goal of
reinforcement learning is to find the actions for each state that maximize the long
term accumulated expected reward. This optimal function of states to actions is
called the optimal policy.
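The interaction between agent and environment can be summarized in a short loop. The sketch below is not from the original text; the environment object with reset and step methods and the policy function are placeholders for what later chapters fill in:

def agent_environment_loop(env, policy, episodes=10):
    # Minimal sketch of the reinforcement learning interaction loop.
    for _ in range(episodes):
        state = env.reset()            # the environment provides the initial state
        done = False
        while not done:
            action = policy(state)     # the agent chooses an action in this state
            state, reward, done = env.step(action)   # the environment returns feedback
            # A learning algorithm would now use (state, action, reward)
            # to improve the policy; Chaps. 2 and 3 show how.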
In reinforcement learning there is no teacher or supervisor, and there is no static
dataset. There is, however, the environment, that will tell us how good the state is
in which we find ourselves. Reinforcement learning gives us partial information:
a number indicating the quality of the action that brought us to our state, where
supervised learning gives full information: the correct answer or action in that state
(Table 1.3). In this sense, reinforcement learning is in between supervised learning,
in which all data items have a label, and unsupervised learning, where no data has
a label.
Reinforcement learning provides the data to the learning algorithm step by
step, action by action; whereas in supervised learning the data is provided all at
once in one large dataset. The step-by-step approach is well suited to sequential
decision problems. On the other hand, many deep learning methods were developed
for supervised learning and may work differently when data items are generated
one-by-one. Furthermore, since actions are selected using the policy function, and
action rewards are used to update this same policy function, there is a possibility
of circular feedback and local minima. Care must be taken to ensure convergence
to global optima in our methods. Human learning also suffers from this problem,
when a stubborn child refuses to explore outside of its comfort zone. This topic is
discussed in Sect. 2.2.4.3.
Another difference is that in supervised learning the pupil learns from a finite-
sized teacher (the dataset), and at some point may have learned all there is to learn.
The reinforcement learning paradigm allows a learning setup where the agent
can continue to sample the environment indefinitely, and will continue to become
smarter as long as the environment remains challenging (which can be a long time,
for example in games such as chess and Go).3
For these reasons there is great interest in reinforcement learning, although
getting the methods to work is often harder than for supervised learning.
Most classical reinforcement learning algorithms use tabular methods that work for low-
dimensional problems with small state spaces. Many real world problems are com-
plex and high-dimensional, with large state spaces. Due to steady improvements
in learning algorithms, datasets, and compute power, deep learning methods have
become quite powerful. Deep reinforcement learning methods have emerged that
successfully combine step-by-step sampling in high-dimensional problems with
large state spaces. We will discuss these methods in the subsequent chapters of this
book.
1.3 Overview of the Book
The aim of this book is to present the latest insights in deep reinforcement learning in
a single comprehensive volume, suitable for teaching a graduate level one-semester
course.
In addition to covering state of the art algorithms, we cover necessary background
in classic reinforcement learning and in deep learning. We also cover advanced,
forward looking developments in self-play, and in multi-agent, hierarchical, and
meta-learning.
Fig. 1.14 Deep Reinforcement Learning is built on Deep Supervised Learning and Tabular Reinforcement Learning
The figure shows how deep reinforcement learning is built on tabular reinforce-
ment learning and on deep (supervised) learning. Please check the material in
Appendix B to make sure you have enough background knowledge to follow the
rest of this book.
We also assume undergraduate-level familiarity with the Python programming
language. Python has become the programming language of choice for machine
learning research, and the host language of most machine learning packages. All
example code in this book is in Python, and major machine learning environments
such as scikit-learn, TensorFlow, Keras and PyTorch work best from Python. See
https://www.python.org for pointers on how to get started in Python. Use the
latest stable version, unless the text mentions otherwise.
We assume an undergraduate level of familiarity with mathematics—a basic
understanding of set theory, graph theory, probability theory and information
theory is necessary, although this is not a book of mathematics. Appendix A contains
a summary to refresh your mathematical knowledge, and to provide an introduction
to the notation that is used in the book.
Course
There is a lot of material in the chapters, both basic and advanced, with many
pointers to the literature. One option is to teach a single course about all topics in
the book. Another option is to go slower and deeper, to spend sufficient time on the
basics, and create a course about Chaps. 2–5 to cover the basic topics (value-based,
policy-based, and model-based learning), and to create a separate course about
Chaps. 6–9 to cover the more advanced topics of multi-agent, hierarchical, and
meta-learning.
The field of deep reinforcement learning is a highly active field, in which theory and
practice go hand in hand. The culture of the field is open, and you will easily find
many blog posts about interesting topics, some quite good. Theory drives experi-
mentation, and experimental results drive theoretical insights. Many researchers
publish their papers on arXiv and their algorithms, hyperparameter settings and
environments on GitHub.
In this book we aim for the same atmosphere. In the chapters we first discuss
problems that are to be solved, then the algorithmic approaches to solve them, and
finally the environments to test these approaches.
Throughout the text we provide links to code, and we challenge you with hands-
on sections to get your hands dirty to perform your own experiments. All links to
web pages that we use have been stable for some time.
The field of deep reinforcement learning consists of two main areas: model-free
reinforcement learning and model-based reinforcement learning. Both areas have
two subareas. The chapters of this book are organized according to this structure.
• Model-free methods
– Value-based methods: Chap. 2 (tabular) and 3 (deep)
– Policy-based methods: Chap. 4
• Model-based methods
– Learned model: Chap. 5
– Given model: Chap. 6
Chapters
The next chapter, Chap. 6, studies how a self-play system can be created for
applications where the transition model is given by the problem description. This is
the case in two-agent games, where the rules for moving in the game determine the
transition function. We study how TD-Gammon and AlphaZero achieve tabula rasa
learning: teaching themselves from zero knowledge to world champion level play
through playing against a copy of itself (Table 6.2). In this chapter deep residual
networks and Monte Carlo Tree Search result in curriculum learning.
Chapter 7 introduces recent developments in deep multi-agent and team learning.
The chapter covers competition and collaboration, population-based methods, and
playing in teams. Applications of these methods are found in games such as StarCraft
and Capture the Flag (Table 7.2).
Chapter 8 covers deep hierarchical reinforcement learning. Many tasks exhibit
an inherent hierarchical structure, in which clear subgoals can be identified. The
options framework is discussed, and methods that can identify subgoals, subpolicies,
and meta policies. Different approaches for tabular and deep hierarchical methods
are discussed (Table 8.1).
The final technical chapter, Chap. 9, covers deep meta-learning, or learning to
learn. One of the major hurdles in machine learning is the long time it takes to learn
to solve a new task. Meta-learning and transfer learning aim to speed up learning of
new tasks by using information that has been learned previously for related tasks;
algorithms are listed in Table 9.2. At the end of the chapter we will experiment with
few-shot learning, where a task has to be learned without having seen more than a
few training examples.
Chapter 10 concludes the book by reviewing what we have learned, and by
looking ahead into what the future may bring.
Appendix A provides mathematical background information and notation. Ap-
pendix B provides a chapter-length overview of machine learning and deep super-
vised learning. If you wish to refresh your knowledge of deep learning, please go to
this appendix before you read Chap. 3.
Chapter 2
Tabular Value-Based Reinforcement Learning
This chapter will introduce the classic, tabular, field of reinforcement learning, to
build a foundation for the next chapters. First, we will introduce the concepts of
agent and environment. Next come Markov decision processes, the formalism that
is used to reason mathematically about reinforcement learning. We discuss at some
length the elements of reinforcement learning: states, actions, values, policies.
We learn about transition functions, and solution methods that are based on
dynamic programming using the transition model. There are many situations where
agents do not have access to the transition model, and state and reward information
must be acquired from the environment. Fortunately, methods exist to find the
optimal policy without a model, by querying the environment. These methods,
appropriately named model-free methods, will be introduced in this chapter. Value-
based model-free methods are the most basic learning approach of reinforcement
learning. They work well in problems with deterministic environments and discrete
action spaces, such as mazes and games. Model-free learning makes few demands
on the environment, building up the policy function 𝜋(𝑠) → 𝑎 by sampling the
environment.
After we have discussed these concepts, it is time to apply them, and to under-
stand the kinds of sequential decision problems that we can solve. We will look
at Gym, a collection of reinforcement learning environments. We will also look at
simple Grid world puzzles, and see how to navigate those.
This is a non-deep chapter: in this chapter functions are exact, states are stored in
tables, an approach that works as long as problems fit in memory. The next chapter
shows how function approximation with neural networks allows us to approximate
problems with more states than fit in memory.
The chapter is concluded with exercises, a summary, and pointers to further
reading.
Core Concepts
• Agent, environment
• MDP: state, action, reward, value, policy
• Planning and learning
• Exploration and exploitation
• Gym, baselines
Core Problem
Core Algorithms
2.1 Sequential Decision Problems
Finding a Supermarket
Imagine that you have just moved to a new city, you are hungry, and you want to
buy some groceries. There is a somewhat unrealistic catch: you do not have a map
of the city and you forgot to charge your smartphone. It is a sunny day, you put
on your hiking shoes, and after some random exploration you have found a way
to a supermarket and have bought your groceries. You have carefully noted your
route in a notebook, and you retrace your steps, finding your way back to your new
home.
What will you do the next time that you need groceries? One option is to
follow exactly the same route, exploiting your current knowledge. This option is
guaranteed to bring you to the store, at no additional cost for exploring possible
alternative routes. Or you could be adventurous, and explore, trying to find a new
route that may actually be quicker than the old route. Clearly, there is a trade-off:
you should not spend so much time exploring that you can not recoup the gains of
a potential shorter route before you move elsewhere.
Reinforcement learning is a natural way of learning the optimal route as we go,
by trial and error, from the effects of the actions that we take in our environment.
This little story contained many of the elements of a reinforcement learning
problem, and how to solve it. There is an agent (you), an environment (the city), there
are states (your location at different points in time), actions (assuming a Manhattan-
style grid, moving a block left, right, forward, or back), there are trajectories (the
routes to the supermarket that you tried), there is a policy (which action you will
take at a particular location), there is a concept of cost/reward (the length of your
current path), we see exploration of new routes, exploitation of old routes, a trade-off
between them, and your notebook in which you have been sketching a map of the
city (a local transition model).
By the end of this chapter you will have learned which role all these topics play
in reinforcement learning.
Grid Worlds
By exploring the grid, taking different actions, and recording the reward (whether
it reached the goal square), the agent can find a route—and when it has a route, it
can try to improve that route, to find an optimal, shortest, route to the goal.
Grid world is a simple environment that is well-suited for manually playing
around with reinforcement learning algorithms, to build up intuition of what the
algorithms do to learn by reinforcement. In this chapter we will model reinforcement
learning problems formally, and encounter algorithms that find optimal routes in
Grid world.
Agent
Environment
After Grid world problems, there are more complicated problems, with extensive
wall structures to make navigation more difficult (see Fig. 2.2). Trajectory planning
algorithms play a central role in robotics [455, 264]; there is a long tradition of
using 2D and 3D mazes for path-finding problems in reinforcement learning. The
Taxi domain was introduced by Dietterich [196], and box-pushing problems such
as Sokoban have also been used frequently [385, 204, 542, 877], see Fig. 2.3. The
challenge in Sokoban is that boxes can only be pushed, not pulled. Actions can have
the effect of creating an inadvertent dead-end later in the game, making Sokoban
a difficult game to play. The action space of these puzzles and mazes is discrete.
Small versions of the mazes can be solved exactly by planning, larger instances are
only suitable for approximate planning or learning methods. Solving these planning
problems exactly is NP-hard or PSPACE-hard [169, 321], as a consequence the
computational time required to solve problem instances exactly grows exponentially
with the problem size, and becomes quickly infeasible for all but the smallest
problems.
Let us see how we can model agents to act in these types of environments.
In Fig. 2.4 the agent and environment are shown, together with action 𝑎𝑡 , next state
𝑠𝑡+1 , and its reward 𝑟𝑡+1 . Let us have a closer look at the figure.
Formalism
A Markov decision process is defined by the tuple (𝑆, 𝐴, 𝑇𝑎 , 𝑅𝑎 , 𝛾); a small code sketch after the following list illustrates the tuple for a toy problem:
• 𝑆 is a finite set of states
• 𝐴 is a finite set of actions (if the set of actions differs per state, then 𝐴𝑠 is the
finite set of actions in state 𝑠)
• 𝑇𝑎 (𝑠, 𝑠′) = Pr(𝑠𝑡+1 = 𝑠′ | 𝑠𝑡 = 𝑠, 𝑎𝑡 = 𝑎) is the probability that action 𝑎 in state
𝑠 at time 𝑡 will transition to state 𝑠′ at time 𝑡 + 1 in the environment (do not
confuse the 𝑇 in transition function 𝑇𝑎 (·) with time index 𝑡)
• 𝑅𝑎 (𝑠, 𝑠′) is the reward received after action 𝑎 transitions state 𝑠 to state 𝑠′
• 𝛾 ∈ [0, 1] is the discount factor representing the difference between future and
present rewards.
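The following is a minimal sketch (not from the original text) of how this tuple can be written down in Python for a toy problem with two states and two actions; all numbers are made up:

# Toy MDP (S, A, T_a, R_a, gamma); T[a][s][s2] and R[a][s][s2] index action, state, next state
S = [0, 1]
A = [0, 1]
T = {
    0: {0: {0: 0.9, 1: 0.1}, 1: {0: 0.0, 1: 1.0}},   # transition probabilities for action 0
    1: {0: {0: 0.2, 1: 0.8}, 1: {0: 0.5, 1: 0.5}},   # transition probabilities for action 1
}
R = {
    0: {0: {0: 0.0, 1: 1.0}, 1: {0: 0.0, 1: 0.0}},   # rewards for action 0
    1: {0: {0: 0.0, 1: 2.0}, 1: {0: -1.0, 1: 0.0}},  # rewards for action 1
}
gamma = 0.9   # discount factor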
2.2.2.1 State 𝑺
Let us have a deeper look at the Markov-tuple 𝑆, 𝐴, 𝑇𝑎 , 𝑅 𝑎 , 𝛾, to see their role in the
reinforcement learning paradigm, and how, together, they can model and describe
reward-based learning processes.
At the basis of every Markov decision process is a description of the state 𝑠𝑡 of
the system at a certain time 𝑡.
State Representation
In a deterministic environment, performing an action in a state always leads to the same
new state. This is the case in Grid worlds, Sokoban, and in games such as chess and
checkers, where a move action deterministically leads to one new board position.
An example of a non-deterministic situation is a robot movement in an envi-
ronment. In a certain state, a robot arm is holding a bottle. An agent-action can be
turning the bottle in a certain orientation (presumably to pour a drink in a cup). The
next state may be a full cup, or it may be a mess, if the bottle was not poured in the
correct orientation, or location, or if something happened in the environment such
as someone bumping the table. The outcome of the action is unknown beforehand
by the agent, and depends on elements in the environment, that are not known to
the agent.
2.2.2.2 Action 𝑨
Now that we have looked at the state, it is time to look at the second item that
defines an MDP, the action.
The actions are discrete in some applications, continuous in others. For example,
the actions in board games, and choosing a direction in a navigation task in a grid,
are discrete.
In contrast, arm and joint movements of robots, and bet sizes in certain games, are
continuous (or span a very large range of values). Applying algorithms to continuous
or very large action spaces either requires discretization of the continuous space, or the use of methods that can handle continuous actions directly, such as the policy-based methods of Chap. 4.
2.2 Tabular Value-Based Agents 31
𝑠 𝑠
𝜋 𝜋 𝑠
𝑎 𝑎
𝑡𝑎 , 𝑟𝑎 𝑡𝑎 , 𝑟𝑎 𝜋 𝑡𝑎 , 𝑟𝑎
𝑠0 𝑠0 𝑎, 𝑠0
2.2.2.3 Transition 𝑻𝒂
After having discussed state and action, it is time to look at the transition function
𝑇𝑎 (𝑠, 𝑠′). The transition function determines how the state changes after an action
has been selected. In model-free reinforcement learning the transition function is
implicit to the solution algorithm: the environment has access to the transition
function, and uses it to compute the next state 𝑠′, but the agent has not. (In Chap. 5
we will discuss model-based reinforcement learning. There the agent has its own
transition function, an approximation of the environment’s transition function,
which is learned from the environment feedback.)
We have discussed states, actions and transitions. The dynamics of the MDP are
modelled by transition function 𝑇𝑎 (·) and reward function 𝑅 𝑎 (·). The imaginary
1 If we assume that supermarkets are large, block-sized, items that typically can be found on street
corners, then we can discretize the action space. Note that we may miss small sub-block-sized
supermarkets, because of this simplification. Another, better, simplification, would be to discretize
the action space into walking distances of the size of the smallest supermarket that we expect to
ever encounter.
space of all possible states is called the state space. The state space is typically
large. The two functions define a two-step transition from state 𝑠 to 𝑠′, via action 𝑎:
𝑠 → 𝑎 → 𝑠′. To help our understanding of the transitions between states we can
use a graphical depiction, as in Fig. 2.5.
In the figure, states and actions are depicted as nodes (vertices), and transitions
are links (edges) between the nodes. States are drawn as open circles, and actions
as smaller black circles. In a certain state 𝑠, the agent can choose which action 𝑎 to
perform, that is then acted out in the environment. The environment returns the
new state 𝑠′ and the reward 𝑟′.
Figure 2.5 shows a transition graph of the elements of the MDP tuple 𝑠, 𝑎, 𝑡𝑎 , 𝑟𝑎
as well as 𝑠′, and policy 𝜋, and how the value can be calculated. The root node at
the top is state 𝑠, where policy 𝜋 allows the agent to choose between three actions
𝑎, that, following distribution Pr, each can transition to two possible states 𝑠′, with
their reward 𝑟′. In the figure, a single transition is shown. Use your imagination to
picture the other transitions as the graph extends down.
In the left panel of the figure the environment can choose which new state it
returns in response to the action (stochastic environment), in the middle panel there
is only one state for each action (deterministic environment).
To calculate the value of the root of the tree a backup procedure can be followed.
A backup procedure calculates the value of a parent from the values of the children,
recursively, in a bottom-up fashion, summing or maxing their values from the
leaves to the root of the tree. Such a calculation uses discrete time steps, indicated
by subscripts to the state and action, as in 𝑠𝑡 , 𝑠𝑡+1 , 𝑠𝑡+2 , . . . For brevity, 𝑠𝑡+1 is
sometimes written as 𝑠′. The figure shows a single transition step; an episode in
reinforcement learning typically consists of a sequence of many time steps.
A graph such as the one in the center and right panel of Fig. 2.5, where child nodes
have only one parent node and without cycles, is known as a tree. In computer
science the root of the tree is at the top, and trees grow downward to the leaves.
As actions are performed and states and rewards are returned, a learning process
is taking place in the agent. We can use Fig. 2.5 to better understand the learning
process that is unfolding.
The rewards of actions are learned by the agent by interacting with the envi-
ronment, performing the actions. In the tree of Fig. 2.5 an action selection moves
downward, towards the leaves. At the leaves, we find the rewards, which we propa-
gate to the parent states upwards. Reward learning is learning by backpropagation:
in Fig. 2.5 the reward information flows upward in the diagram from the leaves to
the root. Action selection moves down, reward learning flows upward.
Reinforcement learning is learning by trial and error. Trial is selecting an action
down (using the behavior policy) to perform in the environment. Error is moving
up the tree, receiving a feedback reward from the environment, and reporting that
back up the tree to the state to update the current behavior policy. The downward
selection policy chooses which actions to explore, and the upward propagation of
the error signal performs the learning of the policy.
Figures such as the one in Fig. 2.5 are useful for seeing how values are calculated.
The basic notions are trial, and error, or down, and up.
2.2.2.4 Reward 𝑹𝒂
We distinguish between two types of tasks: (1) continuous time and long running
tasks, and (2) episodic tasks—tasks that end. In continuous and long running tasks it
makes sense to discount rewards from far into the future in order to more strongly
value current information. To achieve this a discount factor 𝛾 is used that reduces
the impact of far away rewards. Many continuous tasks use discounting, 𝛾 ≠ 1.
In this book we will mostly discuss episodic problems, where 𝛾 is irrelevant.
Both the supermarket example and the game of chess are episodic, and discounting
does not make sense in these problems.
2.2.2.6 Policy 𝝅
The policy 𝜋 maps every state to a probability distribution over the actions:
𝜋 : 𝑆 → 𝑝(𝐴)
where 𝜋(𝑎|𝑠) denotes the probability that the policy selects action 𝑎 in state 𝑠.
Example: For a discrete state space and discrete action space, we may store
an explicit policy as a table, e.g.:
𝑠 𝜋(𝑎=up|𝑠) 𝜋(𝑎=down|𝑠) 𝜋(𝑎=left|𝑠) 𝜋(𝑎=right|𝑠)
1 0.2 0.8 0.0 0.0
2 0.0 0.0 0.0 1.0
3 0.7 0.0 0.3 0.0
etc. . . . .
A special case is the deterministic policy 𝜋(𝑠), where
𝜋 : 𝑆 → 𝐴
A deterministic policy selects a single action in every state. Of course the deterministic action may differ between states, as in the sketch below:
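As a sketch (not from the original text), the stochastic policy table above and a deterministic policy can be represented in Python as follows; the deterministic actions are made-up values:

import numpy as np

# Stochastic policy: a probability for each action in each state (as in the table above)
pi_stochastic = {
    1: {"up": 0.2, "down": 0.8, "left": 0.0, "right": 0.0},
    2: {"up": 0.0, "down": 0.0, "left": 0.0, "right": 1.0},
    3: {"up": 0.7, "down": 0.0, "left": 0.3, "right": 0.0},
}

def sample_action(state, rng=np.random.default_rng()):
    actions, probs = zip(*pi_stochastic[state].items())
    return rng.choice(list(actions), p=list(probs))

# Deterministic policy: exactly one action per state (values chosen arbitrarily here)
pi_deterministic = {1: "down", 2: "right", 3: "up"}

print(sample_action(1), pi_deterministic[1])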
Finding the optimal policy function is the goal of the reinforcement learning prob-
lem, and the remainder of this book will discuss many different algorithms to
achieve this goal under different circumstances. Let us have a closer look at the
objective of reinforcement learning.
Before we can do so, we will look at traces, their return, and value functions.
2.2.3.1 Trace 𝝉
A trace 𝜏 (also called a trajectory) is the sequence of states, actions, and rewards that the agent encounters when it interacts with the environment, for example:
𝜏0 = {𝑠0 =1, 𝑎0 =up, 𝑟0 =−1, 𝑠1 =2, 𝑎1 =up, 𝑟1 =−1, 𝑠2 =3, 𝑎2 =left, 𝑟2 =20, 𝑠3 =5}
Since both the policy and the transition dynamics can be stochastic, we will not
always get the same trace from the start state. Instead, we will get a distribution
over traces. The distribution of traces from the start state (distribution) is denoted
by 𝑝(𝜏0 ). The probability of each possible trace from the start is actually given by
the product of the probability of each specific transition in the trace:
$$p(\tau_0) = p(s_0) \prod_{t=0}^{\infty} \pi(a_t|s_t) \cdot T_{a_t}(s_t, s_{t+1}) \qquad (2.1)$$
Return 𝑹
We have not yet formally defined what we actually want to achieve in the sequential
decision-making task—which is, informally, the best policy. The sum of the future
rewards of a trace is known as the return. The return of trace 𝜏𝑡 is:
$$R(\tau_t) = \sum_{i=0}^{\infty} \gamma^i \cdot r_{t+i} \qquad (2.2)$$
Example: For the previous trace example we assume 𝛾 = 0.9. The return
(cumulative reward) is equal to:
$$R(\tau_0) = -1 + 0.9 \cdot (-1) + 0.9^2 \cdot 20 = 14.3$$
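In code, the discounted return of this trace can be computed directly (a sketch, not from the original text):

# Discounted return of the example trace: rewards -1, -1, 20 with gamma = 0.9
rewards = [-1, -1, 20]
gamma = 0.9
ret = sum(gamma**i * r for i, r in enumerate(rewards))
print(ret)   # approximately 14.3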
2.2.3.2 State Value 𝑉
The real measure of optimality that we are interested in is not the return of just one
trace. The environment can be stochastic, and so can our policy, and for a given
policy we do not always get the same trace. Therefore, we are actually interested
in the expected cumulative reward that a certain policy achieves. The expected
cumulative discounted future reward of a state is better known as the value of that
state.
We define the state value 𝑉 𝜋 (𝑠) as the return we expect to achieve when an agent
starts in state 𝑠 and then follows policy 𝜋, as:
$$V^\pi(s) = \mathbb{E}_{\tau_t \sim p(\tau_t)}\Big[\sum_{i=0}^{\infty} \gamma^i \cdot r_{t+i} \,\Big|\, s_t = s\Big] \qquad (2.3)$$
Example: Imagine that we have a policy 𝜋, which from state 𝑠 can result
in two traces. The first trace has a cumulative reward of 20, and occurs in
60% of the times. The other trace has a cumulative reward of 10, and occurs
40% of the times. What is the value of state 𝑠? It is the expected return:
𝑉 𝜋 (𝑠) = 0.6 · 20 + 0.4 · 10 = 16.
Every policy 𝜋 has one unique associated value function 𝑉 𝜋 (𝑠). We often omit
𝜋 to simplify notation, simply writing 𝑉 (𝑠), knowing a state value is always condi-
tioned on a certain policy.
The state value is defined for every possible state 𝑠 ∈ 𝑆. 𝑉 (𝑠) maps every state
to a real number (the expected return):
𝑉 :𝑆→R
The value of a terminal state is defined as zero: 𝑠 = terminal ⇒ 𝑉 (𝑠) := 0.
2.2.3.3 State-Action Value 𝑄
In addition to state values 𝑉 𝜋 (𝑠), we also define 𝑄 values 𝑄 𝜋 (𝑠, 𝑎).2 The only
difference is that we now condition on a state and action. We estimate the average
return we expect to achieve when taking action 𝑎 in state 𝑠, and then following
policy 𝜋 afterwards:
$$Q^\pi(s, a) = \mathbb{E}_{\tau_t \sim p(\tau_t)}\Big[\sum_{i=0}^{\infty} \gamma^i \cdot r_{t+i} \,\Big|\, s_t = s, a_t = a\Big] \qquad (2.4)$$
2The reason for the choice for letter Q is lost in the mists of time. Perhaps it is meant to indicate
quality.
Every policy 𝜋 has only one unique associated state-action value function 𝑄 𝜋 (𝑠, 𝑎).
We often omit 𝜋 to simplify notation. Again, the state-action value is a function
𝑄:𝑆×𝐴→R
Example: For a discrete state and action space, 𝑄(𝑠, 𝑎) can be represented
as a table of size |𝑆| × | 𝐴|. Each table entry stores a 𝑄(𝑠, 𝑎) estimate for the
specific 𝑠, 𝑎 combination:
𝑎=up 𝑎=down 𝑎=left 𝑎=right
𝑠=1 4.0 3.0 7.0 1.0
𝑠=2 2.0 -4.0 0.3 1.0
𝑠=3 3.5 0.8 3.6 6.2
etc. . . . .
Terminal states again have value zero for every action: 𝑠 = terminal ⇒ 𝑄(𝑠, 𝑎) := 0, ∀𝑎
2.2.3.4 Reinforcement Learning Objective
We now have the ingredients to state the objective 𝐽 (·) of reinforcement learning.
The objective is to achieve the highest possible average return from the start state:
$$J(\pi) = V^\pi(s_0) = \mathbb{E}_{\tau_0 \sim p(\tau_0 \mid \pi)}\big[R(\tau_0)\big]. \qquad (2.5)$$
for 𝑝(𝜏0 ) given in Eq. 2.1. There is one optimal value function, which achieves
higher or equal value than all other value functions. We search for a policy that
achieves this optimal value function, which we call the optimal policy 𝜋★:
$$\pi^\star = \arg\max_\pi V^\pi(s_0) \qquad (2.6)$$
This function 𝜋★ is the optimal policy; it uses the arg max function to select the
policy with the optimal value. The goal in reinforcement learning is to find this
optimal policy for start state 𝑠0 .
A potential benefit of state-action values 𝑄 over state values 𝑉 is that state-action
values directly tell what every action is worth. This may be useful for action
selection, since, for discrete action spaces, the Q function directly identifies the best
action. Equivalently, the optimal policy can be obtained directly from the optimal
Q function:
$$\pi^\star(s) = \arg\max_a Q^\star(s, a)$$
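A small sketch (not from the original text) of greedy action selection from a Q table such as the one above; note that the rows are indexed from 0 here:

import numpy as np

# Q table with one row per state and one column per action (values as in the table above)
Q = np.array([[4.0, 3.0, 7.0, 1.0],
              [2.0, -4.0, 0.3, 1.0],
              [3.5, 0.8, 3.6, 6.2]])
actions = ["up", "down", "left", "right"]

def greedy_action(state):
    return actions[np.argmax(Q[state])]   # pi(s) = argmax_a Q(s, a)

print(greedy_action(0))   # "left", since 7.0 is the largest value in the first row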
We will now turn to constructing algorithms to compute the value function and the
policy function.
2.2.3.5 Bellman Equation
To calculate the value function, let us look again at the tree in Fig. 2.5 on page 31,
and imagine that it is many times larger, with subtrees that extend to fully cover
the state space. Our task is to compute the value of the root, based on the reward
values at the real leaves, using the transition function 𝑇𝑎 . One way to calculate the
value 𝑉 (𝑠) is to traverse this full state space tree, computing the value of a parent
node by taking the reward value and the sum of the children, discounting this value
by 𝛾.
This intuitive approach was first formalized by Richard Bellman in 1957. He
showed that discrete optimization problems can be described as a recursive back-
ward induction problem [72]. He introduced the term dynamic programming to
recursively traverse the states and actions. The so-called Bellman equation shows
the relationship between the value function in state 𝑠 and the future child state 𝑠′,
when we follow the transition function.
The discrete Bellman equation of the value of state 𝑠 after following policy 𝜋 is:3
$$V^\pi(s) = \sum_{a \in A} \pi(a|s) \sum_{s' \in S} T_a(s, s')\Big[R_a(s, s') + \gamma \cdot V^\pi(s')\Big] \qquad (2.7)$$
The Bellman equation is a recursive equation: it shows how to calculate the value
of a state out of the values of applying the function specification again on the successor states.
3 State-action value and continuous Bellman equations can be found in Appendix A.4.
def value_iteration():
    # S, A: number of states and actions; T[s, a, s'] and R[s, a, s'] are the
    # array forms of the transition function T_a(s, s') and reward R_a(s, s')
    initialize(V)                       # arbitrary initial values, 0 for terminal states
    while not convergence(V):
        for s in range(S):
            for a in range(A):
                # Q[s, a] = sum over s' of T_a(s, s') * (R_a(s, s') + gamma * V[s'])
                Q[s, a] = sum(T[s, a, sp] * (R[s, a, sp] + gamma * V[sp]) for sp in range(S))
            V[s] = max(Q[s, a] for a in range(A))
    return V
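To make the dynamic programming loop concrete, here is a small self-contained sketch of value iteration on a tiny synthetic MDP. The transition and reward arrays are invented for illustration; only the update rule matters:

import numpy as np

n_states, n_actions = 3, 2                  # toy sizes, chosen for illustration
gamma, tol = 0.9, 1e-8

# T[s, a, s'] = transition probability, R[s, a, s'] = reward (made-up numbers)
rng = np.random.default_rng(0)
T = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(-1.0, 1.0, size=(n_states, n_actions, n_states))

V = np.zeros(n_states)
while True:
    # Q[s, a] = sum_{s'} T[s, a, s'] * (R[s, a, s'] + gamma * V[s'])
    Q = np.sum(T * (R + gamma * V), axis=2)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < tol:     # the value function stopped changing
        break
    V = V_new
policy = Q.argmax(axis=1)                   # greedy policy read off from the Q values
print(V, policy)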
OpenAI Gym
OpenAI has created the Gym suite of environments for Python, which has become
the de facto standard in the field of research [108]. The Gym suite can be found at
OpenAI4 and on GitHub.5 Gym works on Linux, macOS and Windows. An active
community exists and new environments are created continuously and uploaded to
the Gym website. Many interesting environments are available to experiment with, to create your own agent algorithms for, and to test them on.
If you browse Gym on GitHub, you will see different sets of environments,
from easy to advanced. There are the classics, such as Cartpole and Mountain car.
There are also small text environments. Taxi is there, and the Arcade Learning
Environment [71], which was used in the paper that introduced DQN [521], as we
will discuss at length in the next chapter. MuJoCo6 is also available, an environment
for experimentation with simulated robotics [779] (student licences are free, or you
can use pybullet).7
We can now install Gym. Go to the Gym page on https://gym.openai.com
and read the documentation. Make sure Python is installed on your system (does
typing python at the command prompt work?), and that your Python version is up
to date (version 3.10 at the time of this writing). Then type
4 https://gym.openai.com
5 https://github.com/openai/gym
6 http://www.mujoco.org
7 https://pybullet.org/wordpress/
pip install gym
to install Gym with the Python package manager. Soon, you will also be needing
deep learning suites, such as TensorFlow or PyTorch. It is recommended to install
Gym in the same virtual environment as your upcoming PyTorch and TensorFlow
installation, so that you can use both at the same time (see Sect. B.3.4). You may
have to install or update other packages, such as numpy, scipy and pyglet, to get
Gym to work, depending on your system installation.
You can check if the installation works by seeing if the CartPole environment
works, see Listing 2.2. A window should appear on your screen in which a Cartpole
is making random movements (your window system should support OpenGL, and
you may need a version of pyglet newer than 1.5.11 on some operating systems).
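A minimal check in the spirit of Listing 2.2, using the classic Gym API that the other listings in this chapter also assume, could look like this:

import gym

# render a few hundred frames of CartPole with random actions
env = gym.make("CartPole-v1")
env.reset()
for _ in range(200):
    env.render()
    observation, reward, done, info = env.step(env.action_space.sample())
    if done:
        env.reset()
env.close()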
The Taxi example (Fig. 2.8) is an environment where taxis move up, down, left,
and right, and pickup and drop off passengers. Let us see how we can use value
iteration to solve the Taxi problem. The Gym documentation describes the Taxi
world as follows. There are four designated locations in the Grid world indicated by
R(ed), B(lue), G(reen), and Y(ellow). When the episode starts, the taxi starts off at a
random square and the passenger is at a random location. The taxi drives to the
passenger’s location, picks up the passenger, drives to the passenger’s destination
(another one of the four specified locations), and then drops off the passenger. Once
the passenger is dropped off, the episode ends.
The Taxi problem has 500 discrete states: there are 25 taxi positions, five possible
locations of the passenger (including the case when the passenger is in the taxi),
and 4 destination locations (25 × 5 × 4).
The environment returns a new result tuple at each step. There are six discrete
deterministic actions for the Taxi driver:
0: Move south
1: Move north
2: Move east
3: Move west
4: Pick up passenger
5: Drop off passenger
import gym
import numpy as np

env = gym.make("Taxi-v3")    # environment name may differ slightly between Gym versions
gamma = 0.9                   # illustrative discount factor

# solve MDP; iterate_value_function and build_greedy_policy are defined
# in the full listing (see the gist referenced in footnote 9)
v = np.zeros(env.observation_space.n)
for _ in range(100):
    v_old = v.copy()
    v = iterate_value_function(v, gamma, env)
    if np.all(v == v_old):
        break
policy = build_greedy_policy(v, gamma, env).astype(int)

# apply policy
cum_reward = 0
observation = env.reset()
for t in range(1000):
    action = policy[observation]
    observation, reward, done, info = env.step(action)
    cum_reward += reward
    if done:
        break
    if t % 50 == 0 and t > 0:
        print(cum_reward * 1.0 / (t + 1))
env.close()
There is a reward of −1 for each action and an additional reward of +20 for
delivering the passenger. There is a reward of −10 for executing actions pickup and
dropoff illegally.
The Taxi environment has a simple transition function, which is used by the
agent in the value iteration code.8 Listing 2.3 shows an implementation of value
iteration that uses the Taxi environment to find a solution. This code is written by
Mikhail Trofimov, and illustrates clearly how value iteration first creates the value
function for the states, and how a policy is then formed by finding the best action in each state, in the build_greedy_policy function.9
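The agent can also inspect this structure directly. A small sketch, assuming the classic Gym toy-text API in which the model is exposed as env.P, is:

import gym

env = gym.make("Taxi-v3")              # name may differ slightly between Gym versions
print(env.observation_space.n)          # 500 discrete states
print(env.action_space.n)               # 6 discrete actions

# env.P[state][action] is a list of (probability, next_state, reward, done) tuples
state = env.reset()
print(env.P[state][0])                  # the effect of action 0 (move south) in this state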
Please use the value iteration code with the Gym Taxi environment, refer to
Listing 2.3. Run the code, and play around with some of the hyperparameters to
familiarize yourself a bit with Gym and with planning by value iteration. Try to
visualize for yourself what the algorithm is doing. This will prepare you for the
more complex algorithms that we will look into next.
The value iteration algorithm can compute the policy function. It uses the transition
model in its computation. Frequently, we are in a situation when the exact transition
probabilities are not known to the agent, and we need other methods to compute the
8Note that the code uses the environment to compute the next state, so that we do not have to
implement a version of the transition function for the agent.
9 https://gist.github.com/geffy/b2d16d01cbca1ae9e13f11f678fa96fd#file-taxi-vi-py
policy function. For this situation, model-free algorithms have been developed, for
the agent to compute the policy without knowing any of the transition probabilities
itself.
The development of these model-free methods is a major milestone of reinforce-
ment learning, and we will spend some time to understand how they work. We will
start with value-based model-free algorithms. We will see how, when the agent does
not know the transition function, an optimal policy can be learned by sampling
rewards from the environment. Table 2.1 lists value iteration in conjunction with
the model-free algorithms that we cover in this chapter.
First we will discuss how the principle of temporal difference uses sampling and
bootstrapping to construct the value function from irreversible actions. We will
see how the value function can be used to find the best actions, to form the policy.
Second, we will discuss which mechanisms for action selection exist, where we will
encounter the exploration/exploitation trade-off. Third, we will discuss how to learn
from the rewards of the selected actions. We will encounter on-policy learning and
off-policy learning. We will discuss two simple algorithms: SARSA and Q-learning.
Let us now start by having a closer look at sampling actions with temporal difference
learning.
In the previous section the value function was calculated recursively by using the
value function of successor states, following Bellman’s equation (Eq. 2.7).
Bootstrapping is a process of subsequent refinement by which old estimates of
a value are refined with new updates. It means literally: pull yourself up by your
boot straps. Bootstrapping solves the problem of computing a final value when we
only know how to compute step-by-step intermediate values. Bellman’s recursive
computation is a form of bootstrapping. In model-free learning, the role of the
transition function is replaced by an iterative sequence of environment samples.
A bootstrapping method that can be used to process the samples, and to refine
them to approximate the final value, is temporal difference learning. Temporal
difference learning, TD for short, was introduced by Sutton [739] in 1988. The
temporal difference in the name refers to the difference in values between two time
steps, which it uses to calculate the value at the new time step.
Temporal difference learning works by updating the current estimate of the state value 𝑉 (𝑠) (the bootstrap value) with an error value that is based on the estimate of the value of the next state:

V(s) \leftarrow V(s) + \alpha \big[ r' + \gamma V(s') - V(s) \big]    (2.8)

Here 𝑠 is the current state, 𝑠′ the new state, and 𝑟′ the reward of the new state.
Note the introduction of 𝛼, the learning rate, which controls how fast the algorithm
learns (bootstraps). It is an important parameter; setting the value too high can be detrimental, since the last value then dominates the bootstrap process too
much. Finding the optimal value will require experimentation. The 𝛾 parameter is
the discount rate. The last term −𝑉 (𝑠) subtracts the value of the current state, to
compute the temporal difference. Another way to read the update rule is as moving 𝑉 (𝑠) by a fraction 𝛼 of the difference between the new temporal difference target 𝑟′ + 𝛾𝑉 (𝑠′) and the old value 𝑉 (𝑠).
Note the absence of transition model 𝑇 in the formula; temporal difference is a
model-free update formula.
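As a sketch, a single tabular TD(0) update can be written as a few lines of Python (V can be a dict or an array; α and γ are the learning rate and discount factor from the formula; for a terminal next state, 𝑉 (𝑠′) is taken to be zero):

def td_update(V, s, r_next, s_next, alpha=0.1, gamma=0.99):
    """One temporal difference (TD(0)) update of the state-value table V."""
    td_target = r_next + gamma * V[s_next]   # bootstrapped target from the next state
    V[s] += alpha * (td_target - V[s])       # move V(s) a fraction alpha toward the target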
The introduction of the temporal difference method has allowed model-free
methods to be used successfully in various reinforcement learning settings. Most
notably, it was the basis of the program TD-Gammon, that beat human world-
champions in the game of Backgammon in the early 1990s [762].
Now that we know how to calculate the value function (the up-motion in the
tree diagram), let us see how we can select the action in our model-free algorithm
(the down-motion in the tree diagram).
The goal of reinforcement learning is to construct the policy with the highest
cumulative reward. Thus, we must find the best action 𝑎 in each state 𝑠. In the
value-based approach we know the value functions 𝑉 (𝑠) or 𝑄(𝑠, 𝑎). How can that
help us to find action 𝑎? In a discrete action space, there is at least one discrete
action with the highest value. Thus, if we have the optimal state-value 𝑉 ★, then the
optimal policy can be found by finding the action with that value. This relationship
is given by
\pi^\star = \arg\max_\pi V^\pi(s) = \arg\max_{a, \pi} Q^\pi(s, a)

and the arg max function finds the best action for us:

\pi^\star(s) = \arg\max_a Q^\star(s, a)

In this way the optimal policy 𝜋★ (𝑠), the sequence of best actions, can be recovered from the value functions, hence the name value-based method [845].
2.2.4.3 Exploration
Since there is no local transition function, model-free methods perform their state
changes directly in the environment. This may be an expensive operation, for
example, when a real-world robot arm has to perform a movement. The sampling
policy should choose promising actions to reduce the number of samples as much
as possible, and not waste any actions. What behavior policy should we use? It is
tempting to favor at each state the actions with the highest Q-value, since then we
would be following what is currently thought to be the best policy.
This approach is called the greedy approach. It appears attractive, but is short-
sighted and risks settling for local maxima. Following the trodden path based
on only a few early samples risks missing a potentially better path. Indeed, the
greedy approach is high variance, using values based on few samples, resulting
in a high uncertainty. We run the risk of circular reinforcement, if we update
the same behavior policy that we use to choose our samples from. In addition to
exploiting known good actions, a certain amount of exploration of unknown actions
is necessary. Smart sampling strategies use a mix of the current behavior policy
(exploitation) and randomness (exploration) to select which action to perform in the
environment.
Bandit Theory
The exploration/exploitation trade-off, the question of how to get the most reliable
information at the least cost, has been studied extensively in the literature [345, 844].
The field has the colorful name of multi-armed bandit theory [30, 442, 278, 631]. A
bandit in this context refers to a casino slot machine, with not one arm, but many
arms, each with a different and unknown payout probability. Each trial costs a coin.
The multi-armed bandit problem is then to find a strategy that finds the arm with
the highest payout at the least cost.
A multi-armed bandit is a single-state reinforcement learning problem, a one-step
non-sequential decision making problem, with the arms representing the possible
actions. This simplified model of stochastic decision making allows the in-depth
study of exploration/exploitation strategies.
Single-step exploration/exploitation questions arise for example in clinical trials,
where new drugs are tested on test-subjects (real people). The bandit is the trial, and
the arms are the choice of how many of the test subjects are given the real experimental
drug, and how many are given the placebo. This is a serious setting, since the cost
may be measured in the quality of human lives.
In a conventional fixed randomized controlled trial (supervised setup) the sizes
of the groups that get the experimental drugs and the control group would be fixed,
and the confidence interval and the duration of the test would also be fixed. In an
adaptive trial (bandit setup) the sizes would adapt during the trial depending on
the outcomes, with more people getting the drug if it appears to work, and fewer if
it does not.
Let us have a look at Fig. 2.9. Assume that the learning process is a clinical trial
in which three new compounds are tested for their medical effect on test subjects.
In the fixed trial (left panel) all test subjects receive the medicine of their group to
the end of the test period, after which the data set is complete and we can determine
which of the compounds has the best effect. At that point we know which group has
had the best medicine, and which two thirds of the subjects did not, with possibly
harmful effect. Clearly, this is not a satisfactory situation. It would be better if we
could gradually adjust the proportion of the subjects that receive the medicine
that currently looks best, as our confidence in our test results increases as the trial
progresses. Indeed, this is what reinforcement learning does (Fig. 2.9, right panel).
It uses a mix of exploration and exploitation, adapting the treatment, giving more
subjects the promising medicine, while achieving the same confidence as the static
trial at the end [442, 441].
𝝐-greedy Exploration
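With 𝜖-greedy exploration the agent follows the action that currently has the highest Q-value most of the time, but with a small probability 𝜖 it selects a random action instead, so that unexplored actions keep being sampled. A minimal sketch of such a behavior policy for a tabular Q array is shown below (the parameter value 0.1 is only an example):

import numpy as np

def select_action(Q, s, epsilon=0.1):
    """Epsilon-greedy behavior policy over a tabular Q[s, a] array."""
    if np.random.random() < epsilon:
        return np.random.randint(Q.shape[1])   # explore: uniformly random action
    return int(np.argmax(Q[s]))                # exploit: current best action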
On-Policy SARSA
In on-policy learning a single policy function is used for (downward) action selection
and (upward) value backup towards the learning target. SARSA is an on-policy
algorithm [644]. On-policy learning updates values directly on the single policy.
The same policy function is used for exploration behavior and for the target policy.
The SARSA update formula is

Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \big[ r_{t+1} + \gamma Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t) \big]    (2.9)
Going back to temporal difference (Eq. 2.8), we see that the SARSA formula looks
very much like TD, although now we deal with state-action values, and temporal
difference dealt with state values.
On-policy learning selects an action, evaluates it in the environment, and moves
on to better actions, guided by the behavior policy (which is not specified in the
formula, but might be 𝜖-greedy). On-policy learning begins with a behavior policy,
samples the state space with this policy, and improves the policy by backing up
values of the selected actions. Note that the term 𝑄(𝑠𝑡+1 , 𝑎 𝑡+1 ) can also be written
as 𝑄(𝑠𝑡+1 , 𝜋(𝑠𝑡+1 )), highlighting the difference with off-policy learning. SARSA
updates its Q-values using the Q-value of the next state 𝑠𝑡+1 and the current policy’s
action.
The primary advantage of on-policy learning is that it directly optimizes the
target of interest, and converges quickly by learning with the direct behavior values.
The biggest drawback is sample inefficiency, since the target policy is updated with
sub-optimal explorative rewards.
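In code, one on-policy SARSA update could be sketched as follows (𝑎′ is the action that the behavior policy actually selects in the next state, for example with the 𝜖-greedy function above):

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    """One on-policy SARSA update of a tabular Q[s, a] array."""
    td_target = r + gamma * Q[s_next, a_next]   # value of the action actually taken next
    Q[s, a] += alpha * (td_target - Q[s, a])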
Off-Policy Q-Learning
Off-policy learning is more complicated; it uses separate behavior and target policies:
one for exploratory downward selection behavior, and one to update as the current
target backup policy. Learning (backing up) is from data off the downward behavior
policy, and the whole method is therefore called off-policy learning.
The most well-known off-policy algorithm is Q-learning [830]. It gathers information from explored moves and evaluates states as if a greedy policy were used, even when the actual behavior performed an exploration step.
The Q-learning update formula is

Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \big[ r_{t+1} + \gamma \max_a Q(s_{t+1}, a) - Q(s_t, a_t) \big]
The difference from on-policy learning is that the 𝛾𝑄(𝑠𝑡+1 , 𝑎 𝑡+1 ) term from Eq. 2.9
has been replaced by 𝛾 max𝑎 𝑄(𝑠𝑡+1 , 𝑎). The learning is from backup values of
the best action, not the one that was actually evaluated. Listing 2.4 shows the full
pseudocode for Q-learning.
The reason that Q-learning is off-policy is that it updates its Q-values using the Q-value of the next state 𝑠𝑡+1 and the greedy action (not necessarily the behavior policy’s
action—it is learning off the behavior policy). In this sense, off-policy learning
collects all available information and uses it simultaneously to construct the best
target policy.
On-policy targets follow the behavioral policy and convergence is typically more
stable (low variance). Off-policy targets can learn the optimal policy/value (low
bias), but they can be unstable due to the max operation, especially in combination
with function approximation, as we will see in the next chapter.
Before we conclude this section, we should discuss sparsity. For some environments
a reward exists for each state. For the supermarket example a reward can be calcu-
lated for each state that the agent has walked to. The reward is actually the opposite
of the cost expended in walking. Environments in which a reward exists in each
state are said to have a dense reward structure.
For other environments rewards may exist for only some of the states. For
example, in chess, rewards only exist at terminal board positions where there is a
win or a draw. In all other states the return depends on the future states and must be
calculated by the agent by propagating reward values from future states up towards
the root state 𝑠0 . Such an environment is said to have a sparse reward structure.
Finding a good policy is more complicated when the reward structure is sparse. A
graph of the landscape of such a sparse reward function would show a flat landscape
with a few sharp mountain peaks. Many of the algorithms that we will see in future
chapters use the reward-gradient to find good returns. Finding the optimum in a
flat landscape, where the gradient is zero, is hard. In some applications it is possible
to change the reward function to have a shape more amenable to gradient-based
optimization algorithms such as we use in deep learning. Reward shaping can make
all the difference when no solution can be found with a naive reward function. It
is a way of incorporating heuristic knowledge into the MDP. A large literature on
reward shaping and heuristic information exists [559]. The use of heuristics on
board games such as chess and checkers can also be regarded as reward shaping.
To get a feeling for how these algorithms work in practice, let us see how Q-learning
solves the Taxi problem.
In Sect. 2.2.4.1 we saw how value iteration solved the Taxi problem, making use
of the transition model. We will now see how we solve this problem if we do not
have the transition model, but have to use a model-free sample method. Q-learning
samples actions, and records the reward values in a Q-table, converging to the
state-action value function. When the values of the best actions are known in all states, these can be used to construct the optimal policy.
Let us see how a value-based model-free algorithm would solve a simple 5 × 5
Taxi problem. Refer to Fig. 2.8 on page 44 for an illustration of Taxi world.
Please recall that in Taxi world, the taxi can be in one of 25 locations and there
are 25 × (4 + 1) × 4 = 500 different states that the environment can be in.
We follow the reward model as it is used in the Gym Taxi environment. Recall
that our goal is to find a policy (actions in each state) that leads to the highest
cumulative reward. Q-learning learns the best policy through guided sampling.
It records the rewards it gets from actions it performs in the environment. The
Q-values are the expected rewards of the actions in the states. It uses the Q-values
to guide which actions it will sample. Q-values 𝑄(𝑠, 𝑎) are stored in an array that
is indexed by state and action. The Q-values guide the exploration, higher values
indicate better actions.
Listing 2.5 shows the full Q-learning algorithm, in Python, after [394]. It uses
an 𝜖-greedy behavior policy: mostly the best action is followed, but in a certain
fraction a random action is chosen, for exploration. Recall that the Q-values are
updated according to the Q-learning formula:

Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \big[ r_{t+1} + \gamma \max_a Q(s_{t+1}, a) - Q(s_t, a_t) \big]

where 0 ≤ 𝛾 ≤ 1 is the discount factor and 0 < 𝛼 ≤ 1 the learning rate. Note that
Q-learning uses bootstrapping, and the initial Q-values are set to a random value
(their value will disappear slowly due to the learning rate).
Q-learning is learning the best action to take in the current state by looking at
the reward for the current state-action combination, plus the maximum rewards
for the next state. Eventually the best policy is found in this way, and the taxi will
consider the route consisting of a sequence of the best rewards.
To summarize informally:
1. Initialize the Q-table to random values
2. Select a state 𝑠
3. For all possible actions from 𝑠 select the one with the highest Q-value and travel to this state, which becomes the new 𝑠, or, with 𝜖-greedy, explore
4. Update the Q-value of the chosen state-action pair with the Q-learning update formula, using the reward and the maximum Q-value of the new state, and repeat, allowing the Q-table to converge the rewards to the value function
In this way the optimal policy can be found model-free.
# evaluate the greedy policy stored in the Q-table
total_epochs, total_penalties = 0, 0
ep = 100
for _ in range(ep):
    state = env.reset()
    epochs, penalties, reward = 0, 0, 0
    done = False
    while not done:
        action = np.argmax(Q[state])
        state, reward, done, info = env.step(action)
        if reward == -10:
            penalties += 1
        epochs += 1
    total_penalties += penalties
    total_epochs += epochs
print(f"Results after {ep} episodes:")
print(f"Average timesteps per episode: {total_epochs / ep}")
print(f"Average penalties per episode: {total_penalties / ep}")
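For orientation, a compact training loop in the spirit of Listing 2.5 might look as follows; the environment name, the zero initialization of the Q-table, and the hyperparameter values are illustrative choices, not the book's exact code:

import gym
import numpy as np

env = gym.make("Taxi-v3")
Q = np.zeros((env.observation_space.n, env.action_space.n))  # the text initializes randomly; zeros also work
alpha, gamma, epsilon = 0.1, 0.9, 0.1                        # learning rate, discount, exploration

for episode in range(2000):
    state = env.reset()
    done = False
    while not done:
        # epsilon-greedy behavior policy
        if np.random.random() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done, info = env.step(action)
        # off-policy Q-learning update
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state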
Go ahead, implement and run this code, and play around to become familiar with
the algorithm. Q-learning is an excellent algorithm to learn the essence of how
reinforcement learning works. Try out different values for hyperparameters, such
as the exploration parameter 𝜖, the discount factor 𝛾 and the learning rate 𝛼. To
be successful in this field, it helps to have a feeling for these hyperparameters. A
choice close to 1 for the discount parameter is usually a good start, and a choice
close to 0 for the learning rate is a good start. You may feel a tendency to do the
opposite, to choose the learning rate as high as possible (close to 1) to learn as
quickly as possible. Please go ahead and see which works best in Q-learning (you
can have a look at [230]). In many deep learning environments a high learning rate
is a recipe for disaster, your algorithm may not converge at all, and Q-values can
become unbounded. Play around with tabular Q-learning, and approach your deep
learning slowly, with gentle steps!
The Taxi example is small, and you will get results quickly. It is well suited to build
up useful intuition. In later chapters, we will do experiments with deep learning,
that take longer to converge, and acquiring intuition for tuning hyperparameter
values will be more expensive.
Conclusion
We have now seen how a value function can be learned by an agent without having
the transition function, by sampling the environment. Model-free methods use
actions that are irreversible for the agent: the agent samples states and rewards from the
environment, using a behavior policy with the current best action, and following
an exploration/exploitation trade-off. The backup rule for learning is based on
bootstrapping, and can follow the rewards of the actions on-policy, including the
value of the occasional explorative action, or off-policy, always using the value
of the best action. We have seen two model-free tabular algorithms, SARSA and
Q-learning, where the value function is assumed to be stored in an exact table data
structure.
In the next chapter we will move to network-based algorithms for high-
dimensional state spaces, based on function approximation with a deep neural
network.
Now that we have discussed at length the agent algorithms, it is time to have a look
at the environments, the other part of the reinforcement learning model. Without
them, progress cannot be measured, and results cannot be compared in a meaningful
way. In a real sense, environments define the kind of intelligence that our artificial
methods can perform.
In this chapter we will start with a few smaller environments, that are suited
for the tabular algorithms that we have discussed. Two environments that have
been around since the early days of reinforcement learning are Mountain car and
Cartpole (see Fig. 2.10).
In Cartpole, the agent must keep a pole balanced upright on a moving cart by applying forces to the cart; the episode ends when the pole falls over, or when the cart runs too far left or right [57]. Again the challenge is to apply the right force at the right moment, solely by feedback of the pole being upright or too far down.
Navigation tasks and board games provide environments for reinforcement learning
that are simple to understand. They are well suited to reason about new agent
algorithms. Navigation problems, and the heuristic search trees built for board
games, can be of moderate size, and are then suited for determining the best action
by dynamic programming methods, such as tabular Q-learning, A*, branch and
bound, and alpha-beta [646]. These are straightforward search methods that do not
attempt to generalize to new, unseen, states. They find the best action in a (largish)
space of states, all of which are present at training time—the optimization methods
do not perform generalization from training to test time.
Path Planning
Path planning (Fig. 2.1) is a classic problem that is related to robotics [455, 264].
Popular versions are mazes, as we have seen earlier (Fig. 2.2). The Taxi domain was
originally introduced (Fig. 2.8) in the context of hierarchical problem solving [196].
Box-pushing problems such as Sokoban are frequently used as well [385, 204, 542,
877], see Fig. 2.3. The action space of these puzzles and mazes is discrete. Basic path
and motion planning can enumerate possible solutions [169, 321].
Small versions of mazes can be solved exactly by enumeration, larger instances
are only suitable for approximation methods. Mazes can be used to test algorithms
for path finding problems and are frequently used to do so. Navigation tasks and
box-pushing games such as Sokoban can feature rooms or subgoals, that may then
be used to test algorithms for hierarchically structured problems [235, 297, 614, 238]
(Chap. 8).
The problems can be made more difficult by enlarging the grid and by insert-
ing more obstacles. Mazes and Sokoban grids are sometimes procedurally gen-
erated [691, 328, 780]. The goal for the algorithms is typically to find a solution
for a grid of a certain difficulty class, to find a shortest path solution, or, in trans-
fer learning, to learn to solve a class of grids by training on a different class of
grids [858].
Board Games
Board games are a classic group of benchmarks for planning and learning since
the earliest days of artificial intelligence. Two-person zero-sum perfect information
board games such as tic tac toe, chess, checkers, Go, and shogi have been used to
test algorithms since the 1950s. The action space of these games is discrete. Notable
achievements were in checkers, chess, and Go, where human world champions
were defeated in 1994, 1997, and 2016, respectively [662, 124, 702].
The board games are typically used “as is” and are not changed for different
experiments (in contrast to mazes, that are often adapted in size or complexity for
specific purposes of the experiment). Board games are often used for the difficulty of
the challenge. The ultimate goal is to beat human grandmasters or even the world
champion. Board games have been traditional mainstays of artificial intelligence,
mostly associated with the search-based symbolic reasoning approach to artificial
intelligence [646]. In contrast, the benchmarks in the next chapter are associated
with connectionist artificial intelligence.
This has been a long chapter. We will summarize the chapter, and provide references
for further reading.
Summary
Reinforcement learning can learn behavior that achieves high rewards, using feed-
back from the environment. Reinforcement learning has no supervisory labels,
it can learn beyond a teacher, as long as there is an environment that provides
feedback.
Reinforcement learning problems are modeled as a Markov decision problem,
consisting of a 5-tuple (𝑆, 𝐴, 𝑇𝑎 , 𝑅 𝑎 , 𝛾) for state, action, transition, reward, and
discount factor. The agent performs an action, and the environment returns the
new state and the reward value to be associated with the new state.
Games and robotics are two important fields of application. Fields of application
can be episodic (they end—such as a game of chess) or continuous (they do not
end—a robot remains in the world). In continuous problems it often makes sense to
discount behavior that is far from the present, episodic problems typically do not
bother with a discount factor—a win is a win.
Environments can be deterministic (many board games are deterministic—boards
don’t move) or stochastic (many robotic worlds are stochastic—the world around a
robot moves). The action space can be discrete (many games have a discrete action
space—a piece either moves to a square or it does not) or continuous (typical robot
joints move continuously—a car can move any distance, an arm can rotate over a
continuous angle).
The goal in reinforcement learning is to find for all states the best actions (the
policy, 𝜋) that maximizes the cumulative future reward. The policy function is
used in two different ways. In a discrete environment 𝑎 = 𝜋(𝑠) the policy function
typically returns for each state the best action in that state. Alternatively the policy
returns the value of each action in each state, out of which the argmax function
can find the action with the highest value.
The optimal policy can be found by finding the maximal value of a state. The
value function 𝑉 (𝑠) returns the expected reward for a state. When the transition
function 𝑇𝑎 (𝑠, 𝑠′) is present, the agent can use Bellman’s equation, or a dynamic
programming method to recursively traverse the behavior space. Value iteration is
one such dynamic programming method, it traverses all actions of all states, backing
up reward values, until the value function stops changing. Planning methods follow
the principle of dynamic programming. The state-action value 𝑄(𝑠, 𝑎) determines
the value of an action of a state.
Bellman’s equation calculates the value of a state by calculating the value of
successor states. Accessing successor states (by following the action and transition)
is also called expanding a successor state. In a tree diagram successor states are
called child nodes, and expanding is a downward action. Backpropagating the reward
values to the parent node is a movement upward in the tree.
Methods where the agent makes use of the transition model are called model-
based methods. When the agent does not use the transition model, they are model-
free methods.
In many situations the learning agent does not have access to the transition model
of the environment, and planning methods cannot be used by the agent. Value-based
model-free methods can find an optimal policy by using only irreversible actions,
sampling the environment to find the value of the actions.
A major determinant in model-free reinforcement learning is the exploration/ex-
ploitation trade-off, or how much of the information that has been learned from the
environment is used in choosing actions to sample. We discussed the advantages
of exploiting the latest knowledge in settings where environment actions are very
costly, such as clinical trials.
Further Reading
Exercises
We will end with questions on key concepts, and with programming exercises to build up more experience.
Questions
The questions below are meant to refresh your memory, and should be answered
with yes, no, or short answers of one or two sentences.
1. In reinforcement learning the agent can determine which training examples are
generated next through its action. Why is this beneficial? What is a potential
problem?
2. What is Grid world?
3. Which five elements does an MDP have to model reinforcement learning prob-
lems?
4. In a tree diagram, is successor selection of behavior up or down?
5. In a tree diagram, is learning values through backpropagation up or down?
6. What is 𝜏?
7. What is 𝜋(𝑠)?
8. What is 𝑉 (𝑠)?
9. What is 𝑄(𝑠, 𝑎)?
10. What is dynamic programming?
11. What is recursion?
12. Do you know a dynamic programming method to determine the value of a state?
13. Is an action in an environment reversible for the agent?
14. Mention two typical application areas of reinforcement learning.
15. Is the action space of games typically discrete or continuous?
16. Is the action space of robots typically discrete or continuous?
17. Is the environment of games typically deterministic or stochastic?
18. Is the environment of robots typically deterministic or stochastic?
19. What is the goal of reinforcement learning?
20. Which of the five MDP elements is not used in episodic problems?
21. Which model or function is meant when we say “model-free” or “model-based”?
22. What type of action space and what type of environment are suited for value-
based methods?
23. Why are value-based methods used for games and not for robotics?
24. Name two basic Gym environments.
Exercises
There is an even better way to learn about deep reinforcement learning than reading
about it, and that is to perform experiments yourself, to see the learning processes
unfold before your own eyes. The following exercises are meant as starting points
for your own discoveries in the world of deep reinforcement learning.
Consider using Gym to implement these exercises. Section 2.2.4.1 explains how
to install Gym.
1. Implement Q-learning for Taxi, including the procedure to derive the best policy
for the Q-table. Go to Sect. 2.2.4.5 and implement it. Print the Q-table, to see
the values on the squares. You could print a live policy as the search progresses.
Try different values for 𝜖, the exploration rate. Does it learn faster? Does it keep
finding the optimal solution? Try different values for 𝛼, the learning rate. Is it
faster?
2. Implement SARSA, the code is in Listing 2.7. Compare your results to Q-learning,
can you see how SARSA chooses different paths? Try different 𝜖 and 𝛼.
3. How large can problems be before convergence starts to take too long?
4. Run Cartpole with the greedy policy computed by value iteration. Can you make
it work? Is value iteration a suitable algorithm for Cartpole? If not, why do you
think it is not?
Chapter 3
Deep Value-Based Reinforcement Learning
Core Concepts
Core Problem
Core Algorithm
End-to-end Learning
Before the advent of deep learning, traditional reinforcement learning was used mostly on small problems and puzzles, such as the supermarket example, whose state spaces fit in the small memories of yesterday’s computers. Reward shaping, in
the form of domain-specific heuristics, was used to shoehorn the problem into a
computer, for example, in chess and checkers [124, 353, 661]. Deep learning changed
this situation, and reinforcement learning is now used on large and high-dimensional
problems.
In the field of supervised learning, a yearly competition had created years of
steady progress in which the accuracy of image classification had steadily improved.
Progress was driven by the availability of ImageNet, a large database of labeled
images [236, 191], by increases in computation power through GPUs, and by steady algorithmic improvements.

Fig. 3.2 Example Game from the Arcade Learning Environment [71]

In 2013, at a deep learning workshop, researchers presented a reinforcement learning algorithm that could play 1980s Atari video games just
by training on the pixel input of the video screen (Fig. 3.2). The algorithm used a
combination of deep learning and Q-learning, and was named Deep Q-Network, or
DQN [521, 522]. An illuminating video of how it learned to play the game Breakout
is here.1 This was a breakthrough for reinforcement learning. Many researchers
at the workshop could relate to this achievement, perhaps because they had spent
hours playing Space Invaders, Pac-Man and Pong themselves when they were
younger. Two years after the presentation at the deep learning workshop a longer
article appeared in the journal Nature in which a refined and expanded version of
DQN was presented (see Fig. 3.1 for the journal cover).
Why was this such a momentous achievement? Besides the fact that the problem
that was solved was easily understood, true eye-hand coordination of this com-
plexity had not been achieved by a computer before; furthermore, the end-to-end
learning from pixel to joystick implied artificial behavior that was close to how
humans play games. DQN essentially launched the field of deep reinforcement learn-
ing. For the first time the power of deep learning had been successfully combined
with behavior learning, for an imaginative problem.
A major technical challenge that was overcome by DQN is the instability of
the deep reinforcement learning process. In fact, there were convincing theoretical
analyses at the time that this instability was fundamental, and it was generally
assumed that it would be next to impossible to overcome [48, 282, 786, 742], since
the target of the loss-function depended on the convergence of the reinforcement
learning process itself. By the end of this chapter we will have covered the problems
of convergence and stability in reinforcement learning. We will have seen how
DQN addresses these problems, and we will also have discussed some of the many
further solutions that were devised after DQN.
But let us first have a look at the kind of new, high-dimensional, environments
that were the cause of these developments.
In the previous chapter, Grid worlds and mazes were introduced as basic sequential
decision making problems in which exact, tabular, reinforcement learning methods
work well. The complexity of a problem is related to the number of unique states
that a problem has, or how large the state space is. Tabular methods work for small
problems, where the entire state space fits in memory. This is for example the case
with linear regression, which has only one variable 𝑥 and two parameters 𝑎 and
𝑏, or the Taxi problem, which has a state space of size 500. In this chapter we will
be more ambitious and introduce various games, most notably Atari arcade games.
The state space of Atari video input is 210 × 160 pixels of 256 RGB color values
= 256^{33600}, see Sect. B.1.2, where we discuss the curse of dimensionality.
1 https://www.youtube.com/watch?v=TmPfTpjtdgg
There is a qualitative difference between small (500) and large (256^{33600}) prob-
lems. For small problems the policy can be learned by loading all states of a problem
in memory. States are identified individually, and each has its own best action, that
we can try to find. Large problems, in contrast, do not fit in memory, the policy
cannot be memorized, and states are grouped together based on their features (see
Sect. B.1.3). A parameterized network maps states to actions and values; states are
no longer individually identifiable in a lookup table.
When deep learning methods were introduced in reinforcement learning, larger
problems than before could be solved. Let us have a look at those problems.
Learning actions directly from high-dimensional sound and vision inputs is one of
the long-standing challenges of artificial intelligence. To stimulate this research, in
2012 a test-bed was created designed to provide challenging reinforcement learning
tasks. It was called the Arcade Learning Environment, or ALE [71], and it was based
on a simulator for 1980s Atari 2600 video games. Figure 3.3 shows a picture of a
distinctly retro Atari 2600 gaming console.
Among other things ALE contains an emulator of the Atari 2600 console. ALE
presents agents with a high-dimensional2 visual input (210 × 160 RGB video at
60 Hz, or 60 images per second) of tasks that were designed to be interesting and
challenging for human players (Fig. 3.2 showed an example of such a game and
Fig. 3.4 shows a few more). The game cartridge ROM holds 2-4 kB of game code,
while the console random-access memory is small, just 128 bytes (really, just 128
bytes, although the video memory is larger, of course). The actions can be selected
2 That is, high dimensional for machine learning. 210 × 160 pixels is not exactly high-definition
video quality.
Fig. 3.4 Screenshots of 4 Atari Games (Breakout, Pong, Montezuma’s Revenge, and Private Eye)
via a joystick (9 directions), which has a fire button (fire on/off), giving 18 actions
in total.
The Atari games provide challenging eye-hand coordination and reasoning tasks,
that are both familiar and challenging to humans, providing a good test-bed for
learning sequential decision making.
Atari games, with high-resolution video input at high frame rates, are an entirely
different kind of challenge than Grid worlds or board games. Atari is a step closer to
a human environment in which visual inputs should quickly be followed by correct
actions. Indeed, the Atari benchmark called for very different agent algorithms,
prompting the move from tabular algorithms to algorithms based on function
approximation and deep learning. The ALE has become a standard benchmark in
deep reinforcement learning research. ALE is included in the Gym environment.
ALE has fulfilled its goal of stimulating research into deep end-to-end reinforcement
learning well.
Real-time strategy games provide an even greater challenge than simulated 1980s
Atari consoles. Games such as StarCraft (Fig. 1.6) [572], and Capture the Flag [372]
have very large state spaces. These are games with large maps, many players, many
pieces, and many types of actions. The state space of StarCraft is estimated at
10^{1685} [572], more than 1500 orders of magnitude larger than Go (10^{170}) [539, 785] and more than 1635 orders of magnitude larger than chess (10^{47}) [354], which
is played on a relatively small 8 × 8 board. Most real time strategy games are
multi-player, non-zero-sum, imperfect information games that also feature high-
dimensional pixel input, reasoning, and team collaboration. The action space is
stochastic and is a mix of discrete and continuous actions.
Despite the challenging nature, impressive achievements have been reported
recently in three games where human performance was matched or even ex-
ceeded [812, 80, 372], see also Chap. 7.
We will now turn to agent algorithms for solving large sequential decision problems.
The main challenge of this section is to create an agent algorithm that can learn a
good policy by interacting with the world—with a large problem, not a toy problem.
From now on, our agents will be deep learning agents.
Let us look at deep reinforcement learning algorithms. How can we use deep
learning for high-dimensional and large sequential decision making environments?
How can tabular value and policy functions 𝑉, 𝑄, and 𝜋 be transformed into 𝜃
parameterized functions 𝑉 𝜃 , 𝑄 𝜃 , and 𝜋 𝜃 ?
Deep supervised learning uses a static dataset to approximate a function, and loss-
function targets (the labels) are therefore stable. However, convergence in deep
reinforcement learning is based on the Q-learning bootstrapping process, and we
lack a static dataset with ground truths; our data items are generated dynamically,
and our bootstrapped loss-function targets move. The movement is influenced by
the same policy function that the convergence process is trying to learn.
It has taken quite some effort to find deep learning algorithms that converge
on these moving targets. Let us try to understand in more detail how the super-
vised methods have to be adapted in order to work in reinforcement learning, by
comparing three algorithmic structures.
Listing 3.1 shows pseudocode for a typical supervised deep learning training algo-
rithm, consisting of an input dataset, a forward pass that calculates the network
output, a loss computation, and a backward pass. See Appendix B or [279] for more
details. We see that the code consists of a double loop: the outer loop controls
the training epochs. Epochs consist of forward approximation of the target value
using the parameters, computation of the gradient, and backward adjusting of the
parameters with the gradient. In each epoch the inner loop serves all examples
of the static dataset to the forward computation of the output value, the loss and
the gradient computation, so that the parameters can be adjusted in the backward
pass. The dataset is static, and all that the inner loop does is deliver the samples
to the backpropagation algorithm. Note that each sample is independent of the
other, samples are chosen with equal probability. After an image of a white horse is
sampled, the probability that the next image is of a black grouse or a blue moon is
equally (un)likely.
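For comparison with what follows, here is a sketch of such a double-loop supervised training procedure in PyTorch; the model, dataset, and hyperparameters are placeholders, not the book's Listing 3.1:

import torch
from torch import nn
from torch.utils.data import DataLoader

def train(model, dataset, epochs=10, lr=1e-3):
    loader = DataLoader(dataset, batch_size=64, shuffle=True)  # independent, uniformly sampled examples
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(epochs):          # outer loop: training epochs
        for x, y in loader:              # inner loop: serve all samples of the static dataset
            pred = model(x)              # forward pass
            loss = loss_fn(pred, y)      # loss against stable ground-truth labels
            optimizer.zero_grad()
            loss.backward()              # backward pass: compute the gradient
            optimizer.step()             # adjust the parameters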
Let us now look at Q-learning. Reinforcement learning chooses the training exam-
ples differently. For convergence of algorithms such as Q-learning, the selection rule
must guarantee that eventually all states will be sampled by the environment [830].
For large problems, this is not the case; this condition for convergence to the value
function does not hold.
Listing 3.2 shows the short version of the bootstrapping tabular Q-learning
pseudocode from the previous chapter. As in the previous deep learning algorithm,
the algorithm consists of a double loop. The outer loop controls the Q-value convergence episodes, and each episode consists of a single trace of (time) steps from the
start state to a terminal state. The Q-values are stored in a Python-array indexed
by 𝑠 and 𝑎, since Q is the state-action value. Convergence of the Q-values is as-
sumed to have occurred when enough episodes have been sampled. The Q-formula
shows how the Q-values are built up by bootstrapping on previous values, and how
Q-learning is learning off-policy, taking the max value of an action.
A difference with the supervised learning is that in Q-learning subsequent
samples are not independent. The next action is determined by the current policy,
and will most likely be the best action of the state (𝜖-greedy). Furthermore, the next
state will be correlated to the previous state in the trajectory. After a state of the
ball in the upper left corner of the field has been sampled, the next sample will
with very high probability also be of a state where the ball is close to the upper left
corner of the field. Training can be stuck in local minima.
To summarize, there are three problems with our naive deep Q-learner. First, con-
vergence to the optimal Q-function depends on full coverage of the state space.
Second, there is a strong correlation between subsequent training samples. Third,
the loss function of gradient descent literally has a moving target. Let us have a
closer look at these three problems.
3.2.2.1 Coverage
Proofs that algorithms such as Q-learning converge to the optimal policy depend
on the assumption that all state-action pairs are sampled. Otherwise, the algorithms
will not converge to an optimal action value for each state. Clearly, in large state
spaces where not all states are sampled, this situation does not hold.
3.2.2.2 Correlation
The coverage of the training set is also problematic when states are not uniformly
distributed and training samples are distributed differently between training and
test time. This happens, for example, when a chess program has been trained on a
particular opening, and the opponent plays a different one. When test examples are
different from training examples, then generalization will be bad. This problem is
related to out-of-distribution training, see for example [484].
In supervised learning, data samples (states) are independent and are assumed
to be distributed evenly over the state space and over the training and test set.
In the static dataset of images, there is no relation between subsequent images,
and examples are independently sampled. Each image is separate and unrelated to
another image.
In reinforcement learning a sequence of states is generated in an agent/environ-
ment loop. The states differ only by a single action, one move or one stone, all other
features of the states remaining unchanged, and thus, the value of subsequent sam-
ples are correlated, which may result in biased training. The training may cover only a part of the state space, especially when greedy action selection increases the tendency to select a small set of actions and states. The bias can result in the so-called specialization trap (when there is too much exploitation, and too little exploration). Correlation between subsequent states contributes to the low coverage that we discussed before, reducing convergence towards the optimal Q-function, increasing the probability of local optima and feedback loops.

The gradient of the loss function that is minimized in this setting is

\nabla_{\theta_i} \mathcal{L}_i(\theta_i) = \mathbb{E}_{s,a \sim \rho(\cdot);\, s' \sim \mathcal{E}} \Big[ \big( r + \gamma \max_{a'} Q_{\theta_{i-1}}(s', a') - Q_{\theta_i}(s, a) \big) \nabla_{\theta_i} Q_{\theta_i}(s, a) \Big]

where 𝜌 is the behavior distribution and E the Atari emulator. Further details are in [521].
3.2.2.3 Convergence
Deadly Triad
Multiple works [48, 282, 786] showed that a combination of off-policy reinforcement
learning with nonlinear function approximation (such as deep neural networks)
could cause Q-values to diverge. Sutton and Barto [742] further analyze three
elements for divergent training: function approximation, bootstrapping, and off-
policy learning, focusing on the effect of state identification. Together, they are called the deadly triad.
Function approximation may attribute values to states inaccurately. In contrast
to exact tabular methods, that are designed to identify individual states exactly,
neural networks are designed to recognize individual features of states. These features can be
shared by different states, and values attributed to those features are shared also by
other states. Function approximation may thus cause misidentification of states,
and reward values and Q-values that are not assigned correctly. In a reinforcement
learning process where new states are generated on the fly, divergent Q-values
may cause loops or other forms of instability, as we just discussed. If the accuracy
of the approximation of the true function values is good enough, then states may
be identified well enough to reduce or prevent divergent training processes and
loops [522].
Bootstrapping of values builds up new values on the basis of older values. This
occurs in Q-learning and temporal-difference learning where the current value
depends on the previous value. Bootstrapping increases the efficiency of the training
because values do not have to be calculated from the start. However, errors or
biases in initial values may persist, and even spill over to other states as values are propagated by the bootstrap updates.
Despite the deadly triad, practical solutions for stable deep reinforcement learning are indeed possible, and they have improved our understanding of the circumstances that influence stability and convergence.
Let us have a closer look at the methods that are used to achieve stable deep
reinforcement learning.
As mentioned in the introduction of this chapter, in 2013 Mnih et al. [521, 522]
published their work on end-to-end reinforcement learning in Atari games.
Let us look in more detail at how stable learning was achieved in their Deep
Q-Network algorithm (DQN). The original focus of DQN is on breaking correlations
between subsequent states, and also on slowing down changes to parameters in
the training process to improve stability. The DQN algorithm has two methods to
achieve this: (1) experience replay and (2) infrequent weight updates. We will first
look at experience replay.
Experience Replay
Experience replay stores the transitions (𝑠, 𝑎, 𝑟, 𝑠′) that the agent generates in a replay buffer, and trains the Q-network on minibatches that are sampled at random from this buffer, which breaks the temporal correlation between subsequent training samples. DQN treats all examples equally, old and recent alike. A form of importance
sampling might differentiate between important transitions, as we will see in the
next section.
Note that, curiously, training by experience replay is a form of off-policy learning,
since the target parameters are different from those used to generate the sample.
Off-policy learning is one of the three elements of the deadly triad, and we find that
stable learning can actually be improved by a special form of one of its causes.
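A replay buffer itself is a simple data structure; a minimal sketch (real DQN implementations add preprocessing and more bookkeeping) is:

import random
from collections import deque

class ReplayBuffer:
    """Uniform experience replay buffer: store transitions, sample random minibatches."""
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)      # oldest transitions are discarded when full

    def push(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def sample(self, batch_size=32):
        # uniform random sampling breaks the temporal correlation between samples
        return random.sample(self.buffer, batch_size)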
Experience replay works well in Atari [522]. However, further analysis of replay
buffers has pointed to possible problems. Zhang et al. [874] study the deadly triad
with experience replay, and find that larger networks resulted in more instabilities,
but also that longer multi-step returns yielded fewer unrealistically high reward
values. In Sect. 3.2.4 we will see many further enhancements to DQN-like algorithms.
To get some hands on experience with DQN, we will now have a look at how DQN
can be used to play the Atari game Breakout.
The field of deep reinforcement learning is an open field where most codes of
algorithms are freely shared on GitHub and where test environments are available.
The most widely used environment is Gym, in which benchmarks such as ALE
and MuJoCo can be found, see also Appendix C. The open availability of the
software allows for easy replication, and, importantly, for further improvement of
the methods. Let us have a closer look at the code of DQN, to experience how it
works.
import gym
Listing 3.4 Running Stable Baseline PPO on the Gym Cartpole Environment
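A script in the spirit of Listing 3.4, sketched here with the stable_baselines3 package (an assumption; the exact package and version used in the book may differ), looks roughly like this:

import gym
from stable_baselines3 import PPO

env = gym.make("CartPole-v1")
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)       # train the agent

obs = env.reset()
for _ in range(500):                      # watch the trained agent
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    env.render()
    if done:
        obs = env.reset()
env.close()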
ALE and Gym are designed to get you started quickly with Atari problems. The
DQN papers come with source code. The original DQN code from [522] is available
at Atari DQN.5 This code is the original code, in the programming language Lua,
which may be interesting to study, if you are familiar with this language. A modern
reference implementation of DQN, with further improvements, is in the (stable)
baselines.6 The RL Baselines Zoo even provides a collection of pretrained agents, at
Zoo [602, 269].7 The Network Zoo is especially useful if your desired application
happens to be in the Zoo, since training often takes a long time.
Listing 3.5 Deep Q-Network Atari Breakout example with Stable Baselines
After having studied tabular Q-learning on Taxi in Sect. 2.2.4.5, let us now see
how the network-based DQN works in practice. Listing 3.5 illustrates how easy
it is to use the Stable Baselines implementation of DQN on the Atari Breakout
environment. (See Sect. 2.2.4.1 for installation instructions of Gym.)
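As a rough sketch of what such a script looks like with the stable_baselines3 implementation of DQN (an assumption about the library version; the Atari extras of the package must be installed), consider:

from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# standard Atari preprocessing: frame skipping, resizing, and frame stacking
env = make_atari_env("BreakoutNoFrameskip-v4", n_envs=1, seed=0)
env = VecFrameStack(env, n_stack=4)

model = DQN("CnnPolicy", env, buffer_size=100_000, verbose=1)
model.learn(total_timesteps=1_000_000)    # training takes a long time on Atari
model.save("dqn_breakout")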
After you have run the DQN code and seen that it works, it is worthwhile to study
how the code is implemented. Before you dive into the Python implementation of
Stable Baselines, let’s look at the pseudocode to refresh how the elements of DQN
work together. See Listing 3.6. In this pseudocode we follow the 2015 version of
DQN [522]. (The 2013 version of DQN did not use the target network [521].)
DQN is based on Q-learning, with as extra a replay buffer and a target network
to improve stability and convergence. The core of Q-learning is the Q-table, the
core of DQN is the Q-network, both are implementations of the Q-function, which
holds the current action-values. First, at the start of the code, the replay buffer is
def dqn():
    initialize replay_buffer empty
    initialize Q network with random weights
    initialize Q_target network with random weights
    set s = s0
    while not convergence:
        # DQN in Atari uses preprocessing; not shown
        epsilon-greedy select action a in argmax(Q(s, a))     # action selection depends on Q (moving target)
        s', reward = execute action a in environment
        append (s, a, reward, s') to replay_buffer
        s = s'
        sample random minibatch from replay_buffer            # breaks temporal correlation
        compute targets: reward (when terminal) or reward + gamma * max_a' Q_target(s', a')
        do gradient descent step on Q                         # loss function uses the Q_target network
        every C steps: copy the weights of Q into Q_target    # infrequent weight updates
First, at the start of the code, the replay buffer is initialized to empty, and the weights of the Q-network and the separate Q target network are initialized. The state $s$ is set to the start state $s_0$.
Next is the optimization loop, which runs until convergence. At the start of each
iteration an action is selected at the state 𝑠, following an 𝜖-greedy approach. The
action is executed in the environment, and the new state and the reward are stored
in a tuple in the replay buffer. Then, we train the Q-network. A minibatch is sampled
randomly from the replay buffer, and one gradient descent step is performed. For
this step the loss function is calculated with the separate Q-target network 𝑄ˆ 𝜃 , that
is updated less frequently than the primary Q-network 𝑄 𝜃 . In this way the loss
function
$$\mathcal{L}_t(\theta_t) = \mathbb{E}_{s,a\sim\rho(\cdot)}\Big[\big(\mathbb{E}_{s'\sim\mathcal{E}}[\,r + \gamma \max_{a'} \hat{Q}_{\theta_{t-1}}(s', a') \mid s, a\,] - Q_{\theta_t}(s, a)\big)^2\Big]$$
uses a bootstrap target that changes only slowly, which stabilizes training.
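As an illustration of this loss, a minimal PyTorch-style sketch of one gradient descent step, assuming q_net and target_net are torch.nn.Module Q-networks over discrete actions and the batch tensors come from the replay buffer:

import torch
import torch.nn.functional as F

def dqn_update(q_net, target_net, optimizer, batch, gamma=0.99):
    s, a, r, s_next, done = batch                            # minibatch sampled from the replay buffer
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)     # Q_theta(s, a)
    with torch.no_grad():                                    # target uses the frozen target network
        max_q_next = target_net(s_next).max(dim=1).values    # max_a' Qhat(s', a')
        target = r + gamma * (1 - done) * max_q_next         # R when terminal, bootstrapped otherwise
    loss = F.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()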
Conclusion
In summary, DQN was able to successfully learn end-to-end behavior policies for
many different games (although similar and from the same benchmark set). Minimal
prior knowledge was used to guide the system, and the agent only got to see the
pixels and the game score. The same network architecture and procedure was used
on each game; however, a network trained for one game could not be used to play
another game.
The DQN achievement was an important milestone in the history of deep rein-
forcement learning. The main problems that were overcome by Mnih et al. [521]
were training divergence and learning instability.
Most Atari 2600 games require eye-hand reflexes. The games have few strategic elements, credit assignment is mostly short term, and they can be learned with a surprisingly simple neural network. Most Atari games are more about immediate reflexes than about longer-term reasoning. In this sense, the problem of playing Atari well is not unlike an image categorization problem: both amount to finding the right response to an input consisting of a set of pixels. Mapping pixels to categories is not that different from mapping pixels to joystick actions (see also the observations in [399]).
The Atari results have stimulated much subsequent research. Many blogs have been written on reproducing the results, which is not a straightforward task, requiring the fine-tuning of many hyperparameters [58]. The DQN results have spawned much activity among reinforcement learning researchers to improve training stability and convergence further, and many refinements have been devised, some of which we review in this section.
Many of the topics that are covered by the enhancements are older ideas that
work well in deep reinforcement learning. DQN applies random sampling of its
replay buffer, and one of the first enhancements was prioritized sampling [665].
It was found that DQN, being an off-policy algorithm, typically overestimates
action values (due to the max operation, Sect. 2.2.4.4). Double DQN addresses
overestimation [799], and dueling DDQN introduces the advantage function to
standardize action values [829]. Other approaches look at variance in addition to
expected value, the effect of random noise on exploration was tested [253], and
distributional DQN showed that networks that use probability distributions work
better than networks that only use single point expected values [70].
In 2017 Hessel et al. [334] performed a large experiment that combined seven important enhancements. They found that the enhancements worked well together. The paper has become known as the Rainbow paper, since the major graph, showing the cumulative performance of the seven enhancements over 57 Atari games, is
multi-colored (Fig. 3.5). Table 3.1 summarizes the enhancements, and this section
provides an overview of the main ideas. The enhancements were tested on the same
benchmarks (ALE, Gym), and most algorithm implementations can be found on the
OpenAI Gym GitHub site in the baselines.11
3.2.4.1 Overestimation
Van Hasselt et al. introduce double deep Q-learning (DDQN) [799]. DDQN is based on the observation that Q-learning may overestimate action values; on the Atari 2600 games, DQN suffers from substantial overestimations. Remember that DQN uses Q-learning, and that the max operation in the Q-learning target is what causes the overestimation of the Q-values. To resolve this issue, DDQN uses the Q-network to choose the action, but uses the separate target Q-network to evaluate it.
Let us compare the training target for DQN with the training target for DDQN. In standard notation, with $\theta_t$ the online weights and $\phi_t$ the second set of weights used by DDQN, they are
$$Y_t^{\text{DQN}} = r_{t+1} + \gamma \max_a Q(s_{t+1}, a; \theta_t)$$
$$Y_t^{\text{DDQN}} = r_{t+1} + \gamma\, Q\big(s_{t+1}, \operatorname*{arg\,max}_a Q(s_{t+1}, a; \theta_t); \phi_t\big)$$
11 https://github.com/openai/baselines
The DQN target uses the same set of weights $\theta_t$ twice, for selection and evaluation; the DDQN target uses a separate set of weights $\phi_t$ for evaluation, preventing overestimation due to the max operator. Updates are assigned randomly to either set of weights.
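A sketch of the corresponding DDQN target computation in PyTorch, assuming the same two Q-networks as above (the online network selects the action, the other network evaluates it):

import torch

def ddqn_target(q_net, target_net, r, s_next, done, gamma=0.99):
    with torch.no_grad():
        a_star = q_net(s_next).argmax(dim=1, keepdim=True)        # select with the online weights
        q_eval = target_net(s_next).gather(1, a_star).squeeze(1)  # evaluate with the other weights
    return r + gamma * (1 - done) * q_eval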
Earlier Van Hasselt et al. [313] introduced the double Q learning algorithm in
a tabular setting. The later paper shows that this idea also works with a large
deep network. They report that the DDQN algorithm not only reduces the over-
estimations but also leads to better performance on several games. DDQN was
tested on 49 Atari games and achieved about twice the average score of DQN with
the same hyperparameters, and four times the average DQN score with tuned
hyperparameters [799].
Prioritized Experience Replay

DQN samples uniformly over the entire history in the replay buffer, whereas Q-learning uses only the most recent (and important) state. It stands to reason to see whether a solution in between these two extremes performs well.
Prioritized experience replay, or PEX, is such an attempt. It was introduced by Schaul et al. [665]. In the Rainbow paper PEX is combined with DDQN, and, as we can see in Fig. 3.5, the blue line (with PEX) indeed outperforms the purple line.
In DQN experience replay lets agents reuse examples from the past, although
experience transitions are uniformly sampled, and actions are simply replayed
at the same frequency that they were originally experienced, regardless of their
significance. The PEX approach provides a framework for prioritizing experience.
Important actions are replayed more frequently, and therefore learning efficiency
is improved. As a measure of importance, standard proportional prioritized replay is used, with the absolute TD error to prioritize actions. Prioritized replay is used
widely in value-based deep reinforcement learning. The measure can be computed
in the distributional setting using the mean action values. In the Rainbow paper all
distributional variants prioritize actions by the Kullback-Leibler loss [334].
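A sketch of proportional prioritized sampling with absolute TD errors, assuming a simple array-backed buffer (the exponent alpha and the importance-sampling correction follow the usual PEX formulation; the exact values are illustrative):

import numpy as np

def sample_prioritized(td_errors, batch_size, alpha=0.6, eps=1e-5):
    # proportional prioritization: p_i proportional to (|delta_i| + eps)^alpha
    priorities = (np.abs(td_errors) + eps) ** alpha
    probs = priorities / priorities.sum()
    idx = np.random.choice(len(td_errors), size=batch_size, p=probs)
    # importance-sampling weights correct for the non-uniform sampling
    weights = (len(td_errors) * probs[idx]) ** -1.0
    return idx, weights / weights.max()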
Advantage Function
The original DQN uses a single neural network as function approximator; DDQN
(double deep Q-network) uses a separate target Q-Network to evaluate an action.
Dueling DDQN [829], also known as DDDQN, improves on this architecture by using two separate estimators: a value function and an advantage function.
Advantage functions are related to the actor-critic approach (see Chap. 4). An
advantage function computes the difference between the value of an action and the
value of the state. The function standardizes values on a baseline for the actions
of a state [292]. Advantage functions provide better policy evaluation when many
actions have similar values.
Distributional Methods

The original DQN learns a single value, which is the estimated mean of the state
value. This approach does not take uncertainty into account. To remedy this, distri-
butional Q-learning [70] learns a categorical probability distribution of discounted
returns instead, increasing exploration. Bellemare et al. design a new distributional
algorithm which applies Bellman’s equation to the learning of distributions, a
method called distributional DQN. Also Moerland et al. [525, 526] look into the
distributional perspective.
Interestingly, a link between the distributional approach and biology has been
reported. Dabney et al. [174] showed correspondence between distributional rein-
forcement learning algorithms and the dopamine levels in mice, suggesting that
the brain represents possible future rewards as a probability distribution.
Noisy DQN
Another distributional method is noisy DQN [253]. Noisy DQN uses stochastic net-
work layers that add parametric noise to the weights. The noise induces randomness
in the agent’s policy, which increases exploration. The parameters that govern the
noise are learned by gradient descent together with the remaining network weights.
In their experiments the standard exploration heuristics for A3C (Sect. 4.2.4), DQN,
and dueling agents (entropy reward and 𝜖-greedy) were replaced with NoisyNet.
The increased exploration yields substantially higher scores on Atari (the dark red line in Fig. 3.5).
3.3 Atari 2600 Environments

In their original 2013 workshop paper, Mnih et al. [521] achieved human-level play for some of the games. Training was performed on 50 million frames in total on seven Atari games. In this 2013 work [521] the neural network performed better than an expert human player on Breakout, Enduro, and Pong. On Seaquest, Q*Bert, and Space Invaders, performance was far below that of a human; in these games a strategy must be found that extends over longer time periods. In their follow-up journal article two years later, they reported results for 49 of the 57 games in ALE [522], and performed better than human-level play on 29 of these 49 games.
Some of the games proved difficult, notably games that required longer-range
planning, where long stretches of the game do not give rewards, such as in Montezuma's Revenge, where the agent has to walk long distances, and pick up a key
to reach new rooms or new levels. In reinforcement learning terms, delayed credit
assignment over long periods is hard.
To close the Atari story, we discuss two final algorithms. Of the many value-based model-free deep reinforcement learning algorithms that have been developed, we first highlight R2D2 [396], because of its performance. R2D2 is not part of the Rainbow experiments, but is a significant further improvement over these algorithms. R2D2 stands for Recurrent Replay Distributed DQN. It is built
upon prioritized distributed replay and 5-step double Q-learning. Furthermore, it
uses a dueling network architecture and an LSTM layer after the convolutional
stack. Details about the architecture can be found in [829, 294]. The LSTM uses
the recurrent state to exploit long-term temporal dependencies, which improve
performance. The authors also report that the LSTM allows for better representation
learning. R2D2 achieved good results on all 57 Atari games [396].
A more recent benchmark achievement has been published as Agent57. Agent57
is the first program that achieves a score higher than the human baseline on all 57
Atari 2600 games from ALE. It uses a controller that adapts the long and short-term
behavior of the agent, training for a range of policies, from very exploitative to very
explorative, depending on the game [46].
Conclusion
The field has come a long way since the introduction of the replay buffer in DQN. The performance of value-based model-free deep reinforcement learning has improved greatly, and super-human performance has now been achieved in all 57 Atari games of ALE.
Many enhancements that improve coverage, correlation, and convergence have
been developed. The presence of a clear benchmark was instrumental for progress: researchers could clearly see which ideas worked and why. The earlier
mazes and navigation games, OpenAI’s Gym [108], and especially the ALE [71],
have enabled this progress.
In the next chapter we will look at the other main branch of model-free reinforce-
ment learning: policy-based algorithms. We will see how they work, and that they
are well suited for a different kind of application, with continuous action spaces.
This has been the first chapter in which we have seen deep reinforcement learning algorithms learn complex, high-dimensional tasks. We end with a summary and
pointers to the literature.
Summary
The methods that have been discussed in the previous chapter were exact, tabular
methods. Most interesting problems have large state spaces that do not fit into
memory. Feature learning identifies states by their common features. Function
values are not calculated exactly, but are approximated, with deep learning.
Much of the recent success of reinforcement learning is due to deep learning
methods. For reinforcement learning, however, a problem arises when states are approximated. Since in reinforcement learning the next state is determined by the previous state, algorithms may get stuck in local minima or run in circles when approximated values are shared between different states.
Another problem is training convergence. Supervised learning has a static dataset
and training targets are also static. In reinforcement learning the loss function
targets depend on the parameters that are being optimized. This causes further
instability. DQN caused a breakthrough by showing that with a replay buffer and a
separate, more stable, target network, enough stability could be found for DQN to
learn how to play Atari arcade games just from looking at the video screen.
Many further improvements to increase stability through diversity have been
found. The Rainbow paper implements some of these improvements, and finds that
they are complementary, and together achieve very strong play.
The availability of compute power (GPUs) and software suites (TensorFlow/Keras, PyTorch) has been a major force in deep reinforcement learning. Also the availability
of labeled data sets (MNIST, ImageNet) and environments (Gym) played a crucial
role in the progress that deep reinforcement learning has made in a relatively short
amount of time.
Further Reading
Exercises
We will end this chapter with some questions to review the concepts that we have
covered. Next are programming exercises to get some more exposure on how to
use the deep reinforcement learning algorithms in practice.
Questions
Below are some questions to check your understanding of this chapter. Each question
is a closed question where a simple, single sentence answer is expected.
1. What is Gym?
2. What are the Stable Baselines?
3. The loss function of DQN uses the Q-function as target. What is a consequence?
4. Why is the exploration/exploitation trade-off central in reinforcement learning?
5. Name one simple exploration/exploitation method.
6. What is bootstrapping?
7. Describe the architecture of the neural network in DQN.
8. Why is deep reinforcement learning more susceptible to unstable learning than
deep supervised learning?
9. What is the deadly triad?
10. How does function approximation reduce stability of Q-learning?
11. What is the role of the replay buffer?
12. How can correlation between states lead to local minima?
13. Why should the coverage of the state space be sufficient?
14. What happens when deep reinforcement learning algorithms do not converge?
15. How large is the state space of chess estimated to be: $10^{47}$, $10^{170}$, or $10^{1685}$?
16. How large is the state space of Go estimated to be: $10^{47}$, $10^{170}$, or $10^{1685}$?
17. How large is the state space of StarCraft estimated to be: $10^{47}$, $10^{170}$, or $10^{1685}$?
18. Why is the Rainbow paper so named, and what is the main message?
19. Mention three Rainbow improvements that are added to DQN.
Exercises
Let us now start with some exercises. If you have not done so already, install
Gym, PyTorch12 or TensorFlow and Keras (see Sect. 2.2.4.1 or go to the TensorFlow
page).13 Be sure to check the right versions of Python, Gym, TensorFlow, and the
Stable Baselines to make sure that they work well together. The exercises below
are designed to be done with Keras.
1. DQN Implement DQN from the Stable Baselines on Breakout from Gym. Turn
off Dueling and Priorities. Find out what the values are for 𝛼, the learning rate,
for 𝜖, the exploration rate, what kind of neural network architecture is used,
what the replay buffer size is, and how frequently the target network is updated.
2. Hyperparameters Change all those hyperparameters, up, and down, and note the
effect on training speed, and the training outcome: how good is the result? How
sensitive is performance to hyperparameter optimization?
3. Cloud Use different computers, experiment with GPU versions to speed up
training, consider Colab, AWS, or another cloud provider with fast GPU (or TPU)
machines.
4. Gym Go to Gym and try different problems. For what kind of problems does
DQN work well, and what are the characteristics of problems for which it works less well?
5. Stable Baselines Go to the Stable baselines and implement different agent algo-
rithms. Try Dueling algorithms and Prioritized experience replay, but also other algorithms, such as actor critic or policy-based methods. (These algorithms will be ex-
plained in the next chapter.) Note their performance.
6. Tensorboard With Tensorboard you can follow the training process as it pro-
gresses. Tensorboard works on log files. Try TensorBoard on a Keras exercise and
follow different training indicators. Also try TensorBoard on the Stable Baselines
and see which indicators you can follow.
7. Checkpointing Long training runs in Keras need checkpointing, to save valuable
computations in case of a hardware or software failure. Create a large training
job, and setup checkpointing. Test everything by interrupting the training, and
try to re-load the pre-trained checkpoint to restart the training where it left off.
12 https://pytorch.org
13 https://www.tensorflow.org
Chapter 4
Policy-Based Reinforcement Learning
Core Concepts
• Policy gradient
• Actor critic
Core Problem
Core Algorithms
Jumping Robots
One of the most intricate problems in robotics is learning to walk, or more generally,
how to perform locomotion. Much work has been put into making robots walk, run
and jump. A video of a simulated robot that taught itself to jump over an obstacle
course can be found at YouTube1 [324].
Learning to walk is a challenge that takes human infants months to master. (Cats
and dogs are quicker.) Teaching robots to walk is a challenging problem that is
studied extensively in artificial intelligence and engineering. Movies abound on the
internet of robots that try to open doors, and fall over, or just try to stand upright,
and still fall over.2
Locomotion of legged robots is a difficult sequential decision problem. For each
leg, many different joints are involved. They must be actuated in the right order,
turned with the right force, over the right duration, to the right angle. Most of
these angles, forces, and durations are continuous. The algorithm has to decide
how many degrees, Newtons, and seconds, constitute the optimal policy. All these
actions are continuous quantities. Robot locomotion is a difficult problem, that is
used frequently to study policy-based deep reinforcement learning.
1 https://www.youtube.com/watch?v=hx_bgoTF7bs
2 See, for example, https://www.youtube.com/watch?v=g0TaYhjpOfo.
In this chapter, our actions are continuous and our policies may be stochastic. We will discuss both aspects and some of the challenges they pose. We will start with continuous action policies.
The problems that we discussed in the previous chapters were discrete Grid worlds,
mazes, and high-dimensional Atari games, whose action spaces were small and
discrete—we could walk north, east, west, south, or we could choose from 9 joystick
movements. In board games such as chess the action space is larger, but still discrete.
When you move your pawn to e4, you do not move it to e4½.
In this chapter the problems are different. Steering a self driving car requires
turning the steering wheel a certain angle, duration, and angular velocity, to prevent
jerky movements. Throttle movements should also be smooth and continuous.
Actuation of robot joints is continuous, as we mentioned in the introduction of
this chapter. An arm joint can move 1 degree, 2 degrees, or 90 or 180 degrees or
anything in between.
An action in a continuous space is not one of a set of discrete choices, such as
{𝑁, 𝐸, 𝑊, 𝑆}, but rather a value over a continuous range, such as [0, 2𝜋] or R+ ; the
number of possible values is infinite. How can we find the optimum value in an
infinite space in a finite amount of time? Trying out all possible combinations of
setting joint 1 to 𝑥 degrees and applying force 𝑦 in motor 2 will take infinitely long.
A solution could be to discretize the actions, although that introduces potential
quantization errors.
When actions are not discrete, the arg max operation can not be used to identify
“the” best action. Policy-based methods find suitable continuous or stochastic policies
directly, without the intermediate step of a value function and the need for the
arg max operation.
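To make this concrete, a sketch of a Gaussian policy head in PyTorch: instead of an arg max over discrete actions, the network outputs the mean of a distribution from which a continuous action is sampled (the state-independent learned log standard deviation is one common design choice; layer sizes are illustrative):

import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                nn.Linear(hidden, action_dim))
        self.log_std = nn.Parameter(torch.zeros(action_dim))  # learned, state-independent

    def forward(self, state):
        dist = torch.distributions.Normal(self.mu(state), self.log_std.exp())
        action = dist.sample()          # a value from a continuous range, no arg max needed
        return action, dist.log_prob(action).sum(-1)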
When a robot moves its hand to open a door, it must judge the distance correctly. A
small error, and it may fail (as many movie clips show).3 Stochastic environments
cause stability problems for value-based methods [479]. Small perturbations in
Q-values may lead to large changes in the policy of value-based methods. Con-
vergence can typically only be achieved at slow learning rates, to smooth out the
3 Even worse, when a robot thinks it stands still, it may actually be in the process of falling over
(and, of course, robots can not think, they only wished they could).
randomness. A stochastic policy (a target distribution) does not suffer from this
problem. Stochastic policies have another advantage. By their nature they perform
exploration, without the need to separately code 𝜖-greediness or other exploration
methods, since a stochastic policy returns a distribution over actions.
Policy-based methods find suitable stochastic policies directly. A potential dis-
advantage of purely episodic policy-based methods is that they are high-variance;
they may find local optima instead of global optima, and converge more slowly than value-based methods. Newer (actor critic) methods, such as A3C, TRPO, and PPO,
were designed to overcome these problems. We will discuss these algorithms later
in this chapter.
Before we will explain policy-based methods, we will have a closer look at some
of the applications for which they are needed.
4.1.3.1 Robotics
Most robotic applications are more complicated than the classics such as mazes,
Mountain car and Cart pole. Robotic control decisions involve more joints, directions
of travel, and degrees of freedom, than a single cart that moves in one dimension.
Typical problems involve learning of visuo-motor skills (eye-hand coordination,
grasping), or learning of different locomotion gaits of multi-legged “animals.” Some
examples of grasping and walking are illustrated in Fig. 4.1.
The environments for these actions are unpredictable to a certain degree: they
require reactions to disturbances such as bumps in the road, or the moving of objects
in a scene.
Simulating robot motion involves modeling forces, acceleration, velocity, and movement. It also includes modeling mass and elasticity for bouncing balls, tactile/grasping mechanics, and the effect of different materials. A physics mechanics model
needs to simulate the result of actions in the real world. Among the goals of such a
simulation is to model grasping, locomotion, gaits, and walking and running (see
also Sect. 4.3.1).
The simulations should be accurate. Furthermore, since model-free learning
algorithms often involve millions of actions, it is important that the physics sim-
ulations are fast. Many different physics environments for model-based robotics
have been created, among them Bullet, Havok, ODE and PhysX, see [227] for a
comparison. Of these, MuJoCo [779] and PyBullet [167] are the most popular in reinforcement learning; MuJoCo especially is used in many experiments.
Although MuJoCo calculations are deterministic, the initial state of environments
is typically randomized, resulting in an overall non-deterministic environment.
Despite many code optimizations in MuJoCo, simulating physics is still an expensive
proposition. Most MuJoCo experiments in the literature therefore are based on
stick-like entities, that simulate limited motions, in order to limit the computational
demands.
Figures 4.2 and 4.3 illustrate a few of the common Gym/MuJoCo problems that are often used in reinforcement learning: Ant, Half-cheetah, and Humanoid.
4.1.3.3 Games
In real-time video games and certain card games the decisions are also continuous.
For example, in some variants of poker, the size of monetary bets can be any amount,
which makes the action space quite large (although strictly speaking still discrete).
In games such as StarCraft and Capture the Flag, aspects of the physical world are
modeled, and movement of agents can vary in duration and speed. The environment
for these games is also stochastic: some information is hidden from the agent. This
increases the size of the state space greatly. We will discuss these games in Chap. 7
on multi-agent methods.
Now that we have discussed the problems and environments that are used with
policy-based methods, it is time to see how policy-based algorithms work. Policy-
based methods are a popular approach in model-free deep reinforcement learning.
Many algorithms have been developed that perform well. Table 4.1 lists some of
the better known algorithms that will be covered in this chapter.
We will first provide an intuitive explanation of the idea behind the policy-based
approach. Then we will provide references to the theory behind it, and discuss
advantages and disadvantages. Most of these disadvantages are alleviated by the
actor critic method, which is discussed next.
Let us start with the basic idea behind policy-based methods.
Let us see how we can optimize such a policy directly, without the intermediate step of the Q-function. We will develop a first, generic, policy-based algorithm
to see how the pieces fit together. We provide an intuitive explanation, based on
three papers [220, 192, 254].
The basic framework for policy-based algorithms is straightforward: (1) initialize the parameters of a policy, (2) sample a trajectory, (3) if it is a good trajectory, adjust the parameters to make it more likely, otherwise make it less likely, and (4) keep going until convergence. Algorithm 4.1 provides this framework in pseudocode. Please note the similarity with the listings in the previous chapter (Listings 3.1–3.3), and especially with the deep learning algorithms, where we also optimized function parameters in a loop.
The policy is represented by a set of parameters $\theta$. Together, the parameters $\theta$ map the states $s$ to action probabilities $\pi_\theta(a \mid s)$. When we are given a set of parameters,
how should we adjust them to improve the policy? The basic idea is to randomly
sample a new policy, and if it is better, adjust the parameters a bit in the direction
4 Policy-based methods may use a value function to learn the policy parameters 𝜃, but do not use
it for action selection.
of this new policy (and away if it is worse). Let us see in more detail how this idea
works.
To know which policy is best, we need some kind of measure of its quality. We
denote the quality of the policy that is defined by the parameters as 𝐽 (𝜃). It is
natural to use the value function of the start state as our measure of quality
𝐽 (𝜃) = 𝑉 𝜋 (𝑠0 ).
When the objective is differentiable in the parameters, all we need is a way to estimate the gradient
$$\nabla_\theta J(\theta) = \nabla_\theta V^{\pi}(s_0)$$
of this expression, so that we can maximize our objective function $J(\cdot)$.
Policy-based methods apply gradient-based optimization, using the derivative
of the objective to find the optimum. Since we are maximizing, we apply gradient
ascent. In each time step 𝑡 of the algorithm we perform the following update:
𝜃 𝑡+1 = 𝜃 𝑡 + 𝛼 · ∇ 𝜃 𝐽 (𝜃)
for learning rate 𝛼 ∈ R+ and performance objective 𝐽, see the gradient ascent
algorithm in Alg. 4.1.
Remember that 𝜋 𝜃 (𝑎|𝑠) is the probability of taking action 𝑎 in state 𝑠. This
function 𝜋 is represented by a neural network, mapping states 𝑠 at the input side
of the network to action probabilities on the output side of the network. The
parameters 𝜃 determine the value of our function 𝜋. Our goal is to update the
parameters so that 𝜋 𝜃 becomes the optimal policy. The better the action 𝑎 is, the
more we want to increase the parameters 𝜃.
If we would know, by some magical way, the optimal action $a^\star$, then we could use the gradient to push each parameter $\theta_t$, $t \in$ trajectory, of the policy in the direction of the optimal action, as follows:
$$\theta_{t+1} = \theta_t + \alpha \nabla_\theta \pi_{\theta_t}(a^\star \mid s_t).$$
Unfortunately, we do not know which action is best. We can, however, take a sample
trajectory and use estimates of the value of the actions of the sample. This estimate
can use the regular 𝑄ˆ function from the previous chapter, or the discounted return
function, or an advantage function (to be introduced shortly). Then, by multiplying
the push of the parameters (the probability) with our estimate, we get
$$\theta_{t+1} = \theta_t + \alpha \hat{Q}(s, a)\, \nabla_\theta \pi_{\theta_t}(a \mid s).$$
A problem with this formula is that not only are we going to push harder on actions
with a high value, but also more often, because the policy 𝜋 𝜃𝑡 (𝑎|𝑠) is the probability
of action 𝑎 in state 𝑠. Good actions are doubly improved, which may cause instability.
We can correct for this by dividing by the probability of the action:
$$\theta_{t+1} = \theta_t + \alpha \hat{Q}(s, a)\, \frac{\nabla_\theta \pi_{\theta_t}(a \mid s)}{\pi_{\theta_t}(a \mid s)}.$$
In fact, we have now almost arrived at the classic policy-based algorithm, REIN-
FORCE, introduced by Williams in 1992 [843]. In this algorithm our formula is
expressed in a way that is reminiscent of a logarithmic cross-entropy loss function.
We can arrive at such a log-formulation by using the basic fact from calculus that
$$\nabla \log f(x) = \frac{\nabla f(x)}{f(x)}.$$
Substituting this formula into our equation, we thus arrive at
$$\theta_{t+1} = \theta_t + \alpha \hat{Q}(s, a)\, \nabla_\theta \log \pi_{\theta_t}(a \mid s).$$
The versions of gradient ascent (Alg. 4.1) and REINFORCE (Alg. 4.2) that we show,
update the parameters inside the innermost loop. All updates are performed as the
time steps of the trajectory are traversed. This method is called the online approach.
When multiple processes work in parallel to update data, the online approach makes
sure that information is used as soon as it is known. The policy gradient algorithm
is also frequently formulated in batch fashion: all gradients are summed over the states and actions, and the parameters are updated at the end of the trajectory. Since parameter updates can be expensive, this batch version of the policy gradient algorithm can be more efficient. An intermediate form that is frequently applied in practice is to work with mini-batches, trading off computational efficiency for information efficiency.
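A sketch of one REINFORCE episode with a single batch update at the end of the trajectory, assuming a policy module that returns an action and its log-probability (such as the Gaussian policy sketched earlier) and the classic Gym step API:

import torch

def reinforce_episode(env, policy, optimizer, gamma=0.99):
    log_probs, rewards = [], []
    state, done = env.reset(), False
    while not done:                                   # sample one full trajectory
        action, log_prob = policy(torch.as_tensor(state, dtype=torch.float32))
        state, reward, done, _ = env.step(action.numpy())
        log_probs.append(log_prob)
        rewards.append(reward)
    returns, g = [], 0.0                              # discounted return Q_hat for every time step
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    loss = -(torch.stack(log_probs) * torch.as_tensor(returns)).sum()  # gradient ascent on J
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()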
Let us now take a step back and look at the algorithm and assess how well it
works.
Now that we have seen the principles behind a policy-based algorithm, let us
see how policy-based algorithms work in practice, and compare advantages and
disadvantages of the policy-based approach.
Let us start with the advantages. First of all, parameterization is at the core of
policy-based methods, making them a good match for deep learning. For value-
based methods deep learning had to be retrofitted, giving rise to complications
as we saw in Sect. 3.2.3. Second, policy-based methods can easily find stochastic
policies; value-based methods find deterministic policies. Due to their stochastic
nature, policy-based methods naturally explore, without the need for methods
such as 𝜖-greedy, or more involved methods, that may require tuning to work well.
Third, policy-based methods are effective in high-dimensional or continuous action
spaces. Small changes in 𝜃 lead to small changes in 𝜋, and to small changes in state
distributions (they are smooth). Policy-based algorithms do not suffer (as much)
from convergence and stability issues that are seen in arg max-based algorithms in
large or continuous action spaces.
On the other hand, there are disadvantages to the episodic Monte Carlo version
of the REINFORCE algorithm. Remember that REINFORCE generates a full random
episode in each iteration, before it assesses the quality. (Value-based methods use a
reward to select the next action in each time step of the episode). Because of this,
policy-based is low bias, since full random trajectories are generated. However, they
are also high variance, since the full trajectory is generated randomly (whereas value-
based uses the value for guidance at each selection step). What are the consequences?
First, policy evaluation of full trajectories has low sample efficiency and high
variance. As a consequence, policy improvement happens infrequently, leading to
4.2 Policy-Based Agents 99
The actor critic approach combines the advantage of the value-based approach
(low variance) with the advantage of the policy-based approach (low bias). The
actor stands for the action, or policy-based, approach. The critic stands for the
value-based approach. The critic can use full Monte Carlo trajectories to determine
the value, but more often uses single step or 𝑛-step temporal difference targets.
The actor critic approach is sometimes called the temporal difference version of
policy-based methods. Actor critic methods use value approximators to replace
rollout estimates and reduce variance with an advantage function, at the cost of
some bias [742]. Actor critic methods are popular because they work well. It is an
active field where many different algorithms have been developed.
Action selection in episodic REINFORCE is random, and hence low bias. However,
variance is high since the full episode is sampled (the size and direction of the update
can strongly vary between different samples). Actor critic is designed to use the low variance (but higher bias) of value-based methods to reduce the high variance of policy-based methods, at the cost of introducing some bias.
The variance can originate from two sources: (1) high variance in the cumulative
reward estimate, and (2) high variance in the gradient estimate. For both problems
a solution has been developed: bootstrapping, and baseline subtraction. Both of
these methods use the learned value function, which we denote by 𝑉 𝜙 (𝑠). The value
function can use a separate neural network, with separate parameters 𝜙, or it can
use a value head on top of the actor parameters 𝜃. In this case the actor and the
critic share the lower layers of the network, and the network has two separate top
heads: a policy and a value head. We will use 𝜙 for the parameters of the value
function, to discriminate them from the policy parameters 𝜃.
To reduce the variance of the policy gradient, we can increase the number of traces
𝑀 that we sample. However, the possible number of different traces is exponential
in the length of the trace for a given stochastic policy, and we cannot afford to
sample them all for one update. In practice the number of sampled traces 𝑀 is small,
sometimes even 𝑀 = 1, updating the policy parameters from a single trace. The
return of the trace depends on many random action choices; the update has high
variance. A solution is to use a principle that we know from temporal difference learning: bootstrapping the value function step by step. Bootstrapping uses the value function to compute intermediate $n$-step values per episode, trading off variance
for bias. The 𝑛-step values are in-between full-episode Monte Carlo and single step
temporal difference targets.
We can use bootstrapping to compute an $n$-step target
$$\hat{Q}_n(s_t, a_t) = \sum_{k=0}^{n-1} r_{t+k} + V_\phi(s_{t+n}),$$
and we can then update the value function, for example on a squared loss
$$\mathcal{L}(\phi \mid s_t, a_t) = \big(\hat{Q}_n(s_t, a_t) - V_\phi(s_t)\big)^2,$$
and update the policy with the standard policy gradient, but with this (improved) value $\hat{Q}_n$:
$$\nabla_\theta \mathcal{L}(\theta \mid s_t, a_t) = \hat{Q}_n(s_t, a_t) \cdot \nabla_\theta \log \pi_\theta(a_t \mid s_t).$$
We are now using the value function prominently in the algorithm, which is pa-
rameterized by a separate set of parameters, denoted by 𝜙; the policy parameters
are still denoted as 𝜃. The use of both policy and value is what gives the actor critic
approach its name.
Another method to reduce the variance of the policy gradient is by baseline subtrac-
tion. Subtracting a baseline from a set of numbers reduces the variance, but leaves
the expectation unaffected. Assume, in a given state with three available actions,
that we sample action returns of 65, 70, and 75, respectively. Policy gradient will
then try to push the probability of each action up, since the return for each action is
positive. The above method may lead to a problem, since we are pushing all actions
up (only somewhat harder on one of them). It might be better if we only push up
on actions that are higher than the average (action 75 is higher than the average of
70 in this example), and push down on actions that are below average (65 in this
example). We can do so through baseline subtraction.
The most common choice for the baseline is the value function. When we subtract
the value 𝑉 from a state-action value estimate 𝑄, the function is called the advantage
function:
𝐴(𝑠𝑡 , 𝑎 𝑡 ) = 𝑄(𝑠𝑡 , 𝑎 𝑡 ) − 𝑉 (𝑠𝑡 ).
The 𝐴 function subtracts the value of the state 𝑠 from the state-action value. It now
estimates how much better a particular action is compared to the expectation of a
particular state.
We can combine baseline subtraction with any bootstrapping method to estimate the cumulative reward $\hat{Q}(s_t, a_t)$. We compute
$$\hat{A}_n(s_t, a_t) = \hat{Q}_n(s_t, a_t) - V_\phi(s_t).$$
We have now seen the ingredients to construct a full actor critic algorithm. An
example algorithm is shown in Alg. 4.4.
With these two ideas we can formulate an entire spectrum of policy gradient
methods, depending on the type of cumulative reward estimate that they use. In
general, the policy gradient estimator takes the following form, where we now
introduce a new target Ψ𝑡 that we sample from the trajectories 𝜏:
$$\nabla_\theta J(\theta) = \mathbb{E}_{\tau_0 \sim p_\theta(\tau_0)}\Big[\sum_{t=0}^{n} \Psi_t\, \nabla_\theta \log \pi_\theta(a_t \mid s_t)\Big]$$
There is a variety of potential choices for $\Psi_t$, based on the use of bootstrapping and baseline subtraction:
$$\Psi_t = \hat{Q}_{MC}(s_t, a_t) = \sum_{i=t}^{\infty} \gamma^i \cdot r_i \qquad\qquad \text{Monte Carlo target}$$
$$\Psi_t = \hat{Q}_n(s_t, a_t) = \sum_{i=t}^{n-1} \gamma^i \cdot r_i + \gamma^n V_\theta(s_n) \qquad\qquad \text{bootstrap ($n$-step target)}$$
$$\Psi_t = \hat{A}_{MC}(s_t, a_t) = \sum_{i=t}^{\infty} \gamma^i \cdot r_i - V_\theta(s_t) \qquad\qquad \text{baseline subtraction}$$
$$\Psi_t = \hat{A}_n(s_t, a_t) = \sum_{i=t}^{n-1} \gamma^i \cdot r_i + \gamma^n V_\theta(s_n) - V_\theta(s_t) \qquad\qquad \text{baseline + bootstrap}$$
$$\Psi_t = Q_\phi(s_t, a_t) \qquad\qquad \text{Q-value approximation}$$
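A sketch of the baseline-plus-bootstrap choice $\Psi_t = \hat{A}_n$ in code, assuming the rewards and value estimates of one sampled trajectory segment are given (discounting here is taken relative to the start of the segment):

import torch

def nstep_advantages(rewards, values, bootstrap_value, gamma=0.99):
    # rewards: r_t for the segment; values: V(s_t) for the same steps;
    # bootstrap_value: V(s_{t+n}) at the end of the segment
    returns, g = [], bootstrap_value
    for r in reversed(rewards):
        g = r + gamma * g               # discounted return with a bootstrapped tail
        returns.insert(0, g)
    return torch.as_tensor(returns) - torch.as_tensor(values)   # A_hat_n = Q_hat_n - V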
Actor critic algorithms are among the most popular model-free reinforcement
learning algorithms in practice, due to their good performance. After having dis-
cussed relevant theoretical background, it is time to look at how actor critic can be
implemented in a practical, high performance, algorithm.
Many high performance implementations are based on the actor critic approach. For
large problems the algorithm is typically parallelized and implemented on a large
cluster computer. A well-known parallel algorithm is Asynchronous advantage actor
critic (A3C). A3C is a framework that uses asynchronous (parallel and distributed)
gradient descent for optimization of deep neural network controllers [520].
There is also a non-parallel version of A3C, the synchronous variant A2C [848].
Together they popularized this approach to actor critic methods. Figure 4.4 shows
the distributed architecture of A3C [381]; Alg. 4.5 shows the pseudocode, from
Mnih et al. [520]. The A3C network will estimate both a value function 𝑉 𝜙 (𝑠)
and an advantage function 𝐴 𝜙 (𝑠, 𝑎), as well as a policy function 𝜋 𝜃 (𝑎|𝑠). In the
experiments on Atari [520], the neural networks were separate fully-connected
policy and value heads at the top (orange in Fig. 4.4), followed by joint convolutional
networks (blue). This network architecture is replicated over the distributed workers.
Each of these workers runs on a separate processor thread and is synced with the global parameters from time to time.
A3C improves on classic REINFORCE in the following ways: it uses an advantage
actor critic design, it uses deep learning, and it makes efficient use of parallelism
in the training stage. The gradient accumulation step at the end of the code can
be considered as a parallelized reformulation of minibatch-based stochastic gra-
dient update: the values of 𝜙 or 𝜃 are adjusted in the direction of each training
thread independently. A major contribution of A3C comes from its parallelized
and asynchronous architecture: multiple actor-learners are dispatched to separate
instantiations of the environment; they all interact with the environment and collect
experience, and asynchronously push their gradient updates to a central target
network (just as DQN).
It was found that the parallel actor-learners have a stabilizing effect on training.
A3C surpassed the previous state-of-the-art on the Atari domain and succeeded on
a wide variety of continuous motor control problems as well as on a new task of
navigating random 3D mazes using high-resolution visual input [520].
should not be too large. A less naive approach is to use an adaptive step size that depends on the progress of the optimization.
Trust regions are used in general optimization problems to constrain the update
size [733]. The algorithms work by computing the quality of the approximation; if
it is still good, then the trust region is expanded. Alternatively, the region can be
shrunk if the divergence of the new and current policy is getting large.
Schulman et al. [680] introduced trust region policy optimization (TRPO) based on these ideas, trying to take the largest possible parameter improvement step on a
policy, without accidentally causing performance to collapse.
To this end, as it samples policies, TRPO compares the old and the new policy:
$$\mathcal{L}(\theta) = \mathbb{E}_t\Big[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)} \cdot A_t\Big].$$
TRPO tries to maximize this loss function $\mathcal{L}$, subject to the constraint that the old and the new policy are not too far apart. In TRPO the Kullback-Leibler divergence5 is used for this purpose:
$$\mathbb{E}_t\big[\mathrm{KL}\big(\pi_{\theta_{\text{old}}}(\cdot \mid s_t),\, \pi_\theta(\cdot \mid s_t)\big)\big] \le \delta.$$
baseline.8 Both TRPO and PPO are on-policy algorithms. Hsu et al. [352] reflect on
design choices of PPO.
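PPO replaces the hard divergence constraint by clipping this probability ratio. A minimal sketch of the resulting clipped surrogate loss (the clip coefficient is a hyperparameter, often around 0.2):

import torch

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    ratio = torch.exp(log_probs_new - log_probs_old)               # pi_theta / pi_theta_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()                   # minimize the negative surrogate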
However, when there is not sufficient exploration, a potential problem is the collapse
of the policy distribution. The distribution then becomes too narrow, and we lose
the exploration pressure that is necessary for good performance.
Although we could simply add additional noise, another common approach
is to use entropy regularization (see Sect. A.2 for details). We then add an additional term to the loss function that encourages the entropy 𝐻 of the policy distribution to stay large. Soft actor critic (SAC) is a well-known algorithm that focuses on maximizing both the expected return and the entropy of the policy.
8 https://openai.com/blog/openai-baselines-ppo/#ppo
Actor critic approaches improve the policy-based approach with various value-based
ideas, and with good results. Another method to join policy and value approaches,
is to use a learned value function as a differentiable target to optimize the policy
against—we let the policy follow the value function [523]. An example is the deter-
ministic policy gradient [704]. Imagine we collect data 𝐷 and train a value network
𝑄 𝜙 (𝑠, 𝑎). We can then attempt to optimize the parameters 𝜃 of a deterministic
policy by optimizing the prediction of the value network:
9 https://github.com/haarnoja/sac
$$J(\theta) = \mathbb{E}_{s\sim D}\Big[\sum_{t=0}^{n} Q_\phi\big(s, \pi_\theta(s)\big)\Big],$$
$\phi' \leftarrow \tau \phi + (1 - \tau)\phi'$
$\theta' \leftarrow \tau \theta + (1 - \tau)\theta'$
end for
end for
The pseudocode of DDPG is shown in Alg. 4.6. DDPG has been shown to work
well on simulated physics tasks, including classic problems such as Cartpole, Gripper,
Walker, and Car driving, being able to learn policies directly from raw pixel inputs.
DDPG is off-policy and uses a replay buffer and a separate target network to achieve
stable deep reinforcement learning (just as DQN).
DDPG is a popular actor critic algorithm. Annotated pseudocode and efficient
implementations can be found at Spinning Up10 and Stable Baselines11 in addition
to the original paper [479].
Conclusion
We have seen quite some algorithms that combine the policy and value approach,
and we have discussed possible combinations of these building blocks to construct
working algorithms. Figure 4.5 provides a conceptual map of how the different
approaches are related, including two approaches that will be discussed in later
chapters (AlphaZero and Evolutionary approaches).
Researchers have constructed many algorithms and performed experiments to
see when they perform best. Quite a number of actor critic algorithms have been
developed. Working high-performance Python implementations can be found on
GitHub in the Stable Baselines.12
10 https://spinningup.openai.com
11 https://stable-baselines.readthedocs.io
12 https://stable-baselines.readthedocs.io/en/master/guide/quickstart.html
Now that we have discussed these algorithms, let us see how they work in practice,
to get a feeling for the algorithms and their hyperparameters. It is easy to find good
working implementations online, and also evaluation environments can easily be
found.
MuJoCo is the most frequently used physics simulator in policy-based learning
experiments. Gym, the (Stable) Baselines and Spinning up allow us to run any mix
of learning algorithms and experimental environments. You are encouraged to try
these experiments yourself.
Please be warned, however, that attempting to install all necessary pieces of
software may invite a minor version-hell. Different versions of your operating
system, of Python, of GCC, of Gym, of the Baselines, of TensorFlow or PyTorch, and
of MuJoCo all need to line up before you can see beautiful images of moving arms,
legs and jumping humanoids. Unfortunately not all of these versions are backwards-
compatible, specifically the switch from Python 2 to 3 and from TensorFlow 1 to 2
introduced incompatible language changes.
Getting everything to work may be an effort, and may require switching ma-
chines, operating systems and languages, but you should really try. This is the
disadvantage of being part of one of the fastest moving fields in machine learning
research. If things do not work with your current operating system and Python
version, in general a combination of Linux Ubuntu (or macOS), Python 3.7, Ten-
sorFlow 1 or PyTorch, Gym, and the Baselines may be a good idea to start with.
Search on the GitHub repositories or Stackoverflow when you get error messages.
Sometimes downgrading to the one-but latest version will be necessary, or fiddling
with include or library paths.
If everything works, then both Spinning up and the Baselines provide convenient
scripts that facilitate mixing and matching algorithms and environments from the
command line.
For example, to run PPO on MuJoCo’s Walker environment, with a 32 × 32
hidden layer, the following command line does the job:
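One way to do this with OpenAI Spinning Up, as a sketch that assumes Spinning Up and MuJoCo are installed (the --hid flag sets the hidden layer sizes; the environment version may differ):

python -m spinup.run ppo --hid "[32,32]" --env Walker2d-v2 --exp_name walker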
13 Tutorial: https://spinningup.openai.com/en/latest/spinningup/rl_intro3.html#
deriving-the-simplest-policy-gradient
14 TensorFlow: https://github.com/openai/spinningup/blob/master/spinup/examples/
tf1/pg_math/1_simple_pg.py
15 PyTorch: https://github.com/openai/spinningup/blob/master/spinup/examples/
pytorch/pg_math/1_simple_pg.py
To train DDPG from the Baselines on the Half-Cheetah, the command is:
python -m baselines.run --alg=ddpg --env=HalfCheetah-v2 --num_timesteps=1e6
All hyperparameters can be controlled via the command line, providing for a
flexible way to run experiments. A final example command line:
python scripts/all_plots.py
-a ddpg -e HalfCheetah Ant Hopper Walker2D -f logs/
-o logs/ddpg_results
The Stable Baselines site explains what this command line does.
4.3.1 Locomotion
One of the problems of locomotion for legged entities is learning gaits. Humans, with two legs, can walk, run, and jump, among other gaits. Dogs and horses, with four legs, have other gaits, where their legs may move in even more interesting patterns, such as the trot, canter, pace and gallop. The challenges that we pose to robots are often easier. Typical reinforcement learning tasks are for a one-legged robot to learn to jump, for biped robots to walk and jump, and for a quadruped to learn to use its multitude of legs in any coordinated fashion that results in forward movement, however it is achieved. Learning such policies can be quite
computationally expensive, and a curious simulated virtual animal has emerged
that is cheaper to simulate: the two-legged half-cheetah, whose task it is to run
forward. We have already seen some of these robotic creatures in Figs. 4.2–4.3.
The first approach that we will discuss is by Schulman et al. [681]. They report
experiments where human-like bipeds and quadrupeds must learn to stand up and
learn running gaits. These are challenging 3D locomotion tasks that were formerly
attempted with hand-crafted policies. Figure 4.6 shows a sequence of states.
Fig. 4.9 DeepMind Control Suite. Top: Acrobot, Ball-in-cup, Cart-pole, Cheetah, Finger, Fish,
Hopper. Bottom: Humanoid, Manipulator, Pendulum, Point-mass, Reacher, Swimmer (6 and 15
links), Walker [754]
4.3.3 Benchmarking
This chapter is concerned with the second kind of model-free algorithms: policy-based methods. We summarize what we have learned, and provide pointers to further reading.
Summary
Policy-based model-free methods are some of the most popular methods of deep
reinforcement learning. For large, continuous action spaces, indirect value-based
methods are not well suited, because of the use of the arg max function to recover
the best action to go with the value. Where value-based methods work step-by-step,
vanilla policy-based methods roll out a full future trajectory or episode. Policy-based
methods work with a parameterized current policy, which is well suited for a neural
network as policy function approximator.
After the full trajectory has been rolled out, the reward and the value of the
trajectory is calculated and the policy parameters are updated, using gradient ascent.
Since the value is only known at the end of an episode, classic policy-based methods have higher variance than value-based methods, and may converge to a local optimum, but they are more stable in continuous action spaces. The best known classic
policy method is called REINFORCE.
Actor critic methods add a value network to the policy network, to achieve the
benefits of both approaches, with some value-guidance during the episode rollout.
To reduce variance, n-step temporal difference bootstrapping can be added, and a
baseline value can be subtracted, so that we get the so-called advantage function
(which subtracts the value of the parent state from the action values of the future
states, bringing their expected value closer to zero). Well known actor critic methods
are A3C, DDPG, TRPO, PPO, and SAC.19 A3C features an asynchronous (parallel, distributed) implementation; DDPG is an actor critic version of DQN for continuous action spaces, finding the maximum using a gradient-based learning rule; TRPO and PPO use trust regions to achieve adaptive step sizes in nonlinear spaces; SAC optimizes for the expected value and the entropy of the policy. Benchmark studies have shown that the performance of these actor critic algorithms is as good as or better than that of value-based methods [212, 327].
Robot learning is among the most popular applications for policy-based methods. Model-free methods have low sample efficiency, and to prevent the cost of wear after
millions of samples, most experiments use a physics simulation as environment,
such as MuJoCo. Two main application areas are locomotion (learning to walk,
learning to run) and visuo-motor interaction (learning directly from camera images
of one’s own actions).
19Asynchronous advantage actor critic; Deep deterministic policy gradients; Trust region policy
optimization; Proximal policy optimization; Soft actor critic.
Further Reading
Policy-based methods have been an active research area for some time. Their natural
suitability for deep function approximation for robotics applications and other ap-
plications with continuous action spaces has spurred a large interest in the research
community. The classic policy-based algorithm is Williams’ REINFORCE [843],
which is based on the policy gradient theorem, see [743]. Joining policy and value-
based methods as we do in actor critic is discussed in Barto et al. [57]. Mnih et
al. [520] introduce a modern efficient parallel implementation named A3C. After the
success of DQN a version for the continuous action space of policy-based methods
was introduced as DDPG by Lillicrap et al. [479]. Schulman et al. have worked on
trust regions, yielding efficient popular algorithms TRPO [680] and PPO [682].
Important benchmark studies of policy-based methods are Duan et al. [212] and
Henderson et al. [327]. This last paper has stimulated reproducibility in reinforce-
ment learning research.
Software environments that are used in testing policy-based methods are Mu-
JoCo [779] and PyBullet [167]. Gym [108] and the DeepMind control suite [754]
incorporate MuJoCo and provide an easy to use Python interface. An active research
community has emerged around the DeepMind control suite.
Exercises
We have come to the end of this chapter, and it is time to test our understanding
with questions, exercises, and a summary.
Questions
Below are some quick questions to check your understanding of this chapter. For
each question a simple, single sentence answer is sufficient.
1. Why are value-based methods difficult to use in continuous action spaces?
2. What is MuJoCo? Can you name a few example tasks?
3. What is an advantage of policy-based methods?
4. What is a disadvantage of full-trajectory policy-based methods?
5. What is the difference between actor critic and vanilla policy-based methods?
6. How many parameter sets are used by actor critic? How can they be represented
in a neural network?
7. Describe the relation between Monte Carlo REINFORCE, n-step methods, and
temporal difference bootstrapping.
8. What is the advantage function?
9. Describe a MuJoCo task that methods such as PPO can learn to perform well.
10. Give two actor critic approaches to further improve upon bootstrapping and
advantage functions, that are used in high-performing algorithms such as PPO
and SAC.
11. Why is learning robot actions from image input hard?
Exercises
Let us now look at programming exercises. If you have not already done so, install
MuJoCo or PyBullet, install the DeepMind control suite.20 We will use agent algo-
rithms from the Stable baselines. Furthermore, browse the examples directory of
the DeepMind control suite on GitHub, and study the Colab notebook.
1. REINFORCE Go to the Medium blog21 and reimplement REINFORCE. You can
choose PyTorch, or TensorFlow/Keras, in which case you will have to improvise.
Run the algorithm on an environment with a discrete action space, and compare
with DQN. Which works better? Run in an environment with a continuous action
space. Note that Gym offers a discrete and a continuous version of Mountain
Car.
2. Algorithms Run REINFORCE on a walker environment from the baselines. Run
DDPG, A3C, and PPO. Run them for different time steps. Make plots. Com-
pare training speed, and outcome quality. Vary hyperparameters to develop an
intuition for their effect.
3. Suite Explore the DeepMind control suite. Look around and see what environ-
ments have been provided, and how you can use them. Consider extending an
environment. What learning challenges would you like to introduce? First do
a survey of the literature that has been published about the DeepMind control
suite.
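For Exercise 1, the core of REINFORCE fits in a few dozen lines. The following is a minimal sketch in PyTorch, written against the classic Gym API (before version 0.26, where reset and step return extra values); the environment, network size, and hyperparameters are illustrative choices, not prescriptions.

```python
# Minimal REINFORCE sketch (PyTorch, classic Gym API); hyperparameters are illustrative.
import gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(
    nn.Linear(env.observation_space.shape[0], 64),
    nn.Tanh(),
    nn.Linear(64, env.action_space.n),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
gamma = 0.99

for episode in range(500):
    obs, done = env.reset(), False
    log_probs, rewards = [], []
    while not done:
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, done, _ = env.step(action.item())
        rewards.append(reward)
    # Monte Carlo returns G_t for the full trajectory (full-trajectory REINFORCE).
    returns, G = [], 0.0
    for r in reversed(rewards):
        G = r + gamma * G
        returns.insert(0, G)
    returns = torch.as_tensor(returns, dtype=torch.float32)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # simple variance reduction
    # Policy-gradient loss: minimize -sum_t log pi(a_t|s_t) * G_t
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

For a continuous action space, such as the continuous Mountain Car, the Categorical head would be replaced by, for example, a Gaussian policy head.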
20https://github.com/deepmind/dm_control
21 https://medium.com/@ts1829/policy-gradient-reinforcement-learning-in-
pytorch-df1383ea0baf
Chapter 5
Model-Based Reinforcement Learning
The previous chapters discussed model-free methods, and we saw their success
in video games and simulated robotics. In model-free methods the agent updates
a policy directly from the feedback that the environment provides on its actions.
The environment performs the state transitions and calculates the reward. A dis-
advantage of model-free methods is that they can be slow to train; often millions
of environment samples are needed before the policy function converges to an
optimum.
In contrast, with model-based methods the agent builds its own internal transition
model from the environment feedback. The agent can then use this local transition
model instead of the environment to find out about the effect of actions on states
and rewards. The agent can use a planning algorithm to play what-if games, and
generate policy updates, all without causing any state changes in the environment.
This approach promises higher quality at lower sample complexity.
Model-based methods update the policy indirectly: the agent first learns a local
transition model from the environment, which the agent then uses to update the
policy. Indirectly learning the policy function has two consequences. On the positive
side, as soon as the agent has its own model of the state transitions of the world, it
can learn the best policy for free, without further incurring the cost of acting in the
environment. Model-based methods thus are able to have lower sample complexity.
The downside is that the learned transition model may be inaccurate, and it may
be a bad idea to only rely on the internal transition model to learn the policy. No
matter how many samples can be taken for free from the model, if the agent’s local
transition model does not reflect the environment’s real transition model, then the
locally learned policy function will not work in the environment. Thus, dealing with
uncertainty and model bias are important elements in model-based reinforcement
learning.
The transition models that are learned by model-based methods can be imple-
mented in different ways. Models can be tabular, or they can be based on various
kinds of deep learning, as we will see.
This chapter will start with an example showing how model-based methods
work. Next, we describe in more detail different kinds of model-based approaches;
approaches that focus on learning an accurate model, and approaches for planning
with an imperfect model. Finally, we describe application environments for which
model-based methods have been used in practice, to see how well the approaches
perform.
The chapter is concluded with exercises, a summary, and pointers to further
reading.
Core Concepts
• Imagination
• Uncertainty models
• World models, Latent models
• Model-predictive control
• Deep end-to-end planning and learning
Core Problem
Core Algorithms
Model-free Q-learning: the agent picks the start state 𝑠0, and uses (for example)
an 𝜖-greedy behavior policy on the action-value function 𝑄(𝑠, 𝑎) to pick the next
action. The environment then executes the action, computes the next state 𝑠′ and
reward 𝑟, and returns these to the agent. The agent updates its action-value function
𝑄(𝑠, 𝑎) with the familiar update rule

$Q(s, a) \leftarrow Q(s, a) + \alpha \, [r + \gamma \max_{a'} Q(s', a') - Q(s, a)].$

The agent repeats this procedure until the values in the 𝑄-function no longer change
greatly.
Thus we pick our start location in the city, perform one walk along a block in an
𝜖-greedy direction, and record the reward and the new state at which we arrive.
We use the information to update the policy, and from our new location, we walk
again in an 𝜖-greedy direction using the policy. If we find the supermarket, we start
over again, trying to find a shorter path, until our policy values no longer change
(this may take many environment interactions). The best policy is then the path
with the shortest distance.
Model-based planning and learning: the agent uses the 𝑄(𝑠, 𝑎) function as behav-
ior policy as before to sample the new state and reward from the environment, and
to update the policy (𝑄-function). In addition, however, the agent will record the
new state and reward in a local transition function 𝑇𝑎(𝑠, 𝑠′) and reward function
𝑅𝑎(𝑠, 𝑠′). Because the agent now has these local entries we can also sample from
our local functions to update the policy. We can choose: sample from the (expensive)
environment transition function, or from the (cheap) local transition function. There
is a caveat with sampling locally, however. The local functions may contain fewer
entries—or only high variance entries—especially in the early stages, when few
environment samples have been performed. The usefulness of the local functions
increases as more environment samples are performed.
Thus, we now have a local map on which to record the new states and rewards.
We will use this map to peek, as often as we like and at no cost, at a location on
that map, to update the 𝑄-function. As more environment samples come in, the
map will have more and more locations for which a distance to the supermarket is
recorded. When glances at the map do not improve the policy anymore, we have to
walk in the environment again, and, as before, update the map and the policy.
In conclusion, model-free finds all policy updates outside the agent, from the
environment feedback; model-based also2 uses policy updates from within the agent,
using information from its local map (see Fig. 5.1). In both methods all updates to
the policy are ultimately derived from the environment feedback; model-based just
offers a different way to use the information to update the policy, a way that may
be more information-efficient, by keeping information from each sample within
the agent transition model and re-using that information.
2 One option is to only update the policy from the agent’s internal transition model, and not by the
environment samples anymore. However, another option is to keep using the environment samples
to also update the policy in the model-free way. Sutton’s Dyna [740] approach is a well-known
example of this last, hybrid, approach. Compare also Fig. 5.4 and Fig. 5.2.
[Fig. 5.1: diagram of the agent's Value/Policy, its Model, and the Environment, connected by acting, direct RL, model learning, and planning.]
The application environments for model-based reinforcement learning are the same
as for model-free; our goal, however, is to solve larger and more complex problems
in the same amount of time, by virtue of the lower sample complexity and, as it
were, a deeper understanding of the environment.
Sample Efficiency
The sample efficiency of an agent algorithm tells us how many environment samples
it needs for the policy to reach a certain accuracy.
To achieve high sample efficiency, model-based methods learn a dynamics model.
However, learning a high-accuracy, high-capacity model of a high-dimensional problem
itself requires a large number of training examples, to prevent overfitting (see Sect. B.2.7).
The samples needed to learn an accurate transition model can thus negate (some
of) the advantage of the low sample complexity that model-based learning of the
policy function achieves. Constructing accurate transition models can be difficult
in practice, and for many sequential decision problems the best results are often
achieved with model-free methods, although model-based methods are becoming
stronger (see, for example, Wang et al. [827]).
Tabular Imagination
[Fig. 5.2: Tabular imagination: the Policy/Value, the Dynamics Model, and the Environment, connected by acting, learning, and planning.]
In the imagination approach, the environment is sampled with actions according to the behavior
policy, and the feedback is used to update this same behavior policy. Imagination also uses the environment sample
to update the dynamics model {𝑇𝑎 , 𝑅 𝑎 }. This extra model is also sampled, and
provides extra updates to the behavior policy, in between the model-free updates.
The diagram in Fig. 5.2 shows how sample feedback is used both for updating
the policy directly and for updating the model, which then updates the policy, by
planning “imagined” feedback. In Alg. 5.2 the general imagination approach is
shown as pseudocode.
Sutton’s Dyna-Q [740, 742], which is shown in more detail in Alg. 5.3, is a
concrete implementation of the imagination approach. Dyna-Q uses the Q-function
as behavior policy 𝜋(𝑠) to perform 𝜖-greedy sampling of the environment. It then
updates this policy with the reward, and it also updates an explicit model 𝑀. When the model 𝑀
has been updated, it is used 𝑁 times by planning with random actions to update
the Q-function. The pseudocode shows the learning steps (from environment) and
𝑁 planning steps (from model). In both cases the Q-function state-action values are
updated. The best action is then derived from the 𝑄-values as usual.
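To make the interplay of learning and planning concrete, here is a minimal tabular sketch in the spirit of Dyna-Q (the classic Gym API is assumed, terminal-state handling in the planning loop is simplified, and the hyperparameters are illustrative):

```python
import random
from collections import defaultdict

def dyna_q(env, n_planning=50, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Dyna-Q sketch: one learning step from the environment,
    then N planning steps from the learned (deterministic) model."""
    Q = defaultdict(float)                # Q[(state, action)]
    model = {}                            # model[(state, action)] = (reward, next_state)
    actions = range(env.action_space.n)

    def epsilon_greedy(s):
        if random.random() < epsilon:
            return random.choice(list(actions))
        return max(actions, key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = epsilon_greedy(s)
            s2, r, done, _ = env.step(a)
            # Direct RL: Q-learning update from the environment sample.
            target = r + gamma * max(Q[(s2, b)] for b in actions) * (not done)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            # Model learning: remember the observed transition.
            model[(s, a)] = (r, s2)
            # Planning: N extra updates from previously observed transitions.
            for _ in range(n_planning):
                (ps, pa), (pr, ps2) = random.choice(list(model.items()))
                ptarget = pr + gamma * max(Q[(ps2, b)] for b in actions)
                Q[(ps, pa)] += alpha * (ptarget - Q[(ps, pa)])
            s = s2
    return Q
```

Setting n_planning to 0 recovers plain Q-learning; increasing it trades environment samples for model samples, as discussed next.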
Thus, we see that the number of updates to the policy can be increased without
more environment samples. By choosing the value for 𝑁, we can tune how many
of the policy updates will be environment samples, and how many will be model
samples. In the larger problems that we will see later in this chapter, the ratio
of environment-to-model samples is often set at, for example, 1 : 1000, greatly
reducing sample complexity. The questions then become, of course: how good is the
model, and: how far is the resulting policy from a model-free baseline?
It is time to illustrate how Dyna-Q works with an example. For that, we turn to one
of our favorites, the Taxi world.
Let us see what the effect of imagining with a model can be. Please refer to
Fig. 5.3. We use our simple maze example, the Taxi maze, with zero imagination
(𝑁 = 0), and with large imagination (𝑁 = 50). Let us assume that the reward at all
states returned by the environment is 0, except for the goal, where the reward is +1.
In each state the usual actions are available (north, east, west, south), except at borders
or walls.
When 𝑁 = 0, with zero imagination, Dyna-Q performs exactly Q-learning,
randomly sampling action rewards, building up the Q-function, and using the
Q-values following the 𝜖-greedy policy for action selection. The purpose of the
Q-function is to act as a vessel of information to find the goal. How does our vessel
get filled with information? Sampling starts off randomly, and the Q-values fill
slowly, since the reward landscape is flat, or sparse: only the goal state returns +1,
all other states return 0. In order to fill the Q-values with actionable information
on where to find the goal, first the algorithm must be lucky enough to choose a
state next to the goal, including the appropriate action to reach the goal. Only then
the first useful reward information is found and the first non-zero step towards
finding the goal can be entered into the Q-function. We conclude that, with 𝑁 = 0,
the Q-function is filled up slowly, due to sparse rewards.
Fig. 5.4 Model-based (left) and Model-free (right). Learning changes the environment state
irreversibly (single arrow); planning changes the agent state reversibly (undo, double arrow)
What happens when we turn on planning? When we set 𝑁 to a high value, such
as 50, we perform 50 planning steps for each learning step. As we can see in the
algorithm, the model is built alongside the 𝑄-function, from environment returns.
As long as the 𝑄-function is still all zero, planning with the model will also
be useless. But as soon as one goal entry has been entered into 𝑄 and 𝑀, planning
starts to shine: the 50 planning samples on the model 𝑀 will probably find the
goal information, and may build up an entire trajectory, filling
states in the 𝑄-function with actions towards the goal.
In a way, the model-based planning amplifies any useful reward information
that the agent has learned from the environment, and plows it back quickly into
the policy function. The policy is learned much more quickly, with fewer environment
samples.
Model-free methods sample the environment and learn the policy function 𝜋(𝑠, 𝑎)
directly, in one step. Model-based methods sample the environment to learn the
policy indirectly, using a dynamics model {𝑇𝑎 , 𝑅 𝑎 } (as we see in Fig. 5.1 and 5.4,
and in Alg. 5.1).
                      Planning                  Learning
Transition model in:  Agent                     Environment
Agent can undo:       Yes                       No
State is:             Reversible by agent       Irreversible by agent
Dynamics:             Backtrack                 Forward only
Data structure:       Tree                      Path
New state:            In agent                  Sample from environment
Reward:               By agent                  Sample from environment
Synonyms:             Imagination, simulation   Sampling, rollout

Table 5.1 Differences between planning and learning
It is useful to step back for a moment to consider the place of learning and
planning algorithms in the reinforcement learning paradigm. Please refer to Table 5.1
for a summary of differences between planning and learning.
Planning with an internal transition model is reversible. When the agent uses
its own transition model to perform local actions on a local state, then the actions
can be undone, since the agent applied them to a copy in its own memory [527].3
Because of this local state memory, the agent can return to the old state, reversing
the local state change caused by the local action that it has just performed. The
agent can then try an alternative action (which it can also reverse). The agent can
use tree-traversal methods to traverse the state space, backtracking to try other
states.
In contrast to planning, learning is done when the agent does not have access
to the transition function 𝑇𝑎(𝑠, 𝑠′). The agent can get reward information by sam-
pling real actions in the environment. These actions are not played out inside the
agent but executed in the actual environment; they are irreversible and can not be
undone by the agent. Learning uses actions that irreversibly change the state of the
environment. Learning does not permit backtracking; learning algorithms learn a
policy by repeatedly sampling the environment.
Note the similarity between learning and planning: learning samples rewards
from the external environment, planning from the internal model; both use the
samples to update the policy function 𝜋(𝑠, 𝑎).
The variance of the transition model can be reduced by increasing the number
of environment samples, but there are also other approaches that we will discuss.
A popular approach for smaller problems is to use Gaussian processes, where
the dynamics model is learned by giving an estimate of the function and of the
uncertainty around the function with a covariance matrix on the entire dataset [93].
A Gaussian model can be learned from few data points, and the transition model can
be used to plan the policy function successfully. An example of this approach is the
PILCO system, which stands for Probabilistic Inference for Learning Control [188,
189]. This system was effective on Cartpole and Mountain car, but does not scale to
larger problems. We can also sample from a trajectory distribution optimized for
cost, and use that to train the policy, with a policy-based method [468]. Then we can
optimize policies with the aid of locally-linear models and a stochastic trajectory
optimizer. This approach, called Guided policy search (GPS), has been shown to
train complex policies with thousands of parameters on MuJoCo learning tasks such
as Swimming, Hopping, and Walking.
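As an illustration of the Gaussian process idea (a sketch, not the actual PILCO implementation), a one-step dynamics model with uncertainty estimates can be written with scikit-learn; the data below is a random placeholder for real (state, action, next-state) transitions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Placeholder transition data: inputs are (state, action) pairs,
# targets are next-state deltas; replace with transitions collected from the environment.
X = np.random.randn(200, 3)   # e.g. 2-dimensional state + 1-dimensional action
y = np.random.randn(200, 2)   # next_state - state

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

# The GP predicts a mean next-state delta *and* its uncertainty, which a planner
# can use to avoid trusting the model where it has seen little data.
query = np.random.randn(1, 3)
mean_delta, std = gp.predict(query, return_std=True)
```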
Another popular method to reduce variance is the ensemble method. Ensemble
methods combine multiple learning algorithms to achieve better predictive perfor-
mance; for example, a random forest of decision trees often has better predictive
performance than a single decision tree [93, 573]. The ensemble methods are used to
estimate the variance and account for it during planning. A number of researchers
have reported good results with ensemble methods on larger problems [156, 375].
For example, Chua et al. use an ensemble of probabilistic neural network mod-
els [149] in their approach named Probabilistic ensembles with trajectory sampling
(PETS). They report good results on high-dimensional simulated robotic tasks (such
as Half-cheetah and Reacher). Kurutach et al. [438] combine an ensemble of models
with TRPO, in ME-TRPO.4 In ME-TRPO an ensemble of deep neural networks is used
to maintain model uncertainty, while TRPO is used to optimize the policy on data imagined with the ensemble.
In the planner, each imagined step is sampled from the ensemble predictions (see
Alg. 5.4).
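The following sketch illustrates the ensemble idea in PyTorch (a simplified sketch, not the PETS or ME-TRPO code): each member predicts a next state, the spread of the predictions serves as an uncertainty estimate, and imagined steps can be sampled from a randomly chosen member, in the style of trajectory sampling. Training each member on its own bootstrap resample of the replay data is not shown.

```python
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    """One ensemble member: predicts the next state from (state, action)."""
    def __init__(self, state_dim, action_dim, hidden=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        # Predicting the state *delta* is a common choice for dynamics models.
        return state + self.net(torch.cat([state, action], dim=-1))

class EnsembleDynamics:
    """Ensemble of K models; their disagreement estimates model uncertainty."""
    def __init__(self, state_dim, action_dim, k=5):
        self.members = [DynamicsModel(state_dim, action_dim) for _ in range(k)]

    def predict(self, state, action):
        preds = torch.stack([m(state, action) for m in self.members])
        return preds.mean(dim=0), preds.std(dim=0)    # mean prediction, disagreement

    def sample_step(self, state, action):
        # Trajectory-sampling style: each imagined step uses a randomly chosen member.
        member = self.members[torch.randint(len(self.members), (1,)).item()]
        return member(state, action)
```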
Uncertainty modeling tries to improve the accuracy of high-dimensional models
by probabilistic methods. A different approach is the latent model approach, which
we will discuss next.
The idea in VPN is not to learn directly in the actual observation space, but first to
transform the state representations to a smaller latent representation model, also
known as an abstract model. The other functions, such as value, reward, and next-state,
then work on these smaller latent states, instead of on the more complex high-
dimensional states. In this way, planning and learning occur in a space where states
are encouraged only to contain the elements that influence value changes. Latent
space is lower-dimensional, and training and planning become more efficient.
The four functions in VPN are: (1) an encoding function, (2) a reward function,
(3) a value function, and (4) a transition function. Each function is parameterized
with its own set of parameters. To distinguish these latent-based functions
from the conventional observation-based functions 𝑅, 𝑉, 𝑇, they are denoted as
$f^{enc}_{\theta_e}$, $f^{reward}_{\theta_r}$, $f^{value}_{\theta_v}$, and $f^{trans}_{\theta_t}$ (a schematic implementation sketch follows the list below).
• The encoding function $f^{enc}_{\theta_e}: s_{actual} \to s_{latent}$ maps the observation $s_{actual}$ to
the abstract state, using a neural network $\theta_e$, such as a CNN for visual observations.
• The latent-reward function $f^{reward}_{\theta_r}: (s_{latent}, o) \to r, \gamma$ maps the latent state $s_{latent}$
and option $o$ (a kind of action) to the reward and discount factor. If the option
takes $k$ primitive actions, the network should predict the discounted sum of
the $k$ immediate rewards as a scalar. (The role of options is explained in the
paper [563].) The network also predicts the option-discount factor $\gamma$ for the number
of steps taken by the option.
• The latent-value function $f^{value}_{\theta_v}: s_{latent} \to V_{\theta_v}(s_{latent})$ maps the abstract
state to its value, using a separate neural network $\theta_v$. This value is the value of
the latent state, not of the actual observation state $V(s_{actual})$.
• The latent-transition function $f^{trans}_{\theta_t}: (s_{latent}, o) \to s'_{latent}$ maps the latent
state to the next latent state, depending also on the option.
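Schematically, the four functions could be implemented as follows in PyTorch (a sketch with illustrative shapes for image observations, not the authors' implementation; the sigmoid on the discount output is an added assumption to keep it in [0, 1]):

```python
import torch
import torch.nn as nn

class LatentModel(nn.Module):
    """VPN-style latent model: encoding, reward, value, and transition functions."""
    def __init__(self, latent_dim=256, num_options=4):
        super().__init__()
        self.encode = nn.Sequential(                 # f_enc: observation -> latent state
            nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(latent_dim),
        )
        self.option_emb = nn.Embedding(num_options, latent_dim)
        self.reward = nn.Linear(2 * latent_dim, 2)   # f_rew: (latent, option) -> (r, gamma)
        self.value = nn.Linear(latent_dim, 1)        # f_val: latent -> V(latent)
        self.transition = nn.Sequential(             # f_trans: (latent, option) -> next latent
            nn.Linear(2 * latent_dim, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )

    def step(self, s_latent, option):
        """Imagine one step in latent space: reward, discount, next value, next latent state."""
        so = torch.cat([s_latent, self.option_emb(option)], dim=-1)
        r, gamma = self.reward(so).unbind(dim=-1)
        s_next = self.transition(so)
        return r, torch.sigmoid(gamma), self.value(s_next).squeeze(-1), s_next
```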
Figure 5.5 shows how the core functions work together in the smaller, latent, space;
with 𝑥 the observed actual state, and 𝑠 the encoded latent state [563].
The figure shows a single rollout step, planning one step ahead. However, a
model also allows looking further into the future, by performing multi-step rollouts.
Of course, this requires a highly accurate model, otherwise the accumulated inaccu-
racies diminish the accuracy of the far-into-the-future lookahead. Algorithm 5.5
shows the pseudocode for a 𝑑-step planner for the value prediction network.
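A depth-limited planner in this spirit could expand the options in latent space and back up the best discounted outcome, along the following lines (a simplified max backup using the LatentModel sketch above; the actual VPN planner also mixes value estimates from different depths):

```python
import torch

def plan_value(model, s_latent, depth, num_options=4):
    """Estimate V(s_latent) by a depth-limited expansion over options in latent space.
    The cost is exponential in depth, so only small depths are practical."""
    if depth == 0:
        return model.value(s_latent).squeeze(-1)
    best = None
    for o in range(num_options):
        option = torch.tensor([o])
        r, gamma, _, s_next = model.step(s_latent, option)
        q = r + gamma * plan_value(model, s_next, depth - 1, num_options)
        best = q if best is None else torch.maximum(best, q)
    return best
```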
The networks are trained with 𝑛-step Q-learning and TD search [708]. Trajec-
tories are generated with an 𝜖-greedy policy using the planning algorithm from
Alg. 5.5. VPN achieved good results on Atari games such as Pacman and Seaquest,
outperforming model-free DQN, and outperforming observation-based planning in
stochastic domains.
Another relevant approach is presented in a sequence of papers by Hafner et
al. [307, 308, 309]. Their PlaNet and Dreamer approaches use latent models based
on a Recurrent State Space Model (RSSM), that consists of a transition model, an ob-
servation model, a variational encoder and a reward model, to improve consistency
between one-step and multi-step predictions in latent space [397, 121, 199].
In the next section we will further discuss the performance of latent models.
The latent-model approach reduces the dimensionality of the observation space.
Dimensionality reduction is related to unsupervised learning (Sect. 1.2.2), and
autoencoders (Sect. B.2.6). The latent-model approach is also related to world models,
a term used by Ha and Schmidhuber [302, 303]. World models are inspired by the
manner in which humans are thought to construct a mental model of the world in
which we live. Ha et al. implement world models using generative recurrent neural
networks that generate states for simulation using a variational autoencoder [410,
411] and a recurrent network. Their approach learns a compressed spatial and
temporal representation of the environment. By using features extracted from the
world model as inputs to the agent, a compact and simple policy can be trained to
solve a task, and planning occurs in the compressed world. The term world model
goes back to 1990, see Schmidhuber [669].
Latent models and world models achieve promising results and are, despite their
complexity, an active area of research, see, for example [872].
We now discuss two planning approaches that are designed to be forgiving for models that contain
inaccuracies. The planners try to reduce the impact of the inaccuracy of the model,
for example, by planning ahead with a limited horizon, and by re-learning and
re-planning at each step of the trajectory. We will start with planning with a limited
horizon.
At each planning step, the local transition model 𝑇𝑎(𝑠) → 𝑠′ computes the new state,
using the local reward to update the policy. Due to the inaccuracies of the internal
model, planning algorithms that perform many steps will quickly accumulate model
errors [295]. Full rollouts of long, inaccurate, trajectories are therefore problematic.
We can reduce the impact of accumulated model errors by not planning too far
ahead. For example, Gu et al. [295] perform experiments with locally linear models
that roll out planning trajectories of length 5 to 10. This reportedly works well for
MuJoCo tasks Gripper and Reacher.
In another experiment, Feinberg et al. [237] allow imagination to a fixed look-
ahead depth, after which value estimates are split into a near-future model-based
component and a distant future model-free component (Model-based value expan-
sion, MVE). They experiment with model horizons of 1, 2, and 10, and find that 10
generally performs best on typical MuJoCo tasks such as Swimmer, Walker, and
Cheetah. The sample complexity in their experiments is better than that of model-free
methods such as DDPG [704]. Similarly good results are reported by others [375, 392],
with a model horizon that is much shorter than the task horizon.
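Written out, the idea of model-based value expansion with horizon $H$ is to combine $H$ imagined reward steps with a model-free value estimate at the horizon (notation simplified; $\hat{T}$ and $\hat{R}$ are the learned model, $V_\phi$ a learned critic):

$\hat{V}_H(s_0) = \sum_{t=0}^{H-1} \gamma^{t}\, \hat{r}_t + \gamma^{H} V_\phi(\hat{s}_H), \qquad \hat{s}_{t+1} = \hat{T}(\hat{s}_t, a_t), \quad \hat{r}_t = \hat{R}(\hat{s}_t, a_t).$

Model errors can only accumulate over the first $H$ steps; everything beyond the horizon is handled by the model-free estimate.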
Model-Predictive Control
Taking the idea of shorter trajectories for planning than for learning further, we
arrive at decision-time planning [466], also known as Model-predictive control
(MPC) [439, 261]. Model-predictive control is a well-known approach in process
engineering, to control complex processes with frequent re-planning over a limited
time horizon. Model-predictive control uses the fact that many real-world processes
are approximately linear over a small operating range (even though they can be
highly non-linear over a longer range). In MPC the model is optimized for a limited
time into the future, and then it is re-learned after each environment step. In this
way small errors do not get a chance to accumulate and influence the outcome
greatly. Related to MPC are other local planning methods. All try to reduce the
impact of the use of an inaccurate model by not planning too far into the future
and by updating the model frequently. Applications are found in the automotive
industry and in aerospace, for example for terrain-following and obstacle-avoidance
algorithms [393].
MPC has been used in various model learning approaches. Both Finn et al.
and Ebert et al. [243, 218] use a form of MPC in the planning for their Visual
foresight robotic manipulation system. Given an image, the MPC part uses a model that generates
the corresponding sequences of future frames for candidate action sequences, in order to select the least-cost
sequence of actions. This approach is able to perform multi-object manipulation,
pushing, picking and placing, and cloth-folding tasks (which adds the difficulty of
material that changes shape as it is being manipulated).
Another approach is to use ensemble models for learning the transition model,
with MPC for planning. PETS [149] uses probabilistic ensembles [447] to learn the model,
and plans with the cross-entropy method (CEM) [183, 99]. In MPC fashion, only the first
action of the CEM-optimized sequence is executed, re-planning at every environment
step. Many model-based approaches combine MPC and the ensemble method, as we
will also see in the overview in Table 5.2 at the end of the next section. Algorithm 5.6
shows in pseudocode an example of Model-predictive control (based on [548], only
the model-based part is shown).6
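In the spirit of Alg. 5.6 (and of the code of [548]), the core of a random-shooting MPC planner can be sketched as follows; the learned dynamics model and the task cost function are assumed to be given, and all names are illustrative:

```python
import numpy as np

def mpc_action(model, cost_fn, state, action_dim, horizon=10, n_candidates=1000):
    """Random-shooting MPC: sample candidate action sequences, roll them out through
    the learned model, and return only the first action of the cheapest sequence."""
    # state: 1-D numpy array; actions assumed to lie in [-1, 1].
    sequences = np.random.uniform(-1.0, 1.0, size=(n_candidates, horizon, action_dim))
    states = np.repeat(state[None, :], n_candidates, axis=0)
    costs = np.zeros(n_candidates)
    for t in range(horizon):
        actions = sequences[:, t, :]
        states = model.predict(states, actions)   # batched one-step dynamics (assumed)
        costs += cost_fn(states, actions)         # per-candidate task cost (assumed)
    return sequences[np.argmin(costs), 0, :]      # execute one action, then re-plan

# Control loop: re-planning at every step keeps model errors from accumulating.
# state = env.reset()
# for _ in range(1000):
#     action = mpc_action(model, cost_fn, state, action_dim=env.action_space.shape[0])
#     state, reward, done, _ = env.step(action)
```

Replacing the uniform sampling by CEM, which iteratively refits the sampling distribution to the best candidates, gives a PETS-style planner.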
MPC is a simple and effective planning method that is well-suited for use with
inaccurate models, by restricting the planning horizon and by re-planning. It has
also been used with success in combination with latent models [308, 390].
Up until now, the learning of the dynamics model and its use are performed by
separate algorithms. In the previous subsection differentiable transition models were
learned through backpropagation and then the models were used by a conventional
hand-crafted procedural planning algorithm, such as depth-limited search, with
hand-coded selection and backup rules.
A trend in machine learning is to replace all hand-crafted algorithms by differentiable
approaches that are trained by example, end-to-end. These differentiable
approaches often are more general and perform better than their hand-crafted
6 The code is at https://github.com/anagabandi/nn_dynamics.
versions.7 We could ask whether it is possible to make the planning
phase differentiable as well. Or, as a first step, can the planning rollouts be
implemented in a single computational model, the neural network?
At first sight, it may seem strange to think of a neural network as something
that can perform planning and backtracking, since we often think of a neural
network as a state-less mathematical function. Neural networks normally perform
transformation and filter activities to achieve selection or classification. Planning
consists of action selection and state unrolling. Note, however, that recurrent neural
networks and LSTMs contain implicit state, making them a candidate to be used
for planning (see Sect. B.2.5). Let us see how it is possible to perform planning with
a neural network.
Tamar et al. [747] introduced Value Iteration Networks (VIN), convolutional
networks for planning in Grid worlds. A VIN is a differentiable multi-layer network
that can execute the steps of a simple planning algorithm [561]. The core idea is
that in a Grid world, value iteration can be implemented by a multi-layer convolutional
network: each layer does a step of lookahead (refer back to Listing 2.1 for
value iteration). The value iterations are rolled out in the network layers 𝑆 with 𝐴
channels, and the CNN architecture is shaped specifically for each problem task.
Through backpropagation the model learns the value iteration parameters including
the transition function. The aim is to learn a general model, that can navigate in
unseen environments.
Let us look in more detail at the value iteration algorithm. It is a simple algorithm
that consists of a doubly nested loop over states and actions, calculating the sum of
rewards $Q[s, a] = \sum_{s' \in S} T_a(s, s') \, (R_a(s, s') + \gamma V[s'])$ and a subsequent maximization
operation $V[s] = \max_a (Q[s, a])$. This double loop is iterated to convergence. The insight
is that each iteration can be implemented by passing the previous value function
𝑉𝑛 and reward function 𝑅 through a convolution layer and max-pooling layer. In
this way, each channel in the convolution layer corresponds to the Q-function for a
specific action—the innermost loop—and the convolution kernel weights correspond
to the transitions. Thus by recurrently applying a convolution layer 𝐾 times, 𝐾
iterations of value iteration are performed.
The value iteration module is simply a neural network that has the capability
of performing an approximate value iteration computation. Representing value
iteration in this form makes learning the MDP parameters and functions natural—by
backpropagating through the network, as in a standard CNN. In this way, the classic
value iteration algorithm can be approximated with a neural network.
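A schematic PyTorch fragment of such a value-iteration module could look as follows (shapes and names are illustrative, and this is a simplification of the architecture in [747]): the reward map and the current value map are stacked, a convolution produces one Q-channel per action, and a max over the channels yields the new value map.

```python
import torch
import torch.nn as nn

class ValueIterationModule(nn.Module):
    """K recurrent applications of convolution + channel-wise max,
    approximating K sweeps of value iteration on a grid."""
    def __init__(self, num_actions, k=20):
        super().__init__()
        self.k = k
        # Input channels: reward map and value map; output channels: one Q-map per action.
        self.q_conv = nn.Conv2d(2, num_actions, kernel_size=3, padding=1, bias=False)

    def forward(self, reward_map):
        # reward_map: (batch, 1, H, W); the value map starts at zero.
        v = torch.zeros_like(reward_map)
        for _ in range(self.k):
            q = self.q_conv(torch.cat([reward_map, v], dim=1))   # (batch, A, H, W)
            v, _ = q.max(dim=1, keepdim=True)                    # max over actions
        return v
```

The convolution kernel plays the role of the (learned) transition function, and is trained by backpropagating through all K iterations.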
Why would we want to have a fully differentiable algorithm that can only give
an approximation, if we have a perfectly good classic procedural implementation
that can calculate the value function 𝑉 exactly?
7 Note that here we use the term end-to-end to indicate the use of differentiable methods for
the learning and use of a dynamics model—to replace hand-crafted planning algorithm to use
the learned model. Elsewhere, in supervised learning, the term end-to-end is used differently,
to describe learning both features and their use from raw pixels for classification—to replace
hand-crafted feature recognizers to pre-process the raw pixels and use in a hand-crafted machine
learning algorithm.
The reason is generalization. The exact algorithm only works for known tran-
sition probabilities. The neural network can learn 𝑇 (·) when it is not given, from
the environment, and it learns the reward and value functions at the same time. By
learning all functions all at once in an end-to-end fashion, the dynamics and value
functions might be better integrated than when a separately hand-crafted planning
algorithm uses the results of a learned dynamics model. Indeed, reported results do
indicate good generalization to unseen problem instances [747].
The idea of planning by gradient descent has existed for some time—actually, the
idea of learning all functions by example has existed for some time—several authors
explored learning approximations of dynamics in neural networks [402, 670, 368].
The VINs can be used for discrete and continuous path planning, and have been
tried in Grid world problems and natural language tasks.
Later work has extended the approach to other applications of more irregular
shape, by adding abstraction networks [667, 723, 709]. Latent models increase the
power and versatility of end-to-end learning of planning and transitions even
further. Let us look briefly in more detail at one such extension of VIN, to illustrate
how latent models and planning go together. TreeQN by Farquhar et al. [235] is
a fully differentiable model learner and planner, using observation abstraction so
that the approach works on applications that are less regular than mazes.
TreeQN consists of five differentiable functions, four of which we have seen in
the previous section in Value Prediction Networks [563], Fig. 5.5 on page 130.
• The encoding function consists of a series of convolutional layers that embed
the actual state in a lower-dimensional state: $s_{latent} \leftarrow f^{enc}_{\theta_e}(s_{actual})$.
• The transition function uses a fully connected layer per action to calculate the
next-state representation: $s'_{latent} \leftarrow f^{trans}_{\theta_t}(s_{latent}, a_i)$ for each action $a_i, i = 0, \ldots, I$.
• The reward function predicts the immediate reward for every action $a_i \in A$ in
state $s'_{latent}$ using a ReLU layer: $r \leftarrow f^{reward}_{\theta_r}(s'_{latent})$.
• The value function of a state is estimated with a vector of weights: $V(s_{latent}) \leftarrow w^{\top} s_{latent} + b$.
• The backup function applies a softmax function8 recursively to calculate the tree
backup value: $b(x) \leftarrow \sum_{i=0}^{I} x_i \, \mathrm{softmax}(x)_i$.
These functions together can learn a model, and can also execute 𝑛-step Q-learning,
to use the model to update a policy. Further details can be found in [235] and the
GitHub code.9 TreeQN has been applied on games such as box-pushing and some
Atari games, and outperformed DQN.
A limitation of VIN is that the tight connection between problem domain, iteration
algorithm, and network architecture limits the applicability to other problems.
Like TreeQN, the Predictron [709] introduces an abstract model to remove this
limitation. As in VPN, the latent model consists of four differentiable components:
a representation model, a next-state model, a reward model, and a discount model.
8 The softmax function normalizes an input vector of real numbers to a probability distribution
in $[0, 1]$: $p_\theta(y \mid x) = \mathrm{softmax}(f_\theta(x)) = \frac{e^{f_\theta(x)}}{\sum_k e^{f_{\theta,k}(x)}}$.
9 See https://github.com/oxwhirl/treeqn for the code of TreeQN.
The goal of the abstract model in Predictron is to facilitate value prediction (not
state prediction) or prediction of pseudo-reward functions that can encode special
events, such as staying alive or reaching the next room. The planning part rolls
forward its internal model 𝑘 steps. Unlike VPN, Predictron uses joint parameters.
The Predictron has been applied to procedurally generated mazes and a simulated
pool domain. In both cases it outperformed model-free algorithms.
End-to-end model-based learning-and-planning is an active area of research.
Challenges include understanding the relation between planning and learning [18,
288], achieving performance that is competitive with classical planning algorithms
and with model-free methods, and generalizing the class of applications. In Sect. 5.3
more methods will be shown.
Conclusion
In the previous sections we have discussed two methods to reduce the inaccuracy
of the model, and two methods to reduce the impact of the use of an inaccurate
model. We have seen a range of different approaches to model-based algorithms.
Many of the algorithms were developed recently. Deep model-based reinforcement
learning is an active area of research.
Ensembles and MPC have improved the performance of model-based reinforce-
ment learning. The goal of latent or world models is to learn the essence of the do-
main, reducing the dimensionality, and for end-to-end, to also include the planning
part in the learning. Their goal is generalization in a fundamental sense. Model-free
learns a policy of which action to take in each state. Model-based methods learn
the transition model, from state (via action) to state. Model-free teaches you how to
best respond to actions in your world, model-based helps you to understand your
world. By learning the transition model (and possibly even how to best plan with
it) it is hoped that new generalization methods can be learned.
The goal of model-based methods is to get to know the environment so intimately
that the sample complexity can be reduced while staying close to the solution
quality of model-free methods. A second goal is that the generalization power of
the methods improves so much, that new classes of problems can be solved. The
literature is rich and contains many experiments of these approaches on different
environments. Let us see if we have succeeded.
We see that a few approaches use smaller 2D Grid world navigation tasks such
as mazes, or block puzzles, such as Sokoban, and Pacman. Grid world tasks are
some of the oldest problems in reinforcement learning, and they are used frequently
to test out new ideas. Tabular imagination approaches such as Dyna, and some
latent model and end-to-end learning and planning, have been evaluated with
these environments. They typically achieve good results, since the problems are of
moderate complexity.
Grid world navigation problems are quintessential sequential decision problems.
Navigation problems are typically low-dimensional, and no visual recognition is
involved; transition functions are easy to learn.
Navigation tasks are also used for latent model and end-to-end learning. Three
latent model approaches in Table 5.2 use navigation problems. I2A deals with model
imperfections by introducing a latent model, based on Chiappa et al. and Buesing et
al. [144, 121]. I2A is applied to Sokoban and Mini-Pacman by [614, 121]. Performance
compares favorably to model-free learning and to planning algorithms (MCTS).
Next, we see papers that use MuJoCo to model continuous robotic problems. Robotic
problems are high-dimensional problems with continuous action spaces. MuJoCo
is used by most experiments in this category to simulate the physical behavior of
robotic movement and the environment.
Uncertainty modeling with ensembles and replanning with MPC both try to reduce or
contain model inaccuracies.
The combination of ensemble methods with MPC is well suited for robotic
problems, as we have seen in individual approaches such as PILCO and PETS. Let
us have a closer look at how well the uncertainty modeling and MPC methods
succeed at achieving our first goal, low sample complexity in high-dimensional
problems. A benchmark study by Wang et al. [827] shows that ensemble methods
and Model-predictive control indeed achieve good results on MuJoCo tasks, and
do so in significantly fewer time steps than model-free methods, typically in 200k
time steps versus 1 million for model-free. Unfortunately, although the sample
complexity may be lower, the wall-clock time may not be, with model-free methods
such as PPO and SAC being much faster for some problems. The score that the
policy achieves varies greatly for different problems, and is sensitive to different
hyperparameter values.
Furthermore, in some experiments with a large number of time steps performance
of model-based methods plateaus well below model-free performance, indicating
the need for further research. Additionally, the performance of the model-based
methods themselves differed substantially, also indicating a need for further research.
Recent surveys are [528, 600].
Some experiments use the Arcade learning environment (ALE). ALE features high-
dimensional inputs, and provides one of the most challenging environments of the
table. Latent models in particular choose Atari games to showcase their performance,
and some do indeed achieve impressive results, in that they are able to solve new
problems, such as playing all 57 Atari games well (Dreamer-v2) [309] and learning
the rules of Atari and chess (MuZero) [678].
Hafner et al. published the papers Dream to control: learning behaviors by latent
imagination, and Dreamer v2 [307, 309]. Their work extends the work on VPN and
PlaNet with more advanced latent models and reinforcement learning methods [563,
308]. Dreamer uses an actor-critic approach to learn behaviors that consider rewards
beyond the horizon. Values are backpropagated through the value model, similar to
DDPG [479] and Soft actor critic [305]. An important advantage of model-based
reinforcement learning is that it can generalize to unseen environments with similar
dynamics [687]. The Dreamer experiments showed that latent models are indeed
more robust to unseen environments than model-free methods. Dreamer is tested
with applications from the DeepMind control suite (Sect. 4.3.2).
Value prediction networks are another latent approach. They outperform model-
free DQN on mazes and Atari games such as Seaquest, QBert, Krull, and Crazy
Climber.
Taking the development of end-to-end learner/planners such as VPN and Predic-
tron further is the work on MuZero [678, 289, 358]. In MuZero a new architecture
is used to learn the transition functions for a range of different games, from Atari
to board games. MuZero learns the transition model for all games from interaction
with the environment.10 The MuZero model includes different modules: a represen-
tation, dynamics, and prediction function. Like AlphaZero, MuZero uses a refined
version of MCTS for planning (see Sect. 6.2.1.2 in the next chapter). This MCTS
planner is used in a self-play training loop for policy improvement. MuZero is able
to learn the rules of Atari games as well as board games, learning to play the games
from scratch, in conjunction with learning the rules of the games. The MuZero
achievements have created follow-up work to provide more insight into the relation-
ship between actual and latent representations, and to reduce the computational
demands [289, 186, 39, 333, 18, 679, 288, 859].
Latent models reduce observational dimensionality to a smaller model and do
planning in latent space. End-to-end learning and planning is able to learn new
problems—the second of our two goals: it is able to learn to generalize navigation
tasks, and to learn the rules of chess and Atari. These are new problems that are
out of reach for model-free methods (although the sample complexity of MuZero is
quite large).
Conclusion
The experiments used many different environments within the ALE and Mu-
JoCo suites. It would appear that there is no standardized unified benchmark for
continuous and discrete problems. Or is there? In the next two chapters we will
study multi-agent problems, where we encounter a new set of benchmarks, with a
state space of many combinations, including hidden information or simultaneous
actions.
Before we go to the next chapter, let us take a closer look at how one of these
methods achieves efficient learning of a complex high-dimensional task. We will
look at PlaNet, a well-documented project by Hafner et al. [308]. Code is available,11
scripts are available, videos are available, and a blog is available12 inviting us to
take the experiments further. The name of the work is Learning latent dynamics
for planning from pixels, which describes what the algorithm does: use high-dimensional visual
input, convert it to latent space, and plan in latent space to learn robot locomotion
dynamics.
PlaNet solves continuous control tasks that include contact dynamics, partial
observability, and sparse rewards. The applications used in the PlaNet experiments
are: (a) Cartpole (b) Reacher (c) Cheetah (d) Finger (e) Cup and (f) Walker (see
Fig. 5.6). The Cartpole task is a swing-up task, with a fixed viewpoint. The cart can
be out of sight, requiring the agent to remember information from previous frames.
The Finger spin task requires predicting the location of two separate objects and
their interactions. The Cheetah task involves learning to run. It includes contacts
of the feet with the ground that require a model to predict multiple futures. In the
Cup task, the agent must catch a ball in a cup. It provides a sparse reward signal only once the
ball is caught, requiring accurate predictions far into the future. The Walker task
involves a simulated robot that begins lying on the ground, and must learn to stand
up and then walk. PlaNet performs well on these tasks. On DeepMind control tasks
it achieves higher accuracy than an A3C or a D4PG agent. It reportedly does so
using about 50 times (5000%) fewer interactions with the environment on average.
11 https://github.com/google-research/planet
12 https://planetrl.github.io
As usual, this does require having the right versions of the right libraries installed,
which may be a challenge and may require some creativity on your part. The
required versions are listed on the GitHub page. The blog also contains videos and
pictures of what to expect, including comparisons to model-free baselines from the
DeepMind Control Suite (A3C, D4PG).
The experiments show the viability of the idea to use rewards and values to
compress actual states into lower dimensional latent states, and then plan with
these latent states. Value-based compression treats details of the high-dimensional
actual states that are not relevant for improving the value function as noise, and
discards them [289]. To help understand how the actual states map to the latent states,
see, for example, [186, 471, 400].
This has been a diverse chapter. We will summarize the chapter, and provide refer-
ences for further reading.
Summary
Model-free methods sample the environment using the rewards to learn the policy
function, providing actions for all states for an environment. Model-based methods
use the rewards to learn the transition function, and then use planning methods
to sample the policy from this internal model. Metaphorically speaking: model-
free learns how to act in the environment, model-based learns how to be the
environment. The learned transition model acts as a multiplier on the amount of
information that is used from each environment sample. A consequence is that
model-based methods have a lower sample complexity, although, when the agent’s
transition model does not perfectly reflect the environment’s transition function,
the performance of the policy may be worse than a model-free policy (since that
always uses the environment to sample from).
Another, and perhaps more important aspect of the model-based approach, is
generalization. Model-based reinforcement learning builds a dynamics model of
the domain. This model can be used multiple times, for new problem instances,
13 https://github.com/google-research/planet
but also for related problem classes. By learning the transition and reward model,
model-based reinforcement learning may be better at capturing the essence of a
domain than model-free methods, and thus be able to generalize to variations of
the problem.
Imagination showed how to learn a model and use it to fill in extra samples
based on the model (not the environment). For problems where exact methods work,
imagination can be many times more efficient than model-free methods.
Note that when the agent has access to the transition model, it can apply reversible
planning algorithms, in addition to one-way learning with samples. There is
a large literature on backtracking and tree-traversal algorithms that can be used. Using
a look-ahead of more than one step can increase the quality of the reward even more.
When the problem size increases, or when we perform deep multi-step look-ahead,
the accuracy of the model becomes critical. For high-dimensional problems high
capacity networks are used that require many samples to prevent overfitting. Thus
a trade-off exists, to keep sample complexity low.
Methods such as PETS aim to take the uncertainty of the model into account
in order to increase modeling accuracy. Model-predictive control methods re-plan
at each environment step to prevent over-reliance on the accuracy of the model.
Classical tabular approaches and Gaussian process approaches have been quite
successful in achieving low sample complexity for small problems [742, 190, 416].
Latent models observe that in many high-dimensional problems the factors that
influence changes in the value function are often lower-dimensional. For example,
the background scenery in an image may be irrelevant for the quality of play in
a game, and has no effect on the value. Latent models use an encoder to translate
the high-dimensional actual state space into a lower-dimensional latent state space.
Subsequent planning and value functions work on the (much smaller) latent space.
Finally, we considered end-to-end model-based algorithms. These fully differen-
tiable algorithms not only learn the dynamics model, but also learn the planning
algorithm that uses the model. The work on Value iteration networks [747] inspired
recent work on end-to-end learning, where both the transition model and the plan-
ning algorithm are learned, end-to-end. Combined with latent models (or world
models [303]), impressive results were achieved [709], and the model and planning
accuracy was improved to the extent that tabula rasa self-learning of game rules
was achieved in MuZero [678], for chess, shogi, Go, and Atari games.
Further Reading
390, 468, 467, 325, 243]. For Model-predictive control see [548, 261, 504, 425, 295,
237, 38].
Latent models is an active field of research. Two of the earlier works are [562, 563],
although the ideas go back to World models [303, 402, 670, 368]. Later, a sequence
of PlaNet and Dreamer papers was influential [308, 307, 687, 309, 873].
The literature on end-to-end learning and planning is also extensive, starting
with VIN [747], see [22, 703, 705, 709, 550, 297, 235, 678, 238].
As applications became more challenging, notably in robotics, other methods
were developed, mostly based on uncertainty, see for surveys [190, 416]. Later, as
high-dimensional problems became prevalent, latent and end-to-end methods were
developed. Surveys are [600, 528], a comprehensive benchmark study is [827].
Exercises
Questions
Below are first some quick questions to check your understanding of this chapter.
For each question a simple, single sentence answer is sufficient.
1. What is the advantage of model-based over model-free methods?
2. Why may the sample complexity of model-based methods suffer in high-
dimensional problems?
3. Which functions are part of the dynamics model?
4. Mention four model-based approaches.
5. Do model-based methods achieve better sample complexity than model-free?
6. Do model-based methods achieve better performance than model-free?
7. In Dyna-Q the policy is updated by two mechanisms: learning by sampling the
environment and what other mechanism?
8. Why is the variance of ensemble methods lower than of the individual machine
learning approaches that are used in the ensemble?
9. What does model-predictive control do and why is this approach suited for
models with lower accuracy?
10. What is the advantage of planning with latent models over planning with actual
models?
11. How are latent models trained?
12. Mention four typical modules that constitute the latent model.
13. What is the advantage of end-to-end planning and learning?
14. Mention two end-to-end planning and learning methods.
Exercises
It is now time to introduce a few programming exercises. The main purpose of the
exercises is to become more familiar with the methods that we have covered in this
chapter. By playing around with the algorithms and trying out different hyperpa-
rameter settings you will develop some intuition for the effect on performance and
run time of the different methods.
The experiments may become computationally expensive. You may want to
consider running them in the cloud, with Google Colab, Amazon AWS, or Microsoft
Azure. They may have student discounts, and they will have the latest GPUs or
TPUs for use with TensorFlow or PyTorch.
1. Dyna Implement tabular Dyna-Q for the Gym Taxi environment (a starter snippet follows
this list). Vary the amount of planning 𝑁 and see how performance is influenced.
2. Keras Make a function approximation version of Dyna-Q and Taxi, with Keras.
Vary the capacity of the network and the amount of planning. Compare against
a pure model-free version, and note the difference in performance for different
tasks and in computational demands.
3. Planning In Dyna-Q, planning has so far been with single step model samples.
Implement a simple depth-limited multi-step look-ahead planner, and see how
performance is influenced for the different look-ahead depths.
4. MPC Read the paper by Nagabandi et al. [548] and download the code.14 Acquire
the right versions of the libraries, and run the code with the supplied scripts,
just for the MB (model-based) versions. Note that plotting is also supported by
the scripts. Run with different MPC horizons. Run with different ensemble sizes.
What are the effects on performance and run time for the different applications?
5. PlaNet Go to the PlaNet blog and read it (see previous section).15 Go to the PlaNet
GitHub site and download and install the code.16 Install the DeepMind control
suite,17 and all necessary versions of the support libraries.
Run Reacher and Walker in PlaNet, and compare against the model-free methods
D4PG and A3C. Vary the size of the encoding network and note the effect on
performance and run time. Now turn off the encoder, and run with planning on
actual states (you may have to change network sizes to achieve this). Vary the
capacity of the latent model, and of the value and reward functions. Also vary
the amount of planning, and note its effect.
6. End-to-end As you have seen, these experiments are computationally expen-
sive. We will now turn to end-to-end planning and learning (VIN and MuZero).
This exercise is also computationally expensive. Use small applications, such
as small mazes, and Cartpole. Find and download a MuZero implementation
from GitHub and explore using the experience that you have gained from the
previous exercises. Focus on gaining insight into the shape of the latent space.
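For Exercise 1, a minimal starting point could look as follows, assuming the classic Gym API and the tabular dyna_q sketch shown earlier in this chapter; the evaluation function simply measures the average return of the greedy policy:

```python
import gym

def evaluate(env, Q, episodes=100):
    """Average undiscounted return of the greedy policy derived from Q."""
    total = 0.0
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = max(range(env.action_space.n), key=lambda b: Q[(s, b)])
            s, r, done, _ = env.step(a)
            total += r
    return total / episodes

env = gym.make("Taxi-v3")
for n in (0, 5, 50):                                  # vary the amount of planning
    Q = dyna_q(env, n_planning=n, episodes=2000)      # dyna_q as sketched earlier
    print(f"N={n}: average return {evaluate(env, Q):.1f}")
```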
14 https://github.com/anagabandi/nn_dynamics
15 https://planetrl.github.io
16 https://github.com/google-research/planet
17 https://github.com/deepmind/dm_control
18 https://github.com/werner-duvaud/muzero-general
19 https://github.com/kaesve/muzero
Chapter 6
Two-Agent Self-Play
Previous chapters were concerned with how a single agent can learn optimal
behavior for its environment. This chapter is different. We turn to problems in which
two agents operate, both of whose behavior will be modeled (and, in the next chapter,
more than two agents).
Two-agent problems are interesting for two reasons. First, the world around us is
full of active entities that interact, and modeling two agents and their interaction is
a step closer to understanding the real world than modeling a single agent. Second,
in two-agent problems exceptional results were achieved—reinforcement learning
agents teaching themselves to become stronger than human world champions—and
by studying these methods we may find a way to achieve similar results in other
problems.
The kind of interaction that we model in this chapter is zero-sum: my win is
your loss and vice versa. These two-agent zero-sum dynamics are fundamentally
different from single-agent dynamics. In single agent problems the environment lets
you probe it, lets you learn how it works, and lets you find good actions. Although
the environment may not be your friend, it is also not working against you. In
two-agent zero-sum problems the environment does try to beat you: it actively
changes its replies to minimize your reward, based on what it learns from your
actions. Learning our optimal policy should take all possible counter-actions into
account.
A popular way to do so is to implement the environment’s actions with self-play:
we replace the environment by a copy of ourselves. In this way we let ourselves
play against an opponent that has all the knowledge that we currently have, and
the agents learn from each other.
We start with a short review of two-agent problems, after which we dive into self-
learning. We look at the situation where both agents know the transition function
perfectly, so that model accuracy is no longer a problem. This is the case, for
example, in games such as chess and Go, where the rules of the game determine
how we can go from one state to another. We will discuss the minimax principle. In
self-learning the environment is used to generate training examples for the agent to
train a better policy, after which the better agent policy is used in this environment
to train the agent, and again, and again, creating a virtuous cycle of self-learning
and mutual improvement. It is possible for an agent to teach itself to play a game
without any prior knowledge at all. A program that teaches itself to play a game
from zero knowledge is said to perform tabula rasa learning, learning from a blank
slate.
The self-play systems that we describe in this chapter use model-based methods.
The systems use combinations of planning and learning approaches. There is one
planning algorithm that we have mentioned so far, but have not yet explained
in detail. In this chapter we will discuss Monte Carlo Tree Search, or MCTS, a
highly popular planning algorithm. MCTS can be used in single agent and in two-
agent situations, and is the core of many successful applications, including the
self-learning AlphaZero series of programs. We will explain how self-learning and
self-play work in AlphaGo Zero, and why they work so well. We will then discuss
the concept of curriculum learning, which is behind the success of self-learning.
The chapter is concluded with exercises, a summary, and pointers to further
reading.
Core Concepts
• Self-play
• Curriculum learning
Core Problem
• Use a given transition model for self-play, in order to become stronger than the
current best players
Core Algorithms
• Monte Carlo Tree Search
• Self-play (AlphaGo Zero)
Self-Play in Games
We have seen in Chap. 5 that when the agent has a transition model of the envi-
ronment, it can achieve greater performance, especially when the model has high
accuracy. What if the accuracy of our model were perfect, if the agent’s transition
function is the same as the environment’s, how far would that bring us? And what
if we could improve our environment as part of our learning process, can we then
transcend our teacher, can the sorcerer’s apprentice become more intelligent than
the wizard?
To set the scene for this chapter, let us describe the first game where this has
happened: backgammon.
In Sect. 3.2.3 we briefly discussed research into backgammon. Already in the early
1990s, the program TD-Gammon achieved stable reinforcement learning with a
shallow network. This work was started at the end of the 1980s by Gerald Tesauro,
a researcher at IBM laboratories. Tesauro was faced with the problem of getting a
program to learn beyond the capabilities of any existing entity. (In Fig. 6.1 we see
Tesauro in front of his program; image by IBM Watson Media.)
In the 1980s computing was different. Computers were slow, datasets were small,
and neural networks were shallow. Against this background, the success of Tesauro
is quite remarkable.
His programs were based on neural networks that learned good patterns of
play. His first program, Neurogammon, was trained using supervised learning,
based on games of human experts. It achieved an intermediate level of play [761].
His second program, TD-Gammon, was based on reinforcement learning, using
temporal difference learning and self-play. Combined with hand-crafted heuristics
and some planning, in 1992 it played at human championship level, becoming the
first computer program to do so in a game of skill [764].
TD-Gammon is named after temporal difference learning because it updates
its neural net after each move, reducing the difference between the evaluation of
previous and current positions. The neural network used a single hidden layer with
up to 80 units. TD-Gammon initially learned from a state of zero knowledge, tabula
rasa. Tesauro describes TD-Gammon’s self-play as follows: “The move that is selected
is the move with maximum expected outcome for the side making the move. In other
words, the neural network is learning from the results of playing against itself. This
self-play training paradigm is used even at the start of learning, when the network’s
weights are random, and hence its initial strategy is a random strategy” [763].
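In code, the idea of a temporal difference update can be sketched as follows. This is a minimal tabular illustration, not TD-Gammon’s actual TD(𝜆) network update; the names values and alpha are placeholders.

# Minimal tabular sketch of a temporal-difference update after one move:
# nudge the evaluation of the previous position toward the evaluation of the
# current position (TD-Gammon used TD(lambda) with a neural network instead).
def td_update(values, prev_state, curr_state, alpha=0.1):
    td_error = values[curr_state] - values[prev_state]  # difference between evaluations
    values[prev_state] += alpha * td_error               # reduce that difference
    return td_error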
TD-Gammon performed tabula rasa learning, from a blank slate, its neural
network weights initialized to small random numbers. It reached world-champion
level purely by self-learning, by playing against itself, learning the game as it played
along.
Such autonomous self-learning is one of the main goals that artificial intelligence
hopes to achieve. TD-Gammon’s success inspired many researchers to
try neural networks and self-play approaches, culminating eventually, many years
later, in high-profile results in Atari [521] and AlphaGo [702, 705], which we will
describe in this chapter.1
Before we look into self-play algorithms, let us look for a moment at the two-agent
games that have fascinated artificial intelligence researchers for such a long time.
Games come in many shapes and sizes. Some are easy, some are hard. The
characteristics of games are described in a fairly standard taxonomy. Important
characteristics of games are: the number of players, whether the game is zero-sum or
non-zero-sum, whether it is perfect or imperfect information, what the complexity
of taking decisions is, and what the state space complexity is. We will look at these
characteristics in more detail.
• Number of Players One of the most important elements of a game is the number
of players. One-player games are normally called puzzles, and are modeled as
a standard MDP. The goal of a puzzle is to find a solution. Two-player games
are “real” games. Quite a number of two-player games exist that provide a nice
balance between being too easy and being too hard for players (and for computer
programmers) [172]. Examples of two-player games that are popular in AI are
chess, checkers, Go, Othello, and shogi.
Multi-player games are played by three or more players. Well-known examples
of multiplayer games are the card games bridge and poker, and strategy games
such as Risk, Diplomacy, and StarCraft.
• Zero Sum versus Non Zero Sum An important aspect of a game is whether it is
competitive or cooperative. Most two-player games are competitive: the win (+1)
of player A is the loss (−1) of player B. These games are called zero sum because
the sum of the wins for the players remains a constant zero. Competition is an
important element in the real world, and these games provide a useful model for
the study of conflict and strategic behavior.
1 A modern reimplementation of TD-Gammon in TensorFlow is available on GitHub: https://github.com/fomorians/td-gammon
In contrast, in cooperative games the players win if they can find win/win
situations. Examples of cooperative games are Hanabi, bridge, Diplomacy [428,
185], poker and Risk. The next chapter will discuss multi-agent and cooperative
games.
• Perfect versus Imperfect Information In perfect information games all relevant
information is known to all players. This is the case in typical board games such
as chess and checkers. In imperfect information games some information may
be hidden from some players. This is the case in card games such as bridge and
poker, where not all cards are known to all players. Imperfect information games
can be modeled as partially observable Markov decision processes (POMDPs) [568, 692].
A special form of (im)perfect information games are games of chance, such as
backgammon and Monopoly, in which dice play an important role. There is no
hidden information in these games, and these games are sometimes considered
to be perfect information games, despite the uncertainty present at move time.
Stochasticity is not the same as imperfect information.
• Decision Complexity The difficulty of playing a game depends on the complexity
of the game. The decision complexity is the number of end positions that define
the value (win, draw, or loss) of the initial game position (also known as the
critical tree or proof tree [415]). The larger the number of actions in a position,
the larger the decision complexity. Games with small board sizes such as tic tac
toe (3 × 3) have a smaller complexity than games with larger boards, such as
gomoku (19 × 19). When the action space is very large, it can often be treated as
a continuous action space. In poker, for example, the monetary bets can be of
any size, defining an action space that is practically continuous.
• State Space Complexity The state space complexity of a game is the number of
legal positions reachable from the initial position of a game. State space and
decision complexity are normally positively correlated, since games with high
decision complexity typically have high state space complexity. Determining the
exact state space complexity of a game is a nontrivial task, since positions may
be illegal or unreachable.2 For many games approximations of the state space
have been calculated. In general, games with a larger state space complexity are
harder to play (“require more intelligence”) for humans and computers. Note
that the dimensionality of the states may not correlate with the size of the state
space, for example, the rules of some of the simpler Atari games limit the number
of reachable states, although the states themselves are high-dimensional (they
consist of many video pixels).
Fig. 6.2 Deep Blue and Garry Kasparov in May 1997 in New York
Games of strategy have been studied in artificial intelligence since the earliest days
of the field, some seventy years ago [787, 693]. These games are frequently used to
study strategic reasoning: strategies, or policies, determine the outcome. Table 6.1
summarizes some of the games that have played an important role in artificial
intelligence research.
After the 1997 defeat of chess world champion Garry Kasparov by IBM’s Deep Blue
computer (Fig. 6.2; image by Chessbase), the game of Go (Fig. 1.4) became the next
benchmark game, the Drosophila3 of AI, and research activity in Go intensified
significantly.
3 Drosophila melanogaster, also known as the fruit fly, is a favorite species of genetics researchers
for testing their theories, because experiments produce quick and clear answers.
The game of Go is more difficult than chess. It is played on a larger board (19 × 19
vs. 8 × 8), the action space is larger (around 250 moves available in a position versus
some 25 in chess), the game takes longer (typically 300 moves versus 70) and the state
space complexity is much larger: 10¹⁷⁰ for Go, versus 10⁴⁷ for chess. Furthermore,
rewards in Go are sparse. Only at the end of a long game, after many moves have
been played, is the outcome (win/loss) known. Captures are not so frequent in
Go, and no good efficiently computable heuristic has been found, as we have in
chess (the material balance in chess can be calculated efficiently, and gives a good
indication by how far ahead we are). For the computer, much of the playing in
Go happens in the dark. In contrast, for humans, it can be argued that the visual
patterns of Go may be somewhat easier to interpret than the deep combinatorial
lines of chess.
For reinforcement learning, credit assignment in Go is challenging. Rewards only
occur after a long sequence of moves, and it is unclear which moves contributed
the most to such an outcome, or whether all moves contributed equally. Many
games will have to be played to acquire enough outcomes. In conclusion, Go is
more difficult to master with a computer than chess.
Traditionally, computer Go programs followed the conventional chess design
of a minimax search with a heuristic evaluation function, that, in the case of Go,
was based on the influence of stones (see Sect. 6.2.1 and Fig. 6.4) [514]. This chess
approach, however, did not work for Go, or at least not well enough. The level of
play was stuck at mid-amateur level for many years.
The main problems were the large branching factor, and the absence of an
efficient and good evaluation function.
Subsequently, in 2006, Monte Carlo Tree Search was developed. MCTS is a
variable-depth adaptive search algorithm that does not need a heuristic function, but
instead uses random playouts to estimate the strength of a position. MCTS programs
caused the level of play to improve from 10 kyu to 2-3 dan, and even stronger on the
small 9 × 9 board.4 However, at that point performance stagnated again, and researchers
expected that world champion level play was still many years into the future. Neural
networks had been tried, but were slow, and did not improve performance much.
Playing Strength in Go
Let us compare the three programming paradigms of the different Go programs that
have been written over the years (Fig. 6.3). The programs fall into three categories.
First are the programs that use heuristic planning, the minimax-style programs.
GNU Go is a well-known example of this group of programs. The heuristics in
these programs are hand-coded. The level of play of these programs was at medium
amateur level. Next come the MCTS-based programs. They reached strong amateur
level. Finally come the AlphaGo programs, in which MCTS is combined with deep
4Absolute beginners in Go start at 30 kyu, progressing to 10 kyu, and advancing to 1 kyu (30k–1k).
Stronger amateur players then achieve 1 dan, progressing to 7 dan, the highest amateur rating for
Go (1d–7d). Professional Go players have a rating from 1 dan to 9 dan, written as 1p–9p.
Fig. 6.3 Go Playing Strength of Top Programs over the Years [20]
Fig. 6.4 Influence in the game of Go. Empty intersections are marked as being part of Black’s or
White’s Territory
self-play. These reached super-human performance. The figure also shows other
programs that follow a related approach.
Thus, Go, with its large state space and sparse rewards, provided a highly challenging
test to see how far self-play with a perfect transition function can come.
In 2016, after decades of research, the effort in Go paid off. In the years 2015–2017
the DeepMind AlphaGo team played three matches in which it beat all human
champions that it played, Fan Hui, Lee Sedol, and Ke Jie. The breakthrough perfor-
mance of AlphaGo came as a surprise. Experts in computer games had expected
grandmaster level play to be at least ten years away.
The techniques used in AlphaGo are the result of many years of research, and
cover a wide range of topics. The game of Go worked very well as Drosophila.
Important new algorithms were developed, most notably Monte Carlo Tree Search
(MCTS), and major progress was made in deep reinforcement learning. We
will provide a high-level overview of the research that culminated in AlphaGo (that
beat the champions), and its successor, AlphaGo Zero (that learns Go tabula rasa).
First we will describe the Go matches.
The games against Fan Hui were played in October 2015 in London as part of
the development effort of AlphaGo. Fan Hui is the 2013, 2014, and 2015 European
Go Champion, then rated at professional 2 dan (2p). The games against Lee Sedol were played in
May 2016 in Seoul, and were widely covered by the media (see Fig. 6.5; image by
DeepMind). Although there is no official worldwide ranking in international Go,
in 2016 Lee Sedol was widely considered one of the four best players in the world.
A year later another match was played, this time in China, against the Chinese
champion Ke Jie, who was ranked number one in the Korean, Japanese, and Chinese
ranking systems at the time of the match. All three matches were won convincingly
by AlphaGo. The defeat of the best human Go players made the cover of the journal
Nature, see Fig. 6.6.
Studying how such a high level of play is achieved is interesting, for three
reasons: (1) it is exciting to follow an AI success story, (2) it is interesting to see
which techniques were used and how it is possible to achieve beyond-human
intelligence, and (3) it is interesting to see if we can learn a few techniques that
can be used in other domains, beyond two-agent zero-sum games, to see if we can
achieve super-intelligence there as well.
Let us have a closer look at the self-learning agent architecture that is used by
AlphaGo Zero. We will see that two-agent self-play actually consists of three levels
of self-play: move-level self-play, example-level self-play, and tournament-level
self-play.
We will start at the first level, with the classic concept of minimax. Next, we will
generalize this concept to the agent-agent paradigm and example-level self-play, to
see how search-eval can be used to create a cycle of virtuous improvement (Fig. 6.9).
Then, we will look at how we can construct full self-play agents.
First, we will discuss the general architecture, and how it creates a cycle of
virtuous improvement.
In contrast to the agent/environment model, we now have two agents (Fig. 6.7). In
comparison with the model-based world of Chap. 5 (Fig. 6.8), our learned model has
been replaced by perfect knowledge of the transition rules, and the environment is
now called the opponent: the negative version of the same agent, playing the role of
agent2.
The goal of this chapter is to reach the highest possible performance in terms
of level of play, without using any domain knowledge. In applications such as
chess and Go a perfect transition model is present. Together with a learned reward
function and a learned policy function, we can create a self-learning system in which
a virtuous cycle of ever improving performance occurs. Figure 6.9 illustrates such a
system: (1) the searcher uses the evaluation network to estimate reward values and
policy actions, and the search results are used in games against the opponent in
self-play, (2) the game results are then collected in a buffer, which is used to train
the evaluation network in self-learning, and (3) by playing a tournament against
a copy of ourselves a virtuous cycle of ever-increasing function improvement is
created.
Thus, there are three levels of self-play in AlphaGo Zero, from small to large:
the first to create the environment, the second to train the policy, and the third a
1-on-1 tournament that creates a curriculum of learning tasks from easy to hard.
The three levels are:
1. Move-level: to create the environment, by playing the moves of the second player
using the minimax search principle;
2. Example-level: to improve the policy by training the reward value and policy
functions;
3. Tournament-level: to play a tournament of games creating a training curriculum
from easy to hard learning tasks.
All three levels are needed for self-play to learn a policy that performs at world
champion level.
(Figure: the self-play loop, in which the MCTS search-eval cycle produces a sequence of networks net0, net1, net2, . . . )
The self-play loop uses the search-eval functions to keep training the network with
the game results, creating a learning curriculum. Let us put these ideas into pseudocode.
(Listing: self-play pseudocode; game_pairs ← mcts, pol/val ← eval(net(state)))
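The following sketch only approximates the structure of this self-play loop; all helper names (initialize_network, converged, games_per_iteration, initial_state, terminal, mcts, make_move, score, train) are illustrative placeholders, and the three levels of self-play appear as the three nested loops.

# Hedged sketch of the tabula rasa self-play loop; helper names are placeholders.
def self_play():
    net = initialize_network()                       # random weights: tabula rasa
    while not converged(net):                        # tournament-level self-play
        buffer = []
        for _ in range(games_per_iteration):         # example-level self-play
            game_pairs, state = [], initial_state()
            while not terminal(state):
                action = mcts(state, net)            # move-level self-play: P-UCT uses the net
                game_pairs.append((state, action))
                state = make_move(state, action)
            z = score(state)                         # outcome of the finished game
            buffer += [(s, a, z) for (s, a) in game_pairs]   # (state, action, outcome) triples
        net = train(net, buffer)                     # train the policy/value network
    return net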
The MCTS search on line 5 uses the policy head of the net in P-UCT selection, and
the value head of the net at the MCTS leaves. Line 6 adds the outcome of each game to the (state, action)-pairs,
to make the (state, action, outcome)-triples for the network to train on. Note that
since the network is a two-headed policy/value net, both an action and an outcome
are needed for network training. On the last line this triples-buffer is then used to
train the network. The newly trained network is used in the next self-play iteration
as the evaluation function by the searcher. With this net another tournament is
played, using the searcher’s look-ahead to generate a next batch of higher-quality
examples, resulting in a sequence of stronger and stronger networks (Fig. 6.9 right
panel).
In the pseudocode we see the three self-play loops where the principle of playing
against a copy of yourself is used:
1. Move-level: in the MCTS playouts, our opponent actually is a copy of ourselves
(line 5)—hence, self-play at the level of game moves; the same kind of self-play
that minimax has been using since the days of Turing.
2. Example-level: the input for self-training the approximator for the policy and
the reward functions is generated by our own games (line 2)—hence, self-play at
the level of the value/policy network.
3. Tournament-level: the self-play loop creates a training curriculum that starts
tabula rasa and ends at world champion level. The system trains at the level of
the player against itself (line 1)—hence, self-play, of the third kind.
All three of these levels use their own kind of self-play, of which we will describe
the details in the following sections. We start with move-level self-play.
At the innermost level, we have to create an environment for our agent, which we
do with our agent. We use the agent to play against itself, as its own opponent.
Whenever it is my opponent’s turn to move, I play its move, trying to find the best
move for my opponent (which will be the worst possible move for me). This scheme
uses the same knowledge for player and opponent. This is different from the real
world, where the agents are different, with different brains, different reasoning
skills, and different experience. Our scheme is symmetrical: when we assume that
our agent plays a strong game, then the opponent is also assumed to play strongly,
and we can hope to learn from the strong counter play. (We thus assume that our
agent plays with the same knowledge as we have; we are not trying to consciously
exploit opponent weaknesses.)
6.2.1.1 Minimax
This principle of generating the counter play by playing yourself while switching
perspectives has been used since the start of artificial intelligence, in checkers and
chess.5
The games of chess, checkers and Go are challenging games. The architecture
that has been used to program chess and checkers players has been the same since
the earliest paper designs of Turing [787]: a search routine based on minimax which
searches to a certain search-depth, and an evaluation function to estimate the score
of board positions using heuristic rules of thumb when this search-depth is reached.
In chess and checkers, for example, the number of pieces on the board of a player
is a crude but effective approximation of the strength of a state for that player.
Figure 6.11 shows a diagram of this classic search-eval architecture.6
Based on this principle many successful search algorithms have been developed,
of which alpha-beta is the best known [415, 592]. Since the size of the state space
is exponential in the depth of lookahead, however, many enhancements had to
be developed to manage the size of the state space and allow deep lookahead to
occur [599].
The word minimax is a contraction of maximizing/minimizing (and then reversed
for easy pronunciation). It means that in zero-sum games the two players alternate
making moves, and that on even moves, when player A is to choose a move, the
5 There is also research into opponent modeling, where we try to exploit our opponent’s weak-
nesses [319, 91, 258]. Here, we assume an identical opponent, which often works best in chess and
Go.
6 Because the agent knows the transition function 𝑇, it can calculate the new state 𝑠′ for each
action 𝑎. The reward 𝑟 is calculated at terminal states, where it is equal to the value 𝑣. Hence,
in this diagram the search function provides the state to the eval function. See [787, 599] for an
explanation of the search-eval architecture.
Fig. 6.12 A minimax tree: leaf values 6 1 3, 3 4 2, 1 6 5; minimizing (circle) node values 1, 2, 1
best move is the one that maximizes the score for player A, while on odd moves
the best move for player B is the move that minimizes the score for player A.
Figure 6.12 depicts this situation in a tree. The score values in the nodes are
chosen to show how minimax works. At the top is the root of the tree, level 0, a
square node where player A is to move.
Since we assume that all players rationally choose the best move, the value of the
root node is determined by the value of the best move, the maximum of its children.
Each child, at level 1, is a circle node where player B chooses its best move, in order
to minimize the score for player A. The leaves of this tree, at level 2, are again max
squares (even though there is no child to choose from anymore). Note how for each
circle node the value is the minimum of its children, and for the square node, the
value is the maximum of the three circle nodes.
Python pseudocode for a recursive minimax procedure is shown in Listing 6.2.
Note the extra hyperparameter d. This is the search depth counting upwards from
the leaves. At depth 0 are the leaves, where the heuristic evaluation function is
called to score the board.7 Also note that the code for making moves on the board—
transitioning actions into the new states—is not shown in the code. It is assumed to
happen inside the children dictionary. We frivolously mix actions and states in these
sections, since an action fully determines which state will follow. (At the end of this
chapter, the exercises provide more detail about move making and unmaking.)
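A minimal sketch of such a depth-limited minimax could look as follows; the children dictionary and the evaluate function are illustrative, and the leaf values mirror those of Fig. 6.12.

# Minimal sketch of depth-limited minimax (not Listing 6.2; children and
# evaluate are illustrative, leaf values mirror Fig. 6.12).
children = {
    "root": ["b1", "b2", "b3"],
    "b1": ["l1", "l2", "l3"],
    "b2": ["l4", "l5", "l6"],
    "b3": ["l7", "l8", "l9"],
}
leaf_value = {"l1": 6, "l2": 1, "l3": 3, "l4": 3, "l5": 4, "l6": 2, "l7": 1, "l8": 6, "l9": 5}

def evaluate(state):
    return leaf_value.get(state, 0)          # heuristic evaluation of a board

def minimax(state, d, maximizing=True):
    if d == 0 or state not in children:      # depth 0: score the board heuristically
        return evaluate(state)
    values = [minimax(child, d - 1, not maximizing) for child in children[state]]
    return max(values) if maximizing else min(values)

print(minimax("root", 2))                    # max over the minima (1, 2, 1) -> 2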
AlphaGo Zero uses MCTS, a more advanced search algorithm than minimax,
that we will discuss shortly.
7The heuristic evaluation function is originally a linear combination of hand-crafted heuristic
rules, such as material balance (which side has more pieces) or center control. At first, the linear
combinations (coefficients) were not only hand-coded, but also hand-tuned. Later they were trained
by supervised learning [61, 612, 770, 252]. More recently, NNUE was introduced as a non-linear
neural network to use as evaluation function in an alpha-beta framework [556].
Beyond Heuristics
In the early 1990s experiments with a different approach started, based on random
playouts of a single line of play [6, 102, 118] (Figs. 6.13 and 6.14). Figure 6.14
illustrates the difference: a search of a single line of play versus a search of a full
subtree. It turned out that averaging many such playouts could also be used to
approximate the value of the root, in addition to the classic recursive tree search
approach. In 2006, a tree version of this approach was introduced that proved
successful in Go. This algorithm was called Monte Carlo Tree
Search [164, 117]. Also in that year, Kocsis and Szepesvári created a selection rule
for the exploration/exploitation trade-off that performed well and converged to the
minimax value [418]. Their rule is called UCT, for upper confidence bounds applied
to trees.
Monte Carlo Tree Search has two main advantages over minimax and alpha-beta.
First, MCTS is based on averaging single lines of play, instead of recursively travers-
ing subtrees. The computational complexity of a path from the root to a leaf is
polynomial in the search depth. The computational complexity of a tree is expo-
nential in the search depth. Especially in applications with many actions per state
it is much easier to manage the search time with an algorithm that expands one
path at a time.8
Second, MCTS does not need a heuristic evaluation function. It plays out a line
of play in the game from the root to an end position. In end-positions the score of
the game, a win or a loss, is known. By averaging many of these playouts the value
of the root is approximated. Minimax has to cope with an exponential search tree,
which it cuts off after a certain search depth, at which point it uses the heuristic
8 Compare chess and Go: in chess the typical number of moves in a position is 20, for Go this
number is 200. A chess tree of depth 5 has 20⁵ = 3,200,000 leaves; a Go tree of depth 5 has
200⁵ = 320,000,000,000 leaves. A depth-5 minimax search in Go would take prohibitively long; an
MCTS search of 1000 expansions expands the same number of paths from root to leaf in both
games.
to estimate the scores at the leaves. There are, however, games where no efficient
heuristic evaluation function can be found. In this case MCTS has a clear advantage,
since it works without a heuristic score function.
MCTS has proven to be successful in many different applications. Since its
introduction in 2006 MCTS has transformed the field of heuristic search. Let us see
in more detail how it works.
Monte Carlo Tree Search consists of four operations: select, expand, playout,
and backpropagate (Fig. 6.15). The third operation (playout) is also called rollout,
simulation, and sampling. Backpropagation is sometimes called backup. Select is
the downward policy action trial part, backup is the upward error/learning part of
the algorithm. We will discuss the operations in more detail in a short while.
MCTS is a successful planning-based reinforcement learning algorithm, with an
advanced exploration/exploitation selection rule. MCTS starts from the initial state
𝑠0, using the transition function to generate successor states. In MCTS the state
space is traversed iteratively, and the tree data structure is built step by step,
node by node, playout by playout. A typical MCTS search performs
1000–10,000 iterations. Each iteration starts at the root 𝑠0, traversing a
path in the tree down to the leaves using a selection rule, expanding a new node,
and performing a random playout. The result of the playout is then propagated
back to the root. During the backpropagation, statistics at all internal nodes are
updated. These statistics are then used in future iterations by the selection rule to
go to the currently most interesting part of the tree.
The statistics consist of two counters: the win count 𝑤 and the visit count 𝑣.
During backpropagation, the visit count 𝑣 at all nodes that are on the path back
from the leaf to the root are incremented. When the result of the playout was a win,
then the win count 𝑤 of those nodes is also incremented. If the result was a loss,
then the win count is left unchanged.
import numpy as np

def monte_carlo_tree_search(root):
    while resources_left(time, computational_power):
        leaf = select(root)                          # leaf = unvisited node
        simulation_result = rollout(leaf)
        backpropagate(leaf, simulation_result)
    return best_child(root)                          # or: child with highest visit count

def select(node):
    while fully_expanded(node):
        node = best_child(node)                      # traverse down a path of best UCT nodes
    return expand(node.children) or node             # no children / node is terminal

def rollout(node):
    while non_terminal(node):
        node = rollout_policy(node)
    return result(node)

def rollout_policy(node):
    return pick_random(node.children)                # random playout policy

def backpropagate(node, result):
    if is_root(node):
        return
    node.stats = update_stats(node, result)          # update win and visit counts
    backpropagate(node.parent, result)

def best_child(node, c_param=1.0):
    choices_weights = [
        (c.q / c.n) + c_param * np.sqrt(np.log(node.n) / c.n)   # UCT
        for c in node.children
    ]
    return node.children[np.argmax(choices_weights)]
The selection rule uses the win rate 𝑤/𝑣 and the visit count 𝑣 to decide whether
to exploit high-win-rate parts of the tree or to explore low-visit-count parts. An
often used selection rule is UCT (Sect. 6.2.1.2). It is this selection rule that governs
the exploration/exploitation trade-off in MCTS.
Let us look in more detail at the four operations. Please refer to Listing 6.3 and
Fig. 6.15 [117]. As we see in the figure and the listing, the main steps are repeated
as long as there is time left. Per step, the activities are as follows.
1. Select In the selection step the tree is traversed from the root node down to a
leaf of the MCTS search tree, where a new child that is not yet part of the tree is
selected. At each internal state the selection rule is followed to determine which
action to take and thus which state to go to next. The UCT rule works well in
many applications [418].
The actions selected at these states are part of the policy 𝜋(𝑠).
2. Expand Then, in the expansion step, a child is added to the tree. In most cases
only one child is added. In some MCTS versions all successors of a leaf are added
to the tree [117].
3. Playout Subsequently, during the playout step random moves are played in a
form of self-play until the end of the game is reached.9 (These nodes are not
added to the MCTS tree, but their search result is, in the backpropagation step.)
The reward 𝑟 of this simulated game is +1 in case of a win for the first player, 0
in case of a draw, and −1 in case of a win for the opponent.10
4. Backpropagation In the backpropagation step, reward 𝑟 is propagated back up-
wards in the tree, through the nodes that were traversed down previously. Two
counts are updated: the visit count, for all nodes, and the win count, depending
on the reward value.
MCTS is on-policy: the values that are backed up are those of the nodes that
were selected.
Pseudocode
Many websites contain useful resources on MCTS, including example code (see
Listing 6.3).11 The pseudocode in the listing is from an example program for game
play (only the main methods are shown). The MCTS algorithm can be coded in
many different ways. For implementation details, see [173] and the comprehensive
survey [117].
MCTS is a popular algorithm; one way to use it in Python is to install it from
a pip package (pip install mcts).
9 Note that this form of self-play is like the self-play in minimax and alpha-beta; the opponent
moves for finding the best reply are found by the same algorithm. Unlike minimax, here only one,
randomly chosen, successor is played out: a path, not a tree. MCTS as a whole is part of a “bigger”
kind of self-learning, where the reward function is trained by its search results.
10 Originally, playouts were random (the Monte Carlo part in the name of MCTS) following
Brügmann’s [118] and Bouzy and Helmstetter’s [102] original approach. In practice, most Go
playing programs improve on the random playouts by using databases of small 3 × 3 patterns with
best replies and other fast heuristics [268, 165, 137, 701, 170]. Small amounts of domain knowledge
are used after all, albeit not in the form of a heuristic evaluation function.
11 https://int8.io/monte-carlo-tree-search-beginners-guide/
Policies
At the end of the search, after the predetermined number of iterations has been
performed, or when time is up, MCTS returns the value and the action with the
highest visit count. An alternative would be to return the action with the highest
win rate. However, the visit count takes into account both the win rate (through
UCT) and the number of simulations on which it is based. A high win rate may
be based on a low number of simulations, and can thus have high variance; a high
visit count has low variance. Due to the selection rule, a high visit count implies
a high win rate with high confidence, while a high win rate alone may come with
low confidence [117]. The action chosen at the initial state 𝑠0 constitutes the
deterministic policy 𝜋(𝑠0).
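As a small illustration, assuming node objects like those in Listing 6.3 (each with a visit count n), the final move choice then amounts to picking the child of the root with the most visits.

# Return the child of the root with the highest visit count.
def best_action(root):
    return max(root.children, key=lambda child: child.n)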
UCT Selection
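In its standard form (as commonly given in the MCTS literature [418, 117]), the UCT rule, referred to below as Eq. 6.1, combines the win rate of an action 𝑎 with an exploration term; here 𝑤𝑎 and 𝑛𝑎 are the win and visit counts of action 𝑎, 𝑛 is the visit count of its parent node, and 𝐶𝑝 is the exploration constant:

\[ \text{UCT}(a) = \frac{w_a}{n_a} + C_p \sqrt{\frac{\ln n}{n_a}} \]

The first term exploits actions with a high win rate; the second term grows for actions that have been visited rarely relative to their parent, encouraging exploration.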
P-UCT
We should note that the MCTS that is used in the AlphaGo Zero program is a
little different. MCTS is used inside the training loop, as an integral part of the
self-generation of training examples, to enhance the quality of the examples for
every self-play iteration, using both value and policy inputs to guide the search.
Also, in the AlphaGo Zero program MCTS backups rely fully on the value
function approximator; no playout is performed anymore. The MC part in the name
of MCTS, which stands for the Monte Carlo playouts, really has become a misnomer
for this network-guided tree searcher.
Furthermore, selection in self-play MCTS is different. UCT-based node selection
now also uses the input from the policy head of the trained function approximators,
in addition to the win rate and newness. What remains is that through the UCT
mechanism MCTS can focus its search effort greedily on the part with the highest
win rate, while at the same time balancing exploration of parts of the tree that are
underexplored.
The formula that is used to incorporate input from the policy head of the deep
network is a variant of P-UCT [705, 529, 637, 503] (for predictor-UCT). Let us
compare P-UCT with UCT. The P-UCT formula adds the policy head 𝜋(𝑎|𝑠) to
Eq. 6.1:

\[ \text{P-UCT}(a) = \frac{w_a}{n_a} + C_p \, \pi(a|s) \, \frac{\sqrt{n}}{1 + n_a} \]
P-UCT adds the 𝜋(𝑎|𝑠) term specifying the probability of the action 𝑎 to the explo-
ration part of the UCT formula.13
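A small sketch of this score computation for the actions of one node (the array names wins, visits, prior, and parent_visits are placeholders):

import numpy as np

# P-UCT score per action: exploitation term w_a/n_a plus the prior-weighted
# exploration term C_p * pi(a|s) * sqrt(n) / (1 + n_a).
def puct(wins, visits, prior, parent_visits, c_p=1.0):
    exploitation = np.where(visits > 0, wins / np.maximum(visits, 1), 0.0)
    exploration = c_p * prior * np.sqrt(parent_visits) / (1.0 + visits)
    return exploitation + exploration   # select the action with the highest score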
Exploration/Exploitation
The search process of MCTS is guided by the statistics values in the tree. MCTS
discovers during the search where the promising parts of the tree are. The tree
expansion of MCTS is inherently variable-depth and variable-width (in contrast to
minimax-based algorithms such as alpha-beta, which are inherently fixed-depth
and fixed-width). In Fig. 6.16 we see a snapshot of the search tree of an MCTS
optimization. Some parts of the tree are searched more deeply than others [807].
An important element of MCTS is the exploration/exploitation trade-off, that
can be tuned with the 𝐶 𝑝 hyperparameter. The effectiveness of MCTS in different
applications depends on the value of this hyperparameter. Typical initial choices
for Go programs are 𝐶 𝑝 = 1 or 𝐶 𝑝 = 0.1 [117], although in AlphaGo we see highly
explorative choices such as 𝐶 𝑝 = 5. In general, when compute power is low, 𝐶 𝑝
should be low, and when more compute power is available, more exploration (higher
𝐶 𝑝 ) is advisable [117, 433].
12 As the visit count of an action grows, its exploration term shrinks, so that it is selected
less frequently. However, since logarithm values are unbounded, eventually all actions will be
selected [742].
13 Note further that the small differences under the square root (no logarithm, and the 1 in the
denominator) also change the UCT function profile somewhat, ensuring correct behavior at unvisited
actions [529].
There is a deeper relation between UCT and reinforcement learning. Grill et
al. [288] showed how the second term of P-UCT acts as a regularizer on model-free
policy optimization [4]. In particular, Jacob et al. [371] showed how MCTS can
be used to achieve human-like play in chess, Go, and Diplomacy, by regularizing
reinforcement learning with supervised learning on human games.
Applications
MCTS was introduced in 2006 [165, 166, 164] in the context of computer Go pro-
grams, following work by Chang et al. [135], Auer et al. [31], and Cazenave and
Helmstetter [134]. The introduction of MCTS improved performance of Go programs
considerably, from medium amateur to strong amateur. Where the heuristics-based
GNU Go program played around 10 kyu, Monte Carlo programs progressed to 2-3
dan in a few years’ time.
Eventually, on the small 9 × 9 board, Go programs achieved very strong play.
On the large 19 × 19 board, performance did not improve much beyond the 4-5
dan level, despite much effort by researchers. It was thought that perhaps the large
action space of the 19 × 19 board was too hard for MCTS. Many enhancements were
considered, for the playout phase, and for the selection. As the AlphaGo results
show, a crucial enhancement was the introduction of function approximation.
After its introduction, MCTS quickly proved successful in other applications,
both two agent and single agent: for video games [138], for single player applica-
tions [117], and for many other games. Beyond games, MCTS revolutionized the
world of heuristic search [117]. Previously, in order to achieve best-first search,
one had to find a domain specific heuristic to guide the search in a smart way.
With MCTS this is no longer necessary. Now a general method exists that finds the
promising parts of the search without a domain-specific heuristic, just by using
statistics of the search itself.
For policy improvement, AlphaGo Zero uses a version of on-policy MCTS that
does not use random playouts anymore. To increase exploration, Dirichlet noise
is added to the P-UCT value at the root node, to ensure that all moves may be
tried. The 𝐶 𝑝 value of MCTS in AlphaGo is 5, heavily favoring exploration. In
AlphaGo Zero the value depends on the stage in the learning; it grows during
self-play. In each self-play iteration 25,000 games are played. For each move, MCTS
performs 1600 simulations. In total over a three-day course of training 4.9 million
games were played, after which AlphaGo Zero outperformed the previous version,
AlphaGo [705].
Conclusion
We have taken a look into the planning part of AlphaGo Zero’s self-play architecture.
MCTS consists of a move selection phase and a statistics backup phase, corresponding
to the behavior (trial) and learning (error) parts of reinforcement learning. MCTS is
an important algorithm in reinforcement learning, and we have taken a detailed
look at the algorithm.
Move-level self-play is our first self-play procedure; it is a recursive procedure
that calls itself to generate its counter moves. The move-level planning is only one
part of the self-play picture. Just as important is the learning part. Let us have a
look at how AlphaGo Zero achieves its function approximation. For this, we move
to the second level of self-play: the example level.
Move-level self-play creates an environment for us that can play our counter-moves.
Now we need a mechanism to learn from these actions. AlphaGo Zero follows the
actor critic principle to approximate both value and policy functions. It approximates
these functions using a single deep residual neural network with a value-head and a
policy-head (Sect. B.2.6). The policy and the value approximations are incorporated
in MCTS in the selection and backup step (these steps will be explained shortly).
In order to learn, reinforcement learning needs training examples. The training
examples are generated at the self-play move level. Whenever a move is played, the
⟨𝑠, 𝑎⟩ state-action pair is recorded, and whenever a full game has been played to the
end, the outcome 𝑧 is known, and the outcome is added to all pairs of game moves,
to create ⟨𝑠, 𝑎, 𝑧⟩ triples. The triples are stored in the replay buffer, and sampled
randomly to train the value/policy net. The actual implementation in AlphaGo Zero
contains many more elements to improve the learning stability.
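A rough sketch of this bookkeeping (class and method names are illustrative, and the capacity here counts triples rather than games):

import random

# Sketch of the replay buffer: the (s, a) pairs of a finished game get the game
# outcome z appended, and training batches are sampled uniformly at random.
class ReplayBuffer:
    def __init__(self, capacity=500_000):
        self.triples = []
        self.capacity = capacity

    def add_game(self, state_action_pairs, z):
        for s, a in state_action_pairs:
            self.triples.append((s, a, z))            # every pair gets the same outcome
        self.triples = self.triples[-self.capacity:]  # keep only the most recent triples

    def sample(self, batch_size=2048):
        return random.sample(self.triples, batch_size)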
The player is designed to become stronger than the opponent, and this occurs at
the example level. Here it uses MCTS to improve the current policy, with moves
that win against the opponent’s moves.
Example-level self-play is our second self-play procedure, the examples are
generated in the self-play games and are used to train the network that is used to
play the moves by the two players.
The first AlphaGo program uses three separate neural networks: for the rollout
policy, for the value, and for the selection policy [702]. AlphaGo Zero uses a single
network, that is tightly integrated in MCTS. Let us have a closer look at this single
network.
The network is trained on the example triples ⟨𝑠, 𝑎, 𝑧⟩ from the replay buffer.
These triples contain the search results for the board states of the game, and the two
loss-function targets: 𝑎 for the action that MCTS predicts for each board state, and 𝑧
for the outcome of the game (win or loss) when it came to an end. The action 𝑎 is
the target of the policy loss, and the outcome 𝑧 is the target of the value loss. All
triples of a game share the same outcome 𝑧, and differ in the actions that were
played at each state.
AlphaGo Zero uses a dual-headed residual network (a convolutional network
with extra skip-links between layers, to improve regularization, see Sect. B.2.6 [320,
132]). Policy and value loss contribute equally to the loss function [822]. The net-
work is trained by stochastic gradient descent. L2 regularization is used to reduce
overfitting. The network has 19 hidden layers, and an input layer and two output
layers, for policy and value. The size of the mini-batch for updates is 2048. This
batch is distributed over 64 GPU workers, each with 32 data entries. The mini-batch
is sampled uniformly over the last 500,000 self-play games (replay buffer). The
learning rate started at 0.01 and went down to 0.0001 during self-play. More details
of the AlphaGo Zero network are described in [705].
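As a rough sketch of such a dual-headed policy/value network, in PyTorch; the board size, number of blocks, channel counts, and head shapes are placeholders and do not match the AlphaGo Zero network.

# Sketch of a small dual-headed (policy + value) residual network in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)                           # skip connection

class PolicyValueNet(nn.Module):
    def __init__(self, board_size=19, channels=64, blocks=4):
        super().__init__()
        self.stem = nn.Conv2d(1, channels, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(blocks)])
        n = board_size * board_size
        self.policy_head = nn.Linear(channels * n, n + 1)  # all moves plus pass
        self.value_head = nn.Linear(channels * n, 1)

    def forward(self, x):
        h = self.blocks(F.relu(self.stem(x)))
        h = h.flatten(start_dim=1)
        policy_logits = self.policy_head(h)
        value = torch.tanh(self.value_head(h))             # value in [-1, 1]
        return policy_logits, value

# Combined loss: value MSE plus policy cross-entropy, weighted equally
# (L2 regularization can be added via the optimizer's weight_decay).
def loss_fn(policy_logits, value, target_pi, target_z):
    value_loss = F.mse_loss(value.squeeze(-1), target_z)
    policy_loss = -(target_pi * F.log_softmax(policy_logits, dim=1)).sum(dim=1).mean()
    return value_loss + policy_loss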
Please note the size of the replay buffer, and the long training time. Go is a
complex game, with sparse rewards. Only at the end of a long game is the win or loss
known, and attributing this sparse reward to the many individual moves of a
game is difficult, requiring many games to even out errors.
MCTS is an on-policy algorithm that makes use of guidance in two places: in
the downward action selection and in the upward value backup. In AlphaGo Zero
the function approximator returns both elements: a policy for the action selection
and a value for the backup [705].
For tournament-level self-play to succeed, the training process must (1) cover
enough of the state space, must (2) be stable, and must (3) converge. Training targets
must be sufficiently challenging to learn, and sufficiently diverse. The purpose of
MCTS is to act as a policy improver in the actor critic setting, to generate learning
targets of sufficient quality and diversity for the agent to learn.
Let us have a closer look at these aspects, to get a broader perspective on why it
was so difficult to get self-play to work in Go.
Two Views
At this point it is useful to step back and reflect on the self-play architecture.
There are two different views. The first view, planning-centric, which we have
followed so far, is of a searcher (which generates opponent’s moves with an inverted
replica of itself) that is helped by a learned evaluation function (which trains on
examples from games played against itself). In addition, there is move-level self-play
(by the minimax searcher) and there is tournament-level self-play (by the value
learner).
The alternative view, learning-centric, is that a policy is learned by generating
game examples from self-play. In order for these examples to be of high quality, the
policy-learning is helped by a policy improver, a planning function that performs
lookahead to create better learning targets (and the planning is performed by making
moves by a copy of the player). In addition, there is tournament-level self-play (by
the policy learner) and there is move-level self-play (by the policy improver).
The difference in viewpoint is who comes first: the minimax viewpoint favors
the planner, and the learner is there to help the planner; the reinforcement learning
viewpoint favors the policy learner, and the planner is there to help improve the
policy. Both viewpoints are equally valid, and both viewpoints are equally valuable.
Knowing of the other viewpoint deepens our understanding of how these complex
self-play algorithms work.
This concludes our discussion of the second level of self-play, the example level.
At the top level, a tournament of self-play games is played between the two (iden-
tical) players. The player is designed to increase in strength, learning from the
examples at the second level, so that the player can achieve a higher level of play.
In tabula rasa self-play, the players start from scratch. By becoming progressively
stronger, they also become stronger opponents for each other, and their level of
play can increase. A virtuous cycle of ever increasing intelligence will emerge.
For this ideal of artificial intelligence to become reality, many stars have to line
up. After TD-Gammon, many researchers have tried to achieve this goal in other
games, but were unsuccessful.
Tournament-level self-play is only possible when move-level self-play and
example-level self-play work. For move-level self-play to work, both players need
to have access to the transition function, which must be completely accurate. For
example-level self-play to work, the player architecture must be such that it is able
to learn a stable policy of high quality (MCTS and the network have to mutually
improve each other).
Tournament-level self-play is the third self-play procedure, where a tournament
is created with the games starting from easy learning tasks, changing to harder tasks,
increasing all the way to world champion level training. This third procedure allows
reinforcement learning to transcend the level of play (“intelligence”) of previous
teachers.
Curriculum Learning
The AlphaGo effort consists of three programs: AlphaGo, AlphaGo Zero, and Alp-
haZero. The first AlphaGo program used supervised learning based on grandmaster
games, followed by reinforcement learning on self-play games. The second program,
AlphaGo Zero, used reinforcement learning only, in a self-play architecture that
starts from zero knowledge. The first program trained for many weeks, yet the second
program needed only a few days to become stronger than the first [705, 702].
Why did the self-play approach of AlphaGo Zero learn faster than the original
AlphaGo that could benefit from all the knowledge of Grandmaster games? Why is
self-play faster than a combination of supervised and reinforcement learning? The
reason is a phenomenon called curriculum learning: self-play is faster because it
creates a sequence of learning tasks that are ordered from easy to hard. Training
on such an ordered sequence of small tasks is quicker than training on one large unordered task.
Curriculum learning starts the training process with easy concepts before the
hard concepts are learned; this is, of course, the way in which humans learn. Before
we learn to run, we learn to walk; before we learn about multiplication, we learn
about addition. In curriculum learning the examples are ordered in batches from
easy to hard. Learning such an ordered sequence of batches goes better since under-
standing the easy concepts helps understanding of the harder concepts; learning
everything all at once typically takes longer and may result in lower accuracy.14
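As a small illustration of the idea, a curriculum can be as simple as sorting the examples by a difficulty score and presenting them in easy-to-hard batches; the difficulty function here is a placeholder (one possible choice, discussed later in this section, is the loss under a pretrained model).

# Minimal sketch of curriculum ordering: sort training examples by an assumed
# difficulty score and present them to the learner in easy-to-hard batches.
def curriculum_batches(examples, difficulty, batch_size=64):
    ordered = sorted(examples, key=difficulty)       # easiest examples first
    for i in range(0, len(ordered), batch_size):
        yield ordered[i:i + batch_size]

# Possible usage (difficulty is a placeholder scoring function):
# for batch in curriculum_batches(train_set, difficulty=lambda ex: pretrained_loss(ex)):
#     model.train_on_batch(batch)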
In ordinary deep reinforcement learning the network tries to solve a fixed problem
in one large step, using environment samples that are not sorted from easy to hard.
With examples that are not sorted, the program has to achieve the optimization
step from no knowledge to human-level play in one big, unsorted, leap, by optimiz-
ing many times over challenging samples where the error function is large.
Overcoming such a large training step (from beginner to advanced) costs much
training time.
In contrast, in AlphaGo Zero, the network is trained in many small steps, starting
against a very weak opponent, just as a human child learns to play the game by
playing against a teacher that plays simple moves. As our level of play increases,
so does the difficulty of the moves that our teacher proposes to us. Subsequently,
harder problems are generated and trained for, refining the network that has already
been pretrained with the easier examples.
Self-play naturally generates a curriculum with examples from easy to hard. The
learning network is always in lock step with the training target—errors are low
throughout the training. As a consequence, training times go down and the playing
strength goes up.
14 Such a sequence of related learning tasks corresponds to a meta-learning problem. In meta-
learning the aim is to learn a new task fast, by using the knowledge learned from previous, related,
tasks; see Chap. 9.
Curriculum learning has been studied before in psychology and education science.
Selfridge et al. [688] first connected curriculum learning to machine learning, where
they trained the proverbial Cartpole controller. First they trained the controller on
long and light poles, while gradually moving towards shorter and heavier poles.
Schmidhuber [671] proposed a related concept, to improve exploration for world
models by artificial curiosity. Curriculum learning was subsequently applied to
match the order of training examples to the growth in model capacity in differ-
ent supervised learning settings [224, 431, 77]. Another related development is in
developmental robotics, where curriculum learning can help to self-organize open-
ended developmental trajectories [577]. Here it is related to intrinsic motivation (see
Sect. 8.3.2). The AMIGo approach uses curriculum learning to generate subgoals in
hierarchical reinforcement learning [125].
To order the training examples from easy to hard, we need a measure to quantify
the difficulty of the task. One idea is to use the minimal loss with respect to some of
the upper layers of a high-quality pretrained model [832]. In a supervised learning
experiment, Weinshall et al. compared the effectiveness of curriculum learning on
a set of test images (5 images of mammals from CIFAR100). Figure 6.17 shows the
accuracy of a curriculum ordering (green), no curriculum (blue), randomly ordered
groups (yellow) and the labels sorted in reverse order (red). Both networks are
regular networks with multiple convolutional layers followed by a fully connected
layer. The large network has 1,208,101 parameters, the small network has 4,557
parameters. We can clearly see the effectiveness of ordered learning [832].
Finding a good way to order the sequence of examples is often difficult. A possible
method to generate a sequence of tasks that are related is by using procedural con-
tent generation (PCG) [691, 540]. Procedural content generation uses randomized
algorithms to generate images and other content for computer games; the difficulty
of the examples can often be controlled. It is frequently used to automatically gen-
erate different levels in games, so that they do not have to be all created manually
by the game designers and programmers [714, 780].15
The Procgen benchmark suite has been built upon procedurally generated
games [158]. Another popular benchmark is the General video game AI competition
(GVGAI) [478]. Curriculum learning reduces overfitting to single tasks. Justesen et
al. [387] have used GVGAI to show that a policy easily overfits to specific games,
and that training over a curriculum improves its generalization to levels that were
designed by humans. MiniGrid is a procedurally generated world that can be used
for hierarchical reinforcement learning [143, 620].
Active Learning
Curriculum learning has been studied for many years. A problem is that it is difficult
to find an ordering of tasks from easy to hard in most learning situations. In two-
player self-play the ordering comes naturally, and the successes have inspired recent
work on single-agent curriculum learning. For example, Laterre et al. introduce
the Ranked Reward method for solving bin packing problems [454] and Wang et
al. presented a method for Morpion Solitaire [823]. Feng et al. use an AlphaZero
based approach to solve hard Sokoban instances [238]. Their model is an 8 block
standard residual network, with MCTS as planner. They create a curriculum by
constructing simpler subproblems from hard instances, using the fact that Sokoban
problems have a natural hierarchical structure. This approach was able to solve
harder Sokoban instances than had been solved before. Florensa et al. [247] study
the generation of goals for curriculum learning using a generator network (GAN).
15 See also generative adversarial networks and deep dreaming, for a connectionist approach to
content generation, Sect. B.2.6.
Conclusion
In the previous chapter the transition function was learned, and we discussed
methods to cope with inaccuracies in the model. In this chapter the transition
function is a given, and the opponent has the same model as the agent. As the agent
learns, so does the opponent. In this way, learning world champion level play has
been achieved in backgammon and Go [763, 705].
Although curriculum learning has been studied in artificial intelligence and in
psychology for some time, it has not been a popular method, since it is difficult
to find well-sorted training curricula [517, 518, 826]. Due to the
self-play results, curriculum learning is now attracting more interest, see [551, 836].
Work is reported in single-agent problems [551, 238, 198, 454], and in multi-agent
games, as we will see in Chap. 7.
Figure 6.18 shows the playing strength of traditional programs (left panel, in red)
and different versions of the AlphaGo programs. We see how much stronger the
2015, 2016, and 2017 versions of AlphaGo are than the earlier heuristic minimax
program GnuGo, and two MCTS-only programs Pachi and Crazy Stone.
How did the AlphaGo authors design such a strong Go program? Before AlphaGo,
the strongest programs used the Monte Carlo Tree Search planning algorithm,
without neural networks. For some time, neural networks were considered too
slow for use as value function in MCTS, and random playouts were used, often
improved with small pattern-based heuristics [268, 267, 138, 166, 226]. Around 2015
a few researchers tried to improve performance of MCTS by using deep learning
evaluation functions [155, 22, 266]. These efforts were strengthened by the strong
results in Atari [522].
The AlphaGo team also tried to use neural networks. Except for backgammon,
pure self-play approaches had not been shown to work well, and the AlphaGo
team did the sensible thing and pretrained the network with the games of human
grandmasters, using supervised learning. Next, a large number of self-play games
were used to further train the networks. In total, no fewer than three neural networks
were used: one for the MCTS playouts, one for the policy function, and one for the
value function [702].
Thus, the original AlphaGo program consisted of three neural networks and
used both supervised learning and reinforcement learning. The diagram in Fig. 6.18
illustrates the AlphaGo architecture. Although this design made sense at the time
given the state of the art of the field, and although it did convincingly beat the three
strongest human Go players, managing and tuning such a complicated piece of
software was cumbersome. Its successor, AlphaGo Zero, simplified the design: it
learns tabula rasa, from self-play alone, without grandmaster games [705].
In their paper Silver et al. [705] describe that learning progressed smoothly through-
out the training. AlphaGo Zero outperformed the original AlphaGo after just 36
hours. The training time for the version of AlphaGo that played Lee Sedol was
several months. Furthermore, AlphaGo Zero used a single machine with 4 tensor
processing units, whereas AlphaGo Lee was distributed over many machines and
used 48 TPUs.16 Figure 6.19 shows the performance of AlphaGo Zero. Also shown
is the performance of the raw network, without MCTS search. The importance of
MCTS is large, around 2000 Elo points.17
AlphaGo Zero’s reinforcement learning is truly learning Go knowledge from
scratch, and, as the development team discovered, it did so in a way similar to
how humans discover the intricacies of the game. In their paper [705] they
published a picture of how this knowledge acquisition progressed (Fig. 6.20).
Fig. 6.20 AlphaGo Zero is Learning Joseki in a Curriculum from Easy to Hard [705]
16 TPU stands for tensor processing unit, a low-precision design specifically developed for fast
neural network processing.
17 The basis of the Elo rating is pairwise comparison [225]. Elo is often used to compare playing
strength.
Joseki are standard corner openings that all Go players become familiar with
as they learn to play the game. There are beginner’s and advanced joseki. Over
the course of its learning, AlphaGo Zero did learn joseki, and it learned them
from beginner to advanced. It is interesting to see how it did so, as it reveals the
progression of AlphaGo Zero’s Go intelligence. Figure 6.20 shows sequences from
games played by the program. Not to anthropomorphize too much,18 but you can
see the little program getting smarter.
The top row shows five joseki that AlphaGo Zero discovered. The first joseki
is one of the standard beginner’s openings in Go theory. As we move to the right,
more difficult joseki are learned, with stones being played in looser configurations.
The bottom row shows five joseki favored at different stages of the self-play training.
It starts with a preference for a weak corner move. After 10 more hours of training,
a better 3-3 corner sequence is favored. More training reveals more, and better,
variations.
AlphaGo Zero discovered a remarkable level of Go knowledge during its self-play
training process. This knowledge included not only fundamental elements of human
Go knowledge, but also nonstandard strategies beyond the scope of traditional Go
knowledge.
For a human Go player, it is remarkable to see this kind of progression in com-
puter play, reminding them of the time when they discovered these joseki themselves.
With such evidence of the computer’s learning, it is hard not to anthropomorphize
AlphaGo Zero.
18 Treat as if human
6.3.3 AlphaZero
The AlphaGo story does not end with AlphaGo Zero. A year after AlphaGo Zero, a
version was created with different input and output layers that learned to play chess
and shogi (also known as Japanese chess, Fig. 6.21), using the same MCTS and deep
reinforcement learning architecture as for learning to play Go (the only differences
are the input and output layers) [703]. This new program, AlphaZero, beat the
strongest chess and shogi programs, Stockfish and Elmo. Both these programs
followed a conventional heuristic minimax design, optimized by hand and machine
learning, and improved with many heuristics for decades. AlphaZero used zero
knowledge, zero grandmaster games, and zero hand-crafted heuristics, yet it played
stronger. The AlphaZero architecture not only allows very strong play, but is also a
general architecture, suitable for three different games.19
The Elo rating of AlphaZero in chess, shogi, and Go is shown in Fig. 6.22 [705,
702]. AlphaZero is stronger than the other programs. In chess the difference is
the smallest; in this game, conventional programs have benefited from a large
community of researchers who have worked intensely on improving performance of the heuristic
alpha-beta approach. For shogi the difference is larger.
AlphaZero can play three different games with the same architecture. The three
games are quite different. Go is a static game of strategy. Stones do not move and are
rarely captured. Stones, once played, are of strategic importance. In chess the pieces
move. Chess is a dynamic game where tactics are important. Chess also features
sudden death: a checkmate can occur in the middle of the game by capturing the
king. Shogi is even more dynamic, since captured pieces can be returned to the game,
creating even more complex game dynamics.
19 Although an AlphaZero version that has learned to play Go cannot play chess. It has to re-learn
chess from scratch, with different input and output layers.
It is a testament to the generality of AlphaZero’s architecture that games that differ
so much in tactics and strategy can be learned so successfully. Conventional game
programs must be purposely developed for each game, with different search hyperparameters
and different heuristics. Yet the MCTS/ResNet self-play architecture is able to learn
all three from scratch.
Tabula rasa learning for the game of Go is a remarkable achievement that inspired
many researchers. The code of AlphaGo Zero and AlphaZero, however, is not public.
Fortunately, the scientific publications [705, 703] provide many details, allowing
other researchers to reproduce similar results.
Table 6.3 summarizes some of the self-learning environments, which we will
briefly discuss.
• A0G: AlphaZero General. Thakoor et al. [767] created a self-play system called
AlphaZero General (A0G).20 It is implemented in Python for TensorFlow, Keras,
and PyTorch, and suitably scaled down for smaller computational resources. It
has implementations for 6 × 6 Othello, tic tac toe, gobang, and connect4, all small
games of significantly less complexity than Go. Its main network architecture is
a four-layer CNN followed by two fully connected layers. The code is easy to
understand in an afternoon of study, and is well suited for educational purposes.
The project write-up provides some documentation [767].
20 https://github.com/suragnair/alpha-zero-general
• Facebook ELF. ELF stands for Extensible Lightweight Framework. It is a framework
for game research in C++ and Python [775]. Originally developed for real-time
strategy games by Facebook, it includes the Arcade Learning Environment and
the Darkforest21 Go program [777]. ELF can be found on GitHub.22 ELF also
contains the self-play program OpenGo [776], a reimplementation of AlphaGo
Zero (in C++).
• Leela. Another reimplementation of AlphaZero is Leela. Both a chess and a Go
version of Leela exist. The chess version is based on chess engine Sjeng. The
Go23 version is based on Go engine Leela. Leela does not come with trained
weights of the network. Part of Leela is a community effort to compute these
weights.
• PhoenixGo. PhoenixGo is a strong self-play Go program by Tencent [865]. It is
based on the AlphaGo Zero architecture.24 A trained network is available as
well.
• Polygames. PolyGames [133] is an environment for Zero-based learning (MCTS
with deep reinforcement learning) inspired by AlphaGo Zero. Relevant learning
methods are implemented, and bots for hex, Othello, and Havannah have been
implemented. PolyGames can be found on GitHub.25 A library of games is
provided, as well as a checkpoint zoo of neural network models.
Let us get some hands-on experience with MCTS-based self-play. We will implement
the game of Hex with the PolyGames suite. Hex is a simple and fun board game
invented independently by Piet Hein and John Nash in the 1940s. Its simplicity
makes it easy to learn and play, and also a popular choice for mathematical analysis.
The game is played on a hexagonal board, player A wins if its moves connect the
right to the left side, and player B wins if top connects to bottom (see Fig. 6.23;
image by Wikimedia). A simple page with resources is here;26 extensive strategy
and background books have been written about hex [318, 115]. We use hex because
it is simpler than Go, to get you up to speed quickly with self-play learning; we will
also use PolyGames.
21 https://github.com/facebookresearch/darkforestGo
22 https://github.com/pytorch/ELF
23 https://github.com/gcp/leela-zero
24 https://github.com/Tencent/PhoenixGo
25 https://github.com/facebookincubator/Polygames
26 https://www.maths.ed.ac.uk/~csangwin/hex/index.html
Click on the link27 and start by reading the introduction to PolyGames on
GitHub. Download the paper [133] and familiarize yourself with the concepts
behind Polygames. Clone the repository and build it by following the instruc-
tions. Polygames uses PyTorch,28 so install that too (follow the instructions on the
Polygames page).
Polygames is interfaced via the pypolygames Python package. The games, such
as Hex, can be found in src/games and are coded in C++ for speed. The main commands are
pypolygames train
pypolygames eval
pypolygames traineval
pypolygames human
Consult the built-in help of each of the commands train, eval, traineval, or human.
A command to start training a Hex model with the default options is:
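For example (a sketch only: the flag name and the game identifier below are assumptions for illustration; check the Polygames documentation and the built-in help for the exact options of your version):
pypolygames train --game_name="Hex13"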
27 https://github.com/facebookincubator/Polygames
28 https://pytorch.org
Try loading a pre-trained model from the zoo. Experiment with different training
options, and try playing against the model that you just trained. When everything
works, you can also try training different games. Note that more complex games
may take a (very) long time to train.
We will now summarize the chapter, and provide pointers for further reading.
Summary
For two-agent zero-sum games, when the transition function is given by the rules
of the game, a special kind of reinforcement learning becomes possible. Since the
agent can perfectly simulate the moves of the opponent, accurate planning far into
the future becomes possible. Typically, the second agent becomes the environment.
Previously, environments were static; now the environment evolves as the agent
learns, creating a virtuous cycle of increasing (artificial) intelligence in agent
(and environment). The promise of this self-play setup is to achieve high levels
of intelligence. The challenges to overcome instability, however, are large, since
this kind of self-play combines different kinds of unstable learning methods. Both
TD-Gammon and AlphaGo Zero have overcome these challenges, and we have
described their approach in quite some detail.
Self-play is a combination of planning, learning, and a self-play loop. The self-
play loop in AlphaGo Zero uses MCTS to generate high-quality examples, which are
used to train the neural net. This new neural net is then used in a further self-play
iteration to generate more difficult games, and refine the network further (and
again, and again, and again). Alpha(Go) Zero thus learns starting at zero knowledge,
tabula rasa.
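In outline, this loop can be sketched in a few lines of Python; the functions play_one_game_with_mcts and train_network are placeholders for the self-play and training components, not the actual AlphaGo Zero implementation:

def self_play_loop(net, num_iterations, games_per_iteration):
    for _ in range(num_iterations):
        examples = []
        # 1. Self-play: MCTS, guided by the current net, generates training examples.
        for _ in range(games_per_iteration):
            examples += play_one_game_with_mcts(net)  # (state, pi, outcome) tuples
        # 2. Learning: the network is trained on the freshly generated examples.
        net = train_network(net, examples)
        # 3. The improved network plays the next, harder, round of self-play games:
        #    a curriculum generated by the agent itself.
    return net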
Self-play makes use of many reinforcement learning techniques. In order to
ensure stable learning, exploration is important. MCTS is used for deep planning.
The exploration parameter in MCTS is set high, and convergent training is achieved
by a low learning rate 𝛼. Because of these parameter settings many games have to
be played. The computational demands of stable self-play are large.
AlphaGo Zero uses function approximation of two functions: value and policy.
Policy is used to help guide action selection in the P-UCT selection operation in
MCTS, and value is used instead of random playouts to provide the value function
at the leaves of the MCTS tree. MCTS has been changed significantly to work in
the self-play setting. Gone are the random playouts that gave MCTS the name
Monte Carlo, and much of the performance is due to a high-quality policy and value
approximation residual network.
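As a reference point, a minimal sketch of P-UCT action selection as commonly described for AlphaZero-style programs (the node and child attribute names are assumptions; c_puct is the exploration constant):

import math

def puct_select(node, c_puct=1.5):
    # P-UCT: pick the action maximizing Q(s,a) + U(s,a), with
    # U(s,a) = c_puct * P(s,a) * sqrt(sum_b N(s,b)) / (1 + N(s,a)).
    # node.children is assumed to map actions to child nodes carrying a prior
    # probability, a visit count, and a value sum.
    total_visits = sum(child.visit_count for child in node.children.values())
    best_action, best_score = None, -float('inf')
    for action, child in node.children.items():
        q = child.value_sum / child.visit_count if child.visit_count > 0 else 0.0
        u = c_puct * child.prior * math.sqrt(total_visits) / (1 + child.visit_count)
        if q + u > best_score:
            best_action, best_score = action, q + u
    return best_action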
Originally, the AlphaGo program (not AlphaGo Zero) used grandmaster games
in supervised learning in addition to using reinforcement learning; it started from
the knowledge of grandmaster games. Next came AlphaGo Zero, which does not use
grandmaster games or any other domain specific knowledge. All learning is based
on reinforcement learning, playing itself to build up the knowledge of the game
from zero knowledge. A third experiment has been published, called AlphaZero
(without the “Go”). In this paper the same network architecture and MCTS design
(and the same learning hyperparameters) were used to learn three games: chess,
shogi, and Go. This presented the AlphaZero architecture as a general learning
architecture, stronger than the best alpha-beta-based chess and shogi programs.
Interestingly, the all-reinforcement learning AlphaGo Zero architecture was not
only stronger than the supervised/reinforcement hybrid AlphaGo, but also faster:
it learned world champion level play in days, not weeks. Self-play learns quickly
because of curriculum learning. It is more efficient to learn a large problem in many
small steps, starting with easy problems and ending with hard ones, than in one large
step. Curriculum learning works both for humans and for artificial neural networks.
Further Reading
One of the main interests of artificial intelligence is the study of how intelligence
emerges out of simple, basic interactions. In self-learning systems this is happen-
ing [463].
The work on AlphaGo is a landmark achievement in artificial intelligence. The
primary sources of information for AlphaGo are the three AlphaGo/AlphaZero
papers by Silver et al. [702, 705, 703]. The systems are complex, and so are the
papers and their supplemental methods sections. Many blogs have been written
about AlphaGo that are more accessible. A movie has been made about AlphaGo.29
There are also explanations on YouTube.30
29 https://www.alphagomovie.com
30 https://www.youtube.com/watch?v=MgowR4pq3e8
A large literature on minimax and minimax enhancements for games exists;
an overview is given in [599]. A book devoted to building your own state-of-the-art
self-learning Go bot is Deep Learning and the Game of Go by Pumperla and Fergu-
son [611], which came out before PolyGames [133].
MCTS has been a landmark algorithm by itself in artificial intelligence [164,
117, 418]. In the context of MCTS, many researchers worked on combining MCTS
with learned patterns, especially to improve the random rollouts of MCTS. Other
developments include [530, 529] and parallelizations such as [515].
Supervised learning on grandmaster games was used to improve playouts and
also to improve UCT selection. Gelly and Silver published notable works in this
area [268, 707, 267]. Graf et al. [283] describe experiments with adaptive playouts
in MCTS with deep learning. Convolutional neural nets were also used in Go by
Clark and Storkey [154, 155], who had used a CNN for supervised learning from a
database of human professional games, showing that it outperformed GNU Go and
scored wins against Fuego, a strong open source Go program [226] based on MCTS
without deep learning.
Tesauro’s success inspired many others to try temporal difference learning.
Wiering et al. and Van der Ree [839, 793] report on self-play and TD learning in
Othello and Backgammon. The program Knightcap [60, 61] and Beal et al. [62]
also use temporal difference learning on evaluation function features. Arenz [25]
applied MCTS to chess. Heinz reported on self-play experiments in chess [326].
Since the AlphaGo results many other applications of machine learning have
been shown to be successful. There is interest in theoretical physics [643, 582], chem-
istry [384], and pharmacology, specifically for retrosynthetic molecular design [686]
and drug design [802]. High-profile results have been achieved by AlphaFold, a
program that can predict protein structures [384, 689].
To learn more about curriculum learning, see [836, 77, 501, 247]. Wang et al. [822]
study the optimization target of a dual-headed self-play network in AlphaZeroGen-
eral. The success of self-play has led to interest in curriculum learning in single-
agent problems [238, 198, 213, 454]. The relation between classical single-agent and
two-agent search is studied by [663].
Exercises
To review your knowledge of self-play, here are some questions and exercises. We
start with questions to check your understanding of this chapter. Each question is a
closed question where a simple, one sentence answer is possible.
Questions
1. What are the differences between AlphaGo, AlphaGo Zero, and AlphaZero?
2. What is MCTS?
3. What are the four steps of MCTS?
4. What does UCT do?
5. Give the UCT formula. How is P-UCT different?
6. Describe the function of each of the four operations of MCTS.
7. How does UCT achieve trading off exploration and exploitation? Which inputs
does it use?
8. When 𝐶𝑝 is small, does MCTS explore more or exploit more?
9. For small numbers of node expansions, would you prefer more exploration or
more exploitation?
10. What is a double-headed network? How is it different from regular actor critic?
11. Which three elements make up the self-play loop? (You may draw a picture.)
12. What is tabula rasa learning?
13. How can tabula rasa learning be faster than reinforcement learning on top of
supervised learning of grandmaster games?
14. What is curriculum learning?
You may have noticed that the minimax and MCTS pseudocode in the figures lacks
implementation details for performing actions, to arrive at successor states. Such
board manipulation and move making details are important for creating a working
program.
Game playing programs typically call the search routine with the current board
state, often indicated with parameter n for the new node. This board can be created
and allocated anew in each search node, in a value-passing style (local variable).
Another option is to pass a reference to the board, and to apply a makemove oper-
ation on the board, placing the stone on the board before the recursive call, and
an undomove operation removing the stone from the board when it returns back
out of the recursion (global variable). This reference-passing style may be quicker
if allocating the memory for a new board is an expensive operation. It may also
be more difficult to implement correctly, since the makemove/undomove protocol
must be followed strictly. If capture moves cause many changes to the board, then
these must be remembered for the subsequent undo.
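To make the two styles concrete, here is a minimal sketch in Python (the Board class, its methods, and evaluate are hypothetical helpers, not PolyGames code):

# Value-passing style: every recursive call works on its own copy of the board.
def search_copy(board, depth):
    if depth == 0:
        return evaluate(board)
    best = -float('inf')
    for move in board.legal_moves():
        child = board.copy()               # allocate a fresh board for the child node
        child.make_move(move)
        best = max(best, -search_copy(child, depth - 1))
    return best

# Reference-passing style: one shared board, with make/undo around the recursion.
def search_inplace(board, depth):
    if depth == 0:
        return evaluate(board)
    best = -float('inf')
    for move in board.legal_moves():
        captured = board.make_move(move)   # remember captures for the undo
        best = max(best, -search_inplace(board, depth - 1))
        board.undo_move(move, captured)    # restore the previous position
    return best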
For parallel implementations in shared memory, all parallel threads must at least
have their own copy of a value-passing style board. (On a distributed memory
cluster the separate machines will have their own copy of the board by virtue of
the distributed memory.)
Exercises
For the programming exercises we use PolyGames. See the previous section on how
to install PolyGames. If training takes a long time, consider using the GPU support
of Polygames and PyTorch.
1. Install PolyGames and train a Hex player with self-play. Experiment with differ-
ent board sizes. Keep the training time constant, and draw a graph where you
contrast playing strength against board size. Do you see a clear correlation that
the program is stronger on smaller boards?
2. Install the visualization support torchviz. Visualize the training process using
the draw_model script.
3. Use different models, and try different hyperparameters, as specified in the
PolyGames documentation.
4. Run an evaluation tournament of the trained Hex model against a pure MCTS
player. See the implementation notes above for tips on make and undo of moves. How many nodes can
MCTS search in a reasonable search time? Compare the MCTS Hex player against
the self-play player. How many games do you need to play to have statistically
significant results? Is the random seed randomized or fixed? Which is stronger:
MCTS or trained Hex?
Chapter 7
Multi-Agent Reinforcement Learning
On this planet, in our societies, millions of people live and work together. Each
individual has their own set of goals and performs their actions accord-
ingly. Some of these goals are shared. When we want to achieve shared goals we
organize ourselves in teams, groups, companies, organizations and societies. In
many intelligent species—humans, mammals, birds, insects—impressive displays
of collective intelligence emerge from such organization [847, 347, 685]. We are
learning as individuals, and we are learning as groups. This setting is studied in
multi-agent learning: through their independent actions, agents learn to interact
and to compete and cooperate with other agents, and form groups.
Most research in reinforcement learning has focused on single agent problems.
Progress has been made in many topics, such as path-finding, robot locomotion, and
video games. In addition, research has been performed in two-agent problems, such
as competitive two-person board games. Both single-agent and two-agent problems
are questions of optimization. The goal is to find the policy with the highest reward,
the shortest path, and the best moves and counter moves. The basic setting is one
of reward optimization in the face of natural adversity or competitors, modeled by
the environment.
As we move closer toward modeling real-world problems, we encounter another
category of sequential decision making problems, and that is the category of multi-
agent decision problems. Multi-agent decision making is a difficult problem:
agents that share the same goal might collaborate, and finding a policy with
the highest reward for oneself or for the group may include achieving win/win
solutions with other agents. Coalition forming and collusion are an integral part of
the field of multi-agent reinforcement learning.
In real-world decision making both competition and cooperation are important.
If we want our agents to behave realistically in settings with multiple agents, then
they should understand cooperation in order to perform well.
From a computational perspective, studying the emergence of group behavior
and collective intelligence is challenging: the environment for the agents consists of
many other agents; goals may move, many interactions have to be modeled in order
to be understood, and the world to be optimized against is constantly changing.
Core Concepts
• Competition
• Cooperation
• Team learning
Core Problem
Core Algorithms
Self-Driving Car
Game Theory
One direct generalization of MDP that captures the interaction of multiple agents
is the Markov game, also known as the stochastic game [694]. Described by
Littman [482], the framework of Markov games has long been used to express
multi-agent reinforcement learning algorithms [451, 788].
The multi-agent version of an MDP is defined as follows [869]. At time $t$, each
agent $i \in N$ executes an action $a_t^i$ in the state $s_t$ of the system. The system then
transitions to state $s_{t+1}$, and rewards each agent $i$ with reward $R^i_{a_t}(s_t, s_{t+1})$.
The goal of agent $i$ is to optimize its own long-term reward, by finding the policy
$\pi^i : S \to \Delta(A^i)$, a mapping from the state space to a distribution over the action
space, such that $a_t^i \sim \pi^i(\cdot \mid s_t)$. Then, the value function $V^i : S \to \mathbb{R}$ of agent $i$
becomes a function of the joint policy $\pi : S \to \Delta(A)$, defined as $\pi(a \mid s) = \prod_{i \in N} \pi^i(a^i \mid s)$.
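A minimal sketch of this interaction protocol (the env and policy objects are illustrative placeholders, not a specific library interface):

def markov_game_step(env, policies, state):
    # One synchronous step of a Markov game with agents i = 0, ..., n-1.
    joint_action = [policy.sample(state) for policy in policies]  # a_t^i ~ pi^i(.|s_t)
    next_state, rewards = env.step(joint_action)                  # one reward per agent
    return next_state, rewards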
Figure 7.1 shows three schematic diagrams. First, in (a), the familiar agent/en-
vironment diagram is shown that we used in earlier chapters. Next, in (b), the
multi-agent version of this diagram for Markov games is shown. Finally, in (c) an
extensive-form game tree is shown [869]. Extensive form game trees are introduced
to model imperfect information. Choices by the agent are shown as solid links,
and the hidden private information of the other agent is shown dashed, as the
information set of possible situations.
In (a), the agent observes the state 𝑠, performs action 𝑎, and receives reward 𝑟
from the environment. In (b), in the Markov game, all agents choose their actions
$a^i$ simultaneously and receive their individual reward $r^i$. In (c), in a two-player
extensive-form game, the agents make decisions on choosing actions $a^i$. They
receive their individual reward $r^i(z)$ at the end of the game, where $z$ is the score of a
terminal node for the branch that resulted from the information set. The information
set is indicated with a dotted line to signal stochastic behavior: the environment
(or other agent) chooses an unknown outcome amongst the dotted actions. The
extensive-form notation is designed for expressing imperfect information games,
where all possible (unknown) outcomes are represented in the information set. In
order to calculate the value function, the agent has to regard all possible different
choices in the information sets; the unknown choices of the opponents create a
large state space.
These three behaviors are a useful guide to navigate the landscape of multi-agent
algorithms, and we will do so in this chapter. Let us have a closer look at each type
of behavior.
One of the hallmarks of the field of game theory is a result by John Nash, who
defined the conditions for a stable (and in a certain sense optimal) solution among
multiple rational non-cooperative agents. The Nash equilibrium is defined as a
situation in which no agent has anything to gain by changing its own strategy. For
two agents the Nash equilibrium is the minimax strategy.
In single-agent reinforcement learning the goal is to find the policy that max-
imizes the cumulative future reward of the agent. In multi-agent reinforcement
learning the goal is to find a combined policy of all agents that simultaneously
achieves that goal: the multi-policy that for each agent maximizes their cumulative
future reward. A set of competitive strategies that have (near-)zero exploitability
against each other is called a Nash equilibrium.
The Nash equilibrium characterizes an equilibrium point $\pi^\star$, from which none
of the agents has an incentive to deviate. In other words, for any agent $i \in N$, the
policy $\pi^{i,\star}$ is the best response to $\pi^{-i,\star}$, where $-i$ denotes all agents except $i$ [331].
An agent that follows a Nash strategy is guaranteed to do no worse than tie,
against any other opponent strategy. For games of imperfect information or chance,
such as many card games, this is an expected outcome. Since the cards are randomly
dealt, there is no theoretical guarantee that a Nash strategy will win every single
hand, although on average, it cannot do worse than tie against the other agents.
If the opponents also play a Nash strategy, then all will tie. If the opponents make
mistakes, however, then they can lose some hands, allowing the Nash equilibrium
strategy to win. Such a mistake by the opponents would be a deviation from the
Nash strategy, following a hunch or other non-rational reason, despite having no
theoretical incentive to do so. A Nash equilibrium plays perfect defence. It does
not try to exploit the opponent strategy’s flaws, and instead just wins when the
opponent makes mistakes [103, 450].
The Nash strategy gives the best possible outcome we can achieve against our
adversaries when they work against us. In the sense that a Nash strategy is on
average unbeatable, it is considered to be an optimal strategy, and solving a game
is equivalent to computing a Nash equilibrium.
In a few moments we will introduce a method to calculate Nash strategies, called
counterfactual regret minimization, but we will first look at cooperation.
at least one agent worse off. We will compare Nash and Pareto soon by looking at
how they are related in the prisoner’s dilemma (Table 7.1).
The Pareto optimum is the best possible outcome for us where we do not hurt
others, and others do not hurt us. It is a cooperative strategy. Pareto equilibria
assume communication and trust. In some sense, it is the opposite of the non-
cooperative Nash strategy. Pareto calculates the situation for a cooperative world,
Nash for a competitive world.
                         Confess (Defect)   Silent (Cooperate)
Confess (Defect)         (−5, −5)  Nash     (0, −10)
Silent (Cooperate)       (−10, 0)           (−2, −2)  Pareto
Table 7.1 Prisoner’s Dilemma
you will both get a light sentence of 5 years. If you both keep quiet, then I will only
have evidence to get you a short sentence of 2 years. However, if one of you confesses,
and the other stays silent, then the confessor will walk free, and the one keeping quiet
goes to prison for 10 years. Please tell me your choice tomorrow morning.
This leaves our two criminals with a tough, but clear, choice. If the other stays
silent, and I confess, then I walk free; if I also stay silent, then we get 2 years in
prison, so confessing is better for me. If the other confesses, and I also confess,
then we both get 5 years; if I then stay silent, then I get 10 years in prison, so
confessing is again better for me. Whatever the other chooses, confessing gives
the lighter sentence, and since they cannot coordinate their action, they will each
independently confess. Both will get 5 years in prison, even though both would
have only gotten 2 years if both had stayed silent. The police are happy, since
both confess and the case is solved.
If they had been able to communicate, or if they had trusted
each other, then both would have stayed silent, and both would have gotten off with
a 2-year sentence.
The dilemma faced by the criminals is that, whatever the other does, each is
better off confessing than remaining silent. However, both also know that if the
other could have been trusted, or if they could have coordinated their answer, then
a better outcome would have been within reach.
The dilemma illustrates individual self-interest against the interest of the group.
In the literature on game theory, confessing is also known as defecting, and staying
silent is the cooperative choice. The confess/confess situation (both defect) is the
optimal non-cooperative strategy, the Nash equilibrium, because an agent
that stays silent can always be made worse off (10 years)
when the other agent confesses. Hence the agent will not stay silent but confess, so
as to limit the loss.
Silent/silent (both cooperate) is Pareto optimal at 2 years for both, since going
to all other cases will make at least one agent worse off.
The Nash strategy is non-cooperative, non-communication, non-trust, while
Pareto is the cooperative, communication, trust, outcome.
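The Nash property of the confess/confess cell can be verified mechanically from the payoff matrix of Table 7.1; a small sketch (the encoding of the matrix as numpy arrays is ours):

import numpy as np

# Payoffs (negative years in prison) for the row player; index 0 = confess, 1 = silent.
payoff_a = np.array([[-5,   0],
                     [-10, -2]])
payoff_b = payoff_a.T  # the game is symmetric

def is_nash(a, b):
    # (a, b) is a Nash equilibrium if neither agent gains by deviating unilaterally.
    return (payoff_a[a, b] >= payoff_a[1 - a, b] and
            payoff_b[a, b] >= payoff_b[a, 1 - b])

print(is_nash(0, 0))  # True : confess/confess is the Nash equilibrium
print(is_nash(1, 1))  # False: silent/silent is not (each agent is tempted to defect)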
The prisoner’s dilemma is a one-time game. What would happen if we play this
game repeatedly, being able to identify and remember the choices of our opponent?
Could some kind of communication or trust arise, even if the setting is initially
non-cooperative?
This question is answered by the iterated version of the prisoner’s dilemma. Inter-
est in the iterated prisoner’s dilemma grew after a series of publications following a
computer tournament [36, 37]. The tournament was organized by political scientist
Robert Axelrod, around 1980. Game theorists were invited to send in computer
programs to play iterated prisoner’s dilemmas. The programs were organized in a
tournament and played each other hundreds of times. The goal was to see which
strategy would win, and if cooperation would emerge in this simplest of settings. A
range of programs were entered, some using elaborate response strategies based
on psychology, while others used advanced machine learning to try to predict the
opponent’s actions.
Surprisingly, one of the simplest strategies won. It was submitted by Anatol
Rapoport, a mathematical psychologist who specialized in the modeling of social
interaction. Rapoport’s program played a strategy known as tit for tat. It would start
by cooperating (staying silent) in the first round, and in the next rounds it would
play whatever the opponent did in the previous round. Tit for tat thus rewards
cooperation with cooperation, and punishes defecting with defection—hence the
name.
Tit for tat wins if it is paired with a cooperative opponent, and does not lose
too much by stubbornly cooperating with a non-cooperative opponent. In the long
run, it either ends up in the Pareto optimum or in the Nash equilibrium. Axelrod
attributes the success of tit for tat to a number of properties. First, it is nice, that is,
it is never the first to defect. In his tournament, the top eight programs played nice
strategies. Second, tit for tat is also retaliatory: it is difficult for non-cooperative
strategies to exploit it. Third, tit for tat is forgiving: when the opponent
plays nice, it rewards the play by reverting to being nice, willing to forget past
non-cooperative behavior. Finally, the rule has the advantage of being clear and
predictable. Others easily learn its behavior, and adapt to it, by being cooperative,
leading to the mutual win/win Pareto optimum.
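The strategy itself fits in a few lines; the sketch below (the action names and the simple round loop are our own) plays two strategies against each other in an iterated prisoner's dilemma:

def tit_for_tat(my_history, opponent_history):
    # Cooperate in the first round; afterwards copy the opponent's previous move.
    if not opponent_history:
        return 'cooperate'
    return opponent_history[-1]

def play_iterated_pd(strategy_a, strategy_b, rounds=200):
    history_a, history_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        history_a.append(move_a)
        history_b.append(move_b)
    return history_a, history_b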
7.1.4 Challenges
After this brief overview of theory, let us see how well practical algorithms are
able to solve multi-agent problems. With the impressive results in single-agent
and two-agent reinforcement learning, the interest (and the expectations) in multi-
agent problems have increased. In the next section, we will have a closer look at
algorithms for multi-agent problems, but let us first look at the challenges that
these algorithms face.
Most multi-agent settings are imperfect information settings, where agents have
some private information that is not revealed to other agents. The private informa-
tion can be in the form of hidden cards whose value is unknown, such as in poker,
blackjack or bridge. In real-time strategy games players often do not see the entire
map of the game, but parts of the world are obscured. Another reason for imperfect
information can be that the rules of the game allow simultaneous moves, such as
in Diplomacy, where all agents are determining their next action at the same time,
and agents have to act without full knowledge of the other actions.
All these situations require that all possible states of the world have to be considered,
increasing the number of possible states greatly in comparison to perfect informa-
tion games. Imperfect information is best expressed as an extensive-form game. In
fact, given the size of most multi-agent systems, it quickly becomes unfeasible to
communicate and keep track of all the information of all agents, even if all agents
would make their state and intentions public (which they rarely do).
Imperfect information increases the size of the state space, and computing the
unknown outcomes quickly becomes unfeasible.
Moreover, as all agents are improving their policies according to their own interests
concurrently, the agents are faced with a nonstationary environment. The environ-
ment’s dynamics are determined by the joint action space of all agents, in which
the best policies of agents depend on the best policies of the other agents. This
mutual dependence creates an unstable situation.
In a single agent setting a single state needs to be tracked to calculate the next
action. In a multi-agent setting, all states and all agent’s policies need to be taken
into account, and mutually so. Each agent faces a moving target problem.
In multi-agent reinforcement learning the agents learn concurrently, updating
their behavior policies concurrently and often simultaneously [331]. Actions taken
by one agent affect the reward of other agents, and therefore of the next state. This
invalidates the Markov property, which states that all information that is necessary
to determine the next state is present in the current state and the agent’s action.
The powerful arsenal of single-agent reinforcement theory must be adapted before
it can be used.
To handle nonstationarity, agents must account for the joint action space of the
other agents’ actions. The size of the space increases exponentially with the number
of agents. In two-agent settings the agent has to consider all possible replies of a
single opponent to each of its own moves, increasing the state space greatly. In
multi-agent settings the number of replies increases even more, and computing
solutions to these problems quickly becomes quite expensive. A large number of
agents complicates convergence analysis and increases the computational demands
substantially.
On the other hand, agents may learn from each other, line up their goals, and
collaborate. Collaboration and group forming reduce the number of independent
agents that must be tracked.
207, 97]. Such methods are well-known from single-agent optimization problems;
indeed, population-based methods are successful in solving many complex and
large single-agent optimization problems (including stochastic gradient descent
optimization) [652]. In this section, we will see that evolutionary methods are also
a natural match for mixed multi-agent problems, although they are typically used
for cooperative problems with many homogeneous agents.
Finally, we will discuss an approach based on multi-play self-learning, in which
evolutionary and hierarchical aspects are used. Here different groups of reinforce-
ment learning agents are trained against each other. This approach has been highly
successful in the games of Capture the Flag and StarCraft, and is suitable for mixed
multi-agent problems.
Let us start with counterfactual regret minimization.
The first setting that we will discuss is the competitive setting. This setting is
still close to single- and two-agent reinforcement learning, which are also based on
competition.
algorithms.py
import random
from typing import List

import numpy as np

# KuhnPoker, Actions, and the information set class (providing get_strategy and
# cumulative_regrets) are defined elsewhere in algorithms.py.

def cfr(self,
        cards: List[str],
        history: str,
        reach_probabilities: np.ndarray,
        active_player: int) -> float:
    # Terminal node: return the payoff of the active player.
    if KuhnPoker.is_terminal(history):
        return KuhnPoker.get_payoff(history, cards)

    my_card = cards[active_player]
    info_set = self.get_information_set(my_card + history)

    # Current strategy at this information set, obtained by regret matching.
    strategy = info_set.get_strategy(reach_probabilities[active_player])
    opponent = (active_player + 1) % 2
    counterfactual_values = np.zeros(len(Actions))

    for ix, action in enumerate(Actions):
        action_probability = strategy[ix]

        # Update the reach probability of the active player for this action.
        new_reach_probabilities = reach_probabilities.copy()
        new_reach_probabilities[active_player] *= action_probability

        # Recurse into the child node; the child value is from the opponent's
        # perspective, hence the minus sign.
        counterfactual_values[ix] = -self.cfr(
            cards, history + action, new_reach_probabilities, opponent)

    # Expected value of this node under the current strategy.
    node_value = counterfactual_values.dot(strategy)

    # Accumulate counterfactual regrets, weighted by the opponent's reach probability.
    for ix, action in enumerate(Actions):
        counterfactual_regret = reach_probabilities[opponent] * (
            counterfactual_values[ix] - node_value)
        info_set.cumulative_regrets[ix] += counterfactual_regret

    return node_value

def train(self, num_iterations: int) -> float:
    util = 0
    kuhn_cards = ['J', 'Q', 'K']
    for _ in range(num_iterations):
        # Deal two random cards and run one CFR traversal from the empty history.
        cards = random.sample(kuhn_cards, 2)
        history = ''
        reach_probabilities = np.ones(2)
        util += self.cfr(cards, history, reach_probabilities, 0)
    return util
For large problems a deep learning version of the algorithm has been devel-
oped [111]. The goal of deep counterfactual regret minimization is to approximate
the behavior of the tabular algorithm without calculating regrets at each individual
information set. It generalizes across similar infosets using approximation of the
value function via a deep neural network with alternating player updates.
CFR is an algorithm for the competitive setting. The Nash-equilibrium defines the
competitive win/lose multi-agent case—the (−5, −5) situation of the prisoner’s
dilemma of Table 7.1.
We will now move to the cooperative setting. As we have seen, in a cooperative
setting, win/win situations are possible, with higher rewards, both for society as
a whole and for the individuals. The Pareto optimum for the prisoner’s dilemma
example is (−2, −2), only achievable through norms, trust or cooperation by the
agents (as close-knit criminal groups aim to achieve, for example, through a code
of silence), see also Leibo et al. [464].
The achievements in single-agent reinforcement learning inspire multi-agent
researchers to aim for similar results. However, partial observability and nonstationarity
create a computational challenge. Researchers have tried many different approaches, some of
which we will cover, although the size of problems for which the current algorithms
work is still limited. Wong et al. [845] provide a review of these approaches; open
problems in cooperative reinforcement learning are listed by Dafoe et al. [175]. First
we will discuss approaches based on single-agent reinforcement learning methods,
next we will discuss approaches based on opponent modeling, communication, and
psychology [845].
At the other extreme, we can ignore communication and nonstationarity and let
agents train separately. In this approach the agents learn an individual action-value
function and view other agents as part of the environment. This approach reduces
the computational demands at the cost of gross oversimplification, ignoring multi-
agent interaction.
An in-between approach is centralized training and decentralized execution [426].
Here agents can access extra information during training, such as other agents’
observations, rewards, gradients and parameters. However, they execute their policy
decentrally based on their local observations. The local computation and inter-agent
communication mitigate nonstationarity while still modeling partial observability
and (some) interaction. This approach stabilizes the local policy learning of agents,
even when other agents’ policies are changing [426].
When a value function is learned centrally, how should this function then be
used for decentralized execution by the agents? A popular method is value-function
factorization. The Value decomposition networks method (VDN) decomposes the
central value function as a sum of individual value functions [734], which are exe-
cuted greedily by the agents. QMIX and QTRAN are two methods that improve on
VDN by allowing nonlinear combinations [627, 719]. Another approach is Multi-
agent variational Exploration (MAVEN), which mitigates the inefficient exploration
of QMIX using a latent space model [495].
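The core of the VDN idea can be written down compactly; the sketch below is a simplified illustration of value decomposition, not the implementation of [734]:

import numpy as np

def vdn_q_tot(per_agent_q):
    # VDN: the joint action value is the sum of the individual agent utilities.
    # per_agent_q has shape (n_agents,): Q_i(o_i, a_i) for each agent i.
    return np.sum(per_agent_q)

def decentralized_actions(per_agent_q_values):
    # Each agent acts greedily on its own Q_i; because the mixing function is a
    # monotone sum, the joint greedy action also maximizes Q_tot.
    return [int(np.argmax(q_i)) for q_i in per_agent_q_values]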
Policy-based methods focus on actor critic approaches, with a centralized critic
training decentralized actors. Counterfactual multi-agent (COMA) uses such a
centralized critic to approximate the Q-function that has access to the actors that
train the behavior policies [249].
Lowe et al. [488] introduce a multi-agent version of a popular off-policy single-
agent deep policy-gradient algorithm DDPG (Sect. 4.2.7), called MADDPG. It con-
siders action policies of other agents and their coordination. MADDPG uses an
ensemble of policies for each agent. It uses a decentralized actor, centralized critic
approach, with deterministic policies. MADDPG works for both competitive and col-
laborative multi-agent problems. An extension for collision avoidance is presented
by Cai et al. [123]. A popular on-policy single-agent method is PPO. Han et al. [312]
achieve sample efficient results modeling the continuous half-cheetah task as a
model-based multi-agent problem. Their model-based, multi-agent, work is inspired
by MVE [237] (Sect. 5.2.2.1). Yu et al. [862] achieve good results in cooperative
multi-agent games (StarCraft, Hanabi, and Particle world) with MAPPO.
Modeling cooperative behavior in reinforcement learning in a way that is compu-
tationally feasible is an active area of research. Li et al. [473] use implicit coordina-
tion graphs to model the structure of interactions. They use graph neural networks
to model the coordination graphs [296] for StarCraft and traffic environments [654],
allowing scaling of interaction patterns that are learned by the graph convolutional
network.
The state space of multi-agent problems is large, yet the previous approaches tried
to learn this large space with adaptations of single-agent algorithms. Another
approach is to reduce the size of the state space, for example by explicitly modeling
opponent behavior in the agents. These models can then be used to guide the agent’s
decision making, reducing the state space that it has to traverse. Albrecht and Stone
have written a survey of approaches [14].
One approach to reduce the state space is to assume a set of stationary policies
between which agents switch [231]. The Switching agent model (SAM) [876] learns
an opponent model from observed trajectories with a Bayesian network. The Deep
reinforcement open network (DRON) [319] uses two networks, one to learn the
Q-values, and the other to learn the opponent policy representation.
Opponent modeling is related to the psychological Theory of Mind [609]. Ac-
cording to this Theory, people attribute mental states to others, such as beliefs,
intents, and emotions. Our theory of the minds of others helps us to analyze and
predict their behavior. Theory of mind also holds that we assume that the other
has theory of mind; it allows for a nesting of beliefs of the form: “I believe that you
believe that I believe” [795, 794, 796]. Building on these concepts, Learning with
opponent-learning awareness (LOLA) anticipates the opponent’s behavior [250]. Prob-
abilistic recursive reasoning (PR2) models our own and our opponent’s behavior as
a hierarchy of perspectives [834]. Recursive reasoning has been shown to lead to
faster convergence and better performance [534, 176]. Opponent modeling is also
an active area of research.
7.2.2.3 Communication
Another step towards modeling the real world is taken when we explicitly model
communication between agents. A fundamental question is how language between
agents emerges when no predefined communication protocol exists, and how syntax
and meaning evolve out of interaction [845]. A basic approach to communication is
with referential games: a sender sends two images and a message; the receiver then
has to identify which of the images was the target [457]. Language also emerges in
more complicated versions, or in negotiation between agents [126, 423].
Another area where multi-agent systems are frequently used is the study of
coordination, social dilemmas, emergent phenomena and evolutionary processes.
See, for example, [749, 219, 464]. In the card game bridge [715] bidding strategies
have been developed to signal to the other player in the team which cards a player
has [715]. In the game of Diplomacy, an explicit negotiation phase is part of each
game round [185, 429, 583, 21]. Work is ongoing to design communication-aware
variants of reinforcement learning algorithms [710, 314].
7.2.2.4 Psychology
Many of the key ideas in reinforcement learning, such as operant conditioning and
trial-and-error, originated in cognitive science. Faced with the large state space,
multi-agent reinforcement learning methods are moving towards human-like agents.
In addition to opponent modeling, studies focus on coordination, pro-social behavior,
and intrinsic motivation. A large literature exists on emergence of social norms and
cultural evolution in multi-agent systems [464, 105, 34, 181]. To deal with nonstation-
arity and large state spaces, humans use heuristics and approximation [274, 499].
However, heuristics can lead to biases and suboptimal decision-making [275]. It
is interesting to see how multi-agent modeling is discovering concepts from psy-
chology. More research in this area is likely to improve the human-like behavior of
artificial agents.
To discuss solution methods for agents in the mixed setting, we will look at one
important approach that is again inspired by biology: population-based algorithms.
Population-based methods such as evolutionary algorithms and swarm comput-
ing work by evolving (or optimizing) a large number of agents at the same time.
We will look closer at evolutionary algorithms and at swarm computing, and then
we will look at the role they play in multi-agent reinforcement learning.
Although they are best known for solving single agent optimization problems,
we will use them here to model multi-agent problems.
Let us look in more detail at how an evolutionary algorithm works (see
Alg. 7.1) [40]. First an initial population is generated. The fitness of each indi-
vidual is computed, and the fittest individuals are selected for reproduction, using
crossover and mutation to create a new generation of individuals. The least fit
individuals of the old population are replaced by the new individuals.
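In code, this loop takes roughly the following generic form (a sketch; the fitness, mutation, and crossover functions and the hyperparameters are placeholders):

import random

def evolutionary_algorithm(fitness, init_individual, mutate, crossover,
                           population_size=50, generations=100, elite_fraction=0.2):
    population = [init_individual() for _ in range(population_size)]
    for _ in range(generations):
        # Evaluate fitness and keep the fittest individuals as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[:int(elite_fraction * population_size)]
        # The least fit individuals are replaced by offspring of the parents,
        # created with crossover and (random) mutation.
        offspring = []
        while len(parents) + len(offspring) < population_size:
            a, b = random.sample(parents, 2)
            offspring.append(mutate(crossover(a, b)))
        population = parents + offspring
    return max(population, key=fitness)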
Compared to reinforcement learning, in evolutionary algorithms the agents
are typically homogeneous, in the sense that the reward (fitness) function for the
individuals is the same. Individuals do have different genes, and thus differ in
their behavior (policy). In reinforcement learning there is a single current behavior
policy, where an evolutionary approach has many candidate policies (individuals).
The fitness function can engender in principle both competitive and cooperative
behavior between individuals, although a typical optimization scenario is to select
a single individual with the genes for the highest fitness (survival of the fittest
competitor).7
Changes to genes of individuals (policies) occur explicitly via crossover and
(random) mutation, and implicitly via selection for fitness. In reinforcement learning
the reward is used more directly as a policy goal; in evolutionary algorithms the
fitness does not directly influence the policy of an individual, only its survival.
Individuals in evolutionary algorithms are passive entities that do not communi-
cate or act, although they do combine to create new individuals.
There are similarities and differences between evolutionary and multi-agent
reinforcement learning algorithms. First of all, in both approaches the goal is to find
the optimal solution, the policy that maximizes (social) reward. In reinforcement
learning this occurs by learning a policy through interaction with an environment,
in evolutionary algorithms by evolving a population through survival of the fittest.
Reinforcement learning deals with a limited number of agents whose policy deter-
mines their actions; evolutionary algorithms deal with many individuals whose
genes determine their survival. Policies are improved using a reward function that
assesses how good actions are; genes mutate and combine, and individuals are
selected using a fitness function. Policies are improved “in place” and agents do
not die; in evolutionary computation the traits (genes) of the best individuals are
7 Survival of the fittest cooperative group of individuals can also be achieved with an appropriate
fitness function [491].
selected and copied to new individuals in the next generation after which the old
generation does die.
Although different at first sight, the two approaches share many traits, including
the main goal: optimizing behavior. Evolutionary algorithms are inherently multi-
agent, and may work well in finding good solutions in large and nonstationary
sequential decision problems.
Population based training uses two methods. The first is exploit, which de-
cides whether a worker should abandon the current solution and focus on a more
promising one. The second is explore, which, given the current solution and hy-
perparameters, proposes new solutions to explore the solution space. Members
of the population are trained in parallel. Their weights 𝜃 are updated and eval
measures their current performance. When a member of the population is deemed
ready because it has reached a certain performance threshold, its weights and
hyperparameters are updated by exploit and explore, to replace the current
weights with the weights that have the highest recorded performance in the rest
of the population, and to randomly perturb the hyperparameters with noise. After
exploit and explore, iterative training continues as before until convergence.
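A compressed sketch of this exploit/explore scheme (the worker objects, eval_fn, ready_fn, and the perturbation are schematic placeholders, not the original population based training implementation):

import copy
import random

def pbt_round(population, eval_fn, perturb_fn, ready_fn):
    # One round of population based training over all members of the population.
    for worker in population:
        worker.train_some_steps()                 # ordinary gradient-based training
        worker.score = eval_fn(worker)            # eval: measure current performance
        if ready_fn(worker):                      # e.g. a performance/steps threshold
            best = max(population, key=lambda w: w.score)
            if best is not worker:
                # exploit: copy the weights and hyperparameters of the best member
                worker.weights = copy.deepcopy(best.weights)
                worker.hyperparams = copy.deepcopy(best.hyperparams)
            # explore: randomly perturb the hyperparameters with noise
            worker.hyperparams = perturb_fn(worker.hyperparams,
                                            noise=random.uniform(0.8, 1.2))
    return population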
Let us have a look at this fusion approach for training leagues of players.
agents from the population. The team employs cooperative strategies, in addition
to competing against the other teams. Agents are trained in an explicit hierarchy.
The goal of self-play league training is to find stable policies for all agents, that
maximize their team reward. In mixed and cooperative settings the teams of agents
may benefit from each others’ strength increase [488]. Self-play league learning
also uses aspects of hierarchical reinforcement learning, a topic that will be covered
in the next chapter.
In the next section we will look deeper into how self-play league learning is
implemented in specific multi-player games.
There are many variants of poker that are regularly played. No-limit Texas
hold’em is a popular variant; the two-player version is called Heads Up and is
easier to analyze because no opponent collusion can occur. Heads-up no-limit Texas
hold’em (HUNL) has been the primary AI benchmark for imperfect-information
game play for several years.
Poker has hidden information (the face-down cards). Because of this, agents
are faced with a large number of possible states; poker is a game that is far more
complex than chess or checkers. The state space of the two-person HUNL version is reported to be around $10^{161}$ [112]. A further complication is that during the course of the game information is revealed by players through their bets; high bets indicate good cards, or the wish of the player to make the opponent believe that this is the case (bluffing). Therefore, a player must mix between betting high on good cards and occasionally doing the opposite, so that the opponent cannot infer too much and counteract.
In 2017, one of the top two-player poker programs, Libratus, defeated top human
professionals in HUNL in a 20-day, 120,000-hand competition featuring a $200,000
prize pool. Brown et al. [112] describe the architecture of Libratus. The program
consists of three main modules, one for computing a quick CFR Nash-policy using
a smaller version of the game, a second module for constructing a finer-grained
strategy once a later stage of the game is reached, and a third module to enhance
the first policy by filling in missing branches.
In the experiment against top human players, Libratus analyzed the bet sizes that
were played most often by its opponents at the end of each day of the competition.
The program would then calculate a response overnight, in order to improve as
the competition proceeded.
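CFR itself traverses the game tree of an abstracted version of the game; at its core lies regret matching, which turns accumulated regrets into a strategy. The following sketch shows regret matching at a single decision point, for rock-paper-scissors against a fixed opponent (a simplification for illustration, not the Libratus implementation; in self-play, with both players updating, the average strategy approaches a Nash equilibrium).

import numpy as np

def regret_matching(cumulative_regret):
    # Turn accumulated positive regrets into a probability distribution over actions.
    positive = np.maximum(cumulative_regret, 0.0)
    if positive.sum() > 0:
        return positive / positive.sum()
    return np.full(len(cumulative_regret), 1.0 / len(cumulative_regret))  # uniform to start

payoff = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])   # rock, paper, scissors payoffs
opponent = np.array([0.4, 0.3, 0.3])                      # fixed opponent strategy
regret = np.zeros(3)
for _ in range(10_000):
    strategy = regret_matching(regret)
    action_values = payoff @ opponent                     # expected value of each action
    regret += action_values - strategy @ action_values    # regret of not having played each action
print(regret_matching(regret))                            # converges to the best response (always paper)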
Fig. 7.5 Six Strategies in Hide and Seek: Running and Chasing; Fort Building; Ramp Use; Ramp
Defense; Box Surfing; Surf Defense (left-to-right, top-to-bottom) [49]
Hide and Seek features cooperation (inside the team) and competition (between
the hiders and the seekers). It is interesting to see how easily cooperation emerges. The game does not provide explicit communication with which the team could coordinate; all cooperative behavior emerges out of basic interaction between agents
that are guided by their reward functions. Cooperative strategies thus emerge out
of the game design: the homogeneous reward functions for each team and the
environment in which blocks are present and follow laws of physics.
During play the agents essentially construct an autocurriculum for them-
selves [463, 49]. Six different behavior strategies are reported, each more advanced than the previous, increasing the competitive pressure on the opponent to find counter-strategies (see Fig. 7.5).
Baker et al. [49] report that initially, the hiders and seekers learn the basic
strategy of running away and chasing. However, after much training (25 million
episodes), hiders start to use boxes to construct shelters behind which they hide.
Then, after another 75 million episodes, the seekers learn to move and use ramps to
jump over obstacles into the shelter. A mere 10 million episodes later, the hiders learn to defend by moving the ramps to the edge and locking them in place, out of range of the shelters. Then, after a long 270 million episodes of training, the seekers learn box-surfing: they move a box to the edge of the play area next to the locked ramps; one seeker then uses the ramp to climb on top of the box while the other seekers push it to the shelter, where the seeker can peek over the edge and see the hiders. Finally, in response, the hiders lock all of the boxes in place before building their shelter, and they are safe from the seekers.
The Hide and Seek experiment is interesting because of the emergence of diverse
behavior strategies. The strategies emerged out of a basic reward function and
random exploration (see also the “reward is enough” argument [706]).
The emergence of strategies out of basic reward and exploration suggests an
evolutionary process. However, Hide and Seek does not employ population-based
training or evolutionary algorithms, in contrast to the work in Capture the Flag, in
the next section.
The world around us exhibits a mix of competitive and cooperative behavior. Team
collaboration is an important aspect of human life, and it has been studied exten-
sively in biology, sociology and artificial intelligence. It emerges (evolves) out of
the most basic settings—the need to achieve an ambitious goal—as we just saw. In
recent years many research groups have studied the mixed multi-agent model in
real-time strategy games, such as Capture the Flag and StarCraft. We will discuss
both games, and start with Capture the Flag.
First we will discuss the game Capture the Flag, which is played in the Quake III Arena environment (see Fig. 7.6). Jaderberg et al. have reported on an extensive experiment with
this game [372], in which the agents learn from scratch to see, act, cooperate,
and compete. In this experiment the agents are trained with population-based
self-play [373], Alg. 7.2. The agents in the population are all different (they have
different genes). The population learns by playing against each other, providing
increased diversity of teammates and opponents, and a more stable and faster
learning process than traditional single-agent deep reinforcement learning methods.
In total 30 different bots were created and pitted against each other. Agents are part
of a team, and the reward functions form a hierarchy. A two-layer optimization
process optimizes the internal rewards for winning, and uses reinforcement learning
on the internal rewards to learn the policies.
In Capture the Flag, the bots start by acting randomly. After 450,000 games, well-performing bot strategies had emerged, and the bots developed cooperative behaviors, such as following teammates in order to outnumber opponents and loitering near the enemy base when their teammate has the flag. Again, as in Hide and Seek,
cooperative strategies emerged out of the basic rules, by combining environment
feedback and survival of the fittest.
The work on Capture the Flag is notable since it demonstrated that with only
pixels as input an agent can learn to play competitively in a rich multi-agent
environment. To do so it used a combination of population-based training, internal
reward optimization, and temporally hierarchical reinforcement learning (see next
chapter).
StarCraft
The final game that we will discuss in this chapter is StarCraft. StarCraft is a
multi-player real-time strategy game of even larger complexity. The state space
has been estimated to be on the order of $10^{1685}$ [572], a very large number. StarCraft features multi-agent decision making under uncertainty, spatial and temporal
reasoning, competition, team-collaboration, opponent modeling, and real-time plan-
ning. Fig. 1.6 shows a picture of a StarCraft II scene.
Research on StarCraft has been ongoing for some time [572]; a special StarCraft
multi-agent challenge has been introduced [654]. A team from DeepMind has
created a program called AlphaStar. In a series of test matches held in December
2018, DeepMind’s AlphaStar beat two top players in two-player single-map matches,
using a different user interface.
AlphaStar plays the full game of StarCraft II. The neural network was initially trained by supervised learning on anonymized human games, and was then further trained by playing against other AlphaStar agents, using a population-based version of self-play reinforcement learning [812, 373].10 These agents are used to
seed a multi-agent reinforcement learning process. A continuous competitive league
was created, with the agents of the league playing games in competition against
each other. By branching from existing competitors, new competitors were added.
Agents learn from games against other competitors. Population-based learning was
taken further, creating a process that explores the very large space of StarCraft
game play by pitting agents against strong opponent strategies, and retaining strong
early strategies.
Diversity in the league is increased by giving each agent its own learning objec-
tive, such as which competitors it should focus on and which game unit it should
build. A form of prioritized league-self-play actor-critic training is used, called
prioritized fictitious self-play—details are in [812]. AlphaStar was trained on a
custom-built scalable distributed training system using Google’s tensor processing
units (TPU). The AlphaStar league was run for 14 days. In this training, each agent
experienced the equivalent of 200 years of real-time StarCraft play.
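The matchmaking idea behind prioritized fictitious self-play can be sketched as follows (a simplification, not the AlphaStar implementation; the win_rate function is assumed to return empirical win probabilities estimated from recorded league games).

import random

def sample_opponent(league, learner, win_rate, power=2.0):
    # Prefer opponents that the learner loses to most often.
    candidates = [agent for agent in league if agent is not learner]
    weights = [(1.0 - win_rate(learner, opponent)) ** power for opponent in candidates]
    if sum(weights) == 0:
        return random.choice(candidates)       # learner beats everyone: pick uniformly
    return random.choices(candidates, weights=weights, k=1)[0]

After a learner has trained for a while, a frozen copy of its parameters is added to the league as a new competitor (branching), so that strong early strategies are retained.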
In StarCraft players can choose to play one of three alien races: Terran, Zerg
or Protoss. AlphaStar was trained to play Protoss only, to reduce training time,
although the same training pipeline could be applied to any race. AlphaStar was
first tested against a human grandmaster named TLO, a top professional Zerg player
and a grandmaster-level Protoss player. The human player remarked: “I was surprised by how strong the agent was. AlphaStar takes well-known strategies and turns them on their head. The agent demonstrated strategies I had not thought of before, which means there may still be new ways of playing the game that we haven’t fully explored yet.”
Many of the research results reported in these final chapters are the product of significant efforts by research teams working on large and complicated games. These games
represent the frontier of artificial intelligence, and research teams use all available
computational and software engineering power that they can acquire to get the best
results. Also, typically a large amount of time is spent in training, and in finding
the right hyperparameters for the learning to work.
Replicating results at this scale is highly challenging; most replication efforts therefore focus on reproducing the results at a smaller scale, with more modest computational resources.
In this section we will try to replicate some aspects with modest computational
requirements. We will focus on Hide and Seek (Sect. 7.3.2). The original code for
the Hide and Seek experiments is on GitHub.11 Please visit and install the code.
10Similar to the first approach in AlphaGo, where self-play reinforcement learning was also
bootstrapped by supervised learning from human games.
11 https://github.com/openai/multi-agent-emergence-environments
Hide and Seek uses MuJoCo and the mujoco-worldgen package. Install them and the dependencies following the instructions in the repository. You can then view a basic environment with
bin/examine.py base
Multiplayer Environment
We will now summarize the chapter and provide pointers to further reading.
Summary
Multi-agent reinforcement learning problems feature competition, cooperation, and mixed behavior. The field is closely related to game theory—the
basis of the study of rational behavior in economics. A famous problem of game
theory is the prisoner’s dilemma. A famous result of non-cooperative game theory
is the Nash equilibrium, which is defined as the joint strategy where no player has
anything to gain by changing their own strategy. A famous result from cooperative
game theory is the Pareto optimum, the situation where no individual can be better
off without making someone else worse off.
When agents have private information, a multi-agent problem is partially observ-
able. Multi-agent problems can be modeled by stochastic games or as extensive form
games. The behavior of agents is ultimately determined by the reward functions, which can be homogeneous or heterogeneous. When agents have different reward functions, multi-agent reinforcement learning becomes multi-objective reinforcement learning.
The regret of an action is the amount of reward that the agent misses by not choosing the action with the highest payoff. A regret minimization algorithm
is the stochastic and multi-agent equivalent of minimax. Counterfactual-regret
minimization is an approach for finding Nash strategies in competitive multi-agent
games, such as poker.
Variants of single-agent algorithms are often used for cooperative multi-agent
situations. The large state space, nonstationarity, and partial observability preclude solving large problems directly. Other promising approaches are opponent modeling
and explicit communication modeling.
Population-based methods such as evolutionary algorithms and swarm intelli-
gence are used frequently in multi-agent systems. These approaches are suitable for
homogeneous reward functions and competitive, cooperative, and mixed problems.
Evolutionary methods evolve a population of agents, combining behaviors, and
selecting the best according to some fitness function. Evolutionary methods are a
natural fit for parallel computers and are among the most popular and successful
optimization algorithms. Swarm intelligence often introduces (rudimentary) forms
of communication between agents, such as in ant colony optimization (ACO), where agents communicate through artificial pheromones to indicate which part of the solution space they have traveled.
For some of the most complicated problems that have recently been tackled, such
as StarCraft, Capture the Flag, and Hide and Seek, hierarchical and evolutionary
principles are often combined in league training, where leagues of teams of agents
are trained in a self-play fashion, and where the fittest agents survive. Current achievements require large amounts of computational power; future work is trying to reduce these requirements.
Further Reading
Surveys of multi-agent reinforcement learning can be found in [291, 122, 857, 13, 14, 749, 789, 332, 331, 845]. After Littman [482], Shoham
et al. [697] look deeper into MDP modeling.
The classic work on game theory is Von Neumann and Morgenstern [817].
Modern introductions are [544, 180, 258]. Game theory underlies much of the
theory of rational behavior in classical economics. Seminal works of John Nash
are [554, 555, 553]. In 1950 he introduced the Nash equilibrium in his dissertation
of 28 pages, which won him the Nobel prize in Economics in 1994. A biography and
film have been made about the life of John Nash [552].
The game of rock-paper-scissors plays an important role in game theory, and
the study of computer poker [821, 89, 640, 113]. Prospect theory [389], introduced
in 1979, studies human behavior in the face of uncertainty, a topic that evolved into
the field of behavioral economics [842, 538, 128]. Gigerenzer introduced fast and
frugal heuristics to explain human decision making [274].
For more intriguing works on the field of evolution of cooperation and the
emergence of social norms, see, for example [105, 37, 34, 33, 35, 330, 335].
More recently, multi-objective reinforcement learning has been studied; a survey is [483]. In this field the more realistic assumption is adopted that agents have different reward functions, leading to different Pareto optima [801, 536, 840].
Oliehoek et al. have written a concise introduction to decentralized multi-agent
modelling [568, 567].
Counterfactual regret minimization has been fundamental for the success in
computer poker [112, 113, 378, 880]. An often-used Monte Carlo version is published
in [450]. A combination with function approximation is studied in [111].
Evolutionary algorithms have delivered highly successful optimization algo-
rithms. Some entries to this vast field are [223, 40, 42, 41, 43]. A related field is
swarm intelligence, where communication between homogeneous agents takes place [405, 217, 207, 78]. For further research in multi-agent systems refer
to [846, 792]. For collective intelligence, see, for example, [405, 208, 257, 847, 598].
Many other works report on evolutionary algorithms in a reinforcement learning
setting [730, 408, 535, 837, 148, 163, 652, 841]. Most of these approaches concern
single agent approaches, although some are specifically applied to multi agent
approaches [497, 407, 712, 486, 373, 413].
Research into benchmarks is active. Among interesting approaches are Pro-
cedural content generation [780], MuJoCo Soccer [486], and the Obstacle Tower
Challenge [383]. There is an extensive literature on computer poker. See, for ex-
ample, [89, 90, 276, 640, 53, 103, 104, 656, 532]. StarCraft research can be found in [814, 654, 812, 572]. Other game studies are [758, 782].
Exercises
Below are a few quick questions to check your understanding of this chapter. For
each question a simple, single sentence answer is sufficient.
Questions
Exercises
Here are some programming exercises to become more familiar with the methods
that we have covered in this chapter.
1. CFR Implement counterfactual regret minimization for a Kuhn poker player. Play
against the program, and see if you can win. Do you see possibilities to extend it
to a more challenging version of poker?
2. Hide and Seek Implement Hide and Seek with cooperation and competition. Add
more types of objects. See if other cooperation behavior emerges.
3. Ant Colony Use the DeepMind Control Suite to set up a collaborative level and
a competitive level, and implement Ant Colony Optimization. Find problem
instances on the web, or in the original paper [208]. Can you implement more
swarm algorithms?
4. Football Go to the Google Football blog12 and implement algorithms for football
agents. Consider using a population-based approach.
5. StarCraft Go to the StarCraft Python interface,13 and implement a StarCraft
player (highly challenging) [654].
12 https://ai.googleblog.com/2019/06/introducing-google-research-football.html
13 https://github.com/deepmind/pysc2
Chapter 8
Hierarchical Reinforcement Learning
In this chapter we will start with an example to capture the flavor of hierarchical
problem solving. Next, we will look at a theoretical framework that is used to model
hierarchical algorithms, and at a few examples of algorithms. Finally, we will look
deeper at hierarchical environments.
The chapter ends with exercises, a summary, and pointers to further reading.
Core Concepts
Core Problem
Core Algorithms
Planning a Trip
Let us see how we plan a major trip to visit a friend that lives in another city, with
a hierarchical method. The method would break up the trip in different parts. The
first part would be to walk to your closet and get your things, then get your bike.
You would go to the train station, and park your bike. You would then take the train
to the other city, possibly changing trains en route if that is necessary to get you there faster. Arriving in the city, your friend would meet you at the station and
would drive you to their house.
A “flat” reinforcement learning method would have at its disposal actions consisting of footsteps in certain directions. This makes for a large space of possible policies, although at the fine grain at which the policy is planned—individual footsteps—the method would surely be able to find the optimal shortest route.
The hierarchical method has at its disposal a wider variety of actions—macro
actions: it can plan a bike ride, a train trip, and getting a ride from your friend. The
route may not be the shortest possible (who knows if the train follows the shortest
route between the two cities) but planning will be much faster than painstakingly
optimizing footstep by footstep.
8.1.1 Advantages
We will start with the advantages of hierarchical methods [246]. First of all, hierar-
chical reinforcement learning simplifies problems through abstraction. Problems
are abstracted into a higher level of aggregation. Agents create subgoals and solve fine-grained subtasks first. Actions are abstracted into larger macro actions to solve these subgoals; agents use temporal abstraction.
Second, temporal abstraction increases sample efficiency: fewer interactions with the environment are needed, because subpolicies are learned to solve subtasks. Since subtasks are learned separately, they can be transferred to other problems, supporting transfer learning.
Third, subtasks reduce brittleness due to overspecialization of policies. Policies
become more general, and are able to adapt to changes in the environment more
easily.
Fourth, and most importantly, the higher level of abstraction allows agents to
solve larger, more complex problems. This is a reason why for complex multi-agent
games such as StarCraft, where teams of agents must be managed, hierarchical
approaches are used.
Multi-agent reinforcement learning often exhibits a hierarchical structure; prob-
lems can be organized such that each agent is assigned its own subproblem, or the
agents themselves may be structured or organized in teams or groups. There can be
cooperation within the teams or competition between the teams, or the behavior
can be fully cooperative or fully competitive.
Flet-Berliac [246], in a recent overview, summarizes the promise of hierarchical
reinforcement learning as follows: (1) achieve long-term credit assignment through
faster learning and better generalization, (2) allow structured exploration by explor-
ing with sub-policies rather than with primitive actions, and (3) perform transfer
learning because different levels of hierarchy can encompass different knowledge.
8.1.2 Disadvantages
Conclusion
For a long time, finding good subgoals has been a major challenge. With recent
algorithmic advances, especially in function approximation, important progress
has been made. We will discuss these advances in the next section.
The set of options is denoted as Ω. In the options framework, there are thus two
types of policies: the policy over options 𝜋Ω (𝜔|𝑠) and the subpolicies 𝜋 𝜔 (𝑎|𝑠).
The subpolicies 𝜋 𝜔 are short macros to get from 𝐼 𝜔 to 𝛽 𝜔 quickly, using the
previously learned macro (subpolicy). Temporal abstractions mix actions of different
granularity, short and long, primitive action and subpolicy. They allow traveling
from 𝐼 to 𝛽 without additional learning, using a previously provided or learned
subpolicy.
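A minimal sketch of one agent step under the options framework may clarify the two levels (illustrative only; each option's subpolicy pi, termination condition beta, and the policy over options are assumed to be given).

import random
from collections import namedtuple

Option = namedtuple("Option", ["initiation", "pi", "beta"])   # I_omega, pi_omega, beta_omega

def act_with_options(state, policy_over_options, current_option=None):
    # Terminate the active option with probability beta(state), or start one if none is active.
    if current_option is None or random.random() < current_option.beta(state):
        current_option = policy_over_options(state)            # pi_Omega(omega | s): choose an option
    action = current_option.pi(state)                          # pi_omega(a | s): the option's subpolicy
    return action, current_option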
One of the problems for which the options framework works well is room
navigation in a grid world (Fig. 8.1). In a regular reinforcement learning problem the
agent would learn to move step by step. In hierarchical reinforcement learning the
doors between rooms are bottleneck states, and are natural subgoals. Macro actions
(subpolicies) are to move to a door in one multi-step action (without considering
alternative actions along the way). Then we can go to a different room, if we choose
the appropriate option, using another macro, closer to where the main goal is
located. The four-room problem from the figure is subsequently used in many
research works in hierarchical reinforcement learning.
In the original options framework the process of identifying the subgoals (the
hallways, doors) is external. The subgoals have to be provided manually, or by other
methods [608, 315, 443, 726]. Subsequently, methods have been published to learn
these subgoals.
Options are goal-conditioned subpolicies. More recently a generalization to
parameterized options has been presented in the universal value function, by Schaul
et al. [664]. Universal value functions provide a unified theory for goal-conditioned
parameterized value approximators 𝑉 (𝑠, 𝑔, 𝜃).
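The essence of a universal value function is that the goal is an extra input. A tabular stand-in for V(s, g, θ) (for illustration only; Schaul et al. use a neural network approximator) could look like this:

from collections import defaultdict

class UniversalValueFunction:
    # One value estimate per (state, goal) pair, updated towards a supplied target.
    def __init__(self, alpha=0.1):
        self.v = defaultdict(float)
        self.alpha = alpha

    def value(self, state, goal):
        return self.v[(state, goal)]

    def update(self, state, goal, target):
        self.v[(state, goal)] += self.alpha * (target - self.v[(state, goal)])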
Name | Agent | Environment | Finds Subgoals | Finds Subpolicies | Ref
STRIPS | Macro-actions | STRIPS planner | - | - | [240]
Abstraction Hier. | State abstraction | Scheduling/plan. | + | + | [414]
HAM | Abstract machines | MDP/maze | - | - | [587]
MAXQ | Value function decomposition | Taxi | - | - | [196]
HTN | Task networks | Block world | - | - | [171, 271]
Bottleneck | Randomized search | Four room | + | + | [726]
Feudal | manager/worker, RNN | Atari | + | + | [810, 182]
Self p. goal emb. | self play subgoal | Mazebase, AntG | + | + | [731]
Deep Skill Netw. | deep skill array, policy distillation | Minecraft | + | + | [765]
STRAW | end-to-end implicit plans | Atari | + | + | [809]
HIRO | off-policy | Ant maze | + | + | [545]
Option-Critic | policy-gradient | Four room | + | + | [45]
HAC | actor critic, hindsight exper. repl. | Four room ant | + | + | [470, 19]
Modul. pol. hier. | bit-vector, intrinsic motivation | FetchPush | + | + | [589]
h-DQN | intrinsic motivation | Montezuma’s R. | - | + | [434]
Meta l. sh. hier. | shared primitives, strength metric | Walk, crawl | + | + | [255]
CSRL | model-based transition dynamics | Robot tasks | - | + | [476]
Learning Repr. | unsup. subg. disc., intrinsic motiv. | Montezuma’s R. | + | + | [618]
AMIGo | Adversarially intrinsic goals | MiniGrid PCG | + | + | [125]
Table 8.1 Hierarchical Reinforcement Learning Approaches
Whether the hierarchical method improves over a traditional flat method depends
on a number of factors. First, there should be enough repeating structure in the
domain to be exploited (are there many rooms?), second, the algorithm must find
appropriate subgoals (can it find the doors?), third, the options that are found
must repeat many times (is the puzzle played frequently enough for the option-
finding cost to be offset?), and, finally, subpolicies must be found that give enough
improvement (are the rooms large enough that options outweigh actions?).
The original options framework assumes that the structure of the domain is
obvious, and that the subgoals are given. When this is not the case, then the subgoals
must be found by the algorithm. Let us look at an overview of approaches, both
tabular and with function approximation.
Fig. 8.2 Termination Probabilities Learned with 4 Options by Option-Critic [45]; Options Tend to
Favor Squares Close to Doors
Conclusion
Looking back at the list of advantages and disadvantages at the start of this chap-
ter, we see a range of interesting and creative ideas that achieve the advantages
(Sect. 8.1.1) while avoiding the disadvantages (Sect. 8.1.2). In general, the tabular
methods are restricted to smaller problems, and often need to be provided with
subgoals. Most of the newer deep learning methods find subgoals by themselves, for
which then subpolicies are found. Many promising methods have been discussed,
and most are reported to outperform one or more flat baseline algorithms.
Fig. 8.3 Four Rooms, and One Room with Subpolicy and Subgoal [744]
Sutton et al. [744] presented the four-rooms problem to illustrate how the options model works (Fig. 8.3; left panel). This environment has been used frequently
in subsequent papers on reinforcement learning. The rooms are connected by
hallways. Options point the way to these hallways, which lead to the goal 𝐺 2 of the
environment. A hierarchical algorithm should identify the hallways as the subgoals,
and create subpolicies for each room to go to the hallway subgoal (Fig. 8.3; right
panel).
The four-room environment is a toy environment with which algorithms can be
explained. More complex versions can be created by increasing the dimensions of
the grids and by increasing the number of rooms.
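A small sketch of such a grid can be generated as follows (the layout is simplified here; the original four-rooms grid of Sutton et al. has rooms of unequal size).

import numpy as np

def four_rooms(size=11):
    # Returns a size x size grid: 1 = wall, 0 = free cell.
    grid = np.ones((size, size), dtype=int)
    grid[1:-1, 1:-1] = 0                      # open interior surrounded by border walls
    mid = size // 2
    grid[mid, :] = 1                          # horizontal dividing wall
    grid[:, mid] = 1                          # vertical dividing wall
    for r, c in [(mid, mid // 2), (mid, mid + mid // 2 + 1),
                 (mid // 2, mid), (mid + mid // 2 + 1, mid)]:
        grid[r, c] = 0                        # four hallway cells: the natural subgoals
    return grid

print(four_rooms())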
The Hierarchical actor critic paper uses the four-room environment as a basis
for a robot to crawl through. The agent has to learn both the locomotion task and
solving the four-room problem (Fig. 8.4). Other environments that are used for
hierarchical reinforcement learning are robot tasks, such as shown in Fig. 8.5 [632].
One of the most difficult situations for reinforcement learning is when there is little
reward signal, and when it is delayed. The game of Montezuma’s Revenge consists
of long stretches in which the agent has to walk without the reward changing.
Without smart exploration methods this game cannot be solved. Indeed, the game
has long been a test bed for research into goal-conditioned and exploration methods.
For the state in Fig. 8.6, the player has to go through several rooms while col-
lecting items. However, to pass through doors (top right and top left corners), the
player needs the key. To pick up the key, the player has to climb down the ladders
and move towards the key. This is a long and complex sequence before receiving the
reward increments for collecting the key. Next, the player has to go to the door to
collect another increase in reward. Flat reinforcement learning algorithms struggle
with this environment. For hierarchical reinforcement learning the long stretches without a reward are an opportunity to show the usefulness of options: jumping through the state space from one state where the reward changes to the next.
To do so, the algorithm has to be able to identify the key as a subgoal.
Rafati and Noelle [618] learn subgoals in Montezuma’s Revenge, and so do
Kulkarni et al. [434]. Learning to choose promising subgoals is a challenging problem
by itself. Once subgoals are found, the subpolicies can be learned by introducing
a reward signal for achieving the subgoals. Such intrinsic rewards are related to
intrinsic motivation and the psychological concept of curiosity [29, 576].
Figure 8.7 illustrates the idea behind intrinsic motivation. In ordinary reinforce-
ment learning, a critic in the environment provides rewards to the agent. When
the agent has an internal environment where an internal critic provides rewards,
these internal rewards provide an intrinsic motivation to the agent. This mechanism
aims to more closely model exploration behavior in animals and humans [711]. For
example, during curiosity-driven activities, children use knowledge to generate
intrinsic goals while playing, building block structures, etc. While doing this, they
construct subgoals such as putting a lighter entity on top of a heavier entity in
order to build a tower [434, 711]. Intrinsic motivation is an active field of research.
A recent survey is [29].
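A minimal sketch of how such an intrinsic critic can be combined with the environment reward is shown below (the subgoal test and the size of the bonus are illustrative choices, not the method of [434] or [618]).

def shaped_reward(state, env_reward, subgoal, bonus=1.0):
    # Extrinsic reward from the environment plus an intrinsic bonus for reaching the current subgoal.
    intrinsic = bonus if state == subgoal else 0.0
    return env_reward + intrinsic

In a hierarchical setup such as h-DQN, the subpolicy that pursues a subgoal is typically trained on the intrinsic reward, while the higher-level controller that selects subgoals is trained on the extrinsic reward.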
Montezuma’s Revenge has also been used as benchmark for the Go-Explore
algorithm, that has achieved good results in sparse reward problems, using a goal-
conditioned policy with cell aggregation [221]. Go-Explore performs a planning-like
form of backtracking, combining elements of planning and learning in a different
way than AlphaZero.
The research reported in this chapter is of a more manageable scale than in some
other chapters. Environments are smaller, computational demands are more rea-
sonable. Four-room experiments and experiments with the movement of single robot arms invite experimentation and tweaking. Again, as in the other chapters, the code of most papers can be found online on GitHub.
Hierarchical reinforcement learning is well suited for experimentation because the environments are small, and the concepts of hierarchy, team, and subgoal are intuitively appealing. Debugging one’s implementation should be just that bit easier when the desired behavior of the different pieces of code is clear.
To get you started with hierarchical reinforcement learning we will go to HAC: Hierarchical actor critic [470]. Algorithm 8.1 shows the pseudocode, in which subgoals are relabeled in hindsight [470]. A blog1 with animations has been written, a video2 has been made of the results, and the code can be found on GitHub.3
To run the hierarchical actor critic experiments, you need MuJoCo and the required Python wrappers. The code is TensorFlow 2 compatible. When you have cloned the repository, run the training command described in the repository, which will train a UR5 reacher agent with a 3-level hierarchy. Here is a video4 that shows what it should look like after 450 training episodes. You can watch your trained agent with the test command described in the repository.
We will now summarize the chapter and provide pointers to further reading.
Summary
A typical reinforcement learning algorithm moves in small steps. For a state, it picks
an action, gives it to the environment for a new state and a reward, and processes
the reward to pick a new action. Reinforcement learning works step by small step.
In contrast, consider the following problem: in the real world, when we plan a trip
from A to B, we use abstraction to reduce the state space, to be able to reason at a
higher level. We do not reason at the level of footsteps we need to take, but we first
decide on the mode of transportation to get close to our goal, and then we fill in
the different parts of the journey with small steps.
Hierarchical reinforcement learning tries to mimic this idea: conventional rein-
forcement learning works at the level of a single state; hierarchical reinforcement
learning performs abstraction, solving subproblems in sequence. Temporal ab-
straction is described in a paper by Sutton et al. [744]. Hierarchical reinforcement
learning uses the principles of divide and conquer to make solving large problems
1 http://bigai.cs.brown.edu/2019/09/03/hac.html
2 https://www.youtube.com/watch?v=DYcVTveeNK0
3 https://github.com/andrew-j-levy/Hierarchical-Actor-Critc-HAC-
4 https://www.youtube.com/watch?v=R86Vs9Vb6Bc
feasible. It finds subgoals in the space that it solves with subpolicies (or macros or
options).
Despite the appealing intuition, progress in hierarchical reinforcement learning
was initially slow. Finding new subgoals and subpolicies is a computationally intensive problem that is exponential in the number of actions, and in some situations it is quicker to use conventional “flat” reinforcement learning methods, unless domain knowledge can be exploited. The advent of deep learning provided a boost to hierarchical reinforcement learning, and important tasks such as learning subgoals automatically and finding subpolicies are progressing greatly.
Although mostly studied for single-agent reinforcement learning, hierarchical methods are also used in multi-agent problems. Multi-agent problems often feature agents that work in teams, cooperating within a team and competing between teams. Such an agent hierarchy is a natural fit for hierarchical solution methods. Hierarchical reinforcement learning remains a promising technique.
Further Reading
Hierarchical reinforcement learning, and subgoal finding, have a rich and long
history [247, 587, 744, 608, 196, 315, 443, 56, 591]; see also Table 8.1. Macro-actions
are a basic approach [315, 625]. Others, using macros, are [854, 851, 214]. The
options framework has provided a boost to the development of the field [744].
Other approaches are MAXQ [195] and Feudal networks [810]. Recent methods are
Option-critic [45] and Hierarchical actor-critic [470].
Earlier tabular approaches are [247, 587, 744, 608, 196, 315, 443].
There are many deep learning methods for finding subgoals and subpoli-
cies [470, 581, 247, 597, 545, 255, 805, 664, 734, 177]. Andrychowicz et al. [19] intro-
duce hindsight experience replay, which can improve performance for hierarchical
methods.
Intrinsic motivation is a concept from developmental neuroscience that has come to reinforcement learning with the purpose of providing learning signals in
large spaces. It is related to curiosity. Botvinick et al. [101] have written an overview
of hierarchical reinforcement learning and neuroscience. Aubret et al. [29] provide
a survey of intrinsic motivation for reinforcement learning. Intrinsic motivation is
used by [434, 618]. Intrinsic motivation is closely related to goal-driven reinforce-
ment learning [664, 649, 575, 576, 577].
Exercises
Questions
Below are some quick questions to check your understanding of this chapter. For
each question a simple, single sentence answer should be sufficient.
Exercises
Let us go to the programming exercises to become more familiar with the methods
that we have covered in this chapter.
1. Four Rooms Implement a hierarchical solver for the four-rooms environment. You
can code the hallway subgoals using domain knowledge. Use a simple tabular,
planning, approach. How will you implement the subpolicies?
2. Flat Implement a flat planning or Q-learning based solver for 4-rooms. Compare
this program to the tabular hierarchical solver. Which is quicker? Which of the
two does fewer environment actions?
3. Sokoban Implement a Sokoban solver using a hierarchical approach (challeng-
ing). The challenge in Sokoban is that there can be dead-ends in the game that you create, rendering the game unsolvable (also see the literature [695, 290]).
Recognizing these dead-end moves is important. What are the subgoals? Rooms,
or each box-task is one subgoal, or can you find a way to code dead-ends as
subgoal? How far can you get? Find Sokoban levels.567
4. Petting Zoo Choose one of the easier multi-agent problems from the Petting
Zoo [760], introduce teams, and write a hierarchical solver. First try a tabular
planning approach, then look at hierarchical actor critic (challenging).
5. StarCraft The same as the previous exercise, only now with StarCraft (very
challenging).
5 http://sneezingtiger.com/sokoban/levels.html
6 http://www.sokobano.de/wiki/index.php?title=Level_format
7 https://www.sourcecode.se/sokoban/levels
Chapter 9
Meta-Learning
Although current deep reinforcement learning methods have obtained great suc-
cesses, training times for most interesting problems are high; they are often mea-
sured in weeks or months, consuming time and resources—as you may have noticed
while doing some of the exercises at the end of the chapters.
Model-based methods aim to reduce the sample complexity in order to speed
up learning—but still, for each new task a new network has to be trained from
scratch. In this chapter we turn to another approach, which aims to reuse information learned in earlier training tasks on closely related problems. When humans learn
a new task, they do not learn from a blank slate. Children learn to walk and then
they learn to run; they follow a training curriculum, and they remember. Human
learning builds on existing knowledge, using knowledge from previously learned
tasks to facilitate the learning of new tasks. In machine learning such transfer of
previously learned knowledge from one task to another is called transfer learning.
We will study it in this chapter.
Humans learn continuously. When learning a new task, we do not start from
scratch, zapping our minds first to emptiness. Previously learned task-representa-
tions allow us to learn new representations for new tasks quickly; in effect, we
have learned to learn. Understanding how we (learn to) learn has intrigued artificial
intelligence researchers since the early days, and it is the topic of this chapter.
The fields of transfer learning and meta-learning are tightly related. For both,
the goal is to speed up learning a new task, using previous knowledge. In transfer
learning, we pretrain our parameter network with knowledge from a single task. In
meta-learning, we use multiple related tasks.
Meta-learning is an active field of research, both in reinforcement learning and
in supervised learning, and there is an active exchange of ideas between the two
fields. We will see many results in both.
In this chapter, we first discuss the concept of lifelong learning, something that
is quite familiar to human beings. Then we discuss transfer learning, followed
by meta-learning. Next, we discuss some of the benchmarks that are used to test
transfer learning and meta-learning.
Core Concepts
• Knowledge transfer
• Learning to learn
Core Problem
Core Algorithms
Foundation Models
Humans are good at meta-learning. We learn new tasks more easily after we have
learned other tasks. Teach us to walk, and we learn how to run. Teach us to play
the violin, the viola, and the cello, and we more easily learn to play the double bass
(Fig. 9.1).
Current deep learning networks are large, with many layers of neurons and
millions of parameters. For new problems, training large networks on large datasets
or environments takes time, up to weeks, or months—both for supervised and for
reinforcement learning. In order to shorten training times for subsequent networks,
these are often pretrained, using foundation models [96]. With pretraining, some of
the existing weights of another network are used as the starting point for finetuning a
network on a new dataset, instead of using a randomly initialized network.
Pretraining works especially well on deeply layered architectures. The reason is
that the “knowledge” in the layers goes from generic to specific: lower layers contain
generic filters such as lines and curves, and upper layers contain more specific
filters such as ears, noses, and mouths (for a face recognition application) [458, 496].
These lower layers contain more generic information that is well suited for transfer
to other tasks.
Foundation models are large models in a certain field, such as image recogni-
tion, or natural language processing, that are trained extensively on large datasets.
Foundation models contain general knowledge, that can be specialized for a certain
purpose. The world of applied deep learning has moved from training a net from
scratch for a certain problem, to taking a part of an existing net that is trained for a
related problem and then finetuning it on the new task. Nearly all state-of-the-art
visual perception approaches rely on the same formula: (1) pretrain a convolutional
network on a large, manually annotated image classification dataset and (2) finetune
the network on a smaller, task-specific dataset [277, 200, 864, 8, 359]; see Fig. 9.2 for an illustration.
Training times for modern deep networks are large. Training AlexNet for ImageNet
took 5-6 days on 2 GPUs in 2012 [430], see Sect. B.3.2. In reinforcement learning,
training AlphaGo took weeks [702, 705], in natural language processing, training
also takes a long time [193], even excessively long as in the case of GPT-3 [114].
Clearly, some solution is needed. Before we look closer at transfer learning, let us
have a look at the bigger picture: lifelong learning.
When humans learn a new task, learning is based on previous experience. Initial
learning by infants of elementary skills in vision, speech, and locomotion takes
years. Subsequent learning of new skills builds on the previously acquired skills.
Existing knowledge is adapted, and new skills are learned based on previous skills.
Lifelong learning remains a long-standing challenge for machine learning; in
current methods the continuous acquisition of information often leads to interfer-
ence of concepts or catastrophic forgetting [700]. This limitation represents a major
drawback for deep networks that typically learn representations from stationary
batches of training data. Although some advances have been made in narrow do-
mains, significant advances are necessary to approach generally-applicable lifelong
learning.
Different approaches have been developed. Among the methods are meta-
learning, domain adaptation, multi-task learning, and pretraining. Table 9.1 lists
these approaches, together with regular single task learning. The learning tasks are
formulated as using datasets, as in a regular supervised setting. The table shows
how the lifelong learning methods differ in their training and test dataset, and
the different learning tasks. The first line shows regular single-task learning. For
single-task learning, the training and test dataset are both drawn from the same
distribution (the datasets do not contain the same examples, but they are drawn
from the same original dataset and the data distribution is expected to be the same),
and the task to perform is the same for training and test. In the next line, for trans-
fer learning, networks trained on one dataset are used to speed up training for a
different task, possibly using a much smaller dataset [580]. Since the datasets are
not drawn from the same master dataset, their distribution will differ, and there
typically is only an informal notion of how “related” the datasets are. However, in
practice transfer learning often provides significant speedups, and transfer learning,
pretraining and finetuning are currently used in many real-world training tasks,
sometimes using large foundation models as a basis. In multi-task learning, more
than one task is learned from one dataset [129]. The tasks are often related, such
as classification tasks of different, but related, classes of images, or learning spam
learning tries to learn hyperparameters over these related learning tasks. Meta-
learning thus aims to learn to learn. In deep meta-learning approaches, the initial
network parameters are typically part of the hyperparameters. Note that in transfer
learning we also use (part of) the parameters to speed up learning (finetuning) a
new, related task. We can say that meta-learning generalizes transfer learning by
learning the initial parameters over not one but a sequence of related tasks [350, 363].
Definitions are still in flux, however, and different authors and different fields have
different definitions.
Transfer learning has become part of the standard approach in machine learning,
meta-learning is still an area of active research. We will look into meta-learning
shortly, after we have looked into transfer learning, multi-task learning, and domain
adaptation.
Transfer learning aims to improve the process of learning new tasks using the experience gained by solving similar problems [605, 773, 772, 584]. It transfers past experience from source tasks to boost learning in a related target task [580, 878].
In transfer learning, we first train a base network on a base dataset and task, and
then we repurpose some of the learned features to a second target network to be
trained on a target dataset and task. This process works better if the features are
general, meaning suitable to both base and target tasks, instead of specific to the
base task. This form of transfer learning is called inductive transfer. The scope of
possible models (model bias) is narrowed in a beneficial way by using a model fit
on a different but related task.
First we will look at task similarity, then at transfer learning, multi-task learning,
and domain adaptation.
Clearly, pretraining works better when the task is more similar [129]. Learning
to play the viola based on the violin is more similar than learning the tables of
multiplication based on tennis. Different measures can be used to measure the
similarity of examples and features in datasets, from linear one-dimensional mea-
sures to non-linear multi-dimensional measures. Common measures are the cosine
similarity for real-valued vectors and the radial basis function kernel [751, 808], but
many more elaborate measures have been devised.
Similarity measures are also used to devise meta-learning algorithms, as we will
see later.
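For concreteness, the two measures mentioned above can be computed as follows (a small NumPy sketch with made-up vectors).

import numpy as np

def cosine_similarity(x, y):
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def rbf_kernel(x, y, gamma=1.0):
    return float(np.exp(-gamma * np.sum((x - y) ** 2)))

x, y = np.array([1.0, 0.0, 2.0]), np.array([0.5, 0.1, 1.8])
print(cosine_similarity(x, y), rbf_kernel(x, y, gamma=0.5))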
When we want to transfer knowledge, we can transfer the weights of the network,
and then start retraining with the new dataset. Please refer back to Table 9.1. In
pretraining the new dataset is smaller than the old dataset (𝐷 1 ≫ 𝐷 2 ) for a new task
𝑇2 ≠ 𝑇1 , since we want to train faster. This works when the new task is different,
but similar, so that the old dataset 𝐷 1 contains useful information for the new task
𝑇2 .
To learn new image recognition problems, it is common to use a deep learning
model pre-trained for a large and challenging image classification task such as
the ImageNet 1000-class photograph classification competition. Three examples
of pretrained models include: the Oxford VGG Model, Google’s Inception Model,
Microsoft’s ResNet Model. For more examples, see the Caffe Model Zoo,1 or other
zoos2 where more pre-trained models are shared.
Transfer learning is effective because the network was trained on a corpus that requires the model to make predictions for a large number of classes. This forces the model to be general and to learn to extract informative features in order to perform well.
Convolutional neural network features are more generic in lower layers, such
as color blobs or Gabor filters, and more specific to the original dataset in higher
layers. Features must eventually transition from general to specific by the last layers
of the network [861]. Pretraining copies some of the layers to the new task. Care
should be taken how much of the old task network to copy. It is relatively safe to
copy the more general lower layers. Copying the more specific higher layers may
be detrimental to performance.
In natural language processing a similar situation occurs. In natural language pro-
cessing, a word embedding is used that is a mapping of words to a high-dimensional
continuous vector where different words with a similar meaning have a similar
vector representation. Efficient algorithms exist to learn these word representations.
Two examples of common pre-trained word models trained on very large datasets
of text documents include Google’s Word2vec model [512] and Stanford’s GloVe
model [595].
Listing 9.2 Pretraining in Keras (2): create new model and train
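The code of the listing is not reproduced here; a minimal Keras sketch of the idea (load a pretrained base, freeze its generic layers, and train a new task-specific head) might look as follows. The VGG16 base, the input size, and the 10-class head are assumptions for illustration.

from tensorflow import keras

base = keras.applications.VGG16(weights="imagenet", include_top=False,
                                input_shape=(224, 224, 3))
base.trainable = False                                 # freeze the pretrained, generic layers

model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10, activation="softmax"),      # new head for the target task
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(new_images, new_labels, epochs=5)          # finetune on the smaller target dataset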
Fig. 9.3 Domain Adaptation: Recognizing Items in Different Circumstances is Difficult [300]
In domain adaptation the task stays the same, but the data comes from a domain that is different yet still somewhat similar. For example, the task may be to recognize a backpack in a
different orientation, or to recognize a pedestrian in different lighting.
Domain adaptation can be seen as the opposite of pretraining. Pretraining uses
the same dataset for a different task, while domain adaptation adapts to a new
dataset for the same task [127].
In natural language processing, examples of domain shift are an algorithm that
has been trained on news items that is then applied to a dataset of biomedical
documents [179, 732], or a spam filter that is trained on a certain group of email
users, which is deployed to a new target user [74]. Sudden changes in society or the
natural environment (pandemics, severe weather) can also upset machine learning
algorithms.
There are different techniques to overcome domain shift [168, 871, 855, 833].
In visual applications, adaptation can be achieved by re-weighting the samples
of the first dataset, or clustering them for visually coherent sub-domains. Other
approaches try to find transformations that map the source distribution to the
target, or learn a classification model and a feature transformation jointly [781].
Adversarial techniques where feature representations are encouraged to be difficult
to distinguish can be used to achieve adaptation [791, 849, 211], see also Sect. B.2.6.
9.2.2 Meta-Learning
One of the challenges of lifelong machine learning is how to judge the performance
of an algorithm. Regular train-test generalization measures do not capture the speed of
adaptation of a meta-learning algorithm.
For this reason, meta-learning tasks are typically evaluated on their few-shot
learning ability. In few-shot learning, we test if a learning algorithm can be made
to recognize examples from classes from which it has seen only few examples in
training. In few-shot learning prior knowledge is available in the network.
To translate few-shot learning to a human setting, we can think of a situation
where a human plays the double bass after only a few minutes of training on the
double bass, but after years on the violin, viola, or cello.
Meta-learning algorithms are often evaluated with few shot learning tasks, in
which the algorithm must recognize items of which it has only seen a few examples.
This is formalized in the 𝑁-way-𝑘-shot approach [141, 444, 828]. Given a large
dataset D, a smaller training dataset 𝐷 is sampled from this dataset. The 𝑁-way-𝑘-
shot classification problem constructs training dataset 𝐷 such that it consists of 𝑁
classes, of which, for each class, 𝑘 examples are present in the dataset. Thus, the
cardinality of 𝐷 is |𝐷 | = 𝑁 · 𝑘.
A full 𝑁-way-𝑘-shot few-shot learning meta task T consists of many episodes
in which base tasks T𝑖 are performed. A base task consists of a training set and a test set, to test generalization. In few-shot terminology, the training set is called the support set, and the test set is called the query set. The support set has size 𝑁 · 𝑘; the query set consists of a small number of examples. The meta-learning algorithm can learn from the episodes of 𝑁-way-𝑘-shot query/support base tasks, until at meta-test time the generalization of the meta-learning algorithm is tested with another query. Figure 9.4 illustrates this process.
4 Although defining base learning as learning parameters, and meta-learning as learning hyperparameters, appears to give us a clear distinction, in practice the definition is not so clear cut, since in deep meta-learning the initialization of the regular parameters is considered to be an important “hyper”parameter.
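A minimal sketch of sampling one 𝑁-way-𝑘-shot episode (a support set and a query set) from a labeled dataset, assuming the dataset is a list of (example, label) pairs with enough examples per class:

import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, query_per_class=5):
    by_class = defaultdict(list)
    for example, label in dataset:
        by_class[label].append(example)
    classes = random.sample(list(by_class), n_way)               # choose N classes
    support, query = [], []
    for label in classes:
        examples = random.sample(by_class[label], k_shot + query_per_class)
        support += [(x, label) for x in examples[:k_shot]]       # k support examples per class
        query += [(x, label) for x in examples[k_shot:]]         # held-out query examples
    return support, query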
We will now turn our attention to deep meta-learning algorithms. The meta-learning
field is still young and active, and definitions have not settled. Nevertheless, the
field is converging on a set of definitions that we will present here. We start our
explanation in a supervised setting.
Meta-learning is concerned with learning from a sequence of base-learning tasks
{T1 , T2 , T3 , . . . } so that a new (related) meta-test task will reach a high accuracy quickly.
One of the most popular deep meta-learning approaches of the last few years
is optimization-based meta-learning [363]. This approach optimizes the initial
parameters 𝜃 of the network for fast learning of new tasks. Most optimization-based
techniques do so by approaching meta-learning as a two-level optimization problem.
At the inner level, a base learner makes task-specific updates to 𝜃 for the different
observations in the training set. At the outer level, the meta-learner optimizes
hyperparameters 𝜔 across a sequence of base tasks where the loss of each task is
evaluated using the test data from the base tasks 𝐷 T𝑖 ,𝑡𝑒𝑠𝑡 [628, 350, 462].
The inner loop optimizes the parameters 𝜃, and the outer loop optimizes the
hyperparameters 𝜔 to find the best performance on the set of base tasks 𝑖 = 0, . . . , 𝑀
with the appropriate test data:
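The displayed objective is not reproduced in this extraction; in the notation used here, the bi-level problem is commonly written as (a standard formulation, following e.g. [350]):
\[
\omega^\star = \operatorname*{arg\,min}_{\omega} \sum_{i=0}^{M} \mathcal{L}^{\mathit{meta}}\big(\theta_i^\star(\omega), \omega, D_{\mathcal{T}_i,\mathit{test}}\big)
\quad \text{s.t.} \quad
\theta_i^\star(\omega) = \operatorname*{arg\,min}_{\theta} \mathcal{L}^{\mathit{task}}\big(\theta, \omega, D_{\mathcal{T}_i,\mathit{train}}\big).
\]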
The inner loop optimizes 𝜃 𝑖 within the datasets 𝐷 𝑖 of the tasks T𝑖 , and the outer
loop optimizes 𝜔 across the tasks and datasets.
The meta loss function optimizes for the meta objective, which can be accuracy,
speed, or another goal over the set of base tasks (and datasets). The outcome of the
meta optimization is a set of optimal hyperparameters 𝜔★.
In optimization-based meta-learning approaches the most important hyperpa-
rameters 𝜔 are the optimal initial parameters 𝜃★ 0 . This focus on the parameters 𝜃
simplifies our inner/outer formula as follows:
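Again the displayed formula is not reproduced here; with the initial parameters as the only meta-learned quantity, the simplified objective can be written as
\[
\theta_0^\star = \operatorname*{arg\,min}_{\theta_0} \sum_{i=0}^{M} \mathcal{L}_{\mathcal{T}_i}\big(\theta_i^\star(\theta_0), D_{\mathcal{T}_i,\mathit{test}}\big),
\]
where $\theta_i^\star(\theta_0)$ denotes the parameters obtained by a few inner-loop gradient steps on $D_{\mathcal{T}_i,\mathit{train}}$, starting from $\theta_0$.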
Fig. 9.5 Workflow of recurrent meta-learners in reinforcement learning contexts. State, action, reward, and termination flag at time step 𝑡 are denoted by 𝑠𝑡 , 𝑎𝑡 , 𝑟𝑡 , and 𝑑𝑡 ; ℎ𝑡 refers to the hidden state [213].
In this approach we meta-optimize the initial parameters 𝜃 0 , such that the loss
function performs well on the test data of the base tasks. Section 9.2.2.4 describes
MAML, a well-known example of this approach.
Deep meta-learning approaches are sometimes categorized as (1) similarity-
metric-based, (2) model-based, and (3) optimization-based. We will now have a
closer look at two of the nine meta reinforcement learning algorithms from Table 9.2: Recurrent meta-learning and MAML; the former is a model-based
approach, the latter optimization-based.
For meta reinforcement learning approaches to be able to learn to learn, they must be
able to remember what they have learned across subtasks. Let us see how Recurrent
meta-learning learns across tasks.
Recurrent meta-learning uses recurrent neural networks to remember this knowl-
edge [213, 825]. The recurrent network serves as dynamic storage for the learned
task embedding (weight vector). The recurrence can be implemented by an LSTM
[825] or by gated recurrent units [213]. The choice of recurrent neural meta-network
(meta-RNN) determines how well it adapts to the subtasks, as it gradually accumu-
lates knowledge about the base-task structure.
Recurrent meta-learning tracks variables 𝑠, 𝑎, 𝑟, 𝑑 which denote state, action,
reward, and termination of the episode. For each task T𝑖 , Recurrent meta-learning
inputs the set of environment variables {𝑠𝑡+1 , 𝑎 𝑡 , 𝑟 𝑡 , 𝑑𝑡 } into a meta-RNN at each
time step 𝑡. At every time step the meta-RNN updates its hidden state ℎ𝑡 and, conditioned on ℎ𝑡 , the meta network outputs action 𝑎 𝑡 . The
goal is to maximize the expected reward in each trial (Fig. 9.5). Since Recurrent meta-learning stores what it has learned in the hidden state, adaptation to a new task occurs through the recurrent dynamics, without further gradient updates at meta-test time.
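The interface of such a recurrent meta-learner can be sketched as follows (a PyTorch illustration; the GRU cell and the sizes are stand-ins for the meta-RNN described above, not the implementation of [213] or [825]).

import torch

class RecurrentMetaLearner(torch.nn.Module):
    # Maps (s_{t+1}, a_t, r_t, d_t) plus hidden state h_t to action logits and h_{t+1}.
    def __init__(self, state_dim, num_actions, hidden_dim=64):
        super().__init__()
        self.cell = torch.nn.GRUCell(state_dim + num_actions + 2, hidden_dim)
        self.policy_head = torch.nn.Linear(hidden_dim, num_actions)

    def forward(self, state, prev_action_onehot, reward, done, hidden):
        x = torch.cat([state, prev_action_onehot, reward, done], dim=-1)
        hidden = self.cell(x, hidden)            # the hidden state accumulates task knowledge
        return self.policy_head(hidden), hidden

net = RecurrentMetaLearner(state_dim=4, num_actions=2)
h = torch.zeros(1, 64)
logits, h = net(torch.zeros(1, 4), torch.zeros(1, 2), torch.zeros(1, 1), torch.zeros(1, 1), h)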
Fig. 9.6 The Optimization approach aims to learn parameters from which other tasks can be learned quickly. The intuition behind Optimization approaches such as MAML is that when our meta-training set consists of tasks A, B, C, and D, and our meta-learning algorithm adjusts parameters 𝑎 and 𝑏 to (2, 2), then they are close to each of the four tasks and can be adjusted quickly to them (after [363, 242]).
Let us look at the process of training a deep learning model’s parameters from a
feature learning standpoint [242, 363], where the goal is that a few gradient steps
can produce good results on a new task. We build a feature representation that is
broadly suitable for many tasks, and by then fine-tuning the parameters slightly
(primarily updating the top layer weights) we achieve good results—not unlike
transfer learning. MAML finds parameters 𝜃 that are easy and fast to finetune,
allowing the adaptation to happen in an embedding space that is well suited for
fast learning. To put it another way, MAML’s goal is to find the point in the middle
of Fig. 9.6, from where the other tasks are easily reachable.
Let us now have a closer look at how MAML works [362, 619, 242]. Please refer
to the pseudocode for MAML in Alg. 9.1. The learning task is an episodic Markov
decision process with horizon 𝑇, where the learner is allowed to query a limited
number of sample trajectories for few-shot learning. Each reinforcement learning
task T𝑖 contains an initial state distribution 𝑝 𝑖 (𝑠1 ) and a transition distribution
𝑝 𝑖 (𝑠𝑡+1 |𝑠𝑡 , 𝑎 𝑡 ). The loss L T𝑖 corresponds to the (negative) reward function 𝑅. The
model being learned, 𝜋 𝜃 , is a policy from states 𝑠𝑡 to a distribution over actions 𝑎 𝑡
at each timestep 𝑡 ∈ {1, ..., 𝑇 }. The loss for task T𝑖 and policy 𝜋 𝜃 takes the familiar
form of the objective (Eq. 2.5):
"𝑇 #
∑︁
L T𝑖 (𝜋 𝜃 ) = −E𝑠𝑡 ,𝑎𝑡 ∼ 𝜋 𝜃 , 𝑝T𝑖 𝑅𝑖 (𝑠𝑡 , 𝑎 𝑡 , 𝑠𝑡+1 ) . (9.1)
𝑡=1
In 𝑘-shot reinforcement learning, 𝑘 rollouts (s_1, a_1, . . . , s_T) from π_θ on task T_i, and the rewards R(s_t, a_t), may be used for adaptation on a new task T_i. MAML uses TRPO to estimate the gradient, both for the policy gradient update(s) and for the meta-optimization [680].
The goal is to quickly learn new concepts, which is equivalent to achieving a
minimal loss in few gradient update steps. The number of gradient steps has to be
specified in advance. For a single gradient update step, gradient descent produces the updated parameters
θ'_i = θ − α ∇_θ L_{T_i}(π_θ)
specific to task 𝑖. The meta update across tasks, using one such gradient step per task, is
θ ← θ − β ∇_θ Σ_{T_i ∼ p(T)} L_{T_i}(π_{θ'_i})    (9.2)
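To make the inner and outer loops concrete, here is a minimal sketch of one such meta-update in Python; sample_tasks, rollout, and policy_loss_grad are hypothetical placeholders, plain gradient steps stand in for the TRPO updates that MAML uses in practice, and the meta-gradient uses the first-order approximation:

def maml_meta_step(theta, alpha, beta, sample_tasks, rollout, policy_loss_grad):
    """One MAML-style meta-update (Eq. 9.2), first-order approximation.

    sample_tasks, rollout, and policy_loss_grad are hypothetical callables:
    they sample a batch of tasks T_i ~ p(T), collect k rollouts for a policy,
    and return the gradient of the policy loss L_{T_i} at the given parameters.
    """
    meta_grad = 0.0
    for task in sample_tasks():
        # Inner loop: adapt theta to the task with one gradient step, giving theta'_i
        theta_i = theta - alpha * policy_loss_grad(theta, rollout(theta, task))
        # Outer loop: accumulate the gradient of the adapted policy on fresh rollouts.
        # First-order approximation: the gradient is taken at theta'_i instead of
        # being backpropagated through the inner update.
        meta_grad = meta_grad + policy_loss_grad(theta_i, rollout(theta_i, task))
    return theta - beta * meta_grad  # meta-update across the sampled tasks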
Meta-learning has been around for a long time, long before deep learning became
popular. It has been applied to classic machine learning tasks, such as regression,
decision trees, support vector machines, clustering algorithms, Bayesian networks,
evolutionary algorithms, and local search [93, 106, 811]. The hyperparameter view
on meta-learning originated here.
This background is worth a brief discussion, also because hyperparameter optimization is an important tool for tuning reinforcement learning experiments.
Machine learning algorithms have hyperparameters that govern their behavior,
and finding the optimal setting for these hyperparameters has long been called
meta-learning. A naive approach is to enumerate all combinations and run the
machine learning problem for them. For all but the smallest hyperparameter spaces
such a grid search will be prohibitively slow. Among the smarter meta-optimization
approaches are random search, Bayesian optimization, gradient-based optimization,
and evolutionary optimization.
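As a small illustration of such a meta-optimization loop, the sketch below implements a plain random search; train_and_evaluate is a hypothetical user-supplied function that trains a learner with the given hyperparameters and returns a validation score, and the two hyperparameters are arbitrary examples:

import random

def random_search(train_and_evaluate, n_trials=50):
    # train_and_evaluate is a hypothetical function: it trains a learner with the
    # given hyperparameters and returns a validation score (higher is better).
    best_score, best_config = float("-inf"), None
    for _ in range(n_trials):
        config = {
            "learning_rate": 10 ** random.uniform(-5, -2),   # log-uniform sample
            "batch_size": random.choice([32, 64, 128, 256]),
        }
        score = train_and_evaluate(**config)
        if score > best_score:
            best_score, best_config = score, config
    return best_config, best_score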
This meta-algorithm approach has given rise to algorithm configuration research, such as SMAC [365],5 ParamILS [366],6 and irace [487],7 and to algorithm selection research [629, 406], such as SATzilla [853].8 Well-known hyperparameter optimization tools are scikit-learn,9 scikit-optimize,10 Nevergrad,11 and Optuna;12 see also the AutoML community.13
5 https://github.com/automl/SMAC3
6 http://www.cs.ubc.ca/labs/beta/Projects/ParamILS/
7 http://iridia.ulb.ac.be/irace/
8 http://www.cs.ubc.ca/labs/beta/Projects/SATzilla/
Meta-learning uses information from previous learning tasks to learn new tasks
quicker [773]. Meta-learning algorithms are often evaluated in a few-shot setting, to see how well they do in image classification problems when they are shown only a few training examples. This few-shot learning problem aims to correctly classify
queries with very little previous support for the new class. In the previous sections
we have discussed how meta-learning algorithms aim to achieve few-shot learning.
A discussion of meta-learning would not be complete without mentioning zero-shot
learning.
9 https://scikit-learn.org/stable/
10 https://scikit-optimize.github.io/stable/index.html
11 https://code.fb.com/ai-research/nevergrad/
12 https://optuna.org
13 https://www.automl.org
Now that we have seen how transfer learning and meta-learning can be imple-
mented, it is time to look at some of the environments that are used to evaluate the
algorithms. We will list important datasets, environments, and foundation models
for images, behavior, and text, expanding our scope beyond pure reinforcement
learning. We will look at how well the approaches succeed in generalizing quickly
to new machine learning tasks.
Many benchmarks have been introduced to test transfer and meta-learning
algorithms. Benchmarks for conventional machine learning algorithms aim to offer
a variety of challenging learning tasks. Benchmarks for meta-learning, in contrast,
aim to offer related learning tasks. Some benchmarks are parameterized, where the
difference between tasks can be controlled.
Meta-learning aims to learn new and related tasks quicker, trading off speed versus accuracy. This raises the question of how fast and how accurate meta-learning algorithms are under different circumstances. In answering this question we must keep in mind that the more closely related the learning tasks are, the easier the problem is, and the quicker and more accurate the results will be. Hence, we should look carefully at which dataset a benchmark uses when we compare results.
Table 9.3 lists some of the environments that are often used for meta-learning
experiments. Some are regular deep learning environments designed for single-task
learning (“single”), some are transfer learning and pretraining datasets (“transfer”),
and some datasets and environments are specifically designed for meta-learning
experiments (“meta”).
We will now describe them in more detail. ALE (Sect. 3.1.1), MuJoCo (Sect. 4.1.3)
and the DeepMind control suite (Sect. 4.3.2) are originally single task deep learning
environments. They are also being used in meta-learning experiments and few-shot
learning, often with moderate results, since the tasks are typically not very similar
(Pong is not like Pac-Man).
Traditionally, two datasets have emerged as de facto benchmarks for few-shot image
learning: Omniglot [445], and mini-ImageNet [645, 813].
Omniglot is a dataset for one-shot learning. This dataset contains 1623 different
handwritten characters from 50 different alphabets and contains 20 examples per
class (character) [445, 446]. Most recent methods obtain very high accuracy on
Omniglot, rendering comparisons between them mostly uninformative.
Mini-ImageNet uses the same setup as Omniglot for testing. It consists of 60,000 colour images of size 84×84, drawn from 100 ImageNet classes (split 64/16/20 for train/validation/test), with 600 examples per class [813]. Although harder than Omniglot, most recent methods present similar accuracy when controlling for model capacity. Meta-learning
algorithms such as Bayesian Program Learning and MAML achieved performance comparable to humans on Omniglot and ImageNet, with accuracies in the high nineties and error rates as low as a few percent [446]. Models trained on
the largest datasets, such as ImageNet, are used as foundation models [96]. A zoo
of models is available.14
These benchmarks may be too homogeneous. In contrast, real-life learning expe-
riences are heterogeneous: they vary in terms of the number of classes and exam-
ples per class, and are unbalanced. Furthermore, the Omniglot and Mini-ImageNet
benchmarks only measure within-dataset generalization. For meta-learning, we are
eventually after models that can generalize to entirely new distributions. For this
reason, new datasets are being developed specifically for meta-learning.
9.3.3 Meta-Dataset
9.3.4 Meta-World
For reinforcement learning two traditionally popular environments are ALE and
MuJoCo. The games in the ALE benchmarks typically differ considerably, which
16 https://github.com/google-research/meta-dataset
makes the ALE test set very challenging for meta-learning, and little success has been reported. (There are a few exceptions that apply transfer learning (pretraining) to DQN [585, 519, 717], and that succeed in multitask learning on a set of Atari games that all involve moving a ball.)
Robotics tasks, on the other hand, are more easily parameterizable. Test tasks can
be generated with the desired level of similarity, making robotic tasks amenable to
meta-learning testing. Typical tasks, such as reaching and learning different walking gaits, are more closely related than two Atari games such as Breakout and Space Invaders.
To provide a better benchmark that is more challenging for meta reinforcement
learning, Yu et al. introduced Meta-World [863],17 a benchmark for multi-task
and meta reinforcement learning (see Fig. 9.7 for their pictorial explanation of
the difference between multi-task and meta-learning). Meta-World consists of 50
distinct manipulation tasks with a robotic arm (Fig. 9.8). The tasks are designed
to be different, and contain structure, which can be leveraged for transfer to new
tasks.
When the authors of Meta-World evaluated six state-of-the-art meta-reinforce-
ment and multi-task learning algorithms on these tasks, they found that general-
ization of existing algorithms to heterogeneous tasks is limited. They tried PPO,
TRPO, SAC, RL2 [213], MAML, and PEARL [623]. Small variations of tasks such
as different object positions can be learned with reasonable success, but the al-
gorithms struggled to learn multiple tasks at the same time, even with as few as
ten distinct training tasks. In contrast to more limited meta-learning benchmarks,
Meta-World emphasizes generalization to new tasks and interaction scenarios, not
just a parametric variation in goals.
9.3.5 Alchemy
A final meta reinforcement learning benchmark that we will discuss is Alchemy [824].
The Alchemy benchmark is a procedurally generated 3D video game [691], im-
plemented in Unity [382]. Task generation is parameterized, and varying degrees
of similarity and hidden structure can be chosen. The process by which Alchemy
levels are created is accessible to researchers, and a perfect Bayesian ideal observer
can be implemented.
Experiments with two agents are reported: VMPO [721, 586], which is based on a gated transformer network, and IMPALA [229, 373], which is based on population-based training with an LSTM core network. Both agents are strong deep learning methods, although not designed specifically for meta-learning. Again, for both agents meta-learning became more difficult as the learning tasks became more diverse, and the reported meta-learning performance was weak.
17 https://meta-world.github.io
import metaworld
import random

print(metaworld.ML1.ENV_NAMES)  # Print the available environments

ml1 = metaworld.ML1('pick-place-v1')  # Construct the benchmark

env = ml1.train_classes['pick-place-v1']()  # Create the environment
task = random.choice(ml1.train_tasks)
env.set_task(task)  # Set the task

obs = env.reset()  # Reset the environment
a = env.action_space.sample()  # Sample an action
obs, reward, done, info = env.step(a)  # Step the environment
Note that Meta-World is a robotics benchmark and needs MuJoCo, so you have to install that too.21 The GitHub site contains brief example instructions on the usage of the benchmark; please refer to Listing 9.3. The benchmark can be used to
test the meta-learning performance of your favorite algorithm, or you can use one
of the baselines provided in Garage.
18 https://github.com/deepmind/dm_alchemy
19 https://github.com/rlworkgroup/metaworld
20 https://github.com/rlworkgroup/garage
21 https://github.com/openai/mujoco-py#install-mujoco
Conclusion
In this chapter we have seen different approaches to learning new and different tasks
with few or even zero examples. Impressive results have been achieved, although
clearly major challenges remain to learn general adaptation when tasks are more
diverse. As is often the case in new fields, many different approaches have been
tried. Meta-learning is an active field of research, aiming to reduce one of the main problems of machine learning (the need for large training datasets), and many new methods will be developed.
We will now summarize the chapter and provide pointers to further reading.
Summary
This chapter is concerned with learning new tasks faster and with smaller datasets or lower sample complexity. Transfer learning is concerned with transferring knowledge that has been learned to solve one task to another task, to allow quicker learning. A popular transfer learning approach is pretraining, where some network layers are copied to initialize a network for a new task, followed by fine-tuning to improve performance on the new task with a smaller dataset.
Another approach is meta-learning, or learning to learn. Here knowledge of how
a sequence of previous tasks is learned is used to learn a new task quicker. Meta-
learning learns hyperparameters of the different tasks. In deep meta-learning, the
set of initial network parameters is usually considered to be such a hyperparameter.
Meta-learning aims to learn hyperparameters that can learn a new task with only
a few training examples, often using 𝑁-way-𝑘-shot learning. For deep few-shot
learning the Model-Agnostic Meta-Learning (MAML) approach is well known, and
has inspired follow-up work.
Meta-learning is of great importance in machine learning. For tasks that are related, good results are reported. For more challenging benchmarks, where tasks are less related (such as pictures of animals from very different species), the reported results are weaker.
Further Reading
Meta-learning is a highly active field of research. Good entry points are [881, 350,
362, 363, 100]. Meta-learning has attracted much attention in artificial intelligence,
both in supervised learning and in reinforcement learning. Many books and surveys
have been written about the field of meta-learning, see, for example, [106, 683, 803,
666, 835].
There has been active research interest in meta-learning algorithms for some
time, see, for example, [668, 75, 675, 811]. Research into transfer learning and meta-
learning has a long history, starting with Pratt and Thrun [605, 773]. Early surveys
into the field are [580, 757, 833], more recent surveys are [871, 878]. Huh et al. focus
on ImageNet [359, 621]. Yang et al. study the relation between transfer learning
and curriculum learning with Sokoban [858].
Early principles of meta-learning are by Schmidhuber [668, 675]. Meta-learning
surveys are [666, 811, 106, 683, 803, 835, 301, 350, 362, 363]. Papers on similarity-
metric meta-learning are [417, 813, 716, 736, 262, 698]. Papers on model-based
meta-learning are [213, 825, 658, 541, 516, 222, 263]. Papers on optimization-based
meta-learning are many [628, 472, 242, 23, 622, 475, 560, 648, 244, 284, 245, 860, 83].
Domain adaptation is studied in [17, 200, 168, 871, 540]. Zero-shot learning
is an active and promising field. Interesting papers are [87, 355, 453, 579, 727,
10, 850, 448, 635, 197, 621]. Like zero-shot learning, few-shot learning is also a
popular area of meta-learning research [336, 436, 718, 564, 727]. Benchmark papers
are [784, 863, 141, 774, 824].
Exercises
Questions
Below are some quick questions to check your understanding of this chapter. For
each question a simple, single sentence answer is sufficient.
1. What is the reason for the interest in meta-learning and transfer learning?
2. What is transfer learning?
3. What is meta-learning?
4. How is meta-learning different from multi-task learning?
5. Zero-shot learning aims to identify classes that it has not seen before. How is
that possible?
6. Is pretraining a form of transfer learning?
7. Can you explain learning to learn?
8. Are the initial network parameters also hyperparameters? Explain.
9. What is an approach for zero-shot learning?
10. As the diversity of tasks increases, does meta-learning achieve good results?
Exercises
Let us go to the programming exercises to become more familiar with the methods
that we have covered in this chapter. Meta-learning and transfer learning exper-
iments are often very computationally expensive. You may need to scale down
dataset sizes, or skip some exercises as a last resort.
1. Pretraining Implement pretraining and fine tuning in the Keras pretraining ex-
ample from Sect. 9.2.1.3.22 Do the exercises as suggested, including fine-tuning
on the cats and dogs training set. Note the uses of pre-processing, data augmen-
tation and regularization (dropout and batch normalization). See the effects of
increasing the number of layers that you transfer on training performance and
speed.
2. MAML Reptile [560] is a meta-learning approach inspired by MAML, but first-order and faster, specifically designed for few-shot learning. The Keras website
contains a segment on Reptile.23 At the start a number of hyperparameters are
defined: learning rate, step size, batch size, number of meta-learning iterations,
number of evaluation iterations, how many shots, classes, etcetera. See the effect
of tuning different hyperparameters, especially the ones related to few-shot
learning: the classes, the shots, and the number of iterations.
To delve deeper into few-shot learning, also have a look at the MAML code,
which has a section on reinforcement learning.24 Try different environments.
3. Meta World As we have seen in Sect. 9.3.4, Meta World [863] is an elaborate
benchmark suite for meta reinforcement learning. Re-read the section, go to
GitHub, and install the benchmark.25 Also go to Garage to install the agent algo-
rithms so that you are able to test their performance.26 See that they work with
your PyTorch or TensorFlow setup. First try running the ML1 meta benchmark
for PPO. Then try MAML, and RL2. Next, try the more elaborate meta-learning
benchmarks. Read the Meta World paper, and see if you can reproduce their
results.
4. ZSL We go from few-shot learning to zero-shot learning. One of the ways in which
zero-shot learning works is by learning attributes that are shared by classes. Read
the papers Label-Embedding for Image Classification [10], and An embarrassingly
simple approach to zero-shot learning [635], and go to the code [87].27 Implement
it, and try to understand how attribute learning works. Print the attributes for the
classes, and use the different datasets. Does MAML work for few-shot learning?
(challenging)
22 https://keras.io/guides/transfer_learning/
23 https://keras.io/examples/vision/reptile/
24 https://github.com/cbfinn/maml_rl/tree/master/rllab
25 https://github.com/rlworkgroup/metaworld
26 https://github.com/rlworkgroup/garage
27 https://github.com/sbharadwajj/embarrassingly-simple-zero-shot-learning
Chapter 10
Further Developments
We have come to the end of this book and will reflect on what we have learned. We
will review main themes and essential lessons, and we will look at the future.
Why do we study deep reinforcement learning? Our inspiration is the dream of
artificial intelligence; to understand human intelligence and to create intelligent
behavior that can supplement our own, so that together we can grow. For reinforce-
ment learning our goal is to learn from the world, to learn increasingly complex
behaviors for increasingly complex sequential decision problems. The preceding
chapters have shown us that many successful algorithms were inspired by how
humans learn.
Currently many environments consist of games and simulated robots, in the
future this may include human-computer interactions and collaborations in teams
with real humans.
Reinforcement learning has made a remarkable transition, from a method that was used to learn small tabular toy problems, to teaching simulated robots how to walk, to playing the largest multi-agent real-time strategy games, and to beating the best humans in Go and poker. The reinforcement learning paradigm is a framework in
which many learning algorithms have been developed. The framework can incor-
porate powerful ideas from other fields, such as deep learning, and autoencoders.
To appreciate the versatility of reinforcement learning, let us have a closer look
at how the developments in the field have proceeded over time.
Other approaches can be hooked into this framework, to learn new fields and to improve performance. These additions can serve to interpret high-dimensional states (as in DQN), or to shrink a state space (as with latent models). When accommodating self-play, the framework provided us with a curriculum learning sequence, yielding world-class levels of play in two-agent games.
Deep reinforcement learning is being used to understand more real-world sequential decision-making situations. Among the applications that motivate these developments are self-driving cars and other autonomous operations, image and speech recognition, decision making, and, in general, acting naturally.
What will the future bring for deep reinforcement learning? The main challenge
for deep reinforcement learning is to manage the combinatorial explosion that
occurs when a sequence of decisions is chained together. Finding the right kind of
inductive bias can exploit structure in this state space.
We list three major challenges for current and future research in deep reinforce-
ment learning:
1. Solving larger problems faster
2. Solving problems with more agents
3. Interacting with people
The following techniques address these challenges:
1. Solving larger problems faster
• Reducing sample complexity with latent-models
• Curriculum learning in self-play methods
• Hierarchical reinforcement learning
• Learning from previous tasks with transfer learning and meta-learning
• Better exploration through intrinsic motivation
2. Solving problems with more agents
• Hierarchical reinforcement learning
• Population-based self-play league methods
3. Interacting with people
• Explainable AI
• Generalization
Let us have a closer look at these techniques, to see what future developments can
be expected for them.
10.2.2 Self-Play
Many large single-agent problems are hierarchically structured and can be attacked by divide and conquer.
Hierarchical methods aim to make use of this structure by dividing large problems
into smaller subproblems; they group primitive actions into macro actions. When
a policy has been found with a solution for a certain subproblem, then this can
be re-used when the subproblem surfaces again. Note that for some problems it is
difficult to find a hierarchical structure that can be exploited efficiently.
Hierarchical reinforcement learning has been studied for some time [150, 246,
744]. Recent work has been reported on successful methods for deep hierarchical
learning and population-based training [474, 470, 545, 632], and more is to be
expected.
Among the major challenges of deep reinforcement learning is the long training
time, as the work on AlphaZero [703], and MuZero [678] shows. Transfer learning
and meta-learning aim to reduce the long training times, by transferring learned
knowledge from existing to new (but related) tasks, and by learning to learn from
the training of previous tasks, to speedup learning new (but related) tasks.
In the fields of image recognition and natural language processing it has become
common practice to use networks that are pretrained on ImageNet [191, 236]
or BERT [193] or other large pretrained networks [547, 92]. Optimization-based
methods such as MAML learn better initial network parameters for new tasks, and
have spawned much further research.
Zero-shot learning is a meta-learning approach where outside information, such as attributes or a textual description of image content, is learned and then used to recognize instances of a new class [635, 718, 850, 10]. Meta-learning is a highly active field where more results can be expected.
Foundation models are large models, such as ImageNet-trained networks for image recognition and GPT-3 for natural language processing, that are trained extensively on large datasets. They contain general knowledge that can be specialized for a more specific task. They can also be used for multi-modal tasks, where text and image information is combined. The DALL-E project is able to create images that go with textual descriptions; see Fig. 10.1 for amusing or beautiful examples (“an armchair in the shape of an avocado”) [615]. Zero-shot learning with large pretrained models has also been studied successfully in the CLIP project [624].
Fig. 10.1 DALL-E, an Algorithm that draws Pictures based on Textual Commands [615]
10.2.7 Explainable AI
Explainable AI (XAI) is closely related to the topics of planning and learning that
we discuss in this book, and to natural language processing.
When a human expert suggests an answer, this expert can be questioned to
explain the reasoning behind the answer. This is a desirable property, and enhances
how much we trust the answer. Most clients receiving advice, be it financial or
medical, put greater trust in a well-reasoned explanation than in a yes or no answer
without any explanation.
Decision support systems that are based on classic symbolic AI can often be
made to provide such reasoning easily. For example, interpretable models [610],
decision trees [613], graphical models [380, 456], and search trees [160] can be
traversed and the choices at decision points can be recorded and used to translate
in a human-understandable argument.
Connectionist approaches such as deep learning, in contrast, are less inter-
pretable. Their accuracy, however, is typically much higher than the classical ap-
proaches. Explainable AI aims to combine the ease of interpreting symbolic AI with
the accuracy of connectionist approaches [299, 205, 116].
The work on soft decision trees [256, 340] and adaptive neural trees [753] has
shown how hybrid approaches of planning and learning can try to build an explana-
tory decision tree based on a neural network. These works build in part on model
compression [142, 120] and belief networks [322, 557, 684, 184, 69, 766, 107, 28].
Unsupervised methods can be used to find interpretable models [610, 642, 806].
Model-based reinforcement learning methods aim to perform deep sequential plan-
ning in learned world models [303].
10.2.8 Generalization
Generalization becomes especially difficult in sim-to-real transfer [875]. One benchmark specifi-
cally designed to increase generalization is Procgen. It aims to increase environment
diversity through procedural content generation, providing 16 parameterizable en-
vironments [158].
Benchmarks will continue to drive progress in artificial intelligence, especially
for generalization [412].
This book has covered the stable basis of deep reinforcement learning, as well as
active areas of research. Deep reinforcement learning is a highly active field, and
many more developments will follow.
We have seen complex methods for solving sequential decision problems, some
of which are easily solved on a daily basis by humans in the world around us. In
certain problems, such as backgammon, chess, checkers, and Go, computational
methods have now surpassed human ability. In most other endeavours, such as
pouring water from a bottle in a cup, writing poetry, or falling in love, humans still
reign supreme.
Reinforcement learning is inspired by biological learning, yet computational and
biological methods for learning are still far apart. Human intelligence is general
and broad—we know much about many different topics, and we use our general
knowledge of previous tasks when learning new things. Artificial intelligence
is specialized and deep—computers can be very good at certain tasks, but their
intelligence is narrow, and learning from other tasks is still a challenge.
Two conclusions are clear. First, for humans, combined intelligence, where human
general intelligence is augmented by specialized artificial intelligence, can be highly
beneficial. Second, for AI, the field of deep reinforcement learning is taking cues
from human learning in hierarchical methods, curriculum learning, learning to
learn, and multi-agent cooperation.
The future of artificial intelligence is human.
Appendices
Appendix A
Mathematical Background
A.1.1 Sets
Discrete set
Examples:
• 𝑋 = {1, 2, .., 𝑛} (integers)
• 𝑋 = {up, down, left, right} (arbitrary elements)
• 𝑋 = {0, 1} 𝑑 (d-dimensional binary space)
Continuous set
Examples:
• 𝑋 = [2, 11] (bounded interval)
• 𝑋=R (real line)
• 𝑋 = [0, 1] 𝑑 (𝑑-dimensional hypercube)
Conditioning a set
We can also condition within a set, by using : or |. For example, the discrete probability
𝑘-simplex, which is what we actually use to define a discrete probability distribution
over 𝑘 categories, is given by:
X = {x ∈ [0, 1]^k : Σ_k x_k = 1}.
• The cardinality (size) counts the number of elements in a vector space, for which
we write |𝑋 |.
• The dimensionality counts the number of dimensions in the vector space 𝑋, for
which we write Dim(𝑋).
Examples:
• The discrete space 𝑋 = {0, 1, 2} has cardinality |𝑋 | = 3 and dimension-
ality Dim(𝑋) = 1.
• The discrete vector space 𝑋 = {0, 1}^4 has cardinality |𝑋| = 2^4 = 16 and dimensionality Dim(𝑋) = 4.
Cartesian product
We can combine two spaces by taking the Cartesian product, denoted by ×, which
consists of all the possible combinations of elements in the first and second set:
𝑋 × 𝑍 = {(𝑥, 𝑧) : 𝑥 ∈ 𝑋, 𝑧 ∈ 𝑍 }
We can also combine discrete and continuous spaces through Cartesian products.
A.1.2 Functions
𝑓 : 𝑋 →𝑌
Examples:
• 𝑦 = 𝑥^2 maps every value in domain 𝑋 ∈ R to range 𝑌 ∈ R+ (see Fig. A.1)
Fig. A.1 𝑦 = 𝑥^2
Random variable 𝑋
Particular value 𝑥
• A discrete variable 𝑋 can take values in a discrete set 𝑋 = {1, 2, .., 𝑛}. A particular
value that 𝑋 takes is denoted by 𝑥.
• Discrete variable 𝑋 has an associated probability mass function: 𝑝(𝑋), where
𝑝 : 𝑋 → [0, 1]. Each possible value 𝑥 that the variable can take is associated with
a probability 𝑝(𝑋 = 𝑥) ∈ [0, 1]. (For example, 𝑝(𝑋 = 1) = 0.2: the probability that 𝑋 is equal to 1 is 20%.)
• Probability distributions always sum to 1, i.e., Σ_{x∈X} 𝑝(𝑥) = 1.
Parameters
Example: A discrete variable 𝑋 that can take three values (𝑋 = {1, 2, 3}),
with associated probability distribution 𝑝(𝑋 = 𝑥):
𝑝(𝑋 = 1) = 0.2,  𝑝(𝑋 = 2) = 0.4,  𝑝(𝑋 = 3) = 0.4
• A continuous variable 𝑋 can take values in a continuous set, 𝑋 = R (the real line),
or 𝑋 = [0, 1] (a bounded interval).
• Continuous variable 𝑋 has an associated probability density function: 𝑝(𝑋),
where 𝑝 : 𝑋 → R+ (a positive real number).
• In a continuous set, there are infinitely many values that the random value can
take. Therefore, the absolute probability of any particular value is 0.
• We can only define absolute probability on an interval: 𝑝(𝑎 < 𝑋 ≤ 𝑏) = ∫_a^b 𝑝(𝑥) d𝑥. (For example, 𝑝(2 < 𝑋 ≤ 3) = 0.2: the probability that 𝑋 falls between 2 and 3 is 20%.)
Fig. A.2 Examples of discrete (left) versus continuous (right) probability distributions [377]
Parameters
Example: A variable 𝑋 that can take values on the real line with distribution
𝑝(𝑥; 𝜇, 𝜎) = 1 / (𝜎 √(2𝜋)) · exp( −(𝑥 − 𝜇)^2 / (2𝜎^2) ).
Here, the mean parameter 𝜇 and standard deviation 𝜎 are the parameters.
We can change them to change the shape of the distribution, while always
ensuring that it still sums to one. We draw an example normal distribution
in Fig. A.2, right.
The differences between discrete and continuous probability distributions
are summarized in Table A.1.
Probability function (discrete): the probability mass function (pmf), 𝑝 : 𝑋 → [0, 1], such that Σ_{x∈X} 𝑝(𝑥) = 1.
Probability function (continuous): the probability density function (pdf), 𝑝 : 𝑋 → R+, such that ∫_{x∈X} 𝑝(𝑥) d𝑥 = 1.
a Due to the sum-to-1 constraint, we need one parameter less than the size of the sample space, since the last probability is 1 minus all the others: 𝑝_𝑛 = 1 − Σ_{i=1}^{n−1} 𝑝_𝑖.
b Note that for continuous distributions, probabilities are only defined on intervals. The density function 𝑝(𝑥) only gives relative probabilities, and therefore we may have 𝑝(𝑥) > 1, like 𝑝(𝑥 = 3) = 5.6, which is of course not possible as an absolute probability (one should not interpret it as one). However, 𝑝(𝑎 ≤ 𝑥 < 𝑏) = ∫_a^b 𝑝(𝑥) d𝑥 < 1 by definition.
𝑝(𝑦|𝑥) = 𝑁(2𝑥, 𝑥^2)
Note that for each value of 𝑋, 𝑝(𝑌|𝑋) still integrates to 1; it is a valid probability distribution.
A.2.4 Expectation
Example:
Assume a given 𝑝(𝑋) for a binary variable:
𝑥 𝑝(𝑋 = 𝑥)
0 0.8
1 0.2
The expectation is E_{𝑋∼𝑝(𝑋)}[𝑋] = 0 · 0.8 + 1 · 0.2 = 0.2.
More often, and also in the context of reinforcement learning, we will need the
expectation of a function of the random variable, denoted by 𝑓 (𝑋). Often, this
function maps to a continuous output.
• Assume a function 𝑓 : 𝑋 → R, which, for every value 𝑥 ∈ 𝑋 maps to a
continuous value 𝑓 (𝑥) ∈ R.
• The expectation is then defined as follows:
E_{𝑋∼𝑝(𝑋)}[𝑓(𝑋)] = Σ_{x∈X} 𝑓(𝑥) · 𝑝(𝑥)    (A.3)
For a continuous variable the summation again becomes integration. The formula
may look complicated, but it essentially reweights each function outcome by the
probability that this output occurs, see the example below.
Example:
Assume a given density 𝑝(𝑋) and function 𝑓 (𝑥):
𝑥 𝑝(𝑋 = 𝑥) 𝑓 (𝑥)
1 0.2 22.0
2 0.3 13.0
3 0.5 7.4
The expectation of the function can be computed as E_{𝑋∼𝑝(𝑋)}[𝑓(𝑋)] = 0.2 · 22.0 + 0.3 · 13.0 + 0.5 · 7.4 = 12.0.
The same principle applies when 𝑝(𝑥) is a continuous density, only with the
summation replaced by integration.
Fig. A.3 Entropy of a binary discrete variable. Horizontal axis shows the probability that the
variable takes value 1, the vertical axis shows the associated entropy of the distribution. High
entropy implies high spread in the distribution, while low entropy implies little spread.
A.2.5.1 Information
The information (or surprise) of observing a particular value 𝑥 under distribution 𝑝 is 𝐼(𝑥) = − log 𝑝(𝑥): the less likely the outcome, the more information it carries.
A.2.5.2 Entropy
𝐻[𝑝] = E_{𝑋∼𝑝(𝑋)}[𝐼(𝑋)] = E_{𝑋∼𝑝(𝑋)}[− log 𝑝(𝑋)] = − Σ_x 𝑝(𝑥) log 𝑝(𝑥)    (A.4)
If the base of the logarithm is 2, then we measure it in bits. When the base of the
logarithm is 𝑒, then we measure the entropy in nats. The continuous version of the
above equation is called the continuous entropy or differential entropy.
Informally, the entropy of a distribution is a measure of the amount of “uncer-
tainty” in a distribution, i.e., a measure of its “spread.” We can nicely illustrate this
with a binary variable (0/1), where we plot the probability of a 1 against the entropy
of the distribution (Fig. A.3). We see that on the two extremes, the entropy of the
distribution is 0 (no spread at all), while the entropy is maximal for 𝑝(𝑥 = 1) = 0.5
(and therefore 𝑝(𝑥 = 0) = 0.5), which gives maximal spread to the distribution.
For the distribution 𝑝 = (0.2, 0.3, 0.5) of the expectation example, the entropy is
𝐻[𝑝] = − Σ_x 𝑝(𝑥) log 𝑝(𝑥) = −0.2 · ln 0.2 − 0.3 · ln 0.3 − 0.5 · ln 0.5 = 1.03 nats    (A.5)
A.2.5.3 Cross-entropy
The cross-entropy is defined between two distributions 𝑝(𝑋) and 𝑞(𝑋) defined
over the same support (sample space). The cross-entropy is given by:
𝐻[𝑝, 𝑞] = E_{𝑋∼𝑝(𝑋)}[− log 𝑞(𝑋)] = − Σ_x 𝑝(𝑥) log 𝑞(𝑥)    (A.6)
For two distributions 𝑝(𝑋) and 𝑞(𝑋) we can also define the relative entropy, better
known as the Kullback-Leibler (KL) divergence 𝐷 KL :
𝐷_KL[𝑝||𝑞] = E_{𝑋∼𝑝(𝑋)}[− log (𝑞(𝑋)/𝑝(𝑋))] = − Σ_x 𝑝(𝑥) log (𝑞(𝑥)/𝑝(𝑥))    (A.7)
Expanding the logarithm of the quotient relates the KL divergence to the entropy and the cross-entropy:
𝐷_KL[𝑝||𝑞] = E_{𝑋∼𝑝(𝑋)}[− log (𝑞(𝑋)/𝑝(𝑋))]
           = Σ_x 𝑝(𝑥) log 𝑝(𝑥) − Σ_x 𝑝(𝑥) log 𝑞(𝑥)
           = −𝐻[𝑝] + 𝐻[𝑝, 𝑞]    (A.8)
In other words, the KL divergence equals the cross-entropy minus the entropy of 𝑝.
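As a small numerical check of these definitions, the following sketch computes the entropy, cross-entropy, and KL divergence for the three-valued distribution of the entropy example; the second distribution q is an arbitrary choice for illustration:

import numpy as np

p = np.array([0.2, 0.3, 0.5])   # distribution from the entropy example
q = np.array([0.4, 0.4, 0.2])   # an arbitrary second distribution

entropy_p = -np.sum(p * np.log(p))        # H[p], in nats
cross_entropy = -np.sum(p * np.log(q))    # H[p, q]
kl = np.sum(p * np.log(p / q))            # D_KL[p || q]

print(entropy_p)                                    # approximately 1.03
print(np.isclose(kl, cross_entropy - entropy_p))    # D_KL = H[p, q] - H[p]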
∇_θ E_{x∼p_θ(x)}[𝑓(𝑥)]    (A.9)
We cannot sample the above quantity, because we have to somehow move the gradient inside the expectation (and then we can sample the expectation to evaluate it). To achieve this, we will use a simple rule regarding the gradient of the log of some function 𝑔(𝑥):
1 Another method to differentiate through an expectation is the reparametrization trick, as for example used in variational auto-encoders, but we will not treat this topic further here.
2 If the parameters only appear in the function 𝑓(𝑥) and not in 𝑝(𝑥), then we can simply push the gradient inside the expectation.
∇_x log 𝑔(𝑥) = ∇_x 𝑔(𝑥) / 𝑔(𝑥)    (A.10)
This follows from a simple application of the chain rule.
We will now expand Eq. A.9, where we midway apply the above log-derivative
trick.
∇_θ E_{x∼p_θ(x)}[𝑓(𝑥)] = ∇_θ Σ_x 𝑓(𝑥) · 𝑝_θ(𝑥)                       (definition of expectation)
                      = Σ_x 𝑓(𝑥) · ∇_θ 𝑝_θ(𝑥)                        (push gradient through sum)
                      = Σ_x 𝑓(𝑥) · 𝑝_θ(𝑥) · ∇_θ 𝑝_θ(𝑥) / 𝑝_θ(𝑥)      (multiply/divide by 𝑝_θ(𝑥))
                      = Σ_x 𝑓(𝑥) · 𝑝_θ(𝑥) · ∇_θ log 𝑝_θ(𝑥)           (log-derivative rule, Eq. A.10)
                      = E_{x∼p_θ(x)}[𝑓(𝑥) · ∇_θ log 𝑝_θ(𝑥)]          (rewrite as expectation)
What the above derivation essentially does is push the derivative inside the sum. The same steps apply when we change the sum into an integral. Therefore, for any 𝑝_θ(𝑥), we have:
∇_θ E_{x∼p_θ(x)}[𝑓(𝑥)] = E_{x∼p_θ(x)}[𝑓(𝑥) · ∇_θ log 𝑝_θ(𝑥)]    (A.11)
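A minimal numerical sketch of this estimator for a categorical distribution parameterized by softmax logits is given below; the function f, the number of samples, and the comparison with the exact gradient are arbitrary illustrative choices:

import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(3)                  # logits of a categorical distribution p_theta(x)
f = np.array([1.0, 2.0, 5.0])        # an arbitrary function f(x)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Score-function (REINFORCE) estimate of grad_theta E[f(x)], Eq. A.11
m = 10000
p = softmax(theta)
xs = rng.choice(3, size=m, p=p)
grad_log_p = np.eye(3)[xs] - p       # grad_theta log p_theta(x) for a softmax
grad_estimate = (f[xs, None] * grad_log_p).mean(axis=0)

# Exact gradient for comparison: sum_x f(x) p(x) grad_theta log p_theta(x)
exact = (f[:, None] * p[:, None] * (np.eye(3) - p)).sum(axis=0)
print(grad_estimate, exact)          # the two should be close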
Bellman noted that the value function can be written in recursive form, because the
value is also defined at the next states. In his work on dynamic programming [72],
he derived recursive equations for 𝑉 and 𝑄.
The Bellman equations express the state value 𝑉 and the state-action value 𝑄 in terms of the values of successor states. Depending on whether the state and action spaces are discrete or continuous, we write out these equations differently. For a discrete state space and a discrete action space, the Bellman equation for the state value is
𝑉(𝑠) = Σ_{a∈A} 𝜋(𝑎|𝑠) Σ_{s'∈S} 𝑇_𝑎(𝑠, 𝑠') [ 𝑟_𝑎(𝑠, 𝑠') + 𝛾 · 𝑉(𝑠') ].
Fig. A.4 Graphical illustration of REINFORCE estimator. Left: Example distribution 𝑝 𝜃 ( 𝑥) and
function 𝑓 ( 𝑥). When we evaluate the expectation of Eq. A.11, we take 𝑚 samples, indicated by
the blue dots (in this case 𝑚 = 8). The magnitude of 𝑓 ( 𝑥) is shown with the red vertical arrows.
Right: When we apply the gradient update, each sample pushes up the density at that location, but
the magnitude of the push is multiplied by 𝑓 ( 𝑥). Therefore, the higher 𝑓 ( 𝑥), the harder we push.
Since a density needs to integrate to 1, we will increase the density where we push hardest (in the
example on the rightmost sample). The distribution will therefore shift to the right on this update.
The Bellman equation for the state-action value is
𝑄(𝑠, 𝑎) = Σ_{s'∈S} 𝑇_𝑎(𝑠, 𝑠') [ 𝑟_𝑎(𝑠, 𝑠') + 𝛾 · Σ_{a'∈A} 𝜋(𝑎'|𝑠') 𝑄(𝑠', 𝑎') ]
For continuous state and action spaces, the summations over policy and transition
are replaced by integration:
𝑉(𝑠) = ∫_a 𝜋(𝑎|𝑠) [ ∫_{s'} 𝑇_𝑎(𝑠, 𝑠') ( 𝑟_𝑎(𝑠, 𝑠') + 𝛾 · 𝑉(𝑠') ) d𝑠' ] d𝑎
The same principle applies to the Bellman equation for state-action values:
𝑄(𝑠, 𝑎) = ∫_{s'} 𝑇_𝑎(𝑠, 𝑠') [ 𝑟_𝑎(𝑠, 𝑠') + 𝛾 · ∫_{a'} 𝜋(𝑎'|𝑠') · 𝑄(𝑠', 𝑎') d𝑎' ] d𝑠'
We may also have a continuous state space (such as visual input) with a discrete
action space (such as pressing buttons in a game):
𝑄(𝑠, 𝑎) = ∫_{s'} 𝑇_𝑎(𝑠, 𝑠') [ 𝑟_𝑎(𝑠, 𝑠') + 𝛾 · Σ_{a'∈A} 𝜋(𝑎'|𝑠') · 𝑄(𝑠', 𝑎') ] d𝑠'
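The discrete case translates directly into code. The sketch below performs iterated Bellman backups (tabular policy evaluation) on a small randomly generated MDP; the arrays T and R play the roles of 𝑇_𝑎(𝑠, 𝑠') and 𝑟_𝑎(𝑠, 𝑠'), and the uniform policy is an arbitrary choice:

import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 4, 2, 0.9

# Random MDP: transition probabilities T[a, s, s'] and rewards R[a, s, s']
T = rng.random((n_actions, n_states, n_states))
T /= T.sum(axis=2, keepdims=True)
R = rng.random((n_actions, n_states, n_states))
pi = np.full((n_states, n_actions), 1.0 / n_actions)   # uniform policy pi(a|s)

# Iterate the Bellman equation for V until convergence
V = np.zeros(n_states)
for _ in range(1000):
    Q = (T * (R + gamma * V)).sum(axis=2)   # Q[a, s] = sum_s' T (r + gamma V(s'))
    V_new = (pi.T * Q).sum(axis=0)          # V[s]    = sum_a pi(a|s) Q(s, a)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
print(V)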
Appendix B
Deep Supervised Learning
In machine learning we often deal with large problem domains. We are interested
in a function that works not just on the particular values on which it was trained,
but also in the rest of the problem domain from which the data items were taken.
In this section we will first see how such a learning process for generalization
works; notation and many examples are from [523]. Next, we will discuss the specific
problems of large domains. Finally, we will discuss the phenomenon of overfitting
and how it relates to the bias-variance trade-off.
The state space of a problem is the space of all possible different states (combina-
tions of values of variables). State spaces grow exponentially with the number of
dimensions (variables); high-dimensional problems have large state spaces, and
modern machine learning algorithms have to be able to learn functions in such
large state spaces.
When the input space 𝑋 is high dimensional, we can never store the entire
state space (all possible pixels with all possible values) as a table. The effect of an
exponential need for observation data as the dimensionality grows, has been called
the curse of dimensionality by Richard Bellman [72]. The curse of dimensionality
states that the cardinality (number of unique points) of a space scales exponentially
in the dimensionality of the space. In a formula, we have that
|𝑋 | ∼ exp(Dim(𝑋)).
Since the size of the state space grows exponentially, but the number of observa-
tions typically does not, most state spaces in large machine learning problems are
sparsely populated with observations. An important challenge for machine learning
algorithms is to fit good predictive models on sparse data. To reliably estimate a
function, each variable needs a certain number of observations. The number of
samples that are needed to maintain statistical significance increases exponentially
as the number of dimensions grows. A large amount of training data is required
to ensure that there are several samples for each combination of values;2 however,
datasets rarely grow exponentially.
Basic statistics tells us that, according to the law of large numbers, the more obser-
vations we have of an experiment, the more reliable the estimate of their value is
(that is, the average will be close to the expected value) [93]. This has important
2 Unless we introduce some bias into the problem, by assuming smoothness, implying that there
are dependencies between variables, and that the “true” number of independent variables is smaller
than the number of pixels.
Fig. B.1 Curve fitting: does the curvy red line or the straight dashed blue line best generalize the
information in the data points?
implications for the study of large problems where we would like to have confidence
in the estimated values of our parameters.
In small toy problems the number of variables of the model is also small. Single-
variable linear regression problems model the function as a straight line 𝑦 = 𝑎 · 𝑥 + 𝑏,
with only one independent variable 𝑥 and two parameters 𝑎 and 𝑏. Typically, when
regression is performed with a small number of independent variables, then a
relatively large number of observations is available per variable, giving confidence
in the estimated parameter values. A rule of thumb is that there should be 5 or more
training examples for each variable [768].
In machine learning dimensionality typically means the number of variables of a
model. In statistics, dimensionality can be a relative concept: the ratio of the number
of variables compared to the number of observations. High-dimensional problems
are then problems with more variables (dimensions 𝑑) than training observations
𝑛. Examples of high-dimensional problems are high-resolution image, speech or
text classification problems, where the dataset is often smaller than the number
of variables (see also Table B.2). In practice, the size of the observation dataset is
limited, and then absolute and relative dimensionality do not differ much. In this
book we follow the machine learning definition of dimensions meaning variables.
Modeling high-dimensional problems well typically requires a model with many
parameters, the so-called high-capacity models. Let us look deeper into the conse-
quences of working with high-dimensional problems. To see how to best fit our
model, let us consider machine learning as a curve fitting problem, see Fig. B.1.
In many problems, the observations are measurements of an underlying natural
process. The observations therefore contain some measurement noise. The goal
is (1) that the approximating curve fits the (noisy) observations as accurately as
possible, but, (2) in order to generalize well, to aim to fit the signal, not the noise.
Fig. B.2 Bias-variance trade-off; few-parameter model: high bias; many-parameter model: high
variance
Models with a capacity of 𝑑 ≈ 𝑛 can have both good bias and variance [93].
Preventing underfitting and overfitting is a matter of matching the capacity
of our model so that 𝑑 matches 𝑛. This is a delicate trade-off, since reducing the
capacity also reduces expressive power of the models. To reduce overfitting, we
can use regularization. Many regularization methods to reduce overfitting have
been devised. Regularization has the effect of dynamically adjusting capacity to the
number of observations.
Each neuron 𝑗 computes a weighted sum of its inputs followed by a nonlinear activation, 𝑜_𝑗 = 𝜎(Σ_i 𝑤_{ij} · 𝑜_𝑖), for weight 𝑤_{ij} of predecessor neuron 𝑜_𝑖. The outputs of this layer of neurons are fed to the inputs for the weights of the next layer.
B.2.2 Backpropagation
Loss Function
At the output layer the propagated value 𝑦 is compared with the other part of
the example pair, the label 𝑧. The difference with the label is calculated, yielding
the error. The error function is also known as the loss function L. Two common error functions are the mean squared error (1/𝑛) Σ_{i=1}^{n} (𝑦_𝑖 − 𝑦̂_𝑖)^2 (for regression) and the cross-entropy error − Σ_{i=1}^{M} 𝑦_𝑖 log 𝑦̂_𝑖 (for classification of 𝑀 classes). The backward
pass uses the difference between the forward recognition outcome and the true
label to adjust the weights, so that the error becomes smaller. This method uses the
gradient of the error function over the weights, and is called gradient descent. The
parameters are adjusted as follows:
𝜃_{𝑡+1} = 𝜃_𝑡 − 𝛼 ∇_{𝜃_𝑡} L_𝐷(𝑓_{𝜃_𝑡})
where 𝜃 are the network parameters, 𝑡 is the optimization time step, 𝛼 is the learning rate, ∇_{𝜃_𝑡} L_𝐷 is the current gradient of the loss function on the data 𝐷, and 𝑓_𝜃 is the parameterized objective function.
The training process can be stopped when the error has been reduced below a
certain threshold for a single example, or when the loss on an entire validation set
has dropped sufficiently. More elaborate stopping criteria can be used in relation to
overfitting (see Sect. B.2.7).
Most neural nets are trained using a stochastic version of gradient descent, or
SGD [722]. SGD samples a minibatch of size smaller than the total dataset, and
thereby computes a noisy estimate of the true gradient. This is faster per update
step, and does not affect the direction of the gradient too much. See Goodfellow et al. [279] for details.
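A bare-bones sketch of minibatch SGD on a least-squares problem, with arbitrary synthetic data, illustrates the update rule above:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                  # inputs
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=1000)    # noisy targets

theta, alpha, batch_size = np.zeros(3), 0.1, 32
for step in range(2000):
    idx = rng.integers(0, len(X), size=batch_size)      # sample a minibatch
    Xb, yb = X[idx], y[idx]
    grad = 2.0 / batch_size * Xb.T @ (Xb @ theta - yb)  # gradient of the MSE loss
    theta = theta - alpha * grad                        # SGD update
print(theta)   # should approach w_true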
Let us now look in more detail at how neural networks can be used to implement
end-to-end feature learning.
Approximation can be achieved through the discovery of common features
in states. Let us, again, concentrate on image recognition. Traditionally, feature
discovery was a manual process. Image-specialists would painstakingly pour over
images to identify common features in a dataset, such as lines, squares, circles, and
angles, by hand. They would write small pre-processing algorithms to recognize
the features that were then used with classical machine learning methods such
as decision trees, support vector machines, or principal component analysis to
construct recognizers to classify an image. This hand-crafted method is a labor-intensive and error-prone process, and researchers have worked to find algorithms for the full image recognition process, end-to-end. For this to work, the features themselves
must be learned.
For example, if the function approximator consists of the sum of 𝑛 features, then
with hand-crafted features, only the coefficients 𝑐 𝑖 of the features in the function
are learned, as opposed to the coefficients 𝑐 𝑖 and the features 𝑓𝑖 (𝑠) (as in end-to-end
learning).
Deep end-to-end learning has achieved great success in image recognition, speech
recognition, and natural language processing [430, 286, 852, 193]. End-to-end learn-
ing is the learning of a classifier directly from high-dimensional, raw, un-pre-
processed, pixel data, all the way to the classification layer, as opposed to learning
pre-processed data from intermediate (lower dimensional) hand-crafted features.
We will now see in more detail how neural networks can perform automated
feature discovery. LeCun, Bengio, and Hinton [458], explain how multiple hidden
layers in a deep network can learn increasingly abstract representations. The hier-
archy of network layers together can recognize a hierarchy of low-to-high level
concepts [458, 461]. For example, in face recognition (Fig. B.6) the first hidden layer
may encode edges; the second layer then composes and encodes simple structures of
edges; the third layer may encode higher-level concepts such as noses or eyes; and
the fourth layer may work at the abstraction level of a face. Deep feature learning
finds what to abstract at which level on its own [76], and can come up with classes of intermediate concepts that work, but that look counterintuitive upon inspection by humans.
Towards the end of the 1990s the work on neural networks moved into deep
learning, a term coined by Dechter in [187]. LeCun et al. [460] published an influ-
ential paper on deep convolutional nets. The paper introduced the architecture
LeNet-5, a seven-layer convolutional neural net trained to classify handwritten
MNIST digits from 32 × 32 pixel images. Listing B.2 shows a modern rendering of
LeNet in Keras. The code straightforwardly lists the layer definitions.
End-to-end learning is computationally quite demanding. After the turn of the
century, methods, datasets, and compute power had improved to such an extent
that full raw, un-pre-processed pictures could be learned, without the intermediate
step of hand-crafting features. End-to-end learning proved very powerful, achieving
higher accuracy in image recognition than previous methods, and even higher than
human test subjects (see, for example, [430]). In natural language processing, deep
transformer models such as BERT and GPT-2 and 3 have reached equally impressive
results [193, 114].
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Activation, MaxPooling2D, Flatten, Dense
from tensorflow.keras.regularizers import l2

def lenet_model(img_shape=(28, 28, 1), n_classes=10, l2_reg=0., weights=None):

    # Initialize model
    lenet = Sequential()

    # 2 sets of CRP (Convolution, RELU, Pooling)
    lenet.add(Conv2D(20, (5, 5), padding="same",
                     input_shape=img_shape, kernel_regularizer=l2(l2_reg)))
    lenet.add(Activation("relu"))
    lenet.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))

    lenet.add(Conv2D(50, (5, 5), padding="same",
                     kernel_regularizer=l2(l2_reg)))
    lenet.add(Activation("relu"))
    lenet.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))

    # Fully connected layers (w/ RELU)
    lenet.add(Flatten())
    lenet.add(Dense(500, kernel_regularizer=l2(l2_reg)))
    lenet.add(Activation("relu"))

    # Softmax (for classification)
    lenet.add(Dense(n_classes, kernel_regularizer=l2(l2_reg)))
    lenet.add(Activation("softmax"))

    if weights is not None:
        lenet.load_weights(weights)

    # Return the constructed network
    return lenet
Function Approximation
Let us have a look at the different kinds of functions that we wish to approximate
in machine learning. The most basic function establishing an input/output relation
is regression, which outputs a continuous number. Another important function is
classification, which outputs a discrete number. Regression and classification are
often learned through supervision, with a dataset of examples (observations) and
labels.
In reinforcement learning, three functions are typically approximated: the value
function 𝑉 (𝑠), that relates states to their expected cumulative future rewards, the
action-value function 𝑄(𝑠, 𝑎) that relates actions to their values, and the policy
function 𝜋(𝑠) that relates states to an action (or 𝜋(𝑎|𝑠) to an action distribution).4
In reinforcement learning, the functions 𝑉, 𝑄, 𝜋 are learned through reinforcement
by the environment. Table B.3 summarizes these functions.
The first neural networks consisted of fully connected layers (Fig. B.5). In image
recognition, the input layer of a neural network is typically connected directly to
the input image. Higher resolution images therefore need a higher number of input
neurons. If all layers would have more neurons, then the width of the network
grows quickly. Unfortunately, growing a fully connected network (see Fig. B.4) by
increasing its width (number of neurons per layer) will increase the number of
parameters quadratically.
The naive solution for learning high-resolution problems is to increase the capacity 𝑚 of the model. However, because of the problem of overfitting, as 𝑚 grows, so must the number of examples 𝑛.
The solution lies in using a sparse interconnection structure instead of a fully
connected network. Convolutional neural nets (CNNs) take their inspiration from
biology. The visual cortex in animals and humans is not fully connected, but locally
connected, in a receptive field [356, 357, 502]. Convolutions efficiently exploit prior
knowledge about the structure of the data: patterns reoccur at different locations in
the data (translation invariance), and therefore we can share parameters by moving
a convolutional window over the image.
A CNN consists of convolutional operators or filters. A typical convolution
operator has a small receptive field (it only connects to a limited number of neurons,
say 5 × 5), whereas a fully connected neuron connects to all neurons in the layer
below. Convolutional filters detect the presence of local patterns. The next layer
thus acts as a feature map. A CNN layer can be seen as a set of learnable filters,
invariant for local transformations [279].
Filters can be used to identify features. Features are basic elements such as edges,
straight lines, round lines, curves, and colors. To work as a curve detector, for
4 In Chap. 5, on model-based learning, we also approximate the transition function 𝑇𝑎 ( ·) and the
reward function 𝑅𝑎 ( ·).
example, the filter should have a pixel structure with high values indicating a shape
of a curve. By then multiplying and adding these filter values with the pixel values,
we can detect whether the shape is present. The sum of the multiplications in the
input image will be large if there is a shape that resembles the curve in the filter.
This filter can only detect a certain shape of curve. Other filters can detect other
shapes. Larger activation maps can recognize more elements in the input image.
Adding more filters increases the size of the network, which effectively enlarges
the activation map. The filters in the first network layer process (“convolve”) the
input image and fire (have high values) when a specific feature that it is built to
detect is in the input image. Training a convolutional net is training a filter that
consists of layers of subfilters.
By going through the convolutional layers of the network, increasingly complex
features can be represented in the activation maps. Once they are trained, they can
be used for as many recognition tasks as needed. A recognition task consists of a
single quick forward pass through the network.
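A small sketch of such a filter operation is given below: a hand-coded 3×3 vertical-edge filter is slid over a toy image; in a CNN the filter values would be learned rather than fixed:

import numpy as np

image = np.zeros((6, 6))
image[:, 3:] = 1.0                      # toy image: dark left half, bright right half

kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])      # vertical-edge filter

# Valid convolution (correlation): slide the filter over the image
h, w = image.shape[0] - 2, image.shape[1] - 2
feature_map = np.zeros((h, w))
for i in range(h):
    for j in range(w):
        feature_map[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
print(feature_map)   # large magnitude where the vertical edge is present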
Let us spend some more time on understanding these filters.
Shared Weights
In CNNs the filter parameters are shared in a layer. Each layer thus defines a filter
operation. A filter is defined by few parameters but is applied to many pixels of
the image; each filter is replicated across the entire visual field. These replicated
units share the same parameterization (weight vector and bias) and form a feature
map. This means that all the neurons in a given convolutional layer respond to
the same feature within their specific response field. Replicating units in this way
allows for features to be detected regardless of their position in the visual field, thus
constituting the property of translation invariance.
This weight sharing is also important to prevent an increase in the number of
weights in deep and wide nets, and to prevent overfitting, as we shall see later.
Real-world images consist of repetitions of many smaller elements. Due to this
so-called translation invariance, the same patterns reappear throughout an image.
CNNs can take advantage of this. The weights of the links are shared, resulting in a
large reduction in the number of weights that have to be trained. Mathematically
CNNs put constraints on what the weight values can be. This is a significant
advantage of CNNs, since the computational requirements of training the weights
of fully connected layers are prohibitive. In addition, statistical strength is gained,
since the effective data per weight increases.
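To see the effect of weight sharing on the number of weights, compare a small convolutional layer with a fully connected layer that produces the same number of outputs. The sketch below (not one of the book's listings; the layer sizes are chosen for illustration) uses Keras to count the parameters:

from tensorflow import keras
from tensorflow.keras import layers

# A convolutional layer: 32 shared 5x5 filters applied to a 32x32 RGB image.
conv_model = keras.Sequential([
    layers.Conv2D(32, (5, 5), activation="relu", input_shape=(32, 32, 3)),
])

# A fully connected layer producing the same number of outputs (28*28*32)
# needs a separate weight from every input pixel to every output neuron.
dense_model = keras.Sequential([
    layers.Flatten(input_shape=(32, 32, 3)),
    layers.Dense(28 * 28 * 32, activation="relu"),
])

conv_model.summary()   # about 2.4 thousand parameters: 32 * (5*5*3 + 1)
dense_model.summary()  # about 77 million parameters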
Deep CNNs work well in image recognition tasks, for visual filtering operations
that exploit spatial dependencies, and for feature recognition (edges, shapes) [459].5
5 Interestingly, this paper was already published in 1989. The deep learning revolution happened
twenty years later, when publicly available datasets, more efficient algorithms, and more compute
power in the form of GPUs were available.
CNN Architecture
Convolutions recognize features—the deeper the network, the more complex the
features. A typical CNN architecture consists of a number of stacked convolutional
layers. In the final layers, fully connected layers are then used to classify the inputs.
In the convolutional layers, by connecting only locally, the number of weights
is dramatically reduced in comparison with a fully connected net. The ability of a
single neuron to recognize different features, however, is less than that of a fully
connected neuron.
By stacking many such locally connected layers on top of each other we can
achieve the desired nonlinear filters whose joint effect becomes increasingly global.6
The neurons become responsive to a larger region of pixel space, so that the network
first creates representations of small parts of the input, and from these builds repre-
sentations of larger areas. By stacking convolutional layers on top of each other, they
can recognize and represent increasingly complex concepts without an explosion
of weights.
A typical CNN architecture thus stacks multiple convolution, max pooling, and
ReLU layers, topped off by a fully connected layer (Fig. B.7).7
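A Keras sketch of such a stack (an illustration with arbitrary layer sizes, not one of the book's listings) could look as follows:

from tensorflow import keras
from tensorflow.keras import layers

# A small stack of convolution, max pooling, and ReLU layers,
# topped off by a fully connected classification layer.
model = keras.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),  # 10 classes, e.g. digits
])
model.summary()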
Max Pooling
A further method for reducing the number of weights is pooling. Pooling is
a kind of nonlinear downsampling (expressing the information in lower resolution
with fewer bits). Typically, a 2 × 2 block is sampled down to a scalar value (Fig. B.8).
Pooling reduces the dimension of the network. The most frequently used form is
max pooling, which keeps only the maximum value of each block.
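A minimal plain-NumPy sketch of 2 × 2 max pooling on an invented 4 × 4 feature map (not from the book's listings):

import numpy as np

feature_map = np.array([[1.0, 3.0, 2.0, 0.0],
                        [4.0, 2.0, 1.0, 1.0],
                        [0.0, 1.0, 5.0, 2.0],
                        [2.0, 2.0, 3.0, 4.0]])

# Downsample each non-overlapping 2x2 block to its maximum value.
pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)
# [[4. 2.]
#  [2. 5.]]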
6 Nonlinearity is essential. If all neurons performed linearly, then there would be no need for
layers. Linear recognition functions cannot discriminate between cats and dogs.
7 Often with a softmax function. The softmax function normalizes an input vector of real numbers
to a probability distribution on [0, 1]: $p_\theta(y|x) = \mathrm{softmax}(f_\theta(x)) = e^{f_\theta(x)} / \sum_k e^{f_{\theta,k}(x)}$.
Fig. B.9 RNN 𝑥𝑡 is an input vector, ℎ𝑡 is the output/prediction, and 𝐴 is the RNN [566]
To understand how RNNs work, it helps to unroll the network, as has been
done in Fig. B.10. The recurrent neuron loops have been drawn as a straight line to
show the network in a deep layered style, with connections between the layers. In
reality the layers are time steps in the processing of the recurrent connections. In a
sense, an RNN is a deeply layered neural net folded into a single layer of recurrent
neurons.
Where deep convolutional networks are successful in image classification, RNNs
are used for tasks with a sequential nature, such as captioning challenges. In a
captioning task the network is shown a picture, and then has to come up with a
textual description that makes sense [815].
The main innovation of recurrent nets is that they allow us to work with se-
quences of vectors. Figure B.11 shows different combinations of sequences that we
will discuss now. There can be sequences in the input, in the output, or in both.
Karpathy has written an accessible and well-illustrated blog on the different RNN
configurations [398]. The figure shows different rectangles. Each rectangle is a
vector. Arrows represent computations, such as matrix multiply. Input vectors are
in red, output vectors are in blue, and green vectors hold the state. Following [398],
from left to right we see:
1. One to one, the standard network without RNN. This network maps a fixed-sized
input to fixed-sized output, such as an image classification task (picture in/class
out).
2. One to many adds a sequence in the output. This can be an image captioning
task that takes an image and outputs a sentence of words.
3. Many to one is the opposite, with a sequence in the input. Think for example of
sentiment analysis (a sentence is classified as carrying negative or positive emotional
meaning); a code sketch of this case follows after the list.
4. Many to many has both a sequence for input and a sequence for output. This
can be the case in machine translation, where a sentence in English is read and
then a sentence in Français is produced.
5. Many to many is a related but different situation, with synchronized input and
output sequences. This can be the case in video classification where each frame
of the video should be labeled.
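As a sketch of the many-to-one case (3), the following Keras model reads a sequence of word indices and outputs a single sentiment probability. The vocabulary size, sequence length, and layer widths are illustrative assumptions, not values from the book:

from tensorflow import keras
from tensorflow.keras import layers

vocab_size = 10000   # assumed vocabulary size
seq_length = 100     # assumed (padded) sentence length

# Many to one: a sequence of word indices goes in, one sentiment probability comes out.
model = keras.Sequential([
    layers.Embedding(vocab_size, 32),       # word index -> 32-dimensional vector
    layers.LSTM(32),                        # reads the whole sequence, returns one vector
    layers.Dense(1, activation="sigmoid"),  # positive/negative sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.build(input_shape=(None, seq_length))
model.summary()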
Deep learning is a highly active field of research, in which many advanced network
architectures have been developed. We will describe some of the better known
architectures.
Residual Networks
Normally neural networks are used in forward mode, to discriminate input images
into classes, going from high dimensional to low dimensional. Networks can also
be run backwards, to generate an image that goes with a certain class, going from
low dimensional to high dimensional.8 Going from small to large implies many
possibilities for the image to be instantiated. Extra input is needed to fill in the
degrees of freedom.
Running the recognizers backwards, in generative mode, has created an active
research area called deep generative modeling. An important type of generative
model that has made quite an impact is the Generative adversarial network, or
GAN [280].
Deep networks are susceptible to adversarial attacks. A well-known problem of
the image recognition process is that it is brittle: it was found that if an image is
perturbed only slightly, imperceptibly to the human eye, deep networks can easily
be fooled into assigning a wrong class.
8 Just like the decoding phase of autoencoders.
Fig. B.16 Autoencoder, finding the “essence” of the data in the middle [537]
Autoencoders
Autoencoders and variational autoencoders (VAE) are used in deep learning for
dimensionality reduction, in unsupervised learning [427, 410, 411]. An autoencoder
network has a butterfly-like architecture, with the same number of neurons in
the input and the output layers, but a decreasing layer-size as we go to the center
(Fig. B.16). The input (contracting) side is said to perform a discriminative action,
such as image classification, and the other (expanding) side is generative [280].
When the same image is used as both the input and the target output of the au-
toencoder, the network is forced to compress and then decompress the image, so
that the small center layer learns a low-dimensional representation that captures
the “essence” of the data.
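A minimal Keras sketch of such an autoencoder, assuming flattened 28 × 28 images (784 inputs) and an arbitrary 32-unit bottleneck; this is an illustration, not one of the book's listings:

from tensorflow import keras
from tensorflow.keras import layers

# Contracting-expanding autoencoder: 784 -> 128 -> 32 -> 128 -> 784.
# The 32-unit bottleneck holds the compressed "essence" of the input.
inputs = keras.Input(shape=(784,))
encoded = layers.Dense(128, activation="relu")(inputs)
code = layers.Dense(32, activation="relu")(encoded)
decoded = layers.Dense(128, activation="relu")(code)
outputs = layers.Dense(784, activation="sigmoid")(decoded)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# Training uses the same (assumed) images as input and target:
# autoencoder.fit(x_train, x_train, epochs=10, batch_size=256)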
Attention Mechanism
Transformers
B.2.7 Overfitting
• Data Augmentation Overfitting occurs when there are more parameters in the
network than examples to train on. Data augmentation increases the training
dataset through manipulations such as rotations, reflections, noise, and rescaling.
A disadvantage of this method is that the computational cost of training increases;
a short Keras sketch follows below.
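As an illustration (a sketch, not one of the book's listings), Keras offers data augmentation through the ImageDataGenerator class; the transformation ranges below are arbitrary choices, and x_train, y_train, and model are assumed to exist:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Generate additional training examples by randomly transforming the originals.
datagen = ImageDataGenerator(
    rotation_range=15,        # random rotations up to 15 degrees
    width_shift_range=0.1,    # random horizontal shifts
    height_shift_range=0.1,   # random vertical shifts
    horizontal_flip=True,     # random reflections
    zoom_range=0.1,           # random rescaling
)

# Assuming x_train has shape (num_images, height, width, channels) and
# y_train holds the labels, model.fit can consume the augmented stream:
# model.fit(datagen.flow(x_train, y_train, batch_size=32), epochs=10)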
B.3 Datasets and Software
Implementing a full working deep learning algorithm that performs well on a new
problem is a challenging task. Many problems of backpropagation, gradient descent,
overfitting, and numerical stability have to be solved. Fortunately, software packages
provide high-quality ready-to-use solutions. It is because of the free availability
of high-quality deep learning methods that so much progress has been made in
recent years. Advances in image recognition, speech recognition, game playing,
automated translation, and autonomous vehicles are made possible in large part by
these software suites. The free availability of high-quality implementations may be
the fourth reason for the deep learning breakthrough.
PyTorch and TensorFlow provide high-quality implementations of many ma-
chine learning and neural network algorithms and operations. (A tensor is a multi-
dimensional array, often used for matrix transformations.) The programming con-
cept of TensorFlow takes some getting used to. Programs are constructed as a
data-flow graph in which the sequence of tensors defines the operations. A higher-
level, easier interface is provided by Keras. Keras comes with TensorFlow, and is the
recommended way to start.
One of the most important elements for the success in image recognition was the
availability of good datasets.
In the early days of deep learning, the field benefited greatly from efforts in
handwriting recognition. This application was of great value to the postal service,
where accurate recognition of handwritten zip codes or postal codes allowed great
improvements to efficient sorting and delivery of the mail. (In the 1980s mail was
delivered physically.) A standard test set for handwriting recognition was MNIST
(for Modified National Institute of Standards and Technology) [460]. Standard
MNIST images are low-resolution 28 × 28 pixel images of single handwritten digits
(Fig. B.19). Of course, researchers wanted to process more complex scenes than
single digits, and higher-resolution images. To achieve higher accuracy, and to
process more complex scenes, networks (and datasets) needed to grow in size and
complexity.
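Loading MNIST in Keras is a one-liner, which also lets you check the image sizes for yourself (a quick sketch, not one of the book's listings):

from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape)  # (60000, 28, 28): 60,000 training images of 28x28 pixels
print(x_test.shape)   # (10000, 28, 28): 10,000 test images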
ImageNet
The ImageNet dataset provided this scale: it allowed existing algorithms to be
tested and improved, and new algorithms to be created. ImageNet was conceived
by Fei-Fei Li et al. in 2006, and in later years she developed it further with her group.
Since 2010 an annual software contest has been organized, the ImageNet Large
Scale Visual Recognition Challenge (ILSVRC) [191]. Since 2012 ILSVRC has been
won by deep networks, starting the deep learning boom. The network architecture
that won this challenge in that year has become known as AlexNet, after one of its
authors [430].
The 2012 ImageNet database as used by AlexNet has 14 million labeled images.
The network featured a highly optimized 2D two-GPU implementation of 5 convolu-
tional layers and 3 fully connected layers. The filters in the first convolutional layer
are 11 × 11 in size. The neurons use a ReLU activation function. In AlexNet, images
were scaled to 256 × 256 RGB pixels. The network was large, with 60 million
parameters, which caused considerable overfitting. AlexNet used data augmentation
and dropout to reduce the impact of overfitting.
Krizhevsky et al. won the 2012 ImageNet competition with an error rate of 15%,
significantly better than the number two, who achieved 26%. Although there were
earlier reports of CNNs that were successful in applications such as bioinformat-
ics and Chinese handwriting recognition, it was this win of the 2012 ImageNet
competition for which AlexNet has become well known.
AlexNet has become an important network architecture for research. Many
resources are available. The original code is available on GitHub at AlexNet.10
Berkeley’s Caffe project maintains a model zoo where multiple trained models are
available, complete with Caffe code.11 Also a Keras implementation of AlexNet,
LeNet, and VGG is available.12
The deep learning breakthrough around 2012 was caused by the co-occurrence of
three major developments: (1) algorithmic advances that solved key problems in
deep learning, (2) the availability of large datasets of labeled training data, and (3)
the availability of computational power in the form of graphical processing units,
GPUs.
The most expensive operations in image processing and neural network training
are operations on matrices. Matrix operations are some of the most well-studied
problems in computer science. Their algorithmic structure is well understood, and
for basic linear algebra operations high-performance parallel implementations for
CPU exist, such as the BLAS [202, 145].
GPUs were originally designed for smooth graphics performance in video games.
Graphical processing requires fast linear algebra computations such as matrix
10 https://github.com/akrizhevsky/cuda-convnet2
11 https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet
12 https://github.com/eweill/keras-deepcv/tree/master/models/classification
multiply. These are precisely the kind of operations that are at the core of deep
learning training algorithms. Modern GPUs consist of thousands of small arithmetic
units that are capable of performing linear algebra matrix operations very fast in
parallel. This kind of data parallel processing is based on SIMD computing, for single-
instruction-multiple-data [248, 329]. SIMD data parallelism goes back to vector
supercomputer designs from the 1960s and 1970s, and to massively parallel machines
such as the Connection Machine series from Thinking Machines [338, 339, 465].
Figure B.20 shows a picture of the historic
CM-1, and of a modern GPU.
Modern GPUs consist of thousands of processing units optimized to process linear
algebra matrix operations in parallel [655, 485, 720], offering matrix performance
that is orders of magnitude faster than CPUs [565, 152, 745].
It is high time to try out some of the material in practice. Let us see if we can do
some image recognition ourselves.
The field of reinforcement learning has embraced deep learning. Two main deep
learning suites are TensorFlow and PyTorch. We will first install TensorFlow. It is
possible to run TensorFlow in the cloud, in Colab, or in a Docker container. Links to
ready to run Colab environments are on the TensorFlow website. We will, however,
assume a traditional local installation on your own computer. All major operating
systems are supported: Linux/Ubuntu, macOS, Windows.
The programming model of TensorFlow is complex, and not very user friendly.
Fortunately, an easy-to-use interface has been built on top of TensorFlow: Keras.
Keras is well documented, and many examples exist to get you started. When you
install TensorFlow, Keras is installed automatically as well.
To install TensorFlow and Keras go to the TensorFlow page on https://www.
tensorflow.org. It is recommended to make a virtual environment to isolate the
package installation from the rest of your system. This is achieved by typing
python3 -m venv venv
(or an equivalent command) to create the virtual environment. Using the virtual environment
requires activation:13
source ./venv/bin/activate
You will most likely run into version issues when installing packages. Note that
for deep reinforcement learning we will be making extensive use of reinforcement
learning agent algorithms from the so-called Stable Baselines.14 The Stable Baselines
work with version 1 of TensorFlow and with PyTorch, but, as of this writing, not
with version 2 of TensorFlow. TensorFlow version 1.14 has been tested to work.15
13 More installation guidance can be found on the TensorFlow page at https://www.tensorflow.
org/install/pip.
14 https://github.com/hill-a/stable-baselines
Installing is easy with Python’s pip package manager: just type a command such as
pip install tensorflow==1.14
(see the TensorFlow installation page for the exact, current command). This should
now download and install TensorFlow and Keras. We will check if
everything is working by executing the MNIST training example with the default
training dataset.
Keras is built on top of TensorFlow, and comes along when you install TensorFlow.
Basic Keras mirrors the familiar Scikit-learn interface [594].
Each Keras program specifies a model that is to be learned. The model is the
neural network, which consists of weights and layers (of neurons). You specify the
architecture of the model in Keras, and then fit the model on the training data. When
the model is trained, you can evaluate the loss function on test data, or perform
predictions of the outcome based on some test input example.
Keras has two main programming paradigms: sequential and functional. List-
ing B.3 shows the most basic Sequential Keras model, from the Keras documentation,
with a two-layer model, a ReLU layer, and a softmax layer, using simple SGD for
backpropagation. The Sequential model in Keras has an object-oriented syntax.
The Keras documentation is at https://keras.io/getting_started/intro_
to_keras_for_researchers/. It is quite accessible, and you are encouraged to
learn Keras by working through the online tutorials.
A slightly more useful example is fitting a model on MNIST, see Listing B.4. This
example uses a more flexible Keras syntax, the functional API, in which transforma-
tions are chained on top of the previous layers. This example of Keras code loads
MNIST images in a training set and a test set, and creates a model of dense ReLU
layers using the Functional API. It then creates a Keras model of these layers, and
prints a summary of the model. Then the model is trained from numpy data with
the fit function, for a single epoch, and also from a dataset, and the loss history is
printed.
15 You may be surprised that so many version numbers were mentioned. Unfortunately not
all versions are compatible; some care is necessary to get the software to work: Python 3.7.9,
TensorFlow 1.14.0, Stable Baselines 2, and pyglet 1.5.11 worked at the time of writing. This slightly
embarrassing situation arises because the field of deep reinforcement learning is driven by a
community of researchers who collaborate to make code bases work. When new insights trigger a
rewrite that loses backward compatibility, as happened with TensorFlow 2.0, frantic rewriting of
dependent software occurs. The field is still young, and some instability of software packages will
remain with us for the foreseeable future.
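Listing B.4 is likewise not reproduced here; a functional-API sketch along the lines described above (load MNIST, chain dense ReLU layers, fit for one epoch, print the loss history) could look as follows, with illustrative layer widths:

from tensorflow import keras
from tensorflow.keras import layers

# Load MNIST and flatten the 28x28 images to vectors of 784 floats in [0, 1].
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Functional API: each layer is called on the output of the previous one.
inputs = keras.Input(shape=(784,))
x = layers.Dense(64, activation="relu")(inputs)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10, activation="softmax")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.summary()

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train for a single epoch and print the loss history.
history = model.fit(x_train, y_train, batch_size=64, epochs=1,
                    validation_data=(x_test, y_test))
print(history.history)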
The code of the example shows how close Keras is to the way in which we
think and reason about neural networks. The examples only show the briefest of
glimpses to what is possible in Keras. Keras has options for performance monitoring,
for checkpointing of long training runs, and for interfacing with TensorBoard, to
visualize the training process. TensorBoard is an indispensable tool, allowing you
to compare your intuition of what should be going on in your network with what is
actually going on.
Deep reinforcement learning is still very much a field with more degrees of
freedom in experimentation than established practices, and being able to plot the
progress of training processes is essential for a better understanding of how the
model behaves. The more you explore Keras, the better you will be able to progress
in deep reinforcement learning [269].
Exercises
Below are some questions to check your understanding of deep learning. Each
question is a closed question where a simple, single sentence answer is expected.
Questions
1. Datasets are often split into two sub-datasets in machine learning. Which are
those two, and why are they split?
2. What is generalization?
3. Sometimes a third sub-dataset is used. What is it called and what is it used for?
4. If we consider observations and model parameters, when is a problem high-
dimensional and when is it low-dimensional?
5. How do we measure the capacity of a machine learning model?
6. What is a danger with low capacity models?
7. What is a danger with high capacity models?
8. What is the effect of overfitting on generalization?
9. What is the difference between supervised learning and reinforcement learning?
10. What is the difference between a shallow network and a deep network?
11. Supervised learning has a model and a dataset, reinforcement learning has which
two central concepts?
12. What phases does a learning epoch have? What happens in each phase?
13. Name three factors that were essential for the deep learning breakthrough, and
why.
14. What is end-to-end learning? Do you know an alternative? What are the advan-
tages of each?
15. What is underfitting, and what causes it? What is overfitting, and what causes
it? How can you see if you have overfitting?
16. Name three ways to prevent overfitting.
17. Which three types of layers does a neural network have?
18. How many hidden layers does a shallow neural network have?
19. Describe how adjusting weights works in a neural network. Hint: think of
examples, labels, a forward phase, a backward phase, error functions, and gradients.
20. What is the difference between a fully connected network and a convolutional
neural network?
21. What is max pooling?
22. Why are shared weights advantageous?
23. What is feature learning?
24. What is representation learning?
25. What is deep learning?
26. What is an advantage of convolutional neural networks over fully connected neural
networks?
27. Name two well-known image recognition data sets.
Exercises
Let us now start with some exercises. If you have not done so already, install
PyTorch16 or TensorFlow and Keras (see Sect. 2.2.4.1 or go to the TensorFlow
page).17 Be sure to check the right versions of Python, TensorFlow, and the Stable
Baselines to make sure they work well together. The exercises below are meant to
be done in Keras.
1. Generalization Install Keras. Go to the Keras MNIST example. Perform a clas-
sification task. Note how many epochs the training takes, and, in testing, how
well it generalizes. Perform the classification on a smaller training set: how does
the speed of learning change, and how does generalization change? Vary other
elements: try a different optimizer than Adam, try a different learning rate, try a
different (deeper) architecture, try wider hidden layers. Does it learn faster? Does
it generalize better?
2. Overfitting Use Keras again, but this time on ImageNet. Now try different over-
fitting solutions. Does the training speed change? Does generalization change?
Now try the hold-out validation set. Do training and generalization change?
3. Confidence How many runs did you do in the previous exercises: just a single
run to see how long training took and how well generalization worked? Try
to run it again. Do you get the same results? How large is the difference? Can
you change the random seeds of Keras or TensorFlow? Can you calculate a
confidence interval? How much does the confidence improve when you do 10
randomized runs? How about 100 runs? Make graphs with error bars.
4. GPU It might be that you have access to a GPU machine that is capable of
running PyTorch or TensorFlow in parallel to speed up the training. Install the
GPU version and check that it recognizes the GPU and is indeed using it.
5. Parallelism It might be that you have access to a multicore CPU machine. When
you are running multiple runs in order to improve confidence, an easy way
to speed up your experiment is to spawn multiple jobs at the shell, assigning the
output to different log files, and to write a script that combines the results and
draws graphs. Write the scripts necessary to achieve this, test them, and do a
large-confidence experiment.
16 https://pytorch.org
17 https://www.tensorflow.org
Appendix C
Deep Reinforcement Learning Suites
Deep reinforcement learning is a highly active field of research. One reason for
the progress is the availability of high-quality algorithms and code: high-quality
environments, agent algorithms, and deep learning suites are all made available by
researchers along with their research papers. This appendix provides pointers to
these code bases, for your convenience.
C.1 Environments
Progress has benefited greatly from the availability of high quality environments, on
which the algorithms can be tested. We provide pointers to some of the environments
(Table C.1).
For value-based and policy-based methods most mainstream algorithms have been
collected and are freely available. For two-agent, multi-agent, hierarchical, and meta
learning, the agent algorithms are also on GitHub, but not always in the same place
as the basic algorithms. Table C.2 provides pointers.
The two most well-known deep learning suites are TensorFlow and PyTorch. Base
TensorFlow has a complicated programming model. Keras has been developed as
an easy to use layer on top of TensorFlow. When you use TensorFlow, start with
Keras. Or use PyTorch.
TensorFlow and Keras are at https://www.tensorflow.org.
PyTorch is at https://pytorch.org.
References
1. Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu
Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg,
Rajat Monga, Sherry Moore, Derek Gordon Murray, Benoit Steiner, Paul A. Tucker, Vijay
Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. Tensorflow: A
system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems
Design and Implementation (OSDI 16), pages 265–283, 2016. 304, 322
2. Pieter Abbeel, Adam Coates, and Andrew Y Ng. Autonomous helicopter aerobatics through
apprenticeship learning. The International Journal of Robotics Research, 29(13):1608–1639,
2010. 6
3. Pieter Abbeel, Adam Coates, Morgan Quigley, and Andrew Y Ng. An application of reinforce-
ment learning to aerobatic helicopter flight. In Advances in Neural Information Processing
Systems, pages 1–8, 2007. 5, 6
4. Abbas Abdolmaleki, Jost Tobias Springenberg, Yuval Tassa, Remi Munos, Nicolas Heess, and
Martin Riedmiller. Maximum a posteriori policy optimisation. arXiv preprint arXiv:1806.06920,
2018. 170
5. Abhishek. Multi-arm bandits: a potential alternative to a/b tests https://medium.com/
brillio-data-science/multi-arm-bandits-a-potential-alternative-to-a-b-
tests-a647d9bf2a7e, 2019. 48
6. Bruce Abramson. Expected-outcome: A general model of static evaluation. IEEE Transactions
on Pattern Analysis and Machine Intelligence, 12(2):182–193, 1990. 164
7. Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron Courville, and Marc G Belle-
mare. Deep reinforcement learning at the edge of the statistical precipice. arXiv preprint
arXiv:2108.13264, 2021. 114
8. Pulkit Agrawal, Ross Girshick, and Jitendra Malik. Analyzing the performance of multilayer
neural networks for object recognition. In European Conference on Computer Vision, pages
329–344. Springer, 2014. 245
9. Sanjeevan Ahilan and Peter Dayan. Feudal multi-agent hierarchies for cooperative reinforce-
ment learning. arXiv preprint arXiv:1901.08492, 2019. 233
10. Zeynep Akata, Florent Perronnin, Zaid Harchaoui, and Cordelia Schmid. Label-embedding
for attribute-based classification. In Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition, pages 819–826, 2013. 261, 268, 269, 276
11. Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. Optuna:
A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM
SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2623–2631,
2019. 260
12. Jay Alammer. The illustrated transformer. https://jalammar.github.io/illustrated-
transformer/. 320
13. Stefano Albrecht and Peter Stone. Multiagent learning: foundations and recent trends. In
Tutorial at IJCAI-17 conference, 2017. 223
14. Stefano Albrecht and Peter Stone. Autonomous agents modelling other agents: A compre-
hensive survey and open problems. Artificial Intelligence, 258:66–95, 2018. 208, 223
15. Ethem Alpaydin. Introduction to Machine Learning. MIT press, 2009. 40, 45
16. Safa Alver. The option-critic architecture. https://alversafa.github.io/blog/2018/
11/28/optncrtc.html, 2018. 229
17. Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané.
Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016. 268
18. Ankesh Anand, Jacob Walker, Yazhe Li, Eszter Vértes, Julian Schrittwieser, Sherjil Ozair,
Théophane Weber, and Jessica B Hamrick. Procedural generalization by planning with
self-supervised world models. arXiv preprint arXiv:2111.01587, 2021. 136, 140
19. Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder,
Bob McGrew, Josh Tobin, Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay.
In Advances in Neural Information Processing Systems, pages 5048–5058, 2017. 231, 233, 241
20. Anonymous. Go AI strength vs. time. Reddit post, 2017. 154
21. Thomas Anthony, Tom Eccles, Andrea Tacchetti, János Kramár, Ian M. Gemp, Thomas C.
Hudson, Nicolas Porcel, Marc Lanctot, Julien Pérolat, Richard Everett, Satinder Singh, Thore
Graepel, and Yoram Bachrach. Learning to play no-press diplomacy with best response
policy iteration. In Advances in Neural Information Processing Systems, 2020. 208
22. Thomas Anthony, Zheng Tian, and David Barber. Thinking fast and slow with deep learning
and tree search. In Advances in Neural Information Processing Systems, pages 5360–5370,
2017. 144, 179
23. Antreas Antoniou, Harrison Edwards, and Amos Storkey. How to train your MAML. arXiv
preprint arXiv:1810.09502, 2018. 268
24. Grigoris Antoniou and Frank Van Harmelen. A Semantic Web Primer. MIT press Cambridge,
MA, 2008. 9
25. Oleg Arenz. Monte Carlo Chess. Master’s thesis, Universität Darmstadt, 2012. 188
26. Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. Multi-task feature learning.
In Advances in Neural Information Processing Systems, pages 41–48, 2007. 251
27. Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, and Anil Anthony Bharath. Deep
reinforcement learning: A brief survey. IEEE Signal Processing Magazine, 34(6):26–38, 2017.
25, 28, 59, 86
28. John Asmuth, Lihong Li, Michael L Littman, Ali Nouri, and David Wingate. A Bayesian
sampling approach to exploration in reinforcement learning. In Proceedings of the Twenty-
Fifth Conference on Uncertainty in Artificial Intelligence, pages 19–26. AUAI Press, 2009.
278
29. Arthur Aubret, Laetitia Matignon, and Salima Hassas. A survey on intrinsic motivation in
reinforcement learning. arXiv preprint arXiv:1908.06976, 2019. 237, 238, 241, 277
30. Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. Journal of
Machine Learning Research, 3(Nov):397–422, 2002. 47, 168
31. Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed
bandit problem. Machine Learning, 47(2-3):235–256, 2002. 168, 170
32. Peter Auer and Ronald Ortner. UCB revisited: Improved regret bounds for the stochastic
multi-armed bandit problem. Periodica Mathematica Hungarica, 61(1-2):55–65, 2010. 168
33. Robert Axelrod. An evolutionary approach to norms. The American Political Science Review,
pages 1095–1111, 1986. 223
34. Robert Axelrod. The complexity of cooperation: Agent-based models of competition and
collaboration, volume 3. Princeton university press, 1997. 209, 216, 223
35. Robert Axelrod. The dissemination of culture: A model with local convergence and global
polarization. Journal of Conflict Resolution, 41(2):203–226, 1997. 223
36. Robert Axelrod and Douglas Dion. The further evolution of cooperation. Science,
242(4884):1385–1390, 1988. 200
37. Robert Axelrod and William D Hamilton. The evolution of cooperation. Science,
211(4489):1390–1396, 1981. 200, 223
38. Kamyar Azizzadenesheli, Brandon Yang, Weitang Liu, Emma Brunskill, Zachary C Lipton,
and Animashree Anandkumar. Surprising negative results for generative adversarial tree
search. arXiv preprint arXiv:1806.05780, 2018. 144
39. Mohammad Babaeizadeh, Mohammad Taghi Saffar, Danijar Hafner, Harini Kannan, Chelsea
Finn, Sergey Levine, and Dumitru Erhan. Models, pixels, and rewards: Evaluating design
trade-offs in visual model-based reinforcement learning. arXiv preprint arXiv:2012.04603,
2020. 140
40. Thomas Bäck. Evolutionary Algorithms in Theory and Practice: Evolutionary Strategies,
Evolutionary Programming, Genetic Algorithms. Oxford University Press, 1996. 209, 210, 223
41. Thomas Bäck, David B Fogel, and Zbigniew Michalewicz. Handbook of evolutionary compu-
tation. Release, 97(1):B1, 1997. 223
42. Thomas Bäck, Frank Hoffmeister, and Hans-Paul Schwefel. A survey of evolution strategies.
In Proceedings of the fourth International Conference on Genetic Algorithms, 1991. 202, 223
43. Thomas Bäck and Hans-Paul Schwefel. An overview of evolutionary algorithms for parameter
optimization. Evolutionary Computation, 1(1):1–23, 1993. 11, 223
44. Christer Backstrom and Peter Jonsson. Planning with abstraction hierarchies can be expo-
nentially less efficient. In Proceedings of the 14th International Joint Conference on Artificial
Intelligence, volume 2, pages 1599–1604, 1995. 228, 232
45. Pierre-Luc Bacon, Jean Harb, and Doina Precup. The option-critic architecture. In Proceedings
of the AAAI Conference on Artificial Intelligence, volume 31, 2017. 231, 233, 234, 241
46. Adrià Puigdomènech Badia, Bilal Piot, Steven Kapturowski, Pablo Sprechmann, Alex Vitvit-
skyi, Daniel Guo, and Charles Blundell. Agent57: Outperforming the Atari human benchmark.
arXiv preprint arXiv:2003.13350, 2020. 85
47. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by
jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014. 319, 320
48. Leemon Baird. Residual algorithms: Reinforcement learning with function approximation.
In Machine Learning Proceedings 1995, pages 30–37. Elsevier, 1995. 66, 73
49. Bowen Baker, Ingmar Kanitscheider, Todor Markov, Yi Wu, Glenn Powell, Bob McGrew,
and Igor Mordatch. Emergent tool use from multi-agent autocurricula. arXiv preprint
arXiv:1909.07528, 2019. 214, 216, 217, 332
50. Trapit Bansal, Jakub Pachocki, Szymon Sidor, Ilya Sutskever, and Igor Mordatch. Emergent
complexity via multi-agent competition. arXiv preprint arXiv:1710.03748, 2017. 212
51. Chitta Baral. Knowledge Representation, Reasoning and Declarative Problem Solving. Cam-
bridge university press, 2003. 232
52. Nolan Bard, Jakob N. Foerster, Sarath Chandar, Neil Burch, Marc Lanctot, H. Francis Song,
Emilio Parisotto, Vincent Dumoulin, Subhodeep Moitra, Edward Hughes, Iain Dunning, Shibl
Mourad, Hugo Larochelle, Marc G. Bellemare, and Michael Bowling. The Hanabi challenge:
A new frontier for AI research. Artificial Intelligence, 280:103216, 2020. 221
53. Nolan Bard, John Hawkin, Jonathan Rubin, and Martin Zinkevich. The annual computer
poker competition. AI Magazine, 34(2):112, 2013. 214, 223
54. Simon Baron-Cohen, Alan M Leslie, and Uta Frith. Does the autistic child have a “theory of
mind”? Cognition, 21(1):37–46, 1985. 193, 211
55. Andrew Barron, Jorma Rissanen, and Bin Yu. The minimum description length principle in
coding and modeling. IEEE Transactions on Information Theory, 44(6):2743–2760, 1998. 15
56. Andrew G Barto and Sridhar Mahadevan. Recent advances in hierarchical reinforcement
learning. Discrete Event Dynamic Systems, 13(1-2):41–77, 2003. 232, 241
57. Andrew G Barto, Richard S Sutton, and Charles W Anderson. Neuronlike adaptive elements
that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and
Cybernetics, (5):834–846, 1983. 56, 99, 116
58. OpenAI Baselines. DQN https://openai.com/blog/openai-baselines-dqn/, 2017. 80,
84
59. Jonathan Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research,
12:149–198, 2000. 247
60. Jonathan Baxter, Andrew Tridgell, and Lex Weaver. Knightcap: a chess program that learns
by combining TD (𝜆) with game-tree search. arXiv preprint cs/9901002, 1999. 188
61. Jonathan Baxter, Andrew Tridgell, and Lex Weaver. Learning to play chess using temporal
differences. Machine Learning, 40(3):243–263, 2000. 162, 188
62. Don Beal and Martin C. Smith. Temporal difference learning for heuristic search and game
playing. Information Sciences, 122(1):3–21, 2000. 188
63. Mark F Bear, Barry W Connors, and Michael A Paradiso. Neuroscience, volume 2. Lippincott
Williams & Wilkins, 2007. 304
64. Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich
Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, Julian Schrittwieser, Keith
Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King,
Demis Hassabis, Shane Legg, and Stig Petersen. DeepMind Lab. arXiv preprint arXiv:1612.03801,
2016. 332
65. Laurens Beljaards. AI agents for the abstract strategy game Tak. Master's thesis, Leiden
University, 2017. 163
66. Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine
learning and the bias-variance trade-off. arXiv preprint arXiv:1812.11118, 2018. 321
67. Mikhail Belkin, Daniel Hsu, and Ji Xu. Two models of double descent for weak features.
arXiv preprint arXiv:1903.07571, 2019. 321
68. Mikhail Belkin, Daniel J Hsu, and Partha Mitra. Overfitting or perfect fitting? Risk bounds
for classification and regression rules that interpolate. In Advances in Neural Information
Processing Systems, pages 2300–2311, 2018. 321
69. Marc Bellemare, Joel Veness, and Michael Bowling. Bayesian learning of recursively factored
environments. In International Conference on Machine Learning, pages 1211–1219, 2013. 278
70. Marc G Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforce-
ment learning. In International Conference on Machine Learning, pages 449–458, 2017. 80,
83
71. Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The Arcade Learning
Environment: An evaluation platform for general agents. Journal of Artificial Intelligence
Research, 47:253–279, 2013. 7, 41, 65, 67, 85, 86, 237, 262, 332
72. Richard Bellman. Dynamic Programming. Courier Corporation, 1957, 2013. 39, 45, 59, 295,
299
73. Richard Bellman. On the application of dynamic programing to the determination of optimal
play in chess and checkers. Proceedings of the National Academy of Sciences, 53(2):244–247,
1965. 59
74. Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of represen-
tations for domain adaptation. Advances in Neural Information Processing Systems, 19:137,
2007. 252
75. Yoshua Bengio, Samy Bengio, and Jocelyn Cloutier. Learning a synaptic learning rule.
Technical report, Montreal, 1990. 268
76. Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and
new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–
1828, 2013. 308
77. Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning.
In Proceedings of the 26th Annual International Conference on Machine Learning, pages 41–48,
2009. 176, 188
78. Gerardo Beni. Swarm intelligence. Complex Social and Behavioral Systems: Game Theory and
Agent-Based Models, pages 791–818, 2020. 223
79. Gerardo Beni and Jing Wang. Swarm intelligence in cellular robotic systems. In Robots and
Biological Systems: Towards a New Bionics?, pages 703–712. Springer, 1993. 211
80. Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemyslaw Debiak,
Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Christopher Hesse, Rafal
Józefowicz, Scott Gray, Catherine Olsson, Jakub Pachocki, Michael Petrov, Henrique Pondé
de Oliveira Pinto, Jonathan Raiman, Tim Salimans, Jeremy Schlatter, Jonas Schneider, Szymon
Sidor, Ilya Sutskever, Jie Tang, Filip Wolski, and Susan Zhang. Dota 2 with large scale deep
reinforcement learning. arXiv preprint arXiv:1912.06680, 2019. 69
81. Tim Berners-Lee, James Hendler, and Ora Lassila. The semantic web. Scientific American,
284(5):28–37, 2001. 9
82. Daniel S Bernstein, Robert Givan, Neil Immerman, and Shlomo Zilberstein. The complexity
of decentralized control of markov decision processes. Mathematics of Operations Research,
27(4):819–840, 2002. 197, 206
83. Luca Bertinetto, Joao F Henriques, Philip HS Torr, and Andrea Vedaldi. Meta-learning with
differentiable closed-form solvers. In International Conference on Learning Representations,
2018. 268
84. R Bertolami, H Bunke, S Fernandez, A Graves, M Liwicki, and J Schmidhuber. A novel
connectionist system for improved unconstrained handwriting recognition. IEEE Transactions
on Pattern Analysis and Machine Intelligence, 31(5), 2009. 313
85. Dimitri P Bertsekas. Dynamic Programming and Optimal Control, volume 1. Athena Scientific,
Belmont, MA, 1995. 10
86. Dimitri P Bertsekas and John Tsitsiklis. Neuro-Dynamic Programming. MIT Press Cambridge,
1996. 7, 59
87. Shrisha Bharadwaj. Embarrsingly simple zero shot learning. https://github.com/
chichilicious/embarrsingly-simple-zero-shot-learning, 2018. 268, 269
88. Shalabh Bhatnagar, Doina Precup, David Silver, Richard S Sutton, Hamid R Maei, and Csaba
Szepesvári. Convergent temporal-difference learning with arbitrary smooth function ap-
proximation. In Advances in Neural Information Processing Systems, pages 1204–1212, 2009.
74, 87
89. Darse Billings, Aaron Davidson, Jonathan Schaeffer, and Duane Szafron. The challenge of
poker. Artificial Intelligence, 134(1-2):201–240, 2002. 214, 223
90. Darse Billings, Aaron Davidson, Terence Schauenberg, Neil Burch, Michael Bowling, Robert
Holte, Jonathan Schaeffer, and Duane Szafron. Game-tree search with adaptation in stochastic
imperfect-information games. In International Conference on Computers and Games, pages
21–34. Springer, 2004. 223
91. Darse Billings, Denis Papp, Jonathan Schaeffer, and Duane Szafron. Opponent modeling in
poker. AAAI/IAAI, 493:499, 1998. 161
92. Steven Bird, Ewan Klein, and Edward Loper. Natural language processing with Python:
analyzing text with the natural language toolkit. O’Reilly Media, Inc., 2009. 276
93. Christopher M Bishop. Pattern Recognition and Machine Learning. Information science and
statistics. Springer Verlag, Heidelberg, 2006. 7, 14, 59, 105, 128, 259, 298, 300, 303, 321
94. Peter Bloem. Transformers http://peterbloem.nl/blog/transformers. 320
95. Christian Blum and Daniel Merkle. Swarm Intelligence: Introduction and Applications. Springer
Science & Business Media, 2008. 203, 211
96. Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx,
Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson,
Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie S. Chen, Kathleen
Creel, Jared Quincy Davis, Dorottya Demszky, Chris Donahue, Moussa Doumbouya, Esin
Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn,
Trevor Gale, Lauren Gillespie, Karan Goel, Noah D. Goodman, Shelby Grossman, Neel Guha,
Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu,
Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti,
Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark S. Krass, Ranjay Krishna,
and Rohith Kuditipudi. On the opportunities and risks of foundation models. arXiv preprint
arXiv:2108.07258, 2021. 244, 263
97. Eric Bonabeau, Marco Dorigo, and Guy Theraulaz. Swarm Intelligence: From Natural to
Artificial Systems. Oxford University Press, 1999. 11, 203
98. Borealis. Few shot learning tutorial https://www.borealisai.com/en/blog/tutorial-
2-few-shot-learning-and-meta-learning-i/. 254
99. Zdravko I Botev, Dirk P Kroese, Reuven Y Rubinstein, and Pierre L’Ecuyer. The cross-entropy
method for optimization. In Handbook of Statistics, volume 31, pages 35–59. Elsevier, 2013.
133
100. Matthew Botvinick, Sam Ritter, Jane X Wang, Zeb Kurth-Nelson, Charles Blundell, and Demis
Hassabis. Reinforcement learning, fast and slow. Trends in Cognitive Sciences, 23(5):408–422,
2019. 267
101. Matthew M Botvinick, Yael Niv, and Andew G Barto. Hierarchically organized behavior and
its neural foundations: a reinforcement learning perspective. Cognition, 113(3):262–280, 2009.
241
102. Bruno Bouzy and Bernard Helmstetter. Monte Carlo Go developments. In Advances in
Computer Games, pages 159–174. Springer, 2004. 164, 167
103. Michael Bowling, Neil Burch, Michael Johanson, and Oskari Tammelin. Heads-up Limit
Hold’em poker is solved. Science, 347(6218):145–149, 2015. 196, 214, 223
104. Michael H. Bowling, Nicholas Abou Risk, Nolan Bard, Darse Billings, Neil Burch, Joshua
Davidson, John Alexander Hawkin, Robert Holte, Michael Johanson, Morgan Kan, Bryce
Paradis, Jonathan Schaeffer, David Schnizlein, Duane Szafron, Kevin Waugh, and Martin
Zinkevich. A demonstration of the polaris poker system. In Proceedings of The 8th Interna-
tional Conference on Autonomous Agents and Multiagent Systems, volume 2, pages 1391–1392,
2009. 223
105. Robert Boyd and Peter J Richerson. Culture and the Evolutionary Process. University of
Chicago press, 1988. 209, 216, 223
106. Pavel Brazdil, Christophe Giraud Carrier, Carlos Soares, and Ricardo Vilalta. Metalearning:
Applications to data mining. Springer Science & Business Media, 2008. 247, 259, 268
107. Eric Brochu, Vlad M Cora, and Nando De Freitas. A tutorial on Bayesian optimization of
expensive cost functions, with application to active user modeling and hierarchical reinforce-
ment learning. arXiv preprint arXiv:1012.2599, 2010. 278
108. Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang,
and Wojciech Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016. 41, 85, 86, 93,
116, 332
109. Rodney A Brooks. Intelligence without representation. Artificial Intelligence, 47(1-3):139–159,
1991. 11
110. Noam Brown, Sam Ganzfried, and Tuomas Sandholm. Hierarchical abstraction, distributed
equilibrium computation, and post-processing, with application to a champion No-Limit
Texas Hold’em agent. In AAAI Workshop: Computer Poker and Imperfect Information, 2015.
204
111. Noam Brown, Adam Lerer, Sam Gross, and Tuomas Sandholm. Deep counterfactual regret
minimization. In International Conference on Machine Learning, pages 793–802. PMLR, 2019.
206, 216, 223
112. Noam Brown and Tuomas Sandholm. Superhuman AI for Heads-up No-limit poker: Libratus
beats top professionals. Science, 359(6374):418–424, 2018. 204, 215, 223
113. Noam Brown and Tuomas Sandholm. Superhuman AI for multiplayer poker. Science,
365(6456):885–890, 2019. 214, 216, 223
114. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel
Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M.
Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz
Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec
Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In
Advances in Neural Information Processing Systems, 2020. 246, 308, 320
115. Cameron Browne. Hex Strategy. AK Peters/CRC Press, 2000. 184
116. Cameron Browne, Dennis JNJ Soemers, and Eric Piette. Strategic features for general games.
In KEG@ AAAI, pages 70–75, 2019. 278
117. Cameron B Browne, Edward Powley, Daniel Whitehouse, Simon M Lucas, Peter I Cowling,
Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon
Colton. A survey of Monte Carlo Tree Search methods. IEEE Transactions on Computational
Intelligence and AI in Games, 4(1):1–43, 2012. 164, 165, 166, 167, 168, 169, 170, 187, 333
118. Bernd Brügmann. Monte Carlo Go. Technical report, Syracuse University, 1993. 164, 167
119. Bruno Buchberger, George E Collins, Rüdiger Loos, and Rudolph Albrecht. Computer algebra
symbolic and algebraic computation. ACM SIGSAM Bulletin, 16(4):5–5, 1982. 9
120. Cristian Buciluǎ, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In
Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and
Data Mining, pages 535–541, 2006. 278
121. Lars Buesing, Theophane Weber, Sébastien Racaniere, SM Eslami, Danilo Rezende, David P
Reichert, Fabio Viola, Frederic Besse, Karol Gregor, Demis Hassabis, and Daan Wierstra.
Learning and querying fast generative models for reinforcement learning. arXiv preprint
arXiv:1802.03006, 2018. 131, 138
122. Lucian Busoniu, Robert Babuska, and Bart De Schutter. A comprehensive survey of multi-
agent reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part C
(Applications and Reviews), 38(2):156–172, 2008. 223
123. Zhiyuan Cai, Huanhui Cao, Wenjie Lu, Lin Zhang, and Hao Xiong. Safe multi-agent rein-
forcement learning through decentralized multiple control barrier functions. arXiv preprint
arXiv:2103.12553, 2021. 207
124. Murray Campbell, A Joseph Hoane Jr, and Feng-Hsiung Hsu. Deep Blue. Artificial Intelligence,
134(1-2):57–83, 2002. 57, 64
125. Andres Campero, Roberta Raileanu, Heinrich Küttler, Joshua B Tenenbaum, Tim Rocktäschel,
and Edward Grefenstette. Learning with AMIGo: Adversarially motivated intrinsic goals. In
International Conference on Learning Representations, 2020. 176, 231, 234
126. Kris Cao, Angeliki Lazaridou, Marc Lanctot, Joel Z Leibo, Karl Tuyls, and Stephen Clark.
Emergent communication through negotiation. In International Conference on Learning
Representations, 2018. 208
127. Thomas Carr, Maria Chli, and George Vogiatzis. Domain adaptation for reinforcement
learning on the atari. In Proceedings of the 18th International Conference on Autonomous
Agents and MultiAgent Systems, AAMAS ’19, Montreal, pages 1859–1861, 2018. 252
128. Edward Cartwright. Behavioral Economics. Routledge, 2018. 223
129. Rich Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997. 246, 248, 250, 251
130. Rich Caruana, Steve Lawrence, and C Lee Giles. Overfitting in neural nets: Backpropagation,
conjugate gradient, and early stopping. In Advances in Neural Information Processing Systems,
pages 402–408, 2001. 321
131. Pablo Samuel Castro, Subhodeep Moitra, Carles Gelada, Saurabh Kumar, and Marc G Belle-
mare. Dopamine: A research framework for deep reinforcement learning. arXiv preprint
arXiv:1812.06110, 2018. 332
132. Tristan Cazenave. Residual networks for computer Go. IEEE Transactions on Games, 10(1):107–
110, 2018. 172
133. Tristan Cazenave, Yen-Chi Chen, Guan-Wei Chen, Shi-Yu Chen, Xian-Dong Chiu, Julien
Dehos, Maria Elsa, Qucheng Gong, Hengyuan Hu, Vasil Khalidov, Cheng-Ling Li, Hsin-I Lin,
Yu-Jin Lin, Xavier Martinet, Vegard Mella, Jérémy Rapin, Baptiste Rozière, Gabriel Synnaeve,
Fabien Teytaud, Olivier Teytaud, Shi-Cheng Ye, Yi-Jun Ye, Shi-Jim Yen, and Sergey Zagoruyko.
Polygames: Improved zero learning. arXiv preprint arXiv:2001.09832, 2020. 183, 184, 185, 187,
332, 333
134. Tristan Cazenave and Bernard Helmstetter. Combining tactical search and Monte-Carlo in
the game of Go. In Proceedings of the 2005 IEEE Symposium on Computational Intelligence
and Games (CIG05), Essex University, volume 5, pages 171–175, 2005. 170
135. Hyeong Soo Chang, Michael C Fu, Jiaqiao Hu, and Steven I Marcus. An adaptive sampling
algorithm for solving Markov decision processes. Operations Research, 53(1):126–139, 2005.
170
136. Yang Chao. Share and play new sokoban levels. http://Sokoban.org, 2013. 26
137. Guillaume Chaslot. Monte-Carlo tree search. PhD thesis, Maastricht University, 2010. 167
138. Guillaume Chaslot, Sander Bakkes, Istvan Szita, and Pieter Spronck. Monte-Carlo tree search:
A new framework for game AI. In AIIDE, 2008. 170, 179
139. Kumar Chellapilla and David B Fogel. Evolving neural networks to play checkers without
relying on expert knowledge. IEEE Transactions on Neural Networks, 10(6):1382–1391, 1999.
74
140. Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter
Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning
via sequence modeling. arXiv preprint arXiv:2106.01345, 2021. 320
141. Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang. A
closer look at few-shot classification. In International Conference on Learning Representations,
2019. 253, 264, 268
142. Yu Cheng, Duo Wang, Pan Zhou, and Tao Zhang. A survey of model compression and
acceleration for deep neural networks. arXiv preprint arXiv:1710.09282, 2017. 278
143. Maxime Chevalier-Boisvert, Lucas Willems, and Suman Pal. Minimalistic gridworld en-
vironment for OpenAI Gym https://github.com/maximecb/gym-minigrid, 2018. 177,
234
144. Silvia Chiappa, Sébastien Racaniere, Daan Wierstra, and Shakir Mohamed. Recurrent envi-
ronment simulators. In International Conference on Learning Representations, 2017. 138
145. Jaeyoung Choi, Jack J Dongarra, and David W Walker. PB-BLAS: a set of parallel block basic
linear algebra subprograms. Concurrency: Practice and Experience, 8(7):517–535, 1996. 324
146. François Chollet. Keras. https://keras.io, 2015. 87
147. François Chollet. Deep learning with Python. Manning Publications Co., 2017. 322
148. Patryk Chrabaszcz, Ilya Loshchilov, and Frank Hutter. Back to basics: Benchmarking canoni-
cal evolution strategies for playing Atari. In Proceedings of the Twenty-Seventh International
Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, pages
1419–1426, 2018. 223
149. Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement
learning in a handful of trials using probabilistic dynamics models. In Advances in Neural
Information Processing Systems, pages 4754–4765, 2018. 129, 133, 138, 144
150. Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural
networks. In International Conference on Learning Representations, 2016. 276
151. Carlo Ciliberto, Youssef Mroueh, Tomaso Poggio, and Lorenzo Rosasco. Convex learning of
multiple tasks and their structure. In International Conference on Machine Learning, pages
1548–1557. PMLR, 2015. 247
152. Dan Cireşan, Ueli Meier, Luca Maria Gambardella, and Jürgen Schmidhuber. Deep, big,
simple neural nets for handwritten digit recognition. Neural Computation, 22(12):3207–3220,
2010. 325
153. Dan Cireşan, Ueli Meier, and Jürgen Schmidhuber. Multi-column deep neural networks for
image classification. In 2012 IEEE Conference on Computer Vision and Pattern Recognition,
Providence, RI, US, pages 3642–3649, 2012. 313
154. Christopher Clark and Amos Storkey. Teaching deep convolutional neural networks to play
Go. arXiv preprint arXiv:1412.3409, 2014. 74, 188
155. Christopher Clark and Amos Storkey. Training deep convolutional neural networks to play
Go. In International Conference on Machine Learning, pages 1766–1774, 2015. 179, 188
156. Ignasi Clavera, Jonas Rothfuss, John Schulman, Yasuhiro Fujita, Tamim Asfour, and Pieter
Abbeel. Model-based reinforcement learning via meta-policy optimization. In 2nd Annual
Conference on Robot Learning, CoRL 2018, Zürich, Switzerland, pages 617–629, 2018. 128, 138,
144
157. William F Clocksin and Christopher S Mellish. Programming in Prolog: Using the ISO standard.
Springer Science & Business Media, 1981. 9
158. Karl Cobbe, Chris Hesse, Jacob Hilton, and John Schulman. Leveraging procedural generation
to benchmark reinforcement learning. In International Conference on Machine Learning, pages
2048–2056. PMLR, 2020. 177, 279, 332
159. Karl Cobbe, Oleg Klimov, Chris Hesse, Taehoon Kim, and John Schulman. Quantifying
generalization in reinforcement learning. In International Conference on Machine Learning,
pages 1282–1289, 2018. 278
160. Helder Coelho and Luis Moniz Pereira. Automated reasoning in geometry theorem proving
with prolog. Journal of Automated Reasoning, 2(4):329–390, 1986. 278
161. Cédric Colas, Pierre Fournier, Mohamed Chetouani, Olivier Sigaud, and Pierre-Yves Oudeyer.
Curious: intrinsically motivated modular multi-goal reinforcement learning. In International
Conference on Machine Learning, pages 1331–1340. PMLR, 2019. 277
162. Cédric Colas, Tristan Karch, Olivier Sigaud, and Pierre-Yves Oudeyer. Intrinsically motivated
goal-conditioned reinforcement learning: a short survey. arXiv preprint arXiv:2012.09830,
2020. 277
163. Edoardo Conti, Vashisht Madhavan, Felipe Petroski Such, Joel Lehman, Kenneth O Stanley,
and Jeff Clune. Improving exploration in evolution strategies for deep reinforcement learning
via a population of novelty-seeking agents. In Advances in Neural Information Processing
Systems, pages 5032–5043, 2018. 223
164. Rémi Coulom. Efficient selectivity and backup operators in Monte-Carlo Tree Search. In
International Conference on Computers and Games, pages 72–83. Springer, 2006. 164, 170, 187
165. Rémi Coulom. Monte-Carlo tree search in Crazy Stone. In Proceedings Game Programming
Workshop, Tokyo, Japan, pages 74–75, 2007. 167, 170
166. Rémi Coulom. The Monte-Carlo revolution in Go. In The Japanese-French Frontiers of Science
Symposium (JFFoS 2008), Roscoff, France, 2009. 170, 179
167. Erwin Coumans and Yunfei Bai. PyBullet, a Python module for physics simulation for games,
robotics and machine learning. http://pybullet.org, 2016–2019. 92, 93, 116
168. Gabriela Csurka. Domain adaptation for visual applications: A comprehensive survey. In
Domain Adaptation in Computer Vision Applications, Advances in Computer Vision and
Pattern Recognition, pages 1–35. Springer, 2017. 252, 268
169. Joseph Culberson. Sokoban is PSPACE-complete. Technical report, University of Alberta,
1997. 27, 56
170. Joseph C Culberson and Jonathan Schaeffer. Pattern databases. Computational Intelligence,
14(3):318–334, 1998. 167
171. Ken Currie and Austin Tate. O-plan: the open planning architecture. Artificial Intelligence,
52(1):49–86, 1991. 231
172. Wojciech Marian Czarnecki, Gauthier Gidel, Brendan Tracey, Karl Tuyls, Shayegan Omid-
shafiei, David Balduzzi, and Max Jaderberg. Real world games look like spinning tops. In
Advances in Neural Information Processing Systems, 2020. 150
173. Kamil Czarnogórski. Monte Carlo Tree Search beginners guide. https://int8.io/monte-
carlo-tree-search-beginners-guide/, 2018. 166, 167
174. Will Dabney, Zeb Kurth-Nelson, Naoshige Uchida, Clara Kwon Starkweather, Demis Hassabis,
Rémi Munos, and Matthew Botvinick. A distributional code for value in dopamine-based
reinforcement learning. Nature, pages 1–5, 2020. 83
175. Allan Dafoe, Edward Hughes, Yoram Bachrach, Tantum Collins, Kevin R McKee, Joel Z
Leibo, Kate Larson, and Thore Graepel. Open problems in cooperative AI. arXiv preprint
arXiv:2012.08630, 2020. 206
176. Zhongxiang Dai, Yizhou Chen, Bryan Kian Hsiang Low, Patrick Jaillet, and Teck-Hua Ho.
R2-B2: recursive reasoning-based Bayesian optimization for no-regret learning in games. In
International Conference on Machine Learning, pages 2291–2301. PMLR, 2020. 208
177. Christian Daniel, Herke Van Hoof, Jan Peters, and Gerhard Neumann. Probabilistic inference
for determining options in reinforcement learning. Machine Learning, 104(2):337–357, 2016.
241
178. Shubhomoy Das, Weng-Keen Wong, Thomas Dietterich, Alan Fern, and Andrew Emmott.
Incorporating expert feedback into active anomaly discovery. In 2016 IEEE 16th International
Conference on Data Mining (ICDM), pages 853–858. IEEE, 2016. 177
179. Hal Daumé III. Frustratingly easy domain adaptation. In ACL 2007, Proceedings of the 45th
Annual Meeting of the Association for Computational Linguistics, June 23-30, 2007, Prague,
2007. 252
180. Morton D Davis. Game Theory: a Nontechnical Introduction. Courier Corporation, 2012. 223
181. Richard Dawkins and Nicola Davis. The Selfish Gene. Macat Library, 2017. 209
182. Peter Dayan and Geoffrey E Hinton. Feudal reinforcement learning. In Advances in Neural
Information Processing Systems, pages 271–278, 1993. 231, 233
183. Pieter-Tjerk De Boer, Dirk P Kroese, Shie Mannor, and Reuven Y Rubinstein. A tutorial on
the cross-entropy method. Annals of Operations Research, 134(1):19–67, 2005. 133
184. Luis M De Campos, Juan M Fernandez-Luna, José A Gámez, and José M Puerta. Ant colony
optimization for learning Bayesian networks. International Journal of Approximate Reasoning,
31(3):291–311, 2002. 278
185. Dave De Jonge, Tim Baarslag, Reyhan Aydoğan, Catholijn Jonker, Katsuhide Fujita, and
Takayuki Ito. The challenge of negotiation in the game of diplomacy. In International
Conference on Agreement Technologies, pages 100–114. Springer, 2018. 151, 208
186. Joery A. de Vries, Ken S. Voskuil, Thomas M. Moerland, and Aske Plaat. Visualizing MuZero
models. arXiv preprint arXiv:2102.12924, 2021. 140, 142, 146, 275, 333
187. Rina Dechter. Learning while searching in constraint-satisfaction problems. In AAAI, 1986. 308
188. Marc Deisenroth and Carl E Rasmussen. PILCO: a model-based and data-efficient approach
to policy search. In Proceedings of the 28th International Conference on Machine Learning
(ICML-11), pages 465–472, 2011. 128, 138, 143
189. Marc Peter Deisenroth, Dieter Fox, and Carl Edward Rasmussen. Gaussian processes for
data-efficient learning in robotics and control. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 37(2):408–423, 2013. 128, 143
190. Marc Peter Deisenroth, Gerhard Neumann, and Jan Peters. A survey on policy search for
robotics. In Foundations and Trends in Robotics 2, pages 1–142. Now publishers, 2013. 143,
144
191. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-
scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern
Recognition, pages 248–255. IEEE, 2009. 64, 276, 323, 324
192. Mohit Deshpande. Deep RL policy methods. https://mohitd.github.io/2019/01/20/deep-rl-policy-methods.html. 95
193. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training
of deep bidirectional transformers for language understanding. In Proceedings of the 2019
Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, NAACL-HLT 2019, Minneapolis, 2019. 246, 262, 263, 276, 307,
308, 320
194. Prafulla Dhariwal, Christopher Hesse, Oleg Klimov, Alex Nichol, Matthias Plappert, Alec
Radford, John Schulman, Szymon Sidor, Yuhuai Wu, and Peter Zhokhov. OpenAI baselines.
https://github.com/openai/baselines, 2017. 333
195. Thomas G Dietterich. The MAXQ method for hierarchical reinforcement learning. In
International Conference on Machine Learning, volume 98, pages 118–126, 1998. 241
196. Thomas G Dietterich. Hierarchical reinforcement learning with the MAXQ value function
decomposition. Journal of Artificial Intelligence Research, 13:227–303, 2000. 27, 56, 231, 232,
241
197. Chuong B Do and Andrew Y Ng. Transfer learning for text classification. Advances in Neural
Information Processing Systems, 18:299–306, 2005. 268
198. Thang Doan, Joao Monteiro, Isabela Albuquerque, Bogdan Mazoure, Audrey Durand, Joelle
Pineau, and R Devon Hjelm. On-line adaptative curriculum learning for GANs. In Proceedings
of the AAAI Conference on Artificial Intelligence, volume 33, pages 3470–3477, 2019. 178, 188
199. Andreas Doerr, Christian Daniel, Martin Schiegg, Duy Nguyen-Tuong, Stefan Schaal, Marc
Toussaint, and Sebastian Trimpe. Probabilistic recurrent state-space models. arXiv preprint
arXiv:1801.10395, 2018. 131
200. Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor
Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In
International Conference on Machine Learning, pages 647–655. PMLR, 2014. 245, 268
201. Hao Dong, Zihan Ding, and Shanghang Zhang. Deep Reinforcement Learning. Springer, 2020.
86
202. Jack J Dongarra, Jeremy Du Croz, Sven Hammarling, and Richard J Hanson. An extended set
of FORTRAN basic linear algebra subprograms. ACM Transactions on Mathematical Software,
14(1):1–17, 1988. 324
203. Christian Donninger. Null move and deep search. ICGA Journal, 16(3):137–143, 1993. 163
204. Dorit Dor and Uri Zwick. Sokoban and other motion planning problems. Computational
Geometry, 13(4):215–228, 1999. 27, 56
205. Derek Doran, Sarah Schulz, and Tarek R Besold. What does explainable AI really mean? A
new conceptualization of perspectives. arXiv preprint arXiv:1710.00794, 2017. 278
206. Marco Dorigo. Optimization, learning and natural algorithms. PhD Thesis, Politecnico di
Milano, 1992. 212
207. Marco Dorigo and Mauro Birattari. Swarm intelligence. Scholarpedia, 2(9):1462, 2007. 203,
223
208. Marco Dorigo, Mauro Birattari, and Thomas Stutzle. Ant colony optimization. IEEE Compu-
tational Intelligence Magazine, 1(4):28–39, 2006. 212, 223, 224
209. Marco Dorigo and Luca Maria Gambardella. Ant colony system: a cooperative learning
approach to the traveling salesman problem. IEEE Transactions on Evolutionary Computation,
1(1):53–66, 1997. 11, 212
210. Norman R Draper and Harry Smith. Applied Regression Analysis, volume 326. John Wiley &
Sons, 1998. 14
211. Yuntao Du, Zhiwen Tan, Qian Chen, Xiaowen Zhang, Yirong Yao, and Chongjun Wang. Dual
adversarial domain adaptation. arXiv preprint arXiv:2001.00153, 2020. 252
212. Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking
deep reinforcement learning for continuous control. In International Conference on Machine
Learning, pages 1329–1338, 2016. 114, 115, 116
213. Yan Duan, John Schulman, Xi Chen, Peter L Bartlett, Ilya Sutskever, and Pieter Abbeel. RL²:
Fast reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779,
2016. 188, 254, 256, 257, 265, 268, 275
214. Ishan P Durugkar, Clemens Rosenbaum, Stefan Dernbach, and Sridhar Mahadevan. Deep
reinforcement learning with macro-actions. arXiv preprint arXiv:1606.04615, 2016. 241
215. Werner Duvaud and Aurèle Hainaut. MuZero general: Open reimplementation of MuZero.
https://github.com/werner-duvaud/muzero-general, 2019. 146, 275
216. Zach Dwiel, Madhavun Candadai, Mariano Phielipp, and Arjun K Bansal. Hierarchical policy
learning is sensitive to goal space design. arXiv preprint arXiv:1905.01537, 2019. 232
217. Russell C Eberhart, Yuhui Shi, and James Kennedy. Swarm Intelligence. Elsevier, 2001. 223
218. Frederik Ebert, Chelsea Finn, Sudeep Dasari, Annie Xie, Alex Lee, and Sergey Levine. Visual
foresight: Model-based deep reinforcement learning for vision-based robotic control. arXiv
preprint arXiv:1812.00568, 2018. 132
219. Tom Eccles, Edward Hughes, János Kramár, Steven Wheelwright, and Joel Z Leibo. Learning
reciprocity in complex sequential social dilemmas. arXiv preprint arXiv:1903.08082, 2019. 208
220. Adrien Lucas Ecoffet. An intuitive explanation of policy gradient. https://towardsdatascience.com/an-intuitive-explanation-of-policy-gradient-part-1-reinforce-aa4392cbfd3c. 95
221. Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O Stanley, and Jeff Clune. First return,
then explore. Nature, 590(7847):580–586, 2021. 238, 277, 333
222. Harrison Edwards and Amos Storkey. Towards a neural statistician. In International Confer-
ence on Learning Representations, 2017. 268
223. Agoston E Eiben and Jim E Smith. What is an evolutionary algorithm? In Introduction to
Evolutionary Computing, pages 25–48. Springer, 2015. 223
224. Jeffrey L Elman. Learning and development in neural networks: The importance of starting
small. Cognition, 48(1):71–99, 1993. 176
225. Arpad E Elo. The Rating of Chessplayers, Past and Present. Arco Pub., 1978. 180
226. Markus Enzenberger, Martin Muller, Broderick Arneson, and Richard Segal. Fuego—an
open-source framework for board games and Go engine based on Monte Carlo tree search.
IEEE Transactions on Computational Intelligence and AI in Games, 2(4):259–270, 2010. 179, 188
227. Tom Erez, Yuval Tassa, and Emanuel Todorov. Simulation tools for model-based robotics:
Comparison of Bullet, Havok, MuJoCo, Ode and Physx. In 2015 IEEE International Conference
on Robotics and Automation (ICRA), pages 4397–4404. IEEE, 2015. 93
228. Damien Ernst, Pierre Geurts, and Louis Wehenkel. Tree-based batch mode reinforcement
learning. Journal of Machine Learning Research, 6:503–556, April 2005. 87
229. Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Vlad Mnih, Tom Ward, Yotam
Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, and Koray Kavukcuoglu. IMPALA:
Scalable distributed deep-RL with importance weighted actor-learner architectures. In
International Conference on Machine Learning, pages 1407–1416. PMLR, 2018. 265
230. Eyal Even-Dar, Yishay Mansour, and Peter Bartlett. Learning rates for Q-learning. Journal of
Machine Learning Research, 5(1), 2003. 54
231. Richard Everett and Stephen Roberts. Learning against non-stationary agents with opponent
modelling and deep reinforcement learning. In 2018 AAAI Spring Symposium Series, 2018.
208
232. Theodoros Evgeniou and Massimiliano Pontil. Regularized multi-task learning. In Proceedings
of the tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,
pages 109–117. ACM, 2004. 251
233. Linxi Fan, Yuke Zhu, Jiren Zhu, Zihua Liu, Orien Zeng, Anchit Gupta, Joan Creus-Costa,
Silvio Savarese, and Li Fei-Fei. Surreal: Open-source reinforcement learning framework and
robot manipulation benchmark. In Conference on Robot Learning, pages 767–782, 2018. 113
234. Jesse Farebrother, Marlos C Machado, and Michael Bowling. Generalization and regularization
in DQN. arXiv preprint arXiv:1810.00123, 2018. 278
235. Gregory Farquhar, Tim Rocktäschel, Maximilian Igl, and SA Whiteson. TreeQN and ATreeC:
Differentiable tree planning for deep reinforcement learning. In International Conference on
Learning Representations, 2018. 57, 135, 138, 144
236. Li Fei-Fei, Jia Deng, and Kai Li. Imagenet: Constructing a large-scale image database. Journal
of Vision, 9(8):1037–1037, 2009. 64, 262, 276, 323
237. Vladimir Feinberg, Alvin Wan, Ion Stoica, Michael I Jordan, Joseph E Gonzalez, and Sergey
Levine. Model-based value estimation for efficient model-free reinforcement learning. arXiv
preprint arXiv:1803.00101, 2018. 132, 138, 144, 207
238. Dieqiao Feng, Carla P Gomes, and Bart Selman. Solving hard AI planning instances using
curriculum-driven deep reinforcement learning. arXiv preprint arXiv:2006.02689, 2020. 57,
144, 177, 178, 188, 275
239. Santiago Fernández, Alex Graves, and Jürgen Schmidhuber. An application of recurrent
neural networks to discriminative keyword spotting. In International Conference on Artificial
Neural Networks, pages 220–229. Springer, 2007. 313
240. Richard E Fikes, Peter E Hart, and Nils J Nilsson. Learning and executing generalized robot
plans. Artificial Intelligence, 3:251–288, 1972. 231, 232
241. Richard E Fikes and Nils J Nilsson. STRIPS: A new approach to the application of theorem
proving to problem solving. Artificial Intelligence, 2(3-4):189–208, 1971. 9
242. Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-Agnostic Meta-Learning for fast
adaptation of deep networks. In International Conference on Machine Learning, pages 1126–
1135. PMLR, 2017. 254, 257, 258, 259, 264, 268, 333
243. Chelsea Finn and Sergey Levine. Deep visual foresight for planning robot motion. In 2017
IEEE International Conference on Robotics and Automation (ICRA), pages 2786–2793. IEEE,
2017. 132, 138, 144
244. Chelsea Finn, Aravind Rajeswaran, Sham Kakade, and Sergey Levine. Online meta-learning.
In International Conference on Machine Learning, pages 1920–1930. PMLR, 2019. 268
245. Chelsea Finn, Kelvin Xu, and Sergey Levine. Probabilistic Model-Agnostic Meta-Learning.
In Advances in Neural Information Processing Systems, pages 9516–9527, 2018. 268
246. Yannis Flet-Berliac. The promise of hierarchical reinforcement learning. https://thegradient.pub/the-promise-of-hierarchical-reinforcement-learning/, March 2019. 227, 228, 276
247. Carlos Florensa, David Held, Xinyang Geng, and Pieter Abbeel. Automatic goal generation
for reinforcement learning agents. In International Conference on Machine Learning, pages
1515–1528. PMLR, 2018. 178, 188, 241
248. Michael J Flynn. Some computer organizations and their effectiveness. IEEE Transactions on
Computers, 100(9):948–960, 1972. 325
249. Jakob Foerster, Gregory Farquhar, Triantafyllos Afouras, Nantas Nardelli, and Shimon White-
son. Counterfactual multi-agent policy gradients. In Proceedings of the AAAI Conference on
Artificial Intelligence, volume 32, 2018. 207
250. Jakob N Foerster, Richard Y Chen, Maruan Al-Shedivat, Shimon Whiteson, Pieter Abbeel, and
Igor Mordatch. Learning with opponent-learning awareness. arXiv preprint arXiv:1709.04326,
2017. 208
251. David B Fogel. An introduction to simulated evolutionary optimization. IEEE Transactions
on Neural Networks, 5(1):3–14, 1994. 11, 202
252. David B Fogel, Timothy J Hays, Sarah L Hahn, and James Quon. Further evolution of a
self-learning chess program. In Computational Intelligence in Games, 2005. 162
253. Meire Fortunato, Mohammad Gheshlaghi Azar, Bilal Piot, Jacob Menick, Ian Osband, Alex
Graves, Vlad Mnih, Remi Munos, Demis Hassabis, Olivier Pietquin, Charles Blundell, and
Shane Legg. Noisy networks for exploration. In International Conference on Learning
Representations, 2018. 80, 83
254. Vincent François-Lavet, Peter Henderson, Riashat Islam, Marc G Bellemare, and Joelle Pineau.
An introduction to deep reinforcement learning. Foundations and Trends in Machine Learning,
11(3-4):219–354, 2018. 25, 28, 59, 95
255. Kevin Frans, Jonathan Ho, Xi Chen, Pieter Abbeel, and John Schulman. Meta learning shared
hierarchies. In International Conference on Learning Representations, 2018. 231, 241
256. Nicholas Frosst and Geoffrey Hinton. Distilling a neural network into a soft decision tree. In
Proceedings of the First International Workshop on Comprehensibility and Explanation in AI
and ML, 2017. 278
257. Vittorio Gallese and Alvin Goldman. Mirror neurons and the simulation theory of mind-
reading. Trends in Cognitive Sciences, 2(12):493–501, 1998. 193, 211, 223
258. Sam Ganzfried and Tuomas Sandholm. Game theory-based opponent modeling in large
imperfect-information games. In The 10th International Conference on Autonomous Agents
and Multiagent Systems, volume 2, pages 533–540, 2011. 161, 223
259. Sam Ganzfried and Tuomas Sandholm. Endgame solving in large imperfect-information
games. In Proceedings of the 2015 International Conference on Autonomous Agents and Multia-
gent Systems, pages 37–45, 2015. 204
260. The garage contributors. Garage: A toolkit for reproducible reinforcement learning research.
https://github.com/rlworkgroup/garage, 2019. 114, 266, 332
261. Carlos E Garcia, David M Prett, and Manfred Morari. Model predictive control: Theory and
practice—a survey. Automatica, 25(3):335–348, 1989. 132, 144
262. Victor Garcia and Joan Bruna. Few-shot learning with graph neural networks. In International
Conference on Learning Representations, 2017. 268
263. Marta Garnelo, Dan Rosenbaum, Christopher Maddison, Tiago Ramalho, David Saxton,
Murray Shanahan, Yee Whye Teh, Danilo Rezende, and SM Ali Eslami. Conditional neural
processes. In International Conference on Machine Learning, pages 1704–1713. PMLR, 2018.
268
264. Alessandro Gasparetto, Paolo Boscariol, Albano Lanzutti, and Renato Vidoni. Path planning
and trajectory planning algorithms: A general overview. In Motion and Operation Planning
of Robotic Systems, pages 3–27. Springer, 2015. 27, 56
265. Michael Gelfond and Vladimir Lifschitz. Action languages. Electronic Transactions on Artificial
Intelligence, 2(3–4):193–210, 1998. 232
266. Sylvain Gelly, Levente Kocsis, Marc Schoenauer, Michele Sebag, David Silver, Csaba
Szepesvári, and Olivier Teytaud. The grand challenge of computer Go: Monte Carlo tree
search and extensions. Communications of the ACM, 55(3):106–113, 2012. 179
267. Sylvain Gelly and David Silver. Achieving master level play in 9 × 9 computer Go. In AAAI,
volume 8, pages 1537–1540, 2008. 179, 188
268. Sylvain Gelly, Yizao Wang, and Olivier Teytaud. Modification of UCT with patterns in
Monte-Carlo Go. Technical Report RR-6062, INRIA, 2006. 167, 179, 188
269. Aurélien Géron. Hands-on machine learning with Scikit-Learn and TensorFlow: concepts, tools,
and techniques to build intelligent systems. O’Reilly Media, Inc., 2019. 77, 87, 328
270. Felix A Gers, Jürgen Schmidhuber, and Fred Cummins. Learning to forget: Continual
prediction with LSTM. In Ninth International Conference on Artificial Neural Networks ICANN
99. IET, 1999. 315
271. Malik Ghallab, Dana Nau, and Paolo Traverso. Automated Planning: theory and practice.
Elsevier, 2004. 231
272. Mohammad Ghavamzadeh, Sridhar Mahadevan, and Rajbala Makar. Hierarchical multi-agent
reinforcement learning. Autonomous Agents and Multi-Agent Systems, 13(2):197–229, 2006.
238
273. Dibya Ghosh, Jad Rahme, Aviral Kumar, Amy Zhang, Ryan P Adams, and Sergey Levine.
Why generalization in RL is difficult: Epistemic POMDPs and implicit partial observability.
Advances in Neural Information Processing Systems, 34, 2021. 278
274. Gerd Gigerenzer and Daniel G Goldstein. Reasoning the fast and frugal way: models of
bounded rationality. Psychological review, 103(4):650, 1996. 209, 223
275. Thomas Gilovich, Dale Griffin, and Daniel Kahneman. Heuristics and Biases: The Psychology
of Intuitive Judgment. Cambridge University Press, 2002. 209
276. Andrew Gilpin and Tuomas Sandholm. A competitive Texas Hold’em poker player via
automated abstraction and real-time equilibrium computation. In Proceedings of the National
Conference on Artificial Intelligence, volume 21, page 1007, 2006. 223
277. Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for
accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition, pages 580–587, 2014. 245
278. John C Gittins. Bandit processes and dynamic allocation indices. Journal of the Royal
Statistical Society: Series B (Methodological), 41(2):148–164, 1979. 47
279. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, Cambridge,
2016. 11, 69, 86, 303, 304, 306, 307, 310, 313, 315, 316, 320, 321
280. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil
Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in
Neural Information Processing Systems, pages 2672–2680, 2014. 316, 318
281. Geoffrey J Gordon. Stable function approximation in dynamic programming. In Machine
Learning Proceedings 1995, pages 261–268. Elsevier, 1995. 87
282. Geoffrey J Gordon. Approximate solutions to Markov decision processes. Carnegie Mellon
University, 1999. 66, 73
283. Tobias Graf and Marco Platzner. Adaptive playouts in Monte-Carlo tree search with policy-
gradient reinforcement learning. In Advances in Computer Games, pages 1–11. Springer, 2015.
188
284. Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, and Thomas Griffiths. Recasting
gradient-based meta-learning as hierarchical bayes. In International Conference on Learning
Representations, 2018. 268
285. Alex Graves, Santiago Fernández, and Jürgen Schmidhuber. Bidirectional LSTM networks for
improved phoneme classification and recognition. In International Conference on Artificial
Neural Networks, pages 799–804. Springer, 2005. 315
286. Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep
recurrent neural networks. In 2013 IEEE International Conference on Acoustics, Speech and
Signal Processing, pages 6645–6649. IEEE, 2013. 307
287. Klaus Greff, Rupesh K Srivastava, Jan Koutník, Bas R Steunebrink, and Jürgen Schmidhuber.
LSTM: A search space odyssey. IEEE Transactions on Neural Networks and Learning Systems,
28(10):2222–2232, 2017. 315
288. Jean-Bastien Grill, Florent Altché, Yunhao Tang, Thomas Hubert, Michal Valko, Ioannis
Antonoglou, and Rémi Munos. Monte-carlo tree search as regularized policy optimization.
In International Conference on Machine Learning, pages 3769–3778. PMLR, 2020. 136, 140, 170
289. Christopher Grimm, André Barreto, Satinder Singh, and David Silver. The value equiva-
lence principle for model-based reinforcement learning. In Advances in Neural Information
Processing Systems, 2020. 129, 140, 142
290. Nathan Grinsztajn, Johan Ferret, Olivier Pietquin, Philippe Preux, and Matthieu Geist. There
is no turning back: A self-supervised approach for reversibility-aware reinforcement learning.
arXiv preprint arXiv:2106.04480, 2021. 242
291. Sven Gronauer and Klaus Diepold. Multi-agent deep reinforcement learning: a survey.
Artificial Intelligence Review, pages 1–49, 2021. 198, 223
292. Ivo Grondman, Lucian Busoniu, Gabriel AD Lopes, and Robert Babuska. A survey of actor-
critic reinforcement learning: Standard and natural policy gradients. IEEE Transactions on
Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42(6):1291–1307, 2012. 83,
99
293. Peter D Grünwald. The minimum description length principle. MIT press, 2007. 15, 319
294. Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare,
and Remi Munos. The reactor: A fast and sample-efficient actor-critic agent for reinforcement
learning. In International Conference on Learning Representations, 2018. 85
295. Shixiang Gu, Timothy Lillicrap, Ilya Sutskever, and Sergey Levine. Continuous deep Q-
learning with model-based acceleration. In International Conference on Machine Learning,
pages 2829–2838, 2016. 132, 138, 144
296. Carlos Guestrin, Daphne Koller, and Ronald Parr. Multiagent planning with factored MDPs.
In Advances in Neural Information Processing Systems, volume 1, pages 1523–1530, 2001. 207
297. Arthur Guez, Mehdi Mirza, Karol Gregor, Rishabh Kabra, Sébastien Racanière, Theophane
Weber, David Raposo, Adam Santoro, Laurent Orseau, Tom Eccles, Greg Wayne, David Silver,
and Timothy P. Lillicrap. An investigation of model-free planning. In International Conference
on Machine Learning, pages 2464–2473, 2019. 57, 138, 144
298. Caglar Gulcehre, Ziyu Wang, Alexander Novikov, Tom Le Paine, Sergio Gómez Colmenarejo,
Konrad Zolna, Rishabh Agarwal, Josh Merel, Daniel Mankowitz, Cosmin Paduraru, et al. RL
unplugged: Benchmarks for offline reinforcement learning. arXiv preprint arXiv:2006.13888,
2020. 332
299. David Gunning. Explainable artificial intelligence (XAI). Defense Advanced Research Projects
Agency (DARPA), 2, 2017. 278
300. Xifeng Guo, Wei Chen, and Jianping Yin. A simple approach for unsupervised domain
adaptation. In 2016 23rd International Conference on Pattern Recognition (ICPR), pages 1566–
1570. IEEE, 2016. 252
301. Abhishek Gupta, Russell Mendonca, YuXuan Liu, Pieter Abbeel, and Sergey Levine. Meta-
reinforcement learning of structured exploration strategies. In Advances in Neural Information
Processing Systems, pages 5307–5316, 2018. 268
302. David Ha and Jürgen Schmidhuber. Recurrent world models facilitate policy evolution. In
Advances in Neural Information Processing Systems, pages 2450–2462, 2018. 131
303. David Ha and Jürgen Schmidhuber. World models. arXiv preprint arXiv:1803.10122, 2018.
129, 131, 138, 143, 144, 278
304. Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning
with deep energy-based policies. In International Conference on Machine Learning, pages
1352–1361. PMLR, 2017. 107
305. Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy
maximum entropy deep reinforcement learning with a stochastic actor. In International
Conference on Machine Learning, pages 1861–1870. PMLR, 2018. 94, 107, 140, 272
306. Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan,
Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Soft actor-
critic algorithms and applications. arXiv preprint arXiv:1812.05905, 2018. 107
307. Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to con-
trol: Learning behaviors by latent imagination. In International Conference on Learning
Representations, 2020. 129, 131, 138, 140, 144
308. Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee,
and James Davidson. Learning latent dynamics for planning from pixels. In International
Conference on Machine Learning, pages 2555–2565, 2019. 129, 131, 133, 138, 140, 141, 144, 333
309. Danijar Hafner, Timothy Lillicrap, Mohammad Norouzi, and Jimmy Ba. Mastering atari with
discrete world models. In International Conference on Learning Representations, 2021. 129,
131, 138, 139, 140, 144, 333
310. Roland Hafner and Martin Riedmiller. Reinforcement learning in feedback control. Machine
Learning, 84(1-2):137–169, 2011. 108
311. Dongge Han, Wendelin Boehmer, Michael Wooldridge, and Alex Rogers. Multi-agent hi-
erarchical reinforcement learning with dynamic termination. In Pacific Rim International
Conference on Artificial Intelligence, pages 80–92. Springer, 2019. 238
312. Dongge Han, Chris Xiaoxuan Lu, Tomasz Michalak, and Michael Wooldridge. Multiagent
model-based credit assignment for continuous control, 2021. 207
313. Hado V Hasselt. Double Q-learning. In Advances in Neural Information Processing Systems,
pages 2613–2621, 2010. 82
314. Matthew John Hausknecht. Cooperation and Communication in Multiagent Deep Reinforcement
Learning. PhD thesis, University of Texas at Austin, 2016. 208
315. Milos Hauskrecht, Nicolas Meuleau, Leslie Pack Kaelbling, Thomas L Dean, and Craig
Boutilier. Hierarchical solution of Markov decision processes using macro-actions. In UAI
’98: Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, University
of Wisconsin Business School, Madison, Wisconsin, 1998. 230, 241
316. Conor F. Hayes, Roxana Radulescu, Eugenio Bargiacchi, Johan Källström, Matthew Mac-
farlane, Mathieu Reymond, Timothy Verstraeten, Luisa M. Zintgraf, Richard Dazeley,
Fredrik Heintz, Enda Howley, Athirai A. Irissappane, Patrick Mannion, Ann Nowé, Gabriel
de Oliveira Ramos, Marcello Restelli, Peter Vamplew, and Diederik M. Roijers. A practical
guide to multi-objective reinforcement learning and planning. arXiv preprint arXiv:2103.09568,
2021. 198
317. Simon Haykin. Neural Networks: a Comprehensive Foundation. Prentice Hall, 1994. 11
318. Ryan B Hayward and Bjarne Toft. Hex: The Full Story. CRC Press, 2019. 184
319. He He, Jordan Boyd-Graber, Kevin Kwok, and Hal Daumé III. Opponent modeling in deep
reinforcement learning. In International Conference on Machine Learning, pages 1804–1813.
PMLR, 2016. 161, 208
320. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image
recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
pages 770–778, 2016. 172, 316
321. Robert A Hearn and Erik D Demaine. Games, Puzzles, and Computation. CRC Press, 2009.
27, 56
322. David Heckerman, Dan Geiger, and David M Chickering. Learning Bayesian networks: The
combination of knowledge and statistical data. Machine Learning, 20(3):197–243, 1995. 278
323. Nicolas Heess, David Silver, and Yee Whye Teh. Actor-critic reinforcement learning with
energy-based policies. In European Workshop on Reinforcement Learning, pages 45–58, 2013.
74, 87
324. Nicolas Heess, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval
Tassa, Tom Erez, Ziyu Wang, SM Eslami, Martin Riedmiller, and David Silver. Emergence of
locomotion behaviours in rich environments. arXiv preprint arXiv:1707.02286, 2017. 90, 112,
113, 212
325. Nicolas Heess, Gregory Wayne, David Silver, Timothy Lillicrap, Tom Erez, and Yuval Tassa.
Learning continuous control policies by stochastic value gradients. In Advances in Neural
Information Processing Systems, pages 2944–2952, 2015. 138, 144
326. Ernst A Heinz. New self-play results in computer chess. In International Conference on
Computers and Games, pages 262–276. Springer, 2000. 188
327. Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David
Meger. Deep reinforcement learning that matters. In Thirty-Second AAAI Conference on
Artificial Intelligence, 2018. 87, 114, 115, 116, 137
328. Mark Hendrikx, Sebastiaan Meijer, Joeri Van Der Velden, and Alexandru Iosup. Procedural
content generation for games: A survey. ACM Transactions on Multimedia Computing,
Communications, and Applications, 9(1):1–22, 2013. 57
329. John L Hennessy and David A Patterson. Computer Architecture: a Quantitative Approach.
Elsevier, 2017. 325
330. Joseph Henrich, Robert Boyd, and Peter J Richerson. Five misunderstandings about cultural
evolution. Human Nature, 19(2):119–137, 2008. 223
331. Pablo Hernandez-Leal, Michael Kaisers, Tim Baarslag, and Enrique Munoz de Cote. A
survey of learning in multiagent environments: Dealing with non-stationarity. arXiv preprint
arXiv:1707.09183, 2017. 193, 196, 201, 223
332. Pablo Hernandez-Leal, Bilal Kartal, and Matthew E Taylor. A survey and critique of multiagent
deep reinforcement learning. Autonomous Agents and Multi-Agent Systems, 33(6):750–797,
2019. 198, 223
333. Matteo Hessel, Ivo Danihelka, Fabio Viola, Arthur Guez, Simon Schmitt, Laurent Sifre,
Theophane Weber, David Silver, and Hado van Hasselt. Muesli: Combining improvements in
policy optimization. In International Conference on Machine Learning, pages 4214–4226, 2021.
140, 275
334. Matteo Hessel, Joseph Modayil, Hado Van Hasselt, Tom Schaul, Georg Ostrovski, Will
Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver. Rainbow: Combining
improvements in deep reinforcement learning. In AAAI, pages 3215–3222, 2018. 80, 81, 82, 87
335. Francis Heylighen. What makes a Meme Successful? Selection Criteria for Cultural Evolution.
Association Internationale de Cybernetique, 1998. 223
336. Irina Higgins, Arka Pal, Andrei Rusu, Loic Matthey, Christopher Burgess, Alexander Pritzel,
Matthew Botvinick, Charles Blundell, and Alexander Lerchner. Darla: Improving zero-shot
transfer in reinforcement learning. In International Conference on Machine Learning, pages
1480–1490. PMLR, 2017. 268
337. Ashley Hill, Antonin Raffin, Maximilian Ernestus, Adam Gleave, Anssi Kanervisto, Rene
Traore, Prafulla Dhariwal, Christopher Hesse, Oleg Klimov, Alex Nichol, Matthias Plappert,
Alec Radford, John Schulman, Szymon Sidor, and Yuhuai Wu. Stable baselines. https://github.com/hill-a/stable-baselines, 2018. 333
338. W Daniel Hillis. New computer architectures and their relationship to physics or why
computer science is no good. International Journal of Theoretical Physics, 21(3-4):255–262,
1982. 325
339. W Daniel Hillis and Lewis W Tucker. The CM-5 connection machine: A scalable supercom-
puter. Communications of the ACM, 36(11):30–41, 1993. 325
340. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network.
arXiv preprint arXiv:1503.02531, 2015. 278
341. Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with
neural networks. Science, 313(5786):504–507, 2006. 319
342. Geoffrey E Hinton and Terrence Joseph Sejnowski, editors. Unsupervised Learning: Founda-
tions of Neural Computation. MIT press, 1999. 15
343. Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhut-
dinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv
preprint arXiv:1207.0580, 2012. 321
344. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation,
9(8):1735–1780, 1997. 313, 315
345. John H Holland. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. University of Michigan Press, 1975. 47, 202
346. John H Holland. Genetic algorithms. Scientific American, 267(1):66–73, 1992. 11
347. Bert Hölldobler and Edward O Wilson. The Superorganism: the Beauty, Elegance, and
Strangeness of Insect Societies. WW Norton & Company, 2009. 191
348. John J Hopfield. Neural networks and physical systems with emergent collective compu-
tational abilities. Proceedings of the National Academy of Sciences, 79(8):2554–2558, 1982.
313
349. Dan Horgan, John Quan, David Budden, Gabriel Barth-Maron, Matteo Hessel, Hado Van Has-
selt, and David Silver. Distributed prioritized experience replay. In International Conference
on Learning Representations, 2018. 221
350. Timothy Hospedales, Antreas Antoniou, Paul Micaelli, and Amos Storkey. Meta-learning in
neural networks: A survey. arXiv preprint arXiv:2004.05439, 2020. 247, 248, 253, 255, 267, 268
351. Ronald A Howard. Dynamic programming and Markov processes. New York: John Wiley,
1964. 28
352. Chloe Ching-Yun Hsu, Celestine Mendler-Dünner, and Moritz Hardt. Revisiting design
choices in proximal policy optimization. arXiv preprint arXiv:2009.10897, 2020. 106
353. Feng-Hsiung Hsu. Behind Deep Blue: Building the computer that defeated the world chess
champion. Princeton University Press, 2004. 64
354. Feng-Hsiung Hsu, Thomas Anantharaman, Murray Campbell, and Andreas Nowatzyk. A
grandmaster chess machine. Scientific American, 263(4):44–51, 1990. 69
355. R Lily Hu, Caiming Xiong, and Richard Socher. Zero-shot image classification guided by
natural language descriptions of classes: A meta-learning approach. In Advances in Neural
Information Processing Systems, 2018. 268
356. David H Hubel and Torsten N Wiesel. Shape and arrangement of columns in cat’s striate
cortex. The Journal of Physiology, 165(3):559–568, 1963. 310
357. David H Hubel and Torsten N Wiesel. Receptive fields and functional architecture of monkey
striate cortex. The Journal of Physiology, 195(1):215–243, 1968. 310
358. Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Mohammadamin Barekatain,
Simon Schmitt, and David Silver. Learning and planning in complex action spaces. In
International Conference on Machine Learning, pages 4476–4486, 2021. 140, 275
359. Minyoung Huh, Pulkit Agrawal, and Alexei A Efros. What makes ImageNet good for transfer
learning? arXiv preprint arXiv:1608.08614, 2016. 245, 268
360. Jonathan Hui. RL—DQN Deep Q-network https://medium.com/@jonathan_hui/rl-dqn-
deep-q-network-e207751f7ae4. Medium post. 84
361. Jonathan Hui. Model-based reinforcement learning. https://medium.com/@jonathan_hui/rl-model-based-reinforcement-learning-3c2b6f0aa323. Medium post, 2018. 129
362. Mike Huisman, Jan van Rijn, and Aske Plaat. Metalearning for deep neural networks. In
Pavel Brazdil et al., editors, Metalearning: Applications to data mining. Springer, 2022. 247,
258, 259, 267, 268
363. Mike Huisman, Jan N. van Rijn, and Aske Plaat. A survey of deep meta-learning. Artificial
Intelligence Review, 2021. 248, 253, 254, 255, 257, 258, 267, 268, 379
364. Matthew Hutson. Artificial Intelligence faces reproducibility crisis. Science, 359:725–726,
2018. 87
365. Frank Hutter, Holger H Hoos, and Kevin Leyton-Brown. Sequential model-based optimization
for general algorithm configuration. In International Conference on Learning and Intelligent
Optimization, pages 507–523. Springer, 2011. 259
366. Frank Hutter, Holger H Hoos, Kevin Leyton-Brown, and Thomas Stützle. ParamILS: an
automatic algorithm configuration framework. Journal of Artificial Intelligence Research,
36:267–306, 2009. 259
367. Frank Hutter, Lars Kotthoff, and Joaquin Vanschoren. Automated Machine Learning: Methods,
Systems, Challenges. Springer Nature, 2019. 260
368. Roman Ilin, Robert Kozma, and Paul J Werbos. Efficient learning in cellular simultaneous
recurrent neural networks—the case of maze navigation problem. In 2007 IEEE International
Symposium on Approximate Dynamic Programming and Reinforcement Learning, pages 324–
329, 2007. 135, 139, 144
369. Sergey Ioffe. Batch renormalization: Towards reducing minibatch dependence in batch-
normalized models. In Advances in Neural Information Processing Systems, pages 1945–1953,
2017. 321
370. Riashat Islam, Peter Henderson, Maziar Gomrokchi, and Doina Precup. Reproducibility
of benchmarked deep reinforcement learning tasks for continuous control. arXiv preprint
arXiv:1708.04133, 2017. 87
371. Athul Paul Jacob, David J Wu, Gabriele Farina, Adam Lerer, Anton Bakhtin, Jacob Andreas,
and Noam Brown. Modeling strong and human-like gameplay with KL-regularized search.
arXiv preprint arXiv:2112.07544, 2021. 170
372. Max Jaderberg, Wojciech M. Czarnecki, Iain Dunning, Luke Marris, Guy Lever, Antonio Garcia
Castañeda, Charles Beattie, Neil C. Rabinowitz, Ari S. Morcos, Avraham Ruderman, Nicolas
Sonnerat, Tim Green, Louise Deason, Joel Z. Leibo, David Silver, Demis Hassabis, Koray
Kavukcuoglu, and Thore Graepel. Human-level performance in 3D multiplayer games with
population-based reinforcement learning. Science, 364(6443):859–865, 2019. 68, 69, 214, 218,
219, 277
373. Max Jaderberg, Valentin Dalibard, Simon Osindero, Wojciech M. Czarnecki, Jeff Donahue,
Ali Razavi, Oriol Vinyals, Tim Green, Iain Dunning, Karen Simonyan, Chrisantha Fernando,
and Koray Kavukcuoglu. Population based training of neural networks. arXiv preprint
arXiv:1711.09846, 2017. 212, 213, 218, 220, 223, 265, 277, 333
374. Stephen James, Zicong Ma, David Rovick Arrojo, and Andrew J Davison. RLBench: the
robot learning benchmark & learning environment. IEEE Robotics and Automation Letters,
5(2):3019–3026, 2020. 113
375. Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model:
Model-based policy optimization. In Advances in Neural Information Processing Systems,
pages 12498–12509, 2019. 128, 132, 138, 144
376. Michael Janner, Qiyang Li, and Sergey Levine. Reinforcement learning as one big sequence
modeling problem. arXiv preprint arXiv:2106.02039, 2021. 320
377. Andrew A Jawlik. Statistics from A to Z: Confusing concepts clarified. John Wiley & Sons,
2016. 288
378. Michael Johanson, Nolan Bard, Marc Lanctot, Richard G Gibson, and Michael Bowling.
Efficient Nash equilibrium approximation through Monte Carlo counterfactual regret mini-
mization. In AAMAS, pages 837–846, 2012. 203, 204, 223
379. Ian T Jolliffe and Jorge Cadima. Principal component analysis: a review and recent de-
velopments. Philosophical Transactions of the Royal Society A: Mathematical, Physical and
Engineering Sciences, 374(2065):20150202, 2016. 15
380. Michael Irwin Jordan. Learning in Graphical Models, volume 89. Springer Science & Business
Media, 1998. 278
381. Arthur Juliani. Simple reinforcement learning with tensorflow part 8: Asynchronous actor-critic agents (A3C). https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-8-asynchronous-actor-critic-agents-a3c-c88f72a5e9f2, 2016. 103
382. Arthur Juliani, Vincent-Pierre Berges, Ervin Teng, Andrew Cohen, Jonathan Harper, Chris
Elion, Chris Goy, Yuan Gao, Hunter Henry, Marwan Mattar, and Danny Lange. Unity: A
general platform for intelligent agents. arXiv preprint arXiv:1809.02627, 2018. 265, 332
383. Arthur Juliani, Ahmed Khalifa, Vincent-Pierre Berges, Jonathan Harper, Ervin Teng, Hunter
Henry, Adam Crespi, Julian Togelius, and Danny Lange. Obstacle tower: A generalization
challenge in vision, control, and planning. arXiv preprint arXiv:1902.01378, 2019. 223
384. John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ron-
neberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, Alex
Bridgland, Clemens Meyer, Simon A. A. Kohl, Andrew J. Ballard, Andrew Cowie, Bernardino
Romera-Paredes, Stanislav Nikolov, Rishub Jain, Jonas Adler, Trevor Back, Stig Petersen,
David Reiman, Ellen Clancy, Michal Zielinski, Martin Steinegger, Michalina Pacholska, Tamas
Berghammer, Sebastian Bodenstein, David Silver, Oriol Vinyals, Andrew W. Senior, Koray
Kavukcuoglu, Pushmeet Kohli, and Demis Hassabis. Highly accurate protein structure
prediction with AlphaFold. Nature, 596(7873):583–589, 2021. 188, 275
385. Andreas Junghanns and Jonathan Schaeffer. Sokoban: Enhancing general single-agent search
methods using domain knowledge. Artificial Intelligence, 129(1-2):219–251, 2001. 27, 56
386. Niels Justesen, Philip Bontrager, Julian Togelius, and Sebastian Risi. Deep learning for video
game playing. IEEE Transactions on Games, 12(1):1–20, 2019. 87
387. Niels Justesen, Ruben Rodriguez Torrado, Philip Bontrager, Ahmed Khalifa, Julian Togelius,
and Sebastian Risi. Illuminating generalization in deep reinforcement learning through
procedural level generation. arXiv preprint arXiv:1806.10729, 2018. 177
388. Leslie Pack Kaelbling, Michael L Littman, and Andrew W Moore. Reinforcement learning: A
survey. Journal of Artificial Intelligence Research, 4:237–285, 1996. 7, 59
389. Daniel Kahneman and Amos Tversky. Prospect theory: An analysis of decision under risk.
In Handbook of the Fundamentals of Financial Decision Making: Part I, pages 99–127. World
Scientific, 2013. 223
390. Lukasz Kaiser, Mohammad Babaeizadeh, Piotr Milos, Blazej Osinski, Roy H. Campbell, Konrad
Czechowski, Dumitru Erhan, Chelsea Finn, Piotr Kozakowski, Sergey Levine, Ryan Sepassi,
George Tucker, and Henryk Michalewski. Model-based reinforcement learning for Atari.
arXiv preprint arXiv:1903.00374, 2019. 129, 133, 138, 144
391. Dimitris Kalimeris, Gal Kaplun, Preetum Nakkiran, Benjamin L. Edelman, Tristan Yang,
Boaz Barak, and Haofeng Zhang. SGD on neural networks learns functions of increasing
complexity. In Advances in Neural Information Processing Systems, pages 3491–3501, 2019.
321
392. Gabriel Kalweit and Joschka Boedecker. Uncertainty-driven imagination for continuous deep
reinforcement learning. In Conference on Robot Learning, pages 195–206, 2017. 132
393. Reza Kamyar and Ehsan Taheri. Aircraft optimal terrain/threat-based trajectory planning
and control. Journal of Guidance, Control, and Dynamics, 37(2):466–483, 2014. 132
394. Satwik Kansal and Brendan Martin. Learn data science webpage, 2018. 44, 52, 53, 54, 61, 126
395. Hilbert J Kappen. Path integrals and symmetry breaking for optimal control theory. Journal
of Statistical Mechanics: Theory and Experiment, 2005(11):P11011, 2005. 107
396. Steven Kapturowski, Georg Ostrovski, John Quan, Remi Munos, and Will Dabney. Recurrent
experience replay in distributed reinforcement learning. In International Conference on
Learning Representations, 2018. 85
397. Maximilian Karl, Maximilian Soelch, Justin Bayer, and Patrick Van der Smagt. Deep vari-
ational Bayes filters: Unsupervised learning of state space models from raw data. arXiv
preprint arXiv:1605.06432, 2016. 131
398. Andrej Karpathy. The unreasonable effectiveness of recurrent neural networks. http://karpathy.github.io/2015/05/21/rnn-effectiveness/. Andrej Karpathy Blog, 2015.
314, 315
399. Andrej Karpathy. Deep reinforcement learning: Pong from pixels. http://karpathy.github.io/2016/05/31/rl/. Andrej Karpathy Blog, 2016. 80
400. Andrej Karpathy, Justin Johnson, and Li Fei-Fei. Visualizing and understanding recurrent
networks. arXiv preprint arXiv:1506.02078, 2015. 142
401. Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs
for improved quality, stability, and variation. In International Conference on Learning Repre-
sentations, 2018. 317, 318
402. Henry J Kelley. Gradient theory of optimal flight paths. American Rocket Society Journal,
30(10):947–954, 1960. 135, 139, 144
403. Stephen Kelly and Malcolm I Heywood. Multi-task learning in Atari video games with
emergent tangled program graphs. In Proceedings of the Genetic and Evolutionary Computation
Conference, pages 195–202. ACM, 2017. 251
404. Stephen Kelly and Malcolm I Heywood. Emergent tangled program graphs in multi-task
learning. In IJCAI, pages 5294–5298, 2018. 251
405. James Kennedy. Swarm intelligence. In Handbook of Nature-Inspired and Innovative Comput-
ing, pages 187–219. Springer, 2006. 11, 223
406. Pascal Kerschke, Holger H Hoos, Frank Neumann, and Heike Trautmann. Automated
algorithm selection: Survey and perspectives. Evolutionary Computation, 27(1):3–45, 2019.
259
407. Shauharda Khadka, Somdeb Majumdar, Tarek Nassar, Zach Dwiel, Evren Tumer, Santiago
Miret, Yinyin Liu, and Kagan Tumer. Collaborative evolutionary reinforcement learning. In
International Conference on Machine Learning, pages 3341–3350. PMLR, 2019. 223, 277
408. Shauharda Khadka and Kagan Tumer. Evolutionary reinforcement learning. arXiv preprint
arXiv:1805.07917, 2018. 223
409. Khimya Khetarpal, Zafarali Ahmed, Andre Cianflone, Riashat Islam, and Joelle Pineau. Re-
evaluate: Reproducibility in evaluating reinforcement learning algorithms. In Reproducibility
in Machine Learning Workshop, ICML, 2018. 87
410. Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In International
Conference on Learning Representations, 2014. 15, 131, 318
411. Diederik P Kingma and Max Welling. An introduction to variational autoencoders. Found.
Trends Mach. Learn., 12(4):307–392, 2019. 15, 131, 318
412. Robert Kirk, Amy Zhang, Edward Grefenstette, and Tim Rocktäschel. A survey of generali-
sation in deep reinforcement learning. arXiv preprint arXiv:2111.09794, 2021. 279
413. Daan Klijn and AE Eiben. A coevolutionary approach to deep multi-agent reinforcement
learning. arXiv preprint arXiv:2104.05610, 2021. 221, 223
414. Craig A Knoblock. Learning abstraction hierarchies for problem solving. In AAAI, pages
923–928, 1990. 231, 232
415. Donald E Knuth and Ronald W Moore. An analysis of alpha-beta pruning. Artificial Intelli-
gence, 6(4):293–326, 1975. 151, 161, 163
416. Jens Kober, J Andrew Bagnell, and Jan Peters. Reinforcement learning in robotics: A survey.
The International Journal of Robotics Research, 32(11):1238–1274, 2013. 111, 143, 144
417. Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. Siamese neural networks for
one-shot image recognition. In ICML Deep Learning workshop, volume 2. Lille, 2015. 268
418. Levente Kocsis and Csaba Szepesvári. Bandit based Monte-Carlo planning. In European
Conference on Machine Learning, pages 282–293. Springer, 2006. 164, 167, 168, 187
419. Vijay R Konda and John N Tsitsiklis. Actor–critic algorithms. In Advances in Neural
Information Processing Systems, pages 1008–1014, 2000. 99
420. Vijaymohan R Konda and Vivek S Borkar. Actor–Critic-type learning algorithms for Markov
Decision Processes. SIAM Journal on Control and Optimization, 38(1):94–123, 1999. 99
421. Richard E Korf. Depth-first iterative-deepening: An optimal admissible tree search. Artificial
Intelligence, 27(1):97–109, 1985. 163
422. Petar Kormushev, Sylvain Calinon, and Darwin G Caldwell. Robot motor skill coordination
with EM-based reinforcement learning. In 2010 IEEE/RSJ International Conference on Intelligent
Robots and Systems, pages 3232–3237. IEEE, 2010. 5, 6
423. Satwik Kottur, José MF Moura, Stefan Lee, and Dhruv Batra. Natural language does not
emerge ’naturally’ in multi-agent dialog. In Proceedings of the 2017 Conference on Empirical
Methods in Natural Language Processing, EMNLP 2017, Copenhagen, pages 2962–2967, 2017.
208
424. Samuel Kotz, Narayanaswamy Balakrishnan, and Norman L Johnson. Continuous Multivariate
Distributions, Volume 1: Models and Applications. John Wiley & Sons, 2004. 48, 106
425. Basil Kouvaritakis and Mark Cannon. Model Predictive Control. Springer, 2016. 144
426. Landon Kraemer and Bikramjit Banerjee. Multi-agent reinforcement learning as a rehearsal
for decentralized planning. Neurocomputing, 190:82–94, 2016. 207
427. Mark A Kramer. Nonlinear principal component analysis using autoassociative neural
networks. AIChE journal, 37(2):233–243, 1991. 318
428. Sarit Kraus, Eithan Ephrati, and Daniel Lehmann. Negotiation in a non-cooperative envi-
ronment. Journal of Experimental & Theoretical Artificial Intelligence, 3(4):255–281, 1994.
151
429. Sarit Kraus and Daniel Lehmann. Diplomat, an agent in a multi agent environment: An
overview. In IEEE International Performance Computing and Communications Conference,
pages 434–438, 1988. 208
430. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep
convolutional neural networks. In Advances in Neural Information Processing Systems, pages
1097–1105, 2012. 65, 246, 307, 308, 324
431. Kai A Krueger and Peter Dayan. Flexible shaping: How learning in small steps helps.
Cognition, 110(3):380–394, 2009. 176
432. Steven Kuhn. Prisoner's Dilemma. The Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/prisoner-dilemma/, 1997. 198
433. Jan Kuipers, Aske Plaat, Jos AM Vermaseren, and H Jaap van den Herik. Improving multi-
variate Horner schemes with Monte Carlo tree search. Computer Physics Communications,
184(11):2391–2395, 2013. 170
434. Tejas D Kulkarni, Karthik Narasimhan, Ardavan Saeedi, and Josh Tenenbaum. Hierarchical
deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. In
Advances in Neural Information Processing Systems, pages 3675–3683, 2016. 229, 231, 237, 238,
241
435. Solomon Kullback and Richard A Leibler. On information and sufficiency. The Annals of
Mathematical Statistics, 22(1):79–86, 1951. 105
436. Yen-Ling Kuo, Boris Katz, and Andrei Barbu. Encoding formulas as deep networks: Rein-
forcement learning for zero-shot execution of LTL formulas. arXiv preprint arXiv:2006.01110,
2020. 268
437. Karol Kurach, Anton Raichuk, Piotr Stańczyk, Michał Zajac, Olivier Bachem, Lasse Espeholt,
Carlos Riquelme, Damien Vincent, Marcin Michalski, Olivier Bousquet, and Sylvain Gelly.
Google research football: A novel reinforcement learning environment. In Proceedings of the
AAAI Conference on Artificial Intelligence, volume 34, pages 4501–4510, 2020. 221, 332
438. Thanard Kurutach, Ignasi Clavera, Yan Duan, Aviv Tamar, and Pieter Abbeel. Model-ensemble
trust-region policy optimization. In International Conference on Learning Representations,
2018. 129, 144
439. W H Kwon, AM Bruckstein, and T Kailath. Stabilizing state-feedback design via the moving
horizon method. International Journal of Control, 37(3):631–643, 1983. 132
440. Michail G Lagoudakis and Ronald Parr. Least-squares policy iteration. Journal of Machine
Learning Research, 4:1107–1149, Dec 2003. 87
441. Tze Leung Lai. Adaptive treatment allocation and the multi-armed bandit problem. The
Annals of Statistics, pages 1091–1114, 1987. 48
442. Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules.
Advances in Applied Mathematics, 6(1):4–22, 1985. 47, 48
443. John E Laird, Paul S Rosenbloom, and Allen Newell. Chunking in Soar: the anatomy of a
general learning mechanism. Machine Learning, 1(1):11–46, 1986. 230, 241
444. Brenden Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua Tenenbaum. One shot learning
of simple visual concepts. In Proceedings of the Annual Meeting of the Cognitive Science Society,
volume 33, 2011. 253, 262, 332
445. Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept
learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015. 262
446. Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. The Omniglot challenge:
a 3-year progress report. Current Opinion in Behavioral Sciences, 29:97–104, 2019. 262, 263
447. Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable
predictive uncertainty estimation using deep ensembles. In Advances in Neural Information
Processing Systems, pages 6402–6413, 2017. 133
448. Christoph H Lampert, Hannes Nickisch, and Stefan Harmeling. Learning to detect unseen
object classes by between-class attribute transfer. In 2009 IEEE Conference on Computer Vision
and Pattern Recognition, pages 951–958. IEEE, 2009. 261, 268
449. Marc Lanctot, Edward Lockhart, Jean-Baptiste Lespiau, Vinícius Flores Zambaldi, Satyaki
Upadhyay, Julien Pérolat, Sriram Srinivasan, Finbarr Timbers, Karl Tuyls, Shayegan Omid-
shafiei, Daniel Hennes, Dustin Morrill, Paul Muller, Timo Ewalds, Ryan Faulkner, János
Kramár, Bart De Vylder, Brennan Saeta, James Bradbury, David Ding, Sebastian Borgeaud,
Matthew Lai, Julian Schrittwieser, Thomas W. Anthony, Edward Hughes, Ivo Danihelka, and
Jonah Ryan-Davis. Openspiel: A framework for reinforcement learning in games. arXiv
preprint arXiv:1908.09453, 2019. 332
450. Marc Lanctot, Kevin Waugh, Martin Zinkevich, and Michael H Bowling. Monte carlo sampling
for regret minimization in extensive games. In Advances in Neural Information Processing
Systems, pages 1078–1086, 2009. 196, 203, 204, 223
451. Marc Lanctot, Vinicius Zambaldi, Audrunas Gruslys, Angeliki Lazaridou, Karl Tuyls, Julien
Pérolat, David Silver, and Thore Graepel. A unified game-theoretic approach to multiagent
reinforcement learning. In Advances in Neural Information Processing Systems, pages 4190–
4203, 2017. 194
452. Sascha Lange and Martin Riedmiller. Deep auto-encoder neural networks in reinforcement
learning. In The 2010 International Joint Conference on Neural Networks (IJCNN), pages 1–8.
IEEE, 2010. 74
453. Hugo Larochelle, Dumitru Erhan, and Yoshua Bengio. Zero-data learning of new tasks. In
AAAI, volume 1, page 3, 2008. 261, 268
454. Alexandre Laterre, Yunguan Fu, Mohamed Khalil Jabri, Alain-Sam Cohen, David Kas, Karl
Hajjar, Torbjorn S Dahl, Amine Kerkeni, and Karim Beguir. Ranked reward: Enabling self-play
reinforcement learning for combinatorial optimization. arXiv preprint arXiv:1807.01672, 2018.
177, 178, 188, 275
455. Jean-Claude Latombe. Robot Motion Planning, volume 124. Springer Science & Business
Media, 2012. 27, 56
456. Steffen L Lauritzen. Graphical Models, volume 17. Clarendon Press, 1996. 278
457. Angeliki Lazaridou, Alexander Peysakhovich, and Marco Baroni. Multi-agent cooperation and
the emergence of (natural) language. In International Conference on Learning Representations,
2017. 208
458. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436,
2015. 11, 86, 244, 307, 308
459. Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne
Hubbard, and Lawrence D Jackel. Backpropagation applied to handwritten zip code recogni-
tion. Neural Computation, 1(4):541–551, 1989. 311
460. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning
applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. 308, 309,
323
461. Honglak Lee, Roger Grosse, Rajesh Ranganath, and Andrew Y Ng. Convolutional deep belief
networks for scalable unsupervised learning of hierarchical representations. In Proceedings
of the 26th Annual International Conference on Machine Learning, pages 609–616. ACM, 2009.
308
462. Yoonho Lee and Seungjin Choi. Gradient-based meta-learning with learned layerwise metric
and subspace. In International Conference on Machine Learning, pages 2927–2936. PMLR,
2018. 255
463. Joel Z Leibo, Edward Hughes, Marc Lanctot, and Thore Graepel. Autocurricula and the
emergence of innovation from social interaction: A manifesto for multi-agent intelligence
research. arXiv preprint arXiv:1903.00742, 2019. 187, 212, 217, 275
464. Joel Z Leibo, Vinicius Zambaldi, Marc Lanctot, Janusz Marecki, and Thore Graepel. Multi-
agent reinforcement learning in sequential social dilemmas. In Proceedings of the 16th
Conference on Autonomous Agents and MultiAgent Systems, AAMAS 2017, São Paulo, Brazil,
pages 464–473, 2017. 206, 208, 209, 216
465. Charles E. Leiserson, Zahi S. Abuhamdeh, David C. Douglas, Carl R. Feynman, Mahesh N.
Ganmukhi, Jeffrey V. Hill, W. Daniel Hillis, Bradley C. Kuszmaul, Margaret A. St. Pierre,
David S. Wells, Monica C. Wong, Shaw-Wen Yang, and Robert C. Zak. The network architec-
ture of the connection machine CM-5. In Proceedings of the fourth annual ACM Symposium
on Parallel Algorithms and Architectures, pages 272–285, 1992. 325
466. Matteo Leonetti, Luca Iocchi, and Peter Stone. A synthesis of automated planning and
reinforcement learning for efficient, robust decision-making. Artificial Intelligence, 241:103–
130, 2016. 132
467. Sergey Levine and Pieter Abbeel. Learning neural network policies with guided policy search
under unknown dynamics. In Advances in Neural Information Processing Systems, pages
1071–1079, 2014. 138, 144
468. Sergey Levine and Vladlen Koltun. Guided policy search. In International Conference on
Machine Learning, pages 1–9, 2013. 128, 144
469. Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning:
Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643, 2020.
251
470. Andrew Levy, George Konidaris, Robert Platt, and Kate Saenko. Learning multi-level hierar-
chies with hindsight. In International Conference on Learning Representations, 2019. 231, 233,
236, 239, 241, 276
471. Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the
loss landscape of neural nets. In Advances in Neural Information Processing Systems, pages
6391–6401, 2018. 142
472. Ke Li and Jitendra Malik. Learning to optimize neural nets. arXiv preprint arXiv:1703.00441,
2017. 268
473. Sheng Li, Jayesh K Gupta, Peter Morales, Ross Allen, and Mykel J Kochenderfer. Deep
implicit coordination graphs for multi-agent reinforcement learning. In AAMAS ’21: 20th
International Conference on Autonomous Agents and Multiagent Systems, 2021. 207
474. Siyuan Li, Rui Wang, Minxue Tang, and Chongjie Zhang. Hierarchical reinforcement learning
with advantage-based auxiliary rewards. In Advances in Neural Information Processing Systems,
pages 1407–1417, 2019. 276
475. Zhenguo Li, Fengwei Zhou, Fei Chen, and Hang Li. Meta-SGD: learning to learn quickly for
few-shot learning. arXiv preprint arXiv:1707.09835, 2017. 254, 268
476. Zhuoru Li, Akshay Narayan, and Tze-Yun Leong. An efficient approach to model-based
hierarchical reinforcement learning. In Proceedings of the AAAI Conference on Artificial
Intelligence, volume 31, 2017. 231
477. Eric Liang, Richard Liaw, Philipp Moritz, Robert Nishihara, Roy Fox, Ken Goldberg, Joseph E
Gonzalez, Michael I Jordan, and Ion Stoica. RLlib: abstractions for distributed reinforcement
learning. In International Conference on Machine Learning, pages 3059–3068, 2018. 332
478. Diego Pérez Liébana, Simon M Lucas, Raluca D Gaina, Julian Togelius, Ahmed Khalifa,
and Jialin Liu. General video game artificial intelligence. Synthesis Lectures on Games and
Computational Intelligence, 3(2):1–191, 2019. 177
479. Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa,
David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. In
International Conference on Learning Representations, 2016. 91, 94, 108, 109, 116, 140, 272
480. Long-Ji Lin. Self-improving reactive agents based on reinforcement learning, planning and
teaching. Machine Learning, 8(3-4):293–321, 1992. 75, 87
481. Long-Ji Lin. Reinforcement learning for robots using neural networks. Technical report,
Carnegie Mellon University, School of Computer Science, Pittsburgh, PA, 1993. 74, 75
482. Michael L Littman. Markov games as a framework for multi-agent reinforcement learning.
In Machine Learning Proceedings 1994, pages 157–163. Elsevier, 1994. 28, 194, 223
483. Chunming Liu, Xin Xu, and Dewen Hu. Multiobjective reinforcement learning: A comprehen-
sive overview. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 45(3):385–398,
2014. 223
484. Hao Liu and Pieter Abbeel. Hybrid discriminative-generative training via contrastive learning.
arXiv preprint arXiv:2007.09070, 2020. 72
485. Hui Liu, Song Yu, Zhangxin Chen, Ben Hsieh, and Lei Shao. Sparse matrix-vector multipli-
cation on NVIDIA GPU. International Journal of Numerical Analysis & Modeling, Series B,
3(2):185–191, 2012. 325
486. Siqi Liu, Guy Lever, Josh Merel, Saran Tunyasuvunakool, Nicolas Heess, and Thore Grae-
pel. Emergent coordination through competition. In International Conference on Learning
Representations, 2019. 212, 223
487. Manuel López-Ibáñez, Jérémie Dubois-Lacoste, Leslie Pérez Cáceres, Mauro Birattari, and
Thomas Stützle. The irace package: Iterated racing for automatic algorithm configuration.
Operations Research Perspectives, 3:43–58, 2016. 259
488. Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. Multi-agent
Actor-Critic for mixed cooperative-competitive environments. In Advances in Neural Infor-
mation Processing Systems, pages 6379–6390, 2017. 207, 214, 333
489. Gabriel Loye. The attention mechanism. https://blog.floydhub.com/attention-
mechanism/. 319
490. Siyuan Ma, Raef Bassily, and Mikhail Belkin. The power of interpolation: Understanding the
effectiveness of SGD in modern over-parametrized learning. In International Conference on
Machine Learning, pages 3331–3340, 2018. 321
491. Xiaoliang Ma, Xiaodong Li, Qingfu Zhang, Ke Tang, Zhengping Liang, Weixin Xie, and
Zexuan Zhu. A survey on cooperative co-evolutionary algorithms. IEEE Transactions on
Evolutionary Computation, 23(3):421–441, 2018. 210
492. Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of
Machine Learning Research, 9:2579–2605, Nov 2008. 15
493. Marlos C Machado, Marc G Bellemare, Erik Talvitie, Joel Veness, Matthew Hausknecht, and
Michael Bowling. Revisiting the Arcade Learning Environment: Evaluation protocols and
open problems for general agents. Journal of Artificial Intelligence Research, 61:523–562, 2018.
86
494. Hamid Reza Maei, Csaba Szepesvári, Shalabh Bhatnagar, and Richard S Sutton. Toward
off-policy learning control with function approximation. In International Conference on
Machine Learning, 2010. 74
495. Anuj Mahajan, Tabish Rashid, Mikayel Samvelyan, and Shimon Whiteson. MAVEN: Multi-
agent variational exploration. In Advances in Neural Information Processing Systems, pages
7611–7622, 2019. 207
496. Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan
Li, Ashwin Bharambe, and Laurens van der Maaten. Exploring the limits of weakly supervised
pretraining. In Proceedings of the European Conference on Computer Vision (ECCV), pages
181–196, 2018. 244
497. Somdeb Majumdar, Shauharda Khadka, Santiago Miret, Stephen McAleer, and Kagan Tumer.
Evolutionary reinforcement learning for sample-efficient multiagent coordination. In Inter-
national Conference on Machine Learning, 2020. 223
498. Rajbala Makar, Sridhar Mahadevan, and Mohammad Ghavamzadeh. Hierarchical multi-agent
reinforcement learning. In Proceedings of the Fifth International Conference on Autonomous
Agents, pages 246–253. ACM, 2001. 238
499. Julian N Marewski, Wolfgang Gaissmaier, and Gerd Gigerenzer. Good judgments do not
require complex cognition. Cognitive Processing, 11(2):103–121, 2010. 209
500. Vince Martinelli. How robots autonomously see, grasp, and pick. https://www.
therobotreport.com/grasp-sight-picking-evolve-robots/, 2019. 93
501. Tambet Matiisen, Avital Oliver, Taco Cohen, and John Schulman. Teacher-student curriculum
learning. IEEE Transactions on Neural Networks and Learning Systems, 31(9):3732–3740, 2020. 188
502. Masakazu Matsugu, Katsuhiko Mori, Yusuke Mitari, and Yuji Kaneda. Subject independent
facial expression recognition with robust face detection using a convolutional neural network.
Neural Networks, 16(5-6):555–559, 2003. 310
503. Kiminori Matsuzaki. Empirical analysis of PUCT algorithm with evaluation functions of
different quality. In 2018 Conference on Technologies and Applications of Artificial Intelligence
(TAAI), pages 142–147. IEEE, 2018. 168, 169
504. David Q Mayne, James B Rawlings, Christopher V Rao, and Pierre OM Scokaert. Constrained
model predictive control: Stability and optimality. Automatica, 36(6):789–814, 2000. 144
505. James L McClelland, Bruce L McNaughton, and Randall C O’Reilly. Why there are comple-
mentary learning systems in the hippocampus and neocortex: insights from the successes and
failures of connectionist models of learning and memory. Psychological Review, 102(3):419,
1995. 75
506. Francisco S Melo and M Isabel Ribeiro. Convergence of Q-learning with linear function
approximation. In 2007 European Control Conference (ECC), pages 2671–2678. IEEE, 2007. 71
507. Josh Merel, Arun Ahuja, Vu Pham, Saran Tunyasuvunakool, Siqi Liu, Dhruva Tirumala, Nico-
las Heess, and Greg Wayne. Hierarchical visuomotor control of humanoids. In International
Conference on Learning Representations, 2019. 113
508. Josh Merel, Diego Aldarondo, Jesse Marshall, Yuval Tassa, Greg Wayne, and Bence Ölveczky.
Deep neuroethology of a virtual rodent. In International Conference on Learning Representa-
tions, 2020. 113
509. Josh Merel, Leonard Hasenclever, Alexandre Galashov, Arun Ahuja, Vu Pham, Greg Wayne,
Yee Whye Teh, and Nicolas Heess. Neural probabilistic motor primitives for humanoid
control. In International Conference on Learning Representations, 2019. 113
510. Josh Merel, Yuval Tassa, Dhruva TB, Sriram Srinivasan, Jay Lemmon, Ziyu Wang, Greg
Wayne, and Nicolas Heess. Learning human behaviors from motion capture by adversarial
imitation. arXiv preprint arXiv:1707.02201, 2017. 113
511. Risto Miikkulainen, Jason Liang, Elliot Meyerson, Aditya Rawal, Daniel Fink, Olivier Francon,
Bala Raju, Hormoz Shahrzad, Arshak Navruzyan, Nigel Duffy, and Babak Hodjat. Evolving
deep neural networks. In Artificial Intelligence in the Age of Neural Networks and Brain
Computing, pages 293–312. Elsevier, 2019. 277
512. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word
representations in vector space. In International Conference on Learning Representations, 2013.
246, 249
513. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed
representations of words and phrases and their compositionality. In Advances in Neural
Information Processing Systems, 2013. 246
514. Jonathan K Millen. Programming the game of Go. Byte Magazine, 1981. 153
515. S Ali Mirsoleimani, Aske Plaat, Jaap van den Herik, and Jos Vermaseren. Scaling Monte
Carlo tree search on Intel Xeon Phi. In 2015 IEEE 21st International Conference on Parallel and
Distributed Systems (ICPADS), pages 666–673. IEEE, 2015. 187
516. Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. A simple neural attentive
meta-learner. In International Conference on Learning Representations, 2018. 254, 268
517. Tom M Mitchell. The need for biases in learning generalizations. Technical Report CBM-TR-
117, Department of Computer Science, Rutgers University, 1980. 178
518. Tom M Mitchell. The discipline of machine learning. Technical Report CMU-ML-06-108,
Carnegie Mellon University, School of Computer Science, Machine Learning, 2006. 178
519. Akshita Mittel and Purna Sowmya Munukutla. Visual transfer between Atari games using
competitive reinforcement learning. In Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition Workshops, pages 0–0, 2019. 265
520. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap,
Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep rein-
forcement learning. In International Conference on Machine Learning, pages 1928–1937, 2016.
94, 103, 104, 116, 272
521. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan
Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. arXiv
preprint arXiv:1312.5602, 2013. 41, 66, 71, 72, 74, 75, 78, 79, 80, 83, 87, 150
522. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G.
Bellemare, Alex Graves, Martin A. Riedmiller, Andreas Fidjeland, Georg Ostrovski, Stig
Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran,
Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep rein-
forcement learning. Nature, 518(7540):529–533, 2015. 7, 66, 73, 74, 75, 76, 77, 78, 79, 83, 84, 87,
179, 272
523. Thomas Moerland. Continuous Markov decision process and policy search. Lecture notes
for the course reinforcement learning, Leiden University, 2021. 28, 71, 106, 107, 109, 283, 298
524. Thomas M Moerland. The Intersection of Planning and Learning. PhD thesis, Delft University
of Technology, 2021. 275
525. Thomas M Moerland, Joost Broekens, and Catholijn M Jonker. Efficient exploration with
double uncertain value networks. arXiv preprint arXiv:1711.10789, 2017. 83
526. Thomas M Moerland, Joost Broekens, and Catholijn M Jonker. The potential of the return
distribution for exploration in RL. arXiv preprint arXiv:1806.04242, 2018. 83
527. Thomas M Moerland, Joost Broekens, and Catholijn M Jonker. A framework for reinforcement
learning and planning. arXiv preprint arXiv:2006.15009, 2020. 127
528. Thomas M Moerland, Joost Broekens, and Catholijn M Jonker. Model-based reinforcement
learning: A survey. arXiv preprint arXiv:2006.16712, 2020. 139, 144
529. Thomas M Moerland, Joost Broekens, Aske Plaat, and Catholijn M Jonker. A0C: Alpha Zero
in continuous action space. arXiv preprint arXiv:1805.09613, 2018. 169, 187
530. Thomas M Moerland, Joost Broekens, Aske Plaat, and Catholijn M Jonker. Monte Carlo tree
search for asymmetric trees. arXiv preprint arXiv:1805.09218, 2018. 187
531. Andrew William Moore. Efficient memory-based learning for robot control. Technical
Report UCAM-CL-TR-209, University of Cambridge, UK, https://www.cl.cam.ac.uk/
techreports/UCAM-CL-TR-209.pdf, 1990. 55
532. Matej Moravčík, Martin Schmid, Neil Burch, Viliam Lisỳ, Dustin Morrill, Nolan Bard, Trevor
Davis, Kevin Waugh, Michael Johanson, and Michael Bowling. DeepStack: Expert-level
artificial intelligence in heads-up no-limit poker. Science, 356(6337):508–513, 2017. 214, 216,
223
533. Igor Mordatch and Pieter Abbeel. Emergence of grounded compositional language in multi-
agent populations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32,
2018. 212
534. Pol Moreno, Edward Hughes, Kevin R McKee, Bernardo Avila Pires, and Théophane We-
ber. Neural recursive belief states in multi-agent reinforcement learning. arXiv preprint
arXiv:2102.02274, 2021. 208
535. David E Moriarty, Alan C Schultz, and John J Grefenstette. Evolutionary algorithms for
reinforcement learning. Journal of Artificial Intelligence Research, 11:241–276, 1999. 223
536. Hossam Mossalam, Yannis M Assael, Diederik M Roijers, and Shimon Whiteson. Multi-
objective deep reinforcement learning. arXiv preprint arXiv:1610.02707, 2016. 223
537. Hussain Mujtaba. Introduction to autoencoders. https://www.mygreatlearning.com/
blog/autoencoder/, 2020. 318
538. Sendhil Mullainathan and Richard H Thaler. Behavioral economics. Technical report,
National Bureau of Economic Research, 2000. 223
539. Martin Müller. Computer Go. Artificial Intelligence, 134(1-2):145–179, 2002. 68
540. Matthias Müller-Brockhausen, Mike Preuss, and Aske Plaat. Procedural content generation:
Better benchmarks for transfer reinforcement learning. In Conference on Games, 2021. 177,
268
541. Tsendsuren Munkhdalai and Hong Yu. Meta networks. In International Conference on Machine
Learning, pages 2554–2563. PMLR, 2017. 254, 268
542. Yoshio Murase, Hitoshi Matsubara, and Yuzuru Hiraga. Automatic making of Sokoban
problems. In Pacific Rim International Conference on Artificial Intelligence, pages 592–600.
Springer, 1996. 27, 56
543. Derick Mwiti. Reinforcement learning applications. https://neptune.ai/blog/
reinforcement-learning-applications. 4
544. Roger B Myerson. Game Theory. Harvard University Press, 2013. 223
545. Ofir Nachum, Shixiang Gu, Honglak Lee, and Sergey Levine. Data-efficient hierarchical
reinforcement learning. In Advances in Neural Information Processing Systems, pages 3307–
3317, 2018. 231, 233, 241, 276
546. Ofir Nachum, Mohammad Norouzi, Kelvin Xu, and Dale Schuurmans. Bridging the gap
between value and policy based reinforcement learning. In Advances in Neural Information
Processing Systems, pages 2775–2785, 2017. 107
547. Prakash M Nadkarni, Lucila Ohno-Machado, and Wendy W Chapman. Natural language pro-
cessing: an introduction. Journal of the American Medical Informatics Association, 18(5):544–
551, 2011. 276
548. Anusha Nagabandi, Gregory Kahn, Ronald S Fearing, and Sergey Levine. Neural network
dynamics for model-based deep reinforcement learning with model-free fine-tuning. In 2018
IEEE International Conference on Robotics and Automation (ICRA), pages 7559–7566, 2018.
133, 144, 145
549. Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever.
Deep double descent: Where bigger models and more data hurt. In 8th International Conference
on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, 2020. 321
550. Nantas Nardelli, Gabriel Synnaeve, Zeming Lin, Pushmeet Kohli, Philip HS Torr, and Nicolas
Usunier. Value propagation networks. In 7th International Conference on Learning Represen-
tations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. 138, 144
551. Sanmit Narvekar, Bei Peng, Matteo Leonetti, Jivko Sinapov, Matthew E Taylor, and Peter
Stone. Curriculum learning for reinforcement learning domains: A framework and survey.
Journal of Machine Learning Research, 2020. 178, 275
552. Sylvia Nasar. A Beautiful Mind. Simon and Schuster, 2011. 223
553. John Nash. Non-cooperative games. Annals of Mathematics, pages 286–295, 1951. 223
554. John F Nash. Equilibrium points in 𝑛-person games. Proceedings of the National Academy of
Sciences, 36(1):48–49, 1950. 223
555. John F Nash Jr. The bargaining problem. Econometrica: Journal of the Econometric Society,
pages 155–162, 1950. 223
556. Yu Nasu. Efficiently updatable neural-network-based evaluation functions for computer
shogi. The 28th World Computer Shogi Championship Appeal Document, 2018. 162
557. Richard E Neapolitan. Learning Bayesian Networks. Pearson Prentice Hall, Upper Saddle
River, NJ, 2004. 278
558. Andrew Y Ng. Feature selection, L1 vs. L2 regularization, and rotational invariance. In
Proceedings of the Twenty-first International Conference on Machine Learning, page 78. ACM,
2004. 321
559. Andrew Y Ng, Daishi Harada, and Stuart Russell. Policy invariance under reward transfor-
mations: Theory and application to reward shaping. In International Conference on Machine
Learning, volume 99, pages 278–287, 1999. 52
560. Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms.
arXiv preprint arXiv:1803.02999, 2018. 254, 259, 268, 269
561. Sufeng Niu, Siheng Chen, Hanyu Guo, Colin Targonski, Melissa C Smith, and Jelena Kovačević.
Generalized value iteration networks: Life beyond lattices. In Thirty-Second AAAI Conference
on Artificial Intelligence, 2018. 134, 139
562. Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-
conditional video prediction using deep networks in Atari games. In Advances in Neural
Information Processing Systems, pages 2863–2871, 2015. 138, 144
563. Junhyuk Oh, Satinder Singh, and Honglak Lee. Value prediction network. In Advances in
Neural Information Processing Systems, pages 6118–6128, 2017. 129, 130, 131, 135, 138, 140,
144, 333
564. Junhyuk Oh, Satinder Singh, Honglak Lee, and Pushmeet Kohli. Zero-shot task generalization
with multi-task deep reinforcement learning. In International Conference on Machine Learning,
pages 2661–2670. PMLR, 2017. 268
565. Kyoung-Su Oh and Keechul Jung. GPU implementation of neural networks. Pattern Recogni-
tion, 37(6):1311–1314, 2004. 325
566. Chris Olah. Understanding LSTM networks. http://colah.github.io/posts/2015-08-
Understanding-LSTMs/, 2015. 313, 314
567. Frans A Oliehoek. Decentralized POMDPs. In Reinforcement Learning, pages 471–503.
Springer, 2012. 223
568. Frans A Oliehoek and Christopher Amato. A Concise Introduction to Decentralized POMDPs.
Springer, 2016. 151, 197, 212, 223
569. Frans A Oliehoek, Matthijs TJ Spaan, Christopher Amato, and Shimon Whiteson. Incremental
clustering and expansion for faster optimal planning in Dec-POMDPs. Journal of Artificial
Intelligence Research, 46:449–509, 2013. 212
570. Shayegan Omidshafiei, Jason Pazis, Christopher Amato, Jonathan P How, and John Vian.
Deep decentralized multi-task multi-agent reinforcement learning under partial observability.
In International Conference on Machine Learning, pages 2681–2690. PMLR, 2017. 212
571. Joseph O’Neill, Barty Pleydell-Bouverie, David Dupret, and Jozsef Csicsvari. Play it again:
reactivation of waking experience and memory. Trends in Neurosciences, 33(5):220–229, 2010.
75
572. Santiago Ontanón, Gabriel Synnaeve, Alberto Uriarte, Florian Richoux, David Churchill, and
Mike Preuss. A survey of real-time strategy game AI research and competition in StarCraft.
IEEE Transactions on Computational Intelligence and AI in Games, 5(4):293–311, 2013. 68, 219,
223
573. David Opitz and Richard Maclin. Popular ensemble methods: An empirical study. Journal of
Artificial Intelligence Research, 11:169–198, 1999. 128, 144
574. Ian Osband, Yotam Doron, Matteo Hessel, John Aslanides, Eren Sezener, Andre Saraiva,
Katrina McKinney, Tor Lattimore, Csaba Szepesvári, Satinder Singh, Benjamin Van Roy,
Richard S. Sutton, David Silver, and Hado van Hasselt. Behaviour suite for reinforcement
learning. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa,
Ethiopia, April 2020. 332
575. Pierre-Yves Oudeyer and Frederic Kaplan. How can we define intrinsic motivation? In the 8th
International Conference on Epigenetic Robotics: Modeling Cognitive Development in Robotic
Systems. Lund University Cognitive Studies, Lund: LUCS, Brighton, 2008. 241
576. Pierre-Yves Oudeyer and Frederic Kaplan. What is intrinsic motivation? A typology of
computational approaches. Frontiers in Neurorobotics, 1:6, 2009. 237, 241
577. Pierre-Yves Oudeyer, Frederic Kaplan, and Verena V Hafner. Intrinsic motivation systems for
autonomous mental development. IEEE Transactions on Evolutionary Computation, 11(2):265–
286, 2007. 176, 241
578. Charles Packer, Katelyn Gao, Jernej Kos, Philipp Krähenbühl, Vladlen Koltun, and Dawn Song.
Assessing generalization in deep reinforcement learning. arXiv preprint arXiv:1810.12282,
2018. 278
579. Mark M Palatucci, Dean A Pomerleau, Geoffrey E Hinton, and Tom Mitchell. Zero-shot
learning with semantic output codes. In Advances in Neural Information Processing Systems
22, 2009. 261, 268
580. Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on
Knowledge and Data Engineering, 22(10):1345–1359, 2010. 246, 248, 251, 268
581. Aleksandr I Panov and Aleksey Skrynnik. Automatic formation of the structure of ab-
stract machines in hierarchical reinforcement learning with state clustering. arXiv preprint
arXiv:1806.05292, 2018. 241
582. Giuseppe Davide Paparo, Vedran Dunjko, Adi Makmal, Miguel Angel Martin-Delgado, and
Hans J Briegel. Quantum speedup for active learning agents. Physical Review X, 4(3):031002,
2014. 188
583. Philip Paquette, Yuchen Lu, Steven Bocco, Max Smith, O-G Satya, Jonathan K Kummerfeld,
Joelle Pineau, Satinder Singh, and Aaron C Courville. No-press Diplomacy: Modeling multi-
agent gameplay. In Advances in Neural Information Processing Systems, pages 4476–4487,
2019. 208
584. German I Parisi, Ronald Kemker, Jose L Part, Christopher Kanan, and Stefan Wermter.
Continual lifelong learning with neural networks: A review. Neural Networks, 113:54–71,
2019. 248
585. Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and
transfer reinforcement learning. arXiv preprint arXiv:1511.06342, 2015. 265
586. Emilio Parisotto, Francis Song, Jack Rae, Razvan Pascanu, Caglar Gulcehre, Siddhant Jayaku-
mar, Max Jaderberg, Raphael Lopez Kaufman, Aidan Clark, Seb Noury, et al. Stabilizing
transformers for reinforcement learning. In International Conference on Machine Learning,
pages 7487–7498. PMLR, 2020. 265
587. Ronald Parr and Stuart J Russell. Reinforcement learning with hierarchies of machines. In
Advances in Neural Information Processing Systems, pages 1043–1049, 1998. 231, 232, 241
588. Gian-Carlo Pascutto. Leela Zero. https://github.com/leela-zero/leela-zero, 2017.
183
589. Alexander Pashevich, Danijar Hafner, James Davidson, Rahul Sukthankar, and Cordelia
Schmid. Modulated policy hierarchies. arXiv preprint arXiv:1812.00025, 2018. 231
590. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan,
Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas
Köpf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy,
Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-
performance deep learning library. In Advances in Neural Information Processing Systems,
pages 8024–8035, 2019. 304, 322
591. Shubham Pateria, Budhitama Subagdja, Ah-Hwee Tan, and Chai Quek. Hierarchical
reinforcement learning: A comprehensive survey. ACM Computing Surveys (CSUR), 54(5):1–
35, 2021. 232, 241
592. Judea Pearl. Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-
Wesley, Reading, MA, 1984. 9, 161
593. Judea Pearl and Dana Mackenzie. The Book of Why: the New Science of Cause and Effect. Basic
Books, 2018. 9
594. Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion,
Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake
VanderPlas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and
Edouard Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning
Research, 12(Oct):2825–2830, 2011. 260, 327
595. Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for
word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural
Language Processing (EMNLP), pages 1532–1543, 2014. 249
596. Alexandre Péré, Sébastien Forestier, Olivier Sigaud, and Pierre-Yves Oudeyer. Unsupervised
learning of goal spaces for intrinsically motivated goal exploration. In International Conference
on Learning Representations, 2018. 233
597. Karl Pertsch, Oleh Rybkin, Frederik Ebert, Shenghao Zhou, Dinesh Jayaraman, Chelsea
Finn, and Sergey Levine. Long-horizon visual planning with goal-conditioned hierarchical
predictors. In Advances in Neural Information Processing Systems, 2020. 241
598. Aske Plaat. De vlinder en de mier / The butterfly and the ant—on modeling behavior in
organizations. Inaugural lecture. Tilburg University, 2010. 223
599. Aske Plaat. Learning to Play: Reinforcement Learning and Games. Springer Verlag, Heidelberg,
https://learningtoplay.net, 2020. 59, 161, 163, 187
600. Aske Plaat, Walter Kosters, and Mike Preuss. High-accuracy model-based reinforcement
learning, a survey. arXiv preprint arXiv:2107.08241, 2021. 128, 136, 137, 138, 139, 144, 379
601. Aske Plaat, Jonathan Schaeffer, Wim Pijls, and Arie De Bruin. Best-first fixed-depth minimax
algorithms. Artificial Intelligence, 87(1-2):255–293, 1996. 163
602. Matthias Plappert. Keras-RL. https://github.com/keras-rl/keras-rl, 2016. 77
603. Jordan B Pollack and Alan D Blair. Why did TD-Gammon work? In Advances in Neural
Information Processing Systems, pages 10–16, 1997. 74
604. Aditya Prasad. Lessons from implementing AlphaZero. https://medium.com/oracledevs/
lessons-from-implementing-alphazero-7e36e9054191, 2018. 173
605. Lorien Y Pratt. Discriminability-based transfer between neural networks. In Advances in
Neural Information Processing Systems, pages 204–211, 1993. 248, 268
606. Lutz Prechelt. Automatic early stopping using cross validation: quantifying the criteria.
Neural Networks, 11(4):761–767, 1998. 321
607. Lutz Prechelt. Early stopping-but when? In Neural Networks: Tricks of the trade, pages 55–69.
Springer, 1998. 321
608. Doina Precup, Richard S Sutton, and Satinder P Singh. Planning with closed-loop macro
actions. In Working notes of the 1997 AAAI Fall Symposium on Model-directed Autonomous
Systems, pages 70–76, 1997. 230, 241
609. David Premack and Guy Woodruff. Does the chimpanzee have a theory of mind? Behavioral
and Brain Sciences, 1(4):515–526, 1978. 208
610. Hugo M Proença and Matthijs van Leeuwen. Interpretable multiclass classification by
MDL-based rule lists. Information Sciences, 512:1372–1393, 2020. 278
611. Max Pumperla and Kevin Ferguson. Deep Learning and the Game of Go. Manning, 2019. 187
612. J Ross Quinlan. Learning efficient classification procedures and their application to chess
end games. In Machine Learning, pages 463–482. Springer, 1983. 162
613. J Ross Quinlan. Induction of decision trees. Machine Learning, 1(1):81–106, 1986. 278
614. Sébastien Racanière, Theophane Weber, David P. Reichert, Lars Buesing, Arthur Guez,
Danilo Jimenez Rezende, Adrià Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yu-
jia Li, Razvan Pascanu, Peter W. Battaglia, Demis Hassabis, David Silver, and Daan Wierstra.
Imagination-augmented agents for deep reinforcement learning. In Advances in Neural
Information Processing Systems, pages 5690–5701, 2017. 57, 138
615. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal,
Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya
Sutskever. Learning transferable visual models from natural language supervision. In
International Conference on Machine Learning, 2021. 276, 277, 320
616. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving lan-
guage understanding by generative pre-training. https://openai.com/blog/language-
unsupervised/, 2018. 246, 262, 263, 320
617. Roxana Rădulescu, Patrick Mannion, Diederik M Roijers, and Ann Nowé. Multi-objective
multi-agent decision making: a utility-based analysis and survey. Autonomous Agents and
Multi-Agent Systems, 34(1):1–52, 2020. 198
618. Jacob Rafati and David C Noelle. Learning representations in model-free hierarchical rein-
forcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33,
pages 10009–10010, 2019. 229, 231, 237, 241
619. Aniruddh Raghu, Maithra Raghu, Samy Bengio, and Oriol Vinyals. Rapid learning or feature
reuse? Towards understanding the effectiveness of MAML. In International Conference on
Learning Representations, 2020. 258
620. Roberta Raileanu and Tim Rocktäschel. RIDE: rewarding impact-driven exploration for
procedurally-generated environments. In International Conference on Learning Representa-
tions, 2020. 177, 234
621. Rajat Raina, Andrew Y Ng, and Daphne Koller. Constructing informative priors using
transfer learning. In Proceedings of the 23rd International Conference on Machine Learning,
pages 713–720, 2006. 268
622. Aravind Rajeswaran, Chelsea Finn, Sham Kakade, and Sergey Levine. Meta-learning with
implicit gradients. In Advances in Neural Information Processing Systems, 2019. 254, 268
623. Kate Rakelly, Aurick Zhou, Chelsea Finn, Sergey Levine, and Deirdre Quillen. Efficient
off-policy meta-reinforcement learning via probabilistic context variables. In International
Conference on Machine Learning, pages 5331–5340. PMLR, 2019. 265
624. Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark
Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In International Conference on
Machine Learning, 2021. 276, 320
625. Jette Randløv. Learning macro-actions in reinforcement learning. In Advances in Neural
Information Processing Systems, pages 1045–1051, 1998. 241
626. J. Rapin and O. Teytaud. Nevergrad - A gradient-free optimization platform. https://
GitHub.com/FacebookResearch/Nevergrad, 2018. 260
627. Tabish Rashid, Mikayel Samvelyan, Christian Schroeder, Gregory Farquhar, Jakob Foerster,
and Shimon Whiteson. QMIX: Monotonic value function factorisation for deep multi-agent
reinforcement learning. In International Conference on Machine Learning, pages 4295–4304.
PMLR, 2018. 207
628. Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In
International Conference on Learning Representations, 2017. 254, 255, 268
629. John R. Rice. The algorithm selection problem. Advances in Computers, 15:65–118, 1976.
259
630. Martin Riedmiller. Neural fitted Q iteration—first experiences with a data efficient neural
reinforcement learning method. In European Conference on Machine Learning, pages 317–328.
Springer, 2005. 74, 87
631. Herbert Robbins. Some aspects of the sequential design of experiments. Bulletin of the
American Mathematical Society, 58(5):527–535, 1952. 47
632. Frank Röder, Manfred Eppe, Phuong DH Nguyen, and Stefan Wermter. Curious hierarchical
actor-critic reinforcement learning. In International Conference on Artificial Neural Networks,
pages 408–419. Springer, 2020. 236, 276
633. Diederik M Roijers, Willem Röpke, Ann Nowé, and Roxana Rădulescu. On following Pareto-
optimal policies in multi-objective planning and reinforcement learning. In Multi-Objective
Decision Making Workshop, 2021. 198
634. Diederik M Roijers, Peter Vamplew, Shimon Whiteson, and Richard Dazeley. A survey
of multi-objective sequential decision-making. Journal of Artificial Intelligence Research,
48:67–113, 2013. 198
635. Bernardino Romera-Paredes and Philip Torr. An embarrassingly simple approach to zero-shot
learning. In International Conference on Machine Learning, pages 2152–2161, 2015. 268, 269,
276
636. Willem Röpke, Roxana Rădulescu, Diederik M Roijers, and Ann Nowé. Communication
strategies in multi-objective normal-form games. In Adaptive and Learning Agents Workshop
2021, 2021. 198
637. Christopher D Rosin. Multi-armed bandits with episode context. Annals of Mathematics and
Artificial Intelligence, 61(3):203–230, 2011. 168, 169
638. Denis Rothman. Transformers for Natural Language Processing. Packt Publishing, 2021. 263
639. Neil Rubens, Mehdi Elahi, Masashi Sugiyama, and Dain Kaplan. Active learning in rec-
ommender systems. In Recommender Systems Handbook, pages 809–846. Springer, 2015.
177
640. Jonathan Rubin and Ian Watson. Computer poker: A review. Artificial Intelligence, 175(5-
6):958–987, 2011. 223
641. Sebastian Ruder. An overview of gradient descent optimization algorithms. arXiv preprint
arXiv:1609.04747, 2016. 85
642. Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions
and use interpretable models instead. Nature Machine Intelligence, 1(5):206–215, 2019. 278
643. Ben Ruijl, Jos Vermaseren, Aske Plaat, and Jaap van den Herik. HEPGAME and the simplifica-
tion of expressions. arXiv preprint arXiv:1405.6369, 2014. 188
644. Gavin A Rummery and Mahesan Niranjan. On-line Q-learning using connectionist systems.
Technical report, University of Cambridge, Department of Engineering, Cambridge, UK, 1994.
45, 50, 59
645. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng
Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, and Li Fei-
Fei. Imagenet large scale visual recognition challenge. International Journal of Computer
Vision, 115(3):211–252, 2015. 245, 246, 262
646. Stuart J Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Pearson Education
Limited, Malaysia, 2016. 14, 17, 56, 57, 59
647. Daniel Russo, Benjamin Van Roy, Abbas Kazerouni, Ian Osband, and Zheng Wen. A tutorial
on Thompson sampling. Foundations and Trends in Machine Learning, 11(1):1–96, 2018. 48
648. Andrei A Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osin-
dero, and Raia Hadsell. Meta-learning with latent embedding optimization. In International
Conference on Learning Representations, 2019. 268
649. Richard M Ryan and Edward L Deci. Intrinsic and extrinsic motivations: Classic definitions
and new directions. Contemporary Educational Psychology, 25(1):54–67, 2000. 241
650. Jordi Sabater and Carles Sierra. Reputation and social network analysis in multi-agent
systems. In Proceedings of the First International Joint Conference on Autonomous Agents and
Multiagent Systems: Part 1, pages 475–482, 2002. 216
651. Sumit Saha. A comprehensive guide to convolutional neural networks—the ELI5 way.
https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-
neural-networks-the-eli5-way-3bd2b1164a53. Towards Data Science, 2018. 305, 312
652. Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, and Ilya Sutskever. Evolution strategies
as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864, 2017. 203, 212, 223, 277
653. Brian Sallans and Geoffrey E Hinton. Reinforcement learning with factored states and actions.
Journal of Machine Learning Research, 5:1063–1088, Aug 2004. 74, 87
654. Mikayel Samvelyan, Tabish Rashid, Christian Schroeder De Witt, Gregory Farquhar, Nantas
Nardelli, Tim GJ Rudner, Chia-Man Hung, Philip HS Torr, Jakob Foerster, and Shimon
Whiteson. The StarCraft multi-agent challenge. In Proceedings of the 18th International
Conference on Autonomous Agents and MultiAgent Systems, AAMAS ’19, Montreal, 2019. 207,
219, 223, 224
655. Jason Sanders and Edward Kandrot. CUDA by example: an introduction to general-purpose
GPU programming. Addison-Wesley Professional, 2010. 325
656. Tuomas Sandholm. The state of solving large incomplete-information games, and application
to poker. AI Magazine, 31(4):13–32, 2010. 223
657. Tuomas Sandholm. Abstraction for solving large incomplete-information games. In Proceed-
ings of the AAAI Conference on Artificial Intelligence, volume 29, 2015. 204
658. Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap.
Meta-learning with memory-augmented neural networks. In International Conference on
Machine Learning, pages 1842–1850, 2016. 268
659. Vieri Giuliano Santucci, Pierre-Yves Oudeyer, Andrew Barto, and Gianluca Baldassarre.
Intrinsically motivated open-ended learning in autonomous robots. Frontiers in Neurorobotics,
13:115, 2020. 277
660. Steve Schaefer. Mathematical recreations. http://www.mathrec.org/old/2002jan/
solutions.html, 2002. 151
661. Jonathan Schaeffer. One Jump Ahead: Computer Perfection at Checkers. Springer Science &
Business Media, 2008. 64
662. Jonathan Schaeffer, Robert Lake, Paul Lu, and Martin Bryant. Chinook, the world man-
machine checkers champion. AI Magazine, 17(1):21, 1996. 57
663. Jonathan Schaeffer, Aske Plaat, and Andreas Junghanns. Unifying single-agent and two-
player search. Information Sciences, 135(3-4):151–175, 2001. 188
664. Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function
approximators. In International Conference on Machine Learning, pages 1312–1320, 2015. 230,
241
665. Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay.
In International Conference on Learning Representations, 2016. 80, 82
666. Tom Schaul and Jürgen Schmidhuber. Metalearning. Scholarpedia, 5(6):4650, 2010. 247, 268
667. Daniel Schleich, Tobias Klamt, and Sven Behnke. Value iteration networks on multiple
levels of abstraction. In Robotics: Science and Systems XV, University of Freiburg, Freiburg im
Breisgau, Germany, 2019. 135
668. Jürgen Schmidhuber. Evolutionary Principles in Self-Referential Learning, or on Learning how
to Learn: the Meta-Meta-... Hook. PhD thesis, Technische Universität München, 1987. 253,
268
669. Jürgen Schmidhuber. Making the world differentiable: On using self-supervised fully recur-
rent neural networks for dynamic reinforcement learning and planning in non-stationary
environments. Technical report, Inst. für Informatik, 1990. 131
670. Jürgen Schmidhuber. An on-line algorithm for dynamic reinforcement learning and planning
in reactive environments. In 1990 IJCNN International Joint Conference on Neural Networks,
pages 253–258. IEEE, 1990. 135, 139, 144
671. Jürgen Schmidhuber. Curious model-building control systems. In Proceedings International
Joint Conference on Neural Networks, pages 1458–1463, 1991. 176
672. Jürgen Schmidhuber. Learning to generate sub-goals for action sequences. In Artificial neural
networks, pages 967–972, 1991. 227
673. Jürgen Schmidhuber. A possibility for implementing curiosity and boredom in model-building
neural controllers. In Proc. of the international conference on simulation of adaptive behavior:
From animals to animats, pages 222–227, 1991. 277
674. Jürgen Schmidhuber, F Gers, and Douglas Eck. Learning nonregular languages: A comparison
of simple recurrent networks and LSTM. Neural Computation, 14(9):2039–2041, 2002. 315
675. Jürgen Schmidhuber, Jieyu Zhao, and MA Wiering. Simple principles of metalearning.
Technical report, IDSIA, 1996. 268
676. Bernhard Schölkopf, Alexander Smola, and Klaus-Robert Müller. Kernel principal component
analysis. In International Conference on Artificial Neural Networks, pages 583–588. Springer,
1997. 15
677. Nicol N Schraudolph, Peter Dayan, and Terrence J Sejnowski. Temporal difference learning of
position evaluation in the game of Go. In Advances in Neural Information Processing Systems,
pages 817–824, 1994. 74
678. Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre,
Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, Timothy
Lillicrap, and David Silver. Mastering Atari, Go, chess and shogi by planning with a learned
model. Nature, 588(7839):604–609, 2020. 138, 139, 140, 143, 144, 275, 276
679. Julian Schrittwieser, Thomas Hubert, Amol Mandhane, Mohammadamin Barekatain, Ioannis
Antonoglou, and David Silver. Online and offline reinforcement learning by planning with a
learned model. arXiv preprint arXiv:2104.06294, 2021. 140, 275
680. John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust
region policy optimization. In International Conference on Machine Learning, pages 1889–1897,
2015. 94, 104, 105, 116, 258
681. John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-
dimensional continuous control using generalized advantage estimation. In International
Conference on Learning Representations, 2016. 112
682. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal
policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. 94, 105, 116, 272
683. Nicolas Schweighofer and Kenji Doya. Meta-learning in reinforcement learning. Neural
Networks, 16(1):5–9, 2003. 268
684. Marco Scutari. Learning Bayesian networks with the bnlearn R package. Journal of Statistical
Software, 35(i03), 2010. 278
685. Thomas D Seeley. The honey bee colony as a superorganism. American Scientist, 77(6):546–
553, 1989. 191
686. Marwin HS Segler, Mike Preuss, and Mark P Waller. Planning chemical syntheses with deep
neural networks and symbolic AI. Nature, 555(7698):604, 2018. 188, 275
687. Ramanan Sekar, Oleh Rybkin, Kostas Daniilidis, Pieter Abbeel, Danijar Hafner, and Deepak
Pathak. Planning to explore via self-supervised world models. In International Conference on
Machine Learning, 2020. 129, 138, 140, 144
688. Oliver G Selfridge, Richard S Sutton, and Andrew G Barto. Training and tracking in robotics.
In International Joint Conference on Artificial Intelligence, pages 670–672, 1985. 176
689. Andrew W. Senior, Richard Evans, John Jumper, James Kirkpatrick, Laurent Sifre, Tim Green,
Chongli Qin, Augustin Zídek, Alexander W. R. Nelson, Alex Bridgland, Hugo Penedones,
Stig Petersen, Karen Simonyan, Steve Crossan, Pushmeet Kohli, David T. Jones, David Silver,
Koray Kavukcuoglu, and Demis Hassabis. Improved protein structure prediction using
potentials from deep learning. Nature, 577(7792):706–710, 2020. 188
690. Burr Settles. Active learning literature survey. Technical report, University of Wisconsin-
Madison Department of Computer Sciences, 2009. 177
691. Noor Shaker, Julian Togelius, and Mark J Nelson. Procedural Content Generation in Games.
Springer, 2016. 57, 177, 265
692. Guy Shani, Joelle Pineau, and Robert Kaplow. A survey of point-based POMDP solvers.
Autonomous Agents and Multi-Agent Systems, 27(1):1–51, 2013. 151
693. Claude E Shannon. Programming a computer for playing chess. In Computer Chess Com-
pendium, pages 2–13. Springer, 1988. 6, 152
694. Lloyd S Shapley. Stochastic games. In Proceedings of the National Academy of Sciences,
volume 39, pages 1095–1100, 1953. 194
695. Yaron Shoham and Gal Elidan. Solving Sokoban with forward-backward reinforcement
learning. In Proceedings of the International Symposium on Combinatorial Search, volume 12,
pages 191–193, 2021. 242
696. Yoav Shoham and Kevin Leyton-Brown. Multiagent Systems: Algorithmic, Game-Theoretic,
and Logical Foundations. Cambridge University Press, 2008. 193
697. Yoav Shoham, Rob Powers, and Trond Grenager. Multi-agent reinforcement learning: a
critical survey. Technical report, Stanford University, 2003. 223
698. Pranav Shyam, Shubham Gupta, and Ambedkar Dukkipati. Attentive recurrent comparators.
In International Conference on Machine Learning, pages 3173–3181. PMLR, 2017. 268
699. Robin C Sickles and Valentin Zelenyuk. Measurement of productivity and efficiency. Cambridge
University Press, 2019. 197
700. Daniel L Silver, Qiang Yang, and Lianghao Li. Lifelong machine learning systems: Beyond
learning algorithms. In 2013 AAAI Spring Symposium Series, 2013. 246
701. David Silver. Reinforcement learning and simulation based search in the game of Go. PhD
thesis, University of Alberta, 2009. 167
702. David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den
Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot,
Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy
Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mas-
tering the game of Go with deep neural networks and tree search. Nature, 529(7587):484,
2016. 57, 150, 172, 175, 178, 179, 182, 187, 246
703. David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur
Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap,
Karen Simonyan, and Demis Hassabis. A general reinforcement learning algorithm that
masters chess, shogi, and Go through self-play. Science, 362(6419):1140–1144, 2018. 144, 178,
182, 183, 187, 275, 276
704. David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller.
Deterministic policy gradient algorithms. In International Conference on Machine Learning,
pages 387–395, 2014. 107, 108, 132
705. David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur
Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy
Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis
Hassabis. Mastering the game of Go without human knowledge. Nature, 550(7676):354, 2017.
74, 144, 150, 169, 171, 172, 173, 175, 178, 180, 181, 182, 183, 187, 246, 275
706. David Silver, Satinder Singh, Doina Precup, and Richard S Sutton. Reward is enough. Artificial
Intelligence, page 103535, 2021. 17, 218
707. David Silver, Richard S Sutton, and Martin Müller. Reinforcement learning of local shape in
the game of Go. In International Joint Conference on Artificial Intelligence, volume 7, pages
1053–1058, 2007. 188
708. David Silver, Richard S Sutton, and Martin Müller. Temporal-difference search in computer
Go. Machine Learning, 87(2):183–219, 2012. 131
709. David Silver, Hado van Hasselt, Matteo Hessel, Tom Schaul, Arthur Guez, Tim Harley,
Gabriel Dulac-Arnold, David Reichert, Neil Rabinowitz, Andre Barreto, and Thomas Degris.
The predictron: End-to-end learning and planning. In Proceedings of the 34th International
Conference on Machine Learning, pages 3191–3199, 2017. 129, 135, 138, 143, 144
710. David Simões, Nuno Lau, and Luís Paulo Reis. Multi agent deep learning with cooperative
communication. Journal of Artificial Intelligence and Soft Computing Research, 10, 2020. 208
711. Satinder Singh, Andrew G Barto, and Nuttapong Chentanez. Intrinsically motivated rein-
forcement learning. Technical report, University of Massachusetts Amherst, Department of Computer
Science, 2005. 237, 238, 277
712. Satinder Singh, Richard L Lewis, Andrew G Barto, and Jonathan Sorg. Intrinsically motivated
reinforcement learning: An evolutionary perspective. IEEE Transactions on Autonomous
Mental Development, 2(2):70–82, 2010. 223
713. David J Slate and Lawrence R Atkin. Chess 4.5—The Northwestern University chess program.
In Chess Skill in Man and Machine, pages 82–118. Springer, 1983. 163
714. Gillian Smith. An analog history of procedural content generation. In Foundations of Digital
Games, 2015. 177
715. Stephen J Smith, Dana Nau, and Tom Throop. Computer bridge: A big win for AI planning.
AI Magazine, 19(2):93–93, 1998. 208
716. Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning.
In Advances in Neural Information Processing Systems, pages 4077–4087, 2017. 264, 268
717. Doron Sobol, Lior Wolf, and Yaniv Taigman. Visual analogies between Atari games for
studying transfer learning in RL. arXiv preprint arXiv:1807.11074, 2018. 265
718. Sungryull Sohn, Junhyuk Oh, and Honglak Lee. Hierarchical reinforcement learning for
zero-shot generalization with subtask dependencies. In Advances in Neural Information
Processing Systems, pages 7156–7166, 2018. 268, 276
719. Kyunghwan Son, Daewoo Kim, Wan Ju Kang, David Earl Hostallero, and Yung Yi. QTRAN:
Learning to factorize with transformation for cooperative multi-agent reinforcement learning.
In International Conference on Machine Learning, pages 5887–5896. PMLR, 2019. 207
720. Fengguang Song and Jack Dongarra. Scaling up matrix computations on shared-memory
manycore systems with 1000 CPU cores. In Proceedings of the 28th ACM International
Conference on Supercomputing, pages 333–342. ACM, 2014. 325
721. H. Francis Song, Abbas Abdolmaleki, Jost Tobias Springenberg, Aidan Clark, Hubert Soyer,
Jack W. Rae, Seb Noury, Arun Ahuja, Siqi Liu, Dhruva Tirumala, Nicolas Heess, Dan Belov,
Martin A. Riedmiller, and Matthew M. Botvinick. V-MPO: on-policy maximum a posteriori
policy optimization for discrete and continuous control. In International Conference on
Learning Representations, 2019. 265
722. Song Mei, Andrea Montanari, and Phan-Minh Nguyen. A mean field view of the landscape of two-layer
neural networks. In Proceedings of the National Academy of Sciences, volume 115, pages
E7665–E7671, 2018. 307
723. Aravind Srinivas, Allan Jabri, Pieter Abbeel, Sergey Levine, and Chelsea Finn. Universal
planning networks. In International Conference on Machine Learning, pages 4739–4748, 2018.
135
724. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhut-
dinov. Dropout: a simple way to prevent neural networks from overfitting. The Journal of
Machine Learning Research, 15(1):1929–1958, 2014. 321
725. Eric Steinberger. Single deep counterfactual regret minimization. arXiv preprint
arXiv:1901.07621, 2019. 333
726. Martin Stolle and Doina Precup. Learning options in reinforcement learning. In International
Symposium on Abstraction, Reformulation, and Approximation, pages 212–223. Springer, 2002.
230, 231, 232
727. Lise Stork, Andreas Weber, Jaap van den Herik, Aske Plaat, Fons Verbeek, and Katherine
Wolstencroft. Large-scale zero-shot learning in the wild: Classifying zoological illustrations.
Ecological Informatics, 62:101222, 2021. 268
728. Darin Straus. AlphaZero implementation and tutorial. https://towardsdatascience.com/
alphazero-implementation-and-tutorial-f4324d65fdfc, 2018. 173
729. Jiawei Su, Danilo Vasconcellos Vargas, and Kouichi Sakurai. One pixel attack for fooling
deep neural networks. IEEE Transactions on Evolutionary Computation, 2019. 317
730. Felipe Petroski Such, Vashisht Madhavan, Edoardo Conti, Joel Lehman, Kenneth O Stanley,
and Jeff Clune. Deep neuroevolution: Genetic algorithms are a competitive alternative for
training deep neural networks for reinforcement learning. arXiv preprint arXiv:1712.06567,
2017. 223
731. Sainbayar Sukhbaatar, Emily Denton, Arthur Szlam, and Rob Fergus. Learning goal embed-
dings via self-play for hierarchical reinforcement learning. arXiv preprint arXiv:1811.09083,
2018. 231, 233
732. Baochen Sun, Jiashi Feng, and Kate Saenko. Return of frustratingly easy domain adaptation.
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016. 252
733. Wenyu Sun and Ya-Xiang Yuan. Optimization Theory and Methods: Nonlinear Programming,
volume 1. Springer Science & Business Media, 2006. 105
734. Peter Sunehag, Guy Lever, Audrunas Gruslys, Wojciech Marian Czarnecki, Vinicius Zambaldi,
Max Jaderberg, Marc Lanctot, Nicolas Sonnerat, Joel Z Leibo, Karl Tuyls, and Thore Graepel.
Value-decomposition networks for cooperative multi-agent learning. In Proceedings of the
17th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS 2018,
Stockholm, Sweden, 2017. 207, 241
735. Peter Sunehag, Guy Lever, Siqi Liu, Josh Merel, Nicolas Heess, Joel Z Leibo, Edward Hughes,
Tom Eccles, and Thore Graepel. Reinforcement learning agents acquire flocking and symbiotic
behaviour in simulated ecosystems. In Artificial Life Conference Proceedings, pages 103–110.
MIT Press, 2019. 211
736. Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales.
Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, pages 1199–1208, 2018. 264, 268
737. Ilya Sutskever and Vinod Nair. Mimicking Go experts with convolutional neural networks.
In International Conference on Artificial Neural Networks, pages 101–110. Springer, 2008. 74
738. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural
networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014. 320
739. Richard S Sutton. Learning to predict by the methods of temporal differences. Machine
Learning, 3(1):9–44, 1988. 45, 59
740. Richard S Sutton. Integrated architectures for learning, planning, and reacting based on
approximating dynamic programming. In Machine Learning Proceedings 1990, pages 216–224.
Elsevier, 1990. 121, 123, 124, 125, 143
741. Richard S Sutton. Dyna, an integrated architecture for learning, planning, and reacting. ACM
Sigart Bulletin, 2(4):160–163, 1991. 143
742. Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction, Second Edition.
MIT Press, 2018. 7, 27, 31, 51, 59, 66, 70, 73, 99, 122, 124, 143, 169
743. Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. Policy gradient
methods for reinforcement learning with function approximation. In Advances in Neural
Information Processing Systems, pages 1057–1063, 2000. 116
744. Richard S Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: a
framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1-
2):181–211, 1999. 229, 230, 235, 240, 241, 276
745. Vivienne Sze, Yu-Hsin Chen, Tien-Ju Yang, and Joel S Emer. Efficient processing of deep
neural networks: A tutorial and survey. Proceedings of the IEEE, 105(12):2295–2329, 2017. 325
746. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian
Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International
Conference on Learning Representations, 2013. 317
747. Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and Pieter Abbeel. Value iteration
networks. In Advances in Neural Information Processing Systems, pages 2154–2162, 2016. 134,
135, 138, 139, 143, 144
748. Oskari Tammelin. Solving large imperfect information games using CFR+. arXiv preprint
arXiv:1407.5042, 2014. 203, 204
749. Ardi Tampuu, Tambet Matiisen, Dorian Kodelja, Ilya Kuzovkin, Kristjan Korjus, Juhan Aru,
Jaan Aru, and Raul Vicente. Multiagent cooperation and competition with deep reinforcement
learning. PLoS ONE, 12(4):e0172395, 2017. 208, 223
750. Ming Tan. Multi-agent reinforcement learning: Independent vs. cooperative agents. In
International Conference on Machine Learning, pages 330–337, 1993. 206
751. Pang-Ning Tan, Michael Steinbach, and Vipin Kumar. Introduction to Data Mining. Pearson
Education India, 2016. 248
752. Hongyao Tang, Jianye Hao, Tangjie Lv, Yingfeng Chen, Zongzhang Zhang, Hangtian Jia,
Chunxu Ren, Yan Zheng, Zhaopeng Meng, Changjie Fan, and Li Wang. Hierarchical deep
multiagent reinforcement learning with temporal abstraction. arXiv preprint arXiv:1809.09332,
2018. 238
753. Ryutaro Tanno, Kai Arulkumaran, Daniel C Alexander, Antonio Criminisi, and Aditya Nori.
Adaptive neural trees. In International Conference on Machine Learning, pages 6166–6175,
2019. 278
754. Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David
Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy Lillicrap, and Martin
Riedmiller. DeepMind control suite. arXiv preprint arXiv:1801.00690, 2018. 113, 114, 116, 262,
332
755. Yuval Tassa, Tom Erez, and Emanuel Todorov. Synthesis and stabilization of complex
behaviors through online trajectory optimization. In 2012 IEEE/RSJ International Conference
on Intelligent Robots and Systems, pages 4906–4913, 2012. 138, 143
756. Yuval Tassa, Saran Tunyasuvunakool, Alistair Muldal, Yotam Doron, Siqi Liu, Steven Bohez,
Josh Merel, Tom Erez, Timothy Lillicrap, and Nicolas Heess. dm_control: Software and tasks
for continuous control. arXiv preprint arXiv:2006.12983, 2020. 113
757. Matthew E Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A
survey. Journal of Machine Learning Research, 10(Jul):1633–1685, 2009. 268
758. Shoshannah Tekofsky, Pieter Spronck, Martijn Goudbeek, Aske Plaat, and Jaap van den Herik.
Past our prime: A study of age and play style development in Battlefield 3. IEEE Transactions
on Computational Intelligence and AI in Games, 7(3):292–303, 2015. 223
759. Justin K Terry and Benjamin Black. Multiplayer support for the arcade learning environment.
arXiv preprint arXiv:2009.09341, 2020. 221
760. Justin K Terry, Benjamin Black, Ananth Hari, Luis Santos, Clemens Dieffendahl, Niall L
Williams, Yashas Lokesh, Caroline Horsch, and Praveen Ravi. PettingZoo: Gym for multi-
agent reinforcement learning. arXiv preprint arXiv:2009.14471, 2020. 221, 242
761. Gerald Tesauro. Neurogammon wins Computer Olympiad. Neural Computation, 1(3):321–323,
1989. 74, 149
762. Gerald Tesauro. TD-Gammon: A self-teaching backgammon program. In Applications of
Neural Networks, pages 267–285. Springer, 1995. 46, 74, 178, 275
763. Gerald Tesauro. Temporal difference learning and TD-Gammon. Communications of the
ACM, 38(3):58–68, 1995. 74, 150, 178
764. Gerald Tesauro. Programming backgammon using self-teaching neural nets. Artificial
Intelligence, 134(1-2):181–199, 2002. 149
765. Chen Tessler, Shahar Givony, Tom Zahavy, Daniel Mankowitz, and Shie Mannor. A deep
hierarchical approach to lifelong learning in Minecraft. In Proceedings of the AAAI Conference
on Artificial Intelligence, volume 31, 2017. 231, 233
766. Marc Teyssier and Daphne Koller. Ordering-based search: A simple and effective algorithm
for learning Bayesian networks. arXiv preprint arXiv:1207.1429, 2012. 278
767. Shantanu Thakoor, Surag Nair, and Megha Jhunjhunwala. Learning to play Othello without
human knowledge. Stanford University CS238 Final Project Report, 2017. 173, 183, 184, 333
768. Sergios Theodoridis and Konstantinos Koutroumbas. Pattern Recognition. Academic Press,
1999. 301
769. William R Thompson. On the likelihood that one unknown probability exceeds another in
view of the evidence of two samples. Biometrika, 25(3/4):285–294, 1933. 48
770. Sebastian Thrun. Learning to play the game of chess. In Advances in Neural Information
Processing Systems, pages 1069–1076, 1995. 162
771. Sebastian Thrun. Is learning the 𝑛-th thing any easier than learning the first? In Advances
in Neural Information Processing Systems, pages 640–646. Morgan Kaufmann, 1996. 250, 253
772. Sebastian Thrun. Explanation-based neural network learning: A lifelong learning approach,
volume 357. Springer, 2012. 248
773. Sebastian Thrun and Lorien Pratt. Learning to Learn. Springer, 2012. 248, 253, 260, 268
774. Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B Tenenbaum, and Phillip Isola. Rethinking
few-shot image classification: a good embedding is all you need? In European Conference on
Computer Vision, 2020. 264, 268
775. Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, and C Lawrence Zitnick. ELF:
An extensive, lightweight and flexible research platform for real-time strategy games. In
Advances in Neural Information Processing Systems, pages 2659–2669, 2017. 173, 183, 184, 333
776. Yuandong Tian, Jerry Ma, Qucheng Gong, Shubho Sengupta, Zhuoyuan Chen, and
C. Lawrence Zitnick. ELF OpenGo. https://github.com/pytorch/ELF, 2018. 184
777. Yuandong Tian and Yan Zhu. Better computer Go player with neural network and long-term
prediction. In International Conference on Learning Representations, 2016. 184
778. Emanuel Todorov. Linearly-solvable Markov decision problems. In Advances in Neural
Information Processing Systems, pages 1369–1376, 2007. 107
779. Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based
control. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages
5026–5033, 2012. 41, 92, 93, 116, 262, 332
780. Julian Togelius, Alex J Champandard, Pier Luca Lanzi, Michael Mateas, Ana Paiva, Mike
Preuss, and Kenneth O Stanley. Procedural content generation: Goals, challenges and
actionable steps. In Artificial and Computational Intelligence in Games. Schloss Dagstuhl-
Leibniz-Zentrum für Informatik, 2013. 57, 177, 223
781. Tatiana Tommasi, Martina Lanzi, Paolo Russo, and Barbara Caputo. Learning the roots of
visual domain shift. In European Conference on Computer Vision, pages 475–482. Springer,
2016. 247, 251, 252
782. Armon Toubman, Jan Joris Roessingh, Pieter Spronck, Aske Plaat, and Jaap Van Den Herik.
Dynamic scripting with team coordination in air combat simulation. In International Confer-
ence on Industrial, Engineering and other Applications of Applied Intelligent Systems, pages
440–449. Springer, 2014. 223
783. Thomas Trenner. Beating Kuhn poker with CFR using Python. https://ai.plainenglish.io/building-a-poker-ai-part-6-beating-kuhn-poker-with-cfr-using-python-1b4172a6ab2d. 204, 205
784. Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross
Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, and Hugo Larochelle.
Meta-dataset: A dataset of datasets for learning to learn from few examples. In International
Conference on Learning Representations, 2020. 262, 263, 268, 332
785. John Tromp. Number of legal Go states. http://tromp.github.io/go/legal.html, 2016.
68
786. John N Tsitsiklis and Benjamin Van Roy. Analysis of temporal-difference learning with
function approximation. In Advances in Neural Information Processing Systems, pages 1075–
1081, 1997. 66, 73, 87
787. Alan M Turing. Digital Computers Applied to Games. Pitman & Sons, 1953. 6, 152, 161, 163
788. Karl Tuyls, Julien Perolat, Marc Lanctot, Joel Z Leibo, and Thore Graepel. A generalised
method for empirical game theoretic analysis. In Proceedings of the 17th International
Conference on Autonomous Agents and MultiAgent Systems, AAMAS 2018, Stockholm, Sweden,
2018. 194
789. Karl Tuyls and Gerhard Weiss. Multiagent learning: Basics, challenges, and prospects. AI
Magazine, 33(3):41–41, 2012. 223
790. Paul Tylkin, Goran Radanovic, and David C Parkes. Learning robust helpful behaviors in
two-player cooperative Atari environments. In Proceedings of the 20th International Conference
on Autonomous Agents and MultiAgent Systems, pages 1686–1688, 2021. 221
791. Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative
domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, pages 7167–7176, 2017. 252
792. Wiebe Van der Hoek and Michael Wooldridge. Multi-agent systems. Foundations of Artificial
Intelligence, 3:887–928, 2008. 223
793. Michiel Van Der Ree and Marco Wiering. Reinforcement learning in the game of Othello:
learning against a fixed opponent and learning from self-play. In IEEE Adaptive Dynamic
Programming and Reinforcement Learning, pages 108–115. IEEE, 2013. 188
794. Max J van Duijn. The Lazy Mindreader: a Humanities Perspective on Mindreading and Multiple-
Order Intentionality. PhD thesis, Leiden University, 2016. 208
795. Max J Van Duijn, Ineke Sluiter, and Arie Verhagen. When narrative takes over: The rep-
resentation of embedded mindstates in Shakespeare’s Othello. Language and Literature,
24(2):148–166, 2015. 208
796. Max J Van Duijn and Arie Verhagen. Recursive embedding of viewpoints, irregularity, and
the role for a flexible framework. Pragmatics, 29(2):198–225, 2019. 208
797. Frank Van Harmelen, Vladimir Lifschitz, and Bruce Porter. Handbook of Knowledge Represen-
tation. Elsevier, 2008. 232
798. Hado Van Hasselt, Yotam Doron, Florian Strub, Matteo Hessel, Nicolas Sonnerat, and Joseph
Modayil. Deep reinforcement learning and the deadly triad. arXiv:1812.02648, 2018. 74
799. Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with Double
Q-Learning. In AAAI, volume 2, page 5. Phoenix, AZ, 2016. 80, 81, 82
800. Matthijs Van Leeuwen and Arno Knobbe. Diverse subgroup set discovery. Data Mining and
Knowledge Discovery, 25(2):208–242, 2012. 15
801. Kristof Van Moffaert and Ann Nowé. Multi-objective reinforcement learning using sets of
pareto dominating policies. Journal of Machine Learning Research, 15(1):3483–3512, 2014.
198, 223
802. Gerard JP Van Westen, Jörg K Wegner, Peggy Geluykens, Leen Kwanten, Inge Vereycken,
Anik Peeters, Adriaan P IJzerman, Herman WT van Vlijmen, and Andreas Bender. Which
compound to select in lead optimization? Prospectively validated proteochemometric models
guide preclinical development. PLoS ONE, 6(11):e27518, 2011. 188
803. Joaquin Vanschoren. Meta-learning: A survey. arXiv preprint arXiv:1810.03548, 2018. 268
804. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural
Information Processing Systems, pages 5998–6008, 2017. 320
805. Vivek Veeriah, Tom Zahavy, Matteo Hessel, Zhongwen Xu, Junhyuk Oh, Iurii Kemaev, Hado
van Hasselt, David Silver, and Satinder Singh. Discovery of options via meta-learned subgoals.
arXiv preprint arXiv:2102.06741, 2021. 241
806. Alfredo Vellido, José David Martín-Guerrero, and Paulo JG Lisboa. Making machine learning
models interpretable. In ESANN, volume 12, pages 163–172, 2012. 278
807. Jos AM Vermaseren. New features of FORM. arXiv preprint math-ph/0010025, 2000. 169
808. Jean-Philippe Vert, Koji Tsuda, and Bernhard Schölkopf. A primer on kernel methods. In
Kernel Methods in Computational Biology, volume 47, pages 35–70. MIT press Cambridge,
MA, 2004. 248
809. Alexander Vezhnevets, Volodymyr Mnih, Simon Osindero, Alex Graves, Oriol Vinyals, John
Agapiou, and Koray Kavukcuoglu. Strategic attentive writer for learning macro-actions. In
Advances in Neural Information Processing Systems, pages 3486–3494, 2016. 231, 233
810. Alexander Vezhnevets, Simon Osindero, Tom Schaul, Nicolas Heess, Max Jaderberg, David
Silver, and Koray Kavukcuoglu. Feudal networks for hierarchical reinforcement learning. In
International Conference on Machine Learning, pages 3540–3549. PMLR, 2017. 231, 233, 241
811. Ricardo Vilalta and Youssef Drissi. A perspective view and survey of meta-learning. Artificial
Intelligence Review, 18(2):77–95, 2002. 259, 268
812. Oriol Vinyals, Igor Babuschkin, Wojciech M. Czarnecki, Michaël Mathieu, Andrew Dudzik,
Junyoung Chung, David H. Choi, Richard Powell, Timo Ewalds, Petko Georgiev, Junhyuk
Oh, Dan Horgan, Manuel Kroiss, Ivo Danihelka, Aja Huang, Laurent Sifre, Trevor Cai,
John P. Agapiou, Max Jaderberg, Alexander Sasha Vezhnevets, Rémi Leblond, Tobias Pohlen,
Valentin Dalibard, David Budden, Yury Sulsky, James Molloy, Tom Le Paine, Çaglar Gülçehre,
Ziyu Wang, Tobias Pfaff, Yuhuai Wu, Roman Ring, Dani Yogatama, Dario Wünsch, Katrina
McKinney, Oliver Smith, Tom Schaul, Timothy P. Lillicrap, Koray Kavukcuoglu, Demis
Hassabis, Chris Apps, and David Silver. Grandmaster level in StarCraft II using multi-agent
reinforcement learning. Nature, 575(7782):350–354, 2019. 7, 69, 214, 220, 223, 238, 277
813. Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching networks
for one shot learning. In Advances in Neural Information Processing Systems, pages 3630–3638,
2016. 262, 264, 268, 332
814. Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets,
Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John P. Agapiou, Julian Schrittwieser,
John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt,
David Silver, Timothy P. Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David
Lawrence, Anders Ekermo, Jacob Repp, and Rodney Tsing. StarCraft II: A new challenge for
reinforcement learning. arXiv:1708.04782, 2017. 223, 332
815. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural
image caption generator. In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 3156–3164, 2015. 314
816. Vanessa Volz, Jacob Schrum, Jialin Liu, Simon M Lucas, Adam Smith, and Sebastian Risi.
Evolving Mario levels in the latent space of a deep convolutional generative adversarial
network. In Proceedings of the Genetic and Evolutionary Computation Conference, 2018.
858. Zhao Yang, Mike Preuss, and Aske Plaat. Transfer learning and curriculum learning in
Sokoban. arXiv preprint arXiv:2105.11702, 2021. 57, 268
859. Weirui Ye, Shaohuai Liu, Thanard Kurutach, Pieter Abbeel, and Yang Gao. Mastering Atari
games with limited data. arXiv preprint arXiv:2111.00210, 2021. 140
860. Jaesik Yoon, Taesup Kim, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, and Sungjin Ahn.
Bayesian model-agnostic meta-learning. In Proceedings of the 32nd International Conference
on Neural Information Processing Systems, pages 7343–7353, 2018. 254, 268
861. Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in
deep neural networks? In Neural Information Processing Systems, pages 3320–3328, 2014. 249
862. Chao Yu, Akash Velu, Eugene Vinitsky, Yu Wang, Alexandre Bayen, and Yi Wu. The surprising
effectiveness of PPO in cooperative, multi-agent games. arXiv preprint arXiv:2103.01955, 2021.
207
863. Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Karol Hausman, Chelsea Finn,
and Sergey Levine. Meta-world: A benchmark and evaluation for multi-task and meta
reinforcement learning. In Conference on Robot Learning, pages 1094–1100. PMLR, 2020. 113,
262, 264, 265, 268, 269, 332
864. Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks.
In European Conference on Computer Vision, pages 818–833. Springer, 2014. 245
865. Qinsong Zeng, Jianchang Zhang, Zhanpeng Zeng, Yongsheng Li, Ming Chen, and Sifan Liu.
PhoenixGo. https://github.com/Tencent/PhoenixGo, 2018. 173, 183, 184
866. Amy Zhang, Nicolas Ballas, and Joelle Pineau. A dissection of overfitting and generalization
in continuous reinforcement learning. arXiv preprint arXiv:1806.07937, 2018. 278, 321
867. Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Under-
standing deep learning (still) requires rethinking generalization. Communications of the
ACM, 64(3):107–115, 2021. 278
868. Chiyuan Zhang, Oriol Vinyals, Remi Munos, and Samy Bengio. A study on overfitting in
deep reinforcement learning. arXiv preprint arXiv:1804.06893, 2018. 278
869. Kaiqing Zhang, Zhuoran Yang, and Tamer Başar. Multi-agent reinforcement learning: A
selective overview of theories and algorithms. arXiv preprint arXiv:1911.10635, 2019. 194, 195
870. Kaiqing Zhang, Zhuoran Yang, Han Liu, Tong Zhang, and Tamer Basar. Fully decentralized
multi-agent reinforcement learning with networked agents. In International Conference on
Machine Learning, pages 5872–5881. PMLR, 2018. 212
871. Lei Zhang. Transfer adaptation learning: A decade survey. arXiv:1903.04687, 2019. 252, 268
872. Lunjun Zhang, Ge Yang, and Bradly C Stadie. World model as a graph: Learning latent
landmarks for planning. In International Conference on Machine Learning, pages 12611–12620.
PMLR, 2021. 131, 138, 233
873. Marvin Zhang, Sharad Vikram, Laura Smith, Pieter Abbeel, Matthew J Johnson, and Sergey
Levine. Solar: Deep structured representations for model-based reinforcement learning. In
International Conference on Machine Learning, pages 7444–7453, 2019. 144
874. Shangtong Zhang and Richard S Sutton. A deeper look at experience replay. arXiv preprint
arXiv:1712.01275, 2017. 75, 76, 87
875. Wenshuai Zhao, Jorge Peña Queralta, and Tomi Westerlund. Sim-to-real transfer in deep
reinforcement learning for robotics: a survey. In 2020 IEEE Symposium Series on Computational
Intelligence (SSCI), pages 737–744. IEEE, 2020. 279
876. Yan Zheng, Zhaopeng Meng, Jianye Hao, Zongzhang Zhang, Tianpei Yang, and Changjie
Fan. A deep Bayesian policy reuse approach against non-stationary agents. In 32nd Neural
Information Processing Systems, pages 962–972, 2018. 208
877. Neng-Fa Zhou and Agostino Dovier. A tabled Prolog program for solving Sokoban. Funda-
menta Informaticae, 124(4):561–575, 2013. 27, 56
878. Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui
Xiong, and Qing He. A comprehensive survey on transfer learning. Proceedings of the IEEE,
109(1):43–76, 2020. 248, 268
879. Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, and Anind K Dey. Maximum entropy
inverse reinforcement learning. In AAAI, volume 8, pages 1433–1438. Chicago, IL, USA, 2008.
107
880. Martin Zinkevich, Michael Johanson, Michael Bowling, and Carmelo Piccione. Regret
minimization in games with incomplete information. In Advances in Neural Information
Processing Systems, pages 1729–1736, 2008. 203, 223, 333
881. Luisa Zintgraf, Kyriacos Shiarli, Vitaly Kurin, Katja Hofmann, and Shimon Whiteson. Fast
context adaptation via meta-learning. In International Conference on Machine Learning, pages
7693–7702. PMLR, 2019. 267
List of Tables

Glossary
𝐴
action in a state. 30
𝐴
advantage function in actor critic. 101
𝐶𝑝
exploration/exploitation constant in MCTS; high is more exploration. 169
𝐷
dataset. 297
𝐼𝜔
initiation set for option 𝜔. 229
𝑄
state-action value. 37
𝑅
reward. 33
𝑆
state. 29
𝑇
transition. 31
𝑉
value. 36
Ω
set of options 𝜔 (hierarchical reinforcement learning). 230
𝛼
learning rate. 46
𝛽𝜔(𝑠)
termination condition for option 𝜔 at state 𝑠. 229
𝜖-greedy
exploration/exploitation rule that selects an 𝜖 fraction of random exploration actions. 48
𝛾
discount rate, to reduce the importance of future rewards. 33
L
loss function. 306
T𝑖
base-learning task T𝑖 = (𝐷𝑖,train, L𝑖), part of a meta-learning task. 255
𝜔
option (hierarchical reinforcement learning). 229
𝜔
hyperparameters (meta-learning). 255
𝜙
parameters for the value network in actor critic (as opposed to 𝜃, the policy parameters). 99
𝜋
policy. 33
𝜏
trajectory, trace, episode, sequence. 35
𝜃
parameters, weights in the neural network. 69, 99, 299
A2C
advantage actor critic. 103
A3C
asynchronous advantage actor critic. 103
accuracy
the total number of true positives and negatives divided by the total number of predictions. 299
ACO
ant colony optimization. 222
ALE
arcade learning environment. 67
BERT
bidirectional encoder representations from transformers. 263
bootstrapping
old estimates of a value are refined with new updates. 45
CFR
counterfactual regret minimization. 202
CPU
central processing unit. 324
D4PG
distributional distributed deep deterministic policy gradient. 113
DDDQN
dueling double deep Q-network. 82
DDPG
deep deterministic policy gradient. 108
DDQN
double deep Q-network. 81
deep learning
training a deep neural network to approximate a function, used for high-dimensional problems.
2
deep reinforcement learning
approximating value, policy, and transition functions with a deep neural network. 3
deep supervised learning
approximating a function with a deep neural network; often for regression or image classification. 297
DQN
deep Q-network. 75
entropy
measure of the amount of uncertainty in a distribution. 292
exploitation
selecting actions as suggested by the current best policy 𝜋 (𝑠). 48
exploration
selecting other actions than those that the policy 𝜋 (𝑠) suggests. 47
few-shot learning
task with which meta-learning is often evaluated, to see how well the meta-learner can learn
with only a few training examples. 253
finetuning
training the pre-trained network on the new dataset. 247
function approximation
approximation of a mathematical function, a main goal of machine learning, often performed
by deep learning. 12
GAN
generative adversarial network. 316
GPT-3
generative pretrained transformer 3. 263
GPU
graphical processing unit. 322
hyperparameters
determine the behavior of a learning algorithm; base-learning learns parameters 𝜃, meta-
learning learns hyperparameters 𝜔. 267
LSTM
long short-term memory. 313
machine learning
learning a function or model from data. 12
MADDPG
multi-agent DDPG. 207
MAML
model-agnostic meta-learning. 257
Markov decision process
stochastic decision process that has the Markov (no-memory) property: the next state depends
only on the current state and the action. 28
MuJoCo
Multi-Joint dynamics with Contact. 92
optimization
find an optimal element in a space; used in many aspects in machine learning. 8
overfitting
high-capacity models can overtrain: they model both the signal and the noise, instead of just
the signal. 302
parameters
the parameters 𝜃 (weights of a neural network) connect the neurons, together they determine
the functional relation between input and output. 299
PBT
population based training. 212
PETS
probabilistic ensembles with trajectory sampling. 129
PEX
prioritized experience replay. 82
PILCO
probabilistic inference for learning control. 128
PPO
proximal policy optimization. 105
pretraining
transferring parameters trained on the old task to the new task. 247
REINFORCE
REward Increment = Non-negative Factor × Offset Reinforcement × Characteristic Eligibility.
97
reinforcement learning
agent learns a policy for a sequential decision problem from environment feedback on its
actions. 3
SAC
soft actor critic. 106
SARSA
state action reward state action. 50
sequential decision problem
problem consisting of a sequence of decisions. 4
supervised learning
training a predictive model on a labeled dataset. 3
TD
temporal difference. 45
TPU
tensor processing unit. 180
transfer learning
using part of a network (pretraining) to speed up learning (finetuning) on a new dataset. 247
TRPO
trust region policy optimization. 104
unsupervised learning
clustering elements in an unlabeled dataset based on an inherent metric. 15
VI
value iteration. 40
VIN
value iteration network. 134
VPN
value prediction network. 129
XAI
explainable artificial intelligence. 278
zero-shot learning
recognizing that an example belongs to a class without ever having been trained on an example
of that class. 260
ZSL
zero-shot learning. 261
Index
Skinner, 8
SMAC, 259
smoothness assumption, 303
soft actor critic, 106
soft decision trees, 278
softmax, 312
sparse reward, 51
stable baselines, 77
StarCraft, 7, 219
state, 29
state space, 299
stochastic environment, 29
stochastic gradient descent, 307
Stockfish, 182
supervised learning, 14
support set, 253
swarm computing, 211
Szepesvári, Csaba, 168
tabula rasa, 178
tabular method, 44
TD, 45
TD-Gammon, 74, 148
temporal abstraction, 227
temporal difference, 45
TensorFlow, 322
Tesauro, Gerald, 74, 149
test set, 298
text to image, 313
theory of mind, 208, 211
Tit for Tat, 200
trace, 35
training set, 298
trajectory, 35
transfer learning, 248
transformers, 263, 320
transposition table, 163
trial and error, 3, 32, 33
TRPO, 104
trust region policy optimization, 104
UCT, 168
underfitting, 301
undo-move, 189
universal value function, 230
unsupervised learning, 15, 318
upward learning, 33
VAE, 318
value function, 33
value iteration, 40
value prediction network, 129
variance, 47
variational autoencoder, 318
weight sharing, 311
XAI, 278
zero-shot learning, 260
zero-sum game, 151