CS229 Lecture Notes
Andrew Ng
Part XIII
Reinforcement Learning and
Control
We now begin our study of reinforcement learning and adaptive control.
In supervised learning, we saw algorithms that tried to make their outputs
mimic the labels y given in the training set. In that setting, the labels gave
an unambiguous “right answer” for each of the inputs x. In contrast, for
many sequential decision making and control problems, it is very difficult to
provide this type of explicit supervision to a learning algorithm. For example,
if we have just built a four-legged robot and are trying to program it to walk,
then initially we have no idea what the “correct” actions to take are to make
it walk, and so do not know how to provide explicit supervision for a learning
algorithm to try to mimic.
In the reinforcement learning framework, we will instead provide our al-
gorithms only a reward function, which indicates to the learning agent when
it is doing well, and when it is doing poorly. In the four-legged walking
example, the reward function might give the robot positive rewards for mov-
ing forwards, and negative rewards for either moving backwards or falling
over. It will then be the learning algorithm’s job to figure out how to choose
actions over time so as to obtain large rewards.
Reinforcement learning has been successful in applications as diverse as
autonomous helicopter flight, robot legged locomotion, cell-phone network
routing, marketing strategy selection, factory control, and efficient web-page
indexing. Our study of reinforcement learning will begin with a definition of
Markov decision processes (MDPs), which provide the formalism in
which RL problems are usually posed.
1 Markov decision processes

A Markov decision process is a tuple (S, A, {Psa}, γ, R), where:

• S is a set of states.

• A is a set of actions.
• Psa are the state transition probabilities. For each state s ∈ S and
action a ∈ A, Psa is a distribution over the state space. We’ll say more
about this later, but briefly, Psa gives the distribution over what states
we will transition to if we take action a in state s.
• γ ∈ [0, 1) is called the discount factor.

• R : S × A → ℝ is the reward function. (Rewards are also sometimes written as a function of a state s only, in which case we would have R : S → ℝ.)

Starting in some state s0 and taking a sequence of actions a0, a1, a2, . . . leads to a sequence of states s0, s1, s2, . . ., and the total payoff is
\[
R(s_0, a_0) + \gamma R(s_1, a_1) + \gamma^2 R(s_2, a_2) + \cdots.
\]
Or, when we are writing rewards as a function of the states only, this becomes
\[
R(s_0) + \gamma R(s_1) + \gamma^2 R(s_2) + \cdots.
\]
For most of our development, we will use the simpler state-rewards R(s),
though the generalization to state-action rewards R(s, a) offers no special
difficulties.
A policy is a function π : S → A mapping from the states to the actions. We say that we are executing some policy π if, whenever we are in state s, we take action a = π(s). We also define the value function for a policy π according to
\[
V^\pi(s) = \mathbb{E}\left[ R(s_0) + \gamma R(s_1) + \gamma^2 R(s_2) + \cdots \,\middle|\, s_0 = s, \pi \right].
\]
I.e., V π (s) is the expected sum of discounted rewards upon starting in state s and taking actions according to π.1
Given a fixed policy π, its value function V π satisfies the Bellman equations:
\[
V^\pi(s) = R(s) + \gamma \sum_{s' \in S} P_{s\pi(s)}(s') V^\pi(s').
\]
This says that the expected sum of discounted rewards V π (s) for starting
in s consists of two terms: First, the immediate reward R(s) that we get
right away simply for starting in state s, and second, the expected sum of
future discounted rewards. Examining the second term in more detail, we
see that the summation term above can be rewritten as \( \mathbb{E}_{s' \sim P_{s\pi(s)}}[V^\pi(s')] \). This
is the expected sum of discounted rewards for starting in state s′, where s′
is distributed according to \( P_{s\pi(s)} \), which is the distribution over where we will
end up after taking the first action π(s) in the MDP from state s. Thus, the
second term above gives the expected sum of discounted rewards obtained
after the first step in the MDP.
Bellman’s equations can be used to efficiently solve for V π . Specifically,
in a finite-state MDP (|S| < ∞), we can write down one such equation for
V π (s) for every state s. This gives us a set of |S| linear equations in |S|
variables (the unknown V π (s)’s, one for each state), which can be efficiently
solved for the V π (s)’s.
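As a concrete illustration (not part of the original notes), here is a minimal numpy sketch of assembling and solving these |S| linear equations for V π. The array layout P[s, a, s′] = Psa(s′), the state-reward vector R, and the function name are assumptions made for the example.

    import numpy as np

    def evaluate_policy(P, R, pi, gamma):
        """Solve the Bellman equations for a fixed policy pi.

        P:  array of shape (|S|, |A|, |S|); P[s, a, s'] = P_sa(s').
        R:  array of shape (|S|,); R[s] is the state reward.
        pi: array of shape (|S|,) of integer actions; pi[s] is the action taken in s.
        Returns V_pi of shape (|S|,).
        """
        n_states = P.shape[0]
        # P_pi[s, s'] = P_{s, pi(s)}(s'), the transition matrix under pi.
        P_pi = P[np.arange(n_states), pi, :]
        # Bellman: V = R + gamma * P_pi @ V  <=>  (I - gamma * P_pi) V = R.
        return np.linalg.solve(np.eye(n_states) - gamma * P_pi, R)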
1 This notation in which we condition on π isn't technically correct because π isn't a
random variable, but this is quite standard in the literature.
We also define the optimal value function according to
\[
V^*(s) = \max_\pi V^\pi(s). \tag{1}
\]
In other words, this is the best possible expected sum of discounted rewards
that can be attained using any policy. There is also a version of Bellman’s
equations for the optimal value function:
\[
V^*(s) = R(s) + \max_{a \in A} \gamma \sum_{s' \in S} P_{sa}(s') V^*(s'). \tag{2}
\]
The first term above is the immediate reward as before. The second term
is the maximum over all actions a of the expected future sum of discounted
rewards we'll get after taking action a. You should make sure you understand
this equation and see why it makes sense. (Derivations of Equation (2) and
Equation (3) below are given in Appendix B.)
We also define a policy π∗ : S → A as follows:
\[
\pi^*(s) = \arg\max_{a \in A} \sum_{s' \in S} P_{sa}(s') V^*(s'). \tag{3}
\]
Note that π ∗ (s) gives the action a that attains the maximum in the “max”
in Equation (2).
It is a fact that for every state s and every policy π, we have
\[
V^*(s) = V^{\pi^*}(s) \geq V^\pi(s).
\]
The first equality says that \( V^{\pi^*} \), the value function for π∗, is equal to the
optimal value function V∗ for every state s. Further, the inequality above
says that π∗'s value is at least as large as the value of any other policy.
In other words, π ∗ as defined in Equation (3) is the optimal policy.
Note that π ∗ has the interesting property that it is the optimal policy for
all states s. Specifically, it is not the case that if we were starting in some
state s then there’d be some optimal policy for that state, and if we were
starting in some other state s′ then there'd be some other policy that's
optimal for s′. The same policy π∗ attains the maximum in Equation (1)
for all states s. This means that we can use the same policy π ∗ no matter
what the initial state of our MDP is.
2 Value iteration and policy iteration

We now describe two efficient algorithms for solving finite-state MDPs. Here,
we will consider only MDPs with finite state and action spaces (|S| < ∞,
|A| < ∞). In this section, we will also assume that we know the state
transition probabilities {Psa } and the reward function R.
The first algorithm, value iteration, is as follows:

1. For each state s, initialize V (s) := 0.
2. Repeat until convergence {

       For every state s, update
       \[
       V(s) := R(s) + \max_{a \in A} \gamma \sum_{s' \in S} P_{sa}(s') V(s').
       \]
   }

The second algorithm, policy iteration, is as follows:

1. Initialize π randomly.
2. Repeat until convergence {

       (a) Let V := V π .
       (b) For each state s, let
       \[
       \pi(s) := \arg\max_{a \in A} \sum_{s' \in S} P_{sa}(s') V(s').
       \]
   }
Thus, the inner-loop repeatedly computes the value function for the cur-
rent policy, and then updates the policy using the current value function.
(The policy π found in step (b) is also called the policy that is greedy with
respect to V .) Note that step (a) can be done via solving Bellman’s equa-
tions as described earlier, which in the case of a fixed policy, is just a set of
|S| linear equations in |S| variables.
After at most a finite number of iterations of this algorithm, V will con-
verge to V ∗ , and π will converge to π ∗ .2
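For concreteness, here is a rough numpy sketch of synchronous value iteration for a finite MDP, with the greedy policy of Equation (3) extracted at the end. The array layout (P[s, a, s′] = Psa(s′), state rewards R) and the stopping tolerance are assumptions for the example, not part of the notes.

    import numpy as np

    def value_iteration(P, R, gamma, tol=1e-8):
        """Synchronous value iteration for a finite MDP.

        P: (|S|, |A|, |S|) transition probabilities; R: (|S|,) state rewards.
        Returns an estimate of V* and a greedy policy with respect to it.
        """
        n_states = P.shape[0]
        V = np.zeros(n_states)
        while True:
            # Bellman backup: V(s) := R(s) + max_a gamma * sum_s' P_sa(s') V(s').
            Q = R[:, None] + gamma * P @ V      # shape (|S|, |A|)
            V_new = Q.max(axis=1)
            if np.max(np.abs(V_new - V)) < tol:
                break
            V = V_new
        pi = Q.argmax(axis=1)                   # greedy policy, as in Equation (3)
        return V, pi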
Both value iteration and policy iteration are standard algorithms for solv-
ing MDPs, and there isn’t currently universal agreement over which algo-
rithm is better. For small MDPs, policy iteration is often very fast and con-
verges with very few iterations. However, for MDPs with large state spaces,
solving for V π explicitly would involve solving a large system of linear equa-
tions, and could be difficult (and note that one has to solve the linear system
multiple times in policy iteration). In these problems, value iteration may be
preferred. For this reason, in practice value iteration seems to be used more
often than policy iteration. For further discussion of the comparison and
connection between value iteration and policy iteration, please see Section A.
2 Note that value iteration cannot reach the exact V ∗ in a finite number of iterations,
whereas policy iteration with an exact linear system solver can. This is because when
the action space and policy space are discrete and finite, once the policy reaches the
optimal policy in policy iteration, it will not change at all. On the other hand, even
though value iteration will converge to V ∗, there is always some non-zero error in the
learned value function after finitely many updates.
3 Learning a model for an MDP

So far, we have assumed that the state transition probabilities and rewards
of the MDP are known. In many realistic problems, they are not given
explicitly, but must instead be estimated from data. Specifically, suppose we
execute a number of trials in the MDP, observing state–action sequences of
the form
\[
s_0^{(j)} \xrightarrow{a_0^{(j)}} s_1^{(j)} \xrightarrow{a_1^{(j)}} s_2^{(j)} \xrightarrow{a_2^{(j)}} \cdots
\]
Here, \(s_i^{(j)}\) is the state we were in at time i of trial j, and \(a_i^{(j)}\) is the
corresponding action that was taken from that state. In practice, each of the
trials above might be run until the MDP terminates (such as if the pole falls
over in the inverted pendulum problem), or it might be run for some large
but finite number of timesteps.
Given this “experience” in the MDP consisting of a number of trials,
we can then easily derive the maximum likelihood estimates for the state
transition probabilities:
\[
P_{sa}(s') = \frac{\#\text{times we took action } a \text{ in state } s \text{ and got to } s'}
{\#\text{times we took action } a \text{ in state } s}.
\]
(If the ratio above is “0/0,” corresponding to a state–action pair (s, a) that
was never tried, we might simply estimate \(P_{sa}(s')\) to be 1/|S|, i.e., the
uniform distribution over all states.)
Putting together model learning and value iteration, here is one possible
algorithm for learning in an MDP with unknown state transition probabilities:

1. Initialize π randomly.
2. Repeat {

       (a) Execute π in the MDP for some number of trials.
       (b) Using the accumulated experience in the MDP, update our estimates for Psa (and R, if applicable).
       (c) Apply value iteration with the estimated state transition probabilities and rewards to get a new estimated value function V.
       (d) Update π to be the greedy policy with respect to V.
   }
We note that, for this particular algorithm, there is one simple optimiza-
tion that can make it run much more quickly. Specifically, in the inner loop
of the algorithm where we apply value iteration, if instead of initializing value
iteration with V = 0, we initialize it with the solution found during the pre-
vious iteration of our algorithm, then that will provide value iteration with
a much better initial starting point and make it converge more quickly.
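To make the loop above concrete, here is a hedged numpy sketch: estimate {Psa} from observed (s, a, s′) transitions by maximum likelihood, then re-solve the estimated MDP with value iteration, warm-starting V from the previous outer iteration as suggested above. The data format, the hypothetical execute_policy function, the assumption that R is known, and the fixed iteration counts are all choices made for the example.

    import numpy as np

    def estimate_transitions(transitions, n_states, n_actions):
        """Maximum likelihood estimate of P_sa(s') from observed (s, a, s') triples.
        For (s, a) pairs never tried, fall back to a uniform distribution."""
        counts = np.zeros((n_states, n_actions, n_states))
        for s, a, s_next in transitions:
            counts[s, a, s_next] += 1
        totals = counts.sum(axis=2, keepdims=True)
        return np.where(totals > 0, counts / np.maximum(totals, 1), 1.0 / n_states)

    def learn_policy(execute_policy, R, n_states, n_actions, gamma, n_outer=50):
        """Alternate between running the current policy to collect experience,
        re-estimating the model, and re-solving it with value iteration.
        `execute_policy(pi)` is a hypothetical function returning (s, a, s') triples."""
        pi = np.random.randint(n_actions, size=n_states)
        V = np.zeros(n_states)                   # reused as a warm start below
        data = []
        for _ in range(n_outer):
            data += execute_policy(pi)                              # (a) run trials with pi
            P = estimate_transitions(data, n_states, n_actions)     # (b) update the model
            for _ in range(200):                                    # (c) value iteration, warm-started at V
                V = (R[:, None] + gamma * P @ V).max(axis=1)
            pi = (R[:, None] + gamma * P @ V).argmax(axis=1)        # (d) greedy policy w.r.t. V
        return pi, V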
4 Continuous state MDPs

So far, we have focused on MDPs with a finite number of states. We now
discuss algorithms for MDPs that may have an infinite number of states; for
instance, the state might be a vector of continuous, real-valued quantities.

4.1 Discretization
Perhaps the simplest way to solve a continuous-state MDP is to discretize
the state space, and then to use an algorithm like value iteration or policy
iteration, as described previously.
For example, if we have 2d states (s1 , s2 ), we can use a grid to discretize
the state space:
Here, each grid cell represents a separate discrete state s̄. We can then ap-
proximate the continuous-state MDP via a discrete-state one (S̄, A, {Ps̄a }, γ, R),
where S̄ is the set of discrete states, {Ps̄a } are our state transition probabil-
ities over the discrete states, and so on. We can then use value iteration or
policy iteration to solve for the V ∗ (s̄) and π ∗ (s̄) in the discrete state MDP
(S̄, A, {Ps̄a }, γ, R). When our actual system is in some continuous-valued
state s ∈ S and we need to pick an action to execute, we compute the
corresponding discretized state s̄, and execute action π ∗ (s̄).
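As a small illustration of executing a policy computed on a discretized MDP, the sketch below maps a continuous state to the index of its grid cell; the grid bounds, the resolution, and the solved policy table pi_bar are hypothetical names introduced only for this example.

    import numpy as np

    def discretize(s, lows, highs, bins_per_dim):
        """Map a continuous state s to the id of its grid cell.
        lows, highs give per-dimension ranges; bins_per_dim is the grid resolution."""
        s = np.clip(s, lows, highs - 1e-9)
        idx = ((s - lows) / (highs - lows) * bins_per_dim).astype(int)
        # Flatten the per-dimension cell indices into a single discrete state id.
        return int(np.ravel_multi_index(idx, (bins_per_dim,) * len(s)))

    # At run time, with pi_bar the policy solved on the discrete MDP:
    #   a = pi_bar[discretize(s, lows, highs, bins_per_dim)]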
This discretization approach can work well for many problems. However,
there are two downsides. First, it uses a fairly naive representation for V ∗
(and π∗). Specifically, it assumes that the value function takes a constant
value over each of the discretization intervals (i.e., that the value function is
piecewise constant in each of the grid cells).
To better understand the limitations of such a representation, consider a
supervised learning problem of fitting a function to this dataset:
[Figure: a dataset of points (x, y), together with the piecewise constant function that results from fitting it with such a discretized representation.]
4.2 Value function approximation

We now describe an alternative method for finding policies in continuous-
state MDPs, in which we approximate V∗ directly instead of discretizing the
state space. To do so, we will assume that we have a model, or simulator,
for the MDP: a black box that takes as input any state st and action at, and
outputs a next state st+1 sampled according to the state transition
probabilities Pst at.
There are several ways that one can get such a model. One is to use
physics simulation. For example, the simulator for the inverted pendulum
in PS4 was obtained by using the laws of physics to calculate what position
and orientation the cart/pole will be in at time t + 1, given the current state
at time t and the action a taken, assuming that we know all the parameters
of the system such as the length of the pole, the mass of the pole, and so
on. Alternatively, one can also use an off-the-shelf physics simulation software
package which takes as input a complete physical description of a mechanical
system, the current state st and action at , and computes the state st+1 of the
system a small fraction of a second into the future.4
An alternative way to get a model is to learn one from data collected in
the MDP. For example, suppose we execute n trials in which we repeatedly
take actions in an MDP, each trial for T timesteps. This can be done by picking
actions at random, executing some specific policy, or via some other way of
4 Open Dynamics Engine (http://www.ode.com) is one example of a free/open-source
physics simulator that can be used to simulate systems like the inverted pendulum, and
that has been a reasonably popular choice among RL researchers.
choosing actions. We would then observe n state sequences like the following:
\[
s_0^{(1)} \xrightarrow{a_0^{(1)}} s_1^{(1)} \xrightarrow{a_1^{(1)}} s_2^{(1)} \xrightarrow{a_2^{(1)}} \cdots \xrightarrow{a_{T-1}^{(1)}} s_T^{(1)}
\]
\[
s_0^{(2)} \xrightarrow{a_0^{(2)}} s_1^{(2)} \xrightarrow{a_1^{(2)}} s_2^{(2)} \xrightarrow{a_2^{(2)}} \cdots \xrightarrow{a_{T-1}^{(2)}} s_T^{(2)}
\]
\[
\vdots
\]
\[
s_0^{(n)} \xrightarrow{a_0^{(n)}} s_1^{(n)} \xrightarrow{a_1^{(n)}} s_2^{(n)} \xrightarrow{a_2^{(n)}} \cdots \xrightarrow{a_{T-1}^{(n)}} s_T^{(n)}
\]
We can then apply a learning algorithm to predict st+1 as a function of st
and at .
For example, one may choose to learn a linear model of the form
\[
s_{t+1} = A s_t + B a_t, \tag{6}
\]
using an algorithm similar to linear regression. Here, the parameters of the
model are the matrices A and B, and we can estimate them using the data
collected from our n trials, by picking
\[
\arg\min_{A,B} \; \sum_{i=1}^{n} \sum_{t=0}^{T-1} \left\| s_{t+1}^{(i)} - \left( A s_t^{(i)} + B a_t^{(i)} \right) \right\|_2^2 .
\]
We could also potentially use other loss functions for learning the model.
For example, it has been found in recent work [?] that using the \( \|\cdot\|_2 \) norm
(without the square) may be helpful in certain cases.
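Here is a minimal numpy sketch of this least-squares fit for A and B; the data layout (one array of states and one of actions per trial) is an assumption made for the example, and the noise covariance Σ mentioned below could likewise be estimated from the residuals.

    import numpy as np

    def fit_linear_model(states, actions):
        """Estimate A, B in s_{t+1} ~ A s_t + B a_t by least squares.

        states:  list of arrays, one per trial, each of shape (T+1, d_s).
        actions: list of arrays, one per trial, each of shape (T, d_a).
        """
        # Stack the regression problem s_{t+1} = [A B] [s_t; a_t] over all trials and timesteps.
        X = np.vstack([np.hstack([s[:-1], a]) for s, a in zip(states, actions)])
        Y = np.vstack([s[1:] for s in states])
        # Solve min_W ||Y - X W||^2, then split W^T into A and B.
        W, *_ = np.linalg.lstsq(X, Y, rcond=None)
        d_s = states[0].shape[1]
        A, B = W.T[:, :d_s], W.T[:, d_s:]
        # Residuals Y - X W could be used to estimate the noise covariance Sigma.
        return A, B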
Having learned A and B, one option is to build a deterministic model,
in which given an input st and at , the output st+1 is exactly determined.
Specifically, we always compute st+1 according to Equation (6). Alterna-
tively, we may also build a stochastic model, in which st+1 is a random
function of the inputs, by modeling it as
st+1 = Ast + Bat + ϵt ,
where here ϵt is a noise term, usually modeled as ϵt ∼ N (0, Σ). (The covari-
ance matrix Σ can also be estimated from data in a straightforward way.)
Here, we’ve written the next-state st+1 as a linear function of the current
state and action; but of course, non-linear functions are also possible. Specif-
ically, one can learn a model st+1 = Aϕs (st ) + Bϕa (at ), where ϕs and ϕa are
some non-linear feature mappings of the states and actions. Alternatively,
one can also use non-linear learning algorithms, such as locally weighted lin-
ear regression, to learn to estimate st+1 as a function of st and at . These
approaches can also be used to build either deterministic or stochastic sim-
ulators of an MDP.
We now describe fitted value iteration, an algorithm for approximating the
value function of a continuous-state MDP. We will assume that the state
space is continuous, but that the action space A is small and discrete.5
The main idea of fitted value iteration is to approximate the value function
as a linear function of some feature mapping φ of the states:
\[
V(s) = \theta^T \phi(s).
\]
The algorithm is as follows:

1. Randomly sample n states s(1), s(2), . . . , s(n) ∈ S.
2. Initialize θ := 0.
5 In practice, most MDPs have much smaller action spaces than state spaces. E.g., a car
has a 6d state space, and a 2d action space (steering and velocity controls); the inverted
pendulum has a 4d state space, and a 1d action space; a helicopter has a 12d state space,
and a 4d action space. So, discretizing this set of actions is usually less of a problem than
discretizing the state space would have been.
3. Repeat {

       For i = 1, . . . , n {

           For each action a ∈ A {
               Sample \( s'_1, \ldots, s'_k \sim P_{s^{(i)}a} \) (using a model of the MDP).
               Set \( q(a) = \frac{1}{k} \sum_{j=1}^{k} R(s^{(i)}) + \gamma V(s'_j) \).
               // Hence, q(a) is an estimate of \( R(s^{(i)}) + \gamma \mathbb{E}_{s' \sim P_{s^{(i)}a}}[V(s')] \).
           }

           Set \( y^{(i)} = \max_a q(a) \).
           // Hence, \( y^{(i)} \) is an estimate of \( R(s^{(i)}) + \gamma \max_a \mathbb{E}_{s' \sim P_{s^{(i)}a}}[V(s')] \).
       }

       // In the original value iteration algorithm (over discrete states)
       // we updated the value function according to \( V(s^{(i)}) := y^{(i)} \).
       // In this algorithm, we want \( V(s^{(i)}) \approx y^{(i)} \), which we'll achieve
       // using supervised learning (linear regression).
       Set \( \theta := \arg\min_\theta \frac{1}{2} \sum_{i=1}^{n} \left( \theta^T \phi(s^{(i)}) - y^{(i)} \right)^2 \).
   }
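The following is a rough Python sketch of the fitted value iteration loop above, using linear regression as the supervised learning step. The names phi, R, and sample_next_state (standing in for the model/simulator), as well as the iteration counts, are assumptions introduced for the example.

    import numpy as np

    def fitted_value_iteration(states, actions, phi, R, sample_next_state,
                               gamma, k=10, n_iters=50):
        """Fitted value iteration with a linear value function V(s) = theta^T phi(s).

        states:  list of n sampled states s^(1), ..., s^(n).
        actions: list of available actions.
        phi:     feature map, phi(s) -> array of shape (d,).
        R:       reward function R(s).
        sample_next_state(s, a): draws s' ~ P_sa using the model/simulator.
        """
        Phi = np.array([phi(s) for s in states])      # (n, d) design matrix
        theta = np.zeros(Phi.shape[1])
        for _ in range(n_iters):
            y = []
            for s in states:
                # q(a) estimates R(s) + gamma * E_{s' ~ P_sa}[V(s')] with k samples.
                q = [np.mean([R(s) + gamma * phi(sample_next_state(s, a)) @ theta
                              for _ in range(k)])
                     for a in actions]
                y.append(max(q))                       # y^(i) = max_a q(a)
            # Supervised learning step: fit theta so that theta^T phi(s^(i)) ~ y^(i).
            theta, *_ = np.linalg.lstsq(Phi, np.array(y), rcond=None)
        return theta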
Above, we had written out fitted value iteration using linear regression
as the algorithm to try to make V (s(i) ) close to y (i) . That step of the algo-
rithm is completely analogous to a standard supervised learning (regression)
problem in which we have a training set (x(1) , y (1) ), (x(2) , y (2) ), . . . , (x(n) , y (n) ),
and want to learn a function mapping from x to y; the only difference is that
here s plays the role of x. Even though our description above used linear re-
gression, clearly other regression algorithms (such as locally weighted linear
regression) can also be used.
Unlike value iteration over a discrete set of states, fitted value iteration
cannot be proved to always converge. However, in practice, it often does
converge (or approximately converge), and works well for many problems.
Note also that if we are using a deterministic simulator/model of the MDP,
then fitted value iteration can be simplified by setting k = 1 in the algorithm.
This is because the expectation in Equation (8) becomes an expectation over
a deterministic distribution, and so a single example is sufficient to exactly
compute that expectation. Otherwise, in the algorithm above, we had to
draw k samples, and average to try to approximate that expectation (see the
definition of q(a), in the algorithm pseudo-code).
In other words, here we are just setting ϵt = 0 (i.e., ignoring the noise in
the simulator), and setting k = 1. Equivalently, this can be derived from
Equation (9) using the approximation
\[
\mathbb{E}_{s'}[V(s')] \approx V(\mathbb{E}_{s'}[s']),
\]
where here the expectation is over the random s′ ∼ Psa . So long as the noise
terms ϵt are small, this will usually be a reasonable approximation.
However, for problems that don’t lend themselves to such approximations,
having to sample k|A| states using the model, in order to approximate the
expectation above, can be computationally expensive.
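For concreteness, here is a one-line sketch of this action selection rule under the deterministic approximation (k = 1, noise ignored); f denotes the deterministic part of the learned model, and all names here are assumptions for the example. Since R(s) does not depend on a, it can be dropped from the arg max.

    def greedy_action(s, actions, f, phi, theta):
        """Choose arg max_a V(f(s, a)) with V(s') = theta^T phi(s'), using the
        deterministic model s' = f(s, a) (noise epsilon_t set to 0, k = 1)."""
        # phi(.) is assumed to return a numpy array, so '@' is a dot product.
        return max(actions, key=lambda a: phi(f(s, a)) @ theta)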
A Connections between policy iteration and value iteration

Procedure VE(π, k):
    . . .
    return V

Policy iteration, with VE as the policy evaluation step:
Require: hyperparameter k.
    Initialize π randomly.
    Repeat until convergence {
        Let V = VE(π, k).
        For each state s, let
        \[
        \pi(s) := \arg\max_{a \in A} \sum_{s'} P_{sa}(s') V(s'). \tag{13}
        \]
    }
When the state space is small, solving the linear system for V π exactly can be
faster than using the Procedure VE for a large k. On the flip side, when such
a speeding-up effect no longer exists, e.g., when the state space is large and
the linear system solver is also not fast, then value iteration is preferable.
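Here, VE(π, k) is taken to perform k Bellman backups for the fixed policy π instead of solving the linear system exactly; this reading, along with the array layout, is an assumption of the sketch below.

    import numpy as np

    def VE(P, R, pi, gamma, k):
        """Approximate policy evaluation: k Bellman backups for the fixed policy pi
        (an assumed reading of the Procedure VE), instead of solving the
        |S| x |S| linear system exactly.  P[s, a, s'] = P_sa(s'), R[s] = R(s)."""
        n_states = P.shape[0]
        P_pi = P[np.arange(n_states), pi, :]   # transition matrix under pi
        V = np.zeros(n_states)
        for _ in range(k):
            V = R + gamma * P_pi @ V
        return V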
B Derivations of Bellman's equations

We first derive the Bellman equations for a fixed policy π. From the definition
of V π, we have
\begin{align*}
V^\pi(s) &= \mathbb{E}\left[ R(s_0) + \gamma R(s_1) + \gamma^2 R(s_2) + \cdots \mid s_0 = s, \pi \right] \\
&= R(s) + \gamma \mathbb{E}\left[ R(s_1) + \gamma R(s_2) + \cdots \right] \\
&= R(s) + \gamma \mathbb{E}_{s_1 \sim P_{s\pi(s)}}\left[ R(s_1) + \gamma R(s_2) + \cdots \right] \\
&= R(s) + \gamma \mathbb{E}_{s_1 \sim P_{s\pi(s)}}\left[ V^\pi(s_1) \right],
\end{align*}
which is the Bellman equation for V π.
Now we derive the Bellman equation for the optimal value function:
\begin{align*}
V^*(s) &= \max_\pi V^\pi(s) \\
&= \max_\pi \left( R(s) + \gamma \mathbb{E}_{s_1 \sim P_{s\pi(s)}}\left[ V^\pi(s_1) \right] \right) \\
&= R(s) + \gamma \max_\pi \mathbb{E}_{s_1 \sim P_{s\pi(s)}}\left[ V^\pi(s_1) \right] \\
&= R(s) + \gamma \max_{a \in A} \mathbb{E}_{s_1 \sim P_{sa}}\left[ \max_\pi V^\pi(s_1) \right] \\
&= R(s) + \gamma \max_{a \in A} \mathbb{E}_{s_1 \sim P_{sa}}\left[ V^*(s_1) \right],
\end{align*}
which is Equation (2). Here the fourth equality holds because, for an MDP, the
optimal action at a later state is independent of the actions taken at previous
states; hence the optimal policy starting at the current state can be decomposed
into a first action followed by the optimal policy at the new state.