Optimal Control
Optimal Control
1
(t)
.
.
.
m
(t)
_
_
_
.
4
Note very carefully that our solution x() of (ODE) depends upon () and the initial
condition. Consequently our notation would be more precise, but more complicated, if we
were to write
x() = x(, (), x
0
),
displaying the dependence of the response x() upon the control and the initial value.
PAYOFFS. Our overall task will be to determine what is the best control for our
system. For this we need to specify a specic payo (or reward) criterion. Let us dene
the payo functional
(P) P[()] :=
_
T
0
r(x(t), (t)) dt +g(x(T)),
where x() solves (ODE) for the control (). Here r : R
n
A R and g : R
n
R are
given, ane we call r the running payo and g the terminal payo. The terminal time T > 0
is given as well.
THE BASIC PROBLEM. Our aim is to nd a control
()] P[()]
for all controls () /. Such a control
() is called optimal.
This task presents us with these mathematical issues:
(i) Does an optimal control exist?
(ii) How can we characterize an optimal control mathematically?
(iii) How can we construct an optimal control?
These turn out to be sometimes subtle problems, as the following collection of examples
illustrates.
1.2 EXAMPLES
EXAMPLE 1: CONTROL OF PRODUCTION AND CONSUMPTION.
Suppose we own, say, a factory whose output we can control. Let us begin to construct
a mathematical model by setting
x(t) = amount of output produced at time t 0.
We suppose that we consume some fraction of our output at each time, and likewise can
reinvest the remaining fraction. Let us denote
(t) = fraction of output reinvested at time t 0.
5
This will be our control, and is subject to the obvious constraint that
0 (t) 1 for each time t 0.
Given such a control, the corresponding dynamics are provided by the ODE
_
x(t) = k(t)x(t)
x(0) = x
0
.
the constant k > 0 modelling the growth rate of our reinvestment. Let us take as a payo
functional
P[()] =
_
T
0
(1 (t))x(t) dt.
The meaning is that we want to maximize our total consumption of the output, our
consumption at a given time t being (1 (t))x(t). This model ts into our general
framework for n = m = 1, once we put
A = [0, 1], f(x, a) = kax, r(x, a) = (1 a)x, g 0.
0 T
t*
* = 1
* = 0
A bang-bang control
As we will see later in 4.4.2, an optimal control
() is given by
(t) =
_
1 if 0 t t
0 if t
< t T
for an appropriate switching time 0 t
will have to
be determined. We call
() a bangbang control.
EXAMPLE 2: REPRODUCTIVE STATEGIES IN SOCIAL INSECTS
6
The next example is from Chapter 2 of the book Caste and Ecology in Social Insects,
by G. Oster and E. O. Wilson [O-W]. We attempt to model how social insects, say a
population of bees, determine the makeup of their society.
Let us write T for the length of the season, and introduce the variables
w(t) = number of workers at time t
q(t) = number of queens
(t) = fraction of colony eort devoted to increasing work force
The control is constrained by our requiring that
0 (t) 1.
We continue to model by introducing dynamics for the numbers of workers and the
number of queens. The worker population evolves according to
_
w(t) = w(t) +bs(t)(t)w(t)
w(0) = w
0
.
Here is a given constant (a death rate), b is another constant, and s(t) is the known rate
at which each worker contributes to the bee economy.
We suppose also that the population of queens changes according to
_
q(t) = q(t) +c(1 (t))s(t)w(t)
q(0) = q
0
,
for constants and c.
Our goal, or rather the bees, is to maximize the number of queens at time T:
P[()] = q(T).
So in terms of our general notation, we have x(t) = (w(t), q(t))
T
and x
0
= (w
0
, q
0
)
T
. We
are taking the running payo to be r 0, and the terminal payo g(w, q) = q.
The answer will again turn out to be a bangbang control, as we will explain later.
EXAMPLE 3: A PENDULUM.
We look next at a hanging pendulum, for which
(t) = angle at time t.
If there is no external force, then we have the equation of motion
_
(t) +
(t) +
2
(t) = 0
(0) =
1
,
(0) =
2
;
7
the solution of which is a damped oscillation, provided > 0.
Now let () denote an applied torque, subject to the physical constraint that
[[ 1.
Our dynamics now become
_
(t) +
(t) +
2
(t) = (t)
(0) =
1
,
(0) =
2
.
Dene x
1
(t) = (t), x
2
(t) =
(t), and x(t) = (x
1
(t), x
2
(t)). Then we can write the evolution
as the system
x(t) =
_
x
1
x
2
_
=
_
_
=
_
x
2
x
2
2
x
1
+(t)
_
= f (x, ).
We introduce as well
P[()] =
_
0
1 dt = ,
for
= (()) = rst time that x() = 0 (that is, () =
() = 0.)
We want to maximize P[], meaning that we want to minimize the time it takes to bring
the pendulum to rest.
Observe that this problem does not quite fall within the general framework described
in 1.1, since the terminal time is not xed, but rather depends upon the control. This is
called a xed endpoint, free time problem.
EXAMPLE 4: A MOON LANDER
This model asks us to bring a spacecraft to a soft landing on the lunar surface, using
the least amount of fuel.
We introduce the notation
h(t) = height at time t
v(t) = velocity =
h(t)
m(t) = mass of spacecraft (changing as fuel is burned)
(t) = thrust at time t
We assume that
0 (t) 1,
and Newtons law tells us that
m
h = gm+,
8
height = h(t)
noon's surIace
A spacecraft landing on the moon
the right hand side being the dierence of the gravitational force and the thrust of the
rocket. This system is modeled by the ODE
_
_
_
v(t) = g +
(t)
m(t)
h(t) = v(t)
m(t) = k(t).
We summarize these equations in the form
x(t) = f (x(t), (t))
for x(t) = (v(t), h(t), m(t)).
We want to minimize the amount of fuel used up, that is, to maximize the amount
remaining once we have landed. Thus
P[()] = m(),
where
denotes the rst time that h() = v() = 0.
This is a variable endpoint problem, since the nal time is not given in advance. We have
also the extra constraints
h(t) 0, m(t) 0.
k=0
t
k
M
k
k!
,
the last formula being the denition of the exponential e
tM
. Observe that
X
1
(t) = X(t).
THEOREM 2.1 (SOLVING LINEAR SYSTEMS OF ODE).
(i) The unique solution of the homogeneous system of ODE
_
x(t) = Mx(t)
x(0) = x
0
is
x(t) = X(t)x
0
= e
tM
x
0
.
(ii) The unique solution of the nonhomogeneous system
_
x(t) = Mx(t) +f (t)
x(0) = x
0
.
is
x(t) = X(t)x
0
+X(t)
_
t
0
X
1
(s)f (s) ds.
This expression is the variation of parameters formula.
2.3 CONTROLLABILITY OF LINEAR EQUATIONS.
16
According to the variation of parameters formula, the solution of (ODE) for a given
control () is
x(t) = X(t)x
0
+X(t)
_
t
0
X
1
(s)N(s) ds,
where X(t) = e
tM
. Furthermore, observe that
x
0
((t)
if and only if
(2.1) there exists a control () / such that x(t) = 0
if and only if
(2.2) 0 = X(t)x
0
+X(t)
_
t
0
X
1
(s)N(s) ds for some control () /
if and only if
(2.3) x
0
=
_
t
0
X
1
(s)N(s) ds for some control () /.
We make use of these formulas to study the reachable set:
THEOREM 2.2 (STRUCTURE OF REACHABLE SET).
(i) The reachable set ( is symmetric and convex.
(ii) Also, if x
0
((
t), then x
0
((t) for all times t
t.
DEFINITIONS.
(i) We say a set S is symmetric if x S implies x S.
(ii) The set S is convex if x, x S and 0 1 imply x + (1 ) x S.
Proof. 1. (Symmetry) Let t 0 and x
0
((t). Then x
0
=
_
t
0
X
1
(s)N(s) ds for some
admissible control /. Therefore x
0
=
_
t
0
X
1
(s)N((s)) ds, and / since
the set A is symmetric. Therefore x
0
((t), and so each set ((t) symmetric. It follows
that ( is symmetric.
2. (Convexity) Take x
0
, x
0
(; so that x
0
((t), x
0
((
t 0. Assume t
t. Then
x
0
=
_
t
0
X
1
(s)N(s) ds for some control /,
x
0
=
_
t
0
X
1
(s)N (s) ds for some control /.
Dene a new control
(s) :=
_
(s) if 0 s t
0 if s > t.
17
Then
x
0
=
_
t
0
X
1
(s)N (s) ds,
and hence x
0
((
t) (.
3. Assertion (ii) follows from the foregoing if we take
t =
t.
A SIMPLE EXAMPLE. Let n = 2 and m = 1, A = [1, 1], and write x(t) =
(x
1
(t), x
2
(t))
T
. Suppose
_
x
1
= 0
x
2
= (t).
This is a system of the form x = Mx +N, for
M =
_
0 0
0 0
_
, N =
_
0
1
_
Clearly ( = (x
1
, x
2
) [ x
1
= 0, the x
2
axis.
We next wish to establish some general algebraic conditions ensuring that ( contains a
neighborhood of the origin.
DEFINITION. The controllability matrix is
G = G(M, N) := [N, MN, M
2
N, . . . , M
n1
N]
. .
n(mn) matrix
.
THEOREM 2.3 (CONTROLLABILITY MATRIX). We have
rank G = n
if and only if
0 (
.
NOTATION. We write (
n1
+ +
1
1
+
0
,
then
p(M) = M
n
+
n1
M
n1
+ +
1
M +
0
I = 0.
Therefore
M
n
=
n1
M
n1
n2
M
n2
1
M
0
I,
and so
b
T
M
n
N = b
T
(
n1
M
n1
. . . )N = 0.
Similarly, b
T
M
n+1
N = b
T
(
n1
M
n
. . . )N = 0, etc. The claim (2.4) is proved.
Now notice that
b
T
X
1
(s)N = b
T
e
sM
N = b
T
k=0
(s)
k
M
k
N
k!
=
k=0
(s)
k
k!
b
T
M
k
N = 0,
according to (2.4).
3. Assume next that x
0
((t). This is equivalent to having
x
0
=
_
t
0
X
1
(s)N(s) ds for some control () /.
19
Then
b x
0
=
_
t
0
b
T
X
1
(s)N(s) ds = 0.
This says that b is orthogonal x
0
. In other words, ( must lie in the hyperplane orthogonal
to b ,= 0. Consequently (
= .
4. Conversely, assume 0 / (
. Thus 0 / (
n
(s I),
where [v[ :=
_
n
i=1
[v
i
[
2
_1
2
. Then
0 =
_
t
0
v(s) (s) ds =
_
I
v(s)
n
v(s)
[v(s)[
ds =
1
n
_
I
[v(s)[ ds
This implies the contradiction that v 0 in I.
DEFINITION. We say the linear system (ODE) is controllable if ( = R
n
.
THEOREM 2.5 (CRITERION FOR CONTROLLABILITY). Let A be the cube
[1, 1]
m
in R
n
. Suppose as well that rank G = n, and Re < 0 for each eigenvalue of
the matrix M.
Then the system (ODE) is controllable.
Proof. Since rank G = n, Theorem 2.3 tells us that ( contains some ball B centered at 0.
Now take any x
0
R
n
and consider the evolution
_
x(t) = Mx(t)
x(0) = x
0
;
in other words, take the control () 0. Since Re < 0 for each eigenvalue of M, then
the origin is asymptotically stable. So there exists a time T such that x(T) B. Thus
21
x(T) B (; and hence there exists a control () / steering x(T) into 0 in nite
time.
EXAMPLE. We once again consider the rocket railroad car, from 1.2, for which n = 2,
m = 1, A = [1, 1], and
x =
_
0 1
0 0
_
x +
_
0
1
_
.
Then
G = (N, MN) =
_
0 1
1 0
_
.
Therefore
rank G = 2 = n.
Also, the characteristic polynomial of the matrix M is
p() = det(I M) = det
_
1
0
_
=
2
.
Since the eigenvalues are both 0, we fail to satisfy the hypotheses of Theorem 2.5.
This example motivates the following extension of the previous theorem:
THEOREM 2.6 (IMPROVED CRITERION FOR CONTROLLABILITY). Assume
rank G = n and Re 0 for each eigenvalue of M.
Then the system (ODE) is controllable.
Proof. 1. If ( , = R
n
, then the convexity of ( implies that there exist a vector b ,= 0 and a
real number such that
(2.8) b x
0
for all x
0
(. Indeed, in the picture we see that b (x
0
z
0
) 0; and this implies (2.8)
for := b z
0
.
b
x
o
z
o
C
22
We will derive a contradiction.
2. Given b ,= 0, R, our intention is to nd x
0
( so that (2.8) fails. Recall x
0
(
if and only if there exist a time t > 0 and a control () / such that
x
0
=
_
t
0
X
1
(s)N(s) ds.
Then
b x
0
=
_
t
0
b
T
X
1
(s)N(s) ds
Dene
v(s) := b
T
X
1
(s)N
3. We assert that
(2.9) v , 0.
To see this, suppose instead that v 0. Then k times dierentiate the expression
b
T
X
1
(s)N with respect to s and set s = 0, to discover
b
T
M
k
N = 0
for k = 0, 1, 2, . . . . This implies b is orthogonal to the columns of G, and so rank G < n.
This is a contradiction to our hypothesis, and therefore (2.9) holds.
4. Next, dene () this way:
(s) :=
_
v(s)
|v(s)|
if v(s) ,= 0
0 if v(s) = 0.
Then
b x
0
=
_
t
0
v(s)(s) ds =
_
t
0
[v(s)[ ds.
We want to nd a time t > 0 so that
_
t
0
[v(s)[ ds > . In fact, we assert that
(2.10)
_
0
[v(s)[ ds = +.
To begin the proof of (2.10), introduce the function
(t) :=
_
t
v(s) ds.
23
We will nd an ODE satises. Take p() to be the characteristic polynomial of M.
Then
p
_
d
dt
_
v(t) = p
_
d
dt
_
[b
T
e
tM
N] = b
T
_
p
_
d
dt
_
e
tM
_
N = b
T
(p(M)e
tM
)N 0,
since p(M) = 0, according to the CayleyHamilton Theorem. But since p
_
d
dt
_
v(t) 0,
it follows that
d
dt
p
_
d
dt
_
(t) = p
_
d
dt
__
d
dt
_
= p
_
d
dt
_
v(t) = 0.
Hence solves the (n + 1)
th
order ODE
d
dt
p
_
d
dt
_
(t) = 0.
We also know () , 0. Let
1
, . . . ,
n+1
be the solutions of p() = 0. According to
ODE theory, we can write
(t) = sum of terms of the form p
i
(t)e
i
t
for appropriate polynomials p
i
().
Furthermore, we see that
n+1
= 0 and
k
=
k
, where
1
, . . . ,
n
are the eigenvalues
of M. By assumption Re
k
0, for k = 1, . . . , n. If
_
0
[v(s)[ ds < , then
[(t)[
_
t
[v(s)[ ds 0 as t ;
that is, (t) 0 as t . This is a contradiction to the representation formula of
(t) = p
i
(t)e
i
t
, with Re
i
0. Assertion (2.10) is proved.
5. Consequently given any , there exists t > 0 such that
b x
0
=
_
t
0
[v(s)[ ds > ,
a contradiction to (2.8). Therefore ( = R
n
.
2.4 OBSERVABILITY
We again consider the linear system of ODE
(ODE)
_
x(t) = Mx(t)
x(0) = x
0
24
where M M
nn
.
In this section we address the observability problem, modeled as follows. We suppose
that we can observe
(O) y(t) := Nx(t) (t 0),
for a given matrix N M
mn
. Consequently, y(t) R
m
. The interesting situation is when
m << n and we interpret y() as low-dimensional observations or measurements of
the high-dimensional dynamics x().
OBSERVABILITY QUESTION: Given the observations y(), can we in principle re-
construct x()? In particular, do observations of y() provide enough information for us to
deduce the initial value x
0
for (ODE)?
DEFINITION. The pair (ODE),(O) is called observable if the knowledge of y() on any
time interval [0, t] allows us to compute x
0
.
More precisely, (ODE),(O) is observable if for all solutions x
1
(), x
2
(), Nx
1
() Nx
2
()
on a time interval [0, t] implies x
1
(0) = x
2
(0).
TWO SIMPLE EXAMPLES. (i) If N 0, then clearly the system is not observable.
(ii) On the other hand, if m = n and N is invertible, then clearly x(t) = N
1
y(t) is
observable.
The interesting cases lie between these extremes.
THEOREM 2.7 (OBSERVABILITY AND CONTROLLABILITY). The system
(2.11)
_
x(t) = Mx(t)
y(t) = Nx(t)
is observable if and only if the system
(2.12) z(t) = M
T
z(t) +N
T
(t), A = R
m
is controllable, meaning that ( = R
n
.
INTERPRETATION. This theorem asserts that somehow observability and controlla-
bility are dual concepts for linear systems.
Proof. 1. Suppose (2.11) is not observable. Then there exist points x
1
,= x
2
R
n
, such
that
_
x
1
(t) = Mx
1
(t), x
1
(0) = x
1
x
2
(t) = Mx
2
(t), x
2
(0) = x
2
but y(t) := Nx
1
(t) Nx
2
(t) for all times t 0. Let
x(t) := x
1
(t) x
2
(t), x
0
:= x
1
x
2
.
25
Then
x(t) = Mx(t), x(0) = x
0
,= 0,
but
Nx(t) = 0 (t 0).
Now
x(t) = X(t)x
0
= e
tM
x
0
.
Thus
Ne
tM
x
0
= 0 (t 0).
Let t = 0, to nd Nx
0
= 0. Then dierentiate this expression k times in t and let t = 0,
to discover as well that
NM
k
x
0
= 0
for k = 0, 1, 2, . . . . Hence (x
0
)
T
(M
k
)
T
N
T
= 0, and hence (x
0
)
T
(M
T
)
k
N
T
= 0. This
implies
(x
0
)
T
[N
T
, M
T
N
T
, . . . , (M
T
)
n1
N
T
] = 0.
Since x
0
,= 0, rank[N
T
, . . . , (M
T
)
n1
N
T
] < n. Thus problem (2.12) is not controllable.
Consequently, (2.12) controllable implies (2.11) is observable.
2. Assume now (2.12) not controllable. Then rank[N
T
, . . . , (M
T
)
n1
N
T
] < n, and
consequently according to Theorem 2.3 there exists x
0
,= 0 such that
(x
0
)
T
[N
T
, . . . , (M
T
)
n1
N
T
] = 0.
That is, NM
k
x
0
= 0 for all k = 0, 1, 2, . . . , n 1.
We want to show that y(t) = Nx(t) 0, where
_
x(t) = Mx(t)
x(0) = x
0
.
According to the CayleyHamilton Theorem, we can write
M
n
=
n1
M
n1
0
I.
for appropriate constants. Consequently NM
n
x
0
= 0. Likewise,
M
n+1
= M(
n1
M
n1
0
I) =
n1
M
n
0
M;
and so NM
n+1
x
0
= 0. Similarly, NM
k
x
0
= 0 for all k.
Now
x(t) = X(t)x
0
= e
Mt
x
0
=
k=0
t
k
M
k
k!
x
0
;
and therefore Nx(t) = N
k=0
t
n
M
k
k!
x
0
= 0.
We have shown that if (2.12) is not controllable, then (2.11) is not observable.
2.5 BANG-BANG PRINCIPLE.
For this section, we will again take A to be the cube [1, 1]
m
in R
m
.
26
DEFINITION. A control () / is called bang-bang if for each time t 0 and each
index i = 1, . . . , m, we have [
i
(t)[ = 1, where
(t) =
_
_
_
1
(t)
.
.
.
m
(t)
_
_
_
.
THEOREM 2.8 (BANG-BANG PRINCIPLE). Let t > 0 and suppose x
0
((t),
for the system
x(t) = Mx(t) +N(t).
Then there exists a bang-bang control () which steers x
0
to 0 at time t.
To prove the theorem we need some tools from functional analysis, among them the
KreinMilman Theorem, expressing the geometric fact that every bounded convex set has
an extreme point.
2.5.1 SOME FUNCTIONAL ANALYSIS. We will study the geometry of certain
innite dimensional spaces of functions.
NOTATION:
L
= L
(0, t; R
m
) = () : (0, t) R
m
[ sup
0st
[(s)[ < .
||
L
= sup
0st
[(s)[.
DEFINITION. Let
n
L
for n = 1, . . . and L
. We say
n
converges to in
the weak* sense, written
,
provided
_
t
0
n
(s) v(s) ds
_
t
0
(s) v(s) ds
as n , for all v() : [0, t] R
m
satisfying
_
t
0
[v(s)[ ds < .
We will need the following useful weak* compactness theorem for L
:
ALAOGLUS THEOREM. Let
n
/, n = 1, . . . . Then there exists a subsequence
n
k
and /, such that
n
k
.
27
DEFINITIONS. (i) The set Kis convex if for all x, x Kand all real numbers 0 1,
x + (1 ) x K.
(ii) A point z K is called extreme provided there do not exist points x, x K and
0 < < 1 such that
z = x + (1 ) x.
KREIN-MILMAN THEOREM. Let K be a convex, nonempty subset of L
, which is
compact in the weak topology.
Then K has at least one extreme point.
2.5.2 APPLICATION TO BANG-BANG CONTROLS.
The foregoing abstract theory will be useful for us in the following setting. We will take
K to be the set of controls which steer x
0
to 0 at time t, prove it satises the hypotheses
of KreinMilman Theorem and nally show that an extreme point is a bang-bang control.
So consider again the linear dynamics
(ODE)
_
x(t) = Mx(t) +N(t)
x(0) = x
0
.
Take x
0
((t) and write
K = () / [() steers x
0
to 0 at time t.
LEMMA 2.9 (GEOMETRY OF SET OF CONTROLS). The collection K of ad-
missible controls satises the hypotheses of the KreinMilman Theorem.
Proof. Since x
0
((t), we see that K ,= .
Next we show that K is convex. For this, recall that () K if and only if
x
0
=
_
t
0
X
1
(s)N(s) ds.
Now take also K and 0 1. Then
x
0
=
_
t
0
X
1
(s)N (s) ds;
and so
x
0
=
_
t
0
X
1
(s)N((s) + (1 ) (s)) ds
Hence + (1 ) K.
28
Lastly, we conrm the compactness. Let
n
K for n = 1, . . . . According to Alaoglus
Theorem there exist n
k
and / such that
n
k
() is bang-bang.
Proof. 1. We must show that for almost all times 0 s t and for each i = 1, . . . , m, we
have
[
i
(s)[ = 1.
Suppose not. Then there exists an index i 1, . . . , m and a subset E [0, t] of
positive measure such that [
i
(s)[ < 1 for s E. In fact, there exist a number > 0 and
a subset F E such that
[F[ > 0 and [
i
(s)[ 1 for s F.
Dene
I
F
(()) :=
_
F
X
1
(s)N(s) ds,
for
() := (0, . . . , (), . . . , 0)
T
,
the function in the i
th
slot. Choose any real-valued function () , 0, such that
I
F
(()) = 0
and [()[ 1. Dene
1
() :=
() +()
2
() :=
() (),
where we redene to be zero o the set F
29
2. We claim that
1
(),
2
() K.
To see this, observe that
_
t
0
X
1
(s)N
1
(s) ds =
_
t
0
X
1
(s)N
(s) ds
_
t
0
X
1
(s)N(s) ds
= x
0
_
F
X
1
(s)N(s) ds
. .
I
F
(())=0
= x
0
.
Note also
1
() /. Indeed,
_
1
(s) =
(s) (s / F)
1
(s) =
i
(s)[ 1 , and therefore
[
1
(s)[ [
(s)[ +[(s)[ 1 + = 1.
Similar considerations apply for
2
. Hence
1
,
2
K, as claimed above.
3. Finally, observe that
_
1
=
+,
1
,=
2
=
,
2
,=
.
But
1
2
1
+
1
2
2
=
;
and this is a contradiction, since
is an extreme point of K.
2.6 REFERENCES.
See Chapters 2 and 3 of MackiStrauss [M-S]. An interesting recent article on these
matters is Terrell [T].
30
CHAPTER 3: LINEAR TIME-OPTIMAL CONTROL
3.1 Existence of time-optimal controls
3.2 The Maximum Principle for linear time-optimal control
3.3 Examples
3.4 References
3.1 EXISTENCE OF TIME-OPTIMAL CONTROLS.
Consider the linear system of ODE:
(ODE)
_
x(t) = Mx(t) +N(t)
x(0) = x
0
,
for given matrices M M
nn
and N M
nm
. We will again take A to be the cube
[1, 1]
m
R
m
.
Dene next
(P) P[()] :=
_
0
1 ds = ,
where = (()) denotes the rst time the solution of our ODE (3.1) hits the origin 0.
(If the trajectory never hits 0, we set = .)
OPTIMAL TIME PROBLEM: We are given the starting point x
0
R
n
, and want to
nd an optimal control
() such that
P[
()] = max
()A
P[()].
Then
= T[
().
Proof. Let
:= inft [ x
0
((t). We want to show that x
0
((
() steering x
0
to 0 at time
.
Choose t
1
t
2
t
3
. . . so that x
0
((t
n
) and t
n
. Since x
0
((t
n
), there
exists a control
n
() / such that
x
0
=
_
t
n
0
X
1
(s)N
n
(s) ds.
If necessary, redene
n
(s) to be 0 for t
n
s. By Alaoglus Theorem, there exists a
subsequence n
k
and a control
() so that
.
31
We assert that
(s) = 0, s
.
Also
x
0
=
_
t
n
k
0
X
1
(s)N
n
k
(s) ds =
_
t
1
0
X
1
(s)N
n
k
(s) ds,
since
n
k
= 0 for s t
n
k
. Let n
k
:
x
0
=
_
t
1
0
X
1
(s)N
(s) ds =
_
0
X
1
(s)N
(s) ds
because
(s) = 0 for s
. Hence x
0
((
), and therefore
() is optimal.
According to Theorem 2.10 there in fact exists an opimal bang-bang control.
3.2 THE MAXIMUM PRINCIPLE FOR LINEAR TIME OPTIMAL CON-
TROL
The really interesting practical issue now is understanding how to compute an optimal
control
().
DEFINITION. We dene K(t, x
0
) to be the reachable set for time t. That is,
K(t, x
0
) = x
1
[ there exists () / which steers from x
0
to x
1
at time t.
Since x() solves (ODE), we have x
1
K(t, x
0
) if and only if
x
1
= X(t)x
0
+X(t)
_
t
0
X
1
(s)N(s) ds = x(t)
for some control () /.
THEOREM 3.2 (GEOMETRY OF THE SET K). The set K(t, x
0
) is convex and
closed.
Proof. 1. (Convexity) Let x
1
, x
2
K(t, x
0
). Then there exists
1
,
2
/ such that
x
1
= X(t)x
0
+X(t)
_
t
0
X
1
(s)N
1
(s) ds
x
2
= X(t)x
0
+X(t)
_
t
0
X
1
(s)N
2
(s) ds.
Let 0 1. Then
x
1
+ (1 )x
2
= X(t)x
0
+X(t)
_
t
0
X
1
(s)N(
1
(s) + (1 )
2
(s))
. .
A
ds,
32
and hence x
1
+ (1 )x
2
K(t, x
0
).
2. (Closedness) Assume x
k
K(t, x
0
) for (k = 1, 2, . . . ) and x
k
y. We must show
y K(t, x
0
). As x
k
K(t, x
0
), there exists
k
() / such that
x
k
= X(t)x
0
+X(t)
_
t
0
X
1
(s)N
k
(s) ds.
According to Alaoglus Theorem, there exist a subsequence k
j
and / such
that
k
. Let k = k
j
in the expression above, to nd
y = X(t)x
0
+X(t)
_
t
0
X
1
(s)N(s) ds.
Thus y K(t, x
0
), and hence K(t, x
0
) is closed.
NOTATION. If S is a set, we write S to denote the boundary of S.
Recall that
denotes the minimum time it takes to steer to 0, using the optimal control
, x
0
).
THEOREM 3.3 (PONTRYAGIN MAXIMUM PRINCIPLE FOR LINEAR
TIME-OPTIMAL CONTROL). There exists a nonzero vector h such that
(M) h
T
X
1
(t)N
(t) = max
aA
h
T
X
1
(t)Na
for each time 0 t
.
INTERPRETATION. The signicance of this assertion is that if we know h then the
maximization principle (M) provides us with a formula for computing
(), or at least
extracting useful information.
We will see in the next chapter that assertion (M) is a special case of the general
Pontryagin Maximum Principle.
Proof. 1. We know 0 K(
, x
0
). Since K(
, x
0
) is convex, there exists a supporting
plane to K(
, x
0
) at 0; this means that for some g ,= 0, we have
g x
1
0 for all x
1
K(
, x
0
).
2. Now x
1
K(
, x
0
) if and only if there exists () / such that
x
1
= X(
)x
0
+X(
)
_
0
X
1
(s)N(s) ds.
Also
0 = X(
)x
0
+X(
)
_
0
X
1
(s)N
(s) ds.
33
Since g x
1
0, we deduce that
g
T
_
X(
)x
0
+X(
)
_
0
X
1
(s)N(s) ds
_
0 = g
T
_
X(
)x
0
+X(
)
_
0
X
1
(s)N
(s) ds
_
.
Dene h
T
:= g
T
X(
). Then
_
0
h
T
X
1
(s)N(s) ds
_
0
h
T
X
1
(s)N
(s) ds;
and therefore
_
0
h
T
X
1
(s)N(
(s) (s)) ds 0
for all controls () /.
3. We claim now that the foregoing implies
h
T
X
1
(s)N
(s) = max
aA
h
T
X
1
(s)Na
for almost every time s.
For suppose not; then there would exist a subset E [0,
(s) (s / E)
(s) (s E)
where (s) is selected so that
max
aA
h
T
X
1
(s)Na = h
T
X
1
(s)N(s).
Then
_
E
h
T
X
1
(s)N(
(s) (s))
. .
<0
ds 0.
This contradicts Step 2 above.
For later reference, we pause here to rewrite the foregoing into dierent notation; this
will turn out to be a special case of the general theory developed later in Chapter 4. First
of all, dene the Hamiltonian
H(x, p, a) := (Mx +Na) p (x, p R
n
, a A).
34
THEOREM 3.4 (ANOTHER WAY TO WRITE PONTRYAGIN MAXIMUM
PRINCIPLE FOR TIME-OPTIMAL CONTROL). Let
() be a time optimal
control and x
() : [0,
] R
n
, such that
(ODE) x
(t) =
p
H(x
(t), p
(t),
(t)),
(ADJ) p
(t) =
x
H(x
(t), p
(t),
(t)),
and
(M) H(x
(t), p
(t),
(t)) = max
aA
H(x
(t), p
(t), a).
We call (ADJ) the adjoint equations and (M) the maximization principle. The function
p
() is the costate.
Proof. 1. Select the vector h as in Theorem 3.3, and consider the system
_
p
(t) = M
T
p
(t)
p
(0) = h.
The solution is p
(t) = e
tM
T
h; and hence
p
(t)
T
= h
T
X
1
(t),
since (e
tM
T
)
T
= e
tM
= X
1
(t).
2. We know from condition (M) in Theorem 3.3 that
h
T
X
1
(t)N
(t) = max
aA
h
T
X
1
(t)Na
Since p
(t)
T
= h
T
X
1
(t), this means that
p
(t)
T
(Mx
(t) +N
(t)) = max
aA
p
(t)
T
(Mx
(t) +Na).
3. Finally, we observe that according to the denition of the Hamiltonian H, the
dynamical equations for x
(), p
(t) = max
|a|1
h
T
X
1
(t)Na.
We will extract the interesting fact that an optimal control
(t) = max
|a|1
(th
1
+h
2
)a;
and this implies that
(t) = sgn(th
1
+h
2
)
for the sign function
sgnx =
_
_
1 x > 0
0 x = 0
1 x < 0.
36
Therefore the optimal control
is constant.
Since the optimal control switches at most once, then the control we constructed by a
geometric method in 1.3 must have been optimal.
EXAMPLE 2: CONTROL OF A VIBRATING SPRING. Consider next the simple
dynamics
x +x = ,
where we interpret the control as an exterior force acting on an oscillating weight (of unit
mass) hanging from a spring. Our goal is to design an optimal exterior forcing
() that
brings the motion to a stop in minimum time.
spring
nass
We have n = 2, m = 1. The individual dynamical equations read:
_
x
1
(t) = x
2
(t)
x
2
(t) = x
1
(t) +(t);
which in vector notation become
(ODE) x(t) =
_
0 1
1 0
_
. .
M
x(t) +
_
0
1
_
. .
N
(t)
for [(t)[ 1. That is, A = [1, 1].
Using the maximum principle. We employ the Pontryagin Maximum Principle,
which asserts that there exists h ,= 0 such that
(M) h
T
X
1
(t)N
(t) = max
aA
h
T
X
1
(t)Na.
To extract useful information from (M) we must compute X(). To do so, we observe
that the matrix M is skew symmetric, and thus
M
0
= I, M =
_
0 1
1 0
_
, M
2
=
_
1 0
0 1
_
= I
Therefore
M
k
=I if k = 0, 4, 8, . . .
M
k
=M if k = 1, 5, 9, . . .
M
k
=I if k = 2, 6, . . .
M
k
=M if k = 3, 7, . . . ;
37
and consequently
e
tM
= I +tM +
t
2
2!
M
2
+. . .
= I +tM
t
2
2!
I
t
3
3!
M +
t
4
4!
I +. . .
= (1
t
2
2!
+
t
4
4!
. . . )I + (t
t
3
3!
+
t
5
5!
. . . )M
= cos tI + sintM =
_
cos t sint
sint cos t
_
.
So we have
X
1
(t) =
_
cos t sint
sint cos t
_
and
X
1
(t)N =
_
cos t sint
sint cos t
__
0
1
_
=
_
sint
cos t
_
;
whence
h
T
X
1
(t)N = (h
1
, h
2
)
_
sint
cos t
_
= h
1
sint +h
2
cos t.
According to condition (M), for each time t we have
(h
1
sint +h
2
cos t)
(t) = max
|a|1
(h
1
sint +h
2
cos t)a.
Therefore
(t) = sgn(h
1
sint +h
2
cos t).
Finding the optimal control. To simplify further, we may assume h
2
1
+h
2
2
= 1. Recall
the trig identity sin(x + y) = sinxcos y + cos xsiny, and choose such that h
1
= cos ,
h
2
= sin. Then
() : [0, T] R
n
that minimizes the functional
(4.1) I[x()] :=
_
T
0
L(x(t), x(t)) dt
among all functions x() satisfying x(0) = x
0
and x(T) = x
1
.
Now assume x
()?
4.1.1 DERIVATION OF EULERLAGRANGE EQUATIONS.
NOTATION. We write L = L(x, v), and regard the variable x as denoting position, the
variable v as denoting velocity. The partial derivatives of L are
L
x
i
= L
x
i
,
L
v
i
= L
v
i
(1 i n),
and we write
x
L := (L
x
1
, . . . , L
x
n
),
v
L := (L
v
1
, . . . , L
v
n
).
41
THEOREM 4.1 (EULERLAGRANGE EQUATION). Let x
(t), x
(t))] =
x
L(x
(t), x
(t)).
The signicance of preceding theorem is that if we can solve the EulerLagrange equa-
tions (E-L), then the solution of our original calculus of variations problem (assuming it
exists) will be among the solutions.
Note that (E-L) is a quasilinear system of n secondorder ODE. The i
th
component of
the system reads
d
dt
[L
v
i
(x
(t), x
(t))] = L
x
i
(x
(t), x
(t)).
Proof. 1. Select any smooth curve y[0, T] R
n
, satisfying y(0) = y(T) = 0. Dene
i() := I[x() +y()]
for R and x() = x
(0) = 0.
2. We must compute i
() =
_
T
0
_
n
i=1
L
x
i
(x(t) +y(t), x(t) + y(t))y
i
(t) +
n
i=1
L
v
i
( ) y
i
(t)
_
dt.
Let = 0. Then
0 = i
(0) =
n
i=1
_
T
0
L
x
i
(x(t), x(t))y
i
(t) +L
v
i
(x(t), x(t)) y
i
(t) dt.
This equality holds for all choices of y : [0, T] R
n
, with y(0) = y(T) = 0.
42
3. Fix any 1 j n. Choose y() so that
y
i
(t) 0 i ,= j, y
j
(t) = (t),
where is an arbitary function. Use this choice of y() above:
0 =
_
T
0
L
x
j
(x(t), x(t))(t) +L
v
j
(x(t), x(t))
(t) dt.
Integrate by parts, recalling that (0) = (T) = 0:
0 =
_
T
0
_
L
x
j
(x(t), x(t))
d
dt
_
L
v
j
(x(t), x(t))
_
_
(t) dt.
This holds for all : [0, T] R, (0) = (T) = 0 and therefore
L
x
j
(x(t), x(t))
d
dt
_
L
v
j
(x(t), x(t))
_
= 0
for all times 0 t T. To see this, observe that otherwise L
x
j
d
dt
(L
v
j
) would be, say,
positive on some subinterval on I [0, T]. Choose 0 o I, > 0 on I. Then
_
T
0
_
L
x
j
d
dt
_
L
v
j
_
_
dt > 0,
a contradiction.
4.1.2 CONVERSION TO HAMILTONS EQUATIONS.
DEFINITION. For the given curve x(), dene
p(t) :=
v
L(x(t), x(t)) (0 t T).
We call p() the generalized momentum.
Our intention now is to rewrite the EulerLagrange equations as a system of rstorder
ODE for x(), p().
IMPORTANT HYPOTHESIS: Assume that for all x, p R
n
, we can solve the equa-
tion
(4.2) p =
v
L(x, v)
for v in terms of x and p. That is, we suppose we can solve the identity (4.2) for v = v(x, p).
43
DEFINITION. Dene the dynamical systems Hamiltonian H : R
n
R
n
R by the
formula
H(x, p) = p v(x, p) L(x, v(x, p)),
where v is dened above.
NOTATION. The partial derivatives of H are
H
x
i
= H
x
i
,
H
p
i
= H
p
i
(1 i n),
and we write
x
H := (H
x
1
, . . . , H
x
n
),
p
H := (H
p
1
, . . . , H
p
n
).
THEOREM 4.2 (HAMILTONIAN DYNAMICS). Let x() solve the EulerLagrange
equations (E-L) and dene p()as above. Then the pair (x(), p()) solves Hamiltons equa-
tions:
(H)
_
x(t) =
p
H(x(t), p(t))
p(t) =
x
H(x(t), p(t))
Furthermore, the mapping t H(x(t), p(t)) is constant.
Proof. Recall that H(x, p) = p v(x, p) L(x, v(x, p)), where v = v(x, p) or, equivalently,
p =
v
L(x, v). Then
x
H(x, p) = p
x
v
x
L(x, v(x, p))
v
L(x, v(x, p))
x
v
=
x
L(x, v(x, p))
because p =
v
L. Now p(t) =
v
L(x(t), x(t)) if and only if x(t) = v(x(t), p(t)). Therefore
(E-L) implies
p(t) =
x
L(x(t), x(t))
=
x
L(x(t), v(x(t), p(t))) =
x
H(x(t), p(t)).
Also
p
H(x, p) = v(x, p) +p
p
v
v
L
p
v = v(x, p)
since p =
v
L(x, v(x, p)). This implies
p
H(x(t), p(t)) = v(x(t), p(t)).
But
p(t) =
v
L(x(t), x(t))
44
and so x(t) = v(x(t), p(t)). Therefore
x(t) =
p
H(x(t), p(t)).
Finally note that
d
dt
H(x(t), p(t)) =
x
H x(t) +
p
H p(t) =
x
H
p
H +
p
H (
x
H) = 0.
x
L = V (x),
v
L = mv.
Therefore the Euler-Lagrange equation is
m x(t) = V (x(t)),
which is Newtons law. Furthermore
p =
v
L(x, v) = mv
is the momentum, and the Hamiltonian is
H(x, p) = p
p
m
L
_
x,
p
m
_
=
[p[
2
m
m
2
p
m
2
+V (x) =
[p[
2
2m
+V (x),
the sum of the kinetic and potential energies. For this example, Hamiltons equations read
_
x(t) =
p(t)
m
p(t) = V (x(t)).
) = max
xR
n f(x), then x
is a critical point of f:
f(x
) = 0.
CONSTRAINED OPTIMIZATION. We modify the problem above by introducing
the region
R := x R
n
[ g(x) 0,
determined by some given function g : R
n
R. Suppose x
R and f(x
) = max
xR
f(x).
We would like a characterization of x
) = 0.
R
X
*
gradient oI I
figure 1
Case 2: x
). A geometric
picture like Figure 1 is impossible; for if it were so, then f(y
)
for some other point y
R. So it must be f(x
) is perpendicular to R at x
, as
shown in Figure 2.
46
R = |g < 0|
X
*
gradient oI I
gradient oI g
figure 2
Since g is perpendicular to R = g = 0, it follows that f(x
) is parallel to g(x
).
Therefore
(4.4) f(x
) = g(x
)
for some real number , called a Lagrange multiplier.
CRITIQUE. The foregoing argument is in fact incomplete, since we implicitly assumed
that g(x
) ,= 0, in which case the Implicit Function Theorem implies that the set g = 0
is an (n 1)-dimensional surface near x
(as illustrated).
If instead g(x
; and the
reasoning discussed as Case 2 above is not complete.
The correct statement is this:
(4.5)
_
There exist real numbers and , not both equal to 0, such that
f(x
) = g(x
).
If ,= 0, we can divide by and convert to the formulation (4.4). And if g(x
) = 0, we
can take = 1, = 0, making assertion (4.5) correct (if not particularly useful).
4.3 STATEMENT OF PONTRYAGIN MAXIMUM PRINCIPLE
We come now to the key assertion of this chapter, the theoretically interesting and prac-
tically useful theorem that if
(),
called the costate, that satises a certain maximization principle. We should think of the
function p
() such that
P[
()] = max
()A
P[()].
The Pontryagin Maximum Principle, stated below, asserts the existence of a function
p
() is
optimal for (ODE), (P) and x
: [0, T] R
n
such that
(ODE) x
(t) =
p
H(x
(t), p
(t),
(t)),
(ADJ) p
(t) =
x
H(x
(t), p
(t),
(t)),
and
(M) H(x
(t), p
(t),
(t)) = max
aA
H(x
(t), p
(t), a) (0 t
).
48
In addtion,
the mapping t H(x
(t), p
(t),
(t)) is constant.
Finally, we have the terminal condition
(T) p
(T) = g(x
(T)).
REMARKS AND INTERPRETATIONS. (i) The identities (ADJ) are the adjoint
equations and (M) the maximization principle. Notice that (ODE) and (ADJ) resemble
the structure of Hamiltons equations, discussed in 4.1.
We also call (T) the transversality condition and will discuss its signicance later.
(ii) More precisely, formula (ODE) says that for 1 i n, we have
x
i
(t) = H
p
i
(x
(t), p
(t),
(t)) = f
i
(x
(t),
(t)),
which is just the original equation of motion. Likewise, (ADJ) says
p
i
(t) = H
x
i
(x
(t), p
(t),
(t))
=
n
j=1
p
j
(t)f
j
x
i
(x
(t),
(t)) r
x
i
(x
(t),
(t)).
4.3.2 FREE TIME, FIXED ENDPOINT PROBLEM. Let us next record the ap-
propriate form of the Maximum Principle for a xed endpoint problem.
As before, given a control () /, we solve for the corresponding evolution of our
system:
(ODE)
_
x(t) = f (x(t), (t)) (t 0)
x(0) = x
0
.
Assume now that a target point x
1
R
n
is given. We introduce then the payo
functional
(P) P[()] =
_
0
r(x(t), (t)) dt
Here r : R
n
A R is the given running payo, and = [()] denotes the rst
time the solution of (ODE) hits the target point x
1
.
As before, the basic problem is to nd an optimal control
() such that
P[
()] = max
()A
P[()].
Dene the Hamilton H as in 4.3.1.
49
THEOREM 4.4 (PONTRYAGIN MAXIMUM PRINCIPLE). Assume
() is
optimal for (ODE), (P) and x
: [0,
] R
n
such that
(ODE) x
(t) =
p
H(x
(t), p
(t),
(t)),
(ADJ) p
(t) =
x
H(x
(t), p
(t),
(t)),
and
(M) H(x
(t), p
(t),
(t)) = max
aA
H(x
(t), p
(t), a) (0 t
).
Also,
H(x
(t), p
(t),
(t)) 0 (0 t
).
Here
() the costate.
REMARK AND WARNING. More precisely, we should dene
H(x, p, q, a) = f (x, a) p +r(x, a)q (q R).
A more careful statement of the Maximum Principle says there exists a constant q 0
and a function p
: [0, t
] R
n
such that (ODE), (ADJ), and (M) hold.
If q > 0, we can renormalize to get q = 1, as we have done above. If q = 0, then H does
not depend on running payo r and in this case the Pontryagin Maximum Principle is not
useful. This is a socalled abnormal problem.
Compare these comments with the critique of the usual Lagrange multiplier method at
the end of 4.2, and see also the proof in A.5 of the Appendix.
4.4 APPLICATIONS AND EXAMPLES
HOW TO USE THE MAXIMUM PRINCIPLE. We mentioned earlier that the
costate p
= (x
1
, . . . , x
n
)
T
has n unknown components we must nd. Somewhat unexpectedly, it
turns out in practice to be easier to solve (4.4) for the n+1 unknowns x
1
, . . . , x
n
and . We
repeat this key insight: it is actually easier to solve the problem if we add a new unknown,
namely the Lagrange multiplier. Worked examples abound in multivariable calculus books.
50
Calculations with the costate. This same principle is valid for our much more
complicated control theory problems: it is usually best not just to look for an optimal
control
(). In practice, we add the equations (ADJ) and (M) to (ODE) and then try to solve
for
(), x
() and for p
().
The following examples show how this works in practice, in certain cases for which
we can actually solve everything explicitly or, failing that, at least deduce some useful
information.
4.4.1 EXAMPLE 1: LINEAR TIME-OPTIMAL CONTROL. For this example,
let A denote the cube [1, 1]
m
in R
n
. We consider again the linear dynamics:
(ODE)
_
x(t) = Mx(t) +N(t)
x(0) = x
0
,
for the payo functional
(P) P[()] =
_
0
1 dt = ,
where denotes the rst time the trajectory hits the target point x
1
= 0. We have r 1,
and so
H(x, p, a) = f p +r = (Mx +Na) p 1.
In Chapter 3 we introduced the Hamiltonian H = (Mx + Na) p, which diers by a
constant from the present H. We can redene H in Chapter III to match the present
theory: compare then Theorems 3.4 and 4.3.
4.4.2 EXAMPLE 2: CONTROL OF PRODUTION AND CONSUMPTION.
We return to Example 1 in Chapter 1, a model for optimal consumption in a simple
economy. Recall that
x(t) = output of economy at time t,
(t) = fraction of output reinvested at time t.
We have the constraint 0 (t) 1; that is, A = [0, 1] R. The economy evolves
according to the dynamics
(ODE)
_
x(t) = (t)x(t) (0 t T)
x(0) = x
0
where x
0
> 0 and we have set the growth factor k = 1. We want to maximize the total
consumption
(P) P[()] :=
_
T
0
(1 (t))x(t) dt
51
How can we characterize an optimal control
()?
Introducing the maximum principle. We apply Pontryagin Maximum Principle,
and to simplify notation we will not write the superscripts for the optimal control,
trajectory, etc. We have n = m = 1,
f(x, a) = xa, g 0, r(x, a) = (1 a)x;
and therefore
H(x, p, a) = f(x, a)p +r(x, a) = pxa + (1 a)x = x +ax(p 1).
The dynamical equation is
(ODE) x(t) = H
p
= (t)x(t),
and the adjoint equation is
(ADJ) p(t) = H
x
= 1 (t)(p(t) 1).
The terminal condition reads
(T) p(T) = g
x
(x(T)) = 0.
Lastly, the maximality principle asserts
(M) H(x(t), p(t), (t)) = max
0a1
x(t) +ax(t)(p(t) 1)
Using the maximum principle. We now deduce useful information from (ODE),
(ADJ), (M) and (T).
According to (M), at each time t the control value (t) must be selected to maximize
a(p(t) 1) for 0 a 1. This is so, since x(t) > 0. Thus
(t) =
_
1 if p(t) > 1
0 if p(t) 1.
Hence if we know p(), we can design the optimal control ().
So next we must solve for the costate p(). We know from (ADJ) and (T) that
_
p(t) = 1 (t)[p(t) 1] (0 t T)
p(T) = 0
52
Since p(T) = 0, we deduce by continuity that p(t) 1 for t close to T, t < T. Thus
(t) = 0 for such values of t. Therefore p(t) = 1, and consequently p(t) = T t for times
t in this interval. So we have that p(t) = T t so long as p(t) 1. And this holds for
T 1 t T
But for times t T 1, with t near T 1, we have (t) = 1; and so (ADJ) becomes
p(t) = 1 (p(t) 1) = p(t).
Since p(T 1) = 1, we see that p(t) = e
T1t
> 1 for all times 0 t T 1. In particular
there are are no switches in the control over this time interval.
Restoring the superscript * to our notation, we consequently deduce that an optimal
control is
(t) =
_
1 if 0 t t
0 if t
t T
for the optimal switching time t
= T 1.
We leave it as an exercise to compute the switching time if the growth constant k ,= 1.
d =
p
x
p x
x
2
,
and recall that
_
x = x +
p
2
p = 2x p.
Therefore
d =
2x p
x
p
x
2
_
x +
p
2
_
= 2 d d
_
1 +
d
2
_
= 2 2d
d
2
2
.
Since p(T) = 0, the terminal condition is d(T) = 0.
So we have obtained a nonlinear rstorder ODE in p() with a terminal boundary
condition:
(R)
_
d = 2 2d
1
2
d
2
(0 t < T)
d(T) = 0.
This is called the Riccati equation.
In summary so far, to solve our linearquadratic regulator problem, we need to rst
solve the Riccati equation (R) and then set
(t) =
1
2
d(t)x(t).
How to solve the Riccati equation. It turns out that we can convert (R) it into a
secondorder, linear ODE. To accomplish this, write
d(t) =
2
b(t)
b(t)
for a function b() to be found. What equation does b() solve? We compute
d =
2
b
b
2(
b)
2
b
2
=
2
b
b
d
2
2
.
Hence (R) gives
2
b
b
=
d +
d
2
2
= 2 2d = 2 2
2
b
b
;
55
and consequently
_
b = b 2
b (0 t < T)
b(T) = 0, b(T) = 1.
This is a terminal-value problem for secondorder linear ODE, which we can solve by
standard techniques. We then set d =
2
b
b
, to derive the solution of the Riccati equation
(R).
We will generalize this example later to systems, in 5.2.
4.4.4 EXAMPLE 4: MOON LANDER. This is a much more elaborate and inter-
esting example, already introduced in Chapter 1. We follow the discussion of Fleming and
Rishel [F-R].
Introduce the notation
h(t) = height at time t
v(t) = velocity =
h(t)
m(t) = mass of spacecraft (changing as fuel is used up)
(t) = thrust at time t.
The thrust is constrained so that 0 (t) 1; that is, A = [0, 1]. There are also the
constraints that the height and mass be nonnegative: h(t) 0, m(t) 0.
The dynamics are
(ODE)
_
_
_
h(t) = v(t)
v(t) = g +
(t)
m(t)
m(t) = k(t),
with initial conditions
_
_
h(0) = h
0
> 0
v(0) = v
0
m(0) = m
0
> 0.
The goal is to land on the moon safely, maximizing the remaining fuel m(), where
= [()] is the rst time h() = v() = 0. Since =
m
k
, our intention is equivalently
to minimize the total applied thrust before landing; so that
(P) P[()] =
_
0
(t) dt.
This is so since
_
0
(t) dt =
m
0
m()
k
.
Introducing the maximum principle. In terms of the general notation, we have
x(t) =
_
_
h(t)
v(t)
m(t)
_
_
, f =
_
_
v
g +a/m
ka
_
_
.
56
Hence the Hamiltonian is
H(x, p, a) = f p +r
= (v, g +a/m, ka) (p
1
, p
2
, p
3
) a
= a +p
1
v +p
2
_
g +
a
m
_
+p
3
(ka).
We next have to gure out the adjoint dynamics (ADJ). For our particular Hamiltonian,
H
x
1
= H
h
= 0, H
x
2
= H
v
= p
1
, H
x
3
= H
m
=
p
2
a
m
2
.
Therefore
(ADJ)
_
_
p
1
(t) = 0
p
2
(t) = p
1
(t)
p
3
(t) =
p
2
(t)(t)
m(t)
2
.
The maximization condition (M) reads
(M)
H(x(t), p(t), (t)) = max
0a1
H(x(t), p(t), a)
= max
0a1
_
a +p
1
(t)v(t) +p
2
(t)
_
g +
a
m(t)
_
+p
3
(t)(ka)
_
= p
1
(t)v(t) p
2
(t)g + max
0a1
_
a
_
1 +
p
2
(t)
m(t)
kp
3
(t)
__
.
Thus the optimal control law is given by the rule:
(t) =
_
_
_
1 if 1
p
2
(t)
m(t)
+kp
3
(t) < 0
0 if 1
p
2
(t)
m(t)
+kp
3
(t) > 0.
Using the maximum principle. Now we will attempt to gure out the form of the
solution, and check it accords with the Maximum Principle.
Let us start by guessing that we rst leave rocket engine of (i.e., set 0) and turn
the engine on only at the end. Denote by the rst time that h() = v() = 0, meaning
that we have landed. We guess that there exists a switching time t
1 for t
t .
Therefore, for times t
h(t) = v(t)
v(t) = g +
1
m(t)
(t
t )
m(t) = k
57
with h() = 0, v() = 0, m(t
) = m
0
. We solve these dynamics:
_
_
m(t) = m
0
+k(t
t)
v(t) = g( t) +
1
k
log
_
m
0
+k(t
)
m
0
+k(t
t)
_
h(t) = complicated formula.
Now put t = t
:
_
_
m(t
) = m
0
v(t
) = g( t
) +
1
k
log
_
m
0
+k(t
)
m
0
_
h(t
) =
g(t
)
2
2
m
0
k
2
log
_
m
0
+k(t
)
m
0
_
+
t
k
.
Suppose the total amount of fuel to start with was m
1
; so that m
0
m
1
is the weight
of the empty spacecraft. When 1, the fuel is used up at rate k. Hence
k( t
) m
1
,
and so 0 t
m
1
k
.
v axis
h axis
t*=n
1
/k
powered descent tra|ectory ( = 1)
Before time t
h = v
v = g
m = 0;
and thus
_
_
m(t) m
0
v(t) = gt +v
0
h(t) =
1
2
gt
2
+tv
0
+h
0
.
58
We combine the formulas for v(t) and h(t), to discover
h(t) = h
0
1
2g
(v
2
(t) v
2
0
) (0 t t
).
We deduce that the freefall trajectory (v(t), h(t)) therefore lies on a parabola
h = h
0
1
2g
(v
2
v
2
0
).
v axis
h axis
IreeIall tra|ectory ( = o)
powered tra|ectory ( = 1)
If we then move along this parabola until we hit the soft-landing curve from the previous
picture, we can then turn on the rocket engine and land safely.
v axis
h axis
In the second case illustrated, we miss switching curve, and hence cannot land safely on
the moon switching once.
59
To justify our guess about the structure of the optimal control, let us now nd the
costate p() so that () and x() described above satisfy (ODE), (ADJ), (M). To do this,
we will have have to gure out appropriate initial conditions
p
1
(0) =
1
, p
2
(0) =
2
, p
3
(0) =
3
.
We solve (ADJ) for () as above, and nd
_
_
p
1
(t)
1
(0 t )
p
2
(t) =
2
1
t (0 t )
p
3
(t) =
_
3
(0 t t
3
+
_
t
1
s
(m
0
+k(s))
2
ds (t
t ).
Dene
r(t) := 1
p
2
(t)
m(t)
+p
3
(t)k;
then
r =
p
2
m
+
p
2
m
m
2
+ p
3
k =
1
m
+
p
2
m
2
(k) +
_
p
2
m
2
_
k =
1
m(t)
.
Choose
1
< 0, so that r is decreasing. We calculate
r(t
) = 1
(
2
1
t
)
m
0
+
3
k
and then adjust
2
,
3
so that r(t
) = 0.
Then r is nonincreasing, r(t
), r < 0 on (t
, ].
But (M) says
(t) =
_
1 if r(t) < 0
0 if r(t) > 0.
Thus an optimal control changes just once from 0 to 1; and so our earlier guess of ()
does indeed satisfy the Pontryagin Maximum Principle.
4.5 MAXIMUM PRINCIPLE WITH TRANSVERSALITY CONDITIONS
Consider again the dynamics
(ODE) x(t) = f (x(t), (t)) (t > 0)
In this section we discuss another variant problem, one for which the intial position is
constrained to lie in a given set X
0
R
n
and the nal position is also constrained to lie
within a given set X
1
R
n
.
60
X
1
X
0
X
0
X
1
So in this model we get to choose the the starting point x
0
X
0
in order to maximize
(P) P[()] =
_
0
r(x(t), (t)) dt,
where = [()] is the rst time we hit X
1
.
NOTATION. We will assume that X
0
, X
1
are in fact smooth surfaces in R
n
. We let
T
0
denote the tangent plane to X
0
at x
0
, and T
1
the tangent plane to X
1
at x
1
.
THEOREM 4.5 (MORE TRANSVERSALITY CONDITIONS). Let
() and
x
(0), x
1
= x
).
Then there exists a function p
() : [0,
] R
n
, such that (ODE), (ADJ) and (M) hold
for 0 t
. In addition,
(T)
_
p
) is perpendicular to T
1
,
p
(0) is perpendicular to T
0
.
We call (T) the transversality conditions.
REMARKS AND INTERPRETATIONS. (i) If we have T > 0 xed and
P[()] =
_
T
0
r(x(t), (t)) dt +g(x(T)),
then (T) says
p
(T) = g(x
(T)),
61
in agreement with our earlier form of the terminal/transversality condition.
(ii) Suppose that the surface X
1
is the graph X
1
= x [ g
k
(x) = 0, k = 1, . . . , l. Then
(T) says that p
) =
l
k=1
k
g
k
(x
1
)
for some unknown constants
1
, . . . ,
l
.
4.6 MORE APPLICATIONS
4.6.1 EXAMPLE 1: DISTANCE BETWEEN TWO SETS. As a rst and simple
example, let
(ODE) x(t) = (t)
for A = S
1
, the unit sphere in R
2
: a S
1
if and only if [a[
2
= a
2
1
+a
2
2
= 1. In other words,
we are considering only curves that move with unit speed.
We take
(P)
P[()] =
_
0
[ x(t)[ dt = the length of the curve
=
_
0
dt = time it takes to reach X
1
.
We want to minimize the length of the curve and, as a check on our general theory, will
prove that the minimum is of course a straight line.
Using the maximum principle. We have
H(x, p, a) = f (x, a) p +r(x, a)
= a p 1 = p
1
a
1
+p
2
a
2
1.
The adjoint dynamics equation (ADJ) says
p(t) =
x
H(x(t), p(t), (t)) = 0,
and therefore
p(t) constant = p
0
,= 0.
The maximization principle (M) tells us that
H(x(t), p(t), (t)) = max
aS
1
[1 +p
0
1
a
1
+p
0
2
a
2
].
62
The right hand side is maximized by a
0
=
p
0
|p
0
|
, a unit vector that points in the same
direction of p
0
. Thus () a
0
is constant in time. According then to (ODE) we have
x = a
0
, and consequently x() is a straight line.
Finally, the transversality conditions say that
(T) p(0) T
0
, p(t
1
) T
1
.
In other words, p
0
T
0
and p
0
T
1
; and this means that the tangent planes T
0
and T
1
are parallel.
X
1
X
0
X
0
X
1
T
1
T
0
Now all of this is pretty obvious from the picture, but it is reassuring that the general
theory predicts the proper answer.
4.6.2 EXAMPLE 2: COMMODITY TRADING. Next is a simple model for the
trading of a commodity, say wheat. We let T be the xed length of trading period, and
introduce the variables
x
1
(t) = money on hand at time t
x
2
(t) = amount of wheat owned at time t
(t) = rate of buying or selling of wheat
q(t) = price of wheat at time t (known)
= cost of storing a unit amount of wheat for a unit of time.
We suppose that the price of wheat q(t) is known for the entire trading period 0 t T
(although this is probably unrealistic in practice). We assume also that the rate of selling
and buying is constrained:
[(t)[ M,
where (t) > 0 means buying wheat, and (t) < 0 means selling.
63
Our intention is to maximize our holdings at the end time T, namely the sum of the
cash on hand and the value of the wheat we then own:
(P) P[()] = x
1
(T) +q(T)x
2
(T).
The evolution is
(ODE)
_
x
1
(t) = x
2
(t) q(t)(t)
x
2
(t) = (t).
This is a nonautonomous (= time dependent) case, but it turns out that the Pontryagin
Maximum Principle still applies.
Using the maximum principle. What is our optimal buying and selling strategy?
First, we compute the Hamiltonian
H(x, p, t, a) = f p +r = p
1
(x
2
q(t)a) +p
2
a,
since r 0. The adjoint dynamics read
(ADJ)
_
p
1
= 0
p
2
= p
1
,
with the terminal condition
(T) p(T) = g(x(T)).
In our case g(x
1
, x
2
) = x
1
+q(T)x
2
, and hence
(T)
_
p
1
(T) = 1
p
2
(T) = q(T).
We then can solve for the costate:
_
p
1
(t) 1
p
2
(t) = (t T) +q(T).
The maximization principle (M) tells us that
(M)
H(x(t), p(t), t, (t)) = max
|a|M
p
1
(t)(x
2
(t) q(t)a) +p
2
(t)a
= p
1
(t)x
2
(t) + max
|a|M
a(q(t) +p
2
(t)).
So
(t) =
_
M if q(t) < p
2
(t)
M if q(t) > p
2
(t)
64
for p
2
(t) := (t T) +q(T).
CRITIQUE. In some situations the amount of money on hand x
2
() becomes negative
for part of the time. The economic problem has a natural constraint x
2
0 (unless we can
borrow with no interest charges) which we did not take into account in the mathematical
model.
4.7 MAXIMUM PRINCIPLE WITH STATE CONSTRAINTS
We return once again to our usual setting:
(ODE)
_
x(t) = f (x(t), (t))
x(0) = x
0
,
(P) P[()] =
_
0
r(x(t), (t)) dt
for = [()], the rst time that x() = x
1
. This is the xed endpoint problem.
STATE CONSTRAINTS. We introduce a new complication by asking that our
dynamics x() must always remain within a given region R R
n
. We will as above
suppose that R has the explicit representation
R = x R
n
[ g(x) 0
for a given function g() : R
n
R.
DEFINITION. It will be convenient to introduce the quantity
c(x, a) := g(x) f (x, a).
Notice that
if x(t) R for times s
0
t s
1
, then c(x(t), (t)) 0 (s
0
t s
1
).
This is so since f is then tangent to R, whereas g is perpendicular.
THEOREM 4.6 (MAXIMUM PRINCIPLE FOR STATE CONSTRAINTS). Let
(), x
(t) R for
s
0
t s
1
.
Then there exists a costate function p
() : [s
0
, s
1
] R
n
and there exists
() :
[s
0
, s
1
] R such that (ODE) holds.
Also, for times s
0
t s
1
we have
(ADJ
) p
(t) =
x
H(x
(t), p
(t),
(t)) +
(t)
x
c(x
(t),
(t));
65
and
(M
) H(x
(t), p
(t),
(t)) = max
aA
H(x
(t), p
(t), a) [ c(x
(t), a) = 0.
To keep things simple, we have omitted some technical assumptions really needed for
the Theorem to be valid.
REMARKS AND INTERPRETATIONS (i) Let A R
m
be of this form:
A = a R
m
[ g
1
(a) 0, . . . , g
s
(a) 0
for given functions q
1
, . . . , q
s
: R
m
R. In this case we can use Lagrange multipliers to
deduce from (M
) that
(M
)
a
H(x
(t), p
(t),
(t)) =
(t)
a
c(x
(t),
(t)) +
s
i=1
i
(t)
a
g
i
(x
(t)).
The function
).
If x
(s
0
0) = p
(s
0
+ 0),
where s
0
is a time that p
when we hit
the boundary of the constraint R.
However,
p
(s
1
+ 0) = p
(s
1
0)
(s
1
)g(x
(s
1
));
this says there is (possibly) a jump in p
() when we leave R.
4.8 MORE APPLICATIONS
4.8.1 EXAMPLE 1: SHORTEST DISTANCE BETWEEN TWO POINTS,
AVOIDING AN OBSTACLE.
x
1 0
x
0
r
66
What is the shortest path between two points that avoids the disk B = B(0, r), as
drawn?
Let us take
(ODE)
_
x(t) = (t)
x(0) = x
0
for A = S
1
, with the payo
(P) P[()] =
_
0
[ x[ dt = length of the curve x().
We have
H(x, p, a) = f p +r = p
1
a
1
+p
2
a
2
1.
Case 1: avoiding the obstacle. Assume x(t) / B on some time interval. In this
case, the usual Pontryagin Maximum Principle applies, and we deduce as before that
p =
x
H = 0.
Hence
(ADJ) p(t) constant = p
0
.
Condition (M) says
H(x(t), p(t), (t)) = max
aS
1
(1 +p
0
1
a
1
+p
0
2
a
2
).
The maximum occurs for =
p
0
|p
0
|
. Furthermore,
1 +p
0
1
1
+p
0
2
2
0;
and therefore p
0
= 1. This means that [p
0
[ = 1, and hence in fact = p
0
. We have
proved that the trajectory x() is a straight line away from the obstacle.
Case 2: touching the obstacle. Suppose now x(t) B for some time interval
s
0
t s
1
. Now we use the modied version of Maximum Principle, provided by
Theorem 4.6.
First we must calculate c(x, a) = g(x) f (x, a). In our case,
R = R
2
B =
_
x [ x
2
1
+x
2
2
r
2
_
= x [ g := r
2
x
2
1
x
2
2
0.
Then g =
_
2x
1
2x
2
_
. Since f =
_
a
1
a
2
_
, we have
c(x, a) = 2a
1
x
1
2x
2
a
2
.
67
Now condition (ADJ
) implies
p(t) =
x
H +(t)
x
c;
which is to say,
(4.6)
_
p
1
= 2
1
p
2
= 2
2
.
Next, we employ the maximization principle (M
). We need to maximize
H(x(t), p(t), a)
subject to the requirements that c(x(t), a) = 0 and g
1
(a) = a
2
1
+ a
2
2
1 = 0, since A =
a R
2
[ a
2
1
+a
2
2
= 1. According to (M
) we must solve
a
H = (t)
a
c +(t)
a
g
1
;
that is,
_
p
1
= (2x
1
) +2
1
p
2
= (2x
2
) +2
2
.
We can combine these identities to eliminate . Since we also know that x(t) B, we
have (x
1
)
2
+(x
2
)
2
= r
2
; and also = (
1
,
2
)
T
is tangent to the circle. Using these facts,
we nd after some calculations that
(4.7) =
p
2
1
p
1
2
2r
.
But we also know
(4.8) (
1
)
2
+ (
2
)
2
= 1
and
H 0 = 1 +p
1
1
+p
2
2
;
hence
(4.9) p
1
1
+p
2
2
1.
Solving for the unknowns. We now have the ve equations (4.6) (4.9) for the
ve unknown functions p
1
, p
2
,
1
,
2
, that depend on t. We introduce the angle , as
illustrated, and note that
d
d
= r
d
dt
. A calculation then conrms that the solutions are
_
1
() = sin
2
() = cos ,
68
0
x(t)
=
k +
2r
,
and
_
p
1
() = k cos sin + cos
p
2
() = k sin + cos + sin
for some constant k.
Case 3: approaching and leaving the obstacle. In general, we must piece together
the results from Case 1 and Case 2. So suppose now x(t) R = R
2
B for 0 t < s
0
and x(t) B for s
0
t s
1
.
We have shown that for times 0 t < s
0
, the trajectory x() straight line. For this case
we have shown already that p = and therefore
_
p
1
cos
0
p
2
sin
0
,
for the angle
0
as shown in the picture.
By the jump conditions, p() is continuous when x() hits B at the time s
0
, meaning
in this case that
_
k cos
0
sin
0
+
0
cos
0
= cos
0
k sin
0
+ cos
0
+
0
sin
0
= sin
0
.
These identities hold if and only if
_
k =
0
0
+
0
=
2
.
The second equality says that the optimal trajectory is tangent to the disk B when it hits
B.
We turn next to the trajectory as it leaves B: see the next picture. We then have
_
p
1
(
1
) =
0
cos
1
sin
1
+
1
cos
1
p
2
(
1
) =
0
sin
1
+ cos
1
+
1
sin
1
.
69
0
x
0
0
Now our formulas above for and k imply (
1
) =
0
1
2r
. The jump conditions give
p(
+
1
) = p(
1
) (
1
)g(x(
1
))
for g(x) = r
2
x
2
1
x
2
2
. Then
(
1
)g(x(
1
)) = (
1
0
)
_
cos
1
sin
1
_
.
x
1
0
1
Therefore
_
p
1
(
+
1
) = sin
1
p
2
(
+
1
) = cos
1
,
and so the trajectory is tangent to B. If we apply usual Maximum Principle after x()
leaves B, we nd
p
1
constant = cos
1
p
2
constant = sin
1
.
Thus
_
cos
1
= sin
1
sin
1
= cos
1
70
and so
1
+
1
= .
CRITIQUE. We have carried out elaborate calculations to derive some pretty obvious
conclusions in this example. It is best to think of this as a conrmation in a simple case
of Theorem 4.6, which applies in far more complicated situations.
4.8.2 AN INVENTORY CONTROL MODEL. Now we turn to a simple model
for ordering and storing items in a warehouse. Let the time period T > 0 be given, and
introduce the variables
x(t) = amount of inventory at time t
(t) = rate of ordering from manufacturers, 0,
d(t) = customer demand (known)
= cost of ordering 1 unit
= cost of storing 1 unit.
Our goal is to ll all customer orders shipped from our warehouse, while keeping our
storage and ordering costs at a minimum. Hence the payo to be maximized is
(P) P[()] =
_
T
0
(t) +x(t) dt.
We have A = [0, ) and the constraint that x(t) 0. The dynamics are
(ODE)
_
x(t) = (t) d(t)
x(0) = x
0
> 0.
Guessing the optimal strategy. Let us just guess the optimal control strategy: we
should at rst not order anything ( = 0) and let the inventory in our warehouse fall o
to zero as we ll demands; thereafter we should order just enough to meet our demands
( = d).
s
0
x axis
x
0
t axis
71
Using the maximum principle. We will prove this guess is right, using the Maximum
Principle. Assume rst that x(t) > 0 on some interval [0, s
0
]. We then have
H(x, p, t, a) = (a d(t))p a x;
and (ADJ) says p =
x
H = . Condition (M) implies
H(x(t), p(t), (t), t) = max
a0
a x(t) +p(t)(a d(t))
= x(t) p(t)d(t) + max
a0
a(p(t) ).
Thus
(t) =
_
0 if p(t)
+ if p(t) > .
If (t) + on some interval, then P[()] = +, which is impossible, because there
exists a control with nite payo. So it follows that () 0 on [0, s
0
]: we place no orders.
According to (ODE), we have
_
x(t) = d(t) (0 t s
0
)
x(0) = x
0
.
Thus s
0
is rst time the inventory hits 0. Now since x(t) = x
0
_
t
0
d(s) ds, we have
x(s
0
) = 0. That is,
_
s
0
0
d(s) ds = x
0
and we have hit the constraint. Now use Pontryagin
Maximum Principle with state constraint for times t s
0
R = x 0 = g(x) := x 0
and
c(x, a, t) = g(x) f(x, a, t) = (1)(a d(t)) = d(t) a.
On this interval the constraint requires c(x, α, t) = d(t) − α(t) ≤ 0, that is, α(t) ≥ d(t); and the constrained form of the maximization condition then selects the smallest admissible ordering rate, α(t) = d(t). So for s₀ ≤ t ≤ T we order exactly enough to meet the demand and the inventory stays at 0: our guess is confirmed.

CHAPTER 5: DYNAMIC PROGRAMMING

5.1 Derivation of Bellman's PDE
5.2 Examples
5.3 Dynamic programming and the Pontryagin Maximum Principle
5.4 References

5.1 DERIVATION OF BELLMAN'S PDE

5.1.1 DYNAMIC PROGRAMMING. The basic idea of dynamic programming is to embed a given optimization problem within a larger family of problems, and to solve the whole family at once. As a warm-up illustration of this principle, consider computing the definite integral ∫_0^∞ (sin x / x) dx. Introduce a parameter λ ≥ 0 and define

  I(λ) := ∫_0^∞ e^{−λx} (sin x / x) dx.

Then

  I′(λ) = ∫_0^∞ (−x)e^{−λx} (sin x / x) dx = −∫_0^∞ sin x · e^{−λx} dx = −1/(λ² + 1),

where we integrated by parts twice to find the last equality. Consequently

  I(λ) = −arctan λ + C,

and we must compute the constant C. To do so, observe that

  0 = I(∞) = −arctan(∞) + C = −π/2 + C,

and so C = π/2. Hence I(λ) = −arctan λ + π/2, and consequently

  ∫_0^∞ (sin x / x) dx = I(0) = π/2.
We want to adapt some version of this idea to the vastly more complicated setting of control theory. For this, fix a terminal time T > 0 and then look at the controlled dynamics

(ODE)   ẋ(s) = f(x(s), α(s))  (0 < s < T),   x(0) = x^0,

with the associated payoff functional

(P)   P[α(·)] = ∫_0^T r(x(s), α(s)) ds + g(x(T)).

We embed this into a larger family of similar problems, by varying the starting times and starting points:

(5.1)   ẋ(s) = f(x(s), α(s))  (t ≤ s ≤ T),   x(t) = x,

with

(5.2)   P_{x,t}[α(·)] = ∫_t^T r(x(s), α(s)) ds + g(x(T)).

Consider the above problems for all choices of starting times 0 ≤ t ≤ T and all initial points x ∈ R^n.

DEFINITION. For x ∈ R^n, 0 ≤ t ≤ T, define the value function v(x, t) to be the greatest payoff possible if we start at x ∈ R^n at time t. In other words,

(5.3)   v(x, t) := sup_{α(·)∈𝒜} P_{x,t}[α(·)]   (x ∈ R^n, 0 ≤ t ≤ T).

Notice then that

(5.4)   v(x, T) = g(x)   (x ∈ R^n).
5.1.2 DERIVATION OF HAMILTON-JACOBI-BELLMAN EQUATION. Our first task is to show that the value function v satisfies a certain nonlinear partial differential equation.

Our derivation will be based upon the reasonable principle that it is better to be smart from the beginning than to be stupid for a time and then become smart. We want to convert this philosophy of life into mathematics.

To simplify, we hereafter suppose that the set A of control parameter values is compact.
THEOREM 5.1 (HAMILTON-JACOBI-BELLMAN EQUATION). Assume that the value function v is a C¹ function of the variables (x, t). Then v solves the nonlinear partial differential equation

(HJB)   v_t(x, t) + max_{a∈A} { f(x, a)·∇_x v(x, t) + r(x, a) } = 0   (x ∈ R^n, 0 ≤ t < T),

with the terminal condition

  v(x, T) = g(x)   (x ∈ R^n).

REMARK. We call (HJB) the Hamilton–Jacobi–Bellman equation, and can rewrite it as

(HJB)   v_t(x, t) + H(x, ∇_x v) = 0   (x ∈ R^n, 0 ≤ t < T),

for the partial differential equations Hamiltonian

  H(x, p) := max_{a∈A} H(x, p, a) = max_{a∈A} { f(x, a)·p + r(x, a) },

where x, p ∈ R^n.
Proof. 1. Let x ∈ R^n, 0 ≤ t < T and let h > 0 be given. As always,

  𝒜 = {α(·) : [0, ∞) → A measurable}.

Pick any parameter a ∈ A and use the constant control

  α(·) ≡ a

for times t ≤ s ≤ t + h. The dynamics then arrive at the point x(t + h), where t + h < T. Suppose now that at time t + h we switch to an optimal control and use it for the remaining times t + h ≤ s ≤ T.

What is the payoff of this procedure? Now for times t ≤ s ≤ t + h, we have

  ẋ(s) = f(x(s), a),   x(t) = x.

The payoff for this time period is ∫_t^{t+h} r(x(s), a) ds. Furthermore, the payoff incurred from time t + h to T is v(x(t + h), t + h), according to the definition of the value function v. Hence the total payoff is

  ∫_t^{t+h} r(x(s), a) ds + v(x(t + h), t + h).

But the greatest possible payoff if we start from (x, t) is v(x, t). Therefore

(5.5)   v(x, t) ≥ ∫_t^{t+h} r(x(s), a) ds + v(x(t + h), t + h).
2. We now want to convert this inequality into a differential form. So we rearrange (5.5) and divide by h > 0:

  [v(x(t + h), t + h) − v(x, t)]/h + (1/h)∫_t^{t+h} r(x(s), a) ds ≤ 0.

Let h → 0:

  v_t(x, t) + ∇_x v(x(t), t)·ẋ(t) + r(x(t), a) ≤ 0.

But x(·) solves the ODE

  ẋ(s) = f(x(s), a)  (t ≤ s ≤ t + h),   x(t) = x.

Employ this above, to discover:

  v_t(x, t) + f(x, a)·∇_x v(x, t) + r(x, a) ≤ 0.

This inequality holds for all control parameters a ∈ A, and consequently

(5.6)   max_{a∈A} { v_t(x, t) + f(x, a)·∇_x v(x, t) + r(x, a) } ≤ 0.
3. We next demonstrate that in fact the maximum above equals zero. To see this, suppose α*(·), x*(·) were optimal for the problem above. Let us utilize the optimal control α*(·) for times t ≤ s ≤ t + h. The payoff for this period is ∫_t^{t+h} r(x*(s), α*(s)) ds, and the remaining payoff is v(x*(t + h), t + h). Since the total payoff now equals the greatest possible payoff v(x, t), we have

  ∫_t^{t+h} r(x*(s), α*(s)) ds + v(x*(t + h), t + h) − v(x, t) = 0.

Rearrange, divide by h as before, let h → 0, and suppose α*(t) = a* ∈ A. Then

  v_t(x, t) + ∇_x v(x, t)·ẋ*(t) + r(x, a*) = 0,  with  ẋ*(t) = f(x, a*);

and therefore

  v_t(x, t) + f(x, a*)·∇_x v(x, t) + r(x, a*) = 0

for some parameter value a* ∈ A. Together with (5.6), this proves (HJB). □

5.1.3 THE DYNAMIC PROGRAMMING METHOD. Here is how to use the value function to build an optimal control. First, solve (HJB) and thereby compute v. Then, for each point x ∈ R^n and each time 0 ≤ t ≤ T, choose α(x, t) ∈ A to be a parameter value at which the maximum in (HJB) is attained. Next solve

  ẋ*(s) = f(x*(s), α(x*(s), s))  (t ≤ s ≤ T),   x*(t) = x,

assuming this is possible. Finally, define the feedback control

(5.7)   α*(s) := α(x*(s), s).

In summary, we design the optimal control this way: if the state of the system is x at time t, use the control whose value at time t is a parameter a ∈ A at which the maximum in (HJB) is attained.
We demonstrate next that this construction does indeed provide us with an optimal control.

THEOREM 5.2 (VERIFICATION OF OPTIMALITY). The control α*(·) defined by the construction (5.7) is optimal.

Proof. We have

  P_{x,t}[α*(·)] = ∫_t^T r(x*(s), α*(s)) ds + g(x*(T)).

Furthermore, according to the definition (5.7) of α*(·):

  P_{x,t}[α*(·)] = ∫_t^T ( −v_t(x*(s), s) − f(x*(s), α*(s))·∇_x v(x*(s), s) ) ds + g(x*(T))
               = −∫_t^T ( v_t(x*(s), s) + ∇_x v(x*(s), s)·ẋ*(s) ) ds + g(x*(T))
               = −∫_t^T (d/ds) v(x*(s), s) ds + g(x*(T))
               = −v(x*(T), T) + v(x*(t), t) + g(x*(T))
               = −g(x*(T)) + v(x*(t), t) + g(x*(T))
               = v(x, t) = sup_{α(·)∈𝒜} P_{x,t}[α(·)].

That is,

  P_{x,t}[α*(·)] = sup_{α(·)∈𝒜} P_{x,t}[α(·)];

and so α*(·) is optimal, as asserted. □
5.2 EXAMPLES

5.2.1 EXAMPLE 1: DYNAMICS WITH THREE VELOCITIES. Let us begin with a fairly easy problem:

(ODE)   ẋ(s) = α(s)  (t ≤ s ≤ 1),   x(t) = x,

where our set of control parameters is

  A = {−1, 0, 1}.

We want to minimize

  ∫_t^1 |x(s)| ds,

and so take for our payoff functional

(P)   P_{x,t}[α(·)] = −∫_t^1 |x(s)| ds.

As our first illustration of dynamic programming, we will compute the value function v(x, t) and confirm that it does indeed solve the appropriate Hamilton-Jacobi-Bellman equation. To do this, we first introduce the three regions:
  Region I = {(x, t) | x < t − 1, 0 ≤ t ≤ 1},
  Region II = {(x, t) | t − 1 < x < 1 − t, 0 ≤ t ≤ 1},
  Region III = {(x, t) | x > 1 − t, 0 ≤ t ≤ 1},

bounded in the (t, x)-plane by the lines x = t − 1 and x = 1 − t.

Optimal path in Region III
We will consider the three cases as to which region the initial data (x, t) lie within.

Region III. In this case we should take α ≡ −1, to steer as close to the origin 0 as quickly as possible. (See the picture.) Then

  v(x, t) = −(area under the path taken) = −(1 − t)·(1/2)(x + x + t − 1) = −((1 − t)/2)(2x + t − 1).

Region I. In this region, we should take α ≡ 1, in which case we can similarly compute

  v(x, t) = ((1 − t)/2)(2x − t + 1).

Region II. In this region we take α ≡ −1 if x ≥ 0 (and symmetrically α ≡ 1 if x < 0), until we hit the origin, after which we take α ≡ 0. We therefore calculate that

  v(x, t) = −x²/2

in this region.
Optimal path in Region II
Checking the Hamilton-Jacobi-Bellman PDE. Now the Hamilton-Jacobi-Bellman equation for our problem reads

(5.8)   v_t + max_{a∈A} { f·∇_x v + r } = 0

for f = a, r = −|x|. We rewrite this as

  v_t + max_{a=±1,0} { a v_x } − |x| = 0;

and so

(HJB)   v_t + |v_x| − |x| = 0.

We must check that the value function v, defined explicitly above in Regions I-III, does in fact solve this PDE, with the terminal condition that v(x, 1) = g(x) ≡ 0.
Now in Region II, v = −x²/2, v_t = 0, v_x = −x. Hence

  v_t + |v_x| − |x| = 0 + |−x| − |x| = 0   in Region II,

in accordance with (HJB).

In Region III we have

  v(x, t) = −((1 − t)/2)(2x + t − 1);

and therefore

  v_t = (1/2)(2x + t − 1) − (1 − t)/2 = x − 1 + t,   v_x = t − 1,   |t − 1| = 1 − t ≥ 0.

Hence

  v_t + |v_x| − |x| = x − 1 + t + |t − 1| − |x| = 0   in Region III,

because x > 0 there.

Likewise the Hamilton-Jacobi-Bellman PDE holds in Region I.
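To see the dynamic programming principle behind this example in action numerically, here is a small sketch (ours, not part of the notes) that discretizes the problem on a grid in (x, t), performs backward induction over the three control values A = {−1, 0, 1}, and compares the result with the exact value function computed above. The grid sizes, time step, and the linear interpolation are arbitrary choices made for illustration.

```python
import numpy as np

# Backward dynamic programming for Example 1:
#   dx/ds = a,  a in {-1, 0, 1},  payoff  -int_t^1 |x(s)| ds.
# March backward from t = 1 to t = 0 and compare with the exact value function.

dt = 0.001
xs = np.linspace(-2.0, 2.0, 801)          # state grid
v = np.zeros_like(xs)                     # terminal condition v(x, 1) = 0

for _ in range(int(1.0 / dt)):
    candidates = []
    for a in (-1.0, 0.0, 1.0):
        v_next = np.interp(xs + a * dt, xs, v)        # value at the new state
        candidates.append(-np.abs(xs) * dt + v_next)  # running payoff + continuation
    v = np.maximum.reduce(candidates)                 # maximize over the three velocities

def v_exact(x, t):
    if x > 1.0 - t:                        # Region III
        return -(1.0 - t) / 2.0 * (2.0 * x + t - 1.0)
    if x < t - 1.0:                        # Region I
        return (1.0 - t) / 2.0 * (2.0 * x - t + 1.0)
    return -x * x / 2.0                    # Region II

for x in (-1.5, -0.3, 0.0, 0.4, 1.5):
    i = np.argmin(np.abs(xs - x))
    print(f"x = {x:5.2f}:  DP {v[i]: .4f}   exact {v_exact(x, 0.0): .4f}")
```

The printed values of the discretized scheme agree with the closed-form v(x, 0) to within the discretization error.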
REMARKS. (i) In this example, v is not continuously differentiable on the borderlines between Region II and Regions I and III.

(ii) In general, it may not be possible actually to find the optimal feedback control. For example, reconsider the above problem, but now with A = {−1, 1}. We still have α = sgn(v_x), but there is no optimal control in Region II.
5.2.2 EXAMPLE 2: ROCKET RAILROAD CAR. Recall that the equations of motion in this model are

  d/dt (x₁, x₂)ᵀ = [0 1; 0 0](x₁, x₂)ᵀ + (0, 1)ᵀ α,   |α| ≤ 1,

and

  P_x[α(·)] = −(time to reach (0, 0)) = −∫_0^τ 1 dt = −τ.
To use the method of dynamic programming, we define v(x₁, x₂) to be minus the least time it takes to get to the origin (0, 0), given we start at the point (x₁, x₂).

What is the Hamilton-Jacobi-Bellman equation? Note v does not depend on t, and so we have

  max_{a∈A} { f·∇_x v + r } = 0

for

  A = [−1, 1],   f = (x₂, a)ᵀ,   r = −1.

Hence our PDE reads

  max_{|a|≤1} { x₂ v_{x₁} + a v_{x₂} − 1 } = 0;

and consequently

(HJB)   x₂ v_{x₁} + |v_{x₂}| − 1 = 0 in R²,   v(0, 0) = 0.
Checking the Hamilton-Jacobi-Bellman PDE. We now confirm that v really satisfies (HJB). For this, define the regions

  I := {(x₁, x₂) | x₁ ≥ −(1/2)x₂|x₂|}   and   II := {(x₁, x₂) | x₁ ≤ −(1/2)x₂|x₂|}.
A direct computation, the details of which we omit, reveals that

  v(x) = −x₂ − 2(x₁ + (1/2)x₂²)^{1/2}   in Region I,
  v(x) =  x₂ − 2(−x₁ + (1/2)x₂²)^{1/2}   in Region II.
In Region I we compute

  v_{x₂} = −1 − x₂ (x₁ + x₂²/2)^{−1/2},   v_{x₁} = −(x₁ + x₂²/2)^{−1/2};

and therefore

  x₂ v_{x₁} + |v_{x₂}| − 1 = −x₂ (x₁ + x₂²/2)^{−1/2} + ( 1 + x₂ (x₁ + x₂²/2)^{−1/2} ) − 1 = 0.

This confirms that our (HJB) equation holds in Region I, and a similar calculation holds in Region II.
Optimal control. Since

  max_{|a|≤1} { x₂ v_{x₁} + a v_{x₂} − 1 } = 0,

the optimal control is

  α = sgn v_{x₂}.
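As a quick numerical sanity check (a sketch of ours, not part of the notes), one can evaluate the two formulas for v at a few randomly chosen points, approximate the derivatives by central differences, and confirm that x₂v_{x₁} + |v_{x₂}| − 1 vanishes. The finite-difference step and the sample points are arbitrary choices.

```python
import numpy as np

def v(x1, x2):
    """Minus the minimum time to reach the origin for the rocket railroad car."""
    if x1 >= -0.5 * x2 * abs(x2):                      # Region I
        return -x2 - 2.0 * np.sqrt(x1 + 0.5 * x2 ** 2)
    return x2 - 2.0 * np.sqrt(-x1 + 0.5 * x2 ** 2)     # Region II

rng = np.random.default_rng(0)
h = 1e-6
for _ in range(8):
    x1, x2 = rng.uniform(-3, 3, size=2)
    if abs(x1 + 0.5 * x2 * abs(x2)) < 1e-2:            # stay away from the switching curve
        continue
    vx1 = (v(x1 + h, x2) - v(x1 - h, x2)) / (2 * h)    # central differences
    vx2 = (v(x1, x2 + h) - v(x1, x2 - h)) / (2 * h)
    print(f"({x1: .2f},{x2: .2f})  HJB residual = {x2 * vx1 + abs(vx2) - 1.0: .2e}")
```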
5.2.3 EXAMPLE 3: GENERAL LINEAR-QUADRATIC REGULATOR. For this important problem we are given matrices M ∈ M^{n×n}, N ∈ M^{n×m} and symmetric matrices B, D ∈ S^n, C ∈ S^m, with C and D positive definite. The dynamics are linear,

  ẋ(s) = Mx(s) + Nα(s)  (t ≤ s ≤ T),   x(t) = x,

the controls take values in R^m, and we maximize the quadratic payoff

  P_{x,t}[α(·)] = −∫_t^T x(s)ᵀBx(s) + α(s)ᵀCα(s) ds − x(T)ᵀDx(T).

So in terms of our general notation,

  f = Mx + Na,   r = −xᵀBx − aᵀCa,   g = −xᵀDx.

Rewrite:

(HJB)   v_t + max_{a∈R^m} { (∇v)ᵀNa − aᵀCa + (∇v)ᵀMx − xᵀBx } = 0.

We also have the terminal condition

  v(x, T) = −xᵀDx.
Maximization. For what value of the control parameter a is the maximum attained? To understand this, we define Q(a) := (∇v)ᵀNa − aᵀCa, and determine where Q has its maximum by computing the partial derivatives Q_{a_j} for j = 1, …, m and setting them equal to 0. This gives the identities

  Q_{a_j} = Σ_{i=1}^n v_{x_i} n_{ij} − 2 Σ_i a_i c_{ij} = 0.
Therefore (∇v)ᵀN = 2aᵀC, and then 2Cᵀa = Nᵀ∇v. But Cᵀ = C. Therefore

  a = (1/2) C^{−1}Nᵀ∇_x v.

This is the formula for the optimal feedback control: it will be very useful once we compute the value function v.
Finding the value function. We insert our formula a = (1/2)C^{−1}Nᵀ∇v into (HJB), and this PDE then reads

(HJB)   v_t + (1/4)(∇v)ᵀNC^{−1}Nᵀ∇v + (∇v)ᵀMx − xᵀBx = 0,   v(x, T) = −xᵀDx.

Our next move is to guess the form of the solution, namely

  v(x, t) = xᵀK(t)x,

provided the symmetric n × n matrix-valued function K(·) is properly selected. Will this guess work?

Now, since xᵀK(T)x = v(x, T) = −xᵀDx, we must have the terminal condition that

  K(T) = −D.
Next, compute that

  v_t = xᵀK̇(t)x,   ∇_x v = 2K(t)x.

We insert our guess v = xᵀK(t)x into (HJB), and discover that

  xᵀ{ K̇(t) + K(t)NC^{−1}NᵀK(t) + 2K(t)M − B }x = 0.

Look at the expression

  2xᵀKMx = xᵀKMx + [xᵀKMx]ᵀ = xᵀKMx + xᵀMᵀKx.

Then

  xᵀ{ K̇ + KNC^{−1}NᵀK + KM + MᵀK − B }x = 0.
This identity will hold if K(·) satisfies the matrix Riccati equation

(R)   K̇(t) + K(t)NC^{−1}NᵀK(t) + K(t)M + MᵀK(t) − B = 0  (0 ≤ t < T),   K(T) = −D.

In summary, if we can solve the Riccati equation (R), we can construct an optimal feedback control

  α*(t) = C^{−1}NᵀK(t)x*(t).

Furthermore, (R) in fact does have a solution, as explained for instance in the book of Fleming–Rishel [F-R].
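The Riccati equation (R) is an ordinary differential equation for the matrix K(t), and it is easy to integrate numerically backward from the terminal condition K(T) = −D. The following sketch is our illustration (the particular matrices, horizon, and the crude Euler time-stepping are arbitrary choices); it computes K(·) and the resulting feedback gain C^{−1}NᵀK.

```python
import numpy as np

# Backward integration of the matrix Riccati equation (R)
#   K' + K N C^{-1} N^T K + K M + M^T K - B = 0,   K(T) = -D,
# followed by the feedback gain in  alpha*(t) = C^{-1} N^T K(t) x(t).
# Matrices below are arbitrary illustrative choices (n = 2, m = 1).

M = np.array([[0.0, 1.0],
              [0.0, 0.0]])
N = np.array([[0.0],
              [1.0]])
B = np.eye(2)            # running state cost   x^T B x
C = np.array([[1.0]])    # running control cost a^T C a
D = np.eye(2)            # terminal cost        x^T D x
T, dt = 5.0, 1e-4

Cinv = np.linalg.inv(C)
K = -D.copy()            # terminal condition K(T) = -D
for _ in range(int(T / dt)):
    Kdot = -(K @ N @ Cinv @ N.T @ K + K @ M + M.T @ K - B)
    K = K - dt * Kdot    # step backward in time

gain = Cinv @ N.T @ K    # feedback: alpha*(0) = gain @ x(0)
print("K(0) =\n", K)
print("feedback gain C^{-1} N^T K(0) =", gain)
```

Because K = −P, where P is the usual positive-definite Riccati solution, the gain printed here is the familiar stabilizing linear-quadratic feedback.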
5.3 DYNAMIC PROGRAMMING AND THE PONTRYAGIN MAXIMUM PRINCIPLE

5.3.1 THE METHOD OF CHARACTERISTICS.

Assume H : R^n × R^n → R and consider this initial-value problem for the Hamilton–Jacobi equation:

(HJ)   u_t(x, t) + H(x, ∇_x u(x, t)) = 0  (x ∈ R^n, 0 < t < T),   u(x, 0) = g(x).

A basic idea in PDE theory is to introduce some ordinary differential equations, the solution of which lets us compute the solution u. In particular, we want to find a curve x(·) along which we can, in principle at least, compute u(x, t).

This section discusses this method of characteristics, to make clearer the connections between PDE theory and the Pontryagin Maximum Principle.

NOTATION.

  x(t) = (x¹(t), …, xⁿ(t))ᵀ,   p(t) = ∇_x u(x(t), t) = (p¹(t), …, pⁿ(t))ᵀ.
Derivation of characteristic equations. We have

  pᵏ(t) = u_{x_k}(x(t), t),

and therefore

  ṗᵏ(t) = u_{x_k t}(x(t), t) + Σ_{i=1}^n u_{x_k x_i}(x(t), t) ẋⁱ(t).

Now suppose u solves (HJ). We differentiate this PDE with respect to the variable x_k:

  u_{t x_k}(x, t) = −H_{x_k}(x, ∇u(x, t)) − Σ_{i=1}^n H_{p_i}(x, ∇u(x, t)) u_{x_k x_i}(x, t).

Let x = x(t) and substitute above:

  ṗᵏ(t) = −H_{x_k}(x(t), p(t)) + Σ_{i=1}^n ( ẋⁱ(t) − H_{p_i}(x(t), p(t)) ) u_{x_k x_i}(x(t), t),

where we have used ∇_x u(x(t), t) = p(t). We can simplify this expression if we select x(·) so that

  ẋⁱ(t) = H_{p_i}(p(t), x(t))   (1 ≤ i ≤ n);
then

  ṗᵏ(t) = −H_{x_k}(p(t), x(t))   (1 ≤ k ≤ n).

These are Hamilton's equations, already discussed in a different context in §4.1:

(H)   ẋ(t) = ∇_p H(p(t), x(t)),   ṗ(t) = −∇_x H(p(t), x(t)).

We next demonstrate that if we can solve (H), then this gives a solution to the PDE (HJ), satisfying the initial condition u = g on {t = 0}. Set p^0 = ∇g(x^0). We solve (H), with x(0) = x^0 and p(0) = p^0. Next, let us calculate

  (d/dt) u(x(t), t) = u_t(x(t), t) + ∇_x u(x(t), t)·ẋ(t)
                   = −H(∇_x u(x(t), t), x(t)) + ∇_x u(x(t), t)·∇_p H(p(t), x(t))
                   = −H(p(t), x(t)) + p(t)·∇_p H(p(t), x(t)).

Note also u(x(0), 0) = u(x^0, 0) = g(x^0). Integrate, to compute u along the curve x(·):

  u(x(t), t) = ∫_0^t { −H + ∇_p H·p } ds + g(x^0),

which gives us the solution, once we have calculated x(·) and p(·).
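As a concrete sketch of this procedure (ours, with an arbitrarily chosen Hamiltonian and initial data), one can integrate Hamilton's ODEs (H) together with the integral above and so trace out u along a single characteristic. We take H(x, p) = p²/2 and g(x) = x²/2 in one dimension, for which the exact solution u(x, t) = x²/(2(1 + t)) is available for comparison; the Euler time step is an arbitrary choice.

```python
import numpy as np

# Method of characteristics for  u_t + H(x, u_x) = 0,  u(x, 0) = g(x),
# illustrated with H(x, p) = p**2 / 2 and g(x) = x**2 / 2, whose exact
# solution is u(x, t) = x**2 / (2 * (1 + t)).

def H(x, p):   return 0.5 * p ** 2
def H_p(x, p): return p
def H_x(x, p): return 0.0
def g(x):      return 0.5 * x ** 2
def dg(x):     return x

x0, dt, n_steps = 0.7, 1e-4, 10_000     # integrate on 0 <= t <= 1
x, p, u = x0, dg(x0), g(x0)             # p(0) = g'(x0), u(0) = g(x0)

for _ in range(n_steps):
    xdot = H_p(x, p)                    # Hamilton's equations (H)
    pdot = -H_x(x, p)
    udot = -H(x, p) + H_p(x, p) * p     # integrand for u along the characteristic
    x, p, u = x + dt * xdot, p + dt * pdot, u + dt * udot

print("characteristic endpoint x(1) =", x)
print("u along the characteristic  =", u)
print("exact u(x(1), 1)            =", x ** 2 / (2 * (1 + 1)))
```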
5.3.2 CONNECTIONS BETWEEN DYNAMIC PROGRAMMING AND THE PONTRYAGIN MAXIMUM PRINCIPLE.

Return now to our usual control theory problem, with dynamics

(ODE)   ẋ(s) = f(x(s), α(s))  (t ≤ s ≤ T),   x(t) = x,

and payoff

(P)   P_{x,t}[α(·)] = ∫_t^T r(x(s), α(s)) ds + g(x(T)).

As above, the value function is

  v(x, t) = sup_{α(·)} P_{x,t}[α(·)].

The next theorem demonstrates that the costate in the Pontryagin Maximum Principle is in fact the gradient in x of the value function v, taken along an optimal trajectory:

THEOREM 5.3 (COSTATES AND GRADIENTS). Assume α*(·), x*(·) solve the control problem (ODE), (P). If the value function v is C², then the costate satisfies

  p*(s) = ∇_x v(x*(s), s)   (t ≤ s ≤ T).
Proof. 1. As usual, suppress the superscript *. Define p(t) := ∇_x v(x(t), t).

We claim that p(·) satisfies conditions (ADJ) and (M) of the Pontryagin Maximum Principle. To confirm this assertion, look at

  ṗⁱ(t) = (d/dt) v_{x_i}(x(t), t) = v_{x_i t}(x(t), t) + Σ_{j=1}^n v_{x_i x_j}(x(t), t) ẋʲ(t).

We know v solves

  v_t(x, t) + max_{a∈A} { f(x, a)·∇_x v(x, t) + r(x, a) } = 0;

and, applying the optimal control α(·), we find:

  v_t(x(t), t) + f(x(t), α(t))·∇_x v(x(t), t) + r(x(t), α(t)) = 0.

2. Now freeze the time t and define the function

  h(x) := v_t(x, t) + f(x, α(t))·∇_x v(x, t) + r(x, α(t)) ≤ 0.

Observe that h(x(t)) = 0. Consequently h(·) has a maximum at the point x = x(t); and therefore for i = 1, …, n,

  0 = h_{x_i}(x(t)) = v_{t x_i}(x(t), t) + f_{x_i}(x(t), α(t))·∇_x v(x(t), t)
                      + f(x(t), α(t))·∇_x v_{x_i}(x(t), t) + r_{x_i}(x(t), α(t)).

Substitute above:

  ṗⁱ(t) = v_{x_i t} + Σ_{j=1}^n v_{x_i x_j} f ʲ = v_{x_i t} + f·∇_x v_{x_i} = −f_{x_i}·∇_x v − r_{x_i}.

Recalling that p(t) = ∇_x v(x(t), t), we deduce that

  ṗ(t) = −(∇_x f)p − ∇_x r.

Recall also

  H = f·p + r,   ∇_x H = (∇_x f)p + ∇_x r.

Hence

  ṗ(t) = −∇_x H(p(t), x(t)),

which is (ADJ).

3. Now we must check condition (M). According to (HJB),

  v_t(x(t), t) + max_{a∈A} { f(x(t), a)·∇v(x(t), t) + r(x(t), a) } = 0,

with ∇v(x(t), t) = p(t), and the maximum occurs for a = α(t). Hence

  max_{a∈A} H(x(t), p(t), a) = H(x(t), p(t), α(t));

and this is assertion (M) of the Maximum Principle. □
INTERPRETATIONS. The foregoing provides us with another way to look at transversality conditions:

(i) Free endpoint problem: Recall that we stated earlier in Theorem 4.4 that for the free endpoint problem we have the condition

(T)   p*(T) = ∇g(x*(T))

for the payoff functional

  ∫_t^T r(x(s), α(s)) ds + g(x(T)).

To understand this better, note p*(s) = ∇v(x*(s), s); and hence

  p*(T) = ∇_x v(x*(T), T) = ∇g(x*(T)),

since v(x, T) = g(x).

(ii) Constrained initial and target sets:

Recall that for this problem we stated in Theorem 4.5 the transversality conditions that

(T)   p*(0) is perpendicular to T₀,   p*(τ*) is perpendicular to T₁,

where τ* denotes the first time the optimal trajectory hits the target set X₁.

Now let v be the value function for this problem:

  v(x) = sup_{α(·)} P_x[α(·)],

with the constraint that we start at x^0 ∈ X₀ and end at x^1 ∈ X₁. But then v will be constant on the set X₀ and also constant on X₁. Since ∇v is perpendicular to any level surface, ∇v is therefore perpendicular to both ∂X₀ and ∂X₁. And since

  p*(t) = ∇v(x*(t)),

this means that

  p* is perpendicular to ∂X₀ at t = 0,   p* is perpendicular to ∂X₁ at t = τ*.
5.4 REFERENCES

See the book [B-CD] by M. Bardi and I. Capuzzo-Dolcetta for more about the modern theory of PDE methods in dynamic programming. Barron and Jensen present in [B-J] a proof of Theorem 5.3 that does not require v to be C².
CHAPTER 6: DIFFERENTIAL GAMES

6.1 Definitions
6.2 Dynamic Programming
6.3 Games and the Pontryagin Maximum Principle
6.4 Application: war of attrition and attack
6.5 References

6.1 DEFINITIONS

We introduce in this section a model for a two-person, zero-sum differential game. The basic idea is that two players control the dynamics of some evolving system, and one tries to maximize, the other to minimize, a payoff functional that depends upon the trajectory.

What are optimal strategies for each player? This is a very tricky question, primarily since at each moment of time each player's control decisions will depend upon what the other has done previously.
A MODEL PROBLEM. Let a time 0 ≤ t < T be given, along with sets A ⊆ R^m, B ⊆ R^l and a function f : R^n × A × B → R^n.

DEFINITION. A measurable mapping α(·) : [t, T] → A is a control for player I (starting at time t). A measurable mapping β(·) : [t, T] → B is a control for player II.

Corresponding to each pair of controls, we have corresponding dynamics:

(ODE)   ẋ(s) = f(x(s), α(s), β(s))  (t ≤ s ≤ T),   x(t) = x,

the initial point x ∈ R^n being given.

DEFINITION. The payoff of the game is

(P)   P_{x,t}[α(·), β(·)] := ∫_t^T r(x(s), α(s), β(s)) ds + g(x(T)).

Player I, whose control is α(·), wants to maximize the payoff functional P[·]. Player II has the control β(·) and wants to minimize P[·]. This is a two-person, zero-sum differential game.

We intend now to define value functions and to study the game using dynamic programming.

DEFINITION. The sets of controls for the game are

  A(t) := {α(·) : [t, T] → A, α(·) measurable},
  B(t) := {β(·) : [t, T] → B, β(·) measurable}.

We need to model the fact that at each time instant, neither player knows the other's future moves. We will use the concept of strategies, as employed by Varaiya and Elliott–Kalton. The idea is that one player will select in advance, not his control, but rather his responses to all possible controls that could be selected by his opponent.

DEFINITIONS. (i) A mapping Φ : B(t) → A(t) is called a strategy for player I if for all times t ≤ s ≤ T,

  β(τ) ≡ β̂(τ) for t ≤ τ ≤ s

implies

(6.1)   Φ[β](τ) ≡ Φ[β̂](τ) for t ≤ τ ≤ s.

We can think of Φ[β] as the response of player I to player II's selection of control β(·). Condition (6.1) expresses that player I cannot foresee the future.

(ii) A strategy for II is a mapping Ψ : A(t) → B(t) such that for all times t ≤ s ≤ T,

  α(τ) ≡ α̂(τ) for t ≤ τ ≤ s

implies

  Ψ[α](τ) ≡ Ψ[α̂](τ) for t ≤ τ ≤ s.

DEFINITION. The sets of strategies are

  𝒜(t) := {strategies for player I (starting at t)},
  ℬ(t) := {strategies for player II (starting at t)}.

Finally, we introduce value functions:

DEFINITION. The lower value function is

(6.2)   v(x, t) := inf_{Ψ∈ℬ(t)} sup_{α(·)∈A(t)} P_{x,t}[α(·), Ψ[α](·)],

and the upper value function is

(6.3)   u(x, t) := sup_{Φ∈𝒜(t)} inf_{β(·)∈B(t)} P_{x,t}[Φ[β](·), β(·)].

One of the two players announces his strategy in response to the other's choice of control, and the other player chooses the control. The player who plays second, i.e., who chooses the strategy, has an advantage. In fact, it turns out that we always have

  v(x, t) ≤ u(x, t).
6.2 DYNAMIC PROGRAMMING, ISAACS' EQUATIONS

THEOREM 6.1 (PDE FOR THE UPPER AND LOWER VALUE FUNCTIONS). Assume u, v are continuously differentiable. Then u solves the upper Isaacs' equation

(6.4)   u_t + min_{b∈B} max_{a∈A} { f(x, a, b)·∇_x u(x, t) + r(x, a, b) } = 0,   u(x, T) = g(x),

and v solves the lower Isaacs' equation

(6.5)   v_t + max_{a∈A} min_{b∈B} { f(x, a, b)·∇_x v(x, t) + r(x, a, b) } = 0,   v(x, T) = g(x).

Isaacs' equations are analogs of the Hamilton–Jacobi–Bellman equation in two-person, zero-sum control theory. We can rewrite these in the forms

  u_t + H⁺(x, ∇_x u) = 0

for the upper PDE Hamiltonian

  H⁺(x, p) := min_{b∈B} max_{a∈A} { f(x, a, b)·p + r(x, a, b) };

and

  v_t + H⁻(x, ∇_x v) = 0

for the lower PDE Hamiltonian

  H⁻(x, p) := max_{a∈A} min_{b∈B} { f(x, a, b)·p + r(x, a, b) }.
INTERPRETATIONS AND REMARKS. (i) In general, we have

  max_{a∈A} min_{b∈B} { f(x, a, b)·p + r(x, a, b) } ≤ min_{b∈B} max_{a∈A} { f(x, a, b)·p + r(x, a, b) },

usually with strict inequality, and consequently H⁻(x, p) ≤ H⁺(x, p). The upper and lower Isaacs' equations are then in general different PDE, and so the upper and lower value functions need not be the same: u ≢ v.

The precise interpretation of this is tricky, but the idea is to think of a slightly different situation in which the two players take turns exerting their controls over short time intervals. In this situation, it is a disadvantage to go first, since the other player then knows what control is selected. The value function u represents a sort of infinitesimal version of this situation, for which player I has the advantage. The value function v represents the reverse situation, for which player II has the advantage.

If however

(6.6)   max_{a∈A} min_{b∈B} { f·p + r } = min_{b∈B} max_{a∈A} { f·p + r }

for all p, x, we say the game satisfies the minimax condition, also called Isaacs' condition. In this case it turns out that u ≡ v and we say the game has value.
(ii) As in dynamic programming from control theory, if (6.6) holds, we can solve Isaacs' equation for u ≡ v and then, at least in principle, design optimal controls for I and II.

(iii) To say that a pair of controls (α*(·), β*(·)) is a saddle point for P_{x,t} means that

(6.7)   P_{x,t}[α(·), β*(·)] ≤ P_{x,t}[α*(·), β*(·)] ≤ P_{x,t}[α*(·), β(·)]

for all controls α(·), β(·). Player I will select α*(·); player II will play β*(·).
6.3 GAMES AND THE PONTRYAGIN MAXIMUM PRINCIPLE

Assume the minimax condition (6.6) holds and we design optimal α*(·), β*(·) as above. Let x*(·) denote the corresponding solution of (ODE). Then define

  p*(t) := ∇_x v(x*(t), t) = ∇_x u(x*(t), t).

It turns out that

(ADJ)   ṗ*(t) = −∇_x H(x*(t), p*(t), α*(t), β*(t))

for the game-theory Hamiltonian

  H(x, p, a, b) := f(x, a, b)·p + r(x, a, b).
6.4 APPLICATION: WAR OF ATTRITION AND ATTACK.

In this section we work out an example, due to R. Isaacs [I].

6.4.1 STATEMENT OF PROBLEM. We assume that two opponents I and II are at war with each other. Let us define

  x₁(t) = supply of resources for I,
  x₂(t) = supply of resources for II.

Each player at each time can devote some fraction of his/her efforts to direct attack, and the remaining fraction to attrition (= guerrilla warfare). Set A = B = [0, 1], and define

  α(t) = fraction of I's effort devoted to attrition,
  1 − α(t) = fraction of I's effort devoted to attack,
  β(t) = fraction of II's effort devoted to attrition,
  1 − β(t) = fraction of II's effort devoted to attack.

We introduce as well the parameters

  m₁ = rate of production of war material for I,
  m₂ = rate of production of war material for II,
  c₁ = effectiveness of II's weapons against I's production,
  c₂ = effectiveness of I's weapons against II's production.

We will assume

  c₂ > c₁,

a hypothesis that introduces an asymmetry into the problem.

The dynamics are governed by the system of ODE

(6.8)   ẋ₁(t) = m₁ − c₁β(t)x₂(t),   ẋ₂(t) = m₂ − c₂α(t)x₁(t).

Let us finally introduce the payoff functional

  P[α(·), β(·)] = ∫_0^T (1 − α(t))x₁(t) − (1 − β(t))x₂(t) dt,

the integrand recording the advantage of I over II from direct attacks at time t. Player I wants to maximize P, and player II wants to minimize P.
6.4.2 APPLYING DYNAMIC PROGRAMMING. First, we check the minimax condition, for n = 2, p = (p₁, p₂):

  f(x, a, b)·p + r(x, a, b) = (m₁ − c₁bx₂)p₁ + (m₂ − c₂ax₁)p₂ + (1 − a)x₁ − (1 − b)x₂
                           = m₁p₁ + m₂p₂ + x₁ − x₂ + a(−x₁ − c₂x₁p₂) + b(x₂ − c₁x₂p₁).

Since a and b occur in separate terms, the minimax condition holds. Therefore v ≡ u and the two forms of the Isaacs' equations agree:

  v_t + H(x, ∇_x v) = 0,

for

  H(x, p) := H⁺(x, p) = H⁻(x, p).
We recall A = B = [0, 1] and p = ∇_x v, and then choose a ∈ [0, 1] to maximize

  a x₁(−1 − c₂v_{x₂}).

Likewise, we select b ∈ [0, 1] to minimize

  b x₂(1 − c₁v_{x₁}).

Thus

(6.9)   α = 1 if −1 − c₂v_{x₂} ≥ 0,   α = 0 if −1 − c₂v_{x₂} < 0,

and

(6.10)   β = 0 if 1 − c₁v_{x₁} ≥ 0,   β = 1 if 1 − c₁v_{x₁} < 0.

So if we knew the value function v, we could then design optimal feedback controls for I, II.

It is however hard to solve Isaacs' equation for v, and so we switch approaches.
6.4.3 APPLYING THE MAXIMUM PRINCIPLE. Assume α(·), β(·) are selected as above, and x(·) is the corresponding solution of the ODE (6.8). Define

  p(t) := ∇_x v(x(t), t).

By results stated above, p(·) solves the adjoint equation

(6.11)   ṗ(t) = −∇_x H(x(t), p(t), α(t), β(t))

for

  H(x, p, a, b) = p·f(x, a, b) + r(x, a, b)
               = p₁(m₁ − c₁bx₂) + p₂(m₂ − c₂ax₁) + (1 − a)x₁ − (1 − b)x₂.

Therefore (6.11) reads

(6.12)   ṗ₁ = −1 + α(1 + c₂p₂),   ṗ₂ = 1 − β(1 − c₁p₁),

with the terminal conditions p₁(T) = p₂(T) = 0.
We introduce the further notation

  s₁ := −1 − c₂v_{x₂} = −1 − c₂p₂,   s₂ := 1 − c₁v_{x₁} = 1 − c₁p₁;

so that, according to (6.9) and (6.10), the functions s₁ and s₂ control when player I and player II switch their controls.

Dynamics for s₁ and s₂. Our goal now is to find ODE for s₁, s₂. We compute

  ṡ₁ = −c₂ṗ₂ = −c₂(1 − β(1 − c₁p₁)) = −c₂(1 − βs₂)

and

  ṡ₂ = −c₁ṗ₁ = −c₁(−1 + α(1 + c₂p₂)) = c₁(1 + αs₁).

Therefore

(6.13)   ṡ₁ = −c₂(1 − βs₂), s₁(T) = −1;   ṡ₂ = c₁(1 + αs₁), s₂(T) = 1.

Recall from (6.9) and (6.10) that

  α = 1 if s₁ ≥ 0, α = 0 if s₁ < 0;   β = 1 if s₂ < 0, β = 0 if s₂ ≥ 0.

Consequently, if we can find s₁, s₂, then we can construct the optimal controls α and β.
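The switching functions s₁, s₂ can also be found numerically, by integrating (6.13) backward from time T and applying the feedback rules above at every step. The sketch below is ours; the values of c₁, c₂ and T are arbitrary (with c₂ > c₁, as assumed), and it recovers the switch times t* and t** that are computed analytically in the next paragraphs.

```python
import numpy as np

# Backward integration of (6.13) with the feedback switching rules:
#   alpha = 1 if s1 >= 0 else 0,      beta = 1 if s2 < 0 else 0.

c1, c2, T = 0.5, 1.0, 4.0
dt = 1e-5

s1, s2 = -1.0, 1.0                  # terminal values s1(T) = -1, s2(T) = 1
t = T
cross1 = cross2 = None              # first zero crossings of s1, s2 (going backward)
while t > 0.0:
    alpha = 1.0 if s1 >= 0.0 else 0.0
    beta = 1.0 if s2 < 0.0 else 0.0
    s1dot = -c2 * (1.0 - beta * s2)
    s2dot = c1 * (1.0 + alpha * s1)
    s1 -= dt * s1dot                # step backward in time
    s2 -= dt * s2dot
    t -= dt
    if cross1 is None and s1 >= 0.0:
        cross1 = t
    if cross2 is None and s2 <= 0.0:
        cross2 = t

print("numerical t*  =", cross1, "   exact:", T - 1.0 / c2)
print("numerical t** =", cross2, "   exact:", T - np.sqrt(2.0 * c2 / c1 - 1.0) / c2)
```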
Calculating s₁ and s₂. We work backwards from the terminal time T. Since at time T we have s₁ < 0 and s₂ > 0, the same inequalities hold near T. Hence we have α ≡ β ≡ 0 near T, meaning a full attack from both sides.

Next, let t* < T be the first time, going backwards from T, at which s₁ or s₂ changes sign. For t* ≤ t ≤ T we have α(·) ≡ β(·) ≡ 0. Thus (6.13) gives

  ṡ₁ = −c₂, s₁(T) = −1;   ṡ₂ = c₁, s₂(T) = 1;

and therefore

  s₁(t) = −1 + c₂(T − t),   s₂(t) = 1 + c₁(t − T)

for times t* ≤ t ≤ T. Hence s₁ hits 0 at time T − 1/c₂; s₂ hits 0 at time T − 1/c₁. Remember that we are assuming c₂ > c₁. Then T − 1/c₁ < T − 1/c₂, and hence

  t* = T − 1/c₂.
Now define t** < t* to be the next time, going backwards, at which s₁ or s₂ changes sign. For t** ≤ t ≤ t* we have s₁ ≥ 0 and s₂ > 0, so that α(·) ≡ 1 and β(·) ≡ 0. Thus (6.13) now gives

  ṡ₁ = −c₂, s₁(t*) = 0;   ṡ₂ = c₁(1 + s₁), s₂(t*) = 1 − c₁/c₂.

We solve these equations and discover that

  s₁(t) = −1 + c₂(T − t),   s₂(t) = 1 − c₁/(2c₂) − (c₁c₂/2)(t − T)²   (t** ≤ t ≤ t*).

Now s₁ > 0 on [t**, t*), so s₁ does not change sign there. But s₂ = 0 at

  t** := T − (1/c₂)(2c₂/c₁ − 1)^{1/2}.
If we now solve (6.13) on [0, t**], we can continue in this way to determine the signs of s₁ and s₂, and hence the optimal controls α and β, all the way back to the starting time 0.

CHAPTER 7: INTRODUCTION TO STOCHASTIC CONTROL THEORY

7.1 INTRODUCTION AND MOTIVATION

We now allow the dynamics to be perturbed by random noise; the precise setting (controlled stochastic differential equations and the expected payoff functional P_{x,t}[A(·)]) is developed in §§7.2–7.5. As before, the basic problem is to find an optimal control A*(·) such that

  P_{x,t}[A*(·)] = max_{A(·)} P_{x,t}[A(·)].

DYNAMIC PROGRAMMING. We will adapt the dynamic programming methods from Chapter 5. To do so, we firstly define the value function

  v(x, t) := sup_{A(·)} P_{x,t}[A(·)].

The overall plan for finding an optimal control A*(·) will then be, as before, first to find a PDE satisfied by v and then to use this PDE to design A*(·).

It will be particularly interesting to see in §7.5 how the stochastic effects modify the structure of the Hamilton-Jacobi-Bellman (HJB) equation, as compared with the deterministic case already discussed in Chapter 5.

7.2 REVIEW OF PROBABILITY THEORY, BROWNIAN MOTION.

This and the next two sections provide a very, very rapid introduction to mathematical probability theory and stochastic differential equations. The discussion will be much too fast for novices, whom we advise to just scan these sections. See §7.7 for some suggested reading to learn more.
DEFINITION. A probability space is a triple (Ω, F, P), where

(i) Ω is a set,
(ii) F is a σ-algebra of subsets of Ω,
(iii) P is a mapping from F into [0, 1] such that P(∅) = 0, P(Ω) = 1, and P(∪_{i=1}^∞ A_i) = Σ_{i=1}^∞ P(A_i), provided A_i ∩ A_j = ∅ for all i ≠ j.

A typical point in Ω is denoted ω and is called a sample point. A set A ∈ F is called an event. We call P a probability measure on Ω, and P(A) ∈ [0, 1] is the probability of the event A.

DEFINITION. A random variable X is a mapping X : Ω → R such that for all t ∈ R

  {ω | X(ω) ≤ t} ∈ F.

We mostly employ capital letters to denote random variables. Often the dependence of X on ω is not explicitly displayed in the notation.

DEFINITION. Let X be a random variable, defined on some probability space (Ω, F, P). The expected value of X is

  E[X] := ∫_Ω X dP.

EXAMPLE. Assume Ω ⊆ R^m, and P(A) = ∫_A f dω for some function f : R^m → [0, ∞) with ∫_Ω f dω = 1. Then

  E[X] = ∫_Ω X f dω.

DEFINITION. A random variable X is Gaussian (normally distributed) with mean μ and variance σ² provided

  P(a ≤ X ≤ b) = (1/√(2πσ²)) ∫_a^b e^{−(x−μ)²/(2σ²)} dx

for all a ≤ b. We write "X is N(μ, σ²)".

DEFINITIONS. (i) Two events A, B ∈ F are called independent if

  P(A ∩ B) = P(A)P(B).

(ii) Two random variables X and Y are independent if

  P(X ≤ t and Y ≤ s) = P(X ≤ t)P(Y ≤ s)

for all t, s ∈ R. In other words, X and Y are independent if for all t, s the events A = {X ≤ t} and B = {Y ≤ s} are independent.
DEFINITION. A stochastic process is a collection of random variables X(t) (0 ≤ t < ∞), each defined on the same probability space (Ω, F, P).

The mapping t ↦ X(t, ω) is the ω-th sample path of the process.

DEFINITION. A real-valued stochastic process W(t) is called a Wiener process or Brownian motion if

(i) W(0) = 0,
(ii) each sample path is continuous,
(iii) W(t) is Gaussian with μ = 0, σ² = t (that is, W(t) is N(0, t)),
(iv) for all choices of times 0 < t₁ < t₂ < ⋯ < tₙ the random variables

  W(t₁), W(t₂) − W(t₁), …, W(tₙ) − W(tₙ₋₁)

are independent random variables.

Assertion (iv) says that W has independent increments.

INTERPRETATION. We heuristically interpret the one-dimensional white noise ξ(·) as equalling dW(t)/dt. However, this is only formal, since for almost all ω, the sample path t ↦ W(t, ω) is in fact nowhere differentiable.

DEFINITION. An n-dimensional Brownian motion is

  W(t) = (W¹(t), W²(t), …, Wⁿ(t))ᵀ

when the Wⁱ(t) are independent one-dimensional Brownian motions.

We use boldface below to denote vector-valued functions and stochastic processes.
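Sample paths of a Brownian motion are easy to simulate directly from the definition: increments over disjoint time intervals are independent N(0, Δt) random variables. The short sketch below is ours (the step size and number of paths are arbitrary choices); it checks properties (i) and (iii) empirically.

```python
import numpy as np

# Simulate sample paths of one-dimensional Brownian motion on [0, 1]
# from independent N(0, dt) increments, and check that W(t) is N(0, t).

rng = np.random.default_rng(1)
n_paths, n_steps = 10_000, 1_000
dt = 1.0 / n_steps

increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(increments, axis=1)                           # W(k*dt), k = 1, ..., n_steps
W = np.concatenate([np.zeros((n_paths, 1)), W], axis=1)     # prepend W(0) = 0

for k in (250, 500, 1000):
    t = k * dt
    sample = W[:, k]
    print(f"t = {t:.2f}: mean {sample.mean(): .4f}  variance {sample.var():.4f}  (should be ~0 and ~{t:.2f})")
```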
7.3 STOCHASTIC DIFFERENTIAL EQUATIONS.

We discuss next how to understand stochastic differential equations, driven by white noise. Consider first of all

(7.3)   Ẋ(t) = f(X(t)) + ξ(t)  (t > 0),   X(0) = x^0,

where we informally think of ξ = Ẇ.

DEFINITION. A stochastic process X(·) solves (7.3) if for all times t ≥ 0 we have

(7.4)   X(t) = x^0 + ∫_0^t f(X(s)) ds + W(t).

REMARKS. (i) It is possible to solve (7.4) by the method of successive approximation. For this, we set X^0(·) ≡ x^0, and inductively define

  X^{k+1}(t) := x^0 + ∫_0^t f(X^k(s)) ds + W(t).

It turns out that X^k(t) converges to a limit X(t) for all t ≥ 0 and X(·) solves the integral identities (7.4).
(ii) Consider a more general SDE

(7.5)   Ẋ(t) = f(X(t)) + H(X(t))ξ(t)  (t > 0),

which we formally rewrite to read

  dX(t)/dt = f(X(t)) + H(X(t)) dW(t)/dt,

and then

  dX(t) = f(X(t))dt + H(X(t))dW(t).

This is an Itô stochastic differential equation. By analogy with the foregoing, we say X(·) is a solution, with the initial condition X(0) = x^0, if

  X(t) = x^0 + ∫_0^t f(X(s)) ds + ∫_0^t H(X(s)) dW(s)

for all times t ≥ 0. In this expression ∫_0^t H(X(s)) dW(s) is called an Itô stochastic integral.

REMARK. Given a Brownian motion W(·), it is possible to define the Itô stochastic integral

  ∫_0^t Y dW

for processes Y(·) having the property that for each time 0 ≤ s ≤ t, Y(s) depends on W(τ) for times 0 ≤ τ ≤ s, but not on W(τ) for times τ ≥ s. Such processes are called nonanticipating.

We will not here explain the construction of the Itô integral, but will just record one of its useful properties:

(7.6)   E[ ∫_0^t Y dW ] = 0.
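Although the construction of the Itô integral is not given here, solutions of dX = f(X)dt + H(X)dW can be approximated directly from the integral characterization by summing small increments (the Euler–Maruyama scheme). The sketch below is ours: the drift, noise coefficient, step size and path count are arbitrary illustrative choices, and it also checks property (7.6) empirically for the nonanticipating integrand Y = X.

```python
import numpy as np

# Euler-Maruyama approximation of  dX = f(X) dt + H(X) dW,  X(0) = x0,
# plus a Monte Carlo check of (7.6):  E[ int_0^t X dW ] = 0.

def f(x):  return -x           # illustrative drift
def Hc(x): return 0.4          # illustrative (constant) noise coefficient

rng = np.random.default_rng(2)
x0, t_final, dt = 1.0, 1.0, 1e-3
n_paths, n_steps = 20_000, int(t_final / dt)

X = np.full(n_paths, x0)
ito_integral = np.zeros(n_paths)            # accumulates  int_0^t X dW
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    ito_integral += X * dW                  # X evaluated *before* the increment: nonanticipating
    X = X + f(X) * dt + Hc(X) * dW

print("E[X(1)]          ~", X.mean())
print("E[int_0^1 X dW]  ~", ito_integral.mean(), "  (should be near 0)")
```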
7.4 STOCHASTIC CALCULUS, ITÔ CHAIN RULE.

Once the Itô stochastic integral is defined, we have in effect constructed a new calculus, the properties of which we should investigate. This section explains that the chain rule in the Itô calculus contains additional terms as compared with the usual chain rule. These extra stochastic corrections will lead to modifications of the (HJB) equation in §7.5.

7.4.1 ONE DIMENSION. We suppose that n = 1 and

(7.7)   dX(t) = A(t)dt + B(t)dW(t)  (t ≥ 0),   X(0) = x^0.

The expression (7.7) means that

  X(t) = x^0 + ∫_0^t A(s) ds + ∫_0^t B(s) dW(s)

for all times t ≥ 0.

Let u : R → R and define

  Y(t) := u(X(t)).

We ask: what is the law of motion governing the evolution of Y in time? Or, in other words, what is dY(t)?

It turns out, quite surprisingly, that it is incorrect to calculate

  dY(t) = d(u(X(t))) = u′(X(t))dX(t) = u′(X(t))(A(t)dt + B(t)dW(t)).

ITÔ CHAIN RULE. We try again and make use of the heuristic principle that dW = (dt)^{1/2}. So let us expand u into a Taylor series, keeping only terms of order dt or larger. Then

  dY(t) = d(u(X(t)))
        = u′(X(t))dX(t) + (1/2)u″(X(t))dX(t)² + (1/6)u‴(X(t))dX(t)³ + …
        = u′(X(t))[A(t)dt + B(t)dW(t)] + (1/2)u″(X(t))[A(t)dt + B(t)dW(t)]² + … ,

the last line following from (7.7). Now, formally at least, the heuristic that dW = (dt)^{1/2} implies

  [A(t)dt + B(t)dW(t)]² = A(t)²dt² + 2A(t)B(t)dt dW(t) + B²(t)dW(t)² = B²(t)dt + o(dt).

Thus, ignoring the o(dt) term, we derive the one-dimensional Itô chain rule

(7.8)   dY(t) = d(u(X(t))) = ( u′(X(t))A(t) + (1/2)B²(t)u″(X(t)) ) dt + u′(X(t))B(t)dW(t).

This means that for each time t > 0

  u(X(t)) = Y(t) = Y(0) + ∫_0^t ( u′(X(s))A(s) + (1/2)B²(s)u″(X(s)) ) ds + ∫_0^t u′(X(s))B(s) dW(s).
7.4.2 HIGHER DIMENSIONS. We turn now to stochastic differential equations in higher dimensions. For simplicity, we consider only the special form

(7.9)   dX(t) = A(t)dt + σdW(t)  (t ≥ 0),   X(0) = x^0.

We write

  X(t) = (X¹(t), X²(t), …, Xⁿ(t))ᵀ.

The stochastic differential equation means that for each index i, we have dXⁱ(t) = Aⁱ(t)dt + σdWⁱ(t).

ITÔ CHAIN RULE. Let u : R^n × [0, ∞) → R be smooth and define Y(t) := u(X(t), t). Then, arguing as before,

  dY(t) = u_t(X(t), t)dt + Σ_{i=1}^n u_{x_i}(X(t), t)dXⁱ(t) + (1/2)Σ_{i,j=1}^n u_{x_i x_j}(X(t), t)dXⁱ(t)dXʲ(t).

Now use (7.9) and the heuristic rules that

  dWⁱ = (dt)^{1/2}   and   dWⁱdWʲ = dt if i = j, 0 if i ≠ j.

The second rule holds since the components of dW are independent. Plug these identities into the calculation above and keep only terms of order dt or larger:

(7.10)   dY(t) = u_t(X(t), t)dt + Σ_{i=1}^n u_{x_i}(X(t), t)[Aⁱ(t)dt + σdWⁱ(t)] + (σ²/2)Σ_{i=1}^n u_{x_i x_i}(X(t), t)dt
              = u_t(X(t), t)dt + ∇_x u(X(t), t)·[A(t)dt + σdW(t)] + (σ²/2)Δu(X(t), t)dt.

This is Itô's chain rule in n dimensions. Here

  Δ = Σ_{i=1}^n ∂²/∂x_i²

denotes the Laplacian.
7.4.3 APPLICATIONS TO PDE.

A. A stochastic representation formula for harmonic functions. Consider a region U ⊆ R^n and the boundary-value problem

(7.11)   Δu = 0  (x ∈ U),   u = g  (x ∈ ∂U),

where, as above, Δ = Σ_{i=1}^n ∂²/∂x_i² is the Laplacian. We call u a harmonic function.

We develop a stochastic representation formula for the solution of (7.11). Consider the random process X(t) = W(t) + x; that is,

  dX(t) = dW(t)  (t > 0),   X(0) = x,

where W(·) denotes an n-dimensional Brownian motion. To find the link with the PDE (7.11), we define Y(t) := u(X(t)). Then Itô's rule (7.10) gives

  dY(t) = ∇u(X(t))·dW(t) + (1/2)Δu(X(t))dt.

Since Δu ≡ 0, we have

  dY(t) = ∇u(X(t))·dW(t);

which means

  u(X(t)) = Y(t) = Y(0) + ∫_0^t ∇u(X(s))·dW(s).

Let τ denote the (random) first time the sample path hits ∂U. Then, putting t = τ above, we have

  u(x) = u(X(τ)) − ∫_0^τ ∇u·dW(s).

But u(X(τ)) = g(X(τ)), by definition of τ. Next, average over all sample paths:

  u(x) = E[g(X(τ))] − E[ ∫_0^τ ∇u·dW ].

The last term equals zero, according to (7.6). Consequently,

  u(x) = E[g(X(τ))].

INTERPRETATION. Consider all the sample paths of the Brownian motion starting at x and take the average of g(X(τ)). This gives the value of u at x.
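The representation formula u(x) = E[g(X(τ))] suggests a Monte Carlo method: run many Brownian paths from x until they exit U and average g at the exit points. The sketch below is ours; it takes U to be the unit disk in R² with boundary data g(x) = x₁² − x₂², whose harmonic extension is simply u(x) = x₁² − x₂², so the answer can be checked. The step size and path count are arbitrary, and the small bias from overshooting the boundary is ignored.

```python
import numpy as np

# Monte Carlo evaluation of u(x) = E[ g(X(tau)) ], where X is Brownian motion
# started at x and tau is its first exit time from the unit disk U in R^2.

def g(p):
    return p[:, 0] ** 2 - p[:, 1] ** 2      # boundary data; harmonic extension is the same formula

rng = np.random.default_rng(3)
x = np.array([0.3, 0.4])
n_paths, dt = 20_000, 1e-3

pos = np.tile(x, (n_paths, 1))
alive = np.ones(n_paths, dtype=bool)
exit_points = np.zeros_like(pos)

while alive.any():
    n_alive = alive.sum()
    pos[alive] += rng.normal(0.0, np.sqrt(dt), size=(n_alive, 2))   # Brownian increments
    outside = alive & (np.linalg.norm(pos, axis=1) >= 1.0)
    exit_points[outside] = pos[outside]                             # record approximate exit point
    alive &= ~outside

print("Monte Carlo u(x) ~", g(exit_points).mean())
print("exact       u(x) =", x[0] ** 2 - x[1] ** 2)
```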
B. A time-dependent problem. We next modify the previous calculation to cover the terminal-value problem for the inhomogeneous backwards heat equation:

(7.11)   u_t(x, t) + (σ²/2)Δu(x, t) = f(x, t)  (x ∈ R^n, 0 ≤ t < T),   u(x, T) = g(x).

Fix x ∈ R^n, 0 ≤ t < T. We introduce the stochastic process

  dX(s) = σdW(s)  (s ≥ t),   X(t) = x.

Use Itô's chain rule (7.10) to compute du(X(s), s):

  du(X(s), s) = u_s(X(s), s) ds + ∇_x u(X(s), s)·dX(s) + (σ²/2)Δu(X(s), s) ds.

Now integrate for times t ≤ s ≤ T, to discover

  u(X(T), T) = u(X(t), t) + ∫_t^T ( (σ²/2)Δu(X(s), s) + u_s(X(s), s) ) ds + ∫_t^T ∇_x u(X(s), s)·σdW(s).

Then, since u solves (7.11):

  u(x, t) = E[ g(X(T)) − ∫_t^T f(X(s), s) ds ].

This is a stochastic representation formula for the solution u of the PDE (7.11).
7.5 DYNAMIC PROGRAMMING.

We now turn our attention to controlled stochastic differential equations, of the form

(SDE)   dX(s) = f(X(s), A(s)) ds + σdW(s)  (t ≤ s ≤ T),   X(t) = x.

Therefore

  X(τ) = x + ∫_t^τ f(X(s), A(s)) ds + σ[W(τ) − W(t)]

for all t ≤ τ ≤ T. We introduce as well the expected payoff functional

(P)   P_{x,t}[A(·)] := E[ ∫_t^T r(X(s), A(s)) ds + g(X(T)) ].

The value function is

  v(x, t) := sup_{A(·)∈𝒜} P_{x,t}[A(·)].

We will employ the method of dynamic programming. To do so, we must (i) find a PDE satisfied by v, and then (ii) use this PDE to design an optimal control A*(·).

7.5.1 A PDE FOR THE VALUE FUNCTION.

Let A(·) be any control, and suppose we use it for times t ≤ s ≤ t + h, h > 0, and thereafter employ an optimal control. Then

(7.12)   v(x, t) ≥ E[ ∫_t^{t+h} r(X(s), A(s)) ds + v(X(t + h), t + h) ],

and the inequality in (7.12) becomes an equality if we take A(·) ≡ A*(·), an optimal control.

Now from (7.12) we see for an arbitrary control that

  0 ≥ E[ ∫_t^{t+h} r(X(s), A(s)) ds + v(X(t + h), t + h) − v(x, t) ]
    = E[ ∫_t^{t+h} r ds ] + E[v(X(t + h), t + h)] − v(x, t).
Recall next Itô's formula:

(7.13)   dv(X(s), s) = v_t(X(s), s) ds + Σ_{i=1}^n v_{x_i}(X(s), s)dXⁱ(s) + (1/2)Σ_{i,j=1}^n v_{x_i x_j}(X(s), s)dXⁱ(s)dXʲ(s)
                    = v_t ds + ∇_x v·(f ds + σdW(s)) + (σ²/2)Δv ds.

This means that

  v(X(t + h), t + h) − v(X(t), t) = ∫_t^{t+h} ( v_t + ∇_x v·f + (σ²/2)Δv ) ds + ∫_t^{t+h} σ∇_x v·dW(s);

and so we can take expected values, to deduce

(7.14)   E[ v(X(t + h), t + h) − v(x, t) ] = E[ ∫_t^{t+h} ( v_t + ∇_x v·f + (σ²/2)Δv ) ds ].
We derive therefore the formula

  0 ≥ E[ ∫_t^{t+h} ( r + v_t + ∇_x v·f + (σ²/2)Δv ) ds ].

Divide by h:

  0 ≥ E[ (1/h) ∫_t^{t+h} ( r(X(s), A(s)) + v_t(X(s), s) + f(X(s), A(s))·∇_x v(X(s), s) + (σ²/2)Δv(X(s), s) ) ds ].

If we send h → 0, recall that X(t) = x and set A(t) := a ∈ A, we see that

  0 ≥ r(x, a) + v_t(x, t) + f(x, a)·∇_x v(x, t) + (σ²/2)Δv(x, t).

The above inequality holds for all x, t, a, and is actually an equality for the optimal control. Hence

  max_{a∈A} { v_t + f·∇_x v + (σ²/2)Δv + r } = 0.
Stochastic Hamilton-Jacobi-Bellman equation. In summary, we have shown that the value function v for our stochastic control problem solves this PDE:

(HJB)   v_t(x, t) + (σ²/2)Δv(x, t) + max_{a∈A} { f(x, a)·∇_x v(x, t) + r(x, a) } = 0,   v(x, T) = g(x).

This semilinear parabolic PDE is the stochastic Hamilton–Jacobi–Bellman equation. Our derivation has been very imprecise: see the references for rigorous derivations.

7.5.2 DESIGNING AN OPTIMAL CONTROL.
Assume now that we can somehow solve the (HJB) equation, and therefore know the function v. We can then compute for each point (x, t) a value a ∈ A at which

  f(x, a)·∇_x v(x, t) + r(x, a)

attains its maximum. In other words, for each (x, t) we choose a = α(x, t) such that the maximum

  max_{a∈A} { f(x, a)·∇_x v(x, t) + r(x, a) }

occurs for a = α(x, t). Next solve

  dX*(s) = f(X*(s), α(X*(s), s)) ds + σdW(s)  (t ≤ s ≤ T),   X*(t) = x,

assuming this is possible. Then A*(s) := α(X*(s), s) is an optimal feedback control.

7.6 APPLICATION: OPTIMAL PORTFOLIO SELECTION.

We close this chapter with an example, the problem of optimal investment and consumption. An investor holds total wealth X(t) at time t, split between a risk-free bond, whose value is b(t), and a risky stock, whose price is S(t); wealth may also be consumed. Define

  A₁(t) = fraction of wealth invested in the stock,
  A₂(t) = amount of wealth consumed per unit time.

Then

(7.15)   0 ≤ A₁(t) ≤ 1,   0 ≤ A₂(t) ≤ X(t)   (0 ≤ t ≤ T).
We assume that the value of the bond grows at the known rate r > 0:

(7.16)   db = rb dt;

whereas the price of the risky stock changes according to

(7.17)   dS = RS dt + σS dW.

Here r, R, σ are constants, with

  R > r > 0,   σ ≠ 0.

This means that the average return on the stock is greater than that for the risk-free bond.

According to (7.16) and (7.17), the total wealth evolves as

(7.18)   dX = (1 − A₁(t))Xr dt + A₁(t)X(R dt + σdW) − A₂(t)dt.

Let

  Q := {(x, t) | 0 ≤ t ≤ T, x ≥ 0}

and denote by τ the (random) first time X(·) leaves Q. Write A(t) = (A₁(t), A₂(t))ᵀ for the control.

The payoff functional to be maximized is

  P_{x,t}[A(·)] = E[ ∫_t^τ e^{−ρs} F(A₂(s)) ds ],

where F is a given utility function and ρ > 0 is the discount rate.
Guided by theory similar to that developed in §7.5, we discover that the corresponding (HJB) equation is

(7.19)   u_t + max_{0≤a₁≤1, a₂≥0} { ((a₁σx)²/2) u_{xx} + ((1 − a₁)xr + a₁xR − a₂)u_x + e^{−ρt}F(a₂) } = 0,

with the boundary conditions

(7.20)   u(0, t) = 0,   u(x, T) = 0.

We compute the maxima to find

(7.21)   A₁ = −(R − r)u_x / (σ²xu_{xx}),   F′(A₂) = e^{ρt}u_x,

provided that the constraints 0 ≤ A₁ ≤ 1 and 0 ≤ A₂ ≤ x are valid: we will need to worry about this later. If we can find a formula for the value function u, we will then be able to use (7.21) to compute optimal controls.
Finding an explicit solution. To go further, we assume the utility function F has the explicit form

  F(a) = a^γ   (0 < γ < 1),

and we guess that the value function has the form

  u(x, t) = g(t)x^γ

for some function g to be determined. Then (7.21) implies that

  A₁ = (R − r)/(σ²(1 − γ)),   A₂ = [e^{ρt}g(t)]^{1/(γ−1)} x.

Plugging our guess for the form of u into (7.19) and setting a₁ = A₁, a₂ = A₂, we find

  ġ(t) + γν g(t) + (1 − γ)(e^{ρt}g(t))^{1/(γ−1)} g(t) = 0,   g(T) = 0,

for the constant

  ν := (R − r)²/(2σ²(1 − γ)) + r.

Now put

  h(t) := (e^{ρt}g(t))^{1/(1−γ)}

to obtain a linear ODE for h. Then we find

  g(t) = e^{−ρt} [ ((1 − γ)/(ρ − γν)) ( 1 − e^{(γν−ρ)(T−t)/(1−γ)} ) ]^{1−γ}.

If R − r ≤ σ²(1 − γ), then 0 ≤ A₁ ≤ 1 and A₂ ≥ 0, as required.
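For a concrete feel for these formulas, here is a small sketch of ours: it evaluates ν, the (constant) optimal stock fraction A₁, and the optimal consumption fraction A₂/X = 1/h(t) at a few times, using F(a) = a^γ as above. The parameter values are arbitrary illustrative choices satisfying R − r ≤ σ²(1 − γ).

```python
import numpy as np

# Optimal investment and consumption with F(a) = a**gamma:
#   A1     = (R - r) / (sigma**2 * (1 - gamma))             (fraction in the stock)
#   A2 / X = (exp(rho*t) * g(t))**(1/(gamma-1)) = 1 / h(t)  (consumption per unit wealth)
# where h(t) = (1 - exp(-mu*(T - t))) / mu,  mu = (rho - gamma*nu)/(1 - gamma),
#       nu   = (R - r)**2 / (2*sigma**2*(1 - gamma)) + r.

r, R, sigma = 0.03, 0.08, 0.35
gamma, rho, T = 0.5, 0.10, 10.0

nu = (R - r) ** 2 / (2 * sigma ** 2 * (1 - gamma)) + r
mu = (rho - gamma * nu) / (1 - gamma)
A1 = (R - r) / (sigma ** 2 * (1 - gamma))
print("nu =", nu, "  mu =", mu)
print("optimal stock fraction A1 =", A1, " (here R - r <= sigma^2 (1 - gamma), so A1 <= 1)")

for t in (0.0, 2.5, 5.0, 7.5):
    h = (1.0 - np.exp(-mu * (T - t))) / mu
    g = np.exp(-rho * t) * h ** (1.0 - gamma)
    print(f"t = {t:4.1f}:  g(t) = {g:8.4f}   consumption fraction A2/X = {1.0 / h:8.4f}")
```

Note that the optimal stock fraction is constant in time, while the consumption fraction grows as the horizon T approaches, since there is no reward for terminal wealth.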
7.7 REFERENCES

The lecture notes [E], available online, present a fast but more detailed discussion of stochastic differential equations. See also Oksendal's nice book [O].

Good books on stochastic optimal control include Fleming-Rishel [F-R], Fleming-Soner [F-S], and Krylov [Kr].
APPENDIX: PROOFS OF THE PONTRYAGIN MAXIMUM PRINCIPLE

A.1 Simple control variations
A.2 Free endpoint problem, no running payoff
A.3 Free endpoint problem with running payoffs
A.4 Multiple control variations
A.5 Fixed endpoint problem
A.6 References
A.1. SIMPLE CONTROL VARIATIONS.

Recall that the response x(·) to a given control α(·) is the unique solution of the system of differential equations:

(ODE)   ẋ(t) = f(x(t), α(t))  (t ≥ 0),   x(0) = x^0.

We investigate in this section how certain simple changes in the control affect the response.

DEFINITION. Fix a time s > 0 and a control parameter value a ∈ A. Select ε > 0 so small that 0 < s − ε < s and define then the modified control

(8.1)   α_ε(t) := a if s − ε < t < s,   α_ε(t) := α(t) otherwise.

We call α_ε(·) a simple variation of α(·). Let x_ε(·) be the corresponding response of our system:

  ẋ_ε(t) = f(x_ε(t), α_ε(t))  (t > 0),   x_ε(0) = x^0.

We want to understand how our choices of s and a cause x_ε(·) to differ from x(·) for later times.
We want to understand how our choices of s and a cause x
(t) = f (y
(t), (t)) (t 0)
y
(0) = x
0
+y
0
+o().
Then
y
(t) x(t) =
_
t
s
f (x
(s) = x(s) +y
s
+o(),
for y
s
dened by (8.5).
According to Lemma A.1, we have
x
().
A.2. FREE ENDPOINT PROBLEM, NO RUNNING PAYOFF.

Here the payoff functional is simply P[α(·)] = g(x(T)). We are taking the running payoff r ≡ 0, and hence the control theory Hamiltonian is therefore

  H(x, p, a) = f(x, a)·p.

We must find p* : [0, T] → R^n such that

(ADJ)   ṗ*(t) = −∇_x H(x*(t), p*(t), α*(t))   (0 ≤ t ≤ T)

and

(M)   H(x*(t), p*(t), α*(t)) = max_{a∈A} H(x*(t), p*(t), a).

To simplify notation we henceforth drop the superscript * and so write x(·) for x*(·), α(·) for α*(·).

Define p(·) to be the solution of the terminal-value problem

(8.7)   ṗ(t) = −Aᵀ(t)p(t)  (0 ≤ t ≤ T),   p(T) = ∇g(x(T)),

for A(·) = ∇_x f(x(·), α(·)).

LEMMA A.3. For each time 0 < s < T and each parameter value a ∈ A,

(8.8)   (d/dε) P[α_ε(·)] |_{ε=0} = p(s)·[f(x(s), a) − f(x(s), α(s))].
Proof. According to Lemma A.2,

  P[α_ε(·)] = g(x_ε(T)) = g(x(T) + εy(T) + o(ε));

hence

(8.9)   (d/dε) P[α_ε(·)] |_{ε=0} = ∇g(x(T))·y(T).

On the other hand, (8.5) and (8.7) imply

  (d/dt)(p(t)·y(t)) = ṗ(t)·y(t) + p(t)·ẏ(t) = −Aᵀ(t)p(t)·y(t) + p(t)·A(t)y(t) = 0.

Hence

  ∇g(x(T))·y(T) = p(T)·y(T) = p(s)·y(s) = p(s)·y_s.

Since y_s = f(x(s), a) − f(x(s), α(s)), this identity and (8.9) imply (8.8). □
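As a numerical sanity check of formula (8.8), here is a sketch of ours for an arbitrarily chosen one-dimensional system, terminal payoff and needle variation: it compares a finite-difference derivative of ε ↦ P[α_ε(·)] with the right-hand side p(s)·[f(x(s), a) − f(x(s), α(s))].

```python
import numpy as np

# Check (8.8) for f(x, a) = a - x, alpha(t) = 0.2, payoff P = g(x(T)) = x(T)**2.
# The needle location s and the parameter value a are arbitrary illustrative choices.

def simulate(control, x0=1.0, T=1.0, dt=1e-5):
    n = int(round(T / dt))
    x = x0
    traj = [(0.0, x)]
    for k in range(n):
        t = k * dt
        x += dt * (control(t) - x)          # Euler step for f(x, a) = a - x
        traj.append(((k + 1) * dt, x))
    return x, traj

alpha = lambda t: 0.2
s, a, T = 0.4, 1.0, 1.0

def payoff(eps):
    ctrl = (lambda t: a if s - eps < t < s else alpha(t))   # needle variation (8.1)
    xT, _ = simulate(ctrl)
    return xT ** 2                                          # g(x(T)) = x(T)^2

eps = 1e-3
finite_difference = (payoff(eps) - payoff(0.0)) / eps

# Right-hand side of (8.8): p solves p' = -A p = p (A = df/dx = -1), p(T) = g'(x(T)) = 2 x(T).
xT, traj = simulate(alpha)
xs = min(traj, key=lambda pair: abs(pair[0] - s))[1]        # x(s) on the unvaried trajectory
p_s = 2.0 * xT * np.exp(s - T)
formula = p_s * ((a - xs) - (alpha(s) - xs))                # = p(s) (a - alpha(s))

print("finite difference :", finite_difference)
print("formula (8.8)     :", formula)
```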
We now restore the superscripts * in our notation.

THEOREM A.4 (PONTRYAGIN MAXIMUM PRINCIPLE). There exists a function p* : [0, T] → R^n satisfying the adjoint dynamics (ADJ), the maximization principle (M) and the terminal/transversality condition (T).

Proof. The adjoint dynamics and terminal condition are both in (8.7). To confirm (M), fix 0 < s < T and a ∈ A, as above. Since the mapping ε ↦ P[α_ε(·)] has a maximum at ε = 0, Lemma A.3 gives

  0 ≥ (d/dε) P[α_ε(·)] |_{ε=0} = p*(s)·[f(x*(s), a) − f(x*(s), α*(s))].

Hence

  H(x*(s), p*(s), a) = f(x*(s), a)·p*(s) ≤ f(x*(s), α*(s))·p*(s) = H(x*(s), p*(s), α*(s))

for each 0 < s < T and a ∈ A. This proves the maximization condition (M). □
A.3. FREE ENDPOINT PROBLEM WITH RUNNING PAYOFFS

We next cover the case that the payoff functional includes a running payoff:

(P)   P[α(·)] = ∫_0^T r(x(s), α(s)) ds + g(x(T)).

The control theory Hamiltonian is now

  H(x, p, a) = f(x, a)·p + r(x, a),

and we must manufacture a costate function p* : [0, T] → R^n satisfying (ADJ), (M) and (T) for this H.

The trick is to add a new variable and so convert to the previous case. Define x^{n+1}(·) by

  ẋ^{n+1}(t) = r(x(t), α(t))  (0 ≤ t ≤ T),   x^{n+1}(0) = 0,

and write x̄ := (x, x^{n+1})ᵀ, f̄(x̄, a) := (f(x, a), r(x, a))ᵀ, ḡ(x̄) := g(x) + x^{n+1}; then P[α(·)] = ḡ(x̄(T)), a free endpoint problem with no running payoff. Applying Theorem A.4 to this augmented problem, we obtain a costate p̄* : [0, T] → R^{n+1} satisfying (M) for the Hamiltonian

(8.11)   H̄(x̄, p̄, a) = f̄(x̄, a)·p̄.

Also the adjoint equations (ADJ) hold, with the terminal transversality condition

(T)   p̄*(T) = ∇ḡ(x̄*(T)).

But f̄ does not depend upon the variable x^{n+1}, and so the (n + 1)-th equation in the adjoint equations (ADJ) reads

  ṗ*_{n+1}(t) = −H̄_{x_{n+1}} = 0.

Since ḡ_{x_{n+1}} = 1, we deduce that

(8.12)   p*_{n+1}(t) ≡ 1.

As the (n + 1)-th component of the vector function f̄ is r, we then conclude from (8.11) that H̄ = f·p + r along the optimal trajectory, and therefore

  p*(t) := (p*_1(t), …, p*_n(t))ᵀ

satisfies (ADJ), (M) for the Hamiltonian H. □
A.4. MULTIPLE CONTROL VARIATIONS.

To derive the Pontryagin Maximum Principle for the fixed endpoint problem in §A.5 we will need to introduce some more complicated control variations, discussed in this section.

DEFINITION. Let us select times 0 < s₁ < s₂ < ⋯ < s_N, positive numbers 0 < λ₁, …, λ_N, and also control parameters a₁, a₂, …, a_N ∈ A.

We generalize our earlier definition (8.1) by now defining

(8.13)   α_ε(t) := a_k if s_k − λ_kε ≤ t < s_k  (k = 1, …, N),   α_ε(t) := α(t) otherwise,

for ε > 0 taken so small that the intervals [s_k − λ_kε, s_k] do not overlap. This we will call a multiple variation of the control α(·).

Let x_ε(·) be the corresponding response of our system:

(8.14)   ẋ_ε(t) = f(x_ε(t), α_ε(t))  (t ≥ 0),   x_ε(0) = x^0.
NOTATION. (i) As before, A(·) = ∇_x f(x(·), α(·)), and we write

(8.15)   y(t) = Y(t, s)y_s   (t ≥ s)

to denote the solution of

(8.16)   ẏ(t) = A(t)y(t)  (t ≥ s),   y(s) = y_s,

where y_s ∈ R^n is given.

(ii) Define

(8.17)   y_{s_k} := f(x(s_k), a_k) − f(x(s_k), α(s_k))

for k = 1, …, N.
We next generalize Lemma A.2:

LEMMA A.5 (MULTIPLE CONTROL VARIATIONS). We have

(8.18)   x_ε(t) = x(t) + εy(t) + o(ε)   as ε → 0,

where

  y(t) = 0   (0 ≤ t ≤ s₁),
  y(t) = Σ_{k=1}^m λ_k Y(t, s_k)y_{s_k}   (s_m ≤ t ≤ s_{m+1}), for m = 1, …, N − 1,
  y(t) = Σ_{k=1}^N λ_k Y(t, s_k)y_{s_k}   (s_N ≤ t).

DEFINITION. The cone of variations at time t is the set

(8.20)   K(t) := { Σ_{k=1}^N λ_k Y(t, s_k)y_{s_k} | N = 1, 2, …, λ_k > 0, a_k ∈ A, 0 < s₁ ≤ s₂ ≤ ⋯ ≤ s_N < t }.

Observe that K(t) is a convex cone in R^n, which according to Lemma A.5 consists of all changes in the state x(t) (up to order ε) we can effect by multiple variations of the control α(·).

We will study the geometry of K(t) in the next section, and for this will require the following topological lemma:
LEMMA A.6 (ZEROES OF A VECTOR FIELD). Let S denote a closed, bounded, convex subset of R^n and assume p is a point in the interior of S. Suppose Φ : S → R^n is a continuous vector field that satisfies the strict inequalities

(8.21)   |Φ(x) − x| < |x − p|   for all x ∈ S.

Then there exists a point x ∈ S such that

(8.22)   Φ(x) = p.

Proof. 1. Suppose first that S is the unit ball B(0, 1) and p = 0. Squaring (8.21), we deduce that

  Φ(x)·x > 0   for all x ∈ B(0, 1).

Then for small t > 0, the continuous mapping

  Ψ(x) := x − tΦ(x)

maps B(0, 1) into itself, and hence has a fixed point x*; and then Φ(x*) = 0.

2. In the general case, we can always assume after a translation that p = 0. Then 0 belongs to the interior of S. We next map S onto B(0, 1) by a radial dilation, transforming Φ accordingly. This process converts the problem to the previous situation. □
A.5. FIXED ENDPOINT PROBLEM.

In this last section we treat the fixed endpoint problem, characterized by the constraint

(8.23)   x(τ) = x^1,

where τ = τ[α(·)] is the first time that x(·) hits the given target point x^1 ∈ R^n. The payoff functional is

(P)   P[α(·)] = ∫_0^τ r(x(s), α(s)) ds.

ADDING A NEW VARIABLE. As in §A.3 we define the function x^{n+1} : [0, τ] → R by

  ẋ^{n+1}(t) = r(x(t), α(t))  (0 ≤ t ≤ τ),   x^{n+1}(0) = 0,

and reintroduce the notation
  x̄ := (x, x^{n+1})ᵀ = (x₁, …, xₙ, x^{n+1})ᵀ,   x̄^0 := (x^0, 0)ᵀ,
  x̄(t) := (x(t), x^{n+1}(t))ᵀ,   f̄(x̄, a) := (f(x, a), r(x, a))ᵀ = (f₁(x, a), …, fₙ(x, a), r(x, a))ᵀ,

with

  ḡ(x̄) = x^{n+1}.
The problem is therefore to find controlled dynamics satisfying

(ODE)   ẋ̄(t) = f̄(x̄(t), α(t))  (0 ≤ t ≤ τ),   x̄(0) = x̄^0,

and maximizing

(P)   ḡ(x̄(τ)) = x^{n+1}(τ),

τ being the first time that x(τ) = x^1. In other words, the first n components of x̄(τ) are prescribed, and we want to maximize the (n + 1)-th component.

We assume that α*(·) is an optimal control for this problem, with response x̄*(·); our task is to construct a costate p*(·) satisfying the maximization principle (M). As usual, we drop the superscript * to simplify notation.

THE CONE OF VARIATIONS. We will employ the notation and theory from the previous section, changed only in that we now work with n + 1 variables (as we will be reminded by the overbar on various expressions).

Our program for building the costate depends upon our taking multiple variations, as in §A.4, and understanding the resulting cone of variations at time τ:

(8.24)   K̄ = K̄(τ) := { Σ_{k=1}^N λ_k Ȳ(τ, s_k)ȳ_{s_k} | N = 1, 2, …, λ_k > 0, a_k ∈ A, 0 < s₁ ≤ s₂ ≤ ⋯ ≤ s_N < τ },

for

(8.25)   ȳ_{s_k} := f̄(x̄(s_k), a_k) − f̄(x̄(s_k), α(s_k)).

We are now writing

(8.26)   ȳ(t) = Ȳ(t, s)ȳ_s

for the solution of

(8.27)   ẏ̄(t) = Ā(t)ȳ(t)  (s ≤ t ≤ τ),   ȳ(s) = ȳ_s,

with

  Ā(·) := ∇_x̄ f̄(x̄(·), α(·)).
LEMMA A.7 (GEOMETRY OF THE CONE OF VARIATIONS). We have

(8.28)   e_{n+1} ∉ K̄^0.

Here K̄^0 denotes the interior of K̄ and e_k = (0, …, 1, …, 0)ᵀ, the 1 in the k-th slot.

Proof. 1. If (8.28) were false, there would then exist n + 1 linearly independent vectors z̄₁, …, z̄_{n+1} ∈ K̄ such that

  e_{n+1} = Σ_{k=1}^{n+1} λ_k z̄_k   with positive constants λ_k > 0

and

(8.29)   z̄_k = Ȳ(τ, s_k)ȳ_{s_k}

for appropriate times 0 < s₁ < s₂ < ⋯ < s_{n+1} < τ and vectors ȳ_{s_k} = f̄(x̄(s_k), a_k) − f̄(x̄(s_k), α(s_k)), for k = 1, …, n + 1.
2. We will next construct a control α_ε(·), with corresponding response x̄_ε(·) = (x_ε(·)ᵀ, x^{n+1}_ε(·))ᵀ, satisfying

(8.30)   x_ε = x^1 at its arrival time

and

(8.31)   x^{n+1}_ε at that time exceeds x^{n+1}(τ).

This will be a contradiction to the optimality of the control α(·): (8.30) says that the new control satisfies the endpoint constraint and (8.31) says it increases the payoff.

3. Introduce for small δ > 0 the closed and convex set

  S := { x̄ = Σ_{k=1}^{n+1} λ_k z̄_k | 0 ≤ λ_k ≤ δ }.

Since the vectors z̄₁, …, z̄_{n+1} are independent, S has an interior.
Now define for small ε > 0 the mapping

  Φ_ε : S → R^{n+1}

by setting

  Φ_ε(x̄) := x̄_ε(τ) − x̄(τ)

for x̄ = Σ_{k=1}^{n+1} λ_k z̄_k, where x̄_ε(·) is the response to the multiple variation (8.13) built from these λ_k, a_k and s_k.

We assert that if the parameters above are small enough, then

  Φ_ε(x̄) = p := σe_{n+1} = (0, …, 0, σ)ᵀ

for some x̄ ∈ S and some small σ > 0, the point p lying in the interior of S. To see this, note that

  |Φ_ε(x̄) − x̄| = |x̄_ε(τ) − x̄(τ) − x̄| = o(|x̄|) as x̄ → 0, x̄ ∈ S,
                < |x̄ − p|   for all x̄ ∈ S.

Now apply Lemma A.6: we find x̄ ∈ S with Φ_ε(x̄) = p, which yields (8.30) and (8.31), the desired contradiction. □
EXISTENCE OF THE COSTATE. We now restore the superscripts * and so write x̄*(·), α*(·), τ* for the optimal trajectory, control and arrival time.

THEOREM A.8 (MAXIMUM PRINCIPLE FOR THE FIXED ENDPOINT PROBLEM). There exists a function p* : [0, τ*] → R^n satisfying the adjoint dynamics (ADJ) and the maximization principle (M).

The proof explains what "abnormal" means in this context.

Proof. 1. Since e_{n+1} ∉ K̄^0 according to Lemma A.7, there is a nonzero vector w ∈ R^{n+1} such that

(8.32)   w·z̄ ≤ 0   for all z̄ ∈ K̄

and

(8.33)   w_{n+1} ≥ 0.

Let p̄*(·) solve the adjoint equations for the augmented system, with the terminal condition

  p̄*(τ*) = w.

Then, since f̄ does not depend upon x^{n+1}, the (n + 1)-th component of p̄* is constant:

(8.34)   p̄*_{n+1}(·) ≡ w_{n+1} ≥ 0.
Fix any time 0 ≤ s < τ*, any control value a ∈ A, and set

  ȳ_s := f̄(x̄*(s), a) − f̄(x̄*(s), α*(s)).

Now solve

  ẏ̄(t) = Ā(t)ȳ(t)  (s ≤ t ≤ τ*),   ȳ(s) = ȳ_s;

so that, as in §A.2 (and since ȳ(τ*) = Ȳ(τ*, s)ȳ_s ∈ K̄),

  0 ≥ w·ȳ(τ*) = p̄*(τ*)·ȳ(τ*) = p̄*(s)·ȳ(s) = p̄*(s)·ȳ_s.

Therefore

  p̄*(s)·[f̄(x̄*(s), a) − f̄(x̄*(s), α*(s))] ≤ 0;

and then

(8.35)   H̄(x̄*(s), p̄*(s), a) = f̄(x̄*(s), a)·p̄*(s) ≤ f̄(x̄*(s), α*(s))·p̄*(s) = H̄(x̄*(s), p̄*(s), α*(s)),

for the Hamiltonian

  H̄(x̄, p̄, a) = f̄(x̄, a)·p̄.
2. We now must address two situations, according to whether

(8.36)   w_{n+1} > 0

or

(8.37)   w_{n+1} = 0.

When (8.36) holds, we can divide p̄* by the constant w_{n+1} and so assume p*_{n+1} ≡ 1. Since the (n + 1)-th component of f̄ is r, the inequality (8.35) then becomes

  H(x*(s), p*(s), a) ≤ H(x*(s), p*(s), α*(s))

for

  H(x, p, a) = f(x, a)·p + r(x, a).

This is the maximization principle (M), as required.

When (8.37) holds, we have an abnormal problem, as discussed in the Remarks and Warning after Theorem 4.4. Those comments explain how to reformulate the Pontryagin Maximum Principle for abnormal problems. □
CRITIQUE. (i) The foregoing proofs are not complete, in that we have silently passed over certain measurability concerns and also ignored in (8.29) the possibility that some of the times s_k are equal.

(ii) We have also not (yet) proved that t ↦ H(x*(t), p*(t), α*(t)) is constant, nor that H(x*(t), p*(t), α*(t)) ≡ 0 in the fixed endpoint problem of §A.5.

A.6. REFERENCES.

We mostly followed Fleming-Rishel [F-R] for §§A.1–A.3 and Macki-Strauss [M-S] for §§A.4 and A.5. Another approach is discussed in Craven [Cr]. Hocking [H] has a nice heuristic discussion.
References

[B-CD] M. Bardi and I. Capuzzo-Dolcetta, Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations, Birkhäuser, 1997.
[B-J] N. Barron and R. Jensen, The Pontryagin maximum principle from dynamic programming and viscosity solutions to first-order partial differential equations, Transactions AMS 298 (1986), 635-641.
[C] F. Clarke, Optimization and Nonsmooth Analysis, Wiley-Interscience, 1983.
[Cr] B. D. Craven, Control and Optimization, Chapman & Hall, 1995.
[E] L. C. Evans, An Introduction to Stochastic Differential Equations, lecture notes available at http://math.berkeley.edu/~evans/SDE.course.pdf.
[F-R] W. Fleming and R. Rishel, Deterministic and Stochastic Optimal Control, Springer, 1975.
[F-S] W. Fleming and M. Soner, Controlled Markov Processes and Viscosity Solutions, Springer, 1993.
[H] L. Hocking, Optimal Control: An Introduction to the Theory with Applications, Oxford University Press, 1991.
[I] R. Isaacs, Differential Games: A mathematical theory with applications to warfare and pursuit, control and optimization, Wiley, 1965 (reprinted by Dover in 1999).
[K] G. Knowles, An Introduction to Applied Optimal Control, Academic Press, 1981.
[Kr] N. V. Krylov, Controlled Diffusion Processes, Springer, 1980.
[L-M] E. B. Lee and L. Markus, Foundations of Optimal Control Theory, Wiley, 1967.
[L] J. Lewin, Differential Games: Theory and methods for solving game problems with singular surfaces, Springer, 1994.
[M-S] J. Macki and A. Strauss, Introduction to Optimal Control Theory, Springer, 1982.
[O] B. K. Oksendal, Stochastic Differential Equations: An Introduction with Applications, 4th ed., Springer, 1995.
[O-W] G. Oster and E. O. Wilson, Caste and Ecology in Social Insects, Princeton University Press.
[P-B-G-M] L. S. Pontryagin, V. G. Boltyanski, R. S. Gamkrelidze and E. F. Mishchenko, The Mathematical Theory of Optimal Processes, Interscience, 1962.
[T] William J. Terrell, Some fundamental control theory I: Controllability, observability, and duality, American Math Monthly 106 (1999), 705-719.