
SEEM5410

Optimal Control Unit 4: Pontryagin's Maximum Principle


Duan LI
http://www.se.cuhk.edu.hk/seem5410

2 There always exist some constraints on the control u. As an extension of the classical calculus of variations, Pontryagin's Minimum Principle enables us to deal with optimal control problems with constraints on u. In this class we give only an INFORMAL proof of Pontryagin's Minimum Principle.

Important recognitions: $u^*$ is a minimum, so
$$J(u) - J(u^*) = \Delta J \ge 0, \quad \text{for all admissible } u,$$
$$\Delta J(u^*, \delta u) = \delta J(u^*, \delta u) + \text{h.o.t.},$$
$$\delta J(u^*, \delta u) \ge 0 \;\text{ if } u^* \text{ lies on the boundary during any portion of } [t_0, t_f],$$
$$\delta J(u^*, \delta u) = 0 \;\text{ if } u^* \text{ lies within the boundary during any portion of } [t_0, t_f].$$

Figure 1: Admissible variation in control

4 The increment at $u^*$ (here $\theta(x(t_f), t_f)$ denotes the terminal-cost term of the performance index):
$$\begin{aligned}
\Delta J(u^*, \delta u) ={}& \Big[\frac{\partial \theta}{\partial x}(x^*(t_f), t_f) - p^*(t_f)\Big]^T \delta x_f \\
&+ \Big[H(x^*(t_f), u^*(t_f), p^*(t_f), t_f) + \frac{\partial \theta}{\partial t}(x^*(t_f), t_f)\Big]\, \delta t_f \\
&+ \int_{t_0}^{t_f} \Big\{ \Big[\dot p^*(t) + \frac{\partial H}{\partial x}(x^*(t), u^*(t), p^*(t), t)\Big]^T \delta x(t) \\
&\qquad\quad + \Big[\frac{\partial H}{\partial u}(x^*(t), u^*(t), p^*(t), t)\Big]^T \delta u(t) \\
&\qquad\quad + \Big[\frac{\partial H}{\partial p}(x^*(t), u^*(t), p^*(t), t) - \dot x^*(t)\Big]^T \delta p(t) \Big\}\, dt \\
&+ \text{h.o.t.}
\end{aligned}$$

5 Set
$$\dot x^*(t) = \frac{\partial H}{\partial p}(x^*(t), u^*(t), p^*(t), t), \qquad \dot p^*(t) = -\frac{\partial H}{\partial x}(x^*(t), u^*(t), p^*(t), t).$$
Then
$$\delta J(u^*, \delta u) = \int_{t_0}^{t_f} \Big[\frac{\partial H}{\partial u}(x^*(t), u^*(t), p^*(t), t)\Big]^T \delta u(t)\, dt + \text{h.o.t.}$$
$$= \int_{t_0}^{t_f} \big[H(x^*(t), u^*(t)+\delta u(t), p^*(t), t) - H(x^*(t), u^*(t), p^*(t), t)\big]\, dt + \text{h.o.t.} \;\ge\; 0$$
$$\Longrightarrow\quad H(x^*(t), u^*(t)+\delta u(t), p^*(t), t) \ge H(x^*(t), u^*(t), p^*(t), t), \quad \text{for all admissible } \delta u(t),\; t \in [t_0, t_f].$$

6 Summary of Minimum Principle:
Hamiltonian:
$$H(x(t), u(t), p(t), t) = L(x(t), u(t), t) + p^T(t)\, f(x(t), u(t), t)$$
Necessary Optimum Conditions ($t \in [t_0, t_f]$):
1. $\dot x^*(t) = \dfrac{\partial H}{\partial p}(x^*(t), u^*(t), p^*(t), t)$
2. $\dot p^*(t) = -\dfrac{\partial H}{\partial x}(x^*(t), u^*(t), p^*(t), t)$
3. $H(x^*(t), u^*(t), p^*(t), t) \le H(x^*(t), u(t), p^*(t), t)$, for all admissible $u(t)$.
Boundary Conditions:
$$\Big[\frac{\partial \theta}{\partial x}(x^*(t_f), t_f) - p^*(t_f)\Big]^T \delta x_f + \Big[H(x^*(t_f), u^*(t_f), p^*(t_f), t_f) + \frac{\partial \theta}{\partial t}(x^*(t_f), t_f)\Big]\, \delta t_f = 0$$
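To see conditions 1–3 in action, here is a small worked illustration (added here; it is not part of the original slides). Consider minimizing $J = \int_{t_0}^{t_f} \tfrac12 u^2(t)\, dt$ subject to $\dot x(t) = u(t)$, with $t_0$, $t_f$, $x(t_0) = x_0$ and $x(t_f) = x_f$ all fixed and $u$ unconstrained. Then
$$H = \tfrac12 u^2 + p\, u, \qquad \dot p^* = -\frac{\partial H}{\partial x} = 0 \;\Rightarrow\; p^*(t) \equiv \text{const}, \qquad \min_u H \;\Rightarrow\; u^*(t) = -p^*,$$
so the optimal control is constant, and the fixed endpoints give $u^* = (x_f - x_0)/(t_f - t_0)$. With both endpoints and the final time fixed, $\delta x_f = 0$ and $\delta t_f = 0$, so the boundary condition above is satisfied automatically.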

7 Additional Necessary Optimum Conditions:
1. If the final time is fixed and the Hamiltonian does not depend explicitly on time, then
$$H(x^*(t), u^*(t), p^*(t)) = \text{constant}, \quad t \in [t_0, t_f].$$
2. If the final time is free and the Hamiltonian does not depend explicitly on time, then
$$H(x^*(t), u^*(t), p^*(t)) = 0, \quad t \in [t_0, t_f].$$
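For example (an added remark, not from the slides): in the time-invariant linear minimum-time problem treated later in this unit, the final time is free and $H = 1 + p^T(t)[Ax(t) + Bu(t)]$ has no explicit time dependence, so condition 2 gives $p^{*T}(t)[A x^*(t) + B u^*(t)] = -1$ throughout $[t_0, t_f]$.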

8 Minimum-Time Optimal Control

Problem Formulation:
$$\min\; J = \int_{t_0}^{t_f} dt$$
$$\text{s.t.}\quad \dot x(t) = a(x(t), t) + B(x(t), t)\, u(t)$$
$$M_i^- \le u_i \le M_i^+, \quad i = 1, \ldots, m,$$
where $x(t) \in \mathbb{R}^n$, $u(t) \in \mathbb{R}^m$, $a = (a_1, \ldots, a_n)^T$, $B = (b_1, \ldots, b_m) \in \mathbb{R}^{n \times m}$.
Hamiltonian:
$$H(x(t), u(t), p(t), t) = 1 + p^T(t)\,[a(x(t), t) + B(x(t), t)\, u(t)]$$

9 Minimum principle:
$$1 + (p^*(t))^T[a(x^*(t), t) + B(x^*(t), t)\, u^*(t)] \;\le\; 1 + (p^*(t))^T[a(x^*(t), t) + B(x^*(t), t)\, u(t)]$$
$$\Longleftrightarrow\quad (p^*(t))^T B(x^*(t), t)\, u^*(t) \;\le\; (p^*(t))^T B(x^*(t), t)\, u(t)$$
Bang-bang control:
$$u_i^*(t) = \begin{cases} M_i^- & \text{if } (p^*(t))^T b_i(x^*(t), t) > 0 \\ M_i^+ & \text{if } (p^*(t))^T b_i(x^*(t), t) < 0 \\ \text{undetermined} & \text{if } (p^*(t))^T b_i(x^*(t), t) = 0 \end{cases}$$
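This pointwise rule is easy to state in code. Below is a minimal illustrative sketch (added here; the function name and interface are my own, not from the slides) that evaluates the switching functions $q_i = (p^*)^T b_i$ and returns the corresponding bang-bang control.

```python
import numpy as np

def bang_bang_control(p, B, M_minus, M_plus):
    """Pointwise time-optimal (bang-bang) control law.

    p       : costate vector p*(t), shape (n,)
    B       : input matrix B(x*(t), t), shape (n, m), with columns b_i
    M_minus : lower control bounds M_i^-, shape (m,)
    M_plus  : upper control bounds M_i^+, shape (m,)
    """
    q = B.T @ p                           # switching functions q_i = p^T b_i
    u = np.where(q > 0, M_minus, M_plus)  # minimize q_i * u_i over [M_i^-, M_i^+]
    u = np.where(np.isclose(q, 0.0), np.nan, u)  # q_i = 0: the rule gives no information
    return u

# Example: single input with b = (0, 1)^T, bounds |u| <= 1, costate p* = (1, -2):
B = np.array([[0.0], [1.0]])
print(bang_bang_control(np.array([1.0, -2.0]), B, np.array([-1.0]), np.array([1.0])))
# q = B^T p* = [-2] < 0, so u* = M^+ = [1.0]
```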

10 Figure 2: The relationship between a time-optimal control and its coefficient in the Hamiltonian

11 Minimum-Time Optimal Control of Time-Invariant Linear Systems (Pontryagin et al.)

Problem Formulation:
$$\min\; J = \int_{t_0}^{t_f} dt$$
$$\text{s.t.}\quad \dot x(t) = A x(t) + B u(t)$$
$$x(t_0) = x_0, \quad x(t_f) = 0$$
$$|u_i| \le 1, \quad i = 1, \ldots, m.$$
If all eigenvalues of $A$ have nonpositive real parts, then a unique optimal control exists that transfers any initial state $x_0$ to the origin. If all eigenvalues of $A$ are real and a unique optimal control exists, then each control component can switch at most $(n-1)$ times.
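As a standard illustration (added here, not from the original slides), consider the double integrator
$$\dot x = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u, \qquad |u| \le 1.$$
The eigenvalues of $A$ are $\{0, 0\}$, which are real with nonpositive real parts, so a unique time-optimal control to the origin exists and each control component switches at most $n - 1 = 1$ time; the well-known solution is bang-bang with a single switch on a parabolic switching curve.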

12 Minimum-Fuel Optimal Control

Problem Formulation:
$$\min\; J = \int_{t_0}^{t_f} \Big[\sum_{i=1}^{m} |u_i(t)|\Big]\, dt$$
$$\text{s.t.}\quad \dot x(t) = a(x(t), t) + B(x(t), t)\, u(t)$$
$$-1 \le u_i \le 1, \quad i = 1, \ldots, m,$$
where $x(t) \in \mathbb{R}^n$, $u(t) \in \mathbb{R}^m$, $a = (a_1, \ldots, a_n)^T$, $B = (b_1, \ldots, b_m) \in \mathbb{R}^{n \times m}$.
Hamiltonian:
$$H(x(t), u(t), p(t), t) = \sum_{i=1}^{m} |u_i(t)| + p^T(t)\,[a(x(t), t) + B(x(t), t)\, u(t)]$$

13 Minimum principle:
$$\sum_{i=1}^{m} |u_i^*(t)| + (p^*(t))^T[a(x^*(t), t) + B(x^*(t), t)\, u^*(t)] \;\le\; \sum_{i=1}^{m} |u_i(t)| + (p^*(t))^T[a(x^*(t), t) + B(x^*(t), t)\, u(t)]$$
Decomposable form:
$$|u_i^*(t)| + (p^*(t))^T b_i(x^*(t), t)\, u_i^*(t) \;\le\; |u_i(t)| + (p^*(t))^T b_i(x^*(t), t)\, u_i(t), \quad i = 1, \ldots, m$$

14 Note:
$$|u_i^*(t)| + (p^*(t))^T b_i(x^*(t), t)\, u_i^*(t) = \begin{cases} [\,1 + (p^*(t))^T b_i(x^*(t), t)\,]\, u_i^*(t) & \text{if } u_i^*(t) \ge 0 \\ [\,-1 + (p^*(t))^T b_i(x^*(t), t)\,]\, u_i^*(t) & \text{if } u_i^*(t) \le 0 \end{cases}$$
Bang-off-bang control:
$$u_i^*(t) = \begin{cases} 1 & \text{if } (p^*(t))^T b_i(x^*(t), t) < -1 \\ 0 & \text{if } -1 < (p^*(t))^T b_i(x^*(t), t) < 1 \\ -1 & \text{if } (p^*(t))^T b_i(x^*(t), t) > 1 \\ \text{undetermined in } [0, 1] & \text{if } (p^*(t))^T b_i(x^*(t), t) = -1 \\ \text{undetermined in } [-1, 0] & \text{if } (p^*(t))^T b_i(x^*(t), t) = 1 \end{cases}$$
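A minimal illustrative sketch of this dead-zone (bang-off-bang) law for unit control bounds follows (added here; the function name and interface are my own, not from the slides).

```python
import numpy as np

def bang_off_bang_control(p, B):
    """Pointwise fuel-optimal control law for |u_i| <= 1.

    p : costate vector p*(t), shape (n,)
    B : input matrix B(x*(t), t), shape (n, m), with columns b_i
    """
    q = B.T @ p                  # switching functions q_i = p^T b_i
    u = np.zeros_like(q)         # |q_i| < 1: coast, burn no fuel
    u[q > 1.0] = -1.0            # q_i > 1: full negative control
    u[q < -1.0] = 1.0            # q_i < -1: full positive control
    u[np.isclose(np.abs(q), 1.0)] = np.nan  # |q_i| = 1: undetermined
    return u

# Example: two inputs with switching functions q = (0.5, -2.0):
B = np.eye(2)
print(bang_off_bang_control(np.array([0.5, -2.0]), B))  # -> [0.0, 1.0]
```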

15 Figure 3: The relationship between a fuel-optimal control and its coefficient in the Hamiltonian

16 Singular Control

If there exists a singular interval $[t_1, t_2]$ (with $t_2 > t_1$) in which the necessary condition from the Minimum Principle,
$$H(x^*(t), u^*(t), p^*(t), t) \le H(x^*(t), u(t), p^*(t), t),$$
provides no information for deciding the optimal control, then the problem exhibits singular control on that interval.

17 Singular Intervals in Linear Minimum-Time Control

Problem formulation:
$$\min\; J = \int_{t_0}^{t_f} dt$$
$$\text{s.t.}\quad \dot x(t) = A x(t) + B u(t)$$
$$x(t_0) = x_0, \quad x(t_f) = 0$$
$$|u_i| \le 1, \quad i = 1, \ldots, m.$$
Hamiltonian:
$$H(x(t), u(t), p(t), t) = 1 + p^T(t)\,[A x(t) + B u(t)]$$
Singular interval:
$$p^T(t) B \equiv 0, \quad t \in [t_1, t_2]$$

18 It is impossible to have $p(t) \equiv 0$, $t \in [t_1, t_2]$. Why?
$$\frac{d^k}{dt^k}\big[p^T(t) B\big] \equiv 0, \quad t \in [t_1, t_2], \; k = 1, 2, \ldots$$
$$\frac{d^k}{dt^k}\big[p^T(t) B\big] = (-1)^k\, p^T(t) A^k B = 0, \quad t \in [t_1, t_2], \; k = 1, 2, \ldots$$
$$\Longrightarrow\quad p^T(t)\,\big[\,B \mid AB \mid A^2 B \mid \cdots \mid A^{n-1} B\,\big] = 0$$
For a singular interval to exist, it is necessary that the system be uncontrollable.
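This condition can be checked numerically. Below is a minimal sketch (added here; function names are my own, not from the slides): build the controllability matrix and test its rank, since a nonzero $p(t)$ annihilating $[\,B \mid AB \mid \cdots \mid A^{n-1}B\,]$ can exist only when that matrix has rank less than $n$.

```python
import numpy as np

def controllability_matrix(A, B):
    """Return [B | AB | A^2 B | ... | A^(n-1) B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def singular_interval_possible(A, B):
    """A singular interval requires rank[B | AB | ...] < n, i.e. an uncontrollable pair."""
    n = A.shape[0]
    return np.linalg.matrix_rank(controllability_matrix(A, B)) < n

# Example: the double integrator is controllable, so no singular interval can occur.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(singular_interval_possible(A, B))  # False
```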
