Optimal Control
• Local maxima
  – x* is a local maximum of f if f(x*) ≥ f(x* + h) for all sufficiently small positive and negative h
• Local minima
  – x* is a local minimum of f if f(x*) ≤ f(x* + h) for all sufficiently small positive and negative h
Single-variable continuous optimization
Theorem 1 (necessary condition): If f(x) is defined in a ≤ x ≤ b and has a relative minimum at x = x*, where a < x* < b, and if the derivative f'(x*) exists, then f'(x*) = 0.
Theorem 2 (sufficient condition): Let f'(x*) = f''(x*) = … = f^(n−1)(x*) = 0, but f^(n)(x*) ≠ 0. Then f(x*) is a minimum if f^(n)(x*) > 0 and n is even; a maximum if f^(n)(x*) < 0 and n is even; and neither if n is odd.
Example
f(x) = 12x^5 − 45x^4 + 40x^3 + 5
Soln.
Find f'(x) and then equate it with zero: f'(x) = 60x^4 − 180x^3 + 120x^2 = 60x^2 (x − 1)(x − 2) = 0.
The extreme points are x = 0, 1 and 2.
[Figure: plot of f(x) for −1 ≤ x ≤ 3]
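These steps can be checked numerically; below is a minimal MATLAB sketch (base MATLAB only; variable names are illustrative) that recovers and classifies the extreme points from the derivative polynomial.

p   = [12 -45 40 0 0 5];      % polynomial coefficients of f(x)
dp  = polyder(p);             % f'(x) = 60x^4 - 180x^3 + 120x^2
xc  = roots(dp);              % stationary points: 0 (double root), 1, 2
d2p = polyder(dp);            % f''(x) = 240x^3 - 540x^2 + 240x
polyval(d2p, [0 1 2])         % f''(0) = 0, f''(1) = -60 < 0, f''(2) = 240 > 0
% By Theorem 2: x = 2 is a local minimum and x = 1 a local maximum; at x = 0
% the first nonzero derivative is f'''(0) = 240 with n = 3 odd, so x = 0 is
% neither a minimum nor a maximum.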
Multivariable optimization
• Without constraints and with constraints
• The conditions are analogous to the single-variable case
Theorem 3 (necessary condition): If f(X) has an extreme point at X = X* and the first partial derivatives of f exist at X*, then ∂f/∂xi(X*) = 0 for i = 1, 2, …, n.
Theorem 4 (sufficient condition): A stationary point X* is a relative minimum if the Hessian matrix of f evaluated at X* is positive definite, and a relative maximum if it is negative definite.
Example: find the extreme points of a function of two variables
• Soln. (see the sketch below)
  – Evaluate the first partial derivatives and equate them to zero to find the extreme points
  – Check the Hessian matrix by determining the second derivatives and its determinants
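A MATLAB sketch of this procedure follows. Hedged: the slide's original example function was lost in extraction, so a hypothetical stand-in f(x1, x2) = x1^2 + x1*x2 + x2^2 is used; the Symbolic Math Toolbox is assumed.

syms x1 x2
f  = x1^2 + x1*x2 + x2^2;      % assumed stand-in objective
g  = gradient(f, [x1 x2]);     % first partial derivatives
xc = solve(g == 0, [x1 x2]);   % extreme point: (0, 0)
H  = hessian(f, [x1 x2]);      % [2 1; 1 2]
[H(1,1) det(H)]                % leading principal minors: 2 > 0 and 3 > 0
% Both determinants positive => H positive definite => (0, 0) is a minimum.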
Multivariable optimization with equality constraints
• Problem formulation
  – Find X = [x1, x2, …, xn] which minimizes f(X) subject to the constraints gj(X) = 0, j = 1, 2, …, m
• Example: find the extreme points of
  f(x1, x2, x3) = 8 x1 x2 x3
  subject to x1² + x2² + x3² = 1
Constrained variation method
• Finds a closed-form expression for the first-order differential of f at all points where the constraints are satisfied
• Example: minimize f(x1, x2)
• Subject to g(x1, x2) = 0
Constrained variation
• At a minimum, df = (∂f/∂x1) dx1 + (∂f/∂x2) dx2 = 0, while admissible variations must satisfy dg = (∂g/∂x1) dx1 + (∂g/∂x2) dx2 = 0, i.e. dx2 = −[(∂g/∂x1)/(∂g/∂x2)] dx1
• Rewriting the equation by substituting for dx2 gives the necessary condition
  ∂f/∂x1 − (∂f/∂x2)(∂g/∂x1)/(∂g/∂x2) = 0 at (x1*, x2*)
Lagrange multipliers
• Problem formulation: minimize f(X)
  s.t. gj(X) = 0, j = 1, 2, …, m
• Procedure (see the sketch below):
  A function L can be formed as
  L(X, λ) = f(X) + Σj λj gj(X)
  with the necessary conditions ∂L/∂xi = 0 and ∂L/∂λj = 0
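As a sketch, the Lagrangian procedure applied to the equality-constrained example above (f = 8 x1 x2 x3, g = x1² + x2² + x3² − 1; Symbolic Math Toolbox assumed):

syms x1 x2 x3 lam
L = 8*x1*x2*x3 + lam*(x1^2 + x2^2 + x3^2 - 1);            % L = f + lambda*g
s = solve(gradient(L, [x1 x2 x3 lam]) == 0, [x1 x2 x3 lam]);
% Among the stationary points, x1 = x2 = x3 = 1/sqrt(3) gives the largest
% value f = 8/(3*sqrt(3)).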
Nonlinear programming (NLP)
– Is a numerical method
– Used for non-differentiable or analytically unsolvable objective functions
– Example:
  f(x) = 0.65 − 0.75/(1 + x²) − 0.65 x tan⁻¹(1/x)
General outline of NLP
• Start with an initial trial point X1
• Find a suitable direction Si that points in the direction of the optimum
• Find an appropriate step length λi for movement in the direction of Si
• Obtain the new approximation Xi+1 as Xi+1 = Xi + λi Si
• The stopping criterion is |f'(Xi+1)| ≤ ε (a minimal sketch follows below)
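A minimal MATLAB sketch of this outline, assuming steepest descent (Si = −f'(xi)) and a fixed step length λi = 0.3 as placeholders; fp is f'(x) for the example on the next slide.

fp = @(x) 1.5*x./(1+x.^2).^2 + 0.65*x./(1+x.^2) - 0.65*atan(1./x);
x = 0.1;  epsilon = 0.01;  lambda = 0.3;   % trial point, tolerance, step
while abs(fp(x)) > epsilon                 % stopping criterion |f'(x)| <= eps
    S = -fp(x);                            % direction toward the optimum
    x = x + lambda*S;                      % X_{i+1} = X_i + lambda_i * S_i
end
x                                          % approximate minimizer (~0.48)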
Example: Find the minimum of the function below by Newton's method, with starting point x1 = 0.1 and stopping criterion ε = 0.01
f(x) = 0.65 − 0.75/(1 + x²) − 0.65 x tan⁻¹(1/x)
Solution: x1 = 0.1
f'(x) = 1.5x/(1 + x²)² + 0.65x/(1 + x²) − 0.65 tan⁻¹(1/x)
f''(x) = 1.5(1 − 3x²)/(1 + x²)³ + 0.65(1 − x²)/(1 + x²)² + 0.65/(1 + x²) = (2.8 − 3.2x²)/(1 + x²)³
Apply the Newton update xi+1 = xi − f'(xi)/f''(xi) until |f'(xi+1)| ≤ ε.
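The same iteration as a MATLAB sketch (x1 and ε taken from the problem statement):

fp  = @(x) 1.5*x./(1+x.^2).^2 + 0.65*x./(1+x.^2) - 0.65*atan(1./x);
fpp = @(x) (2.8 - 3.2*x.^2)./(1+x.^2).^3;
x = 0.1;  epsilon = 0.01;
while abs(fp(x)) > epsilon
    x = x - fp(x)/fpp(x);   % Newton update x_{i+1} = x_i - f'(x_i)/f''(x_i)
end
x                           % iterates 0.3772, 0.4651, 0.4804; then |f'| < eps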
Newton's method in several variables
• Gradient
  – N-component vector of the partial derivatives of f
  – Its negative gives the direction of steepest descent
• The next iteration is obtained as Xi+1 = Xi − Ji⁻¹ ∇f(Xi)
  – where ∇f is the gradient of f and Ji is the Hessian matrix of second-order derivatives at Xi
• Disadvantages
  – Needs storage of J
  – Computation of J is difficult
  – Needs the inversion of J
• Solution: use a quasi-Newton method
Example: minimize f(x1, x2) = x1 − x2 + 2x1² + 2x1x2 + x2² starting from X1 = [0; 0], using Xi+1 = Xi − Ji⁻¹ ∇f(Xi)

Xi          J⁻¹                   ∇f(Xi)    Xi+1        check
[0; 0]      [0.5 −0.5; −0.5 1]    [1; −1]   [−1; 1.5]   ‖∇f‖ = 1.41 > ε
[−1; 1.5]   [0.5 −0.5; −0.5 1]    [0; 0]    [−1; 1.5]   ‖∇f‖ = 0 < ε → stop
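The tabulated iteration as a MATLAB sketch (the gradient and the constant Hessian follow from the quadratic f above):

gradf = @(x) [1 + 4*x(1) + 2*x(2); -1 + 2*x(1) + 2*x(2)];  % grad f(x1, x2)
J = [4 2; 2 2];                      % Hessian, constant for a quadratic f
x = [0; 0];  epsilon = 0.01;
while norm(gradf(x)) > epsilon
    x = x - J \ gradf(x);            % backslash solve instead of inv(J)
end
x                                    % reaches [-1; 1.5] in a single step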
Using MATLAB
• Given a system, the performance measure J can take several forms:
  – Minimum time: J = ∫_{t0}^{tf} dt
  – Minimum control effort: J = ∫_{t0}^{tf} U(t)ᵀ R U(t) dt
  – Tracking a desired state d(t): J = ∫_{t0}^{tf} ([x(t) − d(t)]ᵀ Q [x(t) − d(t)] + U(t)ᵀ R U(t)) dt
  – Tracking with a terminal cost: J = [x(tf) − d(tf)]ᵀ S [x(tf) − d(tf)] + ∫_{t0}^{tf} ([x(t) − d(t)]ᵀ Q [x(t) − d(t)] + U(t)ᵀ R U(t)) dt
Optimal regulator problem
• This is a sub-problem of the optimal servomechanism in which the desired final point is zero
• Objective: drive the system to a final state value of zero
• The performance measure J is given by
  J = [X(tf)]ᵀ S [X(tf)] + ∫_{t0}^{tf} ([X(t)]ᵀ Q [X(t)] + U(t)ᵀ R U(t)) dt
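With weights Q and R (and an infinite horizon, S = 0) this is the standard LQR problem. A hedged MATLAB sketch using lqr from the Control System Toolbox, on an assumed example plant:

A = [0 1; 0 -0.5];  B = [0; 1];     % assumed plant dX/dt = A*X + B*u
Q = eye(2);  R = 1;                 % assumed weighting matrices
[K, P, e] = lqr(A, B, Q, R);        % optimal gain for u = -K*X,
                                    % Riccati solution P, closed-loop poles e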
Examples
• Consider a body of mass M moving along a frictionless surface under an applied force u, from initial state (x0, v0) to final state (xf, vf)
[Figure: mass M on a frictionless surface, driven by force u]
• For a minimum-time transfer, J = ∫_0^{tf} dt = tf
Examples
• If friction of the surface is considered in the motion of the above body, the SS model becomes
  F − b (dx/dt) = ma = m (d/dt) v(t) = m (d²/dt²) x(t)
  ẋ1 = x2
  ẋ2 = (F/m) − (b/m) x2 = −(b/m) x2 + (1/m) u
• The cost function can be taken to be the same
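A sketch of this state-space model in MATLAB (ss from the Control System Toolbox; the mass m and friction coefficient b are assumed values):

m = 1;  b = 0.1;                 % assumed parameters
A = [0 1; 0 -b/m];               % states: x1 = position, x2 = velocity
B = [0; 1/m];                    % input u is the applied force
sys = ss(A, B, eye(2), [0; 0]);  % dx/dt = A*x + B*u, full-state output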
Conclusion
• For any optimal control problem, the following four points need to be considered
– Existence of solution
– Uniqueness
– Main features of the optimal solution
– The form of the solution