
Chapter 1

Introduction to Optimal Control


Contents
• Introduction
– Concept of optimization
– Mathematical formulation
– Types of optimization problems
– Solution techniques
• Optimal control problem formulation
• Classification of optimal control problems
• Examples
Introduction
What is Optimization?
• Optimization is the mathematical discipline concerned with finding the maxima and minima of functions, possibly subject to constraints.
• To optimize means to make something as perfect, effective, or functional as possible.
• Optimization is used to determine the best solution without explicitly testing every possible case.
Statement of optimization problem

General statement of an optimization problem:

Find x which minimizes f(x)
subject to:
  gi(x) ≤ 0,  i = 1, 2, …, h
  li(x) = 0,  i = h + 1, h + 2, …, m

f(x), gi(x) and li(x) are twice continuously differentiable real-valued functions.
  gi(x) is known as an inequality constraint
  li(x) is known as an equality constraint
Example: maximize the area of a circle inscribed in a rectangle/square
• Consider a circle of radius r inscribed in a rectangle of length L and width W (figure omitted)
• Formulating this as an optimization problem:

  max f = πr²
  subject to
    2r ≤ L
    2r ≤ W
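As a quick numerical check, this small problem can be handed to a constrained solver. The sketch below is an assumption-laden illustration, not part of the original slides: it assumes MATLAB's Optimization Toolbox (fmincon) and example dimensions L = 4, W = 3; the expected answer is r = min(L, W)/2.

% Minimal sketch: largest circle inscribed in an L-by-W rectangle.
L = 4; W = 3;                 % assumed rectangle dimensions
obj = @(r) -pi*r^2;           % fmincon minimizes, so negate the area
A = [2; 2]; b = [L; W];       % linear inequality constraints 2r <= L, 2r <= W
r0 = 0.1;                     % initial guess
r_opt = fmincon(obj, r0, A, b);
fprintf('optimal radius r = %.4f (expected min(L,W)/2 = %.4f)\n', r_opt, min(L, W)/2);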
Classification of optimization problems
• Based on existence of constraints
– Constrained and unconstrained
• Based on nature of design variables
– Static and dynamic
• Based on the nature of the expressions for the objective and constraint functions
  – Linear
  – Nonlinear
    • Geometric and quadratic
Optimization techniques
• Classical optimization techniques
  – Based on rigorous mathematics
  – Possible techniques are
    • Analytical
      – Works for continuously differentiable functions
      – If a solution exists, it gives an exact solution
    • Numerical
      – Gives an approximate solution when an analytical solution is not possible
• Modern optimization techniques
  – Based on AI techniques
Single variable optimization
• Local minimum at x*
  – If f(x*) ≤ f(x* + h) for all sufficiently small positive and negative h
• Local maximum at x*
  – If f(x*) ≥ f(x* + h) for all sufficiently small positive and negative h
(Figure: a curve with one local maximum and one local minimum marked.)
Single variable cont…

Theorem 1 (necessary condition): If f(x) has a local minimum or maximum at x = x* and f'(x*) exists, then f'(x*) = 0.

Theorem 2 (sufficient condition): Let f'(x*) = f''(x*) = … = f^(n-1)(x*) = 0 and f^(n)(x*) ≠ 0. Then x* is a local minimum if n is even and f^(n)(x*) > 0, a local maximum if n is even and f^(n)(x*) < 0, and an inflection point if n is odd.
Example

  f(x) = 12x⁵ − 45x⁴ + 40x³ + 5

Soln.
  Find f'(x) and equate it to zero:
    f'(x) = 60x⁴ − 180x³ + 120x² = 60x²(x − 1)(x − 2) = 0
  The extreme points are x = 0, 1 and 2.
  x = 0 is an inflection point, x = 2 is a local minimum and x = 1 is a local maximum.
(Figure: plot of f(x) over −1 ≤ x ≤ 3.)
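This classification can be verified numerically. A minimal MATLAB sketch using the built-in polynomial utilities (polyder, roots, polyval):

% Classify the stationary points of f(x) = 12x^5 - 45x^4 + 40x^3 + 5.
p   = [12 -45 40 0 0 5];    % polynomial coefficients of f
dp  = polyder(p);           % coefficients of f'
d2p = polyder(dp);          % coefficients of f''
xs  = unique(roots(dp));    % stationary points: 0, 1, 2
for k = 1:numel(xs)
    x = xs(k);
    curv = polyval(d2p, x);
    if curv > 0
        fprintf('x = %g: local minimum\n', x);
    elseif curv < 0
        fprintf('x = %g: local maximum\n', x);
    else
        fprintf('x = %g: f''''(x) = 0, higher-order test needed (inflection point here)\n', x);
    end
end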
Multivariable optimization
• Without constraints and with constraints
• Has conditions similar to the single-variable case

Theorem 3 (necessary condition): If f(X) has an extreme point at X = X* and the first partial derivatives of f exist at X*, then ∂f/∂xi (X*) = 0 for i = 1, …, n.

Theorem 4 (sufficient condition): A point X* with ∂f/∂xi (X*) = 0 is a local minimum if the Hessian matrix of f evaluated at X* is positive definite, and a local maximum if it is negative definite.
Example: find the extreme points of a function of two variables
• Soln.
  – Evaluate the first partial derivatives and find the extreme points
  – Check the Hessian matrix by determining the second derivatives and its determinants
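The slide's example function is an image that did not survive extraction, so the sketch below uses a stand-in function, f = x1³ − 3x1 + x2², to illustrate the two steps (Symbolic Math Toolbox assumed).

% Stand-in example: find and classify the extreme points of f(x1,x2) = x1^3 - 3*x1 + x2^2.
syms x1 x2
f = x1^3 - 3*x1 + x2^2;
g = gradient(f, [x1, x2]);            % first partial derivatives
H = hessian(f, [x1, x2]);             % matrix of second partial derivatives
pts = solve(g == 0, [x1, x2]);        % stationary points: (1,0) and (-1,0)
for k = 1:numel(pts.x1)
    xk = double([pts.x1(k), pts.x2(k)]);
    Hk = double(subs(H, [x1, x2], xk));
    fprintf('(%g, %g): Hessian eigenvalues %s\n', xk(1), xk(2), mat2str(eig(Hk)', 3));
end
% (1,0):  both eigenvalues positive -> local minimum
% (-1,0): eigenvalues of mixed sign -> saddle point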
Multivariable optimization with equality
constraints

• Problem formulation
  – Find X which minimizes f(X) subject to the constraints
      gi(X) = 0 for i = 1, 2, …, m, where m ≤ n
  – The solution can be obtained using
    • Direct substitution
    • Constrained variation and the Lagrangian method
Direct substitution
• Converts a constrained optimization problem into an unconstrained one
• Used for simpler problems
• Technique
  – Express m of the variables in terms of the remaining n − m variables using the constraints
  – Substitute these into the objective function
Direct substitution cont…
• Example: find the values of x1, x2 and x3 which maximize
    f(x1, x2, x3) = 8·x1·x2·x3
  subject to the equality constraint
    x1² + x2² + x3² = 1

Soln. Rewrite the constraint equation to eliminate one of the variables:
    x2 = (1 − x1² − x3²)^(1/2)
  then
    f(x1, x2, x3) = 8·x1·x3·(1 − x1² − x3²)^(1/2)
  which is now an unconstrained function of x1 and x3 (a numerical check is sketched below).
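A quick numerical check of this example (Optimization Toolbox assumed): after the substitution the problem is unconstrained in x1 and x3, so fminunc can be applied to the negated objective. The known optimum is x1 = x2 = x3 = 1/sqrt(3).

% Maximize 8*x1*x2*x3 on the unit sphere via the substitution x2 = sqrt(1 - x1^2 - x3^2).
fsub = @(z) -8*z(1)*z(2)*sqrt(max(1 - z(1)^2 - z(2)^2, 0));  % z = [x1; x3]; negated for minimization
z0 = [0.5; 0.5];                 % starting point inside the feasible region
z  = fminunc(fsub, z0);
x  = [z(1); sqrt(1 - z(1)^2 - z(2)^2); z(2)];
disp(x')                         % expected: approximately [0.577 0.577 0.577]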
Constrained variation method
• Finds a closed-form expression for the first-order differential of f at all points where the constraints are satisfied
• Example: minimize f(x1, x2)
• Subject to g(x1, x2) = 0
Constrained variation
• At a minimum, df = 0
• If we take small variations dx1 and dx2, then after a Taylor series expansion the constraint gives
    dg = (∂g/∂x1) dx1 + (∂g/∂x2) dx2 = 0
• Rewriting the equation: dx2 = −[(∂g/∂x1)/(∂g/∂x2)] dx1
• Substituting into df = (∂f/∂x1) dx1 + (∂f/∂x2) dx2 gives the necessary condition
    ∂f/∂x1 − (∂f/∂x2)(∂g/∂x1)/(∂g/∂x2) = 0
Constrained variation
• For the general case (n variables, m constraints) a similar closed-form expression for df can be written
• Under the assumption that the partial derivatives of the constraints with respect to the eliminated variables do not vanish

Example: minimize the following function subject to the given constraint
• Minimize f(x1, x2)
• Subject to g(x1, x2) = 0
• Soln. Compute the partial differentials of f and g, then apply the necessary condition above.
Method of Lagrange multipliers
• Problem formulation: minimize f(X) s.t. g(X) = 0
• Procedure:
  A function L can be formed as
    L(X, λ) = f(X) + λ·g(X)
• Necessary conditions for an extremum are given by
    ∂L/∂xi = 0 (i = 1, …, n) and ∂L/∂λ = 0
Lagrange multiplier method cont…
• For the general case with m constraints, L is formed as
    L(X, λ1, …, λm) = f(X) + Σi λi·gi(X)
  with necessary conditions ∂L/∂xj = 0 (j = 1, …, n) and ∂L/∂λi = 0 (i = 1, …, m)
Lagrangian method
• Sufficient condition
  – The matrix of second derivatives of L with respect to X, evaluated at the candidate point on the constraint surface, has to be a positive definite matrix (for a minimum)

Example: find the maximum of f subject to a given constraint
• Soln. Form the Lagrangian L, apply the necessary conditions ∂L/∂xi = 0 and ∂L/∂λ = 0, and solve the resulting equations, giving the optimum X* and λ* (a small worked instance follows below).
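A small illustrative instance (not the slide's original example): maximize f = x1·x2 subject to x1 + x2 = 10. The Lagrangian is L = x1·x2 + λ(10 − x1 − x2); the necessary conditions ∂L/∂x1 = x2 − λ = 0, ∂L/∂x2 = x1 − λ = 0 and ∂L/∂λ = 10 − x1 − x2 = 0 give x1 = x2 = λ = 5, i.e. a maximum of f = 25.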


Formulation of multivariable optimization

• When the constraints are inequality constraints, i.e. gj(X) ≤ 0
• Each inequality can be transformed into an equality by adding a slack variable:
    gj(X) + sj² = 0
• The Lagrangian method can then be used
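For example, an inequality constraint x1 + x2 − 4 ≤ 0 becomes the equality x1 + x2 − 4 + s² = 0, where s is a new (slack) variable, and the Lagrangian is then formed over (x1, x2, s, λ).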
Nonlinear programming

• Arises when the objective function or a constraint is nonlinear
• Analytical approach
  – When the objective function is differentiable
  – And the constraints are equalities
• Numerical methods
  – When the objective function is not differentiable
Nonlinear programming
– Is solved by numerical methods
– Used for non-differentiable or analytically unsolvable objective functions
– Example:
    f(x) = 0.65 − 0.75/(1 + x²) − 0.65·x·tan⁻¹(1/x)
General outline of NLP
• Start with an initial trial point X1
• Find a suitable direction Si that points in the direction of the optimum
• Find an appropriate step length λi for movement in the direction of Si
• Obtain the new approximation as
    X_{i+1} = Xi + λi·Si
• Test whether X_{i+1} is optimal; if not, set i = i + 1 and go to step 2, else stop
Newton-Raphson method
• Is an NLP method based on the first and second derivatives of the objective function
• It starts with an initial guess x1
• An improved approximation is obtained as
    x_{i+1} = xi − f'(xi) / f''(xi)
• The stopping criterion is
    |f'(x_{i+1})| ≤ ε
Example: Find the minimum of the function below, with starting point x = 0.1 and stopping criterion ε = 0.01

    f(x) = 0.65 − 0.75/(1 + x²) − 0.65·x·tan⁻¹(1/x)

Solution: x1 = 0.1

    f'(x) = 1.5x/(1 + x²)² + 0.65x/(1 + x²) − 0.65·tan⁻¹(1/x)

    f''(x) = 1.5(1 − 3x²)/(1 + x²)³ + 0.65(1 − x²)/(1 + x²)² + 0.65/(1 + x²)
           = (2.8 − 3.2x²)/(1 + x²)³

    xi       f          f'         f''       x_{i+1}   Check
    0.1      −0.1882    −0.7448    2.6865    0.377     |−0.13| > ε
    0.377    −0.303     −0.1382    1.573     0.465     |−0.017| > ε
    0.465    −0.3098    −0.0179    1.171     0.480     |−0.0005| < ε
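The iteration in the table can be reproduced with a few lines of MATLAB (a minimal sketch using the derivatives above):

% Newton-Raphson iteration for f(x) = 0.65 - 0.75/(1+x^2) - 0.65*x*atan(1/x),
% starting from x = 0.1 with tolerance 0.01.
f   = @(x) 0.65 - 0.75./(1 + x.^2) - 0.65*x.*atan(1./x);
df  = @(x) 1.5*x./(1 + x.^2).^2 + 0.65*x./(1 + x.^2) - 0.65*atan(1./x);
d2f = @(x) (2.8 - 3.2*x.^2)./(1 + x.^2).^3;
x = 0.1; tol = 0.01;
while true
    fprintf('x = %.4f  f = %.4f  f'' = %.4f  f'''' = %.4f\n', x, f(x), df(x), d2f(x));
    x_new = x - df(x)/d2f(x);          % Newton update
    if abs(df(x_new)) <= tol           % stopping criterion |f'(x_{i+1})| <= eps
        break
    end
    x = x_new;
end
fprintf('minimum near x = %.3f, f = %.4f\n', x_new, f(x_new));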
Newton method for the multivariable case

• Gradient ∇f
  – An n-component vector of partial derivatives
  – Its negative gives the direction of steepest descent
  – The next iterate is obtained as
      X_{i+1} = Xi − Ji⁻¹ ∇f(Xi)
    where ∇f is the gradient of f and Ji is the Hessian matrix of second-order derivatives
• Disadvantages
  – Needs storage of J
  – Computation of J is difficult
  – Needs the inversion of J
• Solution: use a quasi-Newton method
Example: minimize f(x1, x2) given by

    f(x1, x2) = x1 − x2 + 2x1² + 2x1x2 + x2²

    ∇f(x) = [1 + 4x1 + 2x2; −1 + 2x1 + 2x2]

    J = [4 2; 2 2],   J⁻¹ = [0.5 −0.5; −0.5 1]

    x_{i+1} = xi − J⁻¹ ∇f(xi)

    xi           x_{i+1}      J⁻¹                    ∇f(xi)     Check
    [0; 0]       [−1; 1.5]    [0.5 −0.5; −0.5 1]     [1; −1]    ||∇f|| = 1.41 > ε
    [−1; 1.5]    [−1; 1.5]    [0.5 −0.5; −0.5 1]     [0; 0]     ||∇f|| = 0 < ε
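A minimal MATLAB sketch of the same iteration (the Hessian of this quadratic is constant, so Newton's method converges in one step):

% Newton's method for f(x1,x2) = x1 - x2 + 2*x1^2 + 2*x1*x2 + x2^2.
grad = @(x) [1 + 4*x(1) + 2*x(2); -1 + 2*x(1) + 2*x(2)];   % gradient of f
J = [4 2; 2 2];                                            % Hessian (constant)
x = [0; 0]; tol = 1e-6;
for i = 1:10
    g = grad(x);
    if norm(g) < tol, break; end
    x = x - J \ g;                                         % Newton update (solves J*dx = g)
end
disp(x')        % converges to [-1 1.5]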
Using MATLAB

• MATLAB has a built-in function named fminunc
• It solves unconstrained minimization problems
• Steps
  – Write the objective function as a user-defined function in an m-file
  – Call fminunc with the objective function and the initial values as arguments
• Example: minimize a given function from a given initial point (see the sketch below)
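A minimal sketch of these steps; since the slide's example function is not reproduced here, the quadratic from the previous Newton example is reused as the objective (an assumption). fminunc requires the Optimization Toolbox.

% Objective (could equally be written as a user-defined function in its own m-file).
f  = @(x) x(1) - x(2) + 2*x(1)^2 + 2*x(1)*x(2) + x(2)^2;
x0 = [0; 0];                       % initial point
[xopt, fval] = fminunc(f, x0);     % unconstrained minimization
fprintf('x* = [%.2f, %.2f], f(x*) = %.2f\n', xopt(1), xopt(2), fval);   % [-1.00, 1.50], -1.25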


Introduction
• Optimal control is the problem of finding a controller that drives a system towards a desired operating condition while satisfying given performance criteria
• The performance criterion is given as a cost function J
• The most commonly used form of J is
    J = φ(x(t0), t0, x(tf), tf) + ∫[t0, tf] L(x(t), t) dt
  – φ(x(t0), t0, x(tf), tf) accounts for the conditions at the beginning and end of the control interval, and
  – L(x(t), t) is the cost accumulated over the entire interval
Introduction
• The type of functions used for φ and L in the performance function determines the type of optimal control problem
• The problem is a constrained optimization problem
  – Constraints may be on time, state conditions, the amount of control effort or energy, etc.
• The solution technique depends on the type of functions used for the initial and final conditions
Problem formulation

• Given a system
    ẋ(t) = f(x(t), u(t), t)
• With initial condition
    x(t0) = x0
• Find a control signal u(t) that can
  – drive the system to a final state x(tf)
  – fulfill the state constraints
  – minimize the cost function J
Classification of optimal control problems

• Optimal control problems can be classified into various types based on the performance function used
• These are
– Minimum time control problem
– Terminal control problem
– Minimum control effort problem
– Optimal servomechanism (tracking) problem
– Optimal regulator problem
Minimum time control problem
• Objective is to minimize the time required to
drive a system from its initial state to final
state
• J, the performance function, is
    J = ∫[t0, tf] dt
• Example: optimal control of a robotic manipulator to finish a task
Example for Minimum time control problem

• A rocket burn trajectory is sought that minimizes the travel time between a starting point and a final point 10 units of distance away.
– The thrust can be between an upper limit of 1.1 and a
lower limit of -1.1.
– The initial and final velocity must be zero and the
maximum velocity can never exceed 1.7.
– It is also desirable to minimize the use of fuel to perform
the maneuver. There is a drag resistance that is
proportional to the square of the velocity and mass is lost
as the fuel is burned during thrust operations.
Example: minimum time problem
Problem formulation is
minimize tf
subject to
ds/dt = v
dv/dt = (u-0.2*v^2)/m
dm/dt = -0.01 * u^2
Additional system path constraints
0.0 <= v(t) <= 1.7
-1.1 <= u(t) <= 1.1
Initial boundary conditions
s(0) = 0
v(0) = 0
m(0) = 1
Final boundary conditions
s(tf) = 10.0
v(tf) = 0.0
Terminal control problem
• Objective: is to minimize the final error value,
i.e. given a desired value, find a control signal
that can drive the system towards a point
where final value has minimum deviation from
desired value
• J has the form of
    J = [x(tf) − d]T S [x(tf) − d]
  – d is the desired value of the state at the final time
Minimum control effort
• Objective: is to minimize the energy used to
drive a system from its initial state to a final
state
• J has the form
    J = ∫[t0, tf] U(t)T R U(t) dt
  – R is a weighting matrix with dimension m×m
• Example: minimum fuel problem
Optimal servomechanism or tracking
problem
• Objective: to design a control signal which minimizes the error between the desired and the actual path of a system
• J has the form
    J = ∫[t0, tf] [x(t) − d(t)]T Q [x(t) − d(t)] dt = ∫[t0, tf] e(t)T Q e(t) dt
  where e(t) = x(t) − d(t)
• Example: satellite tracking, continuous-path control of a robot, etc.
Optimal servomechanism contd
• The above problem can be extended to more advanced forms
  – Servomechanism with minimum control effort
      J = ∫[t0, tf] ( [x(t) − d(t)]T Q [x(t) − d(t)] + U(t)T R U(t) ) dt
  – Servomechanism with minimum effort and terminal control
      J = [x(tf) − d(tf)]T S [x(tf) − d(tf)] + ∫[t0, tf] ( [x(t) − d(t)]T Q [x(t) − d(t)] + U(t)T R U(t) ) dt
Optimal regulator problem
• This is a sub-problem of the optimal servomechanism in which the desired final point is zero
• Objective: to drive the system towards a point where the final state value is zero
• The performance measure J is given by
    J = [x(tf)]T S [x(tf)] + ∫[t0, tf] ( x(t)T Q x(t) + U(t)T R U(t) ) dt
  (see the sketch below)
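As an illustration, the infinite-horizon form of this regulator problem for a linear system ẋ = Ax + Bu can be solved directly with MATLAB's lqr (Control System Toolbox assumed); a double-integrator plant is used here as a stand-in example, not one from the slides.

% Infinite-horizon linear quadratic regulator for a double integrator x_dot = A*x + B*u.
A = [0 1; 0 0];
B = [0; 1];
Q = eye(2);                 % state weighting matrix
R = 1;                      % control-effort weighting
[K, S] = lqr(A, B, Q, R);   % optimal state feedback u = -K*x minimizes int(x'Qx + u'Ru)dt
disp(K)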
Examples
• Consider a body of mass M moving along a frictionless surface, driven by a control input u
  (Figure: mass M with input u, initial state (x0, v0), final state (xf, vf).)
• Find a control signal u, bounded as −amax < u < amax, which can transfer the system from its initial state to its final state in minimum time.
Examples
• Model of the system
    F = ma = m·d/dt(v(t)) = m·d²/dt²(x(t))
    ẋ1 = x2
    ẋ2 = F/m = u
• The performance measure J is then
    J = ∫[0, tf] dt = tf
  (the known bang-bang solution is simulated in the sketch below)
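For this double integrator the time-optimal rest-to-rest control is known to be bang-bang: full acceleration for the first half of the transfer and full deceleration for the second half. The sketch below simulates that policy with assumed numbers (amax = 1, a transfer distance of 10, and F/m folded into u); it is an illustration, not part of the slides.

% Simulate the bang-bang minimum-time transfer of the double integrator.
a_max = 1; d = 10;
tf = 2*sqrt(d/a_max);                              % minimum transfer time for rest-to-rest motion
u  = @(t) a_max*(t < tf/2) - a_max*(t >= tf/2);    % +a_max, then switch to -a_max at tf/2
rhs = @(t, x) [x(2); u(t)];                        % x(1) = position, x(2) = velocity
[t, x] = ode45(rhs, [0 tf], [0; 0], odeset('MaxStep', 0.01));
fprintf('tf = %.3f, x(tf) = %.3f, v(tf) = %.3f\n', tf, x(end,1), x(end,2));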
Examples
• If friction of the surface is considered in the motion of the above body, the state-space model becomes
    F − b·(dx/dt) = ma = m·d/dt(v(t)) = m·d²/dt²(x(t))
    ẋ1 = x2
    ẋ2 = F/m − (b/m)·x2 = u − (b/m)·x2
• The cost function can be taken to be the same
Conclusion
• When considering optimal control problems,
the following four points need to be
considered
– Existence of solution
– Uniqueness
– Main features of the optimal solution
– The form of the solution

You might also like