
ECE 3650: Optimal Control (3 Credits, Fall 2014)

Lecture 1: Course Organization and


Introduction to Optimal Control

August 28, 2014

Zhi-Hong Mao
Associate Professor of ECE and Bioengineering
University of Pittsburgh, Pittsburgh, PA
1

Outline

Course description
Course organization
Brief review of optimization methods
What is optimal control?
Why optimal control?
Approaches to optimal control
An example

Course description
This course introduces:
Fundamental mathematics of optimal control theory
Implementation of optimal controllers for practical
applications

ECE 1673: Linear control systems
ECE 2646: Linear system theory
Nonlinear control
Optimal control
Robust control
Adaptive control

Question: How many of you have taken ECE 2646 (or equivalent courses)?

Course description
This course introduces:
Fundamental mathematics of optimal control theory
Implementation of optimal controllers for practical applications

This course covers:


Static optimization
Optimal control of discrete-time systems
Optimal control of continuous-time systems
Dynamic programming

Course organization

Time: Thursday 5:20 pm-7:50 pm


Instructor: Dr. Zhi-Hong Mao
(Office) 1238 Benedum Hall
(Email) [email protected]
(Phone) 412-624-9674
(Office hours) Monday 3 pm-5 pm

Course organization
Time: Thursday 5:20 pm-7:50 pm
Instructor: Dr. Zhi-Hong Mao

Textbook
F. L. Lewis, Draguna Vrabie, and V. L. Syrmos, Optimal
Control, 3rd Edition, John Wiley and Sons, New York, 2012
(or 2nd Edition, 1995)
Lecture notes available at
http://www.pitt.edu/~zhm4/ECE3650
Email list
Urgent notices will be sent to you via email

Course organization
Time: Thursday 5:20 pm-7:50 pm
Instructor: Dr. Zhi-Hong Mao
Text book
Lecture notes
Email list

Course evaluation
Homework and class participation: 30% (late
homework will not be accepted)
Midterm: 30%
Final exam: 40%

Brief review of optimization methods


Formulation of optimization problems
Performance index or objective function
Control or decision variables
Constraints

minimize L(u)
subject to f_i(u) ≤ 0,  i = 1, ..., n

Question: Are all these ingredients necessary?
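As a concrete illustration (not from the slides), this generic formulation can be handed directly to a numerical solver. The objective and the single constraint below are made-up examples chosen only to show the shape of the problem.

```python
# Hedged sketch: minimize L(u) subject to f_i(u) <= 0, with a hypothetical
# objective and constraint (not from the lecture).
import numpy as np
from scipy.optimize import minimize

L = lambda u: (u[0] - 1.0) ** 2 + (u[1] - 2.0) ** 2   # objective
# SciPy's "ineq" convention is g(u) >= 0, so f(u) = u1 + u2 - 2 <= 0
# becomes g(u) = 2 - u1 - u2 >= 0.
cons = [{"type": "ineq", "fun": lambda u: 2.0 - u[0] - u[1]}]

res = minimize(L, x0=np.zeros(2), constraints=cons)
print(res.x)  # optimum lands on the boundary u1 + u2 = 2, near [0.5, 1.5]
```

The unconstrained minimizer (1, 2) violates the constraint, so the solver returns its projection onto the constraint boundary, which is exactly the role the constraints play in the formulation above.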

Brief review of optimization methods


Formulation of optimization problems

Examples of optimization problems


A toy example: A child's rectangular play yard is to be
built next to the house. To make the three sides of the
play-pen, twenty-four feet of fencing are available.
What should be the dimensions of the sides to make
a maximum area?

(Figure: play yard with two sides of length x and one side of length y against the house)

9

Brief review of optimization methods
Formulation of optimization problems

Examples of optimization problems


A toy example

(Figure: two sides of length x, one side of length y)

maximize xy
subject to 2x + y = 24
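Calculus gives x = 6, y = 12, area = 72; a brute-force numerical check of the same formulation:

```python
import numpy as np

# Brute-force check of the play-yard problem: maximize x*y subject to 2x + y = 24.
x = np.linspace(0.0, 12.0, 100001)   # candidate side lengths
y = 24.0 - 2.0 * x                   # fencing constraint: y = 24 - 2x
area = x * y
i = int(np.argmax(area))
print(x[i], y[i], area[i])           # -> 6.0 12.0 72.0
```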
10

Brief review of optimization methods


Formulation of optimization problems

Examples of optimization problems


A toy example
The diet problem (one of the first modern optimization
problems): In the 1930s-1940s, the Army wanted a low-cost
diet that would meet the nutritional needs of a soldier

(Cartoon: "If you go on a diet and lose five pounds, only to gain back ten the following month, how many infuriating, godforsaken pounds do you weigh?")

11

Brief review of optimization methods


Formulation of optimization problems

Examples of optimization problems


A toy example

The diet problem


minimize cost of food
subject to: total calories ≥ minimum requirement,
amount of vitamins ≥ minimum requirement,
amount of minerals ≥ minimum requirement, etc.
(9 inequalities, 77 decision variables)

Solution: The minimum cost of food is $ per year!
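Dantzig's 77-variable problem is too large to reproduce here, but the same structure can be sketched as a linear program with SciPy; the two foods, two nutrients, and all numbers below are hypothetical.

```python
from scipy.optimize import linprog

# Hypothetical two-food, two-nutrient diet LP (illustrative numbers only).
cost = [0.5, 0.3]                 # dollars per unit of food 1, food 2
# linprog enforces A_ub @ x <= b_ub, so each ">= requirement" row is negated.
A_ub = [[-200.0, -100.0],         # calories per unit of each food (negated)
        [-2.0,   -4.0]]          # vitamins per unit of each food (negated)
b_ub = [-2000.0, -30.0]           # negated minimum requirements
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)             # cheapest mix meeting both requirements
```

The optimum sits at the intersection of the two nutrient constraints, the typical situation in diet-type LPs: the cheapest feasible diet satisfies several requirements exactly.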


12

Brief review of optimization methods
Formulation of optimization problems

Examples of optimization problems


A toy example
The diet problem

A remark about large-scale


In the fifties, a traveling-salesman problem (TSP)
through 49 cities (corresponding to 1176 variables
in the standard IP formulation) was considered
large-scale, while today the world record for solving
a TSP is 13,509 cities (91,239,786 variables)

13

Brief review of optimization methods


Formulation of optimization problems

Examples of optimization problems


A toy example
The diet problem

A remark about large-scale


In the fifties, a traveling-salesman problem (TSP) through 49 cities
(corresponding to 1176 variables in the standard IP formulation) was
considered large-scale, while today the world record for solving a TSP
is 13,509 cities (91,239,786 variables)

Whether a problem is large-scale depends not only
on the number of variables or constraints but also
on the structure of the problem (e.g., convex vs.
nonconvex programming, l0 vs. l1 minimization)

14

Brief review of optimization methods


Formulation of optimization problems

Examples of optimization problems


A toy example
The diet problem
A remark about large-scale

"Save wire" organizing principle: At multiple
hierarchical levels (brain, ganglion, individual cell),
the physical placement of neural components appears
consistent with a single, simple goal, i.e., to minimize
the cost of connections among the components

15

Brief review of optimization methods
Formulation of optimization problems

Examples of optimization problems


A toy example
The diet problem
A remark about large-scale
Save wire organizing principle

Optimization in biology
Optimization theory not only explains current
adaptations of biological systems, but also helps to
predict new designs that may yet evolve
The biological world may provide solutions to engineering
problems: the structures, movements, and behaviors
of animals, and their life histories, have been shaped
by the optimization processes of evolution or of
learning by trial and error
16

Brief review of optimization methods

Formulation of optimization problems


Examples of optimization problems

Optimization methods
Extremum of a smooth
function
Gradient search
Simplex algorithm
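Of these methods, gradient search is the easiest to sketch in a few lines; here is a minimal fixed-step version on an assumed smooth function f(u) = (u - 3)^2, whose gradient is 2(u - 3).

```python
# Minimal gradient search on f(u) = (u - 3)^2 (illustrative, not a robust solver).
def gradient_search(grad, u0, step=0.1, iters=200):
    """Fixed-step gradient descent: repeatedly move against the gradient."""
    u = u0
    for _ in range(iters):
        u = u - step * grad(u)
    return u

u_star = gradient_search(lambda u: 2.0 * (u - 3.0), u0=0.0)
print(u_star)  # converges toward the minimizer u = 3
```

For this quadratic, each step shrinks the error by the factor (1 - 2·step), so convergence requires the step size to be small enough; practical gradient methods choose it by line search.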

17

Brief review of optimization methods


Formulation of optimization problems
Examples of optimization problems
Optimization methods
Extremum of a smooth function
Gradient search
Simplex algorithm

Lagrangian methods and Lagrange multipliers


Randomized algorithms

Genetic algorithm

18

Brief review of optimization methods
Formulation of optimization problems
Examples of optimization problems
Optimization methods
Extremum of a smooth function
Gradient search
Simplex algorithm
Lagrangian methods and Lagrange multipliers
Randomized algorithms
Energy-function-based optimization
With applications in protein folding problems, Hopfield
neural networks, robotic path planning, etc.

Question (Steiner's problem): How to find a point inside
a triangle that gives the shortest sum of distances to
the vertices?

19

Protein folding
20

Hemoglobin

21

What is optimal control?

Definition
Optimal control is the problem of finding the best way
to control a dynamic system

22

What is optimal control?


Definition

Formulation of optimal control problems


State-space description of a system
The system is modeled as a set of first-order
differential equations (representation of the
dynamics of an nth-order system using n first-order
differential equations)

ẋ = Ax + Bu
y = Cx + Du

23

What is optimal control?


Definition

Formulation of optimal control problems


State-space description of a system
The system is modeled as a set of first-order differential equations (representation of
the dynamics of an nth-order system using n first-order differential equations)

Example: Newton's second law

m d²y(t)/dt² = u(t)

Define x1(t) = y(t) and x2(t) = dx1(t)/dt = dy(t)/dt. Then

[ẋ1(t); ẋ2(t)] = [0 1; 0 0] [x1(t); x2(t)] + [0; 1/m] u(t)
y(t) = [1 0] [x1(t); x2(t)]

Question: What are A, B, C, and D for this example?

24
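The slide's question can be answered in code (the mass value is an arbitrary choice for illustration), together with a quick Euler-integration check that the state-space model reproduces Newton's law: under a constant force u, the position should follow y(t) ≈ u t² / (2m).

```python
import numpy as np

# A, B, C, D for m*y'' = u with state x = (y, y')'; mass chosen arbitrarily.
m = 2.0
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0 / m]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# Euler-integrate xdot = A x + B u under constant u over t in [0, 1].
u, dt = 1.0, 1e-4
x = np.zeros((2, 1))
for _ in range(10000):
    x = x + dt * (A @ x + B * u)
y1 = (C @ x + D * u).item()
print(y1)  # close to u/(2m) = 0.25, as Newton's law predicts
```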

What is optimal control?
Definition

Formulation of optimal control problems


State-space description of a system

Objective functions or performance indices


Example 1: Suppose that the objective is to control a
dynamical system modeled by the equations
ẋ = Ax + Bu,  x(t0) = x0
y = Cx
on a fixed interval [t0, tf] so that the components of the
state vector are small. A suitable performance index
to be minimized would be

J1 = ∫_{t0}^{tf} xᵀ(t) x(t) dt
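For intuition, J1 can be evaluated numerically for any given trajectory. As a hypothetical check (not from the slides), take the scalar system ẋ = -x with x(0) = 1, so x(t) = e^{-t}; over [0, 2] the analytic value is (1 - e^{-4})/2.

```python
import numpy as np

# Numerically evaluate J1 = integral of x'(t) x(t) dt for x(t) = e^{-t} on [0, 2].
t = np.linspace(0.0, 2.0, 20001)
x = np.exp(-t)                     # trajectory of xdot = -x, x(0) = 1
dt = t[1] - t[0]
# Trapezoidal rule; in the scalar case x' x is just x^2.
J1 = np.sum((x[:-1] ** 2 + x[1:] ** 2) / 2.0) * dt
print(J1)  # analytic value (1 - e^{-4})/2, about 0.4908
```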

25

What is optimal control?


Definition

Formulation of optimal control problems


State-space description of a system

Objective functions or performance indices


Example 1
Example 2: If the objective is to control the system so
that the components of the output, y(t), are to be small,
then we could use the performance index
J2 = ∫_{t0}^{tf} yᵀ(t) y(t) dt = ∫_{t0}^{tf} xᵀ(t) CᵀC x(t) dt = ∫_{t0}^{tf} xᵀ(t) Q x(t) dt

where the weight matrix Q = CᵀC is symmetric positive
semidefinite
26

What is optimal control?


Definition

Formulation of optimal control problems


State-space description of a system

Objective functions or performance indices


Example 1
Example 2
Example 3: If the objective is to control the system so
that the components of the input, u(t), are to be small,
then we could use the performance index

J3 = ∫_{t0}^{tf} uᵀ(t) u(t) dt   or   J3 = ∫_{t0}^{tf} uᵀ(t) R u(t) dt

where the weight matrix R is symmetric positive definite

27

What is optimal control?
Definition

Formulation of optimal control problems


State-space description of a system

Objective functions or performance indices


Example 1
Example 2
Example 3
Example 4: If we wish the final state x(tf) to be as close
as possible to 0, then we could use the performance
index
J4 = xᵀ(tf) F x(tf)

where F is a symmetric positive definite matrix

28

What is optimal control?


Definition

Formulation of optimal control problems


State-space description of a system
Objective functions or performance indices

LQR (linear quadratic regulator) problem


The control aim is to keep the state small, the control
not too large, and the final state as near to 0 as
possible. The resulting performance index is

J = xᵀ(tf) F x(tf) + ∫_{t0}^{tf} [xᵀ(t) Q x(t) + uᵀ(t) R u(t)] dt

Minimizing the above performance index subject to


ẋ = Ax + Bu,  x(t0) = x0
y = Cx
is called the LQR problem 29

What is optimal control?


Definition

Formulation of optimal control problems


State-space description of a system
Objective functions or performance indices

LQR (linear quadratic regulator) problem

J = xᵀ(tf) F x(tf) + ∫_{t0}^{tf} [xᵀ(t) Q x(t) + uᵀ(t) R u(t)] dt

Question: What if the desired state is not 0 but xd(tf)?

30

What is optimal control?
Definition
Formulation of optimal control problems

Comparison with conventional optimization problems

Question: What are the decision variables and
constraints of an optimal control problem?

31

Why optimal control?

Problems with classical control system design


Classical design is a trial-and-error process
Classical design is to determine the parameters of an
acceptable system
Classical design is essentially restricted to
single-input single-output LTI systems

32

Why optimal control?


Problems with classical control system design

Why optimal control?


Based on state-space description of systems and
applicable to control problems involving multi-input
multi-output systems and time-varying situations
Optimal design (in optimal control) vs. acceptable
design (in classical control)
Optimal control theory provides strong analytical tools

33

Why optimal control?
Problems with classical control system design
Why optimal control?

Applications of optimal control


In engineering system design
In the study of biology (including neuroscience)
In management science and economics

34

Why optimal control?


Problems with classical control system design
Why optimal control?
Applications of optimal control

Word of caution
Optimal control design assumes that the system
model is exactly known and that there are no
disturbances
Lack of intuition in design
Optimal control should not be viewed as a
replacement of classical analytic methods; rather, it
should be considered as an addition that
complements the older tools of classical control

35

Approaches to optimal control

Calculus of variations
Pontryagin's maximum principle
Dynamic programming
Hamilton-Jacobi-Bellman equation

36

Approaches to optimal control
Calculus of variations
Dynamic programming

Linear quadratic regulator


For an LTI system described by ẋ = Ax + Bu, x(0) = x0,
with a quadratic cost function defined as

J = ∫_0^∞ (xᵀQx + uᵀRu) dt

the feedback control law that minimizes the value of
the cost is u = -Kx, where K is given by
K = R⁻¹BᵀP, and P is found by solving the algebraic
Riccati equation (ARE):

AᵀP + PA + Q - PBR⁻¹BᵀP = 0
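This recipe can be sketched with SciPy's ARE solver; the double-integrator plant and the Q, R weights below are arbitrary choices for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Solve the ARE  A'P + PA + Q - P B R^{-1} B' P = 0, then form K = R^{-1} B' P.
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double-integrator plant (illustrative)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # K = R^{-1} B' P
print(K)                                 # for this plant, K = [1, sqrt(3)]

# The closed-loop system xdot = (A - B K) x should be stable.
print(np.linalg.eigvals(A - B @ K).real) # both real parts negative
```

For this particular plant the ARE can also be solved by hand (P has entries p12 = 1, p11 = p22 = √3), which is a useful sanity check on the numerical result.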
37

An example

38

An example

39

An example

DCM

Feedforward control 40

41

An example

DCM

42
Feedback control

43

44

An example

J = ∫ (xᵀQx + uᵀRu) dt

u = Va,  x = (q, i, ω)'

J = ∫_0^∞ [20 q(t)² + ω(t)² + 0.01 Va(t)²] dt

(Figure: DC motor (DCM), torque Km·i)

45
Optimal control

46

References
J. R. Banga. Optimization in computational systems biology. BMC Systems Biology, 2008.
J. W. Chinneck. Practical optimization: a gentle introduction. Available online at
http://www.sce.carleton.ca/faculty/chinneck/po.html
G. B. Dantzig. The diet problem. Interfaces 20, 43-47, 1990.
R. J. Jagacinski and J. M. Flach. Control Theory for Humans: Quantitative Approaches to
Modeling Performance. Lawrence Erlbaum Associates Publishers, Mahwah, NJ, 2003.
F. L. Lewis, D. L. Vrabie, and V. L. Syrmos. Optimal Control, 3rd Edition, John Wiley and
Sons, 2012.
D. E. Kirk. An introduction to dynamic programming. IEEE Transactions on Education
E-10, 212-219, 1967.
A. Martin. Large-scale optimization. Optimization and Operations Research, in
Encyclopedia of Life Support Systems, Eolss Publishers, Oxford, UK, 2004.
S. H. Zak. Systems and Control. Oxford University Press, 2003.
http://asweknowit.net/images_edu/dwa5%20brain%20cells%20non-neuronal.jpg
http://en.wikipedia.org/wiki/Conjugate_gradient_method
http://en.wikipedia.org/wiki/File:Hb-animation2.gif
http://en.wikipedia.org/wiki/Linear-quadratic_regulator
http://en.wikipedia.org/wiki/Protein_folding
http://faculty.cs.tamu.edu/amato/dsmft/research/folding/index.shtml.OLD2
http://molsim.chem.uva.nl/gallery/index.html
http://www.johnrdixonbooks.com/images/Optimization.pdf
http://www.mathworks.com/products/control/demos.html?file=/products/demos/shipping/control/dcdemo.html 47
