
EE765 Optimal Control Theory I

1
Course contents
Review of control system performance (mathematical models in state space, the controllability and observability concepts)
Introduction to optimal control concepts
Statement of the optimal control problem
Performance measure (index)
Methods of optimization:
1. Dynamic programming
2. Calculus of variations and Pontryagin's minimum principle
The Hamilton-Jacobi-Bellman equation and its applications
Case studies

2
The optimization concept
1. What is optimization?
Optimization is the task of making the best choice among a set of given alternatives, or, equivalently, it is the collective process of finding the set of conditions required to achieve the best result from a given situation.
2. Why optimization?
Optimization is required for the following reasons:
To continually improve the performance of the system, which can be regarded from two viewpoints (economic and technological).
To achieve the best system performance in the minimum time with minimum error.
3. How is the optimization performed?
The comparison between different choices is usually done via an objective function. The objective function is simply the performance measure (i.e. the performance index).
4. What is the performance index?
A performance index is a quantitative measure of the performance of a system and is chosen so that the important system specifications can be met. The choice of performance index depends on the nature of the control problem.

3
The course requires a review of the following topics:
1. State-space concepts.
2. Solution of state-space equations.
3. Controllability and observability concepts.

In control systems, classical design procedures are best suited for linear, single-input, single-output systems with zero initial conditions. For many complex problems, the classical techniques will not achieve the required performance. Therefore, another approach must be used, such as optimal control theory.
What is the objective of optimal control theory?
The objective of optimal control theory is to determine the control signals that will cause a process to satisfy the physical constraints and at the same time minimize (or maximize) some performance criterion.
To apply this approach, the optimal control problem must be formulated. The formulation of an optimal control problem requires:
1. A mathematical description (or model) of the process to be controlled (state model).
2. A statement of the physical constraints (system limitations).
3. Specification of a performance criterion (performance index).

4
Review of control system performance
A control system has good performance if the following specifications are satisfied:
1. Minimum steady-state error.
2. Good stability performance.
3. Reasonable system response speed.
4. The effects of disturbances are very small or negligible.

To achieve the above requirements, the following steps are necessary:

1. Determine and understand the physical system performance.
2. Define the mathematical model representing the physical system.
3. Determine its original specifications in the time or frequency domain.
4. Determine the required specifications.
5. Adjust the system parameters to meet the required specifications if possible.
6. If the above step does not satisfy the required specifications, an additional device such as a controller or compensator must be inserted into the system.
7. Finalize the modified system structure by an optimization process to achieve all or some of the required specifications at the same time.

5
State-space method
State is the smallest set of variables (called state variables), or is defined as the minimum information required at time $t_0$ to find the response to subsequent inputs ($t \ge t_0$).
State vector is the vector $x(t)$ that determines the system state for any time $t \ge t_0$:

$$x(t) = [x_1, x_2, \ldots, x_n]^T$$

State space. The n-dimensional space whose coordinate axes consist of the $x_1$ axis, $x_2$ axis, ..., $x_n$ axis is called the state space.
State-space equations:

$$\dot{x}(t) = Ax + Bu, \qquad y(t) = Cx + Du$$

where A is the state matrix, B is the input matrix, C is the output matrix, and D is the direct transmission matrix.

6
State-space representation
1- If the forcing function does not involve derivative terms:

$$y^{(n)} + a_1 y^{(n-1)} + \cdots + a_{n-1}\dot{y} + a_n y = u$$

Let $x_1 = y,\ x_2 = \dot{y},\ x_3 = \ddot{y},\ \ldots,\ x_n = y^{(n-1)}$. Then

$$\dot{x}_1 = x_2,\quad \dot{x}_2 = x_3,\quad \ldots,\quad \dot{x}_{n-1} = x_n,$$
$$\dot{x}_n = -a_n x_1 - a_{n-1} x_2 - \cdots - a_2 x_{n-1} - a_1 x_n + u$$

or $\dot{x} = Ax + Bu$, $y = Cx$, with

$$A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_1 \end{bmatrix},\qquad B = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix},\qquad C = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \end{bmatrix}$$

7
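The companion-form construction above is easy to automate. As an assumed illustration (the helper name is not from the slides), the sketch below builds A, B, C from the coefficients $a_1, \ldots, a_n$ and confirms that the characteristic polynomial of A reproduces them:

```python
import numpy as np

def companion_form(a):
    """Companion (controllable-canonical) matrices for
    y^(n) + a1*y^(n-1) + ... + an*y = u, with a = [a1, ..., an]."""
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)        # super-diagonal of ones
    A[-1, :] = -np.array(a)[::-1]     # last row: -an, ..., -a1
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    C = np.zeros((1, n)); C[0, 0] = 1.0
    return A, B, C

# check: the characteristic polynomial of A is s^3 + 18 s^2 + 192 s + 640
A, B, C = companion_form([18, 192, 640])
print(np.poly(A))  # ≈ [1, 18, 192, 640]
```

The check relies on `np.poly`, which recovers the characteristic polynomial from the eigenvalues of A.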
Examples

8
Example 2

9
2- If the forcing function involves derivative terms

10
Examples
1- For the following system, derive the state-space model.

11
2- Find the state-space model for the system with forward path $\dfrac{4(s+4)}{s+16}$ in cascade with $\dfrac{40}{s(s+2)}$ and unity negative feedback from $y(s)$ to $u(s)$.

The corresponding closed-loop transfer function is

$$\frac{Y(s)}{U(s)} = \frac{160(s+4)}{s^3 + 18s^2 + 192s + 640}$$

Then the corresponding differential equation is $\dddot{y} + 18\ddot{y} + 192\dot{y} + 640y = 160\dot{u} + 640u$.
The state-space model can be obtained as follows. Let

$$x_1 = y - \beta_0 u,\qquad x_2 = \dot{x}_1 - \beta_1 u,\qquad x_3 = \dot{x}_2 - \beta_2 u$$

where $\beta_0$, $\beta_1$, $\beta_2$ and $\beta_3$ are determined from

$$\beta_0 = b_0 = 0$$
$$\beta_1 = b_1 - a_1\beta_0 = 0$$
$$\beta_2 = b_2 - a_1\beta_1 - a_2\beta_0 = 160$$
$$\beta_3 = b_3 - a_1\beta_2 - a_2\beta_1 - a_3\beta_0 = 640 - 18\times 160 = -2240$$

Then the state equations are

$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\\ \dot{x}_3\end{bmatrix} = \begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ -640 & -192 & -18\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix} + \begin{bmatrix}0\\ 160\\ -2240\end{bmatrix}u, \qquad y = \begin{bmatrix}1 & 0 & 0\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix}$$

12
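The realization derived above can be cross-checked numerically. The sketch below (using the $\beta$ values computed in the example) evaluates $G(s) = C(sI-A)^{-1}B$ at an arbitrary test point and compares it with $160(s+4)/(s^3+18s^2+192s+640)$:

```python
import numpy as np

A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [-640., -192., -18.]])
B = np.array([[0.], [160.], [-2240.]])   # beta_1, beta_2, beta_3
C = np.array([[1., 0., 0.]])

s0 = 1.0  # arbitrary test frequency
G_ss = (C @ np.linalg.inv(s0 * np.eye(3) - A) @ B)[0, 0]
G_tf = 160 * (s0 + 4) / (s0**3 + 18 * s0**2 + 192 * s0 + 640)
print(G_ss, G_tf)  # both equal 800/851
```

Both evaluations agree, confirming that the $\beta$ construction preserves the transfer function.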
Correlation between transfer function and state space
Let the state equations be given by

$$\dot{x}(t) = Ax + Bu, \qquad y(t) = Cx + Du$$

Using the Laplace transform technique (with zero initial conditions) yields

$$sX(s) = AX(s) + BU(s)$$
$$(sI - A)X(s) = BU(s)$$
$$X(s) = (sI - A)^{-1}BU(s)$$
$$Y(s) = [C(sI - A)^{-1}B + D]U(s)$$
$$G(s) = \frac{Y(s)}{U(s)} = C(sI - A)^{-1}B + D = C\Phi(s)B + D$$

where $\Phi(s) = (sI - A)^{-1}$; its inverse Laplace transform is the state-transition matrix.

13
Example: Find the transfer function for the RLC network described by the state model

$$\dot{x} = \begin{bmatrix}0 & \frac{1}{C}\\ -\frac{1}{L} & -\frac{R}{L}\end{bmatrix}x + \begin{bmatrix}0\\ \frac{1}{L}\end{bmatrix}u, \qquad y = \begin{bmatrix}1 & 0\end{bmatrix}x$$

where $x_1$ is the capacitor voltage and $x_2$ the inductor current. Then

$$|sI - A| = s^2 + \frac{R}{L}s + \frac{1}{LC}$$

$$G(s) = C(sI - A)^{-1}B = \frac{\dfrac{1}{LC}}{s^2 + \dfrac{R}{L}s + \dfrac{1}{LC}}$$

14
State-space representation forms
The state space can be represented in four forms.
Let a system be defined by

$$y^{(n)} + a_1 y^{(n-1)} + a_2 y^{(n-2)} + \cdots + a_{n-1}\dot{y} + a_n y = b_0 u^{(n)} + b_1 u^{(n-1)} + \cdots + b_{n-1}\dot{u} + b_n u$$

1. Controllable canonical form
This form is important in control system design. The state equation is

$$\dot{x} = \begin{bmatrix}0 & 1 & 0 & \cdots & 0\\ 0 & 0 & 1 & \cdots & 0\\ \vdots & & & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1\\ -a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_1\end{bmatrix}x + \begin{bmatrix}0\\ 0\\ \vdots\\ 0\\ 1\end{bmatrix}u$$

and the output equation is

$$y = \begin{bmatrix}b_n - a_n b_0 & b_{n-1} - a_{n-1}b_0 & \cdots & b_1 - a_1 b_0\end{bmatrix}x + b_0 u$$

15
2. Observable canonical form
This form is important for the design of an observer (estimator). The state equation is

$$\dot{x} = \begin{bmatrix}0 & 0 & \cdots & 0 & -a_n\\ 1 & 0 & \cdots & 0 & -a_{n-1}\\ 0 & 1 & \cdots & 0 & -a_{n-2}\\ \vdots & & \ddots & & \vdots\\ 0 & 0 & \cdots & 1 & -a_1\end{bmatrix}x + \begin{bmatrix}b_n - a_n b_0\\ b_{n-1} - a_{n-1}b_0\\ \vdots\\ b_1 - a_1 b_0\end{bmatrix}u$$

and the output equation is

$$y = \begin{bmatrix}0 & 0 & \cdots & 0 & 1\end{bmatrix}x + b_0 u$$

16
3- Diagonal canonical form
This form can be derived only when the denominator polynomial involves only distinct real roots.

$$\frac{Y(s)}{U(s)} = \frac{b_0 s^n + b_1 s^{n-1} + \cdots + b_{n-1}s + b_n}{(s+p_1)(s+p_2)\cdots(s+p_n)} = b_0 + \frac{c_1}{s+p_1} + \frac{c_2}{s+p_2} + \cdots + \frac{c_n}{s+p_n}$$

The state equation is

$$\dot{x} = \begin{bmatrix}-p_1 & 0 & \cdots & 0\\ 0 & -p_2 & \cdots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & -p_n\end{bmatrix}x + \begin{bmatrix}1\\ 1\\ \vdots\\ 1\end{bmatrix}u$$

and the output equation is

$$y = \begin{bmatrix}c_1 & c_2 & \cdots & c_n\end{bmatrix}x + b_0 u$$

17
4- Jordan canonical form
This form is used when there are multiple roots in the denominator.
Let $p_1 = p_2 = p_3$, so that

$$\frac{Y(s)}{U(s)} = \frac{b_0 s^n + b_1 s^{n-1} + \cdots + b_{n-1}s + b_n}{(s+p_1)^3(s+p_4)\cdots(s+p_n)} = b_0 + \frac{c_1}{(s+p_1)^3} + \frac{c_2}{(s+p_1)^2} + \frac{c_3}{s+p_1} + \frac{c_4}{s+p_4} + \cdots + \frac{c_n}{s+p_n}$$

The state equation is

$$\dot{x} = \begin{bmatrix}-p_1 & 1 & 0 & 0 & \cdots & 0\\ 0 & -p_1 & 1 & 0 & \cdots & 0\\ 0 & 0 & -p_1 & 0 & \cdots & 0\\ 0 & 0 & 0 & -p_4 & \cdots & 0\\ \vdots & & & & \ddots & \vdots\\ 0 & 0 & 0 & 0 & \cdots & -p_n\end{bmatrix}x + \begin{bmatrix}0\\ 0\\ 1\\ 1\\ \vdots\\ 1\end{bmatrix}u$$

and the output equation is

$$y = \begin{bmatrix}c_1 & c_2 & \cdots & c_n\end{bmatrix}x + b_0 u$$

18
Diagonalization of an n×n matrix

19
If the matrix A has repeated eigenvalues, this diagonalization is in general impossible, and the Jordan form is used instead.

20
Example
Obtain state-space representations in the controllable canonical form, observable canonical form, and diagonal canonical form for the system with transfer function

$$\frac{Y(s)}{U(s)} = \frac{s+3}{s^2 + 3s + 2} \quad\text{or}\quad \ddot{y} + 3\dot{y} + 2y = \dot{u} + 3u$$

Comparing with $\ddot{y} + a_1\dot{y} + a_2 y = b_0\ddot{u} + b_1\dot{u} + b_2 u$:
n = 2, $a_2 = a_n = 2$, $a_1 = 3$, $b_0 = 0$, $b_1 = 1$, $b_2 = 3$.

1. Controllable canonical form

$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix} = \begin{bmatrix}0 & 1\\ -a_2 & -a_1\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} + \begin{bmatrix}0\\ 1\end{bmatrix}u = \begin{bmatrix}0 & 1\\ -2 & -3\end{bmatrix}x + \begin{bmatrix}0\\ 1\end{bmatrix}u$$

$$y(t) = \begin{bmatrix}b_2 - a_2 b_0 & b_1 - a_1 b_0\end{bmatrix}x + b_0 u = \begin{bmatrix}3 & 1\end{bmatrix}x$$

21
2. Observable canonical form

$$\dot{x} = \begin{bmatrix}0 & -a_2\\ 1 & -a_1\end{bmatrix}x + \begin{bmatrix}b_2 - a_2 b_0\\ b_1 - a_1 b_0\end{bmatrix}u = \begin{bmatrix}0 & -2\\ 1 & -3\end{bmatrix}x + \begin{bmatrix}3\\ 1\end{bmatrix}u$$

$$y(t) = \begin{bmatrix}0 & 1\end{bmatrix}x + b_0 u = \begin{bmatrix}0 & 1\end{bmatrix}x$$

3. Diagonal canonical form

$$\frac{Y(s)}{U(s)} = \frac{s+3}{s^2+3s+2} = \frac{s+3}{(s+1)(s+2)} = \frac{c_1}{s+1} + \frac{c_2}{s+2} = \frac{2}{s+1} + \frac{-1}{s+2}$$

$$\dot{x} = \begin{bmatrix}-1 & 0\\ 0 & -2\end{bmatrix}x + \begin{bmatrix}1\\ 1\end{bmatrix}u, \qquad y(t) = \begin{bmatrix}2 & -1\end{bmatrix}x$$

22
23
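All three realizations above describe the same transfer function. A quick numpy check (an assumed illustration, not from the slides) evaluates $G(s)$ at a test point for the controllable and diagonal forms:

```python
import numpy as np

def G(A, B, C, s):
    """Evaluate the transfer function C (sI - A)^-1 B at s."""
    return (C @ np.linalg.inv(s * np.eye(A.shape[0]) - A) @ B)[0, 0]

# controllable canonical form of (s+3)/(s^2+3s+2)
Ac = np.array([[0., 1.], [-2., -3.]]); Bc = np.array([[0.], [1.]]); Cc = np.array([[3., 1.]])
# diagonal canonical form: residues c1 = 2 at s = -1, c2 = -1 at s = -2
Ad = np.diag([-1., -2.]); Bd = np.array([[1.], [1.]]); Cd = np.array([[2., -1.]])

s0 = 1.0
print(G(Ac, Bc, Cc, s0), G(Ad, Bd, Cd, s0))  # both = (1+3)/(1+3+2) = 2/3
```

Agreement at arbitrary test points is a cheap sanity check that a hand-derived canonical form is correct.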
24
problems
1- Obtain the state-space representation for the system

$$\frac{Y(s)}{U(s)} = \frac{s+6}{s^2 + 5s + 6}$$

in (a) controllable canonical form, (b) observable canonical form.

2- Obtain a state-space representation in diagonal canonical form for

$$\dddot{y} + 6\ddot{y} + 11\dot{y} + 6y = 6u$$

3- Find $x_1(t)$ and $x_2(t)$ of the system described by

$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix} = \begin{bmatrix}0 & 1\\ -3 & -2\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}$$

where the initial conditions are $x_1(0) = 1$, $x_2(0) = 1$.
25
Solving the time-invariant state equation
1. Homogeneous state equation
Scalar differential equation: $\dot{x} = ax$  (1)

The solution can be assumed to be a polynomial in t:

$$x(t) = b_0 + b_1 t + b_2 t^2 + \cdots + b_k t^k + \cdots$$

Substituting this into equation (1) yields

$$b_1 + 2b_2 t + 3b_3 t^2 + \cdots + k b_k t^{k-1} + \cdots = a(b_0 + b_1 t + b_2 t^2 + \cdots + b_k t^k + \cdots) \quad (2)$$

Equating the coefficients of equal powers of t yields

$$b_1 = ab_0,\qquad b_2 = \tfrac{1}{2}ab_1 = \tfrac{1}{2}a^2 b_0,\qquad b_3 = \tfrac{1}{3}ab_2 = \tfrac{1}{3\cdot 2}a^3 b_0,\qquad \ldots,\qquad b_k = \tfrac{1}{k!}a^k b_0$$

where $b_0$ is determined by substituting t = 0 into the assumed solution: $x(0) = b_0$.
Then the solution is

$$x(t) = \left(1 + at + \frac{1}{2!}a^2 t^2 + \cdots + \frac{1}{k!}a^k t^k + \cdots\right)x(0) = e^{at}x(0)$$

26
Vector-matrix differential equation: $\dot{x} = Ax$, where x is an n-vector and A is an n×n constant matrix.
If x(t) is assumed as

$$x(t) = b_0 + b_1 t + b_2 t^2 + \cdots + b_k t^k + \cdots$$

then, as in the scalar case,

$$b_1 = Ab_0,\qquad b_2 = \tfrac{1}{2}Ab_1 = \tfrac{1}{2}A^2 b_0,\qquad b_3 = \tfrac{1}{3}Ab_2 = \tfrac{1}{3\cdot 2}A^3 b_0,\qquad \ldots,\qquad b_k = \tfrac{1}{k!}A^k b_0$$

The solution is

$$x(t) = \left(I + At + \frac{1}{2!}A^2 t^2 + \cdots + \frac{1}{k!}A^k t^k + \cdots\right)x(0) = e^{At}x(0) = \Phi(t)x(0)$$

where $\Phi(t)$ is called the state-transition matrix.
27
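The series solution above can be evaluated directly. The sketch below (truncation after 25 terms is an arbitrary choice) sums the series for $e^{At}$ and checks one entry against the closed form $2e^{-t} - e^{-2t}$, which is the (1,1) entry of $e^{At}$ for $A = \begin{bmatrix}0 & 1\\ -2 & -3\end{bmatrix}$:

```python
import numpy as np

def expm_series(A, t, K=25):
    """e^{At} as the truncated power series I + At + (At)^2/2! + ..."""
    n = A.shape[0]
    term = np.eye(n)
    total = np.eye(n)
    for k in range(1, K):
        term = term @ (A * t) / k   # (At)^k / k!
        total = total + term
    return total

A = np.array([[0., 1.], [-2., -3.]])
Phi = expm_series(A, 0.5)
print(Phi[0, 0], 2 * np.exp(-0.5) - np.exp(-1.0))  # both ≈ 0.8452
```

Truncated series are numerically fragile for large ‖At‖; production code would use a dedicated routine such as `scipy.linalg.expm` instead.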
Laplace transform approach to the solution of homogeneous state equations
Let $\dot{x} = Ax$. Taking the Laplace transform of both sides yields

$$sX(s) - x(0) = AX(s)$$

or

$$(sI - A)X(s) = x(0)$$

Premultiplying both sides by $(sI - A)^{-1}$ yields $X(s) = (sI - A)^{-1}x(0)$, where

$$(sI - A)^{-1} = \frac{I}{s} + \frac{A}{s^2} + \frac{A^2}{s^3} + \cdots$$

Taking the inverse Laplace transform yields

$$\mathcal{L}^{-1}[(sI - A)^{-1}] = I + At + \frac{A^2 t^2}{2!} + \frac{A^3 t^3}{3!} + \cdots = e^{At}$$

Therefore, the solution is

$$x(t) = e^{At}x(0) = \Phi(t)x(0)$$

28
Properties of state-transition matrices
For the time-invariant system:

1. $\Phi(0) = e^{A\cdot 0} = I$
2. $\Phi(t) = e^{At} = (e^{-At})^{-1} = \Phi^{-1}(-t)$
3. $\Phi^{-1}(t) = \Phi(-t)$
4. $\Phi(t_1 + t_2) = e^{A(t_1 + t_2)} = e^{At_1}e^{At_2} = \Phi(t_1)\Phi(t_2) = \Phi(t_2)\Phi(t_1)$
5. $\Phi(t_2 - t_1)\Phi(t_1 - t_0) = \Phi(t_2 - t_0) = \Phi(t_1 - t_0)\Phi(t_2 - t_1)$
6. $[\Phi(t)]^n = \Phi(nt)$

29
Examples
1. Obtain the state-transition matrix $\Phi(t)$ and the inverse state-transition matrix $\Phi^{-1}(t)$ of the following system:

$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix} = \begin{bmatrix}0 & 1\\ -2 & -3\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}$$

For this system, $A = \begin{bmatrix}0 & 1\\ -2 & -3\end{bmatrix}$ and $\Phi(t) = e^{At} = \mathcal{L}^{-1}[(sI - A)^{-1}]$.

$$sI - A = \begin{bmatrix}s & 0\\ 0 & s\end{bmatrix} - \begin{bmatrix}0 & 1\\ -2 & -3\end{bmatrix} = \begin{bmatrix}s & -1\\ 2 & s+3\end{bmatrix}$$

$$(sI - A)^{-1} = \frac{1}{(s+1)(s+2)}\begin{bmatrix}s+3 & 1\\ -2 & s\end{bmatrix} = \begin{bmatrix}\dfrac{s+3}{(s+1)(s+2)} & \dfrac{1}{(s+1)(s+2)}\\[2mm] \dfrac{-2}{(s+1)(s+2)} & \dfrac{s}{(s+1)(s+2)}\end{bmatrix}$$

$$\Phi(t) = e^{At} = \begin{bmatrix}2e^{-t} - e^{-2t} & e^{-t} - e^{-2t}\\ -2e^{-t} + 2e^{-2t} & -e^{-t} + 2e^{-2t}\end{bmatrix}$$

$$\Phi^{-1}(t) = \Phi(-t) = e^{-At} = \begin{bmatrix}2e^{t} - e^{2t} & e^{t} - e^{2t}\\ -2e^{t} + 2e^{2t} & -e^{t} + 2e^{2t}\end{bmatrix}$$
30
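The closed-form $\Phi(t)$ derived above can be checked against the properties from the previous slide. This sketch verifies $\Phi(0) = I$, the semigroup property, and $\Phi^{-1}(t) = \Phi(-t)$ numerically:

```python
import numpy as np

def Phi(t):
    """Closed-form state-transition matrix derived above for A = [[0,1],[-2,-3]]."""
    e1, e2 = np.exp(-t), np.exp(-2 * t)
    return np.array([[2 * e1 - e2,       e1 - e2],
                     [-2 * e1 + 2 * e2, -e1 + 2 * e2]])

print(np.allclose(Phi(0.0), np.eye(2)))                  # Phi(0) = I
print(np.allclose(Phi(1.2), Phi(0.3) @ Phi(0.9)))        # Phi(t1+t2) = Phi(t1)Phi(t2)
print(np.allclose(np.linalg.inv(Phi(0.5)), Phi(-0.5)))   # Phi^-1(t) = Phi(-t)
```

If a hand-derived $\Phi(t)$ fails any of these checks, the partial-fraction step almost certainly went wrong.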
2- Non-homogeneous state equation

$$\dot{x}(t) = Ax + Bu, \qquad y(t) = Cx + Du$$

$$x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau = \Phi(t)x(0) + \int_0^t \Phi(t-\tau)Bu(\tau)\,d\tau$$

Example: obtain the time response of the following system:

$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix} = \begin{bmatrix}0 & 1\\ -2 & -3\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} + \begin{bmatrix}0\\ 1\end{bmatrix}u$$

where u(t) is the unit step function applied at t = 0, i.e. u(t) = 1.

$$\Phi(t) = e^{At} = \begin{bmatrix}2e^{-t} - e^{-2t} & e^{-t} - e^{-2t}\\ -2e^{-t} + 2e^{-2t} & -e^{-t} + 2e^{-2t}\end{bmatrix}$$

$$x(t) = \Phi(t)x(0) + \int_0^t \Phi(t-\tau)B\cdot 1\,d\tau = \Phi(t)\begin{bmatrix}x_1(0)\\ x_2(0)\end{bmatrix} + \begin{bmatrix}\tfrac{1}{2} - e^{-t} + \tfrac{1}{2}e^{-2t}\\ e^{-t} - e^{-2t}\end{bmatrix}$$
31
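The convolution-integral result above (with $x(0) = 0$) gives $x_1(t) = \tfrac{1}{2} - e^{-t} + \tfrac{1}{2}e^{-2t}$. A sketch that reproduces it by simple forward-Euler integration (the step size is an arbitrary assumption):

```python
import numpy as np

A = np.array([[0., 1.], [-2., -3.]])
B = np.array([0., 1.])
x = np.zeros(2)          # zero initial state
dt, T = 1e-4, 1.0
for _ in range(int(T / dt)):
    x = x + dt * (A @ x + B * 1.0)   # unit-step input u = 1

x1_closed = 0.5 - np.exp(-T) + 0.5 * np.exp(-2 * T)
print(x[0], x1_closed)   # both ≈ 0.1998
```

Forward Euler is first-order accurate, so the agreement here is only to about the step size; it is used purely as an independent check on the analytic answer.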
Sylvester's interpolation formula for computing $e^{At}$
The matrix exponential $e^{At}$ can be obtained by solving the following determinant equation (for distinct eigenvalues $\lambda_1, \ldots, \lambda_m$ of A):

$$\begin{vmatrix}1 & \lambda_1 & \lambda_1^2 & \cdots & \lambda_1^{m-1} & e^{\lambda_1 t}\\ 1 & \lambda_2 & \lambda_2^2 & \cdots & \lambda_2^{m-1} & e^{\lambda_2 t}\\ \vdots & & & & & \vdots\\ 1 & \lambda_m & \lambda_m^2 & \cdots & \lambda_m^{m-1} & e^{\lambda_m t}\\ I & A & A^2 & \cdots & A^{m-1} & e^{At}\end{vmatrix} = 0$$

By solving this determinant, $e^{At}$ is obtained in terms of $A^k$ (k = 0, 1, 2, ..., m−1). Alternatively, the solution can be written as

$$e^{At} = \alpha_0(t)I + \alpha_1(t)A + \alpha_2(t)A^2 + \cdots + \alpha_{m-1}(t)A^{m-1}$$

32
33
Example
Compute $e^{At}$ using Sylvester's interpolation formula if $A = \begin{bmatrix}0 & 1\\ 0 & -2\end{bmatrix}$.

$$\begin{vmatrix}1 & \lambda_1 & e^{\lambda_1 t}\\ 1 & \lambda_2 & e^{\lambda_2 t}\\ I & A & e^{At}\end{vmatrix} = 0$$

where $\lambda_1, \lambda_2$ are the eigenvalues of matrix A, i.e. the roots of the characteristic equation $|\lambda I - A| = 0$:

$$\begin{vmatrix}\lambda & -1\\ 0 & \lambda + 2\end{vmatrix} = \lambda(\lambda + 2) = 0 \quad\Rightarrow\quad \lambda_1 = 0,\ \lambda_2 = -2$$

Then

$$\begin{vmatrix}1 & 0 & 1\\ 1 & -2 & e^{-2t}\\ I & A & e^{At}\end{vmatrix} = 0$$

Expanding the determinant gives

$$-2e^{At} - Ae^{-2t} + A + 2I = 0$$

or

$$e^{At} = \frac{1}{2}(A + 2I - Ae^{-2t}) = \begin{bmatrix}1 & \frac{1}{2}(1 - e^{-2t})\\ 0 & e^{-2t}\end{bmatrix}$$

34
An alternative approach uses

$$\alpha_0(t) + \alpha_1(t)\lambda_1 = e^{\lambda_1 t}, \qquad \alpha_0(t) + \alpha_1(t)\lambda_2 = e^{\lambda_2 t}$$

With $\lambda_1 = 0$, $\lambda_2 = -2$:

$$\alpha_0(t) = 1, \qquad \alpha_0(t) - 2\alpha_1(t) = e^{-2t} \quad\Rightarrow\quad \alpha_1(t) = \frac{1}{2}(1 - e^{-2t})$$

Then

$$e^{At} = \alpha_0(t)I + \alpha_1(t)A = I + \frac{1}{2}(1 - e^{-2t})A = \begin{bmatrix}1 & \frac{1}{2}(1 - e^{-2t})\\ 0 & e^{-2t}\end{bmatrix}$$

Problem: compute $e^{At}$ by the two methods if $A = \begin{bmatrix}0 & 1\\ -2 & -3\end{bmatrix}$.

35
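The $\alpha$-coefficient form of Sylvester's method is easy to verify numerically. A sketch that compares $e^{At} = \alpha_0 I + \alpha_1 A$ with a truncated power-series evaluation for the example matrix:

```python
import numpy as np

A = np.array([[0., 1.], [0., -2.]])
t = 0.8
a0 = 1.0                            # alpha_0(t), from lambda_1 = 0
a1 = 0.5 * (1 - np.exp(-2 * t))     # alpha_1(t), from lambda_2 = -2
eAt = a0 * np.eye(2) + a1 * A       # Sylvester form

# independent series check
total, term = np.eye(2), np.eye(2)
for k in range(1, 30):
    term = term @ (A * t) / k
    total = total + term
print(np.allclose(eAt, total))  # True
```

The two evaluations agree to machine precision, confirming the $\alpha_0$, $\alpha_1$ derived above.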
Controllability
Controllability is one of the important concepts of modern control.
A system is said to be completely state controllable if, for any initial time $t_0$, each initial state $x(t_0)$ can be transferred to any final state $x(t_f)$ in a finite time $t_f > t_0$ by means of an unconstrained control u(t).
An unconstrained control u(t) has no limit on its amplitude.
Consider the following system. It is clear that every state variable $x_n(t)$ can be controlled by the input signal u(t):

$$\dot{x} = \begin{bmatrix}a_1 & 0 & 0\\ 0 & a_2 & 0\\ 0 & 0 & a_3\end{bmatrix}x + \begin{bmatrix}1\\ 1\\ 1\end{bmatrix}u, \qquad y = k_1 x_1 + k_2 x_2 + k_3 x_3 = \begin{bmatrix}k_1 & k_2 & k_3\end{bmatrix}x$$

36
But for the system given below, the state variable $x_1$ is not affected by the control u(t); hence the system is uncontrollable:

$$\dot{x} = \begin{bmatrix}a_4 & 0 & 0\\ 0 & a_5 & 0\\ 0 & 0 & a_6\end{bmatrix}x + \begin{bmatrix}0\\ 1\\ 1\end{bmatrix}u, \qquad y = \begin{bmatrix}k_4 & k_5 & k_6\end{bmatrix}x$$

37
Conditions for state controllability
Method I
The system is completely state controllable if the rank [the number of independent rows] of the following n×n matrix equals n:

$$M_c = \begin{bmatrix}B & AB & A^2B & \cdots & A^{n-1}B\end{bmatrix}$$

This matrix is called the controllability matrix. Equivalently, its determinant must be nonzero.
Example 1: consider the system given by

$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix} = \begin{bmatrix}1 & 1\\ -2 & -1\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} + \begin{bmatrix}0\\ 1\end{bmatrix}u$$

State whether the above system is controllable or not. The controllability matrix is

$$M_c = \begin{bmatrix}B & AB\end{bmatrix} = \begin{bmatrix}0 & 1\\ 1 & -1\end{bmatrix}$$

Rank of $M_c$ = 2 = n. Then the above system satisfies the controllability condition and is controllable.

38
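The rank test can be automated. A small helper (an assumed illustration, not from the slides) builds $M_c = [B\ AB\ \cdots\ A^{n-1}B]$ and checks its rank for two systems, the controllable example above and an uncontrollable one:

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack B, AB, ..., A^{n-1}B column-wise."""
    n = A.shape[0]
    cols = [B]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    return np.hstack(cols)

A1 = np.array([[1., 1.], [-2., -1.]]); B1 = np.array([[0.], [1.]])
Mc1 = controllability_matrix(A1, B1)
print(np.linalg.matrix_rank(Mc1))  # 2 -> controllable

A2 = np.array([[2., 0.], [1., 1.]]); B2 = np.array([[1.], [1.]])  # AB parallel to B
Mc2 = controllability_matrix(A2, B2)
print(np.linalg.matrix_rank(Mc2))  # 1 -> uncontrollable
```

In the second system $AB = 2B$, so the columns of $M_c$ are linearly dependent and the rank drops below n.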
Example 2
Determine whether the following system is controllable or not:

$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix} = \begin{bmatrix}2 & 0\\ 1 & 1\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} + \begin{bmatrix}1\\ 1\end{bmatrix}u$$

The controllability matrix is

$$M_c = \begin{bmatrix}B & AB\end{bmatrix} = \begin{bmatrix}1 & 2\\ 1 & 2\end{bmatrix}$$

Rank of $M_c$ = 1 ≠ n (determinant = 0). Then the above system does not satisfy the controllability condition and is uncontrollable.

39
Method II
If the system is represented with non-repeated eigenvalues, the controllability condition can be checked as follows:
1. Transform the system into diagonal form using the transformation x = Tz.
2. Then check the new input matrix $\tilde{B}$.
3. If $\tilde{B}$ has no zero row, the system is controllable.
4. This is done as follows:

$$\dot{x} = Ax + Bu$$
$$T\dot{z} = ATz + Bu$$
$$\dot{z} = T^{-1}ATz + T^{-1}Bu = \Lambda z + \tilde{B}u, \qquad \Lambda = T^{-1}AT,\quad \tilde{B} = T^{-1}B$$

5. The transformation matrix T is the Vandermonde matrix of the eigenvalues:

$$T = \begin{bmatrix}1 & 1 & \cdots & 1\\ \lambda_1 & \lambda_2 & \cdots & \lambda_n\\ \lambda_1^2 & \lambda_2^2 & \cdots & \lambda_n^2\\ \vdots & & & \vdots\\ \lambda_1^{n-1} & \lambda_2^{n-1} & \cdots & \lambda_n^{n-1}\end{bmatrix}$$

40
Example: using the transformation method, state whether the following system is controllable or not:

$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix} = \begin{bmatrix}1 & 1\\ -2 & -1\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} + \begin{bmatrix}0\\ 1\end{bmatrix}u$$

The eigenvalues are found from $|\lambda I - A| = 0$, giving $\lambda_1 = j$ and $\lambda_2 = -j$. Then

$$T = \begin{bmatrix}1 & 1\\ \lambda_1 & \lambda_2\end{bmatrix} = \begin{bmatrix}1 & 1\\ j & -j\end{bmatrix}, \qquad T^{-1} = \frac{1}{2j}\begin{bmatrix}j & 1\\ j & -1\end{bmatrix}$$

Therefore

$$\tilde{B} = T^{-1}B = \frac{1}{2j}\begin{bmatrix}1\\ -1\end{bmatrix}$$

$\tilde{B}$ has no zero row, so the system is completely state controllable.

41
Condition for complete state controllability in the s-plane
A necessary and sufficient condition for complete state controllability is that no cancellation occurs in the transfer function.
Example: consider the following transfer function:

$$\frac{Y(s)}{U(s)} = \frac{s + 2.5}{(s + 2.5)(s + 1)}$$

Clearly, cancellation of the factor (s + 2.5) occurs, so the system is uncontrollable. A state-space realization is

$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix} = \begin{bmatrix}0 & 1\\ -2.5 & -3.5\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} + \begin{bmatrix}1\\ -1\end{bmatrix}u$$

The controllability matrix is

$$M_c = \begin{bmatrix}B & AB\end{bmatrix} = \begin{bmatrix}1 & -1\\ -1 & 1\end{bmatrix}$$

Rank of $M_c$ = 1 ≠ n (determinant = 0). Then the above system does not satisfy the controllability condition and is uncontrollable.

42
Output controllability
In some cases it is desired to control the output rather than the state of the system.
A system is said to be output controllable if it is possible to construct an unconstrained control vector u(t) that will transfer any given initial output $y(t_0)$ to any final output $y(t_f)$ in a finite time $t_0 \le t \le t_f$.
A system described in state space is completely output controllable if

$$\operatorname{rank} M_o = \operatorname{rank}\begin{bmatrix}CB & CAB & CA^2B & \cdots & CA^{n-1}B & D\end{bmatrix} = m$$

where m is the number of rows of the C matrix and $M_o$ is called the output controllability matrix.
Example: consider the system defined by

$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix} = \begin{bmatrix}1 & 1\\ -2 & -1\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} + \begin{bmatrix}0\\ 1\end{bmatrix}u, \qquad y = \begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}$$

$$M_o = \begin{bmatrix}CB & CAB & D\end{bmatrix} = \begin{bmatrix}0 & 1 & 0\end{bmatrix}$$

The rank of $M_o$ = 1 = m, so the system is completely output controllable.

43
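The output-controllability test can likewise be scripted. A sketch for the single-output example above (m = 1):

```python
import numpy as np

A = np.array([[1., 1.], [-2., -1.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
D = np.array([[0.]])

n = A.shape[0]
blocks, AkB = [], B
for _ in range(n):
    blocks.append(C @ AkB)   # CB, CAB, ..., C A^{n-1} B
    AkB = A @ AkB
blocks.append(D)
Mo = np.hstack(blocks)
print(Mo, np.linalg.matrix_rank(Mo))  # [[0, 1, 0]], rank 1 = m
```

Here CB = 0 but CAB = 1, so one column already makes the single row nonzero and the rank condition holds.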
How to obtain a controllable canonical form
Steps to obtain the controllable canonical form:
1. The system must be completely state controllable.
2. Find the characteristic equation $|sI - A| = s^n + a_1 s^{n-1} + \cdots + a_{n-1}s + a_n$.
3. Define a transformation matrix T as $T = M_c W$, where

$$W = \begin{bmatrix}a_{n-1} & a_{n-2} & \cdots & a_1 & 1\\ a_{n-2} & a_{n-3} & \cdots & 1 & 0\\ \vdots & & & & \vdots\\ a_1 & 1 & \cdots & 0 & 0\\ 1 & 0 & \cdots & 0 & 0\end{bmatrix}$$

and the $a_i$'s are the coefficients of the characteristic equation.

44
4- Define a new state vector $\tilde{x}$ by $x = T\tilde{x}$:

$$\dot{x} = Ax + Bu$$
$$T\dot{\tilde{x}} = AT\tilde{x} + Bu$$
$$\dot{\tilde{x}} = T^{-1}AT\tilde{x} + T^{-1}Bu = \tilde{A}\tilde{x} + \tilde{B}u$$

where

$$\tilde{A} = T^{-1}AT = \begin{bmatrix}0 & 1 & 0 & \cdots & 0\\ 0 & 0 & 1 & \cdots & 0\\ \vdots & & & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1\\ -a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_1\end{bmatrix}, \qquad \tilde{B} = T^{-1}B = \begin{bmatrix}0\\ 0\\ \vdots\\ 0\\ 1\end{bmatrix}$$

45
EXAMPLE
Consider the system defined by

$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\\ \dot{x}_3\end{bmatrix} = \begin{bmatrix}1 & 0 & 1\\ 1 & 2 & 0\\ 0 & 0 & 3\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix} + \begin{bmatrix}0\\ 0\\ 1\end{bmatrix}u, \qquad y = \begin{bmatrix}1 & 1 & 0\end{bmatrix}x$$

Transform the system equations into controllable canonical form.
1. The controllability matrix is

$$M_c = \begin{bmatrix}B & AB & A^2B\end{bmatrix} = \begin{bmatrix}0 & 1 & 4\\ 0 & 0 & 1\\ 1 & 3 & 9\end{bmatrix}$$

2. The rank of the controllability matrix = 3 = n.
3. Thus the system is completely state controllable.
4. The characteristic equation is

$$|sI - A| = s^3 - 6s^2 + 11s - 6$$

so $a_n = a_3 = -6$, $a_{n-1} = a_2 = 11$, $a_{n-2} = a_1 = -6$.

46
The transformation matrix T is $T = M_c W$, where

$$W = \begin{bmatrix}a_2 & a_1 & 1\\ a_1 & 1 & 0\\ 1 & 0 & 0\end{bmatrix} = \begin{bmatrix}11 & -6 & 1\\ -6 & 1 & 0\\ 1 & 0 & 0\end{bmatrix}$$

$$T = \begin{bmatrix}0 & 1 & 4\\ 0 & 0 & 1\\ 1 & 3 & 9\end{bmatrix}\begin{bmatrix}11 & -6 & 1\\ -6 & 1 & 0\\ 1 & 0 & 0\end{bmatrix} = \begin{bmatrix}-2 & 1 & 0\\ 1 & 0 & 0\\ 2 & -3 & 1\end{bmatrix}$$

The inverse matrix is

$$T^{-1} = \begin{bmatrix}0 & 1 & 0\\ 1 & 2 & 0\\ 3 & 4 & 1\end{bmatrix}$$

Then

$$\tilde{A} = T^{-1}AT = \begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ 6 & -11 & 6\end{bmatrix}, \qquad \tilde{B} = T^{-1}B = \begin{bmatrix}0\\ 0\\ 1\end{bmatrix}$$
47
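The $T = M_c W$ transformation in the example can be verified numerically with the matrices computed above:

```python
import numpy as np

A = np.array([[1., 0., 1.], [1., 2., 0.], [0., 0., 3.]])
B = np.array([[0.], [0.], [1.]])

Mc = np.hstack([B, A @ B, A @ A @ B])
# characteristic polynomial s^3 - 6s^2 + 11s - 6 -> a1 = -6, a2 = 11
a1, a2 = -6.0, 11.0
W = np.array([[a2, a1, 1.], [a1, 1., 0.], [1., 0., 0.]])
T = Mc @ W
At = np.linalg.inv(T) @ A @ T
Bt = np.linalg.inv(T) @ B
print(np.round(At, 6))  # companion form [[0,1,0],[0,0,1],[6,-11,6]]
print(np.round(Bt, 6))  # [[0],[0],[1]]
```

The transformed pair lands exactly in controllable canonical form, with the last row of $\tilde{A}$ holding $-a_3, -a_2, -a_1$.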
Observability
If the initial state vector $x(t_0)$ can be determined from observation of y(t) measured over a finite interval of time from $t_0$ [$t_0 \le t \le t_f$], the system is said to be completely observable.
If every state variable has an effect on the output, then the system is observable. For example, if the output equation for a diagonalized system with distinct eigenvalues is

$$y = Cx = \begin{bmatrix}1 & 1 & 1\end{bmatrix}x$$

then the system is observable.

48
If any state variable has no effect upon the output, then that state variable cannot be evaluated by observing the output, and the system is called unobservable.
If the output equation is $y = \begin{bmatrix}0 & 1 & 1\end{bmatrix}x$, then the (diagonalized) system is unobservable.
The concept of observability is useful in solving the problem of reconstructing unmeasurable state variables from measurable variables in the minimum possible length of time.

49
Condition for observability
The system is completely state observable if the rank of the following [nm×n] matrix equals n:

$$M_o = \begin{bmatrix}C\\ CA\\ \vdots\\ CA^{n-1}\end{bmatrix}$$

This matrix is called the observability matrix. Equivalently (for square $M_o$), its determinant must be nonzero.
Example 1: consider the system given by

$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\\ \dot{x}_3\end{bmatrix} = \begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ -4 & -3 & -2\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix} + \begin{bmatrix}0\\ 0\\ 1\end{bmatrix}u, \qquad y = \begin{bmatrix}0 & 5 & 1\end{bmatrix}x$$

State whether the above system is observable or not.
From inspection (the zero entry in C), one might conclude that the system is unobservable, but that inspection rule is valid only for a diagonalized system with distinct eigenvalues. The observability matrix is

$$M_o = \begin{bmatrix}C\\ CA\\ CA^2\end{bmatrix} = \begin{bmatrix}0 & 5 & 1\\ -4 & -3 & 3\\ -12 & -13 & -9\end{bmatrix}$$

Rank of $M_o$ = 3 = n. Then the above system satisfies the observability condition and is observable.

50
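The observability rank test for Example 1, scripted as a short sketch:

```python
import numpy as np

A = np.array([[0., 1., 0.], [0., 0., 1.], [-4., -3., -2.]])
C = np.array([[0., 5., 1.]])

Mo = np.vstack([C, C @ A, C @ A @ A])   # rows: C, CA, CA^2
print(Mo)
print(np.linalg.matrix_rank(Mo))        # 3 -> observable
```

This confirms the hand computation: the rows C, CA, CA² are linearly independent even though C contains a zero entry.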
Example 2
Determine if the system given below is observable.
Solution: the state and output equations for the system are

$$\dot{X} = AX + Bu = \begin{bmatrix}0 & 1\\ -5 & -\tfrac{21}{4}\end{bmatrix}x + \begin{bmatrix}0\\ 1\end{bmatrix}u, \qquad y = CX = \begin{bmatrix}5 & 4\end{bmatrix}x$$

The observability matrix is

$$M_o = \begin{bmatrix}C\\ CA\end{bmatrix} = \begin{bmatrix}5 & 4\\ -20 & -16\end{bmatrix}$$

The determinant of this observability matrix equals 0. Thus the observability matrix does not have full rank, and the system is not observable.

51
Condition for complete observability in the s-plane
A necessary and sufficient condition for complete state observability is that no cancellation occurs in the transfer function.
Consider the following system:

$$\frac{C(s)}{R(s)} = \frac{s+2}{(s+2)(s+5)}$$

Show that, since the cancellation of (s + 2) occurs, the system is unobservable.

52
How to obtain an observable canonical form
Steps:
1. The system must be completely state observable.
2. Find the characteristic equation $|sI - A| = s^n + a_1 s^{n-1} + \cdots + a_{n-1}s + a_n$.
3. Define a transformation matrix Q as $Q = (W M_o)^{-1}$, where

$$W = \begin{bmatrix}a_{n-1} & a_{n-2} & \cdots & a_1 & 1\\ a_{n-2} & a_{n-3} & \cdots & 1 & 0\\ \vdots & & & & \vdots\\ a_1 & 1 & \cdots & 0 & 0\\ 1 & 0 & \cdots & 0 & 0\end{bmatrix}$$

and the $a_i$'s are the coefficients of the characteristic equation.

53
4- Define a new state vector $\tilde{x}$ by $x = Q\tilde{x}$:

$$\dot{x} = Ax + Bu$$
$$Q\dot{\tilde{x}} = AQ\tilde{x} + Bu$$
$$\dot{\tilde{x}} = Q^{-1}AQ\tilde{x} + Q^{-1}Bu = \tilde{A}\tilde{x} + \tilde{B}u$$

where

$$\tilde{A} = Q^{-1}AQ = \begin{bmatrix}0 & 0 & \cdots & 0 & -a_n\\ 1 & 0 & \cdots & 0 & -a_{n-1}\\ \vdots & & \ddots & & \vdots\\ 0 & 0 & \cdots & 1 & -a_1\end{bmatrix}, \qquad \tilde{B} = Q^{-1}B = \begin{bmatrix}b_n - a_n b_0\\ b_{n-1} - a_{n-1}b_0\\ \vdots\\ b_1 - a_1 b_0\end{bmatrix}, \qquad \tilde{C} = CQ = \begin{bmatrix}0 & 0 & \cdots & 0 & 1\end{bmatrix}$$

54
Example
Consider a system defined by

$$\dot{X} = AX + Bu = \begin{bmatrix}1 & 1\\ -4 & -3\end{bmatrix}x + \begin{bmatrix}0\\ 2\end{bmatrix}u, \qquad y = CX = \begin{bmatrix}1 & 1\end{bmatrix}x$$

The observability matrix is

$$M_o = \begin{bmatrix}C\\ CA\end{bmatrix} = \begin{bmatrix}1 & 1\\ -3 & -2\end{bmatrix}$$

The characteristic equation is $|sI - A| = s^2 + 2s + 1 = s^2 + a_1 s + a_2$, so $a_n = a_2 = 1$, $a_{n-1} = a_1 = 2$.
The transformation matrix Q is built from

$$W = \begin{bmatrix}a_1 & 1\\ 1 & 0\end{bmatrix} = \begin{bmatrix}2 & 1\\ 1 & 0\end{bmatrix}$$

$$Q = (W M_o)^{-1} = \left(\begin{bmatrix}2 & 1\\ 1 & 0\end{bmatrix}\begin{bmatrix}1 & 1\\ -3 & -2\end{bmatrix}\right)^{-1} = \begin{bmatrix}-1 & 0\\ 1 & 1\end{bmatrix}^{-1} = \begin{bmatrix}-1 & 0\\ 1 & 1\end{bmatrix}$$

$$\tilde{A} = Q^{-1}AQ = \begin{bmatrix}0 & -1\\ 1 & -2\end{bmatrix}, \qquad \tilde{B} = Q^{-1}B = \begin{bmatrix}0\\ 2\end{bmatrix}, \qquad \tilde{C} = CQ = \begin{bmatrix}0 & 1\end{bmatrix}$$
55
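The $Q = (WM_o)^{-1}$ transformation in the example can be verified numerically:

```python
import numpy as np

A = np.array([[1., 1.], [-4., -3.]])
B = np.array([[0.], [2.]])
C = np.array([[1., 1.]])

Mo = np.vstack([C, C @ A])
a1 = 2.0                        # from |sI - A| = s^2 + 2s + 1
W = np.array([[a1, 1.], [1., 0.]])
Q = np.linalg.inv(W @ Mo)
At = np.linalg.inv(Q) @ A @ Q
Ct = C @ Q
print(np.round(At, 6))  # observable companion [[0, -1], [1, -2]]
print(np.round(Ct, 6))  # [[0, 1]]
```

The last column of $\tilde{A}$ holds $-a_2, -a_1$ and $\tilde{C} = [0\ 1]$, exactly the observable canonical structure.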
In general, controllability and observability can be illustrated graphically by partitioning a system into four subsystems between the input U and the output Y:

Sco = completely controllable and completely observable.
Sc = controllable but unobservable.
So = observable but uncontrollable.
Su = uncontrollable and unobservable.

56
problems
1. For the following plants, determine the controllability. If the controllability can be determined by inspection, state that it can, and then verify your conclusions using the controllability matrix.

57
2. Determine whether the system given below is observable or not:

$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\\ \dot{x}_3\end{bmatrix} = \begin{bmatrix}2 & 1 & 3\\ 0 & 2 & 1\\ 7 & 8 & 9\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix} + \begin{bmatrix}2\\ 1\\ 2\end{bmatrix}u, \qquad y = \begin{bmatrix}4 & 6 & 8\end{bmatrix}x$$

3. Determine whether or not each of the following systems is observable.
4. Given the plant below, what relationship must exist between $C_1$ and $C_2$ in order for the system to be unobservable?

58
Definitions
Equilibrium state
If there is a state $x_e$ such that $f(x_e, t) = 0$ for all t, then this state is called an equilibrium state.
Isolated equilibrium states are equilibrium states that are isolated from each other.

Euclidean norm
$\|x - x_e\|$ is called the Euclidean norm and is defined by

$$\|x - x_e\| = [(x_1 - x_{1e})^2 + (x_2 - x_{2e})^2 + \cdots + (x_n - x_{ne})^2]^{1/2}$$

Positive definiteness
If V(x) is a scalar function, then it is said to be positive definite in a region if
1. V(x) > 0 for all non-zero states x in the region, and
2. V(0) = 0.
For example: $V(x) = x_1^2 + 2x_2^2$.

59
Negative definiteness
The scalar function V(x) is said to be negative definite if −V(x) is positive definite. For example,

$$V(x) = -x_1^2 - (2x_1 + 2x_2)^2$$

is negative definite, since $-V(x) = x_1^2 + (2x_1 + 2x_2)^2$ is positive definite.

Positive semidefiniteness
V(x) is said to be positive semidefinite if it is positive at all states in the region except at the origin and at certain other states, where it is zero:
1. V(x) ≥ 0
2. V(x) = 0 at x = 0 and at certain other states.
For example, $V(x) = (x_1 - x_2)^2$: V(x) ≥ 0 for all $x_1, x_2$, and V(x) = 0 at $x_1 = x_2 = 0$ and also wherever $x_1 = x_2$.

Negative semidefiniteness
V(x) is said to be negative semidefinite if −V(x) is positive semidefinite. For example, $V(x) = -(x_1 - x_2)^2$.

Indefiniteness
V(x) is said to be indefinite if in the region it assumes both positive and negative values. For example, $V(x) = x_1 x_2 + x_2^2$.

60
61
Quadratic form

$$V(x) = x^T P x = \sum_{i=1}^{n}\sum_{j=1}^{n} p_{ij} x_i x_j \qquad (p_{ij} = p_{ji})$$

1. V(x) is positive definite if all the successive principal minors of P are positive.
2. V(x) is positive semidefinite if det(P) = 0 (P singular) and the remaining principal minors are nonnegative.

62
Example
Show that the following quadratic form is positive definite:

$$V(x) = 10x_1^2 + 4x_2^2 + x_3^2 + 2x_1x_2 - 2x_2x_3 - 4x_1x_3$$

The quadratic form can be written as

$$V(x) = x^T P x = \sum_{i=1}^{n}\sum_{j=1}^{n} p_{ij}x_i x_j, \qquad p_{ij} = p_{ji} = \tfrac{1}{2}(\text{coefficient of } x_i x_j \text{ for } i \ne j)$$

so $p_{11} = 10$, $p_{22} = 4$, $p_{33} = 1$, $p_{12} = p_{21} = 1$, $p_{13} = p_{31} = -2$, $p_{23} = p_{32} = -1$:

$$P = \begin{bmatrix}10 & 1 & -2\\ 1 & 4 & -1\\ -2 & -1 & 1\end{bmatrix}$$

The successive principal minors are

$$10 > 0, \qquad \begin{vmatrix}10 & 1\\ 1 & 4\end{vmatrix} = 39 > 0, \qquad \det P = 17 > 0$$

so V(x) is positive definite.

63
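Sylvester's criterion for the example above can be checked with the leading principal minors, and cross-checked via the eigenvalues of P:

```python
import numpy as np

P = np.array([[10., 1., -2.],
              [1., 4., -1.],
              [-2., -1., 1.]])

minors = [np.linalg.det(P[:k, :k]) for k in (1, 2, 3)]
print(minors)                              # ≈ [10, 39, 17] -> all positive
print(np.all(np.linalg.eigvalsh(P) > 0))   # True: P is positive definite
```

Positive leading principal minors and strictly positive eigenvalues are equivalent tests for a symmetric P.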
problems

64
Physical constraints
Defining the physical constraints is the next step in the optimization process. These constraints depend on the behaviour of the system. For example, the constraints for an automobile are:

1. Initial position $x_1(t_0)$ and final position $x_1(t_f)$.

2. Initial speed $x_2(t_0)$ and final speed $x_2(t_f)$.

3. The system performance limits (e.g. fuel consumption, speed limit, etc.).

4. The constraints on the driving input (control input).

65
The performance index
What is the performance index?
A performance index is a quantitative measure of the performance of a system and is chosen so that the important system specifications can be met. The choice of performance index depends on the nature of the control problem.
The performance index used to judge the goodness of the system response should have the following three basic properties:
Reliability (so that it can be applied with confidence)
Ease of application
Selectivity

66
The best control system (optimum control system) is defined as the control system that minimizes a certain index and achieves some or all of the requirements.
The performance indices can be classified according to the system error into:

1. The integral of the error (IE):

$$IE = \int_0^T e(t)\,dt$$

2. The integral of the square of the error (ISE) (it is not sensitive to parameter variations):

$$ISE = \int_0^T e^2(t)\,dt$$

3. The integral of the absolute magnitude of the error (IAE) (more sensitive):

$$IAE = \int_0^T |e(t)|\,dt$$

4. The integral of time multiplied by the absolute value of the error (ITAE) (produces smaller overshoots and oscillations):

$$ITAE = \int_0^T t\,|e(t)|\,dt$$

The previous performance indices can be used to achieve performance based upon parameter optimization; that is, the plant parameters are fixed at values that satisfy a desired index without constraining any variable within the system. If some parameters have limits (constraints) on their maximum and/or minimum values, another approach is necessary.
This approach is defined in terms of the state x(t) (a history of state values in the interval $[t_0, t_f]$ is called a state trajectory) and the control input u(t). These must satisfy the following condition:
Both the state trajectory x(t) and the control input u(t) must be admissible.

68
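The four indices can be computed numerically for a concrete error signal. The sketch below uses $e(t) = e^{-t}$ on [0, 10] (the signal is an assumption for illustration) and a plain Riemann sum:

```python
import numpy as np

dt = 1e-4
t = np.arange(0.0, 10.0, dt)
e = np.exp(-t)                          # example error signal

IE   = np.sum(e) * dt                   # ∫ e dt      -> ≈ 1
ISE  = np.sum(e**2) * dt                # ∫ e^2 dt    -> ≈ 1/2
IAE  = np.sum(np.abs(e)) * dt           # ∫ |e| dt    -> ≈ 1
ITAE = np.sum(t * np.abs(e)) * dt       # ∫ t|e| dt   -> ≈ 1
print(IE, ISE, IAE, ITAE)
```

Comparing the indices for candidate error responses is how the "selectivity" of each criterion shows up in practice: ITAE penalizes late errors much more heavily than ISE.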
Admissible state trajectory
An admissible state trajectory is a state trajectory which satisfies the state-variable constraints during the entire time interval $[t_0, t_f]$.
If the state-variable constraints are defined, then any trajectory satisfying those constraints is called an admissible state trajectory.

69
Admissible control
An admissible control is an input control u(t) that satisfies the control constraints during the entire time interval $[t_0, t_f]$.
A given control input is admissible if it satisfies the system constraints.

70
71
The optimal control problem
The optimal control problem is to find an admissible control u* which causes the system to follow an admissible trajectory x* that minimizes the performance measure.
This means that, after the system performance is analyzed under the physical constraints, the next task is to determine a control function that minimizes the selected performance measure.
Several comments are in order:
1. It may be impossible to find a control which is admissible and causes the system to follow an admissible trajectory.
2. Even if an optimal control exists, it may not be the only one.
3. The chosen performance measure may not include all factors required for optimization, such as cost, size, etc.
4. If more than one admissible control causes the system to follow an admissible trajectory, then the one which produces the global minimum of the performance measure, rather than a local minimum, must be chosen.

72
73
1. If a system represented by a state equation of order n is to be optimally controlled, then:
2. The initial (starting) time $t_0$ and the initial state $x(t_0)$ must be specified.
3. A statement of the physical constraints must be defined.
4. A continuous performance index for the specific problem must be specified. This index can be written as

$$J = S(x(t_f), t_f) + \int_{t_0}^{t_f} L(x(t), u(t), t)\,dt$$

where S and L are real functions and must be selected so that they are positive, making J an increasing function of t within $t_0 \le t \le t_f$. If the magnitude of J is minimum over the time interval, the system performance is said to be optimal.

Quadratic performance index

74
Types of optimal control problems
(a) The terminal control problem: to minimize the deviation of the final state of a system from its desired value within a given period of time, e.g. an automatic aircraft landing system, whereby the optimal control policy focuses on minimizing errors in the state vector at the point of landing.
In this case set

$$S = [x(t_f) - r(t_f)]^T[x(t_f) - r(t_f)] \quad\text{and}\quad L = 0$$

75
For more generality, the above equation can be modified by inserting a real symmetric positive semi-definite n×n weighting matrix H to obtain

$$J = [x(t_f) - r(t_f)]^T H\,[x(t_f) - r(t_f)]$$
76
(b) The minimum-time control problem: to transfer a system from an arbitrary initial state $x(t_0)$ to a specified target in minimum time. In this case set S = 0 and L = 1, so that

$$J = \int_{t_0}^{t_f} dt = t_f - t_0$$

The automobile (car) problem is a minimum-time problem. Other examples include the interception of attacking aircraft and missiles.
77
78
For more than one control input, the general performance measure can be written as

79
80
Methods of optimization
There are three approaches in optimal control theory:
1. Calculus of variations
It is the oldest of the three; it treats only interior solutions and deals with unbounded regions.
When the choice variables are bounded, and may jump from one bound to the other in the interval considered, as shown in the following figure, another approach must be used.

81
2. The maximum (or minimum) principle of Pontryagin
This deals with bounded regions.
It leads to a nonlinear two-point boundary-value problem.
It is derived by using appropriate forms of differentiation in an infinite-dimensional space.

3. Dynamic programming
It leads to a functional equation.
It is based on the principle of optimality.
It solves the optimization problem by first obtaining the value function: the optimal value of the objective function beyond time t can be considered as a function of the state of the system at time t. This function is called the value function; it yields the value which the best possible performance from t to the end of the interval achieves.
82
Dynamic Programming
This method is based on the principle of optimality.
Principle of optimality: suppose the optimal solution for a problem passes through some intermediate point $(x_1, t_1)$; then the optimal solution to the same problem starting at $(x_1, t_1)$ must be the continuation of the same path.
83
The optimal path for a multistage decision process is given as in the following figure.
Suppose that the first decision (made at a) results in segment a-b with cost Jab, and the remaining decisions yield segment b-e at a cost Jbe. The minimum cost from a to e is then

J*ae = Jab + J*be

If a-b-e is the optimal path from a to e, then b-e is the optimal path from b to e.

84
If the path b-c-e is the optimal path from b to e, as shown in the following figure, then the new minimum cost is

J*ae = Jab + Jbce

Therefore, for this new path to be optimal, Jbce must be smaller than the cost of any other path from b to e.
85
Examples
Shortest Path Problem
The goal is to travel from A to B in the shortest time possible.
Travel times for each leg are shown in the figure.
There are 20 options to get from A to B.

86
Start at B and work backwards, invoking the principle of optimality along the way.
The first step backward can be either up (10) or down (11).
Consider the travel time from point x:
going up and then down costs 6 + 10 = 16;
going down and then up costs 7 + 11 = 18.
Clearly the best option from x is to go up, then down, with a time of 16.
From the principle of optimality, this is the best way to get to B for any path that passes through x.
Repeat the process for all other points, until finally reaching the initial point.
The shortest path is traced by moving in the directions of the arrows.

87
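The backward recursion described above can be sketched on a small assumed graph (the leg costs from x match the slide's arithmetic; the other costs are illustrative, not the figure's):

```python
# Backward dynamic programming: J*(node) = min over successors of
# (leg cost + J*(successor)), seeded with J*(goal) = 0.
graph = {            # node -> list of (successor, leg_cost); assumed example
    "A": [("x", 3), ("y", 4)],
    "x": [("u", 6), ("d", 7)],   # up costs 6, down costs 7, as on the slide
    "y": [("u", 5), ("d", 9)],
    "u": [("B", 10)],            # final step up (10)
    "d": [("B", 11)],            # final step down (11)
    "B": [],
}

J = {"B": 0}                     # optimal cost-to-go, seeded at the goal
def cost_to_go(node):
    if node not in J:
        J[node] = min(c + cost_to_go(nxt) for nxt, c in graph[node])
    return J[node]

print(cost_to_go("A"), J["x"])   # J*(x) = min(6+10, 7+11) = 16
```

Each node's cost-to-go is computed once and reused, which is exactly how the principle of optimality prunes the 20 enumerated paths down to one backward sweep.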
88
Example 2
Routing problem through a street maze (minimize the cost to travel from c to h).
Once again, start at the end (h) and work backwards.
We can get to h from e or g directly, but there are 2 paths to h from e.
Basics: J*gh = 2, and the optimal cost from e to h is given by the cheaper of the two paths.

89
The principle of optimality tells us that, since we already know the best way to h from e, we do not need to reconsider the various options from e again when starting at d; we just use the best. The optimal cost from c to h is then obtained by continuing the backward recursion.
90
91
An optimal control problem
Consider a system described by the first-order differential equation

$$\dot{x} = ax(t) + bu(t)$$

92
The state is quantized within the allowable values, and time within the range $t \in [0, 2]$ using N = 2, $\Delta t = T/N = 1$.
Approximate the continuous system by a discrete-time state equation over each sampling interval. Using an approximate calculation, the cost function becomes a sum over the stages. Take the cost weighting equal to 2, a = 0, b = 1, and $\Delta t = 1$.
93
And the cost function becomes:
The first step in the computational procedure is to find the optimal policy for the last stage of operation. The optimal control for each state value is the one which yields the trajectory having the minimum cost.
To limit the required number of calculations, and thereby make the computational procedure feasible, the allowable state and control values must be quantized.
If x(k) ∈ {0, 0.5, 1, 1.5} and u(k) ∈ {−1, −0.5, 0, 0.5, 1}, then:
94
Computational procedure for determining the optimal policy over the last stage
1. Put k = 1, select one of the quantized values of x(1), try all quantized values of u(1), and calculate the resulting trajectories. The optimal control for this stage is the one which gives the minimum cost.
2. Repeat the procedure in step 1 for the remaining quantized levels of x(1).

The cost of operation between states x(0) and x(1) [over the interval k = 0 to k = 1] depends on the value of the state x(0) and on the value of the control applied, u(0); therefore it is defined as

$$J_{01} = J_{01}(x(0), u(0))$$

The cost of operation between states x(1) and x(2) depends on the value of the state x(1) and on the value of the control applied, u(1); therefore it is defined as

$$J_{12} = J_{12}(x(1), u(1))$$

The minimum cost of the optimal last stage is defined as $J_{12}^*(x(1))$; it is a function of the state x(1). The optimal control applied at stage k is $u^*(x(1), k)$.
The minimum cost of operation over the last two stages for a specified quantized value of x(0) can therefore be defined as

$$J_{02}(x(0), u(0)) = J_{01}(x(0), u(0)) + J_{12}^*(x(1))$$

Thus, the cost of the optimal trajectory is

$$J_{02}^*(x(0)) = \min_{u(0)}\left[J_{02}(x(0), u(0))\right]$$
95
In summary:
1. For a two-stage process $x_a \to x_b \to x_c$:
2. Calculate (k = 1):

$$J_{12}^*(x(1)) = \min_{u(1)}\left[J_{12}(x(1), u(1)) + J_{22}^*(x(2))\right]$$

3. Calculate (k = 0):

$$J_{02}(x(0), u(0)) = J_{01}(x(0), u(0)) + J_{12}^*(x(1))$$

4. Finally calculate

$$J_{02}^*(x(0)) = \min_{u(0)}\left[J_{02}(x(0), u(0))\right]$$
96
In general
For an N-stage process, the general form of the recursion is

$$J_{kN}(x(k), u(k)) = J_{k,k+1}(x(k), u(k)) + J_{k+1,N}^*(x(k+1))$$

$$J_{kN}^*(x(k)) = \min_{u(k)}\left[J_{kN}(x(k), u(k))\right]$$

97
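The general recursion can be sketched for the quantized example (x ∈ {0, 0.5, 1, 1.5}, u ∈ {−1, …, 1}, with x(k+1) = x(k) + u(k) from a = 0, b = 1, Δt = 1). The stage cost below, terminal $x(N)^2$ plus $2u^2$ per stage, is an assumption patterned on the quadratic index discussed earlier, since the slide's exact cost is not reproduced here:

```python
# Backward recursion J*_k(x) = min_u [ 2u^2 + J*_{k+1}(x + u) ] over the
# quantized grid, with terminal cost J*_N(x) = x^2 (assumed cost function).
xs = [0.0, 0.5, 1.0, 1.5]                 # quantized states
us = [-1.0, -0.5, 0.0, 0.5, 1.0]          # quantized controls
N = 2                                     # number of stages (t in [0,2], dt=1)

J = {x: x**2 for x in xs}                 # terminal cost J*_N(x)
policy = []
for k in range(N - 1, -1, -1):
    Jk, uk = {}, {}
    for x in xs:
        best = None
        for u in us:
            x_next = x + u                # a = 0, b = 1, dt = 1
            if x_next in J:               # keep trajectories on the grid
                cand = 2 * u**2 + J[x_next]
                if best is None or cand < best:
                    best, uk[x] = cand, u
        Jk[x] = best
    J = Jk
    policy.insert(0, uk)

print(J[1.5], policy[0][1.5])  # optimal cost and first control from x(0) = 1.5
```

Off-grid successors are simply discarded here; a fuller implementation would interpolate $J^*_{k+1}$ between grid points instead.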
98
99
In general

100
