
Chapter 10: State Feedback and Observers

Recall the most basic controller: the feedback controller with gain K.

[Block diagram: the reference r and the fed-back output y enter a summing junction; the error e drives the gain K, whose output u drives the plant, which produces y.]

Note that we are taking the scalar output variable and feeding it back through a gain factor.
By using standard control techniques, we can choose
K for various performance criteria. We could also
make the controller into a transfer function itself,
and use root-locus, Bode plots, or another design
tool to place the closed-loop poles.

Poles can also be "placed" using the technique of

State-Variable Feedback:

$$\dot{x} = Ax + bu, \qquad y = cx + du$$

(Assume a single-input / single-output system.)

Suppose we construct "state feedback"

$$u = Kx + Ev$$

where v is the new "reference" input, E is a "feedforward" gain (usually I), x is the state vector (n×1), and K is the state-feedback gain matrix (1×n).

Then

$$\dot{x} = Ax + b(Kx + Ev) = (A + bK)x + bEv$$
Our goal is to select a gain matrix K so
that this new "system" matrix has
eigenvalues where we want them,
rather than using those of the original
A-matrix.

[Block diagram: v → E → summing junction (plus Kx from the feedback path) → u → b → integrator → x → c → summing junction (plus du) → y; the gain K taps the state x.]
There is a special case when we can easily find a gain
matrix K in order to choose any eigenvalues we want
for the "closed-loop" system: the controllable
canonical form.
 0 1  0 
 0 1  M 
   
A= 0 O  b = M 
   
 O 1  0 
 − a 0 − a1 L − an−2 − a n −1  1 

c = [ arbitrary ]
n−1
Characteristic Polynomial: φ(λ) = λn
+ an−1λ + L+ a1 + a0
λ1
Consider the controllability matrix for this system:

$$P = \begin{bmatrix} b & Ab & \cdots & A^{n-1}b \end{bmatrix} = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \\ \vdots & & \iddots & 1 & -a_{n-1} \\ \vdots & 0 & \iddots & \iddots & \vdots \\ 0 & 1 & \iddots & & \\ 1 & -a_{n-1} & & & (*) \end{bmatrix}$$

with ones on the anti-diagonal and zeros below it (so $\det P \neq 0$).

This matrix will obviously have rank n iff the b matrix has a non-zero value as its final element (hence the name "controllable canonical form").

One may ask: How do we know a controllable system is still controllable after state feedback?

Theorem: Let the closed-loop controllability matrix be denoted (assuming E = I, the reference-scaling matrix):

$$P_{CL} = \begin{bmatrix} b & (A+bK)b & \cdots & (A+bK)^{n-1}b \end{bmatrix}$$

Then rank(P_{CL}) = rank(P). (So the closed-loop system is controllable iff the open-loop system is controllable.)

PROOF: It can be shown that

$$\begin{bmatrix} b & (A+bK)b & (A+bK)^2b & \cdots & (A+bK)^{n-1}b \end{bmatrix} = \begin{bmatrix} b & Ab & \cdots & A^{n-1}b \end{bmatrix}\begin{bmatrix} I & Kb & K(A+bK)b & \cdots & K(A+bK)^{n-2}b \\ 0 & I & Kb & & \vdots \\ \vdots & & \ddots & \ddots & \\ & & & I & Kb \\ 0 & \cdots & & 0 & I \end{bmatrix}$$

Because this matrix is triangular with ones on the diagonal, it is nonsingular. Because it is nonsingular, it does not change the rank of the matrix it multiplies, which is just the original controllability matrix. ∎
NOTE: This result does NOT extend to observability! That is, state feedback might make a previously observable system unobservable!

For simplicity, let's use a zero input (i.e., v = 0) and compute what happens with feedback matrix K. Let

$$K = \begin{bmatrix} k_0 & k_1 & \cdots & k_{n-1} \end{bmatrix}$$

so for the controllable canonical form:

$$bK = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}\begin{bmatrix} k_0 & k_1 & \cdots & k_{n-1} \end{bmatrix} = \begin{bmatrix} 0 & \cdots & 0 \\ \vdots & & \vdots \\ 0 & \cdots & 0 \\ k_0 & \cdots & k_{n-1} \end{bmatrix}$$

Giving:

$$A + bK = \begin{bmatrix} 0 & 1 & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & 1 \\ -a_0 + k_0 & -a_1 + k_1 & \cdots & -a_{n-1} + k_{n-1} \end{bmatrix}$$

So the characteristic equation of this closed-loop matrix is:

$$\phi(\lambda) = \lambda^n + (a_{n-1} - k_{n-1})\lambda^{n-1} + \cdots + (a_0 - k_0)$$

It is clear that because we are allowed to choose the k's arbitrarily, we can assign, or "place," all of the eigenvalues wherever we want them (provided that any complex ones occur in complex-conjugate pairs).

Because any controllable system can be transformed into controllable canonical form, we can make the statement:

controllability ⟺ the ability to place the poles anywhere through state feedback

This gives us a technique to stabilize unstable systems and do much more.

Example: Consider the system $\dot{x} = Ax + bu$, where

$$A = \begin{bmatrix} 1 & 3 \\ 4 & 2 \end{bmatrix} \qquad b = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$

The eigenvalues of the A-matrix are -2 and +5, so the system is initially unstable. We first examine the controllability of the system to see if there's any hope of stabilizing it:

$$P = \begin{bmatrix} b & Ab \end{bmatrix} = \begin{bmatrix} 1 & 4 \\ 1 & 6 \end{bmatrix}$$

This matrix has full rank, so the system is controllable.
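Here is a minimal sketch of these two checks in base MATLAB (no toolboxes needed):

A = [1 3; 4 2];            % open-loop A-matrix
b = [1; 1];
eig(A)                     % returns -2 and 5: the open loop is unstable
P = [b, A*b]               % controllability matrix, [1 4; 1 6]
rank(P)                    % returns 2 = n, so the system is controllable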


Suppose we decide we would like the closed-loop poles (eigenvalues) of the system to be at -5 and -6. We compute the result of state feedback:

$$A + bK = \begin{bmatrix} 1 & 3 \\ 4 & 2 \end{bmatrix} + \begin{bmatrix} 1 \\ 1 \end{bmatrix}\begin{bmatrix} k_0 & k_1 \end{bmatrix} = \begin{bmatrix} 1 + k_0 & 3 + k_1 \\ 4 + k_0 & 2 + k_1 \end{bmatrix}$$

So the characteristic equation is:

$$\phi(\lambda) = \begin{vmatrix} \lambda - 1 - k_0 & -3 - k_1 \\ -4 - k_0 & \lambda - 2 - k_1 \end{vmatrix} = \text{A BIG MESS TO DEAL WITH!!}$$
To make the algebra easier, we compute the controllable canonical form. The inverse of the controllability matrix is:

$$P^{-1} = \begin{bmatrix} 3 & -2 \\ -0.5 & 0.5 \end{bmatrix}$$

So if we compute our similarity transformation as we did in Chapter 8,

$$U^{-1} = \begin{bmatrix} -0.5 & 0.5 \\ 1.5 & -0.5 \end{bmatrix}$$

Then

$$\bar{A} = U^{-1}AU = \begin{bmatrix} 0 & 1 \\ 10 & 3 \end{bmatrix} \quad \text{and} \quad \bar{b} = U^{-1}b = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

Now use this form to compute the state feedback:

$$\bar{A} + \bar{b}\bar{K} = \begin{bmatrix} 0 & 1 \\ 10 & 3 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ \bar{k}_0 & \bar{k}_1 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 10 + \bar{k}_0 & 3 + \bar{k}_1 \end{bmatrix}$$

For which the characteristic equation is:

$$\phi(\lambda) = \lambda^2 + (-\bar{k}_1 - 3)\lambda + (-\bar{k}_0 - 10)$$

If we desire these poles to be at -5 and -6, then

$$\phi_{des}(\lambda) = \lambda^2 + 11\lambda + 30 = (\lambda + 5)(\lambda + 6)$$

So by inspection we get:

$$\bar{K} = \begin{bmatrix} \bar{k}_0 & \bar{k}_1 \end{bmatrix} = \begin{bmatrix} -40 & -14 \end{bmatrix}$$

The state feedback for the controllable canonical form is therefore $u = \bar{K}\bar{x}$. But because of the similarity transformation we performed, $\bar{x} = U^{-1}x$, so

$$u = \bar{K}\bar{x} = \bar{K}U^{-1}x = Kx \qquad \text{where } K = \bar{K}U^{-1}$$

So now, because we are implementing state feedback in our original system, we use feedback gain $K = \bar{K}U^{-1}$:

$$u = Kx = \begin{bmatrix} -40 & -14 \end{bmatrix}\begin{bmatrix} -0.5 & 0.5 \\ 1.5 & -0.5 \end{bmatrix}x = \begin{bmatrix} -1 & -13 \end{bmatrix}x$$
To check:

$$\sigma(A + bK) = \sigma(A_{CL}) = \sigma\left(\begin{bmatrix} 0 & -10 \\ 3 & -11 \end{bmatrix}\right) = \{-5, -6\}$$

So it is very easy to compute the proper state feedback if the system happens to be in controllable canonical form. Sometimes it is inconvenient to do this transformation (and then back again), so we have a famous formula: Ackermann's Formula.

First we point out that there is an easier way to compute the similarity transformation matrix when we know the characteristic equation. We'll use this result in the derivation of Ackermann's Formula.

Note that

$$\bar{A} = U^{-1}AU \quad \text{and} \quad \bar{b} = U^{-1}b \qquad \text{(because } x = U\bar{x})$$

Then the controllability matrix for the controllable form is:

$$\bar{P} = \begin{bmatrix} \bar{b} & \bar{A}\bar{b} & \cdots & \bar{A}^{n-1}\bar{b} \end{bmatrix} = \begin{bmatrix} U^{-1}b & U^{-1}Ab & \cdots & U^{-1}A^{n-1}b \end{bmatrix} = U^{-1}P$$

So

$$U = P\bar{P}^{-1}$$

the transformation is related to the two systems' P matrices.
Now for the derivation of Ackermann's formula. When we applied state feedback to the controllable canonical form, we got a closed-loop matrix of the form:

$$A_{CL} = \bar{A} + \bar{b}\bar{K} = \begin{bmatrix} 0 & 1 & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & 1 \\ -a_0 + \bar{k}_0 & -a_1 + \bar{k}_1 & \cdots & -a_{n-1} + \bar{k}_{n-1} \end{bmatrix}$$

So, for $A_{CL}$:

$$\phi(\lambda) = \lambda^n + (a_{n-1} - \bar{k}_{n-1})\lambda^{n-1} + \cdots + (a_1 - \bar{k}_1)\lambda + (a_0 - \bar{k}_0)$$
Denote the desired closed-loop characteristic polynomial as:

$$\phi_{des}(\lambda) = \lambda^n + a_{n-1}^{des}\lambda^{n-1} + \cdots + a_1^{des}\lambda + a_0^{des}$$

Equating this with the characteristic polynomial of $A_{CL}$ above, we have the equalities (hold these equations for a minute):

$$a_{n-1}^{des} - a_{n-1} = -\bar{k}_{n-1}, \quad \ldots, \quad a_1^{des} - a_1 = -\bar{k}_1, \quad a_0^{des} - a_0 = -\bar{k}_0$$

Suppose we plug $\bar{A}$ (the controllable canonical form) into $\phi_{des}$:

$$\phi_{des}(\bar{A}) = \bar{A}^n + a_{n-1}^{des}\bar{A}^{n-1} + \cdots + a_1^{des}\bar{A} + a_0^{des}I$$

We know from the Cayley-Hamilton Theorem that

$$\bar{A}^n + a_{n-1}\bar{A}^{n-1} + \cdots + a_1\bar{A} + a_0 I = 0$$

Solve this for $\bar{A}^n$ and substitute into $\phi_{des}(\bar{A})$:

$$\phi_{des}(\bar{A}) = (a_{n-1}^{des} - a_{n-1})\bar{A}^{n-1} + \cdots + (a_1^{des} - a_1)\bar{A} + (a_0^{des} - a_0)I$$

Now here's the trick. Let $e_i$ denote the $i$-th unit vector, i.e.,

$$e_1 = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix}^T, \quad e_2 = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \end{bmatrix}^T, \quad \text{etc.}$$

Now notice the following property of the controllable canonical form:

$$e_1^T\bar{A} = e_2^T, \quad \ldots, \quad e_i^T\bar{A} = e_{i+1}^T, \quad \ldots, \quad e_{n-1}^T\bar{A} = e_n^T$$

$$e_n^T\bar{A} = [\text{last row of }\bar{A}] \quad \text{(but this is unimportant)}$$

Multiply these relations again by $\bar{A}$ from the right:

$$e_1^T\bar{A}^2 = e_2^T\bar{A} = e_3^T, \quad \text{etc., up to} \quad e_1^T\bar{A}^{n-1} = e_n^T$$

and use this relationship below.

So now if we multiply $\phi_{des}(\bar{A})$ by $e_1^T$ we get:

$$\begin{aligned} e_1^T\phi_{des}(\bar{A}) &= (a_{n-1}^{des} - a_{n-1})e_1^T\bar{A}^{n-1} + \cdots + (a_1^{des} - a_1)e_1^T\bar{A} + (a_0^{des} - a_0)e_1^T \\ &= (a_{n-1}^{des} - a_{n-1})e_n^T + \cdots + (a_1^{des} - a_1)e_2^T + (a_0^{des} - a_0)e_1^T \\ &= -\bar{k}_{n-1}e_n^T - \cdots - \bar{k}_1 e_2^T - \bar{k}_0 e_1^T \qquad \text{(using the held equations for }\bar{K}) \\ &= \begin{bmatrix} -\bar{k}_0 & -\bar{k}_1 & \cdots & -\bar{k}_{n-1} \end{bmatrix} \\ &= -\bar{K} = -KU \qquad \text{(by definition, since } K = \bar{K}U^{-1}) \end{aligned}$$

This formula is still not useful because it requires knowledge of the similarity transformation matrix U and the controllable form $\bar{A}$. Now manipulate this equation:

$$e_1^T\phi_{des}(\bar{A}) = -KU$$

Recall that $\bar{A} = U^{-1}AU$, and note that $\phi_{des}(U^{-1}AU) = U^{-1}\phi_{des}(A)U$ (see page 215). Substituting:

$$e_1^T U^{-1}\phi_{des}(A)U = -KU$$

Postmultiply by $U^{-1}$:

$$e_1^T U^{-1}\phi_{des}(A) = -K$$

We have already shown that $U = P\bar{P}^{-1}$, so $U^{-1} = \bar{P}P^{-1}$ and

$$K = -e_1^T\bar{P}P^{-1}\phi_{des}(A)$$


Now consider the factor $e_1^T\bar{P}$:

$$e_1^T\bar{P} = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix}\begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \\ \vdots & & \iddots & 1 & * \\ 0 & 1 & \iddots & & \vdots \\ 1 & * & \cdots & & * \end{bmatrix} = \begin{bmatrix} 0 & \cdots & 0 & 1 \end{bmatrix} = e_n^T$$

So finally, we have a formula ready for calculations:

$$K = -e_n^T P^{-1}\phi_{des}(A) \qquad \textbf{(Ackermann's Formula)}$$

Note that for this formula, we need only the open-loop A-matrix, the controllability matrix P, and the desired characteristic polynomial.
However, it does require the inverse of a matrix, which is not always advisable for numerical-accuracy reasons, especially when the system is "weakly" controllable. In fact, MATLAB offers the ACKER command, but advises against ever using it!
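For illustration, a minimal sketch of Ackermann's formula applied to the running example in base MATLAB (polyvalm evaluates a polynomial at a matrix argument; the toolbox equivalent is acker, up to MATLAB's u = -Kx sign convention):

A = [1 3; 4 2];  b = [1; 1];  n = 2;
P = [b, A*b];                              % controllability matrix
phides = polyvalm(conv([1 5],[1 6]), A);   % phi_des(A) for poles {-5,-6}
en = zeros(n,1);  en(n) = 1;               % n-th unit vector
K = -en' / P * phides                      % -en'*inv(P)*phi_des(A) = [-1 -13]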

There is a way to make it work better, by using numerically robust methods for solving simultaneous equations rather than computing matrix inverses. Starting from

$$K = -e_n^T P^{-1}\phi_{des}(A)$$

define $f^T = -e_n^T P^{-1}$. Then $f^T P = -e_n^T$, or

$$P^T f = -e_n$$

If we solve this as a set of linear simultaneous equations (which is done in MATLAB without the use of inverses), then we can use the solution to compute the feedback K:

$$K = f^T\phi_{des}(A)$$
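A sketch of this inverse-free variant, reusing P, en, and phides from the sketch above (MATLAB's backslash solves the linear system by factorization, without forming an inverse):

f = P' \ (-en);        % solve P'*f = -en
K = f' * phides        % again returns [-1 -13]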
Note: One can show that the numerator of the transfer function is the same before and after state feedback. This implies that:

The zeros of a system are not affected by state feedback.

This also helps explain why state feedback might affect the observability of a system: suppose state feedback were used to place a pole of a system at the same place as a zero. Then these modes would not appear in the output, through pole-zero cancellation (and by the definition of a zero).
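A quick numeric illustration of zero invariance, continuing in the same workspace as the earlier sketches and using a hypothetical output map c = [1 0], d = 0 chosen only for this demonstration (ss2tf is a base MATLAB function):

c = [1 0];  d = 0;                        % assumed output map, for illustration
[num0, den0] = ss2tf(A, b, c, d);         % open loop: numerator ~ (s + 1)
[num1, den1] = ss2tf(A + b*K, b, c, d);   % closed loop with K = [-1 -13]
[num0; num1]                              % identical numerators: zeros unchanged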
Full State Observers (Estimators):
State feedback is relatively easy, but we have assumed
throughout that we have access to all the signals in
the state vector x in order to construct the feedback
controller. Actually, we will usually only have
physical access to the input and output of a system,
with the state-variables being "internal."

[Block diagram of the plant alone: u → b → integrator → x → c → summing junction (plus du) → y; the state x is internal.]
To get around this problem, we can show how to
build another system, called an observer (or
estimator) that re-constructs the state vector from
the system input and output, and allows us to use
its output for state feedback.
Begin with the system

$$\dot{x} = Ax + Bu, \qquad y = Cx + Du$$

(this procedure holds for multivariable systems), and assume we know the matrices {A, B, C, D}.

First let's consider a naive approach. If we know the initial condition on the state vector and the A and B matrices, we can construct a new system giving an "estimated" state vector $\hat{x}(t)$:

$$\dot{\hat{x}} = A\hat{x} + Bu, \qquad \hat{y} = C\hat{x} + Du, \qquad \hat{x}(0) = x(0)$$

If the world were perfect and we expected no external disturbances or other sources of error, this might work sufficiently well. However, it is an open-loop estimator, so in the presence of uncertainties or imperfections (especially if the original system happens to be unstable), the estimated value $\hat{x}(t)$ will eventually diverge from the true value of $x(t)$.
Instead we seek a closed-loop observer. We construct a new dynamic system, similar in construction to the original system, but whose states are the estimates of $x(t)$ and which has two "inputs": the original system's u and the error between the true output y and the estimated output $\hat{y}$. Inclusion of $y - \hat{y}$ as an input "closes the loop." It is therefore "driven" by the output error. Build this:

$$\dot{\hat{x}} = A\hat{x} + Bu + L(y - \hat{y}), \qquad \hat{y} = C\hat{x} + Du$$

For analysis, substitute for $\hat{y}$:

$$\dot{\hat{x}} = A\hat{x} + Bu + L(y - (C\hat{x} + Du)) = (A - LC)\hat{x} + (B - LD)u + Ly$$
Now consider the "error" in the observation that results. Define $\tilde{x} \triangleq x - \hat{x}$, with

$$\dot{x} = Ax + Bu, \quad y = Cx + Du, \qquad \dot{\hat{x}} = (A - LC)\hat{x} + (B - LD)u + Ly$$

Then

$$\begin{aligned} \dot{\tilde{x}} &= \dot{x} - \dot{\hat{x}} \\ &= Ax + Bu - (A - LC)\hat{x} - (B - LD)u - L(Cx + Du) \\ &= A(x - \hat{x}) - LC(x - \hat{x}) \\ &= (A - LC)(x - \hat{x}) = (A - LC)\tilde{x} \qquad \text{(closed-loop error system)} \end{aligned}$$
Because ~ x represents the error signal, we would like
this set of equations to be asymptotically stable, so
that the eigenvalues of (A-LC) are in the left half-
plane. We can place them there by choosing an
appropriate L matrix as if it were a state-feedback
gain.

Note that L does not appear in this expression exactly as K does in (A + BK), so the formulas don't hold exactly. We can "fix" this by realizing that the eigenvalues of a matrix are always the same as the eigenvalues of its transpose. We therefore use pole-placement methods to compute L from the pole-placement problem for

$$A^T - C^T L^T$$

To do this, we must have the pair $\{A^T, C^T\}$ controllable, which is the same as saying $\{A, C\}$ is observable!
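A sketch of this duality trick in MATLAB (place, Control System Toolbox), continuing the running example with the hypothetical c = [1 0] from before and an assumed observer pole set:

pobs = [-12 -15];            % assumed observer poles, left of {-5,-6}
L = place(A', c', pobs)';    % design for the pair (A',c'), transpose back
eig(A - L*c)                 % check: returns -12 and -15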
Now that the error system is stable,

$$\tilde{x}(t) \to 0 \text{ as } t \to \infty \qquad \text{so} \qquad \hat{x}(t) \to x(t) \text{ as } t \to \infty$$

We then use these estimated state variables to construct the state-feedback control law as before:

$$u = K\hat{x} + v \qquad \text{(assume } E = I)$$

We usually place the poles of the closed-loop system according to a performance criterion, but where should one place the poles of the estimator system? They should be placed "farther left" than the poles of the system dynamics. That is, the estimator dynamics should be faster than the plant dynamics. That way, the true state variables do not "outrun" the observer variables that are trying to estimate them!
Block Diagram: [the plant: v plus $K\hat{x}$ forms u → B → integrator → x → C → y, with D feedthrough and A feedback; below it, an identical copy (the observer) is driven by u and by $L(y - \hat{y})$ and produces $\hat{x}$, which feeds the gain K.]
How would you simulate the entire system, controller ($u = K\hat{x} + v$), observer and all? Create an "augmented-state" system. First notice the "observer dynamics" (substitute for u(t) and y(t), then simplify):

$$\begin{aligned} \dot{\hat{x}} &= (A - LC)\hat{x} + (B - LD)u + Ly \\ &= (A - LC)\hat{x} + (B - LD)(K\hat{x} + v) + L(Cx + D(K\hat{x} + v)) \\ &= [A - LC + BK]\hat{x} + LCx + Bv \end{aligned}$$

Together with the plant (substitute u(t)): $\dot{x} = Ax + B(K\hat{x} + v)$. So

$$\begin{bmatrix} \dot{x} \\ \dot{\hat{x}} \end{bmatrix} = \begin{bmatrix} A & BK \\ LC & A - LC + BK \end{bmatrix}\begin{bmatrix} x \\ \hat{x} \end{bmatrix} + \begin{bmatrix} B \\ B \end{bmatrix}v$$

(the input term is necessary for the external input). Treat these as the "new" system matrices.
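A sketch of how one might build and simulate this augmented system for the running example (ss and lsim are Control System Toolbox functions; A, b, c, K, L as in the earlier sketches, with d = 0):

Aaug = [A,        b*K;
        L*c,  A - L*c + b*K];
Baug = [b; b];
Caug = [c, zeros(1,2)];          % measure the true plant output
sys  = ss(Aaug, Baug, Caug, 0);
t = 0:0.01:3;  v = zeros(size(t));
x0 = [1; 0; 0; 0];               % plant starts at [1;0]; observer at zero
[y, t, xaug] = lsim(sys, v, t, x0);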
How do we know that attaching the observer onto the plant does not change the eigenvalues of the closed-loop system? That is, how do we know that we can choose the gain matrices K and L independently of one another? Consider the plant and error dynamics together (let v = 0):

$$\begin{aligned} \dot{x} &= Ax + BK\hat{x} \\ &= Ax + BK(x - \tilde{x}) \qquad \text{(from } \tilde{x} = x - \hat{x}) \\ &= (A + BK)x - BK\tilde{x} \end{aligned}$$

and, from before, $\dot{\tilde{x}} = (A - LC)\tilde{x}$. So together:

$$\begin{bmatrix} \dot{x} \\ \dot{\tilde{x}} \end{bmatrix} = \begin{bmatrix} A + BK & -BK \\ 0 & A - LC \end{bmatrix}\begin{bmatrix} x \\ \tilde{x} \end{bmatrix}$$

When we try to find the eigenvalues of this system:

$$\det\left(\begin{bmatrix} A + BK & -BK \\ 0 & A - LC \end{bmatrix} - \lambda I\right) = \begin{vmatrix} A + BK - \lambda I & -BK \\ 0 & A - LC - \lambda I \end{vmatrix} = \left|A + BK - \lambda I\right|\,\left|A - LC - \lambda I\right|$$
So we see that part of the eigenvalues are determined by the choice of K alone (the closed-loop plant eigenvalues), and the others are determined by L alone (the observer eigenvalues).

This important result is called the separation principle.
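A quick numeric check of the separation principle for the running example (the matrix is block-triangular, so its spectrum is the union of the two diagonal blocks' spectra):

Asep = [A + b*K,      -b*K;
        zeros(2),  A - L*c];
eig(Asep)    % returns {-5,-6} (from K) together with {-12,-15} (from L)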


Reduced-Order Observers:

We often have available part of the state vector for feedback; for example, if the output equation is

$$y = \begin{bmatrix} I & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$

either naturally or through a similarity transformation to make it that way. If this is the case, then we can save time and money by using a reduced-order observer that just estimates the missing part of the state vector, $x_2$. Unfortunately, it will be more complex to derive.
Step one: How to transform the system so that the C-matrix has the form $[I \;\; 0]$. First augment the C-matrix to create a nonsingular matrix:

$$W = \begin{bmatrix} C \\ R \end{bmatrix} \qquad \begin{array}{l} \leftarrow \text{all } q \text{ rows from the original C-matrix} \\ \leftarrow n - q \text{ more rows, lin. indep. of the first } q \text{ rows} \end{array}$$

Now compute $V = W^{-1} = [V_1 \;\; V_2]$ so that

$$WV = I = \begin{bmatrix} I_{q\times q} & 0 \\ 0 & I_{(n-q)\times(n-q)} \end{bmatrix} = \begin{bmatrix} C \\ R \end{bmatrix}\begin{bmatrix} V_1 & V_2 \end{bmatrix}$$

Now if we use V as a similarity transformation ($x = V\bar{x}$):

$$\dot{\bar{x}} = V^{-1}AV\bar{x} + V^{-1}Bu$$
$$y = CV\bar{x} + Du = \begin{bmatrix} CV_1 & CV_2 \end{bmatrix}\bar{x} + Du = \begin{bmatrix} I_{q\times q} & 0 \end{bmatrix}\bar{x} + Du = \bar{x}_1 + Du$$

where $\bar{x}_1$ is clearly the vector of the first q elements of the transformed state vector $\bar{x}$. Partition this transformed system into the form:

$$\begin{bmatrix} \dot{\bar{x}}_1 \\ \dot{\bar{x}}_2 \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}\begin{bmatrix} \bar{x}_1 \\ \bar{x}_2 \end{bmatrix} + \begin{bmatrix} B_1 \\ B_2 \end{bmatrix}u$$

$$y = \begin{bmatrix} I_{q\times q} & 0 \end{bmatrix}\begin{bmatrix} \bar{x}_1 \\ \bar{x}_2 \end{bmatrix} + Du$$

So we have $\bar{x}_1$ and we would like to observe $\bar{x}_2$.

Step two: Strategy: find equations for $\bar{x}_2$ alone, i.e., considering $\bar{x}_1$ to be a known signal:

$$A_{12}\bar{x}_2 = \dot{\bar{x}}_1 - A_{11}\bar{x}_1 - B_1 u \qquad \text{(a new "output")}$$
$$\dot{\bar{x}}_2 = A_{22}\bar{x}_2 + A_{21}\bar{x}_1 + B_2 u$$

Define some new variables, functions of known (available) signals:

$$\bar{u} \triangleq A_{21}\bar{x}_1 + B_2 u$$
$$\bar{y} \triangleq \dot{\bar{x}}_1 - A_{11}\bar{x}_1 - B_1 u \qquad (\text{more about } \dot{\bar{x}}_1 \text{ later})$$

Then

$$\dot{\bar{x}}_2 = A_{22}\bar{x}_2 + \bar{u}, \qquad \bar{y} = A_{12}\bar{x}_2$$

Treat these as a new set of state and output equations.

One can show that if the original system {A, C} is observable, then this new system $\{A_{22}, A_{12}\}$ is also observable.

Step three: Simply design a full-order observer for this reduced set of equations:

$$\dot{\hat{x}}_2 = A_{22}\hat{x}_2 + \bar{u} + \bar{L}(\bar{y} - \hat{\bar{y}}) = (A_{22} - \bar{L}A_{12})\hat{x}_2 + \bar{u} + \bar{L}\bar{y}$$

where the eigenvalues of $(A_{22} - \bar{L}A_{12})$ are placed to the left of the eigenvalues of $A_{22}$ through proper choice of the matrix $\bar{L}$.
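By duality again, this gain can be computed just like a full-order observer gain. A sketch, assuming the partitioned blocks A22 and A12 are already in the workspace and pred is an assumed pole set with length(pred) = n - q:

pred = [-20 -25];                  % assumed reduced-observer poles
Lbar = place(A22', A12', pred)';   % eig(A22 - Lbar*A12) = pred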
However, recall that we are considering $\bar{x}_1$ to be known, and that $\bar{y}$ depends on $\dot{\bar{x}}_1$, a pure time derivative, which is difficult to compute. To get around this problem, perform the following change of variable:

$$z = \hat{x}_2 - \bar{L}\bar{x}_1 \qquad (\hat{x}_2 = z + \bar{L}\bar{x}_1), \qquad \dot{z} = \dot{\hat{x}}_2 - \bar{L}\dot{\bar{x}}_1$$

Then, substituting for $\dot{\hat{x}}_2$ and then for $\bar{u}$ and $\bar{y}$:

$$\begin{aligned} \dot{z} &= (A_{22} - \bar{L}A_{12})(z + \bar{L}\bar{x}_1) + \bar{u} + \bar{L}\bar{y} - \bar{L}\dot{\bar{x}}_1 \\ &= (A_{22} - \bar{L}A_{12})(z + \bar{L}\bar{x}_1) + (A_{21}\bar{x}_1 + B_2u) + \bar{L}(\dot{\bar{x}}_1 - A_{11}\bar{x}_1 - B_1u) - \bar{L}\dot{\bar{x}}_1 \\ &= (A_{22} - \bar{L}A_{12})z + (A_{22} - \bar{L}A_{12})\bar{L}\bar{x}_1 + (A_{21} - \bar{L}A_{11})\bar{x}_1 + (B_2 - \bar{L}B_1)u \\ &= (A_{22} - \bar{L}A_{12})z + \left[(A_{22} - \bar{L}A_{12})\bar{L} + (A_{21} - \bar{L}A_{11})\right]\bar{x}_1 + (B_2 - \bar{L}B_1)u \end{aligned}$$

(the derivative terms cancel; then factor out $\bar{x}_1$). This is the second equation to simulate (with the $\bar{x}_1$ equation) in order to find an estimate of $\bar{x}_2$.

Step four: Now because $\hat{x}_2 = z + \bar{L}\bar{x}_1$ is an estimate of $\bar{x}_2$, we can define an error signal $e = \bar{x}_2 - (z + \bar{L}\bar{x}_1)$. So, substituting for $\dot{\bar{x}}_2$, $\dot{z}$, and $\dot{\bar{x}}_1$:

$$\begin{aligned} \dot{e} &= \dot{\bar{x}}_2 - \dot{z} - \bar{L}\dot{\bar{x}}_1 \\ &= A_{21}\bar{x}_1 + A_{22}\bar{x}_2 + B_2u - (A_{22} - \bar{L}A_{12})(z + \bar{L}\bar{x}_1) - (A_{21} - \bar{L}A_{11})\bar{x}_1 - (B_2 - \bar{L}B_1)u \\ &\qquad - \bar{L}A_{11}\bar{x}_1 - \bar{L}A_{12}\bar{x}_2 - \bar{L}B_1u \\ &= (A_{22} - \bar{L}A_{12})(\bar{x}_2 - z - \bar{L}\bar{x}_1) \\ &= (A_{22} - \bar{L}A_{12})e \end{aligned}$$

The eigenvalues of $A_{22} - \bar{L}A_{12}$ are placed to adjust the rate at which $z + \bar{L}\bar{x}_1$ approaches $\bar{x}_2$.

Step five: Now the estimate for the whole (transformed) state is:

$$\hat{\bar{x}} = \begin{bmatrix} \hat{x}_1 \\ \hat{x}_2 \end{bmatrix} = \begin{bmatrix} y - Du \\ z + \bar{L}\bar{x}_1 \end{bmatrix}$$

To undo the original similarity transformation, recall that $\bar{x} = Wx$ ($x = V\bar{x}$), so

$$\hat{x} = V\hat{\bar{x}} = \begin{bmatrix} V_1 & V_2 \end{bmatrix}\begin{bmatrix} y - Du \\ z + \bar{L}\bar{x}_1 \end{bmatrix}$$
REMARKS:
Except for the similarity transformation, the reduced-order observer requires fewer computations and integrations.
In the reduced-order observer, the output variable y appears directly as an estimate for part of the state. This means that sensor noise appears in the state variable that is fed back. In the full-order observer, the state-variable estimates are all the result of at least one integration, which tends to smooth out any noise.
On the other hand, the reduced-order observer can often have smaller transients, due to part of the estimate being "perfect".
How would one simulate the whole system?

First define an "augmented" state vector as

$$\xi = \begin{bmatrix} x \\ z \end{bmatrix} \qquad \dot{\xi} = \begin{bmatrix} \dot{x} \\ \dot{z} \end{bmatrix} = \cdots$$

which will be a complicated expression! Then simulate the system using the output equation

$$y = \begin{bmatrix} C & 0 \end{bmatrix}\xi$$
To compute estimates of the state variables,

$$\hat{x} = V\begin{bmatrix} y - Du \\ z + \bar{L}\bar{x}_1 \end{bmatrix}$$

And then when you want to apply state feedback, use $u = K\hat{x}$, or $u = K\hat{x} + v$ when a non-zero reference input signal v is present.

Above all, do this in a well-documented script file, so that the project will be reusable and debuggable!
Some further details: We have the original system

$$\dot{x} = Ax + Bu$$

and

$$\dot{z} = (A_{22} - \bar{L}A_{12})z + \left[(A_{22} - \bar{L}A_{12})\bar{L} + (A_{21} - \bar{L}A_{11})\right]\bar{x}_1 + (B_2 - \bar{L}B_1)u$$

but (taking D = 0 from here on)

$$\bar{x}_1 = y = \bar{C}\bar{x} = (CV)(V^{-1}x) = Cx$$

so

$$\dot{z} = (A_{22} - \bar{L}A_{12})z + \left[(A_{22} - \bar{L}A_{12})\bar{L} + (A_{21} - \bar{L}A_{11})\right]Cx + (B_2 - \bar{L}B_1)u$$

These are in terms of x and z.
Now apply the feedback $u = K\hat{x} + v$:

$$\begin{aligned} \dot{x} &= Ax + Bu = Ax + B(K\hat{x} + v) = Ax + BK\hat{x} + Bv \\ &= Ax + BK\begin{bmatrix} V_1 & V_2 \end{bmatrix}\begin{bmatrix} Cx \\ z + \bar{L}Cx \end{bmatrix} + Bv \\ &= Ax + BK\left[V_1Cx + V_2\bar{L}Cx + V_2z\right] + Bv \\ &= \left[A + BK(V_1 + V_2\bar{L})C\right]x + BKV_2z + Bv \end{aligned}$$

This is one state equation; the other is for $\dot{z}$. Substituting $u = K\hat{x} + v$:

$$\begin{aligned} \dot{z} &= (A_{22} - \bar{L}A_{12})z + \left[(A_{22} - \bar{L}A_{12})\bar{L} + (A_{21} - \bar{L}A_{11})\right]Cx \\ &\qquad + (B_2 - \bar{L}B_1)\left(K\left[V_1Cx + V_2\bar{L}Cx + V_2z\right] + v\right) \\ &= \left[(A_{22} - \bar{L}A_{12}) + (B_2 - \bar{L}B_1)KV_2\right]z \\ &\qquad + \left[(A_{22} - \bar{L}A_{12})\bar{L} + (A_{21} - \bar{L}A_{11}) + (B_2 - \bar{L}B_1)K(V_1 + V_2\bar{L})\right]Cx + (B_2 - \bar{L}B_1)v \end{aligned}$$

This is the second state equation. The augmented system is (n + (n - q))-dimensional. Just simulate the two augmented equations in MATLAB; the result will be an ntime × (n + (n - q)) matrix (ntime = number of time points). The first n columns will be the state variables of the controlled plant. The last n - q columns will be the observed auxiliary state variables z, not the actual observed plant state variables themselves.
Now if you want to get MATLAB to show you the actual observed state variables, we must compute:

$$\hat{x} = V\begin{bmatrix} Cx \\ z + \bar{L}Cx \end{bmatrix} = \begin{bmatrix} V_1 & V_2 \end{bmatrix}\begin{bmatrix} Cx \\ z + \bar{L}Cx \end{bmatrix} = V_1Cx + V_2\bar{L}Cx + V_2z = (V_1 + V_2\bar{L})Cx + V_2z = \begin{bmatrix} (V_1 + V_2\bar{L})C & V_2 \end{bmatrix}\begin{bmatrix} x \\ z \end{bmatrix}$$

or, because MATLAB gives the state variables at each time in rows, instead of columns, use the command

xhat = ([(V1 + V2*lbar)*C, V2] * xaug')';

where xaug is the augmented state vector

$$\text{xaug} = \begin{bmatrix} x \\ z \end{bmatrix}$$

Try this in an example (aircraft lateral-directional dynamics):

[Figure: aircraft with roll, pitch, and yaw axes labeled.]

$$x = \begin{bmatrix} v & p & r & \phi & \psi & \zeta & \xi \end{bmatrix}^T \qquad \begin{array}{ll} v:\text{ sideslip velocity} & \phi:\text{ roll angle} \\ p:\text{ roll rate} & \psi:\text{ yaw angle} \\ r:\text{ yaw rate} & \zeta:\text{ rudder angle} \\ & \xi:\text{ aileron angle} \end{array}$$

$$u = \begin{bmatrix} \alpha & \rho \end{bmatrix}^T \qquad \alpha:\text{ aileron command}, \quad \rho:\text{ rudder command}$$
 0.277 0 − 32.9 9.81 0 − 5.543 0  0 0
− 0.103 − 8.325 3.75 0 0 0 − 28.640 0 0
   
 0.365 0 −.639 0 0 − 9.49 0  0 0
A=  0 1 0 0 0 0 0  B= 0 0
   
 0 0 1 0 0 0 0  0 0
 0 0 0 0 0 −10 0  20 0
   
 0 0 0 0 0 0 −15   0 10

0 0 0 1 0 0 0 0 0
C=  D=  
0 0 0 0 1 0 0 0 0
[Simulation plots (t = 0 to 1 s): outputs vs. time, and xi & xihat vs. time for i = 1, ..., 7; each panel overlays a true state and its estimate.]
MORE REMARKS:
We have less flexibility if we try to feed back not the entire state, but just the output (as in classical controller design); i.e., not all eigenvalues can be placed if we use:

$$u = Ky = KCx$$

There are estimators, called functional estimators, that estimate not the entire state vector, but some scalar function of it, for example $K\hat{x}$. These can be quite efficient, and suffice to compute the estimate and the feedback signal simultaneously.
When placing observer poles, it is not advisable to place them too far left of the plant poles, because:
1) they then have a large bandwidth and are susceptible to noise;
2) they may then give a large transient response, saturating the amplifiers.
A good rule of thumb is to place observer poles about two to three times farther left than the plant's closed-loop poles.

It is also possible to "place" eigenvalues and eigenvectors, although not all at the same time.

And finally, . . .

The state feedback we have computed is unique for a given set of desired closed-loop poles. This is only true in the single-input/single-output case. In multivariable systems, there will be many different gain matrices K that will place the closed-loop poles at any given location. Each choice of gain K will have its own merits, and choosing among them is generally a more difficult problem. This is largely the topic of the next course in multivariable control.
