
ELEC4410

Control Systems Design


Lecture 18: State Feedback Tracking and State
Estimation

Julio H. Braslavsky
[email protected]

School of Electrical Engineering and Computer Science


The University of Newcastle

The University of Newcastle Lecture 18: State Feedback Tracking and State Estimation – p. 1
Outline
Regulation and Tracking
Robust Tracking: Integral Action
State Estimation
A tip for Lab 2

State Feedback
In the last lecture we introduced state feedback as a technique
for eigenvalue placement. Briefly, given the open-loop state
equation

ẋ(t) = Ax(t) + Bu(t)


y(t) = Cx(t),

we apply the control law u(t) = Nr(t) − Kx(t) and obtain the
closed-loop state equation

ẋ(t) = (A − BK)x(t) + BNr(t)


y(t) = Cx(t).

If the system is controllable, by appropriately designing K, we


are able to place the eigenvalues of (A − BK) at any desired
locations.
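As a quick numeric illustration of eigenvalue placement (using a hypothetical double-integrator plant, not any system from this lecture), the gain K can be sketched in Python with `scipy.signal.place_poles`:

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical open-loop plant: a double integrator (illustrative only)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Desired closed-loop eigenvalue locations
desired = [-2.0, -3.0]

# Compute K so that the eigenvalues of (A - BK) land at the desired locations
K = place_poles(A, B, desired).gain_matrix

closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print(np.sort(closed_loop_eigs.real))  # ≈ [-3, -2]
```

The same call works for any controllable pair (A, B) with as many desired poles as states.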

Regulation and Tracking
The associated block diagram is the following:

[Block diagram: r → N → summing junction → plant ẋ = Ax + Bu, y = Cx, with the state x fed back through −K to the summing junction]

Two typical control problems of interest:

The regulator problem, in which r = 0 and we aim to keep lim_{t→∞} y(t) = 0 (i.e., a pure stabilisation problem).

The tracking problem, in which y(t) is specified to track r(t).

When r(t) = r̄, constant, the regulator and tracking problems are essentially the same. Tracking a nonconstant reference is a more difficult problem, called the servomechanism problem.

Regulation and Tracking
We review the state feedback design procedure with an
example.
Example (Speed control of a DC motor). We consider a DC
motor described by the state equations
    d/dt [ω(t); i(t)] = [−10  1; −0.02  −2] [ω(t); i(t)] + [0; 2] V(t)

    y(t) = [1  0] [ω(t); i(t)] = ω(t)
The input to the DC motor is the voltage V(t), and the states are
the current i(t) and rotor velocity ω(t). We assume that we can
measure both, and take ω(t) as the output of interest.

Regulation and Tracking
Example (continuation). ➀ The open-loop characteristic
polynomial is
    ∆(s) = det(sI − A) = det [s+10  −1; 0.02  s+2] = s² + 12s + 20.02

which has two stable roots at s = −9.9975 and s = −2.0025. The


motor open-loop step response is
[Figure: open-loop step response, reaching steady-state near 0.1 in about 3 s]

The system takes about 3 s to reach steady-state. The final speed is about 1/10 of the amplitude of the voltage step. We would like to design a state feedback control to make the motor response faster and obtain tracking of ω(t) to constant reference inputs.

Regulation and Tracking
Example (continuation). To design the state feedback gain, we
next ➁ compute the controllability matrix
 
    C = [B  AB] = [0  2; 2  −4]

which is full rank ⇒ the system is controllable.
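This step can be checked numerically (a sketch with numpy, using the motor matrices from the example):

```python
import numpy as np

# DC motor model from the example, state vector [omega; i]
A = np.array([[-10.0, 1.0],
              [-0.02, -2.0]])
B = np.array([[0.0],
              [2.0]])

# Controllability matrix C = [B  AB]
ctrb = np.hstack([B, A @ B])
print(ctrb)                         # [[0, 2], [2, -4]]
print(np.linalg.matrix_rank(ctrb))  # 2 -> full rank, so controllable
```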

Also, from the open-loop characteristic polynomial, ∆(s) = s² + α1 s + α2 with α1 = 12 and α2 = 20.02, we form the controllability matrix in x̄ coordinates:

    C̄ = [1  α1; 0  1]⁻¹ = [1  12; 0  1]⁻¹ = [1  −12; 0  1]

Regulation and Tracking
Example (continuation). We now ➂ propose a desired characteristic polynomial. Suppose that we would like the closed-loop eigenvalues to be at s = −5 ± j, which yield a step response with 0.1% overshoot and about 1 s settling time.

The desired (closed-loop) characteristic polynomial is then

∆K (s) = (s + 5 − j)(s + 5 + j) = s2 + 10s + 26

With ∆K (s) and ∆(s) we determine the state feedback gain in x̄


coordinates
    K̄ = [(ᾱ1 − α1)  (ᾱ2 − α2)] = [(10 − 12)  (26 − 20.02)] = [−2  5.98]

Regulation and Tracking
Example (continuation). Finally, ➃ we obtain the state feedback
gain K in the original coordinates using Bass-Gura formula,

    K = K̄ C̄ C⁻¹ = [−2  5.98] [1  −12; 0  1] [0  2; 2  −4]⁻¹ = [12.99  −1]

As can be verified, the eigenvalues of (A − BK) are as desired.
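The verification is a one-liner in Python (a sketch with numpy):

```python
import numpy as np

A = np.array([[-10.0, 1.0],
              [-0.02, -2.0]])
B = np.array([[0.0],
              [2.0]])
K = np.array([[12.99, -1.0]])

# Characteristic polynomial of the closed loop (A - BK):
# it should equal the desired s^2 + 10 s + 26
coeffs = np.poly(A - B @ K)
print(coeffs)            # ≈ [1, 10, 26]
print(np.roots(coeffs))  # ≈ -5 ± j
```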


[Figure: closed-loop step response, settling in about 1 s with final value near 0.077]

The closed-loop step response, as desired, settles in about 1 s, with no significant overshoot. Note, however, that we still have steady-state error (y(∞) ≠ r). To fix it, we use the feedforward gain N.

Regulation and Tracking
Example (continuation). The system transfer function does not have a zero at s = 0, which would prevent tracking of constant references (with a zero at s = 0, the step response would asymptotically go to 0).

Thus, ➄ we determine N with the formula

    N = 1/[C(BK − A)⁻¹B] = 13

and achieve zero steady-state error in the closed-loop step response.

[Figure: closed-loop step response with feedforward gain N, settling at the reference value 1]
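The feedforward gain that gives unit DC gain from r to y is N = 1/[C(BK − A)⁻¹B]; a numeric sketch with the example's matrices and the gain K designed above:

```python
import numpy as np

A = np.array([[-10.0, 1.0],
              [-0.02, -2.0]])
B = np.array([[0.0], [2.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[12.99, -1.0]])

Acl = A - B @ K                                  # closed-loop A matrix
dc_gain = (C @ np.linalg.inv(-Acl) @ B).item()   # DC gain r -> y without N
N = 1.0 / dc_gain
print(N)  # ≈ 13

# With the feedforward in place the DC gain from r to y becomes 1
print((dc_gain * N))  # ≈ 1.0
```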

Regulation and Tracking
Example (continuation). We have designed a state feedback controller for the speed of the DC motor. However, the tracking achieved by feedforward precompensation would not tolerate (it is not robust to) uncertainties in the plant model.

To see this, suppose the real A matrix in the system is slightly different, Ã ≠ A, from the one we used to compute N.

[Figure: closed-loop step response with the perturbed matrix Ã; the output settles away from the reference value 1]

The closed-loop step response given by the designed gains N, K (based on a different A-matrix) doesn't yield tracking.

Robust Tracking: Integral Action
We now introduce a robust approach to achieve constant
reference tracking by state feedback. This approach consists in
the addition of integral action to the state feedback, so that
the error ε(t) = r − y(t) will approach 0 as t → ∞, and this
property will be preserved
under moderate uncertainties in the plant model
under constant input or output disturbance signals.

Robust Tracking: Integral Action
The State Feedback with Integral Action scheme:

[Block diagram: the tracking error r − y is integrated to produce the extra state z(t), and the control u = −Kx − kz z drives the plant ẋ = Ax + Bu, y = Cx]

The main idea in the addition of integral action is to augment the plant with an extra state: the integral of the tracking error ε(t) = r − y(t),

    ż(t) = r − y(t) = r − Cx(t)                                (IA1)

The control law for the augmented plant is then

    u(t) = −[K  kz] [x(t); z(t)] = −Kx(t) − kz z(t)            (IA2)
Robust Tracking: Integral Action
The closed-loop state equation with the state feedback control
u(t) given by (IA1) and (IA2) is
          
ẋ(t) A 0 x(t) B h i x(t) 0
 =   −   K kz   +  r
ż(t) −C 0 z(t) 0 | {z } z(t) 1
| {z } | {z } Ka
Aa Ba
h i h i
x(t) 0
= (Aa − Ba Ka ) z(t)
+ 1
r

The state feedback design with integral action can be done as a


normal state feedback design for the augmented plant

If Ka is designed such that the closed-loop augmented matrix


(Aa − Ba Ka ) is rendered Hurwitz, then necessarily in steady-state

lim_{t→∞} ż(t) = 0  ⇒  lim_{t→∞} y(t) = r, achieving tracking.
Integral Action — How does it work?
The block diagram of the closed-loop system controlled by state
feedback with integral action thus collapses to

[Block diagram: r → error r − y → integrator block −kz/s → (input disturbance di enters) → GK(s) → (output disturbance do enters) → y, with unity feedback from y]

where GK(s) is a BIBO stable transfer function, by design, and also the overall closed loop is BIBO stable. Let's express GK(s) in terms of its numerator and denominator polynomials,

    GK(s) = N(s)/D(s)

Then, from the block diagram above,

    Y(s) = [(−kz)N(s)/(sD(s))] / [1 + (−kz)N(s)/(sD(s))] R(s)
         + [N(s)/D(s)] / [1 + (−kz)N(s)/(sD(s))] Di(s)
         + 1 / [1 + (−kz)N(s)/(sD(s))] Do(s)
Integral Action — How does it work?
In other words,

    Y(s) = [(−kz)N(s) / (sD(s) + (−kz)N(s))] R(s)
         + [sN(s) / (sD(s) + (−kz)N(s))] Di(s)
         + [sD(s) / (sD(s) + (−kz)N(s))] Do(s)

If the reference and the disturbances are constants, say r̄, d̄i and d̄o, then, because the closed loop is BIBO stable, the steady-state value of y(t) is determined by the 3 transfer functions above evaluated at s = 0:

    lim_{t→∞} y(t) = [(−kz)N(0) / ((−kz)N(0))] r̄ + 0 · d̄i + 0 · d̄o = r̄

That is: the output will asymptotically track constant references and reject constant disturbances irrespective of the values r̄, d̄i and d̄o.
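With the numbers from the DC-motor example in this lecture (where the augmented design gives K = [12.99  2] and kz = −78, so the inner loop is GK(s) = 2/(s² + 16s + 86)), the three steady-state gains can be checked numerically (a sketch with numpy):

```python
import numpy as np

# Inner closed loop G_K(s) = N(s)/D(s) for the motor example
Npoly = np.array([2.0])              # N(s) = 2
Dpoly = np.array([1.0, 16.0, 86.0])  # D(s) = s^2 + 16 s + 86
kz = -78.0

# Common closed-loop denominator: s D(s) + (-kz) N(s)
den = np.polyadd(np.polymul([1.0, 0.0], Dpoly), (-kz) * Npoly)
print(den)  # [1, 16, 86, 156] -- the desired augmented polynomial

num_r = (-kz) * Npoly                   # numerator of r -> y
num_di = np.polymul([1.0, 0.0], Npoly)  # numerator of d_i -> y
num_do = np.polymul([1.0, 0.0], Dpoly)  # numerator of d_o -> y

# Steady-state gains: evaluate each transfer function at s = 0
print(np.polyval(num_r, 0.0) / np.polyval(den, 0.0))   # 1.0 -> tracks r
print(np.polyval(num_di, 0.0) / np.polyval(den, 0.0))  # 0.0 -> rejects d_i
print(np.polyval(num_do, 0.0) / np.polyval(den, 0.0))  # 0.0 -> rejects d_o
```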
Robust Tracking Example
Example (Robust speed tracking in a DC motor). Let’s go back
to the DC motor example and design a state feedback with
integral action to achieve robust speed tracking.

We have to redesign the state feedback gain K — it must be


computed together with kz for the augmented plant given by
    Aa = [A  0; −C  0] = [−10  1  0; −0.02  −2  0; −1  0  0],   Ba = [B; 0] = [0; 2; 0]

If we compute the characteristic polynomial of the augmented


matrix Aa we find

∆a (s) = s3 + 12s2 + 20.02s = s∆(s).

Unsurprisingly, it just adds a root at s = 0 to the original.

Robust Tracking Example
Example (continuation). We now move on to compute Ca and
C̄a — for the augmented pair (Aa , Ba ),
    Ca = [0  2  −24; 2  −4  7.96; 0  0  −2],
    C̄a = [1  12  20.02; 0  1  12; 0  0  1]⁻¹ = [1  −12  123.98; 0  1  −12; 0  0  1]

Note that the augmented pair (Aa , Ba ) will always be control-


lable as long as the original pair (A, B) is controllable and the
system has no zeros at s = 0.

We select as desired augmented characteristic polynomial

∆Ka (s) = (s + 6)∆K (s) = s3 + 16s2 + 86s + 156

(We keep the original desired characteristic polynomial and let


the extra pole be faster, to keep the same specifications in the
response of y(t).)
Robust Tracking Example
Example (continuation). As before, from ∆a(s) and ∆Ka(s) we compute K̄a and now obtain
h i h i
K̄a = (16 − 12) (86 − 20.02) (156 − 0) = 4 65.98 156

and finally,
    Ka = K̄a C̄a Ca⁻¹ = [12.99  2  −78]

Note that the first two elements of the augmented Ka correspond to the new state feedback gain K = [12.99  2] for the motor state [ω(t); i(t)], while the last element is the state feedback gain kz = −78 for the augmented state z(t).
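The whole augmented design can be verified numerically (a sketch with numpy):

```python
import numpy as np

# Motor model, state [omega; i]
A = np.array([[-10.0, 1.0],
              [-0.02, -2.0]])
B = np.array([[0.0], [2.0]])
C = np.array([[1.0, 0.0]])

# Augmented plant with the integrator state z
Aa = np.block([[A, np.zeros((2, 1))],
               [-C, np.zeros((1, 1))]])
Ba = np.vstack([B, [[0.0]]])

# Augmented gain from the example: Ka = [K  kz]
Ka = np.array([[12.99, 2.0, -78.0]])

# Closed-loop characteristic polynomial:
# it should equal the desired s^3 + 16 s^2 + 86 s + 156
print(np.poly(Aa - Ba @ Ka))  # ≈ [1, 16, 86, 156]
```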

Robust Tracking Example
Example (continuation). A block diagram for implementation:

[Block diagram: the tracking error r − y is integrated to give z(t), and the control u(t) = −Kx(t) − kz z(t) drives the plant]

And a SIMULINK diagram (including disturbances): the integrator on r − y feeds the gain −78 (= kz), the measured states feed the gains 12.99 and 2 (= K), and the input and output disturbances d_i and d_o enter before and after the plant, with the output shown on a Scope.
Robust Tracking Example
Example (continuation). We simulate the closed-loop system with a unit step input applied at t = 0, an input disturbance di = 0.5 applied at t = 8 s, and an output disturbance do = 0.3 applied at t = 4 s.

[Figure: closed-loop response y(t); the output returns to the reference after each disturbance hits]

We can see the asymptotic tracking of the reference despite the disturbances.
State Estimation
State feedback requires measuring the states but, normally, we
do not have direct access to all the states. Then, how do we
implement state feedback?

If the system is observable the states can be estimated by an


observer.
[Block diagram: the plant ẋ = Ax + Bu, y = Cx, in parallel with an observer driven by u and y, whose state estimate is fed back through the gain K — state feedback from estimated states]

An observer is a dynamic system that estimates the states of the plant based on the measurement of its inputs and outputs.
State Estimation: A “Naive” Observer
How would we build an observer? One intuitive way would be to
reproduce a model of the plant and run it simultaneously to
obtain a state estimate x̂(t).

[Block diagram: a copy of the plant model, ˙x̂ = Ax̂ + Bu, run in parallel with the plant — a “naive” design of an observer]

The problem with this “naive” design is that if the plant and its “copy” in the observer have different initial conditions, the estimates will generally not converge to the true values.
State Estimation: A “Feedback” Observer
A better observer structure includes error feedback correction.
[Block diagram: the plant-model copy is driven by u and corrected through the gain L by the output estimation error y − ŷ — a “self-corrected” (feedback) design of an observer]

By appropriately designing the matrix gain L, we could adjust the observer to give a state estimate that will asymptotically converge to the true state.
State Estimation: Final Observer Structure
By rearranging the previous block diagram, we get to the final
structure of the observer
[Block diagram: observer structure — u and y enter the observer, which integrates ˙x̂(t) to produce the estimate x̂(t)]

If the system is observable, we can then choose the gain L to ascribe the eigenvalues of (A − LC) arbitrarily. We certainly want the observer to be stable!

From the block diagram, the observer equations are

    ˙x̂(t) = Ax̂(t) + Bu(t) + L[y(t) − Cx̂(t)]
          = (A − LC)x̂(t) + Bu(t) + Ly(t)                      (O)
State Estimation
From the observer state equation (O), and the plant state
equation

ẋ = Ax + Bu
y = Cx

We can obtain a state equation for the estimation error ε = x − x̂:

    ε̇ = ẋ − ˙x̂
       = Ax + Bu − Ax̂ − Bu − LC(x − x̂)
       = A(x − x̂) − LC(x − x̂)
       = (A − LC)ε   ⇒   ε(t) = e^{(A−LC)t} ε(0).

Thus, we see that for the error to asymptotically converge to zero, ε(t) → 0 (and so x̂(t) → x(t)), we need (A − LC) to be Hurwitz.
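The error dynamics can be illustrated numerically: with the motor matrices and the observer gain L = [0  19.98]ᵀ designed later in this lecture, ε(t) = e^{(A−LC)t} ε(0) decays for any initial error (a sketch):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-10.0, 1.0],
              [-0.02, -2.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.0], [19.98]])

eps0 = np.array([[1.0], [-1.0]])  # arbitrary initial estimation error

# eps(t) = expm((A - LC) t) eps0: the norm shrinks towards 0
for t in [0.0, 0.5, 1.0, 2.0]:
    eps_t = expm((A - L @ C) * t) @ eps0
    print(t, np.linalg.norm(eps_t))
```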
Observer Design
In summary, to build an observer, we use the matrices A, B and
C from the plant and form the state equation

    ˙x̂(t) = (A − LC)x̂(t) + Bu(t) + Ly(t)

where L is such that the eigenvalues of (A − LC) have negative


real part.

How to choose L? We can use, by duality, the same procedure


that we already know for the design of a state feedback gain K
to render A − BK Hurwitz. Notice that the matrix transpose

(A − LC)T = AT − CT LT
= Adual − Bdual Kdual

We choose Kdual to make Adual − Bdual Kdual Hurwitz, and finally

    L = Kdualᵀ
State Estimation Example
Example (Current estimation in a DC motor). We revisit the DC
motor example seen earlier. Before we used state feedback to
achieve robust reference tracking and disturbance rejection.
This required measurement of both states: current i(t) and
velocity ω(t).

Suppose now that we don’t measure current, but only the motor
speed. We will construct an observer to estimate i(t).

Recall the plant equations


      
    d/dt [ω(t); i(t)] = [−10  1; −0.02  −2] [ω(t); i(t)] + [0; 2] V(t)

    y = [1  0] [ω(t); i(t)]
State Estimation Example
Example (continuation). We first check for observability
(otherwise, we won’t be able to build the observer)
   
    O = [C; CA] = [1  0; −10  1]

which is full rank, and hence the system is observable.

By duality, in the procedure to compute Kdual , the role of C is


played by Cdual = OT , and C̄dual is the same
 −1  
1 α1 1 −12
C̄dual = C̄ =   =  
0 1 0 1

since C̄ only depends on the characteristic polynomial of A,


which is the same as that of AT .

State Estimation Example
Example (continuation). Say that the desired eigenvalues for the
observer are s = −6 ± j2, (slightly faster than those set for the
closed-loop plant, which is standard) which yields

    ∆Kdual(s) = s² + 12s + 40

Thus, from the coefficients of ∆Kdual(s) and those of ∆(s) we have

    K̄dual = [(12 − 12)  (40 − 20.02)] = [0  19.98]

We now return to the original coordinates and get Kdual,

    Kdual = K̄dual C̄dual Cdual⁻¹ = [0  19.98] — by chance, the same as K̄dual.

Finally, L = Kdualᵀ = [0; 19.98]. It can be checked with MATLAB that L effectively places the eigenvalues of (A − LC) at the desired locations.
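The MATLAB check mentioned above can equally be sketched in Python:

```python
import numpy as np

A = np.array([[-10.0, 1.0],
              [-0.02, -2.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.0], [19.98]])

# Characteristic polynomial of (A - LC): it should be s^2 + 12 s + 40
print(np.poly(A - L @ C))            # ≈ [1, 12, 40]
print(np.linalg.eigvals(A - L @ C))  # ≈ -6 ± 2j
```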
State Estimation Example
Example (continuation). We simulated the observer with SIMULINK, setting some initial conditions on the plant (and none on the observer).

[SIMULINK diagram of the plant with the observer ˙x̂ = (A − LC)x̂ + Bu + Ly, and a plot of the estimation errors in speed and current]

We can see how the estimation errors effectively converge to zero in about 1 s.
State Estimation Example
Example (Feedback from Estimated States). The observer can
be combined with the previous state feedback design; we just
need to replace the state measurements by the estimated states.

[SIMULINK diagram: the state feedback gains 12.99 and 2 and the integral gain −78 are now driven by the observer states ˙x̂ = (A − LC)x̂ + Bu + Ly instead of the measured plant states; disturbances d_i and d_o included]

Note that for the integral action part we still need to measure the real output (its estimate won't work).
State Estimation Example
Example (continuation). The figure below shows the results of
simulating the complete closed-loop system, with feedback from
estimated states and integral action for robust reference
tracking and disturbance rejection.
[Figure: closed-loop response y(t) with feedback from estimated states and integral action; the output tracks the unit step reference and rejects the disturbances]
One Tip for Lab 2
Although we will further discuss state feedback and observers,
we have now all the ingredients necessary for Laboratory 2 and
Assignment 3.

One trick that may come handy in the state design for Lab 2 is
plant augmentation. It consists of prefiltering the plant before
carrying out the state feedback design.

The system for Lab 2 has a transfer function of the form

    G(s) = k / [s(s + a)(s² + 2ζωs + ω²)]

When the damping ζ is small, the plant has a resonance at ω, as


is the case in the system of Lab 2.

One Tip for Lab 2
Control design for a system with resonances is tricky. One way to deal with them is to prefilter the plant with a notch filter of the form

    F(s) = (s² + 2ζωs + ω²) / (s² + 2ζ̄ωs + ω²)

with better damping ζ̄ = 0.7 (say) and the same natural


frequency. The notch filter cancels the resonant poles and
replaces them by a pair of more damped poles. Of course, this
can only be done for stable poles.

The augmented plant then becomes

    Ga(s) = G(s)F(s) = k / [s(s + a)(s² + 2ζ̄ωs + ω²)]

We then do state feedback design for this augmented plant.
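The pole cancellation can be sketched numerically (the values k = 1, a = 2, ζ = 0.05 and ω = 4 below are made up for illustration — the real Lab 2 parameters are not given here):

```python
import numpy as np

k, a = 1.0, 2.0
zeta, zeta_bar, w = 0.05, 0.7, 4.0

reson = np.array([1.0, 2 * zeta * w, w**2])       # s^2 + 2 zeta w s + w^2
damped = np.array([1.0, 2 * zeta_bar * w, w**2])  # s^2 + 2 zeta_bar w s + w^2

# G(s) denominator: s (s + a) (s^2 + 2 zeta w s + w^2)
G_den = np.polymul(np.polymul([1.0, 0.0], [1.0, a]), reson)

# Cascade G(s) F(s): the notch numerator cancels the resonant factor
GF_den = np.polymul(G_den, damped)     # denominator before cancellation
quot, rem = np.polydiv(GF_den, reson)  # divide out the cancelled factor

print(rem)   # ≈ 0: the resonant quadratic divides exactly
print(quot)  # = s (s + a) (s^2 + 2 zeta_bar w s + w^2), the Ga(s) denominator
```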

Summary
For a plant with a controllable state-space description, it is
possible to design a state feedback controller u(t) = −Kx(t)
that will place the closed-loop poles in any desired location.
State feedback control can incorporate integral action,
which yields robust reference tracking and disturbance
rejection for constant references and disturbances.
If the states are not all measurable, but the state-space
description of the plant is also observable, then it is possible
to estimate the states with an observer.
The observer can be used in conjunction with the state feedback by feeding back estimated states.
