Lecture 18: State Feedback Tracking and State Estimation
Julio H. Braslavsky
[email protected]
Outline
Regulation and Tracking
Robust Tracking: Integral Action
State Estimation
A tip for Lab 2
State Feedback
In the last lecture we introduced state feedback as a technique for eigenvalue placement. Briefly, given the open-loop state equation

ẋ(t) = A x(t) + B u(t),    y(t) = C x(t)

we apply the control law u(t) = N r(t) − K x(t) and obtain the closed-loop state equation

ẋ(t) = (A − BK) x(t) + BN r(t),    y(t) = C x(t)
Regulation and Tracking
The associated block diagram is the following:

[Block diagram: r → gain N → summing junction → plant ẋ = Ax + Bu, y = Cx, with the state x fed back through −K to the summing junction]

When r(t) = 0, the objective is to drive the state to zero from any initial condition: the regulation problem. When r(t) ≠ 0, the objective is to make y(t) follow the reference: the tracking problem. Here we consider constant references; asymptotic tracking of time-varying references is a more difficult problem, called the servomechanism problem.
Regulation and Tracking
We review the state feedback design procedure with an
example.
Example (Speed control of a DC motor). We consider a DC motor described by the state equations

[Figure: schematic of the DC motor driven by the voltage V(t)]

d/dt [ω(t); i(t)] = [-10 1; -0.02 -2] [ω(t); i(t)] + [0; 2] V(t)

y(t) = [1 0] [ω(t); i(t)] = ω(t)

The input to the DC motor is the voltage V(t), and the states are the current i(t) and the rotor velocity ω(t). We assume that we can measure both, and take ω(t) as the output of interest.
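These numbers are easy to explore in MATLAB; a minimal sketch (variable names are ours, and the Control System Toolbox is assumed; the same variables are reused in the sketches that follow):

% DC motor model: x = [omega; i], u = V, y = omega
A = [-10 1; -0.02 -2];
B = [0; 2];
C = [1 0];
D = 0;
motor = ss(A, B, C, D);
poly(A)        % open-loop characteristic polynomial: 1  12  20.02
eig(A)         % open-loop eigenvalues, approximately -2.00 and -10.00
step(motor)    % slow open-loop step response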
Regulation and Tracking
Example (continuation). ➀ The open-loop characteristic polynomial is

∆(s) = det(sI − A) = det [s+10, −1; 0.02, s+2] = s² + 12s + 20.02

[Figure: open-loop step response of the motor speed; it is slow and settles at a value proportional to the amplitude of the voltage step]

We would like to design a state feedback control to make the motor response faster and obtain asymptotic tracking of constant reference inputs.
Regulation and Tracking
Example (continuation). To design the state feedback gain, we next ➁ compute the controllability matrix

C = [B  AB] = [0 2; 2 −4]

which is nonsingular, so the system is controllable and the closed-loop eigenvalues can be placed arbitrarily.
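In MATLAB this check takes two lines (a sketch):

Co = ctrb(A, B)    % [0 2; 2 -4]
rank(Co)           % = 2, so the pair (A, B) is controllable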
Regulation and Tracking
Example (continuation). We now ➂ propose a desired characteristic polynomial. Suppose that we would like the closed-loop eigenvalues to be at s = −5 ± j, which yield a step response with less than 0.1% overshoot and about 1 s settling time. The desired closed-loop characteristic polynomial is therefore

∆K(s) = (s + 5 − j)(s + 5 + j) = s² + 10s + 26

and the feedback gain in the controller canonical coordinates is K̄ = [10 − 12,  26 − 20.02] = [−2  5.98].
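If the placement is done numerically rather than by hand, a sketch is:

p = [-5+1j, -5-1j];        % desired closed-loop eigenvalues
K = place(A, B, p)         % approx [12.99  -1], as obtained next by Bass-Gura
eig(A - B*K)               % check: -5 +/- j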
Regulation and Tracking
Example (continuation). Finally, ➃ we obtain the state feedback gain K in the original coordinates using the Bass-Gura formula,

K = K̄ C̄ C⁻¹ = [−2  5.98] [1 −12; 0 1] [0 2; 2 −4]⁻¹ = [12.99  −1]

[Figure: closed-loop step response, which is fast but settles well below the unit reference]

Note, however, that we still have steady-state error (the closed-loop DC gain from r to y is only 1/13). To fix it, we use the feedforward gain N.
Regulation and Tracking
Example (continuation). The system transfer function does not have a zero at s = 0, which would prevent tracking of constant references (if it did, the step response would asymptotically go to 0). Thus, ➄ we determine the feedforward gain N with the formula

N = −[C(A − BK)⁻¹B]⁻¹ = 13

[Figure: closed-loop step response with u = Nr − Kx, which now settles at 1]

and achieve zero steady-state error in the closed-loop step response.
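A sketch of the same computation in MATLAB:

N  = 1/dcgain(ss(A - B*K, B, C, 0))   % = 13
cl = ss(A - B*K, B*N, C, 0);
step(cl)                              % unit steady-state value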
Regulation and Tracking
Example (continuation). We have designed a state feedback controller for the speed of the DC motor. However, the tracking achieved by feedforward precompensation would not tolerate (it is not robust to) uncertainties in the plant model. If the actual plant is even slightly different from the model Ã ≈ A that we used to compute N, the closed-loop step response no longer settles at the reference value.

[Figure: closed-loop step responses for the nominal and the perturbed plant; the perturbed response shows a steady-state offset]
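This is easy to see numerically; in the sketch below the perturbation of A is purely illustrative (the slide's actual perturbed matrix is not reproduced here):

Atilde = A;  Atilde(2,2) = -2.2;            % hypothetical 10% error in one entry
dcgain(ss(A - B*K,      B*N, C, 0))          % = 1  (nominal design)
dcgain(ss(Atilde - B*K, B*N, C, 0))          % ~= 1 (steady-state error appears)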
Outline
Regulation and Tracking
Robust Tracking: Integral Action
State Estimation
Robust Tracking: Integral Action
We now introduce a robust approach to achieve constant
reference tracking by state feedback. This approach consists in
the addition of integral action to the state feedback, so that
the error ε(t) = r − y(t) will approach 0 as t → ∞, and this
property will be preserved
under moderate uncertainties in the plant model
under constant input or output disturbance signals.
Robust Tracking: Integral Action
The State Feedback with Integral Action scheme:

[Block diagram: the tracking error r − y is integrated to produce the extra state z, which drives the plant together with the state feedback, so that u = −Kx − kz z]

The main idea in the addition of integral action is to augment the plant with an extra state: the integral of the tracking error ε(t) = r − y(t),

ż(t) = r(t) − y(t) = r(t) − Cx(t)        (IA1)

The control law for the augmented plant is then

u(t) = −[K  kz] [x(t); z(t)] = −Kx(t) − kz z(t)        (IA2)
Robust Tracking: Integral Action
The closed-loop state equation with the state feedback control u(t) given by (IA1) and (IA2) is

[ẋ(t); ż(t)] = ([A 0; −C 0] − [B; 0] [K  kz]) [x(t); z(t)] + [0; 1] r
             = (Aa − Ba Ka) [x(t); z(t)] + [0; 1] r

where Aa = [A 0; −C 0], Ba = [B; 0] and Ka = [K  kz].
Integral Action — How does it work?
Write the control law (IA2) as an inner state feedback loop u(t) = −Kx(t) + v(t), driven by the integrator output v(t) = −kz z(t), with ż(t) = r − y(t). The inner loop turns the plant into the transfer function GK(s) = C(sI − A + BK)⁻¹B from v to y.
The block diagram of the closed-loop system controlled by state feedback with integral action thus collapses to

[Block diagram: unity feedback loop with forward path (−kz/s) GK(s) from the error r − y to y; the input disturbance di enters at the input of GK(s) and the output disturbance do at its output]

where GK(s) is a BIBO stable transfer function, by design, and the overall closed loop is also BIBO stable.
Let's express GK(s) in terms of its numerator and denominator polynomials,

GK(s) = N(s)/D(s)
In other words,

y(s) = [−kz N(s) / (sD(s) − kz N(s))] r(s) + [s N(s) / (sD(s) − kz N(s))] di(s) + [s D(s) / (sD(s) − kz N(s))] do(s)

If the reference and the disturbances are constants, say r̄, d̄i and d̄o, then, because the closed loop is BIBO stable, the steady-state value of y(t) is determined by the 3 transfer functions above evaluated at s = 0,

lim (t → ∞) y(t) = 1 · r̄ + 0 · d̄i + 0 · d̄o = r̄

That is: the output will asymptotically track constant references and reject constant disturbances irrespective of the values of r̄, d̄i and d̄o.
Robust Tracking Example
Example (Robust speed tracking in a DC motor). Let’s go back
to the DC motor example and design a state feedback with
integral action to achieve robust speed tracking.
Robust Tracking Example
Example (continuation). We keep the closed-loop eigenvalues at −5 ± j and place the additional (integrator) eigenvalue at −6, so the desired characteristic polynomial for the augmented system is (s² + 10s + 26)(s + 6) = s³ + 16s² + 86s + 156, and K̄a = [16 − 12,  86 − 20.02,  156 − 0] = [4  65.98  156]. We now compute Ca and C̄a for the augmented pair (Aa, Ba),

Ca = [Ba  AaBa  Aa²Ba] = [0 2 −24; 2 −4 7.96; 0 0 −2],    C̄a = [1 12 20.02; 0 1 12; 0 0 1]⁻¹ = [1 −12 123.98; 0 1 −12; 0 0 1]

and finally,

Ka = K̄a C̄a Ca⁻¹ = [12.99  2  −78] = [K  kz],   i.e.,  K = [12.99  2],  kz = −78
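The whole augmented design can be reproduced in a few MATLAB lines (a sketch; the choice of the third eigenvalue at −6 is inferred from the resulting gains):

% Augmented plant: extra state z = integral of the tracking error
Aa = [A zeros(2,1); -C 0];
Ba = [B; 0];
Ka = place(Aa, Ba, [-5+1j, -5-1j, -6])   % approx [12.99  2  -78]
K  = Ka(1:2);
kz = Ka(3);
eig(Aa - Ba*Ka)                          % check: -5 +/- j and -6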
Robust Tracking Example
Example (continuation). A block diagram for implementation:

[Block diagram: implementation of the control law u = −Kx − kz z, with the integrator generating z from the error r − y]

And a SIMULINK diagram (including disturbances):

[SIMULINK diagram: integrator 1/s acting on the error with gain −78, state feedback gains 12.99 and 2, the plant realized with matrix gains A, B, C and an integrator, disturbance inputs d_i and d_o, and a scope on the output]
Robust Tracking Example
Example (continuation). We simulate the closed-loop system with a unit step reference applied at t = 0, an input disturbance di = 0.5 applied at t = 8 s, and an output disturbance do = 0.3 applied at t = 4 s.

[Figure: closed-loop output y(t) over 10 s; it rises to 1, is briefly perturbed when each disturbance hits, and returns to 1 in steady state]
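The same simulation can be done without SIMULINK by assembling the closed-loop state equation directly (a sketch):

% Closed loop with states [x; z] and exogenous inputs [r; di; do]
Acl = [A - B*K,  -B*kz;  -C,  0];
Bcl = [zeros(2,1),  B,  zeros(2,1);  1,  0,  -1];
Ccl = [C 0];  Dcl = [0 0 1];                 % y = C x + do
t  = (0:0.01:10)';
r  = ones(size(t));
di = 0.5*(t >= 8);
do = 0.3*(t >= 4);
y  = lsim(ss(Acl, Bcl, Ccl, Dcl), [r di do], t);
plot(t, y), xlabel('time [s]'), ylabel('y(t)')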
Outline
Regulation and Tracking
Robust Tracking: Integral Action
State Estimation
State Estimation
State feedback requires measuring the states but, normally, we do not have direct access to all the states. Then, how do we implement state feedback?

[Block diagram: Plant driven by u(t) producing y(t); Observer built as a copy of the plant, x̂̇(t) = Ax̂(t) + Bu(t), driven by the same input]

A first idea is an open-loop observer: a copy of the plant driven by the same input u(t). However, the estimation error then obeys the plant's own dynamics: it decays only as fast as the plant's slowest mode, and it does not decay at all if the plant is unstable.
State Estimation: A “Feedback” Observer
A better observer structure includes error feedback correction: the output estimation error y(t) − Cx̂(t) is fed back through a gain L to correct the plant copy,

x̂̇(t) = Ax̂(t) + Bu(t) + L [y(t) − Cx̂(t)]

[Block diagram: plant copy driven by u(t), with the output error y − ŷ injected through the gain L]
State Estimation: Final Observer Structure
By rearranging the previous block diagram, we get to the final structure of the observer.

[Block diagram: observer structure, with inputs u(t) and y(t) and state x̂(t)]

From the block diagram, the observer equations are

x̂̇(t) = Ax̂(t) + Bu(t) + L [y(t) − Cx̂(t)]
      = (A − LC) x̂(t) + Bu(t) + Ly(t)        (O)

If the system is observable, we can then choose the gain L to ascribe the eigenvalues of (A − LC) arbitrarily. In particular, we need (A − LC) to be stable!
State Estimation
From the observer state equation (O), and the plant state equation

ẋ = Ax + Bu,    y = Cx

the estimation error ε(t) = x(t) − x̂(t) evolves as

ε̇ = ẋ − x̂̇ = Ax + Bu − Ax̂ − Bu − LC(x − x̂)
   = A(x − x̂) − LC(x − x̂)
   = (A − LC) ε

Thus, if (A − LC) is stable, x̂(t) converges to x(t) for any initial conditions, at a rate set by the eigenvalues of (A − LC).
Observer Design
In summary, to build an observer, we use the matrices A, B and C from the plant and form the state equation

x̂̇(t) = (A − LC) x̂(t) + Bu(t) + Ly(t)

The gain L is computed by eigenvalue placement on the dual system: since

(A − LC)ᵀ = Aᵀ − CᵀLᵀ = Adual − Bdual Kdual,   with Adual = Aᵀ, Bdual = Cᵀ,

we design a state feedback gain Kdual for the pair (Aᵀ, Cᵀ) with the desired observer eigenvalues, and then take L = Kdualᵀ.
State Estimation Example
Example (Current estimation in a DC motor). We revisit the DC
motor example seen earlier. Before we used state feedback to
achieve robust reference tracking and disturbance rejection.
This required measurement of both states: current i(t) and
velocity ω(t).
Suppose now that we don’t measure current, but only the motor
speed. We will construct an observer to estimate i(t).
State Estimation Example
Example (continuation). We first check for observability (otherwise, we won't be able to build the observer),

O = [C; CA] = [1 0; −10 1]

which is nonsingular, so the system is observable.
State Estimation Example
Example (continuation). Say that the desired eigenvalues for the observer are s = −6 ± j2 (slightly faster than those set for the closed-loop plant, which is standard practice), which yields

∆Kdual(s) = s² + 12s + 40

Carrying out the state feedback design on the dual pair (Aᵀ, Cᵀ) gives Kdual = [0  19.98]. Finally, L = Kdualᵀ = [0; 19.98]. It can be checked with MATLAB that this L effectively places the eigenvalues of (A − LC) at the desired locations.
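For instance, a sketch of that check:

pobs = [-6+2j, -6-2j];                 % desired observer eigenvalues
L = place(A', C', pobs)'               % = [0; 19.98]
eig(A - L*C)                           % check: -6 +/- 2j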
State Estimation Example
Example (continuation). We simulated the observer with SIMULINK, setting nonzero initial conditions on the plant (and zero on the observer).

[SIMULINK diagram: the plant (matrix gains A, B, C and an integrator) and the observer (matrix gains B, L and A − LC with an integrator) driven by the same input]

[Figure: estimation errors in speed and in current, starting from the initial mismatch and converging to zero after a short transient]
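The same experiment without SIMULINK (a sketch; the initial conditions are illustrative):

% Joint simulation of plant and observer, states [x; xhat]
Asim = [A, zeros(2);  L*C,  A - L*C];
Bsim = [B; B];
sys  = ss(Asim, Bsim, eye(4), zeros(4,1));
t  = (0:0.01:5)';
u  = ones(size(t));                    % step input voltage
x0 = [1; 0.5; 0; 0];                   % plant starts at [1; 0.5], observer at zero
X  = lsim(sys, u, t, x0);
plot(t, X(:,1:2) - X(:,3:4))           % estimation errors -> 0
legend('estimation error in speed', 'estimation error in current')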
State Estimation Example
Example (Feedback from Estimated States). The observer can be combined with the previous state feedback design; we just need to replace the state measurements by the estimated states, so that u(t) = −Kx̂(t) − kz z(t).

[SIMULINK diagram: the state feedback with integral action scheme of before, with the gains 12.99, 2 and −78, but with the plant states replaced by the observer estimates generated from u and y]

Note that for the integral action part we still need to measure the real output (its estimate won't work).
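A quick way to check the combined design is to assemble the overall closed-loop matrix with states [x; z; x̂]; by the separation principle its eigenvalues are the union of the state feedback and the observer eigenvalues (a sketch):

% u = -K*xhat - kz*z,  zdot = r - C*x,  xhatdot = (A - L*C)*xhat + B*u + L*C*x
Atot = [A,        -B*kz,      -B*K;
        -C,        0,          zeros(1,2);
        L*C,      -B*kz,       A - L*C - B*K];
eig(Atot)     % -5 +/- j, -6 (state feedback) and -6 +/- 2j (observer)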
State Estimation Example
Example (continuation). The figure below shows the results of simulating the complete closed-loop system, with feedback from estimated states and integral action for robust reference tracking and disturbance rejection.

[Figure: closed-loop output y(t) over 10 s, tracking the unit step reference and rejecting the input and output disturbances as before]
One Tip for Lab 2
Although we will further discuss state feedback and observers, we now have all the ingredients necessary for Laboratory 2 and Assignment 3.
One trick that may come in handy in the state feedback design for Lab 2 is plant augmentation. It consists of prefiltering the plant before carrying out the state feedback design. Suppose the plant to be controlled has a lightly damped resonant mode,

G(s) = k / [s (s + a)(s² + 2ζωs + ω²)]
One Tip for Lab 2
Control design for a system with resonances is tricky. One way to deal with them is to prefilter the plant with a notch filter of the form

F(s) = (s² + 2ζωs + ω²) / (s² + 2ζ̄ωs + ω²)

with ζ̄ much larger than ζ. The zeros of F(s) cancel the lightly damped poles of G(s), and the augmented plant becomes

Ga(s) = G(s) F(s) = k / [s (s + a)(s² + 2ζ̄ωs + ω²)]

which is much better damped; the state feedback design is then carried out on Ga(s), with the filter F(s) implemented as part of the controller.
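A sketch with made-up numbers (k, a, ζ, ω and ζ̄ below are illustrative, not the Lab 2 values):

k = 10;  a = 1;  w = 20;  zeta = 0.05;  zetabar = 0.7;
G  = tf(k, conv([1 a 0], [1 2*zeta*w w^2]));        % lightly damped plant
F  = tf([1 2*zeta*w w^2], [1 2*zetabar*w w^2]);     % notch prefilter
Ga = minreal(G*F);                                  % augmented, well-damped plant
bode(G, Ga)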
Summary
For a plant with a controllable state-space description, it is
possible to design a state feedback controller u(t) = −Kx(t)
that will place the closed-loop poles in any desired location.
State feedback control can incorporate integral action,
which yields robust reference tracking and disturbance
rejection for constant references and disturbances.
If the states are not all measurable, but the state-space
description of the plant is also observable, then it is possible
to estimate the states with an observer.
The observer can be used in conjunction with the state feedback by feeding back the estimated states.