Chapter 4 - Control Synthesis For LTI Systems
makes the closed-loop system asymptotically stable. Such a gain exists if and only if the pair (A, B) is stabilizable, i.e.

rank[λ_i I − A | B] = n    (4.4)

for every eigenvalue λ_i of A with Re(λ_i) ≥ 0. We then apply the same reasoning used for the system stability analysis of Chapter 3. To this end, we consider a quadratic Lyapunov function V(x) = x^T P x. It is found that the closed-loop system is asymptotically stable if and only if

∃P = P^T > 0, P ∈ R^{n×n}, ∃K ∈ R^{m×n}  such that  P(A + BK) + (A + BK)^T P < 0    (4.9)
If K is given, (4.9) is an LMI feasibility test: if it is satisfied, then K is a stabilizing control law. On the contrary, if K is unknown and has to be designed, inequality (4.9) is non-linear because it contains the product of the unknowns K and P (see the product PBK). Thus, a change of variables is required that makes inequality (4.9) linear in the unknowns. To this end, consider the following change of variables (due to Bernussou)
X = P^{-1},  Y = KX    (4.10)

Pre- and post-multiplying (4.9) by P^{-1} gives

(A + BK)P^{-1} + P^{-1}(A + BK)^T < 0,   X := P^{-1}    (4.12)
⇓
AX + B(KX) + XA^T + (KX)^T B^T < 0,   Y := KX    (4.13)
∃X = X^T > 0, X ∈ R^{n×n}, ∃Y ∈ R^{m×n}  such that  (AX + BY) + (AX + BY)^T < 0    (4.14)

K = Y X^{-1}    (4.15)
In the LMI literature, the change of variables (4.10) is well known; it helped establish the LMI approach in the control community during the 1980s and 1990s for solving different control synthesis problems.
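When a candidate gain is available, the feasibility test (4.9) can be checked numerically without an SDP solver, because for a fixed stable A + BK a suitable P is obtained from a Lyapunov equation. The sketch below (Python with NumPy/SciPy) illustrates this; the double-integrator data and the LQR-based gain are illustrative assumptions, not taken from the text:

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Hypothetical plant: double integrator (unstable open loop)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

# Any stabilizing gain works for the test; here one from an LQR design
P_are = solve_continuous_are(A, B, np.eye(2), np.eye(1))
K = -B.T @ P_are                     # u = Kx  (R = I)

Acl = A + B @ K
# Feasibility test (4.9): find P = P^T > 0 with P*Acl + Acl^T*P < 0,
# e.g. by solving the Lyapunov equation Acl^T P + P Acl = -I
P = solve_continuous_lyapunov(Acl.T, -np.eye(2))
assert np.all(np.linalg.eigvalsh(P) > 0)                # P > 0
M = P @ Acl + Acl.T @ P
assert np.all(np.linalg.eigvalsh((M + M.T) / 2) < 0)    # LMI (4.9) holds
```

If either assertion failed, this quadratic Lyapunov test would not certify K as stabilizing.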
4.4 D-stabilizability conditions
It is of interest to determine LMI conditions under which it is possible to design a state-feedback control law
u(t) = Kx(t), K ∈ Rm×n (4.16)
such that the closed-loop system
ẋ(t) = (A + BK)x(t) (4.17)
has all eigenvalues in the LMI region
R = {z ∈ C : f_R(z) < 0},   f_R(z) = L + M z + M^T z̄    (4.18)
with
L = LT ∈ Rs×s , M ∈ Rs×s (4.19)
L = [λkl ], M = [µkl ] (4.20)
By repeating the same arguments used in the R-stability analysis of Chapter 3, one concludes that the system (4.17) is R-stabilizable if and only if there exists P = P^T > 0 such that

[λ_kl P^{-1} + µ_kl (A + BK)P^{-1} + µ_lk P^{-1}(A + BK)^T]_{1≤k,l≤s} < 0    (4.23)
Finally, by using the change of variables (4.10), one arrives at the following result.

Theorem 4.4.1 (D-stabilizability). The D-stabilizability problem (4.16)-(4.20) has a solution if and only if there exist X = X^T > 0 and Y ∈ R^{m×n} such that

[λ_kl X + µ_kl (AX + BY) + µ_lk (AX + BY)^T]_{1≤k,l≤s} < 0

In that case, the controller gain is K = Y X^{-1}.
2αX + (AX + BY) + (AX + BY)^T < 0    (half-plane Re(z) < −α)

[ −rX           AX + BY ]
[ (AX + BY)^T   −rX     ] < 0    (disk of radius r)

[ [(AX + BY) + (AX + BY)^T] sin θ    [(AX + BY) − (AX + BY)^T] cos θ ]
[ [(AX + BY)^T − (AX + BY)] cos θ    [(AX + BY)^T + (AX + BY)] sin θ ] < 0    (conic sector of half-angle θ)
This control design framework is very general and encompasses many control design problems of practical interest. For example, consider the plant

ẋ = Ax + Bu + F d
y = C_y x + D_y u
z = C_z x + D_z u + D_d d

with output-feedback controller u = K(s)y, or the basic system

ẋ = Ax + Bu
y = Cx + Du
Figure 4.6: Block diagram
u(t) = [K_e  K_x] [x_e ; x]

The integral effect ensures that e(t) → 0 as t → ∞ when r(t) ≡ r̄ for all t. The gains K_e and K_x can also be chosen to minimize the tracking error during the transients, by taking the minimization of ‖e‖₂ or ‖e‖∞ as the objective of the control design.
Open-Loop System    (4.24)

ẋ = Ax + B1 u + B2 ω
z∞ = C1 x + D11 ω + D12 u
z2 = C2 x + D22 u
z1 = C3 x + D31 ω + D32 u

Controller    (4.25)

u = Kx

Closed-Loop System    (4.26)

ẋ = (A + B1 K)x + B2 ω
z∞ = (C1 + D12 K)x + D11 ω
z2 = (C2 + D22 K)x
z1 = (C3 + D32 K)x + D31 ω
Notice that the transfer matrices T∞, T2 and T1 all depend on the controller gain K and map the disturbance ω(t) to the objectives in closed loop:

z∞(t) = T∞(s)ω(t)
z2(t) = T2(s)ω(t)
z1(t) = T1(s)ω(t)

Notice that T∞(s), T2(s) and T1(s) are here to be interpreted as ARMA models, with s = d/dt the derivative operator. Then, it makes sense to consider the following optimal control design problems:
H∞ optimal control:  K∞* = arg min_{K∈S} ‖T∞‖_{H∞}    (4.29)

H2 optimal control:  K2* = arg min_{K∈S} ‖T2‖_{H2}    (4.30)

L1 optimal control:  K1* = arg min_{K∈S} ‖T1‖_{L1}    (4.31)
Similarly, in each case the controller gain is recovered as K = Y X^{-1}.
Proof. We start from the LMI conditions (3.34) derived for the H∞ analysis and apply them to the closed-loop system (4.26). It is found that there exist X = X^T > 0 and Y such that

(3.35) ⇒
[ (AX + B1Y) + (AX + B1Y)^T   B2     (C1X + D12Y)^T ]
[ B2^T                        −γ²I   D11^T          ]
[ C1X + D12Y                  D11    −I             ] < 0

and

(3.55) ⇒
[ (AX + B1Y) + (AX + B1Y)^T + B2B2^T   *               ]
[ (C1X + D12Y) + D11B2^T               −γ²I + D11D11^T ] < 0
H∞ Optimal control. The H∞ optimal gain is then obtained by solving the following optimization problem:

[X*, Y*] = arg min_{X,Y,γ} γ

subject to

[ (AX + B1Y) + (AX + B1Y)^T   B2    (C1X + D12Y)^T ]
[ B2^T                        −γI   D11^T          ]
[ C1X + D12Y                  D11   −γI            ] < 0

X = X^T > 0
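The γ returned by such a program can be cross-checked against the closed-loop H∞ norm computed independently, e.g. by bisection with the standard Hamiltonian imaginary-axis test. A sketch for the strictly proper case D = 0 (the scalar system below is an illustrative assumption):

```python
import numpy as np

def hinf_norm(A, B, C, tol=1e-8):
    """H-infinity norm of C(sI - A)^{-1}B, A Hurwitz, D = 0:
    ||T|| < g  iff  the Hamiltonian H(g) has no imaginary-axis eigenvalues."""
    def gamma_ok(g):
        H = np.block([[A, (B @ B.T) / g],
                      [-(C.T @ C) / g, -A.T]])
        return not np.any(np.abs(np.linalg.eigvals(H).real) < 1e-9)

    lo, hi = 1e-12, 1.0
    while not gamma_ok(hi):          # grow the upper bound past the norm
        hi *= 2.0
    while hi - lo > tol:             # bisect down to the norm
        mid = 0.5 * (lo + hi)
        if gamma_ok(mid):
            hi = mid
        else:
            lo = mid
    return hi

# Example: G(s) = 6/(s + 1) has H-infinity norm 6 (peak at omega = 0)
A = np.array([[-1.0]]); B = np.array([[2.0]]); C = np.array([[3.0]])
assert abs(hinf_norm(A, B, C) - 6.0) < 1e-5
```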
For the H2 sub-optimal synthesis, two equivalent formulations can be given.

1. ∃X = X^T > 0, Q = Q^T > 0, Y s.t.

(AX + B1Y) + (AX + B1Y)^T + B2B2^T < 0

[ Q                C2X + D22Y ]
[ (C2X + D22Y)^T   X          ] > 0

tr{Q} < ν²

2. ∃X = X^T > 0, Q = Q^T > 0, Y s.t.

[ (AX + B1Y) + (AX + B1Y)^T   B2 ]
[ B2^T                        −I ] < 0    (4.37)

[ Q                C2X + D22Y ]
[ (C2X + D22Y)^T   X          ] > 0

tr{Q} < ν²
If a solution X, Y exists, then the sub-optimal H2 controller is given by K = Y X^{-1}. Note that

(A + B1K)X + X(A + B1K)^T + B2B2^T < 0  ⇔  (AX + B1Y) + (AX + B1Y)^T + B2B2^T < 0

and the norm bound is enforced by tr{Q} < ν².
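The trace condition reflects the Gramian characterization of the H2 norm, ‖T2‖²_{H2} = tr{C P_c C^T} with A P_c + P_c A^T + B B^T = 0, which is what the LMIs bound from above. A quick numerical sketch (the system matrices are made-up examples):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Example: G(s) = 1/(s+1) + 1/(s+2)
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])

# Controllability Gramian: A Pc + Pc A^T + B B^T = 0
Pc = solve_continuous_lyapunov(A, -B @ B.T)
h2_sq = float(np.trace(C @ Pc @ C.T))
# Analytic value for this example: 1/2 + 1/4 + 2*(1/3) = 17/12
assert abs(h2_sq - 17.0 / 12.0) < 1e-8
```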
The second formulation can be proved by considering that the Schur complements of

[ (AX + B1Y) + (AX + B1Y)^T   B2 ]
[ B2^T                        −I ] < 0

are

−I < 0
(AX + B1Y) + (AX + B1Y)^T − B2(−I)^{-1}B2^T < 0
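The equivalence between the block matrix and its Schur complement can be sanity-checked numerically; in the sketch below, the matrices are arbitrary illustrative data:

```python
import numpy as np

# Check: [[A, B], [B^T, -I]] < 0  iff  A - B(-I)^{-1}B^T = A + B B^T < 0
A = np.array([[-3.0, 0.0], [0.0, -4.0]])
B = np.array([[1.0], [1.0]])

Mblk = np.block([[A, B], [B.T, -np.eye(1)]])
S = A + B @ B.T
assert np.all(np.linalg.eigvalsh(Mblk) < 0)   # block LMI holds
assert np.all(np.linalg.eigvalsh(S) < 0)      # and so does its Schur complement
```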
H2 Optimal control. Notice also that, because tr{Q} is a linear function of Q and ρ enters only in the last condition tr{Q} < ρ, the above optimization problem can also be rewritten as

[X*, Y*] = arg min_{X,Y,Q} tr{Q}

subject to

[ (AX + B1Y) + (AX + B1Y)^T   B2 ]
[ B2^T                        −I ] < 0

[ Q                C2X + D22Y ]
[ (C2X + D22Y)^T   X          ] > 0

X = X^T > 0,  Q = Q^T > 0
4.8.2 Riccati-based LQG optimal control

Consider the system ẋ = Ax + B1u + B2w with state-feedback control law

u(t) = Kx(t)

Assumptions:

1. (A, B1) stabilizable
2. Q = Q^T ≥ 0, R = R^T > 0
3. w(t) is a Gaussian white noise vector with zero mean and covariance W = W^T > 0

The solution is given by

K = −R^{-1}B1^T P

where P = P^T > 0 is the solution of the Algebraic Riccati Equation (ARE)

A^T P + PA − PB1R^{-1}B1^T P + Q = 0

It is found that:

1. A + B1K is asymptotically stable
2. J_min = tr{P B2 W B2^T}
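With SciPy, this Riccati-based solution is one call; the sketch below (plant data chosen only for illustration) verifies the stated properties:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])      # hypothetical plant
B1 = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

P = solve_continuous_are(A, B1, Q, R)
K = -np.linalg.solve(R, B1.T @ P)            # K = -R^{-1} B1^T P

# 1. A + B1 K is asymptotically stable
assert np.all(np.linalg.eigvals(A + B1 @ K).real < 0)
# 2. P solves the ARE: A^T P + P A - P B1 R^{-1} B1^T P + Q = 0
res = A.T @ P + P @ A - P @ B1 @ np.linalg.solve(R, B1.T @ P) + Q
assert np.max(np.abs(res)) < 1e-8
```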
Then, in order to verify that a bound exists in this case, we can apply Lyapunov's criterion. Observe also that asymptotic stability ensures that the Algebraic Lyapunov Equation

A^T P̄ + P̄ A = −Q

has a unique solution P̄ = P̄^T > 0 associated to Q. This means that, for any P satisfying the inequality version, we have

A^T P̄ + P̄ A + Q = 0,  A^T P + PA + Q ≤ 0  ⇒  A^T(P − P̄) + (P − P̄)A ≤ 0

and necessarily

P − P̄ ≥ 0

by the asymptotic stability; moreover, P̄ is unique.
We now want to repeat the same arguments for the controlled system ẋ = (A + BK)x, u = Kx, with cost

J = ∫₀^∞ (x^T(t)Qx(t) + u^T(t)Ru(t)) dt = ∫₀^∞ x^T(t)[Q + K^T RK]x(t) dt    (4.48)
with Q = QT > 0 and R = RT > 0. By repeating the same arguments of
the previous open-loop case, one finds
J ≤ xT0 P x0 (4.50)
Then, the matrices P and K that minimize the cost can be found by solving

min_{P,K} γ

subject to

P = P^T > 0
(A + BK)^T P + P(A + BK) + Q + K^T RK < 0    (4.49)
x₀^T P x₀ ≤ γ
Note that (4.49) is not an LMI because of the nonlinear products K^T RK, PBK and K^T B^T P. A way to arrive at an LMI formulation is to use the usual change of variables

X = P^{-1},  KX = Y    (4.51)

Pre- and post-multiplying (4.49) by X gives

X(A^T + K^T B^T) + (A + BK)X + XQX + XK^T RKX < 0
⇓
(AX + BY) + (AX + BY)^T + XQX + Y^T RY < 0    (4.52)
Next, the left-hand side of (4.52) can be seen as the Schur complement of the matrix

[ (AX + BY) + (AX + BY)^T   X        Y^T     ]
[ X                         −Q^{-1}  0       ]
[ Y                         0        −R^{-1} ] < 0    (4.53)

with respect to the block

[ −Q^{-1}   0       ]
[ 0         −R^{-1} ] < 0    (4.54)
Moreover, the condition

x₀^T P x₀ ≤ γ  ⇔  x₀^T X^{-1} x₀ ≤ γ

can be written as

[ γ    x₀^T ]
[ x₀   X    ] > 0    (4.55)
In conclusion, the LQ optimal controller can be computed by solving

[X*, Y*] = arg min_{X,Y} γ

subject to

[ (AX + B1Y) + (AX + B1Y)^T   X        Y^T     ]
[ X                           −Q^{-1}  0       ]
[ Y                           0        −R^{-1} ] < 0

[ γ    x₀^T ]
[ x₀   X    ] > 0

X = X^T > 0

K_LQ = Y* X*^{-1}
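For this LQ problem, the Riccati solution provides a point on the boundary of the LMI (4.53): with X = P^{-1} and Y = KX from the ARE, the Schur complement of (4.53) is exactly zero. A numerical sketch (plant data assumed for illustration):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

P = solve_continuous_are(A, B, Q, R)
K = -np.linalg.solve(R, B.T @ P)   # u = Kx
X = np.linalg.inv(P)               # X = P^{-1}
Y = K @ X                          # Y = KX

S = (A @ X + B @ Y) + (A @ X + B @ Y).T
M = np.block([[S, X, Y.T],
              [X, -np.linalg.inv(Q), np.zeros((2, 1))],
              [Y, np.zeros((1, 2)), -np.linalg.inv(R)]])
# The ARE makes the Schur complement of (4.53) exactly zero, so M is
# negative semidefinite (on the boundary of the strict LMI)
assert np.max(np.linalg.eigvalsh((M + M.T) / 2)) < 1e-8
```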
A further formulation can be found by observing that

x₀^T P x₀ = tr{x₀^T X^{-1} x₀} = tr{X^{-1} x₀ x₀^T}    (4.56)

and again K_LQ = Y* X*^{-1}.
4.10 Solution via LMI - the LQG case
(A + B1 K)T P + P (A + B1 K) + Q + K T RK < 0
and
J ≤ tr P B2 W B2T
Then, with the usual change of variables (4.51), we arrive at the same LMI (4.53). Moreover, by setting

Z > B2^T X^{-1} B2,   X^{-1} = P

one has tr{ZW} > tr{B2^T P B2 W} ≥ J. Moreover, by the Schur complement,

Z > B2^T X^{-1} B2  ⇔  [ Z    B2^T ]
                       [ B2   X    ] > 0
Then, the optimization problem for the LQG synthesis becomes:

[X*, Y*] = arg min_{X,Y,Z} tr{ZW}

subject to

[ (AX + B1Y) + (AX + B1Y)^T   X        Y^T     ]
[ X                           −Q^{-1}  0       ]
[ Y                           0        −R^{-1} ] < 0

[ Z    B2^T ]
[ B2   X    ] > 0

X = X^T > 0,  Z = Z^T > 0

K_LQG = Y* X*^{-1}
As a final remark, observe that

[ (AX + B1Y) + (AX + B1Y)^T   X        Y^T     ]
[ X                           −Q^{-1}  0       ]
[ Y                           0        −R^{-1} ] < 0

can be rewritten as

[ (AX + B1Y) + (AX + B1Y)^T   XQ^{1/2}   Y^T R^{1/2} ]
[ Q^{1/2}X                    −I         0           ]
[ R^{1/2}Y                    0          −I          ] < 0

where

Q^{1/2} Q^{1/2} = Q ≥ 0
R^{1/2} R^{1/2} = R > 0
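Here Q^{1/2} and R^{1/2} denote symmetric matrix square roots, computable e.g. with scipy.linalg.sqrtm (the matrix below is a made-up positive definite example):

```python
import numpy as np
from scipy.linalg import sqrtm

Q = np.array([[4.0, 1.0], [1.0, 3.0]])       # Q = Q^T > 0 (example)
Qh = np.real(sqrtm(Q))                       # symmetric Q^{1/2}
assert np.allclose(Qh @ Qh, Q)               # Q^{1/2} Q^{1/2} = Q
assert np.all(np.linalg.eigvalsh(Qh) > 0)    # Q^{1/2} > 0
```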
This cost is more general than (4.58) because it also involves mixed products.

where P = P^T > 0 is the unique solution of the Algebraic Lyapunov Equation
Moreover, for any control law K such that (A + B1K) is asymptotically stable, there exists P = P^T > 0 that satisfies the corresponding inequality, and

P − P̄ > 0  ⇒  tr{P B2 W B2^T} ≥ J    (4.65)

Expression (4.64) can easily be converted into an LMI. Specifically, apply the congruence transformation diag(P^{-1}, I) to

[ (A + B1K)^T P + P(A + B1K)   (Cz + DzuK)^T ]
[ Cz + DzuK                    −I            ] < 0
P^{-1} = X,  KX = Y
⇓
[ (AX + B1Y)^T + (AX + B1Y)   (CzX + DzuY)^T ]
[ CzX + DzuY                  −I             ] < 0    (4.66)
and, by introducing a new matrix Z = Z^T > 0 such that

Z > B2^T X^{-1} B2  ⇔  [ Z    B2^T ]
                       [ B2   X    ] > 0

one finds

tr{ZW} ≥ tr{B2^T X^{-1} B2 W} > J

In conclusion: the H2 optimal control law

u(t) = Kx,   K = Y* X*^{-1}

stabilizes the system and minimizes the cost function

J = lim_{t→∞} E[z^T(t) z(t)]
[ (AX + B1Y) + (AX + B1Y)^T + λX   B2  ]
[ B2^T                             −µI ] < 0    (4.67)

[ λX            0          (C3X + D32Y)^T ]
[ 0             (ζ − µ)I   D31^T          ]
[ C3X + D32Y    D31        ζI             ] > 0    (4.68)

The controller, if it exists, is given by K = Y X^{-1}.
Proof. The proof follows the same lines of the other optimal control problems. We start from (3.63)-(3.64), which we apply to the closed-loop system (4.24)-(4.27). It is found

[ (A + B1K)^T P + P(A + B1K) + λP   PB2 ]
[ B2^T P                            −µI ] < 0    (4.69)

[ λP           0          (C3 + D32K)^T ]
[ 0            (ζ − µ)I   D31^T         ]
[ C3 + D32K    D31        ζI            ] > 0    (4.70)

By transforming (4.69) and (4.70) via the congruence transformations

diag(P^{-1}, I) (4.69) diag(P^{-1}, I) < 0
diag(P^{-1}, I, I) (4.70) diag(P^{-1}, I, I) > 0

and denoting X = P^{-1} and Y = KX, one arrives at (4.67) and (4.68).
with
L = LT ∈ Rs×s , M ∈ Rs×s
L = [λkl ], M = [µkl ]
Lemma 4.13.1 (Allocation lemma). All eigenvalues of (A + B1K) belong to R if and only if ∃X = X^T > 0, ∃Y ∈ R^{m×n} such that

M_R(A, B1, X, Y) = L ⊗ X + M ⊗ (AX + B1Y) + M^T ⊗ (AX + B1Y)^T < 0    (4.72)

where the generic (k, l) block of (4.72) is given by

M_R(A, B1, X, Y) = [λ_kl X + µ_kl (AX + B1Y) + µ_lk (AX + B1Y)^T]_{1≤k,l≤s} < 0    (4.73)
Figure 4.8: R-region
Example 4.4.
R = {z : 2α + z + z̄ < 0}
4.14 Multi-objective control
The previous LMI conditions describing the optimization problems underlying the synthesis of the various controllers are all convex in the variables X and Y:

‖T∞‖_{H∞} < γ,  ‖T2‖_{H2} < ν,  ‖T1‖_{L1} < ζ,  λ_i(A + B1K) ∈ R

Then, they can be combined, because the intersection of convex constraints is still a convex constraint.
2. H∞ + R-stability:

K*_{H∞/R} = arg min_{K∈RS} ‖T∞‖_{H∞}

3. H∞/H2 multi-objective ("the ε-constrained approach", with a > 0 a given scalar):

K*_{H∞/H2} = arg min_{K∈S} ‖T∞‖_{H∞}  subject to  ‖T2‖_{H2} < a
Formulations:

1. [X*, Y*] = arg min_{X,Y,Q} aγ + bν  (given a > 0, b > 0)

subject to

[ (AX + B1Y) + (AX + B1Y)^T   B2    (C1X + D12Y)^T ]
[ B2^T                        −γI   D11^T          ]
[ C1X + D12Y                  D11   −γI            ] < 0

[ (AX + B1Y) + (AX + B1Y)^T   B2 ]
[ B2^T                        −I ] < 0

[ Q                C2X + D22Y ]
[ (C2X + D22Y)^T   X          ] > 0

X = X^T > 0,  Q = Q^T > 0,  tr{Q} < ν²
K*_{H∞/H2} = Y* X*^{-1}
2. Given R = {z : L + Mz + M^T z̄ < 0}:

[X*, Y*] = arg min_{X,Y} γ

subject to

[ (AX + B1Y) + (AX + B1Y)^T   B2    (C1X + D12Y)^T ]
[ B2^T                        −γI   D11^T          ]
[ C1X + D12Y                  D11   −γI            ] < 0

[λ_kl X + µ_kl (AX + B1Y) + µ_lk (AX + B1Y)^T]_{1≤k,l≤s} < 0

X = X^T > 0

K*_{H∞/R} = Y* X*^{-1}
3. [X*, Y*] = arg min_{X,Y,Q} γ  (given a)

subject to

[ (AX + B1Y) + (AX + B1Y)^T   B2    (C1X + D12Y)^T ]
[ B2^T                        −γI   D11^T          ]
[ C1X + D12Y                  D11   −γI            ] < 0

[ (AX + B1Y) + (AX + B1Y)^T   B2 ]
[ B2^T                        −I ] < 0

[ Q                C2X + D22Y ]
[ (C2X + D22Y)^T   X          ] > 0

tr{Q} < a²,  X = X^T > 0,  Q = Q^T > 0

K*_{H∞/H2} = Y* X*^{-1}
4.15 Static and dynamic output feedback
4.15.1 Static output feedback
Consider the system
ẋ(t) = Ax(t) + Bu(t),  x(0) = x₀
y(t) = Cx(t) + Du(t)    (4.75)

and determine K, if it exists, such that the static output-feedback law u = Ky stabilizes the closed loop. Notice that in deriving (4.79) from (4.78) we have used the relationship (valid for symmetric matrices)

(∀i) λ_i(A) < γ  ⇔  A < γI

In most practical cases, one chooses γ1 = ∞ and solves

γ* = min_K γ
s.t. (A + BKC) + (A + BKC)^T < −2γI
Open-loop (x ∈ R^n, u ∈ R^m, y ∈ R^p)

ẋ = Ax + B1u + B2w
z∞ = C1x + D11w + D12u
z2 = C2x + D21w + D22u    (4.81)
z1 = C3x + D31w + D32u
y = Cx + Dw

Controller (ξ ∈ R^n)

ξ̇ = Ak ξ + Bk y
u = Ck ξ + Dk y    (4.82)

By defining the extended state

x_cl = [x ; ξ] ∈ R^{2n}    (4.83)

the following closed-loop system is obtained:

[ẋ ; ξ̇] = [ A + B1DkC   B1Ck ; BkC   Ak ] [x ; ξ] + [ B1DkD + B2 ; BkD ] w

z∞ = [C1 + D12DkC   D12Ck] [x ; ξ] + (D11 + D12DkD)w    (4.84)

z2 = [C2 + D22DkC   D22Ck] [x ; ξ] + (D21 + D22DkD)w

z1 = [C3 + D32DkC   D32Ck] [x ; ξ] + (D31 + D32DkD)w

which can be rewritten in the following compact form:

ẋ_cl = A_cl x_cl + B_cl w
z∞ = C̄1 x_cl + D̄1 w
z2 = C̄2 x_cl + D̄2 w    (D̄2 = 0 is required)    (4.85)
z1 = C̄3 x_cl + D̄3 w
Then, we can derive LMI conditions characterizing the various synthesis problems by starting from the LMI conditions derived for the analysis. In particular, in all the above cases we need to arrive at LMI conditions that are linear in the controller variables Ak, Bk, Ck, Dk and P. To this end, we use the matrix-completion results seen in Chapter 2 and consider the matrix P ∈ R^{2n×2n} partitioned as

P = [ X    N ; N^T   * ],   P^{-1} = [ Y    M ; M^T   * ]    (4.90)
Moreover, by defining

Πy = [ Y    I_n ; M^T   0_n ] ∈ R^{2n×2n},   Πx = [ I_n   X ; 0_n   N^T ] ∈ R^{2n×2n}    (4.92)

it is found that

P Πy = Πx    (4.93)
⇓
[ X    N ; N^T   * ] [ Y    I_n ; M^T   0_n ] = [ I_n   X ; 0_n   N^T ]
• H∞

[ Πy   0   0 ]^T [ A_cl^T P + P A_cl   P B_cl   C̄1^T ] [ Πy   0   0 ]
[ 0    I   0 ]   [ B_cl^T P            −γI      D̄1^T ] [ 0    I   0 ] < 0
[ 0    0   I ]   [ C̄1                 D̄1      −γI   ] [ 0    0   I ]
⇓
[ Πy^T P A_cl Πy + Πy^T A_cl^T P Πy   Πy^T P B_cl   (C̄1 Πy)^T ]
[ (Πy^T P B_cl)^T                     −γI           D̄1^T      ]
[ C̄1 Πy                              D̄1           −γI        ] < 0    (4.94)

Now, it is worth noting the identities (4.102)-(4.109) listed below, which make every block of (4.94) affine in the transformed controller variables.
• H2

We can act similarly on the conditions (4.87) for the H2 optimization. In this case, one has

[ Πy   0 ]^T [ A_cl^T P + P A_cl   P B_cl ] [ Πy   0 ]
[ 0    I ]   [ B_cl^T P            −I     ] [ 0    I ] < 0
⇓
[ Πy^T P A_cl Πy + (Πy^T P A_cl Πy)^T   Πy^T P B_cl ]
[ (Πy^T P B_cl)^T                       −I          ] < 0    (4.97)

and

[ I   0  ]^T [ Q       C̄2 ] [ I   0  ]   [ Q            C̄2 Πy     ]
[ 0   Πy ]   [ C̄2^T   P  ] [ 0   Πy ] = [ (C̄2 Πy)^T   Πy^T P Πy ] > 0    (4.98)
• L1

Again, we transform the condition (4.88) in the following way:

[ Πy   0 ]^T [ A_cl^T P + P A_cl + λP   P B_cl ] [ Πy   0 ]
[ 0    I ]   [ B_cl^T P                 −µI    ] [ 0    I ] < 0
⇓
[ (Πy^T P A_cl Πy) + (Πy^T P A_cl Πy)^T + λ Πy^T P Πy   Πy^T P B_cl ]
[ (Πy^T P B_cl)^T                                       −µI         ] < 0    (4.99)

and

[ Πy   0   0 ]^T [ λP    0          C̄3^T ] [ Πy   0   0 ]
[ 0    I   0 ]   [ 0     (ζ − µ)I   D̄3^T ] [ 0    I   0 ] > 0
[ 0    0   I ]   [ C̄3   D̄3        ζI    ] [ 0    0   I ]
⇓
[ λ Πy^T P Πy   0          (C̄3 Πy)^T ]
[ 0             (ζ − µ)I   D̄3^T      ]
[ C̄3 Πy        D̄3        ζI         ] > 0    (4.100)

For the R-region constraint, similarly,

[ λ_kl Πy^T P Πy + µ_kl (Πy^T P A_cl Πy) + µ_lk (Πy^T P A_cl Πy)^T ]_{1≤k,l≤s} < 0    (4.101)
1.
Πy^T P A_cl Πy = Πx^T A_cl Πy = [ AY + B1Ĉk   A + B1D̂kC ; Âk   XA + B̂kC ]    (4.102)

2.

3.
Πy^T P Πy = Πx^T Πy = [ Y   I ; I   X ]    (4.104)

4.
Πy^T P B_cl = Πx^T B_cl = [ B2 + B1D̂kD ; XB2 + B̂kD ]    (4.105)

5.

6.
C̄1 Πy = [ C1Y + D12Ĉk   C1 + D12D̂kC ]    (4.107)

7.
C̄2 Πy = [ C2Y + D22Ĉk   C2 + D22D̂kC ]    (4.108)

8.
C̄3 Πy = [ C3Y + D32Ĉk   C3 + D32D̂kC ]    (4.109)
Lemma 4.16.2 (H2 sub-optimality). A_cl is asymptotically stable and ‖T2‖_{H2} < ν iff ∃X = X^T > 0, ∃Y = Y^T > 0, ∃Q = Q^T > 0, ∃Âk, B̂k, Ĉk, D̂k that jointly solve

[ (AY + B1Ĉk) + (AY + B1Ĉk)^T   Âk^T + (A + B1D̂kC)          B2 + B1D̂kD ]
[ Âk + (A + B1D̂kC)^T           (XA + B̂kC) + (XA + B̂kC)^T   XB2 + B̂kD  ]
[ (B2 + B1D̂kD)^T               (XB2 + B̂kD)^T                −I          ] < 0    (4.115)

tr{Q} < ν²
Lemma 4.16.3 (L1 sub-optimality). A_cl is asymptotically stable and ‖T1‖_{L1} < ζ if ∃X = X^T > 0, ∃Y = Y^T > 0, ∃Âk, B̂k, Ĉk, D̂k, λ, µ > 0 that jointly solve

[ (AY + B1Ĉk) + (AY + B1Ĉk)^T + λY   Âk^T + (A + B1D̂kC) + λI          B2 + B1D̂kD ]
[ Âk + (A + B1D̂kC)^T + λI           (XA + B̂kC) + (XA + B̂kC)^T + λX   XB2 + B̂kD  ]
[ (B2 + B1D̂kD)^T                    (XB2 + B̂kD)^T                     −µI         ] < 0

[ λY             λI             0               (C3Y + D32Ĉk)^T  ]
[ λI             λX             0               (C3 + D32D̂kC)^T  ]
[ 0              0              (ζ − µ)I        (D31 + D32D̂kD)^T ]
[ C3Y + D32Ĉk   C3 + D32D̂kC   D31 + D32D̂kD   ζI               ] > 0
Lemma 4.16.4 (R-stabilizability). All eigenvalues of A_cl belong to the LMI region

R = {z : L + Mz + M^T z̄ < 0}

iff there exist X = X^T > 0, Y = Y^T > 0, Âk, B̂k, Ĉk and D̂k such that the following LMI holds:

M_R = [ λ_kl [ Y   I ; I   X ] + µ_kl [ AY + B1Ĉk   A + B1D̂kC ; Âk   XA + B̂kC ]
      + µ_lk [ (AY + B1Ĉk)^T   Âk^T ; (A + B1D̂kC)^T   (XA + B̂kC)^T ] ]_{1≤k,l≤s} < 0

All the previous sub-optimality conditions can be turned into optimal synthesis methods by minimizing the norm bounds under the corresponding LMI conditions. This is left to the reader.
4.17 Controller computation

When X, Y, Âk, B̂k, Ĉk and D̂k have been found, there remains the problem of finding the controller matrices Ak, Bk, Ck, Dk. To this end, when N and M are square invertible matrices, this can be done easily by considering the inverse relationships

Dk = D̂k    (4.117)
Ck = (Ĉk − DkCY)M^{−T}    (4.118)
Bk = N^{−1}(B̂k − XB1Dk)    (4.119)
Ak = N^{−1}[Âk − NBkCY − XB1CkM^T − X(A + B1DkC)Y]M^{−T}    (4.120)

A way to force N and M to be invertible consists of adding the LMI

[ Y   I ]
[ I   X ] > 0    (4.121)

to the synthesis problem, if it is not already present among the other LMIs. In fact, the relationship

P Πy = Πx  ⇔  [ X   N ; N^T   * ] [ Y   I ; M^T   0 ] = [ I   X ; 0   N^T ]

implies

XY + NM^T = I  ⇒  NM^T = I − XY

and (4.121) guarantees that I − XY is nonsingular, so invertible factors N and M exist.
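The factorization N M^T = I − XY and the completion identity (4.93) can be reproduced numerically. In the sketch below, X and Y are chosen so that (4.121) holds, and M = I is the free choice (all data illustrative):

```python
import numpy as np

n = 3
# X, Y symmetric with [[Y, I], [I, X]] > 0 (illustrative choice)
X = 2.0 * np.eye(n)
Y = 2.0 * np.eye(n)

M = np.eye(n)                  # free choice of M (square, invertible)
N = np.eye(n) - X @ Y          # from N M^T = I - XY

# Complete the (2,2) block of P so that P @ Pi_y = Pi_x holds
P22 = -N.T @ Y @ np.linalg.inv(M.T)
P = np.block([[X, N], [N.T, P22]])

Pi_y = np.block([[Y, np.eye(n)], [M.T, np.zeros((n, n))]])
Pi_x = np.block([[np.eye(n), X], [np.zeros((n, n)), N.T]])
assert np.allclose(P @ Pi_y, Pi_x)                 # identity (4.93)
assert np.allclose(N @ M.T, np.eye(n) - X @ Y)     # N M^T = I - XY
```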
4.18 Extensions
4.18.1 The use of frequency dynamical weights
From a practical ("engineering") point of view it can be advantageous to weight in frequency the exogenous signals w(t) and the objectives z(t). The frequency characterization of the exogenous signals, via suitable stable filters, allows a better specification of the class of exogenous signals of interest. Characterizing the objectives in frequency means specifying their relevance in different frequency ranges. Typically, LOW-PASS filters are chosen for the objectives. On the contrary, HIGH-PASS filters are used to weight the input, when it is included among the objectives.
The rationale for the choice of low-pass filters is that in many problems of industrial interest it is mostly important to minimize the objectives at low frequency. On the contrary, high-pass filters are used to define the working frequency range of the actuators: the idea is that where the magnitude of the high-pass filter is high, the control action will be low (it will not contain such frequencies).
with input the original disturbance w(t). Then, we say that w̃ is the filtered version of w(t). In a similar way, we consider the original objective vector z(t) as the filtered version of the objective vector z̃ of the physical model. Then, via the filter Wz(s), we can write
Notice that (4.123) and (4.124) are ARMA models in the derivative operator s = d/dt. Moreover, let

ẋw = Aw xw + Bw w
w̃ = Cw xw + Dw w    (4.126)

ẋz = Az xz + Bz z̃
z = Cz xz + Dz z̃    (4.127)

be state-space representations of the filters Ww(s) and Wz(s):

Ww(s) = Cw(sI − Aw)^{−1}Bw + Dw    (4.128)
Wz(s) = Cz(sI − Az)^{−1}Bz + Dz    (4.129)

Then, we want to include the filters in the physical model in order to arrive at an extended model that can be used directly for the synthesis via the LMI methodologies illustrated previously. To this end, we define an extended state composed of

x − state of the original model
xz − state of the filter Wz
xw − state of the filter Ww

Then, the overall state-space representation is given by

[ẋ ; ẋz ; ẋw] = [ A      0    E Cw ; Bz H   Az   Bz L Cw ; 0   0   Aw ] [x ; xz ; xw] + [ B ; Bz F ; 0 ] u + [ E Dw ; Bz L Dw ; Bw ] w    (4.130)

z = [ Dz H   Cz   Dz L Cw ] [x ; xz ; xw] + Dz F u + Dz L Dw w    (4.131)
Then, one may design a controller for the extended model with state

xe = [x ; xz ; xw]

• r − reference signals
• yr − reference output (yr ≈ r)
• e(t) = r(t) − yr(t) − tracking error

ẋp(t) = Ap xp(t) + Bp u(t) + Bw wp(t),   wp(t) disturbance
z(t) = Cz xp(t) + Dz u(t) + Fz wp(t)
yr(t) = Cr xp(t) + Dr u(t) + Fr wp(t)

Figure 4.16: Once the controller K is determined from the synthesis algorithms, it needs to be partitioned into Kff and Kfb

with

x = [xe ; xp],   w = [wp ; r]
In this case, it could make sense to add among the objectives

ze = [1   0] [xe ; xp] + 0·u + [0   0] [wp ; r]

In order to impose two eigenvalues at zero (a double integral action) in the loop, one can consider the following control scheme.
Figure 4.18: Control scheme
Figure 4.19: Once the controller K is determined from the synthesis algorithms, it needs to be partitioned into Kff(2), Kff(1) and Kfb
ẋp = Ap xp + Bp u + Bw wp
z = Cz xp + Dz u + Fz wp
yr = Cr xp + Dr u + Fr wp
yp = Cp xp + Dp u + Fp wp
Kff(s):  ẋff = Ak xff + Bff xe,   uff = Ck xff + Dff xe

Kfb(s):  ẋfb = Ak xfb + Bfb yp,   ufb = Ck xfb + Dfb yp

Figure 4.21: Once the controller matrices Bk and Dk have been determined from the synthesis algorithms, they need to be partitioned into Bff, Bfb and Dff, Dfb respectively,

by denoting

x = [xe ; xp],   w = [wp ; r],   y = [xe ; yp],   Bk = [Bff   Bfb],   Dk = [Dff   Dfb]
[ẋe ; ẋp] = [ 0   −Cr ; 0   Ap ] [xe ; xp] + [ −Dr ; Bp ] u + [ −Fr   1 ; Bw   0 ] [wp ; r]

z = [ 0   Cz ] [xe ; xp] + Dz u + [ Fz   0 ] [wp ; r]

y = [ 1   0 ; 0   Cp ] [xe ; xp] + [ 0 ; Dp ] u + [ 0   0 ; Fp   0 ] [wp ; r]

ξ̇ = Ak ξ + Bk y
u = uff + ufb = Ck ξ + Dk y
4.24 Internal Model Principle

The above examples are special cases of a more general theory known as the Internal Model Principle.

• The roots of D(s)A(s) + B(s)N(s) are the poles of the closed-loop system;
• The roots of D(s) and A(s) are the open-loop poles of the controller and plant respectively.

The assumption in equation (4.134) ensures that r(t) is not evanescent, that is, it consists of divergent or bounded modes, as is typical for reference signals

r(t) = sin ωt, cos ωt, e^{αt}, te^{αt}, ...

Then e(t) → 0 (t → ∞) is guaranteed when the eigenvalues of Ar are contained among the roots of D(s)A(s) = 0; equivalently, when they are poles of the loop transfer function

G(s) = (N(s)/D(s)) (B(s)/A(s))
The proof is based on the final value Lemma. Let f(t) ∈ C¹ (continuous with continuous derivative). Then, when the limit exists and is finite,

lim_{t→∞} f(t) = lim_{s→0} sF(s),   F(s) = L(f(t))

Assume that

L(r(t)) = R(s)/D(s),   R, D polynomials

It is found:

lim_{t→∞} e(t) = lim_{s→0} s · [D(s)A(s) / (D(s)A(s) + B(s)N(s))] · [R(s)/D(s)]    (4.135)

Because all roots of D(s) have Re(z_i) ≥ 0 while those of D(s)A(s) + B(s)N(s) have Re(z_i) < 0 by the asymptotic stability, the terms in D(s) in (4.135) cancel exactly (assuming they are not also contained in A(s)). Then, it is found

lim_{t→∞} e(t) = 0    (4.136)
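The cancellation is easy to observe in simulation: a controller embedding the internal model of a step (an integrator, i.e. D(s) = s) drives the error to zero. A minimal sketch with a hypothetical first-order plant 1/(s+1) and PI gains chosen for stability:

```python
import numpy as np
from scipy.integrate import solve_ivp

kp, ki = 2.0, 4.0                 # PI gains (closed loop: s^2 + 3s + 4, stable)

def f(t, x):
    y, xi = x                     # plant output, integrator state
    e = 1.0 - y                   # step reference r(t) = 1
    u = kp * e + ki * xi          # PI law: the 1/s term is the internal model
    return [-y + u, e]            # plant 1/(s+1); integrator accumulates e

sol = solve_ivp(f, [0.0, 20.0], [0.0, 0.0], rtol=1e-8, atol=1e-10)
assert abs(1.0 - sol.y[0, -1]) < 1e-4    # e(t) -> 0, as (4.136) predicts
```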
r(t) = sin(ωt)  ⇒  L(r(t)) = ω/(s² + ω²)

The signal satisfies the following state-space representation:

[r̈ ; ṙ] = [ 0   −ω² ; 1   0 ] [ṙ ; r]  ⇔  r̈(t) + ω² r(t) = 0

Then, by repeating the above arguments, one has that the following scheme provides a useful solution.
ẍe(t) = r(t) − yr(t) − ω² xe(t)
u(t) = Kff(2) ẋe(t) + Kff(1) xe(t) + Kfb xp(t)

[ẍe ; ẋe ; ẋp] = [ 0   −ω²   −Cr ; 1   0   0 ; 0   0   Ap ] [ẋe ; xe ; xp] + [ −Dr ; 0 ; Bp ] u + [ −Fr   1 ; 0   0 ; Bw   0 ] [wp ; r]

z = [ 0   0   Cz ] [ẋe ; xe ; xp] + Dz u + [ Fz   0 ] [wp ; r]

x = [ẋe ; xe ; xp],   w = [wp ; r]

Figure 4.24: Once the controller K is determined from the synthesis algorithms, it needs to be partitioned into Kff(2), Kff(1) and Kfb