Chapter 4 - Control Synthesis For LTI Systems

Chapter 4 discusses control synthesis for linear time-invariant (LTI) systems, focusing on static state-feedback control and stabilizability conditions. It introduces necessary conditions for stability using Lyapunov functions and linear matrix inequalities (LMIs), and presents theorems for stabilizability and D-stabilizability. The chapter also covers optimal control synthesis methods, including H∞, H2, and L1 control, providing a framework for addressing various control design problems.


Chapter 4

Control Synthesis for LTI Systems

4.1 Static State-feedback control

Figure 4.1: Static State-feedback control

The basic regulation synthesis problem, given the plant

ẋ(t) = Ax(t) + Bu(t) (4.1)

with u(t) ∈ R^m and x(t) ∈ R^n, consists of determining a matrix K ∈ R^{m×n} such that the control law

u(t) = Kx(t) (4.2)

makes the closed-loop system

ẋ(t) = (A + BK)x(t) (4.3)

asymptotically stable.

4.2 Stabilizability conditions


From Systems Theory we know that the conditions ensuring the existence of such a K consist of checking whether all eigenvalues of A with Re(λi) ≥ 0 satisfy the PBH reachability test for stabilizability

rank[λi I − A | B] = n (4.4)

This condition amounts to checking that all eigenvalues of the non-reachable subsystem are asymptotically stable (Re(λi) < 0). In fact, it can be proved that the matrix

[λI − A | B]

can lose rank only when λ is a non-reachable eigenvalue. Notice that the synthesis problem always has a solution if the system is stabilizable.
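As a quick numeric illustration (a sketch using NumPy; the matrices A, B below are illustrative, not from the text), the PBH test (4.4) can be evaluated directly by checking the rank of [λi I − A | B] at every eigenvalue with non-negative real part:

```python
import numpy as np

def is_stabilizable(A, B, tol=1e-9):
    """PBH test: rank[lambda_i*I - A | B] = n for every eigenvalue with Re >= 0."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        if lam.real >= 0:
            M = np.hstack([lam * np.eye(n) - A.astype(complex), B.astype(complex)])
            if np.linalg.matrix_rank(M, tol) < n:
                return False
    return True

# Unstable but stabilizable pair: the unstable mode (eigenvalue +1) is reachable.
A = np.array([[1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [0.0]])

# Same A, but this input excites only the stable mode: not stabilizable.
B_bad = np.array([[0.0], [1.0]])
```

Here only the eigenvalues with Re(λi) ≥ 0 are tested, exactly as stated above: the non-reachable but stable modes do not obstruct stabilizability.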

4.3 Stabilizability conditions by LMI


Consider the closed-loop system

ẋ(t) = (A + BK)x(t) (4.5)

and apply the same reasoning done for the system stability analysis of Chap-
ter 3. To this end, we consider a quadratic Lyapunov function

V (x) = xT P x, P = P T > 0, P ∈ Rn×n (4.6)

and we want to check under which conditions

V̇(x) < 0, ∀x ∈ R^n \ {0} (4.7)

It is found

V̇ (x) = ẋT P x + xT P ẋ = xT (A + BK)T P x + xT P (A + BK)x


(4.8)
= xT [(A + BK)T P + P (A + BK)]x

Then, V̇ (x) < 0, ∀x, if and only if

∃P = P T > 0, P ∈ Rn×n
such that P (A + BK) + (A + BK)T P < 0 (4.9)
∃K ∈ Rm×n

If K is given, (4.9) is an LMI feasibility test. If it is satisfied, then K is a stabilizing control law. Conversely, if K is unknown and has to be designed, the inequality (4.9) is nonlinear because it contains the product of the unknowns K and P (see the product P BK). Thus, a change of variables is required that makes the inequality (4.9) linear in the unknowns. To this end, consider the following change of variables (due to Bernussou)

X = P^{-1} and Y = KX ∈ R^{m×n} (4.10)

Then, (4.9) can be transformed via a congruence transformation as

P^{-1}[P(A + BK) + (A + BK)^T P]P^{-1} < 0 (4.11)

(A + BK)P^{-1} + P^{-1}(A + BK)^T < 0,  with X = P^{-1} (4.12)

AX + B(KX) + XA^T + (KX)^T B^T < 0,  with Y = KX (4.13)

∃X = X^T > 0, X ∈ R^{n×n}, ∃Y ∈ R^{m×n} such that (AX + BY) + (AX + BY)^T < 0 (4.14)

If matrices X and Y exist satisfying (4.14), the controller is given by

K = Y X −1 (4.15)

The above discussion can be summarised in the following Theorem

Theorem 4.3.1. Stabilizability Theorem


The stabilizability problem (4.1) - (4.3) has solution if and only if

∃X = X T > 0, X ∈ Rn×n
such that (AX + BY ) + (AX + BY )T < 0
∃Y ∈ Rm×n

In this case, the stabilizing control is given by

K = Y X −1

In the LMI literature, the change of variables (4.10) is well known; it helped establish the LMI approach in the control community during the '80s and '90s for solving different control synthesis problems.
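The mechanics of Theorem 4.3.1 can be checked numerically (a sketch with an illustrative plant; a gain K assumed stabilizing is certified via SciPy's Lyapunov solver rather than an LMI solver):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [2.0, -1.0]])   # unstable open loop
B = np.array([[0.0], [1.0]])
K = np.array([[-6.0, -3.0]])              # assumed stabilizing gain (A+BK has eigenvalues -2, -2)
Acl = A + B @ K

# P solves the Lyapunov inequality with equality: Acl^T P + P Acl = -I, P > 0.
P = solve_continuous_lyapunov(Acl.T, -np.eye(2))

# Bernussou change of variables (4.10): X = P^{-1}, Y = K X.
X = np.linalg.inv(P)
Y = K @ X

# LMI of Theorem 4.3.1: (AX + BY) + (AX + BY)^T < 0 ...
M = (A @ X + B @ Y) + (A @ X + B @ Y).T
# ... and the gain is recovered as K = Y X^{-1}.
K_rec = Y @ np.linalg.inv(X)
```

Congruence with P^{-1} maps the Lyapunov equation into the theorem's inequality, so M is negative definite and K_rec reproduces K exactly.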

4.4 D-stabilizability conditions
It is of interest to determine LMI conditions under which it is possible to design a state-feedback control law
u(t) = Kx(t), K ∈ Rm×n (4.16)
such that the closed-loop system
ẋ(t) = (A + BK)x(t) (4.17)
has all eigenvalues in the LMI region

R = {z : fR(z) < 0},  fR(z) = L + Mz + M^T z̄ (4.18)

with

L = L^T ∈ R^{s×s}, M ∈ R^{s×s} (4.19)

L = [λkl], M = [µkl] (4.20)
By repeating the same arguments used in the R-stability analysis of Chapter 3, one concludes that the system (4.17) is R-stabilizable if and only if

∃P = P^T > 0, P ∈ R^{n×n} and ∃K ∈ R^{m×n} such that

MR(A + BK, P) = L ⊗ P + M ⊗ P(A + BK) + M^T ⊗ (A + BK)^T P < 0 (4.21)

Notice that MR(A + BK, P) ∈ R^{ns×ns} is a block-matrix and each block has dimension n × n. In particular, the generic (k, l) block-entry of MR(A + BK, P) has the following expression

[MR(A + BK, P)]_{1≤k,l≤s} = [λkl P + µkl P(A + BK) + µlk (A + BK)^T P] < 0 (4.22)
Again, because (4.22) is nonlinear in P and K as before, we can linearize it via the following congruence transformation

diag(P^{-1}, ..., P^{-1}) MR(A + BK, P) diag(P^{-1}, ..., P^{-1}) =

= [λkl P^{-1} + µkl (A + BK)P^{-1} + µlk P^{-1}(A + BK)^T]_{1≤k,l≤s} < 0 (4.23)

Finally, by using the change of variables (4.10), one arrives at

Theorem 4.4.1. D-stabilizability Theorem
The D-stabilizability problem (4.16) - (4.20) has solution if and only if

∃X = X^T > 0, X ∈ R^{n×n} and ∃Y ∈ R^{m×n} such that

[MR(A, B, X, Y)]_{1≤k,l≤s} = [λkl X + µkl (AX + BY) + µlk (AX + BY)^T] < 0

If X and Y exist, the D-stabilizing controller is given by

K = Y X^{-1}

Example 4.1. D-stabilizability of S(α, θ, r)

Figure 4.2: S(α, θ, r) region

S(α, θ, r) = Rα ∩ Rθ ∩ Rr, with

Rα = {z : 2α + z + z̄ < 0}

Rr = {z : [ −r    z  ]
          [ z̄    −r ] < 0}

Rθ = {z : [ (z + z̄) sin θ    (z − z̄) cos θ ]
          [ (z̄ − z) cos θ    (z + z̄) sin θ ] < 0}
Then, there exists K ∈ Rm×n that allocates the eigenvalues of (A + BK) in
the LMI region S(α, θ, r) if and only if

∃X = X T > 0, X ∈ Rn×n and ∃Y ∈ Rm×n such that



2αX + (AX + BY) + (AX + BY)^T < 0

[ −rX             (AX + BY) ]
[ (AX + BY)^T     −rX       ] < 0

[ [(AX + BY) + (AX + BY)^T] sin θ    [(AX + BY) − (AX + BY)^T] cos θ ]
[ [(AX + BY)^T − (AX + BY)] cos θ    [(AX + BY) + (AX + BY)^T] sin θ ] < 0

If a solution X, Y exists, the controller gain is given by


K = Y X −1
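The first (α-shift) condition above can be verified numerically (a sketch; the plant and the gain K, which places both closed-loop eigenvalues at −2, are illustrative, and a shifted Lyapunov equation is used to produce a feasible X):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [2.0, -1.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-6.0, -3.0]])              # places both eigenvalues of A+BK at -2
alpha = 1.0                               # target region: Re(lambda_i) < -alpha

Acl = A + B @ K

# X from the alpha-shifted Lyapunov equation certifies the region constraint:
# (Acl + alpha*I)^T P + P (Acl + alpha*I) = -I, then X = P^{-1}, Y = K X.
P = solve_continuous_lyapunov((Acl + alpha * np.eye(2)).T, -np.eye(2))
X = np.linalg.inv(P)
Y = K @ X

# First block of the S(alpha, theta, r) conditions:
M_alpha = 2 * alpha * X + (A @ X + B @ Y) + (A @ X + B @ Y).T
```

Since the eigenvalues of A + BK lie strictly left of −α, M_alpha comes out negative definite, as the theory predicts.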

4.5 Optimal H∞ , H2 and L1 control synthesis


In all optimal control problems we will consider the following extended
plant description.

Figure 4.3: Extended plant

• ω - a vector collecting all exogenous signals acting on the system: noise, disturbances, reference signals, etc.
• z - the vector of control objectives. Each component should be minimized by the choice of the controller (optimal control).
• x/y - the measurement vector. When the state is measurable it coincides with x(t). Otherwise, it is the output y(t) collecting all available measurements provided by the sensors.
• u - the vector of all commands to the system actuators. It coincides with the input of the system.

This control design framework is very general and can capture many control design problems of practical interest.

Example 4.2. - Disturbance Rejection

Figure 4.4: Example 4.2 - Disturbance Rejection


ẋ = Ax + Bu + F d
y = Cy x + Dy u,    u = K(s)y
z = Cz x + Dz u + Dd d

Example 4.3. - Tracking with Integral Effect

Figure 4.5: Example 4.3 - Tracking with Integral Effect

ẋ = Ax + Bu
y = Cx + Du

ẋe = e(t),  e(t) = r(t) − y(t)    (tracking error to be minimized)

u(t) = Ke xe(t) + Kx x(t)    (control law)

By introducing the extended state [xe ; x], the control law can be rewritten as

Figure 4.6: Block diagram

 
u(t) = [Ke  Kx] [xe ; x]

The integral effect ensures that e(t) → 0 as t → ∞ when r(t) ≡ r̄ for all t. The gains Ke and Kx can also be chosen to minimize the tracking error during transients, by taking the minimization of ‖e‖2 or ‖e‖∞ as the control design objective.

4.6 State-space representation and problem formulation

Figure 4.7: State-Space representation and problem formulation




ẋ = Ax + B1 u + B2 ω
z∞ = C1 x + D11 ω + D12 u
z2 = C2 x + D22 u
z1 = C3 x + D31 ω + D32 u        Open-Loop System (4.24)

Controller:  u = Kx (4.25)




ẋ = (A + B1 K)x + B2 ω
z∞ = (C1 + D12 K)x + D11 ω
z2 = (C2 + D22 K)x
z1 = (C3 + D32 K)x + D31 ω        Closed-Loop System (4.26)

with closed-loop transfer matrices

T∞ (s) = (C1 + D12 K)(sI − (A + B1 K))−1 B2 + D11 (proper)

T2 (s) = (C2 + D22 K)(sI − (A + B1 K))−1 B2 (strictly proper) (4.27)

T1 (s) = (C3 + D32 K)(sI − (A + B1 K))−1 B2 + D31 (proper)

Notice that all the transfer matrices T∞, T2 and T1 depend on the controller gain K and map the disturbance ω(t) to the objectives

z∞ (t) = T∞ (s)ω(t)

z2 (t) = T2 (s)ω(t) (4.28)

z1 (t) = T1 (s)ω(t)

in closed-loop. Notice that T∞(s), T2(s) and T1(s) are here to be interpreted as ARMA models, with s = d/dt the derivative operator. Then, it makes sense to consider the following optimal control design problems


H∞ Optimal control:  K∞* = arg min_{K∈S} ‖T∞‖H∞ (4.29)

H2 Optimal control:  K2* = arg min_{K∈S} ‖T2‖H2 (4.30)

L1 Optimal control:  K1* = arg min_{K∈S} ‖T1‖L1 (4.31)

where the condition K ∈ S restricts K to be a stabilizing controller.

By recalling the induced-norm properties one can write

‖z∞‖2 ≤ ‖T∞‖H∞ ‖ω‖2  →  K∞* is the unique optimal controller that makes ‖T∞‖H∞ the smallest.

Similarly,

‖z1‖∞ ≤ ‖T1‖L1 ‖ω‖∞  →  K1* is the unique optimal controller that makes ‖T1‖L1 the smallest.

‖z2‖∞ ≤ ‖T2‖H2 ‖ω‖2  →  K2* is the unique optimal controller that makes ‖T2‖H2 the smallest.

4.7 LMI formulation of optimal control synthesis


4.7.1 H∞ Optimal control
We first consider the following preliminary Lemma on sub-optimal H∞ control

Lemma 4.7.1. H∞ sub-optimal Lemma


Consider the system (4.24)-(4.27). Then, there exists K ∈ R^{m×n} such that

(A + B1 K) is asymptotically stable and ‖T∞‖H∞ < γ

if and only if ∃X = X^T > 0, X ∈ R^{n×n} and Y ∈ R^{m×n} such that

[ (AX + B1 Y) + (AX + B1 Y)^T    B2     (C1 X + D12 Y)^T ]
[ B2^T                           −γI    D11^T            ]
[ (C1 X + D12 Y)                 D11    −γI              ] < 0 (4.32)

If a solution X, Y exists, then the sub-optimal H∞ controller is given by

K = Y X^{-1}

Proof. We start from the LMI conditions (3.34) derived for the H∞ analysis and apply them to the closed-loop system (4.26). It is found

[ (A + B1 K)^T P + P(A + B1 K)    P B2    (C1 + D12 K)^T ]
[ B2^T P                          −γI     D11^T          ]
[ (C1 + D12 K)                    D11     −γI            ] < 0 (4.33)

Then, by transforming (4.33) via the congruence matrices diag(P^{-1}, I, I), i.e., pre- and post-multiplying (4.33) by diag(P^{-1}, I, I) (4.34), one achieves

[ X(A + B1 K)^T + (A + B1 K)X    B2     X(C1 + D12 K)^T ]
[ B2^T                           −γI    D11^T           ]
[ (C1 + D12 K)X                  D11    −γI             ] < 0

with X = P^{-1}. Moreover, by denoting Y = KX one arrives at (4.32). Notice


that the alternative formulations (3.32) and (3.52) lead respectively to the
following conditions for the synthesis



∃X = X^T > 0, ∃Y s.t.

(3.35) ⇒  [ (AX + B1 Y) + (AX + B1 Y)^T    B2      (C1 X + D12 Y)^T ]
          [ B2^T                           −γ²I    D11^T            ]
          [ (C1 X + D12 Y)                 D11     −I               ] < 0

∃X = X^T > 0, ∃Y s.t.

(3.55) ⇒  [ (AX + B1 Y) + (AX + B1 Y)^T + B2 B2^T    ∗                ]
          [ (C1 X + D12 Y) + D11 B2^T                −γ²I + D11 D11^T ] < 0
T

H∞ Optimal control

The synthesis of the H∞ optimal controller then consists of solving the following optimization problem:

[X*, Y*] = arg min_{X,Y,γ} γ

subject to

[ (AX + B1 Y) + (AX + B1 Y)^T    B2     (C1 X + D12 Y)^T ]
[ B2^T                           −γI    D11^T            ]
[ (C1 X + D12 Y)                 D11    −γI              ] < 0

X = X^T > 0

If a solution exists, it is unique and the optimal controller is given by

K∞* = Y* X*^{-1}

4.7.2 H2 optimal control


We repeat the logical path used in the H∞ section: we first introduce a Lemma for sub-optimal H2 control synthesis, obtained by applying Formulations 1 and 1' of Chapter 3 to the closed-loop system (4.26).

Lemma 4.7.2. H2 sub-optimal Lemma


Consider the system (4.24)-(4.27). Then, there exists K ∈ R^{m×n} such that

(A + B1 K) is asymptotically stable and ‖T2‖H2 < ν

if and only if one of the two equivalent formulations is satisfied:

1. ∃X = X^T > 0, Q = Q^T > 0, Y s.t.

(AX + B1 Y) + (AX + B1 Y)^T + B2 B2^T < 0 (4.35)

[ Q                   (C2 X + D22 Y) ]
[ (C2 X + D22 Y)^T    X              ] > 0 (4.36)

tr{Q} < ν²

2. ∃X = X^T > 0, Q = Q^T > 0, Y s.t.

[ (AX + B1 Y) + (AX + B1 Y)^T    B2 ]
[ B2^T                           −I ] < 0 (4.37)

[ Q                   (C2 X + D22 Y) ]
[ (C2 X + D22 Y)^T    X              ] > 0

tr{Q} < ν²

If a solution X,Y exists, then the sub-optimal H2 controller is given by

K = Y X −1

for both formulations.
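The Schur complement argument that turns conditions like (4.36) into block LMIs can be checked numerically (a sketch with randomly generated matrices; S plays the role of C2 X + D22 Y):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 3, 2
Xh = rng.standard_normal((n, n))
X = Xh @ Xh.T + n * np.eye(n)            # X = X^T > 0
S = rng.standard_normal((p, n))          # plays the role of C2 X + D22 Y

def posdef(M):
    """Positive definiteness via the eigenvalues of the symmetric part."""
    return bool(np.all(np.linalg.eigvalsh((M + M.T) / 2) > 0))

# Choose Q so that the Schur complement Q - S X^{-1} S^T equals I (> 0) ...
Q = S @ np.linalg.inv(X) @ S.T + np.eye(p)
# ... then the block matrix [[Q, S], [S^T, X]] is positive definite.
block = np.block([[Q, S], [S.T, X]])
```

Conversely, making the Schur complement indefinite (e.g. Q − S X^{-1} S^T = −I) destroys positive definiteness of the block matrix, which is exactly the equivalence used in the proof.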

Proof. We start from Formulation 1 applied to (4.26). It is found that

(A + B1 K) is asymptotically stable and ‖T2‖H2 < ν

if and only if ∃X = X^T > 0 and K ∈ R^{m×n} such that

(A + B1 K)X + X(A + B1 K)^T + B2 B2^T < 0

tr{(C2 + D22 K)X(C2 + D22 K)^T} < ν²

Then, by the usual change of variables KX = Y, with Y ∈ R^{m×n}, X ∈ R^{n×n}, one finds that

(A + B1 K)X + X(A + B1 K)^T + B2 B2^T < 0  ⇔  (AX + B1 Y) + (AX + B1 Y)^T + B2 B2^T < 0

Moreover, let us introduce a matrix Q = Q^T > 0 such that

(C2 + D22 K)X(C2 + D22 K)^T < Q (4.38)

It follows that (4.38) implies

tr{(C2 + D22 K)X(C2 + D22 K)^T} < tr{Q} (4.39)

Moreover, the left-hand side of (4.38) can be manipulated as follows

(C2 X X^{-1} + D22 Y X^{-1}) X (C2 X X^{-1} + D22 Y X^{-1})^T =

= (C2 X + D22 Y) X^{-1} X X^{-1} (C2 X + D22 Y)^T < Q (4.40)

Then, (4.40) can be rewritten equivalently as

[ Q                   (C2 X + D22 Y) ]
[ (C2 X + D22 Y)^T    X              ] > 0

In fact, its Schur complement conditions are

X > 0
Q − (C2 X + D22 Y) X^{-1} (C2 X + D22 Y)^T > 0
Finally, we need to add the condition

tr{Q} < ν²

The second formulation can be proved by considering that the Schur complement conditions of

[ (AX + B1 Y) + (AX + B1 Y)^T    B2 ]
[ B2^T                           −I ] < 0

are

−I < 0
(AX + B1 Y) + (AX + B1 Y)^T − B2 (−I)^{-1} B2^T < 0

H2 Optimal control

Then, the synthesis of the H2 optimal controller consists of solving the following optimization problem (only the second formulation is considered, for simplicity):

[X*, Y*] = arg min_{X,Y,Q,ρ} ρ    (ρ = ν²)

subject to

[ (AX + B1 Y) + (AX + B1 Y)^T    B2 ]
[ B2^T                           −I ] < 0

[ Q                   (C2 X + D22 Y) ]
[ (C2 X + D22 Y)^T    X              ] > 0

tr{Q} < ρ

X = X^T > 0, Q = Q^T > 0

If a solution exists, it is unique and the optimal controller is given by

K2* = Y* X*^{-1}

Notice also that, because tr {Q} is a linear function of Q and ρ enters only
in the last condition tr {Q} < ρ, the above optimization problem can also

be rewritten as

[X*, Y*] = arg min_{X,Y,Q} tr{Q}

subject to

[ (AX + B1 Y) + (AX + B1 Y)^T    B2 ]
[ B2^T                           −I ] < 0

[ Q                   (C2 X + D22 Y) ]
[ (C2 X + D22 Y)^T    X              ] > 0

X = X^T > 0, Q = Q^T > 0

4.8 LQ and LQG optimal control via LMI


4.8.1 Riccati-based LQ optimal control
Consider the following system

ẋ(t) = Ax(t) + B1 u(t), x(0) = x0

Find a state-feedback controller

u(t) = Kx(t), K ∈ R^{m×n}

that stabilizes the system and minimizes

J = ∫₀^∞ [x^T(t) Q x(t) + u^T(t) R u(t)] dt

Assumptions:
1. (A, B1) stabilizable
2. Q = Q^T ≥ 0, R = R^T > 0

The solution is given by

K = −R^{-1} B1^T P

where P = P^T > 0 is the solution of the Algebraic Riccati Equation (ARE)

A^T P + P A − P B1 R^{-1} B1^T P + Q = 0

It is found that
1. A + B1 K is asymptotically stable
2. The associated cost is Jmin = x0^T P x0

Notice also that

Jmin = x0^T P x0 = tr{P x0 x0^T}
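The Riccati-based LQ solution can be reproduced with SciPy's ARE solver (a sketch; the double-integrator plant and weights below are illustrative):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B1 = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# P solves A^T P + P A - P B1 R^{-1} B1^T P + Q = 0, P = P^T > 0.
P = solve_continuous_are(A, B1, Q, R)
K = -np.linalg.solve(R, B1.T @ P)        # K = -R^{-1} B1^T P

# ARE residual (should vanish) and the optimal cost for a given x0.
residual = A.T @ P + P @ A - P @ B1 @ np.linalg.solve(R, B1.T @ P) + Q
x0 = np.array([1.0, 0.0])
J_min = x0 @ P @ x0
```

The closed-loop matrix A + B1 K comes out Hurwitz, consistent with point 1 above.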
4.8.2 Riccati-based LQG optimal control
Consider the system

ẋ(t) = Ax(t) + B1 u(t) + B2 w(t), x(0) = 0

Find a state-feedback controller

u(t) = Kx(t)

that stabilizes the system and minimizes

J = lim_{t→∞} E[x^T(t) Q x(t) + u^T(t) R u(t)]

Assumptions:
1. (A, B1) stabilizable
2. Q = Q^T ≥ 0, R = R^T > 0
3. w(t) is a Gaussian white noise vector with zero mean and covariance W = W^T > 0

The solution is given by

K = −R^{-1} B1^T P

where P = P^T > 0 is the solution of the Algebraic Riccati Equation (ARE)

A^T P + P A − P B1 R^{-1} B1^T P + Q = 0

It is found that
1. A + B1 K is asymptotically stable
2. The minimized cost is

Jmin = tr{P B2 W B2^T}

4.9 Solution via LMI - the LQ case

Consider first the problem of determining a bound on the cost

J = ∫₀^∞ x^T(t) Q x(t) dt (4.41)

for the autonomous system

ẋ(t) = Ax(t), x(0) = x0 (4.42)

when the matrix Q = Q^T > 0 is given. A necessary condition for the cost (4.41) to be bounded is that (4.42) is asymptotically stable (x(t) → 0 for t → ∞).

Then, in order to verify that a bound exists in this case, we can apply Lyapunov's criterion. Consider

V(x) = x^T P x, P = P^T > 0 (4.43)

Then, the condition that ensures asymptotic stability and a bound on J is given by

V̇(x) ≤ −x^T(t) Q x(t), ∀t (4.44)

In fact, by integrating (4.44) from 0 to T and taking the limit for T → ∞ on both sides, one has

lim_{T→∞} x^T(T) P x(T) − x^T(0) P x(0) ≤ −∫₀^∞ x^T(t) Q x(t) dt

Asymptotic stability ensures that x(T) → 0 for T → ∞. Then,

J = ∫₀^∞ x^T(t) Q x(t) dt ≤ x^T(0) P x(0) (4.45)

Observe also that asymptotic stability ensures that the Algebraic Lyapunov Equation

A^T P + P A = −Q

has a unique solution P̄ = P̄^T > 0 associated with Q. This means that

A^T P̄ + P̄ A + Q = 0
A^T P + P A + Q ≤ 0
⇒ A^T (P − P̄) + (P − P̄) A ≤ 0

and necessarily P − P̄ ≥ 0 by asymptotic stability. Moreover, because of the uniqueness of P̄, one has

J = x^T(0) P̄ x(0) ≤ x^T(0) P x(0) (4.46)
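That the Lyapunov-equation solution gives the cost exactly can be confirmed numerically (a sketch with an illustrative stable A; the integral is approximated by trapezoidal quadrature along x(t) = e^{At} x0):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

A = np.array([[-1.0, 1.0], [0.0, -2.0]])  # asymptotically stable
Q = np.eye(2)
x0 = np.array([1.0, 1.0])

# Exact cost: J = x0^T Pbar x0, with A^T Pbar + Pbar A = -Q.
Pbar = solve_continuous_lyapunov(A.T, -Q)
J_exact = x0 @ Pbar @ x0

# Independent check: trapezoidal quadrature of x(t)^T Q x(t), x(t) = e^{At} x0.
ts = np.linspace(0.0, 40.0, 4001)
vals = [(expm(A * t) @ x0) @ Q @ (expm(A * t) @ x0) for t in ts]
dt = ts[1] - ts[0]
J_quad = dt * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
```

The two values agree to quadrature accuracy, illustrating that any P satisfying (4.44) only over-bounds this exact cost.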

We now repeat the same arguments for the following controlled system

ẋ(t) = Ax(t) + Bu(t), x(0) = x0
u(t) = Kx(t)

Then, the closed-loop system is given by

ẋ(t) = (A + BK)x(t), x(0) = x0 (4.47)

with cost

J = ∫₀^∞ [x^T(t) Q x(t) + u^T(t) R u(t)] dt = ∫₀^∞ x^T(t)[Q + K^T R K]x(t) dt (4.48)

with Q = Q^T > 0 and R = R^T > 0. By repeating the same arguments of the previous open-loop case, one finds

∃P = P^T > 0, K ∈ R^{m×n} such that

(A + BK)^T P + P(A + BK) + Q + K^T RK < 0 (4.49)

J ≤ x0^T P x0 (4.50)

Then, the matrices P and K that minimize the cost bound can be found by solving

min_{P,K,γ} γ

subject to

P = P^T > 0
(A + BK)^T P + P(A + BK) + Q + K^T RK < 0
x0^T P x0 ≤ γ

Note that (4.49) is not an LMI because of the nonlinear products K^T RK, P BK and K^T B^T P. A way to arrive at an LMI formulation is to use the usual change of variables

X = P^{-1}, KX = Y (4.51)

Then (congruence transformation)

P^{-1}[(A + BK)^T P + P(A + BK) + Q + K^T RK]P^{-1} < 0

X(A + BK)^T + (A + BK)X + XQX + XK^T RKX < 0

(AX + BY) + (AX + BY)^T + XQX + Y^T RY < 0 (4.52)

Next, (4.52) can be seen as the Schur complement condition of the matrix inequality

[ (AX + BY) + (AX + BY)^T    X         Y^T      ]
[ X                          −Q^{-1}   0        ]
[ Y                          0         −R^{-1}  ] < 0 (4.53)

where

[ −Q    0  ]
[ 0     −R ] < 0 (4.54)
0 −R
Moreover, the condition

x0^T P x0 ≤ γ  ⇔  x0^T X^{-1} x0 ≤ γ

can be written as

[ γ     x0^T ]
[ x0    X    ] > 0 (4.55)
In conclusion, the LQ optimal controller can be computed by solving

[X*, Y*] = arg min_{X,Y,γ} γ

subject to

[ (AX + BY) + (AX + BY)^T    X         Y^T      ]
[ X                          −Q^{-1}   0        ]
[ Y                          0         −R^{-1}  ] < 0

[ γ     x0^T ]
[ x0    X    ] > 0

X = X^T > 0

KLQ = Y* X*^{-1}
A further formulation can be found by observing that

x0^T P x0 = tr{x0^T X^{-1} x0} = tr{X^{-1} x0 x0^T} (4.56)

Finally, we need to convert (4.56) into an expression given in terms of X and not X^{-1}. This can be done by introducing a new matrix Z = Z^T > 0 such that

Z > X^{-1}  ⇔  [ Z    I ]
               [ I    X ] > 0 (4.57)

(N.B.: tr{Z x0 x0^T} > tr{X^{-1} x0 x0^T}.)

Then, the optimization problem becomes

[X*, Y*] = arg min_{X,Y,Z} tr{Z x0 x0^T}

subject to

[ (AX + BY) + (AX + BY)^T    X         Y^T      ]
[ X                          −Q^{-1}   0        ]
[ Y                          0         −R^{-1}  ] < 0

[ Z    I ]
[ I    X ] > 0

X = X^T > 0, Z = Z^T > 0

KLQ = Y* X*^{-1}

4.10 Solution via LMI - the LQG case

ẋ(t) = Ax(t) + B1 u(t) + B2 w(t), x(0) = 0x


u(t) = Kx(t)
(4.58)
J = lim E xT (t)Qx(t) + uT (t)Ru(t)
 
t→∞
Q = Q > 0, R = RT > 0
T

By repeating the same arguments, it is possible to show that stabilizability requires

∃P = P^T > 0, ∃K such that

(A + B1 K)^T P + P(A + B1 K) + Q + K^T RK < 0

and

J ≤ tr{P B2 W B2^T}

Then, with the usual change of variables (4.51), we arrive at the same LMI (4.53). Moreover, by setting

Z > B2^T X^{-1} B2,  X^{-1} = P

one has

tr{ZW} ≥ tr{B2^T X^{-1} B2 W} = tr{X^{-1} B2 W B2^T} ≥ J (4.59)

Moreover,

Z > B2^T X^{-1} B2  ⇔  [ Z     B2^T ]
                       [ B2    X    ] > 0
Then, the optimization problem for the LQG synthesis becomes:

[X*, Y*] = arg min_{X,Y,Z} tr{ZW}

subject to

[ (AX + B1 Y) + (AX + B1 Y)^T    X         Y^T      ]
[ X                              −Q^{-1}   0        ]
[ Y                              0         −R^{-1}  ] < 0

[ Z     B2^T ]
[ B2    X    ] > 0

X = X^T > 0, Z = Z^T > 0

KLQG = Y* X*^{-1}

As a final remark, observe that

[ (AX + B1 Y) + (AX + B1 Y)^T    X         Y^T      ]
[ X                              −Q^{-1}   0        ]
[ Y                              0         −R^{-1}  ] < 0

can be rewritten as

[ (AX + B1 Y) + (AX + B1 Y)^T    X Q^{1/2}    Y^T R^{1/2} ]
[ Q^{1/2} X                      −I           0           ]
[ R^{1/2} Y                      0            −I          ] < 0

where

Q^{1/2} Q^{1/2} = Q ≥ 0
R^{1/2} R^{1/2} = R > 0

In this case, Q need not be invertible (R must always be invertible, R > 0).

4.11 The LQG problem (and LQ) can be seen as an H2 optimal synthesis problem

ẋ(t) = Ax(t) + B1 u(t) + B2 w(t), x(0) = 0
z(t) = Cz x(t) + Dzu u(t)    (objective)

The cost in this case is expressed as

J = lim_{t→∞} E[z^T(t) z(t)] (4.60)

This cost is more general than (4.58) because it also involves mixed products:

z^T(t) z(t) = (x^T(t) Cz^T + u^T(t) Dzu^T)(Cz x(t) + Dzu u(t))
= x^T(t) Cz^T Cz x(t) + x^T(t) Cz^T Dzu u(t) + u^T(t) Dzu^T Cz x(t) + u^T(t) Dzu^T Dzu u(t) (4.61)
By repeating the same arguments used to derive the LQG optimal controller above, it is found that

J = tr{P B2 W B2^T} (4.62)

where P = P^T > 0 is the unique solution of the Algebraic Lyapunov Equation

(A + B1 K)^T P + P(A + B1 K) + (Cz + Dzu K)^T (Cz + Dzu K) = 0 (4.63)

Moreover, for any control law K such that (A + B1 K) is asymptotically stable, there exists P = P^T > 0 satisfying

(A + B1 K)^T P + P(A + B1 K) + (Cz + Dzu K)^T (Cz + Dzu K) < 0 (4.64)

and

P − P̄ > 0  ⇒  tr{P B2 W B2^T} ≥ J (4.65)
Expression (4.64) can easily be converted into an LMI. Specifically:

[ (A + B1 K)^T P + P(A + B1 K)    (Cz + Dzu K)^T ]
[ (Cz + Dzu K)                    −I             ] < 0

Applying the congruence transformation diag(P^{-1}, I) and the change of variables P^{-1} = X, KX = Y, one obtains

[ (AX + B1 Y)^T + (AX + B1 Y)    (Cz X + Dzu Y)^T ]
[ (Cz X + Dzu Y)                 −I               ] < 0 (4.66)

and, by introducing a new matrix Z = Z^T > 0 such that

Z > B2^T X^{-1} B2  ⇔  [ Z     B2^T ]
                       [ B2    X    ] > 0

one finds

tr{ZW} ≥ tr{B2^T X^{-1} B2 W} > J
In conclusion: the H2 optimal control law

u(t) = Kx,  K = Y* X*^{-1}

with [X*, Y*] the unique solution of

[X*, Y*] = arg min_{X,Y,Z} tr{ZW}

subject to

[ (AX + B1 Y)^T + (AX + B1 Y)    (Cz X + Dzu Y)^T ]
[ (Cz X + Dzu Y)                 −I               ] < 0

[ Z     B2^T ]
[ B2    X    ] > 0

X = X^T > 0, Z = Z^T > 0

stabilizes the system and minimizes the cost function

J = lim_{t→∞} E[z^T(t) z(t)]

Notice that the classic LQG solution is recovered by setting

Cz = [ Q^{1/2} ]        Dzu = [ 0       ]
     [ 0       ],             [ R^{1/2} ]

In fact, in this case

z^T(t) z(t) = x^T(t) Q^{1/2} Q^{1/2} x(t) + u^T(t) R^{1/2} R^{1/2} u(t) = x^T(t) Q x(t) + u^T(t) R u(t)
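The cross terms Cz^T Dzu vanish for this choice, which can be checked numerically (a sketch with randomly generated weights; `sqrtm` supplies the matrix square roots):

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
n, m = 3, 2
Qh = rng.standard_normal((n, n)); Q = Qh @ Qh.T + np.eye(n)   # Q = Q^T > 0
Rh = rng.standard_normal((m, m)); R = Rh @ Rh.T + np.eye(m)   # R = R^T > 0

Qs = np.real(sqrtm(Q))                    # Q^{1/2}
Rs = np.real(sqrtm(R))                    # R^{1/2}
Cz = np.vstack([Qs, np.zeros((m, n))])    # Cz = [Q^{1/2}; 0]
Dzu = np.vstack([np.zeros((n, m)), Rs])   # Dzu = [0; R^{1/2}]

# For arbitrary x, u: z^T z reduces to the classic LQ/LQG integrand.
x = rng.standard_normal(n)
u = rng.standard_normal(m)
z = Cz @ x + Dzu @ u
lhs = z @ z
rhs = x @ Q @ x + u @ R @ u
```

Since Cz^T Dzu = 0 by construction, the mixed products in (4.61) drop out and z^T z = x^T Q x + u^T R u.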

4.12 L1 optimal control

Lemma 4.12.1. L1 sub-optimal Lemma
Consider the system (4.24)-(4.27). Then, there exists K ∈ R^{m×n} such that

(A + B1 K) is asymptotically stable and ‖T1‖L1 < ζ

if there exist X = X^T > 0, Y, µ > 0, λ > 0 s.t.

[ (AX + B1 Y)^T + (AX + B1 Y) + λX    B2  ]
[ B2^T                                −µI ] < 0 (4.67)

[ λX                 0           (C3 X + D32 Y)^T ]
[ 0                  (ζ − µ)I    D31^T            ]
[ (C3 X + D32 Y)     D31         ζI               ] > 0 (4.68)

The controller, if it exists, is given by

K = Y X^{-1}
Proof. The proof follows the same lines as the other optimal control problems. We start from (3.63)-(3.64), which we apply to the closed-loop system (4.24)-(4.27). It is found

[ (A + B1 K)^T P + P(A + B1 K) + λP    P B2 ]
[ B2^T P                               −µI  ] < 0 (4.69)

[ λP               0           (C3 + D32 K)^T ]
[ 0                (ζ − µ)I    D31^T          ]
[ (C3 + D32 K)     D31         ζI             ] > 0 (4.70)

By transforming (4.69) and (4.70) via the congruence transformations diag(P^{-1}, I) and diag(P^{-1}, I, I), respectively, and denoting X = P^{-1} and Y = KX, one arrives at (4.67) and (4.68).

4.12.1 L1 optimal control

[X*, Y*] = arg min_{X,Y,µ} ζ

subject to

[ (AX + B1 Y)^T + (AX + B1 Y) + λX    B2  ]
[ B2^T                                −µI ] < 0

[ λX                 0           (C3 X + D32 Y)^T ]
[ 0                  (ζ − µ)I    D31^T            ]
[ (C3 X + D32 Y)     D31         ζI               ] > 0

X = X^T > 0, µ > 0    (λ > 0 fixed)

The solution, if it exists, is given by

K1* = Y* X*^{-1}

4.13 Allocation of (A + B1 K)'s eigenvalues

In this case one wants to design a controller that allocates the eigenvalues of (A + B1 K) in a given LMI region

R = {z : L + Mz + M^T z̄ < 0} (4.71)

with

L = L^T ∈ R^{s×s}, M ∈ R^{s×s}
L = [λkl], M = [µkl]

Lemma 4.13.1. Allocation Lemma - All eigenvalues of (A + B1 K) belong to R if and only if ∃X = X^T > 0, ∃Y ∈ R^{m×n} such that

MR(A, B1, X, Y) = L ⊗ X + M ⊗ (AX + B1 Y) + M^T ⊗ (AX + B1 Y)^T < 0 (4.72)

where the generic (k, l) block of (4.72) is given by

[MR(A, B1, X, Y)]_{1≤k,l≤s} = [λkl X + µkl (AX + B1 Y) + µlk (AX + B1 Y)^T] < 0 (4.73)

Figure 4.8: R-region

Example 4.4.

R = {z : 2α + z + z̄ < 0}

MR(A, B1, X, Y) = 2αX + (AX + B1 Y) + (AX + B1 Y)^T < 0 (4.74)

If X, Y exist solving (4.74), then all eigenvalues λi of (A + B1 Y X^{-1}) will have Re(λi) < −α, where

K = Y X^{-1}

is the controller determined.

4.14 Multi-objective control
The previous LMI conditions describing the optimization problems underlying the synthesis of the various controllers are all convex in the variables X and Y:

‖T∞‖H∞ < γ,  ‖T2‖H2 < ν,  ‖T1‖L1 < ζ,  λi(A + B1 K) ∈ R

Then, they can be combined, because the intersection of convex constraints is still a convex constraint.

4.14.1 Examples of multi-objective synthesis problems

1. H∞/H2 Multi-objective
a, b > 0 are given scalars. "The scalarization approach"

K*_{H∞/H2} = arg min_{K∈S} a‖T∞‖H∞ + b‖T2‖H2

2. H∞ + R-stability

K*_{H∞/R} = arg min_{K∈RS} ‖T∞‖H∞    (RS: the set of R-stabilizing gains)

3. H∞/H2 Multi-objective
a > 0 is a given scalar. "The ε-constrained approach"

K*_{H∞/H2} = arg min_{K∈S} ‖T∞‖H∞
subject to ‖T2‖H2 < a

Formulations:
1.

[X*, Y*] = arg min_{X,Y,Q,γ,ν} aγ + bν    (given a > 0, b > 0)

subject to

[ (AX + B1 Y) + (AX + B1 Y)^T    B2     (C1 X + D12 Y)^T ]
[ B2^T                           −γI    D11^T            ]
[ (C1 X + D12 Y)                 D11    −γI              ] < 0

[ (AX + B1 Y) + (AX + B1 Y)^T    B2 ]
[ B2^T                           −I ] < 0

[ Q                   (C2 X + D22 Y) ]
[ (C2 X + D22 Y)^T    X              ] > 0

X = X^T > 0, Q = Q^T > 0, tr{Q} < ν²

K*_{H∞/H2} = Y* X*^{-1}

2. Given R = {z : L + Mz + M^T z̄ < 0}

[X*, Y*] = arg min_{X,Y,γ} γ

subject to

[ (AX + B1 Y) + (AX + B1 Y)^T    B2     (C1 X + D12 Y)^T ]
[ B2^T                           −γI    D11^T            ]
[ (C1 X + D12 Y)                 D11    −γI              ] < 0

[λkl X + µkl (AX + B1 Y) + µlk (AX + B1 Y)^T]_{1≤k,l≤s} < 0

X = X^T > 0

K*_{H∞/R} = Y* X*^{-1}

3.

[X*, Y*] = arg min_{X,Y,Q,γ} γ    (given a)

subject to

[ (AX + B1 Y) + (AX + B1 Y)^T    B2     (C1 X + D12 Y)^T ]
[ B2^T                           −γI    D11^T            ]
[ (C1 X + D12 Y)                 D11    −γI              ] < 0

[ (AX + B1 Y) + (AX + B1 Y)^T    B2 ]
[ B2^T                           −I ] < 0

[ Q                   (C2 X + D22 Y) ]
[ (C2 X + D22 Y)^T    X              ] > 0

tr{Q} < a²

X = X^T > 0, Q = Q^T > 0

K*_{H∞/H2} = Y* X*^{-1}

4.15 Static and dynamic output feedback
4.15.1 Static output feedback
Consider the system

ẋ(t) = Ax(t) + Bu(t), x(0) = x0
y(t) = Cx(t) + Du(t)    (4.75)

The static output feedback takes the form

u(t) = Ky(t), K ∈ R^{m×p} (4.76)

From Linear Systems theory, it is well known that control laws of the form (4.76) are not general and sometimes fail to stabilize the system (4.75). In such cases, one moves to dynamic output feedback control laws which, under certain conditions, are always able to stabilize the system (4.75).
However, it is easy to formulate static output feedback synthesis problems. Consider, as an example, the following R-stabilization problem

H(−γ1, −γ2) = {z : −γ1 ≤ (z + z̄)/2 ≤ −γ2}

Figure 4.9: R-region

with γ1 ≥ 0 and γ2 ≥ 0. When γ2 = 0 and γ1 = ∞, the problem becomes the classic stabilizability problem. Then, the synthesis problem can be formulated as follows:

determine K, if it exists, such that

−γ1 ≤ Re{λi(A + BKC)} ≤ −γ2,  i = 1, 2, ..., n (4.77)

By recalling the well-known property that

min_i λi((L + L^T)/2) ≤ Re{λi(L)} ≤ max_i λi((L + L^T)/2) (4.78)

holds true for any L ∈ R^{n×n}, the solution can be determined by solving

∃K ∈ R^{m×p} s.t.  −2γ1 I < (A + BKC) + (A + BKC)^T < −2γ2 I (4.79)

The above LMI conditions can be rewritten as

∃K ∈ R^{m×p} s.t.
(A + BKC) + (A + BKC)^T < −2γ2 I
(A + BKC) + (A + BKC)^T > −2γ1 I    (4.80)

Notice that in deriving (4.79) from (4.78) we have used the obvious relationship, valid for symmetric matrices:

λi(A) < γ ∀i  ⇔  A < γI
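Property (4.78) (the symmetric-part bound on eigenvalue real parts, known as Bendixson's theorem) is easy to confirm numerically (a sketch on a random matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
L = rng.standard_normal((5, 5))           # arbitrary (non-symmetric) real matrix

# Eigenvalues of the symmetric part (L + L^T)/2 bound the real parts of eig(L).
sym_eigs = np.linalg.eigvalsh((L + L.T) / 2)
re_parts = np.linalg.eigvals(L).real
```

This is what allows the non-symmetric eigenvalue constraint (4.77) to be replaced by the symmetric matrix inequalities (4.79)-(4.80), at the price of some conservatism.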
In most practical cases, one chooses γ1 = ∞ and solves

γ* = min_K γ
s.t. (A + BKC) + (A + BKC)^T < γI

1. If γ* < 0 then the solution is stabilizing.
2. If γ* < −2γ2 then the solution is H(−∞, −γ2) stabilizing.

4.16 Dynamic output feedback

Figure 4.10: System

Open-loop (x ∈ R^n, u ∈ R^m, y ∈ R^p)

ẋ = Ax + B1 u + B2 w
z∞ = C1 x + D11 w + D12 u
z2 = C2 x + D21 w + D22 u
z1 = C3 x + D31 w + D32 u    (4.81)
y = Cx + Dw

Controller (ξ ∈ R^n)

ξ̇ = Ak ξ + Bk y
u = Ck ξ + Dk y    (4.82)

By defining the extended state

xcl = [x ; ξ] ∈ R^{2n} (4.83)
the following closed-loop system is achieved

[ ẋ ]   [ A + B1 Dk C    B1 Ck ] [ x ]   [ B1 Dk D + B2 ]
[ ξ̇ ] = [ Bk C           Ak    ] [ ξ ] + [ Bk D         ] w

z∞ = [C1 + D12 Dk C    D12 Ck] [x ; ξ] + (D11 + D12 Dk D) w

z2 = [C2 + D22 Dk C    D22 Ck] [x ; ξ] + (D21 + D22 Dk D) w

z1 = [C3 + D32 Dk C    D32 Ck] [x ; ξ] + (D31 + D32 Dk D) w    (4.84)
which can be rewritten in the following compact form

ẋcl = Acl xcl + Bcl w
z∞ = C̄1 xcl + D̄1 w
z2 = C̄2 xcl + D̄2 w    (D̄2 = 0 is required)
z1 = C̄3 xcl + D̄3 w    (4.85)

and the following closed-loop transfer matrices result

T∞(s) = C̄1 (sI − Acl)^{-1} Bcl + D̄1
T2(s) = C̄2 (sI − Acl)^{-1} Bcl    (D̄2 = 0)
T1(s) = C̄3 (sI − Acl)^{-1} Bcl + D̄3

Then, we can derive LMI conditions characterizing the various synthesis problems by starting from the LMI conditions derived for the analysis. In particular:

• H∞ - Acl asymptotically stable and ‖T∞‖H∞ < γ iff ∃P = P^T > 0, P ∈ R^{2n×2n} such that

[ Acl^T P + P Acl    P Bcl    C̄1^T ]
[ Bcl^T P            −γI      D̄1^T ]
[ C̄1                D̄1      −γI   ] < 0 (4.86)

Notice that this is not an LMI in Ak, Bk, Ck, Dk, P.

• H2 - Acl asymptotically stable and ‖T2‖H2 < ν iff ∃P = P^T > 0, Q = Q^T > 0 such that

[ Acl^T P + P Acl    P Bcl ]
[ Bcl^T P            −I    ] < 0,    [ Q        C̄2 ]
                                     [ C̄2^T    P   ] > 0,    tr{Q} < ν² (4.87)

• L1 - Acl asymptotically stable and ‖T1‖L1 < ζ if ∃P = P^T > 0, ∃λ, µ such that

[ Acl^T P + P Acl + λP    P Bcl ]
[ Bcl^T P                 −µI   ] < 0,    [ λP      0           C̄3^T ]
                                          [ 0       (ζ − µ)I    D̄3^T ]
                                          [ C̄3     D̄3         ζI    ] > 0 (4.88)

• R-stabilizability - All the eigenvalues of Acl belong to the LMI region R iff ∃P = P^T > 0 such that

MR(Acl, P) = [λkl P + µkl P Acl + µlk Acl^T P]_{1≤k,l≤s} < 0 (4.89)

In all the above cases, we need to arrive at LMI conditions that are linear in the controller variables Ak, Bk, Ck, Dk and in P. To this end, we use the matrix completion results seen previously in Chapter 2. Consider the matrix P ∈ R^{2n×2n} partitioned as

P = [ X      N ]        P^{-1} = [ Y      M ]
    [ N^T    ∗ ],                [ M^T    ∗ ] (4.90)

with X = X^T > 0 ∈ R^{n×n}, Y = Y^T > 0 ∈ R^{n×n}, M, N ∈ R^{n×n}. Then, because P P^{-1} = I_{2n}, one has that

P [ Y   ]   [ I_n ]
  [ M^T ] = [ 0_n ] (4.91)

Moreover, by defining

Πy = [ Y      I_n ]                      Πx = [ I_n    X   ]
     [ M^T    0_n ] ∈ R^{2n×2n},              [ 0_n    N^T ] ∈ R^{2n×2n} (4.92)

it is found that

P Πy = Πx (4.93)

[ X      N ] [ Y      I_n ]   [ I_n    X   ]
[ N^T    ∗ ] [ M^T    0_n ] = [ 0_n    N^T ]
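The identities (4.91)-(4.93) hold for any symmetric positive definite P, which is easy to verify numerically (a sketch; P below is a random SPD matrix, and the blocks X, N, Y, M are extracted from P and P^{-1} as in (4.90)):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
Ph = rng.standard_normal((2 * n, 2 * n))
P = Ph @ Ph.T + 2 * n * np.eye(2 * n)     # P = P^T > 0 in R^{2n x 2n}

Pinv = np.linalg.inv(P)
X, N = P[:n, :n], P[:n, n:]               # top blocks of P
Y, M = Pinv[:n, :n], Pinv[:n, n:]         # top blocks of P^{-1}

I, O = np.eye(n), np.zeros((n, n))
Pi_y = np.block([[Y, I], [M.T, O]])       # (4.92)
Pi_x = np.block([[I, X], [O, N.T]])
```

The product P Πy reproduces Πx exactly because the first block column of Πy is the first block column of P^{-1}.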

The matrices Πx and Πy will be instrumental in arriving at the desired LMI formulations. For example, we can start from the H∞ condition (4.86) and apply the congruence transformation

• H∞

diag(Πy, I, I)^T · (4.86) · diag(Πy, I, I) < 0

[ Πy^T P Acl Πy + Πy^T Acl^T P Πy    Πy^T P Bcl    Πy^T C̄1^T ]
[ Bcl^T P Πy                         −γI           D̄1^T      ]
[ C̄1 Πy                             D̄1           −γI        ] < 0 (4.94)

Now, it is worth noting that

Πy^T Acl^T P Πy = (Πy^T P Acl Πy)^T (4.95)

Πy^T C̄1^T = (C̄1 Πy)^T (4.96)

• H2
We can act similarly on the conditions (4.87) for the H2 optimization. In this case, one has

diag(Πy, I)^T [ Acl^T P + P Acl    P Bcl ] diag(Πy, I) < 0
              [ Bcl^T P            −I    ]

[ Πy^T P Acl Πy + (Πy^T P Acl Πy)^T    Πy^T P Bcl ]
[ (Πy^T P Bcl)^T                       −I         ] < 0 (4.97)

and

diag(I, Πy)^T [ Q        C̄2 ] diag(I, Πy) = [ Q             C̄2 Πy     ]
              [ C̄2^T    P   ]               [ (C̄2 Πy)^T    Πy^T P Πy ] > 0 (4.98)

32
• L1
Again, we transform the conditions (4.88) in the following way:
\[
\begin{bmatrix} \Pi_y & \\ & I \end{bmatrix}^T
\begin{bmatrix} A_{cl}^T P + P A_{cl} + \lambda P & P B_{cl} \\ B_{cl}^T P & -\mu I \end{bmatrix}
\begin{bmatrix} \Pi_y & \\ & I \end{bmatrix} < 0
\]
\[
\begin{bmatrix}
(\Pi_y^T P A_{cl} \Pi_y) + (\Pi_y^T P A_{cl} \Pi_y)^T + \lambda\, \Pi_y^T P \Pi_y & \Pi_y^T P B_{cl} \\
(\Pi_y^T P B_{cl})^T & -\mu I
\end{bmatrix} < 0 \tag{4.99}
\]
and
\[
\begin{bmatrix} \Pi_y & & \\ & I & \\ & & I \end{bmatrix}^T
\begin{bmatrix} \lambda P & 0 & \bar{C}_3^T \\ 0 & (\zeta - \mu) I & \bar{D}_3^T \\ \bar{C}_3 & \bar{D}_3 & \zeta I \end{bmatrix}
\begin{bmatrix} \Pi_y & & \\ & I & \\ & & I \end{bmatrix} > 0
\]
\[
\begin{bmatrix}
\lambda\, \Pi_y^T P \Pi_y & 0 & (\bar{C}_3 \Pi_y)^T \\
0 & (\zeta - \mu) I & \bar{D}_3^T \\
\bar{C}_3 \Pi_y & \bar{D}_3 & \zeta I
\end{bmatrix} > 0 \tag{4.100}
\]

• R-stabilizability
Finally, for this case we apply the congruence transformation diag(Π_y, …, Π_y):
\[
\operatorname{diag}(\Pi_y, \dots, \Pi_y)^T \; M_R(A_{cl}, P) \; \operatorname{diag}(\Pi_y, \dots, \Pi_y) < 0
\]
\[
\Big[ \lambda_{kl}\, \Pi_y^T P \Pi_y + \mu_{kl} (\Pi_y^T P A_{cl} \Pi_y) + \mu_{lk} (\Pi_y^T P A_{cl} \Pi_y)^T \Big]_{1 \le k,l \le s} < 0 \tag{4.101}
\]

Now, by considering the definitions of Π_y and Π_x in (4.92), we are ready to make explicit the relationship between the various terms and the controller variables A_k, B_k, C_k, D_k. In particular, we have

1. \[
\Pi_y^T P A_{cl} \Pi_y = \Pi_x^T A_{cl} \Pi_y =
\begin{bmatrix} AY + B_1 \hat{C}_k & A + B_1 \hat{D}_k C \\ \hat{A}_k & XA + \hat{B}_k C \end{bmatrix} \tag{4.102}
\]

2. \[
(\Pi_y^T P A_{cl} \Pi_y)^T = \Pi_y^T A_{cl}^T \Pi_x =
\begin{bmatrix} (AY + B_1 \hat{C}_k)^T & \hat{A}_k^T \\ (A + B_1 \hat{D}_k C)^T & (XA + \hat{B}_k C)^T \end{bmatrix} \tag{4.103}
\]
3. \[
\Pi_y^T P \Pi_y = \Pi_x^T \Pi_y = \begin{bmatrix} Y & I \\ I & X \end{bmatrix} \tag{4.104}
\]

4. \[
\Pi_y^T P B_{cl} = \Pi_x^T B_{cl} =
\begin{bmatrix} B_2 + B_1 \hat{D}_k D \\ XB_2 + \hat{B}_k D \end{bmatrix} \tag{4.105}
\]

5. \[
(\Pi_y^T P B_{cl})^T = B_{cl}^T \Pi_x =
\begin{bmatrix} (B_2 + B_1 \hat{D}_k D)^T & (XB_2 + \hat{B}_k D)^T \end{bmatrix} \tag{4.106}
\]

6. \[
\bar{C}_1 \Pi_y = \begin{bmatrix} C_1 Y + D_{12} \hat{C}_k & C_1 + D_{12} \hat{D}_k C \end{bmatrix} \tag{4.107}
\]

7. \[
\bar{C}_2 \Pi_y = \begin{bmatrix} C_2 Y + D_{22} \hat{C}_k & C_2 + D_{22} \hat{D}_k C \end{bmatrix} \tag{4.108}
\]

8. \[
\bar{C}_3 \Pi_y = \begin{bmatrix} C_3 Y + D_{32} \hat{C}_k & C_3 + D_{32} \hat{D}_k C \end{bmatrix} \tag{4.109}
\]

These are expressed in terms of the new controller matrices
\[
\hat{A}_k = N A_k M^T + N B_k C Y + X B_1 C_k M^T + X (A + B_1 D_k C) Y \tag{4.110}
\]
\[
\hat{B}_k = N B_k + X B_1 D_k \tag{4.111}
\]
\[
\hat{C}_k = C_k M^T + D_k C Y \tag{4.112}
\]
\[
\hat{D}_k = D_k \tag{4.113}
\]
with \(\hat{A}_k \in R^{n \times n}\), \(\hat{B}_k \in R^{n \times p}\), \(\hat{C}_k \in R^{m \times n}\), \(\hat{D}_k \in R^{m \times p}\). By using (4.102)-(4.109) one can prove the following sub-optimality lemmas:

Lemma 4.16.1. H∞ sub-optimality
A_cl is asymptotically stable and ‖T_∞‖_{H∞} < γ iff ∃X = X^T > 0, ∃Y = Y^T > 0, ∃\(\hat{A}_k, \hat{B}_k, \hat{C}_k, \hat{D}_k\) that jointly solve
\[
\begin{bmatrix}
(AY + B_1\hat{C}_k) + (AY + B_1\hat{C}_k)^T & \hat{A}_k^T + (A + B_1\hat{D}_k C) & (B_2 + B_1\hat{D}_k D) & (C_1 Y + D_{12}\hat{C}_k)^T \\
\hat{A}_k + (A + B_1\hat{D}_k C)^T & (XA + \hat{B}_k C) + (XA + \hat{B}_k C)^T & (XB_2 + \hat{B}_k D) & (C_1 + D_{12}\hat{D}_k C)^T \\
(B_2 + B_1\hat{D}_k D)^T & (XB_2 + \hat{B}_k D)^T & -\gamma I & (D_{11} + D_{12}\hat{D}_k D)^T \\
(C_1 Y + D_{12}\hat{C}_k) & (C_1 + D_{12}\hat{D}_k C) & (D_{11} + D_{12}\hat{D}_k D) & -\gamma I
\end{bmatrix} < 0 \tag{4.114}
\]

Lemma 4.16.2. H2 sub-optimality
A_cl is asymptotically stable and ‖T_2‖_{H2} < ν iff ∃X = X^T > 0, ∃Y = Y^T > 0, ∃Q = Q^T > 0, ∃\(\hat{A}_k, \hat{B}_k, \hat{C}_k, \hat{D}_k\) that jointly solve
\[
\begin{bmatrix}
(AY + B_1\hat{C}_k) + (AY + B_1\hat{C}_k)^T & \hat{A}_k^T + (A + B_1\hat{D}_k C) & (B_2 + B_1\hat{D}_k D) \\
\hat{A}_k + (A + B_1\hat{D}_k C)^T & (XA + \hat{B}_k C) + (XA + \hat{B}_k C)^T & (XB_2 + \hat{B}_k D) \\
(B_2 + B_1\hat{D}_k D)^T & (XB_2 + \hat{B}_k D)^T & -I
\end{bmatrix} < 0 \tag{4.115}
\]
\[
\begin{bmatrix}
Q & C_2 Y + D_{22}\hat{C}_k & C_2 + D_{22}\hat{D}_k C \\
(C_2 Y + D_{22}\hat{C}_k)^T & Y & I \\
(C_2 + D_{22}\hat{D}_k C)^T & I & X
\end{bmatrix} > 0 \tag{4.116}
\]
\[
\operatorname{tr}\{Q\} < \nu^2
\]
Lemma 4.16.3. L1 sub-optimality
A_cl is asymptotically stable and ‖T_1‖_{L1} < ζ if ∃X = X^T > 0, ∃Y = Y^T > 0, ∃\(\hat{A}_k, \hat{B}_k, \hat{C}_k, \hat{D}_k\), λ, µ > 0 that jointly solve
\[
\begin{bmatrix}
(AY + B_1\hat{C}_k) + (AY + B_1\hat{C}_k)^T + \lambda Y & \hat{A}_k^T + (A + B_1\hat{D}_k C) + \lambda I & (B_2 + B_1\hat{D}_k D) \\
\hat{A}_k + (A + B_1\hat{D}_k C)^T + \lambda I & (XA + \hat{B}_k C) + (XA + \hat{B}_k C)^T + \lambda X & (XB_2 + \hat{B}_k D) \\
(B_2 + B_1\hat{D}_k D)^T & (XB_2 + \hat{B}_k D)^T & -\mu I
\end{bmatrix} < 0
\]
\[
\begin{bmatrix}
\lambda Y & \lambda I & 0 & (C_3 Y + D_{32}\hat{C}_k)^T \\
\lambda I & \lambda X & 0 & (C_3 + D_{32}\hat{D}_k C)^T \\
0 & 0 & (\zeta - \mu) I & (D_{31} + D_{32}\hat{D}_k D)^T \\
(C_3 Y + D_{32}\hat{C}_k) & (C_3 + D_{32}\hat{D}_k C) & (D_{31} + D_{32}\hat{D}_k D) & \zeta I
\end{bmatrix} > 0
\]
Lemma 4.16.4. R-stabilizability
All the eigenvalues of A_cl belong to the LMI region
\[
R = \{ z \in C : L + M z + M^T \bar{z} < 0 \}
\]
iff there exist X = X^T > 0, Y = Y^T > 0, \(\hat{A}_k, \hat{B}_k, \hat{C}_k\), and \(\hat{D}_k\) such that the following LMI holds:
\[
M_R = \Big[ \lambda_{kl} \begin{bmatrix} Y & I \\ I & X \end{bmatrix}
+ \mu_{kl} \begin{bmatrix} AY + B_1\hat{C}_k & A + B_1\hat{D}_k C \\ \hat{A}_k & XA + \hat{B}_k C \end{bmatrix}
+ \mu_{lk} \begin{bmatrix} (AY + B_1\hat{C}_k)^T & \hat{A}_k^T \\ (A + B_1\hat{D}_k C)^T & (XA + \hat{B}_k C)^T \end{bmatrix}
\Big]_{1 \le k,l \le s} < 0
\]
All the previous sub-optimality conditions can be turned into optimal synthesis methods by minimizing the norm bounds subject to the given LMI conditions. This is left to the reader.
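As a numerical illustration of the region condition in its analysis form (4.89), consider the half-plane Re(z) < -α, for which L = 2α and M = 1, so (4.89) reduces to A^T P + P A + 2αP < 0. The sketch below (toy data of our own choosing) builds a certificate P by solving a Lyapunov equation for the shifted matrix A + αI:

```python
import numpy as np

alpha = 1.0
A = np.array([[0.0, 1.0], [-6.0, -5.0]])   # eigenvalues -2, -3, both left of -alpha
As = A + alpha * np.eye(2)                  # shifted matrix, still Hurwitz

# Solve the Lyapunov equation As^T P + P As = -I by vectorization
n = 2
K = np.kron(np.eye(n), As.T) + np.kron(As.T, np.eye(n))
P = np.linalg.solve(K, -np.eye(n).reshape(-1)).reshape(n, n)
P = (P + P.T) / 2                           # symmetrize against round-off

assert np.all(np.linalg.eigvalsh(P) > 0)          # P > 0
R = A.T @ P + P @ A + 2 * alpha * P               # equals -I by construction
assert np.all(np.linalg.eigvalsh(R) < 0)          # region condition certified
```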

4.17 Controller computation
When X, Y, \(\hat{A}_k, \hat{B}_k, \hat{C}_k\) and \(\hat{D}_k\) have been found, the problem remains of finding the controller matrices A_k, B_k, C_k, D_k. To this end, when N and M are square matrices, this can be done easily by considering the inverse relationships
\[
D_k = \hat{D}_k \tag{4.117}
\]
\[
C_k = (\hat{C}_k - D_k C Y) M^{-T} \tag{4.118}
\]
\[
B_k = N^{-1} (\hat{B}_k - X B_1 D_k) \tag{4.119}
\]
\[
A_k = N^{-1} \big[ \hat{A}_k - N B_k C Y - X B_1 C_k M^T - X (A + B_1 D_k C) Y \big] M^{-T} \tag{4.120}
\]
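A numerical round-trip check of the change of variables (4.110)-(4.113) and its inversion (4.117)-(4.120), with randomly generated controller data and hand-picked X, Y, N, M satisfying N M^T = I - XY (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p = 3, 2, 2
A  = rng.standard_normal((n, n)); B1 = rng.standard_normal((n, m))
C  = rng.standard_normal((p, n))
Ak = rng.standard_normal((n, n)); Bk = rng.standard_normal((n, p))
Ck = rng.standard_normal((m, n)); Dk = rng.standard_normal((m, p))
X, Y = 2.0 * np.eye(n), 3.0 * np.eye(n)
N = np.eye(n); M = (np.eye(n) - X @ Y).T          # N M^T = I - XY

# change of variables (4.110)-(4.113)
Ah = N @ Ak @ M.T + N @ Bk @ C @ Y + X @ B1 @ Ck @ M.T + X @ (A + B1 @ Dk @ C) @ Y
Bh = N @ Bk + X @ B1 @ Dk
Ch = Ck @ M.T + Dk @ C @ Y
Dh = Dk

# inverse relationships (4.117)-(4.120)
Dk2 = Dh
Ck2 = (Ch - Dk2 @ C @ Y) @ np.linalg.inv(M.T)
Bk2 = np.linalg.inv(N) @ (Bh - X @ B1 @ Dk2)
Ak2 = np.linalg.inv(N) @ (Ah - N @ Bk2 @ C @ Y - X @ B1 @ Ck2 @ M.T
                          - X @ (A + B1 @ Dk2 @ C) @ Y) @ np.linalg.inv(M.T)

for orig, rec in [(Ak, Ak2), (Bk, Bk2), (Ck, Ck2), (Dk, Dk2)]:
    assert np.allclose(orig, rec)          # the controller is recovered exactly
```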
A way to force N and M to be invertible consists of adding the LMI
\[
\begin{bmatrix} Y & I \\ I & X \end{bmatrix} > 0 \tag{4.121}
\]
to the synthesis problem, if it is not already present among the other LMIs. In fact, the relationship
\[
P \Pi_y = \Pi_x \;\Leftrightarrow\;
\begin{bmatrix} X & N \\ N^T & * \end{bmatrix}
\begin{bmatrix} Y & I \\ M^T & 0 \end{bmatrix}
= \begin{bmatrix} I & X \\ 0 & N^T \end{bmatrix}
\]
implies that the following equality holds:
\[
XY + N M^T = I \;\Rightarrow\; N M^T = I - XY
\]
Then, the invertibility of the product N M^T requires the invertibility of I - XY. Notice that, by a Schur complement argument, (4.121) is equivalent to
\[
X > 0, \qquad Y - X^{-1} > 0
\]
which forces all the eigenvalues of XY (similar to X^{1/2} Y X^{1/2}) to exceed 1, so that I - XY is invertible. At this point the matrices N and M can be determined easily (with a Cholesky decomposition, for example). Unfortunately, the LMI (4.121) tends to be satisfied with Y - X^{-1} having eigenvalues very close to zero, so that I - XY tends to be ill-conditioned. It is recommended to substitute (4.121) with
\[
\begin{bmatrix} Y & tI \\ tI & X \end{bmatrix} > 0 \tag{4.122}
\]
with t a variable to be maximized. This procedure tends to maximize the minimum eigenvalue of XY, so that I - XY becomes well-conditioned.
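A numerical check of the Schur-complement equivalence and of the induced bound on the eigenvalues of XY (hand-picked matrices, illustrative only):

```python
import numpy as np

def posdef(S):
    # positive definiteness test for a symmetric matrix
    return bool(np.all(np.linalg.eigvalsh(S) > 0))

n = 2
X = np.array([[2.0, 0.0], [0.0, 1.0]])
Y = np.array([[1.0, 0.2], [0.2, 1.5]])
big = np.block([[Y, np.eye(n)], [np.eye(n), X]])

# Schur complement: [[Y, I], [I, X]] > 0  <=>  X > 0 and Y - inv(X) > 0
assert posdef(big) == (posdef(X) and posdef(Y - np.linalg.inv(X)))
# which forces every eigenvalue of XY beyond 1, hence I - XY is invertible
assert np.all(np.real(np.linalg.eigvals(X @ Y)) > 1.0)
assert abs(np.linalg.det(np.eye(n) - X @ Y)) > 0.0
```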

4.18 Extensions
4.18.1 The use of dynamic frequency weights
From a practical ("engineering") point of view it can be advantageous to weight the exogenous signals w(t) and the objectives z(t) in frequency. The frequency characterization of the exogenous signals, via suitable stable filters, allows a better specification of the class of exogenous signals of interest. Characterizing the objectives in frequency means specifying their relevance in different frequency ranges. Typically, LOW-PASS filters are chosen for the objectives.

Figure 4.11: Low-pass filter

On the contrary, HIGH-PASS filters are used to weight the input, when it is included among the objectives.

Figure 4.12: High-pass filter

The rationale for choosing low-pass filters is that in many problems of industrial interest it is mostly important to minimize the objectives at low frequency. Conversely, high-pass filters are used to define the working range of interest of the actuators: where the magnitude of the high-pass filter is large, the control action will be small (it will have little content at those frequencies).

Figure 4.13: Actuator band

4.19 Extended model for the use of filters

Let
\[
\begin{aligned}
\dot{x} &= Ax + Bu + E\tilde{w} \\
\tilde{z} &= Hx + Fu + L\tilde{w}
\end{aligned} \tag{4.123}
\]
be a physical model with new disturbance and objective vectors \(\tilde{w}\) and \(\tilde{z}\). We want to think of \(\tilde{w}\) as the output of a filter \(W_w(s)\),
\[
\tilde{w}(t) = W_w(s) w(t) \tag{4.124}
\]
whose input is the original disturbance w(t). Then, we say that \(\tilde{w}\) is the filtered version of w(t). In a similar way, we consider the original objective vector z(t) as the filtered version of the objective vector \(\tilde{z}\) of the physical model. Then, via the filter \(W_z(s)\), we can write
\[
z(t) = W_z(s) \tilde{z}(t) \tag{4.125}
\]
Notice that (4.124) and (4.125) are ARMA models in the derivative operator s = d/dt. Moreover, let
\[
\begin{cases}
\dot{x}_w = A_w x_w + B_w w \\
\tilde{w} = C_w x_w + D_w w
\end{cases} \tag{4.126}
\]
\[
\begin{cases}
\dot{x}_z = A_z x_z + B_z \tilde{z} \\
z = C_z x_z + D_z \tilde{z}
\end{cases} \tag{4.127}
\]
be the state-space representations of \(W_w(s)\) and \(W_z(s)\):
\[
W_w(s) = C_w (sI - A_w)^{-1} B_w + D_w \tag{4.128}
\]
\[
W_z(s) = C_z (sI - A_z)^{-1} B_z + D_z \tag{4.129}
\]
Then, we want to include the filters in the physical model, in order to arrive at an extended model that can be used directly for synthesis with the LMI methodologies illustrated in the previous sections. To this end, we define the extended state
\[
x_e = \begin{bmatrix} x \\ x_z \\ x_w \end{bmatrix}
\]
where x is the state of the original model, \(x_z\) the state of the filter \(W_z\), and \(x_w\) the state of the filter \(W_w\).
Then, the overall state-space representation is given by
\[
\begin{bmatrix} \dot{x} \\ \dot{x}_z \\ \dot{x}_w \end{bmatrix}
= \begin{bmatrix} A & 0 & E C_w \\ B_z H & A_z & B_z L C_w \\ 0 & 0 & A_w \end{bmatrix}
\begin{bmatrix} x \\ x_z \\ x_w \end{bmatrix}
+ \begin{bmatrix} B \\ B_z F \\ 0 \end{bmatrix} u
+ \begin{bmatrix} E D_w \\ B_z L D_w \\ B_w \end{bmatrix} w \tag{4.130}
\]
\[
z = \begin{bmatrix} D_z H & C_z & D_z L C_w \end{bmatrix}
\begin{bmatrix} x \\ x_z \\ x_w \end{bmatrix}
+ D_z F u + D_z L D_w w \tag{4.131}
\]
xw

Figure 4.14: Block diagram

Then, one may design a controller for the extended model with state
\[
x_e = \begin{bmatrix} x \\ x_z \\ x_w \end{bmatrix}
\]
and find a controller of the form
\[
u(t) = K x_e(t) = \begin{bmatrix} K_x & K_z & K_w \end{bmatrix}
\begin{bmatrix} x(t) \\ x_z(t) \\ x_w(t) \end{bmatrix} \tag{4.132}
\]
obtained by the previous synthesis methods.

4.20 Tracking problems with integral effect

In tracking problems it is often of interest to have an offset-free response to constant set-points:
\[
r(t) \equiv \bar{r} \;\; \forall t \;\Rightarrow\; \lim_{t \to \infty} e(t) = 0
\]
Many tracking problems can be converted to classical regulation problems and solved as shown in this section.

4.21 Static state feedback (scalar case)

Figure 4.15: Static state feedback (scalar case)

• r - reference signal
• y_r - reference output (y_r ≈ r)
• e(t) = r(t) - y_r(t): tracking error

\[
\begin{cases}
\dot{x}_p(t) = A_p x_p(t) + B_p u(t) + B_w w_p(t), & w_p(t) \text{ disturbance} \\
z(t) = C_z x_p(t) + D_z u(t) + F_z w_p(t) \\
y_r(t) = C_r x_p(t) + D_r u(t) + F_r w_p(t)
\end{cases}
\]

This problem can be converted into a regulation problem by introducing a new state component
\[
\dot{x}_e(t) = r(t) - y_r(t) = r(t) - C_r x_p(t) - D_r u(t) - F_r w_p(t)
\]
Finally, one finds an extended plant
\[
\begin{bmatrix} \dot{x}_e \\ \dot{x}_p \end{bmatrix}
= \begin{bmatrix} 0 & -C_r \\ 0 & A_p \end{bmatrix}
\begin{bmatrix} x_e \\ x_p \end{bmatrix}
+ \begin{bmatrix} -D_r \\ B_p \end{bmatrix} u
+ \begin{bmatrix} -F_r & 1 \\ B_w & 0 \end{bmatrix}
\begin{bmatrix} w_p \\ r \end{bmatrix}
\]
\[
z = \begin{bmatrix} 0 & C_z \end{bmatrix}
\begin{bmatrix} x_e \\ x_p \end{bmatrix}
+ D_z u
+ \begin{bmatrix} F_z & 0 \end{bmatrix}
\begin{bmatrix} w_p \\ r \end{bmatrix}
\]
The control action becomes
\[
u(t) = K_{ff} x_e(t) + K_{fb} x_p(t)
= \begin{bmatrix} K_{ff} & K_{fb} \end{bmatrix}
\begin{bmatrix} x_e(t) \\ x_p(t) \end{bmatrix}
\]
This can be converted into a standard model

Figure 4.16: Once the controller K is determined from the synthesis algorithms, it needs to be partitioned into K_{ff} and K_{fb}

with
\[
x = \begin{bmatrix} x_e \\ x_p \end{bmatrix}, \qquad
w = \begin{bmatrix} w_p \\ r \end{bmatrix}
\]

In this case, it could make sense to add among the objectives
\[
z_e = \begin{bmatrix} 1 & 0 \end{bmatrix}
\begin{bmatrix} x_e \\ x_p \end{bmatrix}
+ 0 \cdot u
+ \begin{bmatrix} 0 & 0 \end{bmatrix}
\begin{bmatrix} w_p \\ r \end{bmatrix}
\]
in order to reject the disturbance also during the transients.
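The offset-free property follows from the equilibrium condition: at steady state \(\dot{x}_e = r - y_r = 0\) automatically. A minimal numerical sketch on a toy scalar plant of our own choosing (not from the text):

```python
import numpy as np

# Toy scalar plant (assumption): x_p' = -x_p + u, y_r = x_p
Ap, Bp, Cr = np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]])

# Extended plant with the integral state x_e' = r - y_r
Ae = np.block([[np.zeros((1, 1)), -Cr],
               [np.zeros((1, 1)), Ap]])
Bu = np.vstack([np.zeros((1, 1)), Bp])
Br = np.array([[1.0], [0.0]])            # channel of the reference r

K = np.array([[1.0, -1.0]])              # [K_ff, K_fb], chosen here to stabilize
Acl = Ae + Bu @ K
assert np.all(np.real(np.linalg.eigvals(Acl)) < 0)

# Equilibrium for a constant reference r = 1: Acl x_ss + Br * 1 = 0
x_ss = np.linalg.solve(Acl, -Br).ravel()
y_ss = (Cr @ x_ss[1:]).item()
assert np.isclose(y_ss, 1.0)             # y_r converges to r: offset-free tracking
```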

4.22 Static state feedback with zero ramp-tracking error

Figure 4.17: Static state feedback with zero ramp-tracking error

In order to impose two integrators (two poles at the origin) in the loop, one can consider the following control scheme

Figure 4.18: Control scheme

\[
\ddot{x}_e(t) = r(t) - y_r(t) = r(t) - C_r x_p(t) - D_r u(t) - F_r w_p(t)
\]
\[
\begin{bmatrix} \ddot{x}_e \\ \dot{x}_e \\ \dot{x}_p \end{bmatrix}
= \begin{bmatrix} 0 & 0 & -C_r \\ 1 & 0 & 0 \\ 0 & 0 & A_p \end{bmatrix}
\begin{bmatrix} \dot{x}_e \\ x_e \\ x_p \end{bmatrix}
+ \begin{bmatrix} -D_r \\ 0 \\ B_p \end{bmatrix} u
+ \begin{bmatrix} -F_r & 1 \\ 0 & 0 \\ B_w & 0 \end{bmatrix}
\begin{bmatrix} w_p \\ r \end{bmatrix}
\]
\[
z = \begin{bmatrix} 0 & 0 & C_z \end{bmatrix}
\begin{bmatrix} \dot{x}_e \\ x_e \\ x_p \end{bmatrix}
+ D_z u
+ \begin{bmatrix} F_z & 0 \end{bmatrix}
\begin{bmatrix} w_p \\ r \end{bmatrix}
\]
\[
u = K_{ff}^{(2)} \dot{x}_e(t) + K_{ff}^{(1)} x_e(t) + K_{fb} x_p(t)
= \begin{bmatrix} K_{ff}^{(2)} & K_{ff}^{(1)} & K_{fb} \end{bmatrix}
\begin{bmatrix} \dot{x}_e \\ x_e \\ x_p \end{bmatrix}
\]
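In transfer-function terms, the double integrator yields zero steady-state error to a ramp reference R(s) = 1/s². A quick final-value check on a toy loop of our own choosing (P(s) = 1/(s+1), C(s) = (2s² + 2s + 1)/s², which is stabilizing by the Routh test):

```python
# Toy loop (assumed example): P(s) = B/A = 1/(s+1),
# C(s) = N/D = (2s^2 + 2s + 1)/s^2, ramp reference R(s) = 1/s^2.
# Closed-loop poles: roots of s^3 + 3s^2 + 2s + 1, stable by the Routh test.
def s_times_E(s):
    DA = s**2 * (s + 1.0)                 # D(s) A(s)
    BN = 2.0 * s**2 + 2.0 * s + 1.0       # B(s) N(s)
    return s * (DA / (DA + BN)) * (1.0 / s**2)

# s*E(s) -> 0 as s -> 0: zero steady-state error to the ramp
for s in (1e-2, 1e-4, 1e-6):
    assert abs(s_times_E(s)) < 10.0 * s
```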

Figure 4.19: Once the controller K is determined from the synthesis algorithms, it needs to be partitioned into K_{ff}^{(2)}, K_{ff}^{(1)} and K_{fb}

4.23 Dynamic output-feedback tracking problems

Figure 4.20: Dynamic output-feedback tracking problems

\[
\begin{cases}
\dot{x}_p = A_p x_p + B_p u + B_w w_p \\
z = C_z x_p + D_z u + F_z w_p \\
y_r = C_r x_p + D_r u + F_r w_p \\
y_p = C_p x_p + D_p u + F_p w_p
\end{cases}
\]

\[
K_{ff}(s) =
\begin{cases}
\dot{x}_{ff} = A_k x_{ff} + B_{ff} x_e \\
u_{ff} = C_k x_{ff} + D_{ff} x_e
\end{cases}
\]
\[
K_{fb}(s) =
\begin{cases}
\dot{x}_{fb} = A_k x_{fb} + B_{fb} y_p \\
u_{fb} = C_k x_{fb} + D_{fb} y_p
\end{cases}
\]
\[
e(t) = r(t) - y_r(t)
\]
\[
\dot{x}_e(t) = e(t) = r(t) - C_r x_p(t) - D_r u(t) - F_r w_p(t)
\]
The model can be converted into a standard one

Figure 4.21: Once the controller matrices B_k and D_k have been determined from the synthesis algorithms, they need to be partitioned into B_{ff}, B_{fb} and D_{ff}, D_{fb} respectively.

by denoting
\[
x = \begin{bmatrix} x_e \\ x_p \end{bmatrix}, \quad
w = \begin{bmatrix} w_p \\ r \end{bmatrix}, \quad
y = \begin{bmatrix} x_e \\ y_p \end{bmatrix}, \quad
B_k = \begin{bmatrix} B_{ff} & B_{fb} \end{bmatrix}, \quad
D_k = \begin{bmatrix} D_{ff} & D_{fb} \end{bmatrix}
\]

" # " #" # " # " #" #
 ẋ e 0 −C r x e −D r −Fr 1 wp
= + u+






 ẋp 0 Ap xp Bp Bw 0 r





 " # " #
 h i x h i w
e p
z = 0 Cz + Dz u + Fz 0


 x p r





 " #" # " # " #" #

 1 0 x e 0 0 0 wp
y = 0 C + u+



p xp Dp Fp 0 r

(
ξ˙ = Ak ξ + Bk y
u = uf f + uf b = Ck ξ + D k y

Then, the previous theory can be used to determine A_k, B_k, C_k, D_k, and the two controllers can be implemented separately if desired.

4.24 Internal Model Principle
The above examples are special cases of a more general theory known as the Internal Model Principle.

Figure 4.22: Internal Model Principle

• P(s) = B(s)/A(s), C(s) = N(s)/D(s);
• The roots of D(s)A(s) + B(s)N(s) are the poles of the closed-loop system;
• The roots of D(s) and A(s) are the open-loop poles of the controller and of the plant respectively.

Now, let r(t) be a modal signal, that is
\[
\dot{r}(t) = A_r r(t) \tag{4.133}
\]
with the eigenvalues of A_r such that
\[
\operatorname{Re}(\lambda_i) \ge 0, \quad i = 1, 2, \dots \tag{4.134}
\]
The assumption in equation (4.134) ensures that r(t) is not evanescent, that is, it consists of divergent or bounded modes, as is typical for reference signals:
\[
r(t) = \sin \omega t, \; \cos \omega t, \; e^{\alpha t}, \; t e^{\alpha t}, \; \dots
\]
Then, the Internal Model Principle states that
\[
e(t) \to 0 \;(t \to \infty) \;\Leftrightarrow\;
\text{the eigenvalues of } A_r \text{ are contained among the roots of } D(s)A(s) = 0;
\]
equivalently, they are poles of the open-loop transfer function
\[
G(s) = \frac{N(s)}{D(s)} \frac{B(s)}{A(s)}
\]

The proof is based on the final value Lemma. Let f(t) ∈ C¹ (continuous with continuous derivative). Then, when the limit exists finite,
\[
\lim_{t \to \infty} f(t) = \lim_{s \to 0} s F(s), \qquad F(s) = \mathcal{L}(f(t))
\]
Assume that
\[
\mathcal{L}(r(t)) = \frac{R(s)}{D(s)}, \qquad R, D \text{ polynomials}
\]
It is found:
\[
\lim_{t \to \infty} e(t) = \lim_{s \to 0} s \, \frac{D(s)A(s)}{D(s)A(s) + B(s)N(s)} \, \frac{R(s)}{D(s)} \tag{4.135}
\]
Because all roots of D(s) have Re(z_i) ≥ 0, while those of D(s)A(s) + B(s)N(s) have Re(z_i) < 0 by asymptotic stability, the terms in D(s) in (4.135) cancel exactly, assuming they are not also contained in A(s). Then, it is found
\[
\lim_{t \to \infty} e(t) = 0 \tag{4.136}
\]
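A numerical reading of this argument on a toy example of our own choosing (P(s) = 1/(s+1), C(s) = 2/s, step reference with transform 1/s, so D(s) = s):

```python
# Toy example (assumed data): B/A = 1/(s+1), N/D = 2/s, reference transform 1/s.
# Closed-loop poles: roots of s(s+1) + 2 = s^2 + s + 2, both stable.
def s_times_E(s):
    D, A, B, N = s, s + 1.0, 1.0, 2.0
    return s * (D * A / (D * A + B * N)) * (1.0 / D)

# s*E(s) -> 0 as s -> 0, confirming lim e(t) = 0 for the step reference
for s in (1e-2, 1e-4, 1e-6):
    assert abs(s_times_E(s)) < 10.0 * s
```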

4.25 Static state feedback tracking problems with zero error to sinusoidal references
\[
r(t) = \sin(\omega t) \;\Rightarrow\; \mathcal{L}(r(t)) = \frac{\omega}{s^2 + \omega^2}
\]
The signal satisfies the following state-space representation:
\[
\begin{bmatrix} \ddot{r} \\ \dot{r} \end{bmatrix}
= \begin{bmatrix} 0 & -\omega^2 \\ 1 & 0 \end{bmatrix}
\begin{bmatrix} \dot{r} \\ r \end{bmatrix}
\;\Leftrightarrow\; \ddot{r}(t) + \omega^2 r(t) = 0
\]
Then, by repeating the above arguments, one has that the following scheme provides a useful solution

Figure 4.23: Internal model principle application example

\[
\ddot{x}_e(t) = r(t) - y_r(t) - \omega^2 x_e(t)
\]
\[
u(t) = K_{ff}^{(2)} \dot{x}_e(t) + K_{ff}^{(1)} x_e(t) + K_{fb} x_p(t)
\]
\[
\begin{bmatrix} \ddot{x}_e \\ \dot{x}_e \\ \dot{x}_p \end{bmatrix}
= \begin{bmatrix} 0 & -\omega^2 & -C_r \\ 1 & 0 & 0 \\ 0 & 0 & A_p \end{bmatrix}
\begin{bmatrix} \dot{x}_e \\ x_e \\ x_p \end{bmatrix}
+ \begin{bmatrix} -D_r \\ 0 \\ B_p \end{bmatrix} u
+ \begin{bmatrix} -F_r & 1 \\ 0 & 0 \\ B_w & 0 \end{bmatrix}
\begin{bmatrix} w_p \\ r \end{bmatrix}
\]
\[
z = \begin{bmatrix} 0 & 0 & C_z \end{bmatrix}
\begin{bmatrix} \dot{x}_e \\ x_e \\ x_p \end{bmatrix}
+ D_z u
+ \begin{bmatrix} F_z & 0 \end{bmatrix}
\begin{bmatrix} w_p \\ r \end{bmatrix}
\]
\[
x = \begin{bmatrix} \dot{x}_e \\ x_e \\ x_p \end{bmatrix}, \qquad
w = \begin{bmatrix} w_p \\ r \end{bmatrix}
\]
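A frequency-domain sanity check on a scalar loop of our own choosing (P(s) = 1/(s+1), C(s) = s²/(s² + ω²), ω = 2): the oscillator denominator makes the error transmission vanish exactly at s = jω, while the closed loop remains stable:

```python
import numpy as np

w = 2.0
# Closed-loop poles: roots of (s^2 + w^2)(s + 1) + s^2 = s^3 + 2s^2 + 4s + 4
poles = np.roots([1.0, 2.0, w**2, w**2])
assert np.all(np.real(poles) < 0)              # asymptotically stable loop

# Error transmission S = DA/(DA + BN) at the reference frequency s = j*w
s = 1j * w
S = ((s**2 + w**2) * (s + 1.0)) / ((s**2 + w**2) * (s + 1.0) + s**2)
assert abs(S) < 1e-12                          # sin(w t) tracked with zero error
```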

Figure 4.24: Once the controller K is determined from the synthesis algorithms, it needs to be partitioned into K_{ff}^{(2)}, K_{ff}^{(1)} and K_{fb}
