Linear Algebra and its Applications 312 (2000) 161–180

www.elsevier.com/locate/laa

Guaranteed cost control for discrete-time linear systems under controller gain perturbations

Guang-Hong Yang, Jian Liang Wang*, Yeng Chai Soh
School of Electrical and Electronic Engineering, Nanyang Technological University, Block S2,
Nanyang Avenue, Singapore 639798, Singapore
Received 3 June 1999; accepted 8 March 2000
Submitted by L. Rodman

This work is supported by the Academic Research Fund of the Ministry of Education, Singapore, under grant MID-ARC 3/97.
* Corresponding author. Tel.: +65-799-4846; fax: +65-792-0415. E-mail address: [email protected] (J.L. Wang).

Abstract

This paper considers the problem of guaranteed cost control for discrete-time linear sys-
tems under state feedback control gain perturbations. Two classes of perturbations are con-
sidered, namely, additive and multiplicative. The state feedback control designs for optimal
guaranteed cost control under the two classes of gain perturbations are given in terms of so-
lutions to algebraic Riccati equations. The designs are such that the cost of the closed-loop
system is guaranteed to be within a certain bound for all admissible uncertainties. Numerical
examples are included to illustrate the design procedures. © 2000 Elsevier Science Inc. All
rights reserved.
Keywords: Discrete-time systems; Linear quadratic regulator; Uncertainty; Gain perturbations; Robust
control; Riccati equation approach

1. Introduction

In the area of robust control for uncertain discrete-time linear systems, there have
been a number of design methods for achieving the robust stability and the robust
performance of uncertain closed-loop systems, see [3,6,7,12,18,21] and the refer-
ences therein. In particular, Corless and Manela [3], Fu et al. [6], and Magana and
Zak [18] investigated the problem of robust stabilization of discrete-time systems
with parametric uncertainties. Fu et al. [6] and Peres et al. [19] examined the ro-
bust H∞ performance problem for uncertain discrete-time systems. The robust H2
performance problem (namely, guaranteed cost control problem) for uncertain dis-
crete-time systems was addressed by Xie and Soh [21]. An alternative method for
the robust H2 performance problem is given by convex optimization in [12].
While the above methods yield controllers that are robust with respect to uncer-
tainties in the plant under control, their robustness with respect to uncertainties in
the controllers themselves has not been studied. In a recent paper [14], Keel and
Bhattacharyya have shown by a number of examples that the controllers designed by
using weighted H∞ , µ and l1 synthesis techniques may be very sensitive, or fragile,
with respect to errors in the controller coefficients, although they are robust with
respect to plant uncertainty. This raises a new issue: how to design a controller for a
given plant with uncertainty such that the controller is insensitive to some amount of
error with respect to its gains, i.e., the controller is non-fragile.
Recently, there have been some efforts to tackle the non-fragile controller design
problem, see [2,4,5,8–11,13]. In [5], a guaranteed cost control approach for non-
fragile linear quadratic control via state feedback is formulated by using linear matrix
inequalities. Haddad and Corrado [9] extended the robust fixed-structure guaranteed
cost controller synthesis framework to synthesize robust non-fragile controllers for
controller gain variations and system parametric uncertainty. In [10], a non-fragile
state feedback control is given for the case of polytopic uncertainties existing in both
the system dynamics and the controller gains, by using linear matrix inequalities.
In this paper, we study the problem of guaranteed cost control for discrete-time
systems under gain perturbations. The paper is organized as follows. In Section 2,
the problem under consideration and some preliminaries are given. Section 3 investi-
gates the guaranteed cost control problem under additive gain perturbations. Section
4 gives the results for the case of multiplicative gain perturbations. A numerical
example is given in Section 5. Finally, Section 6 concludes the paper.

2. Problem statement and preliminaries

Consider a discrete-time linear system described by the equation


$$x_{k+1} = A x_k + B u_k, \qquad (1)$$
where $x_k \in \mathbb{R}^n$ is the state, $u_k \in \mathbb{R}^m$ is the control input, and A and B are known
constant matrices. The cost function associated with this system is
$$J = \sum_{k=0}^{\infty} \left( x_k^T Q x_k + u_k^T R u_k \right), \qquad (2)$$
where Q > 0 and R > 0 are given weighting matrices. For a given controller uk =
Kxk , the actual controller implemented is assumed to be

$$u_k = (K + \Delta K) x_k, \qquad (3)$$
where K is the nominal controller gain, and $\Delta K$ represents the gain perturbations. In
this paper, the following two classes of perturbations are considered:
(a) $\Delta K$ is of the additive form
$$\Delta K = H_1 F_1 E_1, \qquad F_1^T F_1 \le \rho I, \quad \rho > 0, \qquad (4)$$
with $H_1$ and $E_1$ being known constant matrices, and $F_1$ the uncertain parameter
matrix.
(b) $\Delta K$ is of the multiplicative form
$$\Delta K = H_2 F_2 E_2 K, \qquad F_2^T F_2 \le \rho I, \quad \rho > 0, \qquad (5)$$
with $H_2$ and $E_2$ being known constant matrices, and $F_2$ the uncertain parameter
matrix.

Remark 2.1. Controller gain perturbations can result from actuator degradation, as
well as from the need to re-adjust controller gains during
the controller implementation stage (see [1,4,14,20]). These perturbations in the con-
troller gains are modeled here as uncertain gains that are dependent on uncertain
parameters. The models of additive uncertainties (4) and multiplicative uncertain-
ties (5) are used to describe the controller gain variations in [9] and [5], respective-
ly. The multiplicative model can also be used to describe degradations of actuator
effectiveness [1].
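
As a concrete illustration of the two perturbation models, the following Python sketch draws one admissible perturbation of each type; the helper names and the random sampling are our own choices and are not part of the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def random_contraction(rows, cols, rho):
        # Random F with largest singular value <= sqrt(rho), so that F.T @ F <= rho * I.
        F = rng.standard_normal((rows, cols))
        return F * (np.sqrt(rho) / max(np.linalg.norm(F, 2), 1e-12))

    def additive_dk(H1, E1, rho):
        # Admissible additive perturbation Delta K = H1 F1 E1, as in (4).
        F1 = random_contraction(H1.shape[1], E1.shape[0], rho)
        return H1 @ F1 @ E1

    def multiplicative_dk(H2, E2, K, rho):
        # Admissible multiplicative perturbation Delta K = H2 F2 E2 K, as in (5).
        F2 = random_contraction(H2.shape[1], E2.shape[0], rho)
        return H2 @ F2 @ E2 @ K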

For the actual controller implemented, we introduce the following definition, which
is similar to that in [21].

Definition 2.2. Consider the system (1) with the cost function (2). The control law
(3) with controller gain perturbations (4) or (5) is said to be a guaranteed cost control
with matrix $P > 0$ if
$$[A + B(K + \Delta K)]^T P [A + B(K + \Delta K)] - P + (K + \Delta K)^T R (K + \Delta K) + Q < 0$$
for all uncertainties $\Delta K$ satisfying (4) or (5).

Definition 2.3 [6]. The closed-loop uncertain system
$$x_{k+1} = [A + B(K + \Delta K)] x_k \qquad (6)$$
is said to be quadratically stable if there exists a matrix $P > 0$ such that
$$[A + B(K + \Delta K)]^T P [A + B(K + \Delta K)] - P < 0$$
for all uncertainties $\Delta K$ satisfying (4) or (5).

The following result shows that a guaranteed cost control for the system (1) will
guarantee the quadratic stability of the closed-loop system (6) and defines an upper
bound on the cost function (2).

Lemma 2.4. Consider the system (1) with the cost function (2). Suppose that the
control law (3) with controller uncertainties (4) or (5) is a guaranteed cost control
with matrix $P > 0$. Then the closed-loop uncertain system (6) is quadratically stable
and
$$J = \sum_{k=0}^{\infty} x_k^T \left[ Q + (K + \Delta K)^T R (K + \Delta K) \right] x_k \le x_0^T P x_0 \qquad (7)$$
for all uncertainties $\Delta K$ satisfying (4) or (5).

Proof. The quadratic stability of system (6) is immediate from Definitions 2.2 and
2.3. Let $V(x_k) = x_k^T P x_k$. Then, along the state trajectory of (6), we have
$$V(x_{k+1}) - V(x_k) = x_k^T \left( [A + B(K + \Delta K)]^T P [A + B(K + \Delta K)] - P \right) x_k \le -\left( u_k^T R u_k + x_k^T Q x_k \right).$$
It follows that
$$J = \lim_{N \to \infty} \sum_{k=0}^{N-1} \left( u_k^T R u_k + x_k^T Q x_k \right) \le \lim_{N \to \infty} \left[ V(x_0) - V(x_N) \right] = V(x_0).$$
Thus, the proof is complete. □
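
Lemma 2.4 lends itself to a simple numerical check: simulate the closed-loop system (6) under any admissible $\Delta K$, accumulate the cost in (7) over a long horizon, and compare the sum with $x_0^T P x_0$. A minimal sketch, where the function name and the truncation horizon are our own choices:

    import numpy as np

    def accumulated_cost(A, B, K, dK, Q, R, x0, steps=500):
        # Simulate x_{k+1} = [A + B(K + dK)] x_k and accumulate
        # x_k^T [Q + (K + dK)^T R (K + dK)] x_k over a finite horizon.
        Kp = K + dK
        Acl = A + B @ Kp
        x, J = np.asarray(x0, dtype=float), 0.0
        for _ in range(steps):
            J += x @ (Q + Kp.T @ R @ Kp) @ x
            x = Acl @ x
        return J

    # For a guaranteed cost control with matrix P, Lemma 2.4 asserts
    #     accumulated_cost(A, B, K, dK, Q, R, x0) <= x0 @ P @ x0
    # for every admissible dK (up to the truncation error of the finite horizon).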

In this paper, the problem under consideration is to design a state feedback gain
K such that the control law (3) with (4) or (5) is a guaranteed cost control associ-
ated with a cost matrix P. In particular, the optimal guaranteed cost control will be
pursued.
It should be noted that the results in [12] can provide sufficient conditions for the
guaranteed cost control problem under controller gain perturbations. In fact, if the
closed-loop cost function J in (7) is bounded by $\bar{J} = \sum_{k=0}^{\infty} x_k^T (Q + K^T R_0 K) x_k$,
where $R_0 > (I + H_2 F_2 E_2)^T R (I + H_2 F_2 E_2)$ for any $F_2$ satisfying (5), then for the
multiplicative case, applying the results in [12] to the system $x_{k+1} = A x_k + B(I + H_2 F_2 E_2) u_k$
and the cost function $\bar{J}$ gives a sufficient condition for the guaranteed cost
control problem under the multiplicative gain perturbations. The case of
the additive gain perturbations is similar. In this paper, we will provide necessary and
sufficient conditions for the guaranteed cost control problem under controller gain
perturbations.

Lemma 2.5 [15]. Given matrices Y, M and N. Then
$$Y + M \Delta N + N^T \Delta^T M^T < 0$$
for all $\Delta$ satisfying $\Delta^T \Delta \le \sigma I$ if and only if there exists a constant $\epsilon > 0$ such that
$$Y + \epsilon M M^T + \frac{\sigma}{\epsilon} N^T N < 0.$$


Definition 2.6 [17]. A symmetric matrix P is said to be a stabilizing solution to the
Riccati equation
$$A^T P A - P - A^T P B (B^T P B + R)^{-1} B^T P A + N = 0$$
if it satisfies the Riccati equation and the matrix $A - B(B^T P B + R)^{-1} B^T P A$ is
stable.

In this paper, we adopt the following notations. For a symmetric matrix E, $\lambda_{\max}(E)$
denotes the maximal eigenvalue of E. For a matrix F, $\|F\|_2 = [\lambda_{\max}(F^T F)]^{1/2}$.

3. Guaranteed cost control under additive gain perturbations

In this section, we consider the guaranteed cost control problem under additive
gain perturbations of the form (4). We first give the following theorem.

Theorem 3.1. Consider the system (1) with the cost function (2). There exists a
state feedback gain K such that the control law (3) with additive uncertainty (4) is a
quadratic guaranteed cost control with a cost matrix P if and only if there exists a
constant $\epsilon > 0$ such that
$$R_2 = R_2(P, \epsilon) \triangleq \frac{1}{\epsilon} I - H_1^T (B^T P B + R) H_1 > 0 \qquad (8)$$
and
$$S_a(P, \epsilon) \triangleq A^T P A - P + \frac{\rho}{\epsilon} E_1^T E_1 + Q - A^T P B (B^T P B + R)^{-1} B^T P A < 0. \qquad (9)$$

Furthermore, if (8) and (9) are satisfied, then a guaranteed cost control law is given
by (3) with
$$K = -(B^T P B + R)^{-1} B^T P A. \qquad (10)$$
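
Computationally, conditions (8) and (9) are direct matrix tests once P and $\epsilon$ are fixed, and the gain (10) is a single linear solve. A sketch of such a test in Python (the helper names are ours):

    import numpy as np

    def additive_conditions_hold(A, B, Q, R, P, H1, E1, rho, eps):
        # Check (8) and (9) of Theorem 3.1 for given P > 0 and eps > 0.
        X = B.T @ P @ B + R
        R2 = np.eye(H1.shape[1]) / eps - H1.T @ X @ H1                    # condition (8)
        Sa = (A.T @ P @ A - P + (rho / eps) * E1.T @ E1 + Q
              - A.T @ P @ B @ np.linalg.solve(X, B.T @ P @ A))            # S_a(P, eps) in (9)
        posdef = lambda M: np.all(np.linalg.eigvalsh((M + M.T) / 2) > 0)
        return posdef(R2) and posdef(-Sa)

    def additive_gain(A, B, R, P):
        # Guaranteed cost gain K = -(B^T P B + R)^{-1} B^T P A from (10).
        return -np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)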

Proof. Let the control law (3) with controller gain uncertainty (4) be a quadratic
guaranteed cost control with a cost matrix P. Then from Definition 2.2, it follows
that
$$[A + B(K + \Delta K)]^T P [A + B(K + \Delta K)] - P + (K + \Delta K)^T R (K + \Delta K) + Q < 0$$
for all uncertainties $\Delta K$ of the form (4). By the Schur complement and (4), this inequality
is equivalent to the following inequality:
$$\begin{aligned}
&\begin{bmatrix} -P^{-1} & 0 & A + B(K + \Delta K) \\ 0 & -R^{-1} & K + \Delta K \\ [A + B(K + \Delta K)]^T & (K + \Delta K)^T & Q - P \end{bmatrix} \\
&\quad = \begin{bmatrix} -P^{-1} & 0 & A + BK \\ 0 & -R^{-1} & K \\ (A + BK)^T & K^T & Q - P \end{bmatrix}
+ \begin{bmatrix} BH_1 \\ H_1 \\ 0 \end{bmatrix} F_1 \begin{bmatrix} 0 & 0 & E_1 \end{bmatrix}
+ \left( \begin{bmatrix} BH_1 \\ H_1 \\ 0 \end{bmatrix} F_1 \begin{bmatrix} 0 & 0 & E_1 \end{bmatrix} \right)^T < 0.
\end{aligned}$$

By Lemma 2.5, the above inequality is equivalent to the existence of a constant $\epsilon > 0$
such that
$$\begin{aligned}
&\begin{bmatrix} -P^{-1} & 0 & A + BK \\ 0 & -R^{-1} & K \\ (A + BK)^T & K^T & Q - P \end{bmatrix}
+ \epsilon \begin{bmatrix} BH_1 \\ H_1 \\ 0 \end{bmatrix} \begin{bmatrix} BH_1 \\ H_1 \\ 0 \end{bmatrix}^T
+ \frac{\rho}{\epsilon} \begin{bmatrix} 0 \\ 0 \\ E_1^T \end{bmatrix} \begin{bmatrix} 0 & 0 & E_1 \end{bmatrix} \\
&\quad = \begin{bmatrix} -P^{-1} + \epsilon B H_1 H_1^T B^T & \epsilon B H_1 H_1^T & A + BK \\ \epsilon H_1 H_1^T B^T & -R^{-1} + \epsilon H_1 H_1^T & K \\ (A + BK)^T & K^T & Q - P + \frac{\rho}{\epsilon} E_1^T E_1 \end{bmatrix} < 0.
\end{aligned}$$

By the Schur complement and completing the square, it follows that the above inequality
is equivalent to
$$M = \begin{bmatrix} M_{11} & M_{12} \\ M_{12}^T & M_{22} \end{bmatrix} \triangleq \left( \begin{bmatrix} P^{-1} & 0 \\ 0 & R^{-1} \end{bmatrix} - \epsilon \begin{bmatrix} BH_1 \\ H_1 \end{bmatrix} \begin{bmatrix} BH_1 \\ H_1 \end{bmatrix}^T \right)^{-1} > 0 \qquad (11)$$
and
$$\begin{aligned}
D_1 &= \begin{bmatrix} (A + BK)^T & K^T \end{bmatrix} M \begin{bmatrix} A + BK \\ K \end{bmatrix} - P + \frac{\rho}{\epsilon} E_1^T E_1 + Q \\
&= (A + BK)^T M_{11} (A + BK) + K^T M_{12}^T (A + BK) + (A + BK)^T M_{12} K \\
&\quad + K^T M_{22} K - P + \frac{\rho}{\epsilon} E_1^T E_1 + Q \\
&= A^T M_{11} A - P + \frac{\rho}{\epsilon} E_1^T E_1 + Q - A^T (M_{11} B + M_{12}) R_1^{-1} (M_{11} B + M_{12})^T A \\
&\quad + \left[ K^T + A^T (M_{11} B + M_{12}) R_1^{-1} \right] R_1 \left[ K^T + A^T (M_{11} B + M_{12}) R_1^{-1} \right]^T < 0, \qquad (12)
\end{aligned}$$
where
$$R_1 = B^T M_{11} B + M_{22} + M_{12}^T B + B^T M_{12}. \qquad (13)$$
It is easy to show that $M > 0$ is equivalent to inequality (8). By direct computation, we have
$$\begin{aligned}
M &= \begin{bmatrix} P & 0 \\ 0 & R \end{bmatrix} + \begin{bmatrix} P & 0 \\ 0 & R \end{bmatrix} \begin{bmatrix} BH_1 \\ H_1 \end{bmatrix} R_2^{-1} \begin{bmatrix} BH_1 \\ H_1 \end{bmatrix}^T \begin{bmatrix} P & 0 \\ 0 & R \end{bmatrix} \\
&= \begin{bmatrix} P + P B H_1 R_2^{-1} H_1^T B^T P & P B H_1 R_2^{-1} H_1^T R \\ R H_1 R_2^{-1} H_1^T B^T P & R + R H_1 R_2^{-1} H_1^T R \end{bmatrix}. \qquad (14)
\end{aligned}$$
Thus, from (13) and (14), it follows that
$$\begin{aligned}
R_1 &= B^T P B + B^T P B H_1 R_2^{-1} H_1^T B^T P B + R + R H_1 R_2^{-1} H_1^T R \\
&\quad + B^T P B H_1 R_2^{-1} H_1^T R + R H_1 R_2^{-1} H_1^T B^T P B \\
&= X + X H_1 R_2^{-1} H_1^T X, \qquad (15)
\end{aligned}$$
$$M_{11} B + M_{12} = P B + P B H_1 R_2^{-1} H_1^T B^T P B + P B H_1 R_2^{-1} H_1^T R = P B \left( I + H_1 R_2^{-1} H_1^T X \right), \qquad (16)$$
$$R_1^{-1} (M_{11} B + M_{12})^T = \left( X + X H_1 R_2^{-1} H_1^T X \right)^{-1} \left( I + H_1 R_2^{-1} H_1^T X \right) B^T P = X^{-1} B^T P, \qquad (17)$$
where
$$X = B^T P B + R. \qquad (18)$$
By (11), (12) and (14)–(18), it follows that
$$\begin{aligned}
D_1 &= A^T P A - P + \frac{\rho}{\epsilon} E_1^T E_1 + Q
- A^T P B \left[ \left( I + H_1 R_2^{-1} H_1^T X \right) X^{-1} - H_1 R_2^{-1} H_1^T \right] B^T P A \\
&\quad + \left[ K^T + A^T P B X^{-1} \right] R_1 \left[ K^T + A^T P B X^{-1} \right]^T \\
&= S_a(P, \epsilon) + \left[ K^T + A^T P B X^{-1} \right] R_1 \left[ K^T + A^T P B X^{-1} \right]^T. \qquad (19)
\end{aligned}$$
From (11) and (19), the necessity is obvious. For the sufficiency, the proof is completed
by substituting K in (10) into Eq. (19). □

Theorem 3.1 provides a necessary and sufficient condition for the solution to the
quadratic guaranteed cost control problem. But it remains unclear how one should
choose the design parameter $\epsilon$ in order to achieve the minimal guaranteed cost of the
closed-loop system. Denote
$$\epsilon_a = \sup \left\{ \epsilon > 0 : S_a(P, \epsilon) = 0 \text{ has a stabilizing solution } P > 0 \text{ and (8) holds} \right\}. \qquad (20)$$
Then, the design parameter $\epsilon$ for achieving a suboptimal guaranteed cost of the closed-loop
system falls in the range $0 < \epsilon < \epsilon_a$. The next theorem shows that the optimal
guaranteed cost control (i.e., the control law that yields the minimal cost as
defined in (2)) is obtained at the boundary value $\epsilon = \epsilon_a$.

Theorem 3.2. Consider the system (1) with cost function (2). Suppose that the pair
(A, B) is stabilizable. If there exists a state feedback gain K such that the control law
(3) with additive uncertainty (4) is a quadratic guaranteed cost control with a cost
matrix $P_0$, then the following Riccati equation with $\epsilon_a$ defined by (20) has a unique
stabilizing solution $P_{opt} > 0$ satisfying $P_{opt} \le P_0$ and
$$R_2(P_{opt}, \epsilon_a) > 0, \qquad (21)$$
$$S_a(P_{opt}, \epsilon_a) = 0, \qquad (22)$$
and the control law (3) with
$$K = -(B^T P_{opt} B + R)^{-1} B^T P_{opt} A \qquad (23)$$
is such that the resulting closed-loop system (6) is quadratically stable, and $J \le x_0^T P_{opt} x_0$ for all uncertainties $\Delta K$ of the form (4).

Proof. By Theorem 3.1, there exists a constant $\epsilon' > 0$ such that the inequalities
(8) and (9) hold for $\epsilon = \epsilon'$ and $P = P_0$. Let $P_{01} > 0$ be a stabilizing solution to
$S_a(P, \epsilon') = 0$. By the comparison theorem [17, Theorem 13.3.1], we have $P_{01} \le P_0$
and $R_2(P_{01}, \epsilon') > 0$. Thus, $\epsilon_a$ in (20) is well-defined. Choose sequences $\{\epsilon_n\}_{n=1}^{\infty}$ and
$\{P_n\}_{n=1}^{\infty}$ such that $0 < \epsilon_n \le \epsilon_{n+1}$, $\epsilon_n \to \epsilon_a$ $(n \to \infty)$, $P_n$ is a stabilizing solution
to $S_a(P, \epsilon_n) = 0$ and $R_2(P_n, \epsilon_n) > 0$. By the definition of $S_a(P, \epsilon)$ in (9) and the
comparison theorem, we have $P_n \ge P_{n+1} > 0$ $(n = 1, 2, \ldots)$. Thus, $\lim_{n \to \infty} P_n = P_{\infty} \ge 0$
exists, and $P_{\infty}$ satisfies $S_a(P_{\infty}, \epsilon_a) = 0$ and $R_2(P_{\infty}, \epsilon_a) \ge 0$. By [17, Theorem
16.6.4], it follows that $P_{\infty}$ is a stabilizing solution to $S_a(P, \epsilon_a) = 0$ and
$P_{\infty} > 0$. Consider a sequence $\{\sigma_n\}_{n=1}^{\infty}$ with $\sigma_n > 0$, $\sigma_n \to 0$ $(n \to \infty)$; then there
exists a sequence $\{\epsilon_{0n}\}_{n=1}^{\infty}$ with $0 < \epsilon_{0n} < \epsilon_a$, $\epsilon_{0n} \to \epsilon_a$ $(n \to \infty)$ such that
$$S_a(P_{\infty}, \epsilon_{0n}) - \sigma_n I < 0, \qquad n = 1, 2, \ldots$$
By the proof of Theorem 3.1, it follows that
$$[A + B(K + \Delta K)]^T P_{\infty} [A + B(K + \Delta K)] - P_{\infty} + (K + \Delta K)^T R (K + \Delta K) + Q - \sigma_n I < 0, \qquad n = 1, 2, \ldots,$$
where K is given by (23) with $P_{opt} = P_{\infty}$, and $\Delta K$ is given by (4). Letting $n \to \infty$, we
have
$$[A + B(K + \Delta K)]^T P_{\infty} [A + B(K + \Delta K)] - P_{\infty} + (K + \Delta K)^T R (K + \Delta K) + Q \le 0.$$
Let $P_{\delta} = \delta P_{\infty}$ with $\delta > 1$. Then, from $Q > 0$ and the above inequality, it follows that
$$\begin{aligned}
&[A + B(K + \Delta K)]^T P_{\delta} [A + B(K + \Delta K)] - P_{\delta} + (K + \Delta K)^T R (K + \Delta K) + Q \\
&\quad = \delta \left\{ [A + B(K + \Delta K)]^T P_{\infty} [A + B(K + \Delta K)] - P_{\infty} + (K + \Delta K)^T R (K + \Delta K) + Q \right\} \\
&\qquad - (\delta - 1) \left[ (K + \Delta K)^T R (K + \Delta K) + Q \right] \\
&\quad \le -(\delta - 1) \left[ (K + \Delta K)^T R (K + \Delta K) + Q \right] < 0.
\end{aligned}$$
Thus, $u_k = (K + \Delta K) x_k$ is a guaranteed cost control with $P_{\delta}$. By Lemma 2.4 and
letting $\delta \to 1$, we have $J \le \lim_{\delta \to 1} x_0^T P_{\delta} x_0 = x_0^T P_{\infty} x_0$. Since $\epsilon' \le \epsilon_a$, it follows that
$P_{\infty} \le P_{01} \le P_0$. The proof is completed by letting $P_{opt} = P_{\infty}$. □

Remark 3.3. Theorem 3.2 presents a design procedure for optimal guaranteed cost
control, and the closed-loop value of the cost function J is bounded by the minimal
value $x_0^T P_{opt} x_0$. From (8), the parameter $\epsilon_a$ in Theorem 3.2 lies in the range
$0 < \epsilon_a \le \lambda_a$, where
$$\lambda_a = \begin{cases} \left[ \lambda_{\max}\left( H_1^T (B^T P_a B + R) H_1 \right) \right]^{-1} & \text{if } H_1 \ne 0, \\ \infty & \text{if } H_1 = 0, \end{cases} \qquad (24)$$
and $P_a > 0$ is the stabilizing solution to the Riccati equation
$$S_{a,\infty}(P) = A^T P A - P + Q - A^T P B (B^T P B + R)^{-1} B^T P A = 0. \qquad (25)$$
Then, from (20)–(22), it follows that
$$\epsilon_a = \max \left\{ 0 < \epsilon \le \lambda_a : S_a(P, \epsilon) = 0 \text{ has a stabilizing solution } P > 0 \text{ and } R_2(P, \epsilon) > 0 \right\}. \qquad (26)$$

For a given $\epsilon \in (0, \lambda_a]$, we can solve the Riccati equation $S_a(P, \epsilon) = 0$ for a
stabilizing solution $P > 0$ and check whether P satisfies $R_2(P, \epsilon) > 0$. Hence, by
initializing at $\epsilon = \lambda_a$ and gradually decreasing $\epsilon$ until a solution exists (for $S_a(P, \epsilon) = 0$,
$P > 0$ and $R_2(P, \epsilon) > 0$), the optimal value $\epsilon_a$ can be obtained. However, the search
for $\epsilon_a$ may be difficult if the interval $(0, \lambda_a]$ is very large. Moreover, since the optimal
parameter $\epsilon_a$ is on the boundary of the interval $(0, \epsilon_a)$ yielding a family of guaranteed
cost controls, it is safer in a practical design to choose an $\epsilon$ slightly smaller than $\epsilon_a$,
which yields a suboptimal guaranteed cost control. It should be noted that
Eq. (25) is the Riccati equation of the standard quadratic optimal control [16] for the
system (1) with the cost function (2). Also, the Riccati equation (22) corresponds to
that of the standard quadratic optimal control for the system (1) with the cost function
$$J_a = \sum_{k=0}^{\infty} \left( x_k^T \bar{Q} x_k + u_k^T R u_k \right),$$
where $\bar{Q} = (\rho/\epsilon_a) E_1^T E_1 + Q$.
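
The search just described can be automated with a standard DARE solver, since $S_a(P, \epsilon) = 0$ is the Riccati equation of the system (1) with weights $(\bar{Q}, R)$. A possible sketch, assuming scipy is available and using a simple decreasing grid over $\epsilon$ (the grid size is an arbitrary choice of ours):

    import numpy as np
    from scipy.linalg import solve_discrete_are

    def design_additive(A, B, Q, R, H1, E1, rho, n_grid=200):
        # Approximate the design of Theorem 3.2 / Remark 3.3: return (eps, P, K) for the
        # largest grid value of eps at which S_a(P, eps) = 0 has a stabilizing solution
        # satisfying condition (8).
        Pa = solve_discrete_are(A, B, Q, R)                    # Riccati equation (25)
        lam_a = 1.0 / np.linalg.eigvalsh(H1.T @ (B.T @ Pa @ B + R) @ H1).max()   # bound (24)
        for eps in np.linspace(lam_a, lam_a / n_grid, n_grid):
            Qbar = Q + (rho / eps) * E1.T @ E1
            try:
                P = solve_discrete_are(A, B, Qbar, R)          # S_a(P, eps) = 0
            except (ValueError, np.linalg.LinAlgError):
                continue
            X = B.T @ P @ B + R
            R2 = np.eye(H1.shape[1]) / eps - H1.T @ X @ H1     # condition (8)
            if np.all(np.linalg.eigvalsh(R2) > 0):
                K = -np.linalg.solve(X, B.T @ P @ A)           # gain (10)
                return eps, P, K
        raise RuntimeError("no admissible eps found on the grid")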

4. Guaranteed cost control under multiplicative gain perturbations

In this section, we consider the guaranteed cost control problem under the multi-
plicative gain perturbations (5).

Theorem 4.1. Consider the system (1) with the cost function (2). There exists a state
feedback gain matrix K such that the control law (3) with multiplicative uncertainty
(5) is a quadratic guaranteed cost control with a cost matrix P if and only if there
exists a constant $\epsilon > 0$ such that
$$R_{20} = R_{20}(P, \epsilon) \triangleq \frac{1}{\epsilon} I - H_2^T (B^T P B + R) H_2 > 0 \qquad (27)$$
and
$$S_m(P, \epsilon) \triangleq A^T P A - P + Q - A^T P B D_0 B^T P A < 0, \qquad (28)$$
where
$$D_0 = D_0(P, \epsilon) \triangleq \left( I - \rho H_2 H_2^T E_2^T E_2 \right) \left[ (B^T P B + R)\left( I - \rho H_2 H_2^T E_2^T E_2 \right) + \frac{\rho}{\epsilon} E_2^T E_2 \right]^{-1}. \qquad (29)$$

Furthermore, if both (27) and (28) hold, then a guaranteed cost control law with the
cost matrix P is given by (3) with
$$K = -\left\{ I - \rho \left( X^{-1} - \epsilon H_2 H_2^T \right) E_2^T \left[ \epsilon \left( I - \rho E_2 H_2 H_2^T E_2^T \right) + \rho E_2 X^{-1} E_2^T \right]^{-1} E_2 \right\} X^{-1} B^T P A, \qquad (30)$$
where $X = B^T P B + R$.

Proof. From the proof of Theorem 3.1, it is easy to see that
$$[A + B(K + \Delta K)]^T P [A + B(K + \Delta K)] - P + (K + \Delta K)^T R (K + \Delta K) + Q < 0$$
for all uncertainties $\Delta K$ satisfying (5) if and only if there exists a constant $\epsilon > 0$
such that inequality (27) holds and
$$\begin{aligned}
D_2 &= \begin{bmatrix} (A + BK)^T & K^T \end{bmatrix} M_0 \begin{bmatrix} A + BK \\ K \end{bmatrix} - P + \frac{\rho}{\epsilon} K^T E_2^T E_2 K + Q \\
&= A^T P A - P + Q - A^T P B \left[ \left( I + H_2 R_{20}^{-1} H_2^T X \right) R_3^{-1} \left( I + H_2 R_{20}^{-1} H_2^T X \right)^T - H_2 R_{20}^{-1} H_2^T \right] B^T P A \\
&\quad + \left[ K^T + A^T P B \left( I + H_2 R_{20}^{-1} H_2^T X \right) R_3^{-1} \right] R_3 \left[ K^T + A^T P B \left( I + H_2 R_{20}^{-1} H_2^T X \right) R_3^{-1} \right]^T < 0, \qquad (31)
\end{aligned}$$
where
$$M_0 = \left( \begin{bmatrix} P^{-1} & 0 \\ 0 & R^{-1} \end{bmatrix} - \epsilon \begin{bmatrix} BH_2 \\ H_2 \end{bmatrix} \begin{bmatrix} BH_2 \\ H_2 \end{bmatrix}^T \right)^{-1} > 0, \qquad (32)$$
$$R_3 = X + X H_2 R_{20}^{-1} H_2^T X + \frac{\rho}{\epsilon} E_2^T E_2, \qquad (33)$$
$$X = B^T P B + R. \qquad (34)$$
Denote
$$R_{10} = X + X H_2 R_{20}^{-1} H_2^T X. \qquad (35)$$
Then, from (27), (33) and (35), it follows that
$$R_3^{-1} = R_{10}^{-1} - \frac{\rho}{\epsilon} R_{10}^{-1} E_2^T \left( I + \frac{\rho}{\epsilon} E_2 R_{10}^{-1} E_2^T \right)^{-1} E_2 R_{10}^{-1}, \qquad (36)$$
$$R_{10}^{-1} = \left[ X \left( I + H_2 R_{20}^{-1} H_2^T X \right) \right]^{-1} = \left[ X \left( I + \epsilon H_2 H_2^T \left( I - \epsilon X H_2 H_2^T \right)^{-1} X \right) \right]^{-1} = X^{-1} - \epsilon H_2 H_2^T, \qquad (37)$$
$$I + H_2 R_{20}^{-1} H_2^T X = X^{-1} \left( X^{-1} - \epsilon H_2 H_2^T \right)^{-1}, \qquad (38)$$
$$H_2 R_{20}^{-1} H_2^T = \epsilon H_2 H_2^T \left( I - \epsilon X H_2 H_2^T \right)^{-1}. \qquad (39)$$
By combining (29) and (36)–(39), it follows that
$$\begin{aligned}
&\left( I + H_2 R_{20}^{-1} H_2^T X \right) R_3^{-1} \left( I + H_2 R_{20}^{-1} H_2^T X \right)^T - H_2 R_{20}^{-1} H_2^T \\
&\quad = X^{-1} \left( X^{-1} - \epsilon H_2 H_2^T \right)^{-1} X^{-1} - \epsilon H_2 H_2^T \left( I - \epsilon X H_2 H_2^T \right)^{-1} \\
&\qquad - \frac{\rho}{\epsilon} X^{-1} E_2^T \left( I + \frac{\rho}{\epsilon} E_2 X^{-1} E_2^T - \rho E_2 H_2 H_2^T E_2^T \right)^{-1} E_2 X^{-1} \\
&\quad = X^{-1} - \frac{\rho}{\epsilon} X^{-1} E_2^T \left( I - \rho E_2 H_2 H_2^T E_2^T + \frac{\rho}{\epsilon} E_2 X^{-1} E_2^T \right)^{-1} E_2 X^{-1} \\
&\quad = D_0(P, \epsilon) \qquad (40)
\end{aligned}$$
and
$$R_3^{-1} \left( I + X H_2 R_{20}^{-1} H_2^T \right) B^T P A = \left\{ I - \rho \left( X^{-1} - \epsilon H_2 H_2^T \right) E_2^T \left[ \epsilon \left( I - \rho E_2 H_2 H_2^T E_2^T \right) + \rho E_2 X^{-1} E_2^T \right]^{-1} E_2 \right\} X^{-1} B^T P A. \qquad (41)$$
Thus, the proof is completed by (31), (40) and (41). □

Although it is not obvious at first glance, inequality (28) is actually equivalent to
a standard algebraic Riccati inequality, as is shown in the next lemma. Suppose that
the matrix $I - \rho E_2 H_2 H_2^T E_2^T$ is singular. Then there exists an orthonormal matrix $T_1$
such that
$$T_1 E_2 H_2 H_2^T E_2^T T_1^T = \mathrm{diag}\left[ W_{s_1}, \tfrac{1}{\rho} I_{s_0} \right], \qquad (42)$$
where $W_{s_1} \ge 0$ is diagonal with eigenvalues not including $1/\rho$, and $I_{s_0}$ is an $s_0 \times s_0$
identity matrix with $s_0 > 0$. Denote
$$\bar{E}_2 = T_1 E_2 = \begin{bmatrix} \bar{E}_{2s_1} \\ \bar{E}_{2s_0} \end{bmatrix}, \qquad \bar{E}_{2s_0} \in \mathbb{R}^{s_0 \times m}. \qquad (43)$$
If $\bar{E}_{2s_0} \ne 0$, then let $T_0$ be an orthonormal matrix such that
$$T_0 \bar{E}_{2s_0}^T \bar{E}_{2s_0} T_0^T = \mathrm{diag}[0, U_s], \qquad U_s > 0, \qquad (44)$$
where $U_s \in \mathbb{R}^{s \times s}$ is diagonal and $0 < s \le s_0$. If $\bar{E}_{2s_0} = 0$, then let $T_0 = I$ and $s = 0$.
Denote
$$\bar{B} = B T_0^T, \qquad \bar{R} = T_0 R T_0^T, \qquad \bar{N} = \bar{N}(\epsilon) = \frac{\rho}{\epsilon} T_0 \bar{E}_{2s_1}^T \left( I_{s_1} - \rho W_{s_1} \right)^{-1} \bar{E}_{2s_1} T_0^T, \qquad (45)$$
and decompose $\bar{B}$, $\bar{R}$ and $\bar{N}$ as follows:
$$\bar{B} = \begin{bmatrix} \bar{B}_{m-s} & \bar{B}_s \end{bmatrix}, \qquad \bar{R} = \begin{bmatrix} \bar{R}_{m-s} & \bar{R}_{s_1} \\ \bar{R}_{s_1}^T & \bar{R}_s \end{bmatrix}, \qquad \bar{N} = \frac{\rho}{\epsilon} \begin{bmatrix} \bar{N}_{m-s} & \bar{N}_{s_1} \\ \bar{N}_{s_1}^T & \bar{N}_s \end{bmatrix}, \qquad (46)$$
where $\bar{B}_s \in \mathbb{R}^{n \times s}$, $\bar{R}_s \in \mathbb{R}^{s \times s}$ and $\bar{N}_s \in \mathbb{R}^{s \times s}$. Then, we have the following lemma.

Lemma 4.2.
(i) If the matrix $I - \rho E_2 H_2 H_2^T E_2^T$ is non-singular, then
$$B D_0 B^T = B \left[ B^T P B + R + \frac{\rho}{\epsilon} E_2^T \left( I - \rho E_2 H_2 H_2^T E_2^T \right)^{-1} E_2 \right]^{-1} B^T. \qquad (47)$$
(ii) If the matrix $I - \rho E_2 H_2 H_2^T E_2^T$ is singular, then
$$B D_0 B^T = \bar{B}_{m-s} \left( \bar{B}_{m-s}^T P \bar{B}_{m-s} + \bar{R}_{m-s} + \frac{\rho}{\epsilon} \bar{N}_{m-s} \right)^{-1} \bar{B}_{m-s}^T, \qquad (48)$$
where $\bar{B}_{m-s}$, $\bar{R}_{m-s}$ and $\bar{N}_{m-s}$ are defined by (46), and $D_0$ is given by (29).

Proof. (i) It is immediate from (29).
(ii) Choose a sequence $\{\sigma_n\}_{n=1}^{\infty}$ with $\sigma_n \to 1$ $(n \to \infty)$ such that $I - \sigma_n \rho E_2 H_2 H_2^T E_2^T$
is nonsingular for $n = 1, 2, \ldots$. Denote
$$D_{0n} = \left( I - \sigma_n \rho H_2 H_2^T E_2^T E_2 \right) \left[ (B^T P B + R)\left( I - \sigma_n \rho H_2 H_2^T E_2^T E_2 \right) + \frac{\rho}{\epsilon} E_2^T E_2 \right]^{-1}, \qquad (49)$$
$$\bar{N}_n = \frac{\rho}{\epsilon} T_0 \bar{E}_{2s_1}^T \left( I_{s_1} - \sigma_n \rho W_{s_1} \right)^{-1} \bar{E}_{2s_1} T_0^T. \qquad (50)$$
Then, from (42)–(45), (49) and (50), we have
$$\begin{aligned}
D_{0n} &= \left[ B^T P B + R + \frac{\rho}{\epsilon} E_2^T \left( I - \sigma_n \rho E_2 H_2 H_2^T E_2^T \right)^{-1} E_2 \right]^{-1} \\
&= \left[ B^T P B + R + \frac{\rho}{\epsilon} \bar{E}_2^T\, \mathrm{diag}\!\left( \left( I_{s_1} - \sigma_n \rho W_{s_1} \right)^{-1},\ \frac{1}{1 - \sigma_n} I_{s_0} \right) \bar{E}_2 \right]^{-1} \\
&= \left[ B^T P B + R + \frac{\rho}{\epsilon} \bar{E}_{2s_1}^T \left( I_{s_1} - \sigma_n \rho W_{s_1} \right)^{-1} \bar{E}_{2s_1} + \frac{\rho}{\epsilon (1 - \sigma_n)} \bar{E}_{2s_0}^T \bar{E}_{2s_0} \right]^{-1} \\
&= T_0^T \left[ \bar{B}^T P \bar{B} + \bar{R} + \bar{N}_n + \mathrm{diag}\!\left( 0,\ \frac{\rho}{\epsilon (1 - \sigma_n)} U_s \right) \right]^{-1} T_0. \qquad (51)
\end{aligned}$$
Decompose the matrix $\bar{N}_n$ as follows:
$$\bar{N}_n = \frac{\rho}{\epsilon} \begin{bmatrix} \bar{N}_{n\,m-s} & \bar{N}_{n s_1} \\ \bar{N}_{n s_1}^T & \bar{N}_{n s} \end{bmatrix}, \qquad (52)$$
where $\bar{N}_{n s} \in \mathbb{R}^{s \times s}$ for $n = 1, 2, \ldots$ By (46), (51) and (52), it follows that
$$\begin{aligned}
B D_{0n} B^T &= \bar{B} \begin{bmatrix} \bar{B}_{m-s}^T P \bar{B}_{m-s} + \bar{R}_{m-s} + \frac{\rho}{\epsilon} \bar{N}_{n\,m-s} & \bar{B}_{m-s}^T P \bar{B}_s + \bar{R}_{s_1} + \frac{\rho}{\epsilon} \bar{N}_{n s_1} \\ \bar{B}_s^T P \bar{B}_{m-s} + \bar{R}_{s_1}^T + \frac{\rho}{\epsilon} \bar{N}_{n s_1}^T & \bar{B}_s^T P \bar{B}_s + \bar{R}_s + \frac{\rho}{\epsilon} \bar{N}_{n s} + \frac{\rho}{\epsilon (1 - \sigma_n)} U_s \end{bmatrix}^{-1} \bar{B}^T \\
&= \bar{B} \begin{bmatrix} D_{an} & -D_{an} \bar{Y}_{12n} D_{bn} \\ -\left( D_{an} \bar{Y}_{12n} D_{bn} \right)^T & D_{bn} + D_{bn} \bar{Y}_{12n}^T D_{an} \bar{Y}_{12n} D_{bn} \end{bmatrix} \bar{B}^T, \qquad (53)
\end{aligned}$$
where
$$D_{an} = \left( \bar{Y}_{11n} - \bar{Y}_{12n} D_{bn} \bar{Y}_{12n}^T \right)^{-1}, \qquad D_{bn} = \left( \bar{Y}_{22n} + \frac{\rho}{\epsilon (1 - \sigma_n)} U_s \right)^{-1} \qquad (54)$$
with
$$\bar{Y}_{11n} = \bar{B}_{m-s}^T P \bar{B}_{m-s} + \bar{R}_{m-s} + \frac{\rho}{\epsilon} \bar{N}_{n\,m-s}, \quad
\bar{Y}_{12n} = \bar{B}_{m-s}^T P \bar{B}_s + \bar{R}_{s_1} + \frac{\rho}{\epsilon} \bar{N}_{n s_1}, \quad
\bar{Y}_{22n} = \bar{B}_s^T P \bar{B}_s + \bar{R}_s + \frac{\rho}{\epsilon} \bar{N}_{n s}.$$
Since $U_s > 0$, it follows from (45), (46), (50), (52) and (54) that
$$\lim_{n \to \infty} D_{bn} = 0, \qquad \lim_{n \to \infty} D_{an} = \left( \bar{B}_{m-s}^T P \bar{B}_{m-s} + \bar{R}_{m-s} + \frac{\rho}{\epsilon} \bar{N}_{m-s} \right)^{-1}. \qquad (55)$$
By (53) and (55), we have
$$B D_0 B^T = \lim_{n \to \infty} B D_{0n} B^T = \bar{B} \begin{bmatrix} \left( \bar{B}_{m-s}^T P \bar{B}_{m-s} + \bar{R}_{m-s} + \frac{\rho}{\epsilon} \bar{N}_{m-s} \right)^{-1} & 0 \\ 0 & 0 \end{bmatrix} \bar{B}^T = \bar{B}_{m-s} \left( \bar{B}_{m-s}^T P \bar{B}_{m-s} + \bar{R}_{m-s} + \frac{\rho}{\epsilon} \bar{N}_{m-s} \right)^{-1} \bar{B}_{m-s}^T.$$
Thus, the proof is complete. □

From Lemma 4.2, $S_m(P, \epsilon) = 0$ is a standard Riccati equation. Theorem 4.1 provides
a necessary and sufficient condition for the solution to the quadratic guaranteed
cost control problem with multiplicative uncertainty. But, similar to the case of additive
controller gain uncertainty, it remains unclear how one should choose the
design parameter $\epsilon$ in order to achieve the minimal guaranteed cost of the closed-loop
system. Denote
$$\epsilon_m = \sup \left\{ \epsilon > 0 : S_m(P, \epsilon) = 0 \text{ has a stabilizing solution } P > 0 \text{ and (27) holds} \right\}. \qquad (56)$$
Then, the design parameter $\epsilon$ for achieving a suboptimal guaranteed cost of the closed-loop
system falls in the range $0 < \epsilon < \epsilon_m$. The next theorem shows that the optimal
guaranteed cost control (i.e., the control law that yields the minimal cost as
defined in (2)) is obtained at the boundary value $\epsilon = \epsilon_m$.

Theorem 4.3. Consider the system (1) with the cost function (2) and multiplicative
gain uncertainty (5). Suppose that the pair (A, B) is stabilizable if $I - \rho E_2 H_2 H_2^T E_2^T$
is nonsingular, or that the pair $(A, \bar{B}_{m-s})$ with $\bar{B}_{m-s}$ given by (45) and (46) is stabilizable
if $I - \rho E_2 H_2 H_2^T E_2^T$ is singular, and that
$$I - \rho E_2 H_2 H_2^T E_2^T \ge 0. \qquad (57)$$
If there exists a state feedback gain K such that the control (3) with multiplicative
uncertainty (5) is a quadratic guaranteed cost control with a cost matrix $P_0$, then the
following Riccati equation with $\epsilon_m$ defined by (56) has a unique stabilizing solution
$P_{opt} > 0$ satisfying $P_{opt} \le P_0$ and
$$R_{20}(P_{opt}, \epsilon_m) > 0, \qquad (58)$$
$$S_m(P_{opt}, \epsilon_m) = 0, \qquad (59)$$
and the control law (3) with
$$K = -\left\{ I - \rho \left( X^{-1} - \epsilon_m H_2 H_2^T \right) E_2^T \left[ \epsilon_m \left( I - \rho E_2 H_2 H_2^T E_2^T \right) + \rho E_2 X^{-1} E_2^T \right]^{-1} E_2 \right\} X^{-1} B^T P_{opt} A, \qquad (60)$$
where $X = B^T P_{opt} B + R$, is such that the resulting closed-loop system (6) is quadratically
stable, and $J \le x_0^T P_{opt} x_0$ for all uncertainties $\Delta K$ of the form (5).

Proof. By Lemma 4.2 and (28), it follows that if $I - \rho E_2 H_2 H_2^T E_2^T$ is nonsingular,
then
$$S_m(P, \epsilon) = A^T P A - P + Q - A^T P B \left[ B^T P B + R + \frac{\rho}{\epsilon} E_2^T \left( I - \rho E_2 H_2 H_2^T E_2^T \right)^{-1} E_2 \right]^{-1} B^T P A \qquad (61)$$
and if $I - \rho E_2 H_2 H_2^T E_2^T$ is singular, then
$$S_m(P, \epsilon) = A^T P A - P + Q - A^T P \bar{B}_{m-s} \left( \bar{B}_{m-s}^T P \bar{B}_{m-s} + \bar{R}_{m-s} + \frac{\rho}{\epsilon} \bar{N}_{m-s} \right)^{-1} \bar{B}_{m-s}^T P A. \qquad (62)$$
From (57), (45) and (46), it follows that
$$S_m(P, \epsilon_1) \le S_m(P, \epsilon_2) \qquad \text{if } \epsilon_1 \ge \epsilon_2 > 0. \qquad (63)$$
By using Theorem 4.1 and (61)–(63), the rest of the proof is similar to that of Theorem 3.2, and is omitted. □

Remark 4.4. Theorem 4.3 presents a design of an optimal guaranteed cost control
under the multiplicative gain perturbations, and the closed-loop value of the cost
function J is bounded by the minimal value $x_0^T P_{opt} x_0$. It is easy to show that the
condition (57) is satisfied under the bound condition $\|H_2 F_2 E_2\|_2 \le 1$ on the gain
perturbations for all $F_2$ satisfying $F_2^T F_2 \le \rho I$. This implies that the control effort is
at most permitted to degrade to zero. From (27), we have that the design parameter
$\epsilon_m$ in Theorem 4.3 satisfies $0 < \epsilon_m \le \lambda_m$ with
$$\lambda_m = \begin{cases} \left[ \lambda_{\max}\left\{ H_2^T (B^T P_m B + R) H_2 \right\} \right]^{-1} & \text{if } H_2 \ne 0, \\ \infty & \text{if } H_2 = 0, \end{cases} \qquad (64)$$
where $P_m = P_a$ is the stabilizing solution to the Riccati equation (25) if $I - \rho E_2 H_2 H_2^T E_2^T > 0$,
or $P_m$ is the stabilizing solution to the Riccati equation $S_m(P, \infty) = 0$
with $S_m(P, \epsilon)$ given by (62) if $I - \rho E_2 H_2 H_2^T E_2^T$ is singular. From (56), (58) and
(59), it follows that
$$\epsilon_m = \max \left\{ 0 < \epsilon \le \lambda_m : S_m(P, \epsilon) = 0 \text{ has a stabilizing solution } P > 0 \text{ and } R_{20}(P, \epsilon) > 0 \right\}. \qquad (65)$$
Thus, similar to the additive case in Remark 3.3, by initializing at $\epsilon = \lambda_m$ and gradually
decreasing $\epsilon$ until a solution P exists (for $S_m(P, \epsilon) = 0$, $P > 0$ and $R_{20}(P, \epsilon) > 0$),
the optimal value $\epsilon_m$ can be obtained. Moreover, since the optimal parameter
$\epsilon_m$ is on the boundary of the interval $(0, \epsilon_m)$ yielding a family of guaranteed cost
controls, it is safer in a practical design to choose an $\epsilon$ slightly smaller than $\epsilon_m$ for a
suboptimal guaranteed cost control. When condition (57) is not
satisfied, a suboptimal guaranteed cost control can be sought by solving (28). If
$I - \rho E_2 H_2 H_2^T E_2^T > 0$ in Theorem 4.3, then from (61), it follows that the Riccati
equation (59) corresponds to that of the standard quadratic optimal control [16] for
the system (1) with the cost function
$$J_m = \sum_{k=0}^{\infty} \left( x_k^T Q x_k + u_k^T \bar{R} u_k \right),$$
where $\bar{R} = R + (\rho/\epsilon_m) E_2^T \left( I - \rho E_2 H_2 H_2^T E_2^T \right)^{-1} E_2$.
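
When $I - \rho E_2 H_2 H_2^T E_2^T > 0$, the search for $\epsilon_m$ can reuse the same DARE machinery with the modification absorbed into $\bar{R}$, while the gain is computed from (30) rather than from the standard LQR formula. A sketch under that assumption (the function name and the grid search are ours):

    import numpy as np
    from scipy.linalg import solve_discrete_are

    def design_multiplicative(A, B, Q, R, H2, E2, rho, n_grid=200):
        # Approximate design of Theorem 4.3, assuming I - rho*E2*H2*H2'*E2' > 0.
        W = np.eye(E2.shape[0]) - rho * E2 @ H2 @ H2.T @ E2.T
        assert np.all(np.linalg.eigvalsh(W) > 0), "requires condition (57) to hold strictly"
        Pm = solve_discrete_are(A, B, Q, R)                   # Riccati equation (25)
        lam_m = 1.0 / np.linalg.eigvalsh(H2.T @ (B.T @ Pm @ B + R) @ H2).max()   # bound (64)
        for eps in np.linspace(lam_m, lam_m / n_grid, n_grid):
            Rbar = R + (rho / eps) * E2.T @ np.linalg.solve(W, E2)   # modified weight R-bar
            try:
                P = solve_discrete_are(A, B, Q, Rbar)         # S_m(P, eps) = 0, cf. (61)
            except (ValueError, np.linalg.LinAlgError):
                continue
            X = B.T @ P @ B + R
            R20 = np.eye(H2.shape[1]) / eps - H2.T @ X @ H2   # condition (27)
            if np.all(np.linalg.eigvalsh(R20) > 0):
                inner = eps * W + rho * E2 @ np.linalg.solve(X, E2.T)
                corr = rho * (np.linalg.inv(X) - eps * H2 @ H2.T) @ E2.T @ np.linalg.solve(inner, E2)
                K = -(np.eye(B.shape[1]) - corr) @ np.linalg.solve(X, B.T @ P @ A)   # gain (30)
                return eps, P, K
        raise RuntimeError("no admissible eps found on the grid")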

To extend the reliable control design in [20] for continuous-time systems
to discrete-time systems, we consider a special case in which $\rho = 1$ and $H_2 = E_2 =
\mathrm{diag}[0, I_s]$, with $I_s$ an $s \times s$ identity matrix $(s < m)$, which corresponds to permitting
the partial control effort $\mathrm{diag}[0, I_s] u$ (the last s actuators) to degrade to zero. It covers
the case of the last s actuator outages considered in [20]. We decompose the matrices
B and R as follows:
$$B = \begin{bmatrix} B_{m-s} & B_s \end{bmatrix}, \qquad R = \begin{bmatrix} R_{m-s} & R_{s_1} \\ R_{s_1}^T & R_s \end{bmatrix} \qquad (66)$$
with $B_s \in \mathbb{R}^{n \times s}$ and $R_s \in \mathbb{R}^{s \times s}$. Then the following result presents an optimal guaranteed
cost control.

Theorem 4.5. Consider the system (1) with the cost function (2) and multiplicative
gain uncertainty (5). Suppose that the pair $(A, B_{m-s})$ is stabilizable and that the
controller uncertainty in (5) is given by
$$\rho = 1, \qquad H_2 = E_2 = \mathrm{diag}[0, I_s]. \qquad (67)$$
Let $P > 0$ be the stabilizing solution to the following Riccati equation:
$$S_0(P) \triangleq A^T P A - P + Q - A^T P B_{m-s} \left( B_{m-s}^T P B_{m-s} + R_{m-s} \right)^{-1} B_{m-s}^T P A = 0. \qquad (68)$$
Then the control law $u_k = (K + \Delta K) x_k$ with
$$K = -\begin{bmatrix} I_{m-s} & X_{11}^{-1} X_{12} \\ 0 & \lambda_0 \left( X_{22} - X_{12}^T X_{11}^{-1} X_{12} \right) \end{bmatrix} X^{-1} B^T P A \qquad (69)$$
is such that the resulting closed-loop system (6) is quadratically stable, and $J \le x_0^T P x_0$
for all uncertainties $\Delta K$ with (5) and (67), where $\lambda_0$, $X_{11} \in \mathbb{R}^{(m-s) \times (m-s)}$,
$X_{12} \in \mathbb{R}^{(m-s) \times s}$ and $X_{22} \in \mathbb{R}^{s \times s}$ are defined as follows:
$$\lambda_0 = \left( \lambda_{\max}\left[ B_s^T P B_s + R_s \right] \right)^{-1}, \qquad (70)$$
$$\begin{bmatrix} X_{11} & X_{12} \\ X_{12}^T & X_{22} \end{bmatrix} \triangleq \begin{bmatrix} B_{m-s}^T P B_{m-s} + R_{m-s} & B_{m-s}^T P B_s + R_{s_1} \\ B_s^T P B_{m-s} + R_{s_1}^T & B_s^T P B_s + R_s \end{bmatrix} = X = B^T P B + R. \qquad (71)$$

Furthermore, if any other feedback gain $K_0$ is such that $u_k = (K_0 + \Delta K) x_k$ (with
$\Delta K$ given by (5) and (67)) is a guaranteed cost control with cost matrix $P_0$, then
$P \le P_0$.

Proof. Let T1 = I and T0 = I . Then, from (66), (67) and (42)–(46), we have B̄m−s =
Bm−s , R̄m−s = Rm−s and N̄m−s = 0. By (62), it follows
Sm (P , ) = S0 (P ), (72)
which is independent of . By (60) and m = λ0 , we have
n  
K =− I − ρ X−1 − λ0 H2 H2T E2T
h  i−1 
−1 T
× λ0 I − ρH2 H2 E2 E2 + ρE2 X E2
T T
E2 X−1 B T P A
n  
=− I − X−1 − λ0 diag[0 Is ] X
 −1 o
× λ0 [Im−s , 0]X + diag[0, Is ] diag[0, Is ] X−1 B T P A
(     −1 )
0 0 λ0 X11 λ0 X12
=− I − I − λ0 diag[0, Is ] X−1 B T P A
X12
T X22 0 Is
" −1
#
Im−s  X11 X12 
=− T X −1 X X−1 B T P A.
0 λ0 X22 − X12 11 12

By Theorem 4.3, the conclusion follows. 

Remark 4.6. Theorem 4.5 presents an optimal guaranteed cost control for the special
case of $H_2 = E_2 = \mathrm{diag}[0, I_s]$ and $\rho = 1$, which covers the case of the outages
of the last s actuators. The design equation (68) corresponds to that of the standard
quadratic optimal control for the system $x_{k+1} = A x_k + B_{m-s} u_k^{m-s}$ with the cost
function
$$J_m = \sum_{k=0}^{\infty} \left( x_k^T Q x_k + \left[ u_k^{m-s} \right]^T R_{m-s} u_k^{m-s} \right).$$
It should be noted that no design parameter is involved in the design equation (68).
The result is also an extension of the reliable control design in [20] for continuous-time
systems to discrete-time systems.
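
For this special case the design reduces to one reduced-order DARE followed by the block formula (69). A Python sketch (the function name is ours), assuming the last s actuators are the ones allowed to fail:

    import numpy as np
    from scipy.linalg import solve_discrete_are

    def design_outage(A, B, Q, R, s):
        # Theorem 4.5: guaranteed cost design tolerating outage of the last s actuators
        # (rho = 1, H2 = E2 = diag[0, I_s]).
        m = B.shape[1]
        Bm, Rm = B[:, :m - s], R[:m - s, :m - s]
        P = solve_discrete_are(A, Bm, Q, Rm)                 # Riccati equation (68)
        X = B.T @ P @ B + R                                  # partitioned as in (71)
        X11, X12 = X[:m - s, :m - s], X[:m - s, m - s:]
        X22 = X[m - s:, m - s:]
        lam0 = 1.0 / np.linalg.eigvalsh(X22).max()           # (70): X22 = Bs^T P Bs + Rs
        S = X22 - X12.T @ np.linalg.solve(X11, X12)
        top = np.hstack([np.eye(m - s), np.linalg.solve(X11, X12)])
        bottom = np.hstack([np.zeros((s, m - s)), lam0 * S])
        K = -np.vstack([top, bottom]) @ np.linalg.solve(X, B.T @ P @ A)   # gain (69)
        return P, K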

5. An example

Consider the uncertain system (1), performance index (2) and state feedback controller (3) with
$$A = \begin{bmatrix} -1 & 0.5 \\ 1 & 1.5 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad Q = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad R = 1.$$
Obviously, (A, B) is a controllable pair (hence stabilizable), and the eigenvalues of A
are −1.1861 and 1.6861, both unstable.
Case 1. For additive controller uncertainties of the form (4) with
$$H_1 = \begin{bmatrix} 1 & 1 \end{bmatrix}, \quad E_1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad \rho = 0.2,$$
we can use the result in Theorem 3.2 to design an optimal guaranteed cost control
law. First, we compute the bound $\lambda_a$ in (24) for the optimal parameter value $\epsilon_a$ in
(20). By solving (25) and using (24), we have $\lambda_a = 0.0836$. Then, by the method given
in Remark 3.3, the optimal value of $\epsilon$ as given in (26) is $\epsilon_a = 0.0266$.
The corresponding smallest performance matrix $P_{opt}$ and optimal feedback gain K are given by
$$P_{opt} = \begin{bmatrix} 44.1062 & -14.3698 \\ -14.3698 & 17.7787 \end{bmatrix}, \qquad K = \begin{bmatrix} -1.7120 & -1.0375 \end{bmatrix},$$
and the closed-loop eigenvalues are at −0.6915 and 0.1540, both stable.
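
Assuming the design_additive sketch given after Remark 3.3, Case 1 can be reproduced (up to the granularity of the $\epsilon$ grid) along the following lines; the numerical values in the comments are those reported above by the authors:

    import numpy as np

    A = np.array([[-1.0, 0.5], [1.0, 1.5]])
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2)
    R = np.array([[1.0]])
    H1 = np.array([[1.0, 1.0]])
    E1 = np.eye(2)
    rho = 0.2

    eps, Popt, K = design_additive(A, B, Q, R, H1, E1, rho)
    print(eps)                           # close to eps_a = 0.0266
    print(Popt)                          # close to [[44.1062, -14.3698], [-14.3698, 17.7787]]
    print(K)                             # close to [[-1.7120, -1.0375]]
    print(np.linalg.eigvals(A + B @ K))  # closed-loop eigenvalues, inside the unit circle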
Case 2. For multiplicative controller uncertainties of the form (5) with
$$H_2 = 1, \quad E_2 = 1, \quad \rho = 0.2,$$
we use Theorem 4.3 to design an optimal guaranteed cost control law. First, by using
Remark 4.4, we have $\lambda_m = 0.0836$. Then, by the method given in Remark 4.4,
the optimal value of $\epsilon$ as given in (65) is $\epsilon_m = 0.04$. The corresponding
smallest performance matrix $P_{opt}$ and optimal feedback gain K are given by
$$P_{opt} = \begin{bmatrix} 91.4905 & -23.2850 \\ -23.2850 & 23.9163 \end{bmatrix}, \qquad K = \begin{bmatrix} -1.8931 & -0.9719 \end{bmatrix},$$
and the closed-loop eigenvalues are at −0.6064 and 0.1345, both stable.

6. Conclusions

In this paper, we have investigated the problem of guaranteed cost control of
discrete-time linear systems under two classes of controller gain perturbations. For
additive controller gain perturbations, an optimal guaranteed cost control design is
presented in terms of an algebraic Riccati equation, which corresponds to the stan-
dard optimal control design for the same system with a modified cost function. Under
a bound condition for the gain perturbations, an optimal guaranteed cost control
design is also given for the case of the multiplicative gain perturbations. A numerical
example is given to illustrate the design procedures.

References

[1] J. Ackermann, Sampled-Data Control Systems, Springer, Berlin, 1985.
[2] F. Blanchini, R. Lo Cigno, R. Tempo, Control of ATM networks: fragility and robustness issues, in:
Proceedings of the American Control Conference, Philadelphia, PA, 1998, pp. 2947–2851.
[3] M. Corless, J. Manela, Control of uncertain discrete-time systems, in: Proceedings of the 1986
American Control Conference, Seattle, WA, 1986, pp. 515–520.
[4] P. Dorato, Non-fragile controller design: an overview, in: Proceedings of the American Control
Conference, Philadelphia, PA, 1998, pp. 2829–2831.
[5] D. Famularo, C.T. Abdallah, A. Jadbabaie, P. Dorato, W.M. Haddad, Robust non-fragile LQ con-
trollers: the static state feedback case, in: Proceedings of the American Control Conference, Philadelphia,
PA, 1998, pp. 1109–1113.
[6] M. Fu, C.E. de Souza, L. Xie, Quadratic stabilization and H∞ control of discrete-time uncertain
systems, in: Proceedings of the International Symposium MTNS-91, Kobe, Japan, 1991, pp. 269–
274.
[7] G. Garcia, J. Bernussou, D. Arizelier, Robust stabilization of discrete-time linear systems with
norm-bounded time-varying uncertainty, Syst. Control Lett. 22 (1994) 327–339.
[8] W.M. Haddad, J.R. Corrado, Resilient controller design via quadratic Lyapunov bounds, in: Pro-
ceedings of the IEEE Conference on Decision Control, San Diego, CA, 1997, pp. 2678–2683.
[9] W.M. Haddad, J.R. Corrado, Robust resilient dynamic controllers for systems with parametric un-
certainty and controller gain variations, in: Proceedings of the American Control Conference, Phila-
delphia, PA, 1998, pp. 2837–2841.
[10] A. Jadbabaie, T. Chaouki, D. Famularo, P. Dorato, Robust, non-fragile and optimal controller design
via linear matrix inequalities, in: Proceedings of the American Control Conference, Philadelphia, PA,
1998, pp. 2842–2846.
[11] D. Kaesbauer, J. Ackermann, How to escape from the fragility trap, in: Proceedings of the American
Control Conference, Philadelphia, PA, 1998, pp. 2832–2836.
[12] I. Kaminer, P.P. Khargonekar, M.A. Rotea, Mixed H2 /H∞ control for discrete-time systems via
convex optimization, Automatica 29 (1) (1993) 57–70.
[13] L.H. Keel, S.P. Bhattacharyya, Stability margins and digital implementation of controllers, in: Pro-
ceedings of the American Control Conference, Philadelphia, PA, 1998, pp. 2852–2856.
[14] L.H. Keel, S.P. Bhattacharyya, Robust, fragile, or optimal? IEEE Trans. Automatic Control AC 42
(8) (1997) 1098–1105.
[15] P.P. Khargonekar, I.R. Petersen, K. Zhou, Robust stabilization of uncertain systems and H∞ optimal
control, IEEE Trans. Automatic Control AC 35 (3) (1990) 351–361.
[16] H. Kwakernaak, R. Sivan, Linear Optimal Control Systems, Wiley, New York, 1972.

[17] P. Lancaster, L. Rodman, Algebraic Riccati Equations, Clarendon Press, Oxford, 1995.
[18] M.E. Magana, S.H. Zak, Robust state feedback stabilization of discrete-time uncertain dynamical
systems, IEEE Trans. Automatic Control AC 33 (9) (1988) 889–891.
[19] P.L.D. Peres, J.C. Geromel, S.R. Souza, Convex analysis of discrete-time uncertain system H∞
control problem, in: Proceedings of the 30th IEEE Conference on Decision and Control, Brighton,
UK, 1991, pp. 521–526.
[20] R.J. Veillette, Reliable linear-quadratic state feedback control, Automatica 31 (1995) 137–143.
[21] L. Xie, Y.C. Soh, Guaranteed cost control of uncertain discrete-time systems, Control Theory Adv.
Technol. 10 (4) (1995) 1235–1251.
